Deep CNNs and Histogram Equalization for Glaucoma Detection

This project uses CNNs with transfer learning to classify retinal fundus images as normal or glaucomatous, and evaluates two histogram equalisation-based image preprocessing techniques. The dataset used is ACRIMA, which contains 705 labelled images: 396 glaucomatous and 309 normal.

This project was a collaboration with H Shafeeq Ahmed. Read our paper here.

Model Comparison

The CNN models were trained on 70% of the dataset, with 10% used for validation and 20% for testing.
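For illustration only, a 70/10/20 split could look like the sketch below. The directory name, batch size, and seed are hypothetical, and this batch-level `take`/`skip` split is an approximation of however the notebooks actually partition the data:

```python
import tensorflow as tf

# Hypothetical directory layout: ACRIMA/{glaucomatous,normal}/*.jpg
full_ds = tf.keras.utils.image_dataset_from_directory(
    "ACRIMA", image_size=(256, 256), batch_size=32, shuffle=True, seed=42)

n = full_ds.cardinality().numpy()   # total number of batches
train_n = int(0.7 * n)
val_n = int(0.1 * n)

train_ds = full_ds.take(train_n)                    # 70% for training
val_ds = full_ds.skip(train_n).take(val_n)          # 10% for validation
test_ds = full_ds.skip(train_n + val_n)             # remaining ~20% for testing
```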

The model architecture used is as follows (a Keras sketch follows the list):

  • An input layer (256, 256, 3)
  • A data augmentation layer
  • The base model with ImageNet weights
  • A flatten layer
  • A dense layer with ReLU activation
  • A dropout layer with rate 0.5
  • A dense output layer with softmax activation
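A minimal Keras sketch of this architecture, assuming a frozen base and a 256-unit dense layer (neither is specified above), with the augmentation layer passed in (see the next sketch):

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_model(data_augmentation, base_fn=keras.applications.VGG19,
                num_classes=2):
    # Base model with ImageNet weights, frozen for transfer learning
    # (freezing is an assumption; the notebooks may also fine-tune it).
    base = base_fn(include_top=False, weights="imagenet",
                   input_shape=(256, 256, 3))
    base.trainable = False

    inputs = keras.Input(shape=(256, 256, 3))    # input layer (256, 256, 3)
    x = data_augmentation(inputs)                # data augmentation layer
    x = base(x, training=False)                  # pretrained base model
    x = layers.Flatten()(x)                      # flatten layer
    x = layers.Dense(256, activation="relu")(x)  # dense ReLU layer (width assumed)
    x = layers.Dropout(0.5)(x)                   # 0.5 dropout layer
    outputs = layers.Dense(num_classes, activation="softmax")(x)  # softmax output
    return keras.Model(inputs, outputs)
```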

Data augmentation involved random changes in contrast, horizontal and vertical flips, rotations, and translations of the images:

*Figure: image augmentation examples*
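A plausible definition of the augmentation layer using Keras preprocessing layers; the specific factors (0.2 contrast, 0.1 rotation and translation) are assumptions, not values taken from the paper:

```python
from tensorflow import keras
from tensorflow.keras import layers

data_augmentation = keras.Sequential([
    layers.RandomContrast(0.2),                    # random contrast change (factor assumed)
    layers.RandomFlip("horizontal_and_vertical"),  # flips along either axis
    layers.RandomRotation(0.1),                    # random rotation (fraction of a full turn)
    layers.RandomTranslation(0.1, 0.1),            # random height/width shifts
], name="data_augmentation")
```

This layer can then be passed to `build_model` from the sketch above, e.g. `model = build_model(data_augmentation)`.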

The notebooks can be accessed in the notebooks folder of this repo. Here is an example of the notebook used to train a VGG-19-based model with CLAHE image preprocessing.

Results

| Model | Accuracy | Specificity | Sensitivity | F1 Score | Area Under ROC | Number of Parameters |
|---|---|---|---|---|---|---|
| VGG-16 | 0.9718 | 0.9516 | 0.9875 | 0.9753 | 0.9978 | 23104066 |
| VGG-19 | 0.9789 | 0.9677 | 0.9875 | 0.9814 | 0.9933 | 28413762 |
| ResNet-50 | 0.9577 | 0.9839 | 0.9375 | 0.9615 | 0.9956 | 57142914 |
| ResNet-152 | 0.9507 | 0.9839 | 0.925 | 0.9548 | 0.9944 | 91926146 |
| Inception v3 | 0.9085 | 0.8871 | 0.925 | 0.9193 | 0.9364 | 40677922 |
| Xception | 0.9296 | 0.9516 | 0.9125 | 0.9359 | 0.9794 | 54416682 |
| DenseNet-121 | 0.9577 | 0.9516 | 0.9625 | 0.9625 | 0.9960 | 23815490 |
| EfficientNetB7 | 0.9225 | 0.8871 | 0.95 | 0.9325 | 0.9625 | 106041497 |
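For reference, these metrics can be computed from test-set predictions as sketched below. This is a standard scikit-learn recipe treating glaucomatous as the positive class, not necessarily the project's exact evaluation code:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

def evaluate(y_true, y_prob):
    """Compute the reported metrics from binary labels (1 = glaucomatous)
    and predicted probabilities of the positive class."""
    y_pred = (np.asarray(y_prob) >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "specificity": tn / (tn + fp),   # true negative rate
        "sensitivity": tp / (tp + fn),   # true positive rate (recall)
        "f1": f1_score(y_true, y_pred),
        "auroc": roc_auc_score(y_true, y_prob),
    }
```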

Preprocessing Techniques Comparison

Two different preprocessing techniques were compared (see the CLAHE sketch after this list):

  • Adaptive Histogram Equalisation
  • Contrast-Limited Adaptive Histogram Equalisation
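As an illustration, CLAHE is commonly applied with OpenCV. Applying it to the lightness channel of a LAB conversion (rather than to each RGB channel), and the clip limit and tile size below, are assumptions rather than the paper's confirmed settings:

```python
import cv2

def clahe_preprocess(image_bgr, clip_limit=2.0, tile_grid=(8, 8)):
    """Apply CLAHE to the lightness channel of a colour fundus image."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l_eq = clahe.apply(l)                      # equalise lightness only
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
```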

Results

I. Histogram Equalisation

| Model | Accuracy | Specificity | Sensitivity | F1 Score | Area Under ROC | Number of Parameters |
|---|---|---|---|---|---|---|
| VGG-16 | 0.943661972 | 0.935483871 | 0.95 | 0.95 | 0.993548387 | 23104066 |
| VGG-19 | 0.894366197 | 0.951612903 | 0.85 | 0.900662252 | 0.978427419 | 28413762 |
| ResNet-50 | 0.936619718 | 0.951612903 | 0.925 | 0.942675159 | 0.991935484 | 57142914 |
| ResNet-152 | 0.936619718 | 0.967741935 | 0.9125 | 0.941935484 | 0.98891129 | 91926146 |
| Inception v3 | 0.767605634 | 0.870967742 | 0.6875 | 0.769230769 | 0.846774194 | 40677922 |
| Xception | 0.767605634 | 0.903225806 | 0.6625 | 0.762589928 | 0.898387097 | 54416682 |
| DenseNet-121 | 0.929577465 | 0.887096774 | 0.9625 | 0.93902439 | 0.985080645 | 23815490 |
| EfficientNetB7 | 0.838028169 | 0.822580645 | 0.85 | 0.855345912 | 0.926814516 | 106041497 |

II. Contrast-Limited Adaptive Histogram Equalisation

| Model | Accuracy | Specificity | Sensitivity | F1 Score | Area Under ROC | Number of Parameters |
|---|---|---|---|---|---|---|
| VGG-16 | 0.922535211 | 1 | 0.8625 | 0.926174497 | 0.996572581 | 23104066 |
| VGG-19 | 0.922535211 | 1 | 0.8625 | 0.926174497 | 0.996572581 | 28413762 |
| ResNet-50 | 0.929577465 | 0.870967742 | 0.975 | 0.939759036 | 0.983770161 | 57142914 |
| ResNet-152 | 0.957746479 | 0.935483871 | 0.975 | 0.962962963 | 0.994153226 | 91926146 |
| Inception v3 | 0.809859155 | 0.774193548 | 0.8375 | 0.832298137 | 0.91733871 | 40677922 |
| Xception | 0.781690141 | 0.935483871 | 0.6625 | 0.773722628 | 0.921169355 | 54416682 |
| DenseNet-121 | 0.950704225 | 0.935483871 | 0.9625 | 0.956521739 | 0.987701613 | 23815490 |
| EfficientNetB7 | 0.894366197 | 0.838709677 | 0.9375 | 0.909090909 | 0.925604839 | 106041497 |

III. Comparing the Averages of Models

*Figure: graph comparing the average performance of models across the two preprocessing techniques*