This challenge is now closed. Thank you to all who participated.

If you are interested in future research, you can still request the dataset, as we will be re-opening the evaluation system on 15 September 2020.

Please request the dataset by emailing:

Submissions are evaluated by computing the F1-Score and the mean Average Precision (mAP) on the DFU dataset.

F1-Score is the harmonic mean of Precision and Recall and gives a better measure of incorrectly classified cases than the Accuracy metric. For our task, the F1-Score is used because both False Negatives and False Positives are crucial: False Positives cause additional cost and burden to foot clinics, while False Negatives risk further foot complications.
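As a quick illustration only (not the official evaluation code), the F1-Score can be computed from raw counts of true positives, false positives and false negatives, for example:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1-Score: the harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: 80 true positives, 10 false positives, 20 false negatives
print(f1_score(80, 10, 20))  # ~0.842
```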

The second metric is mAP, which measures how well the predicted boxes overlap with the ground truth and is widely used in object detection evaluation.
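For intuition, mAP is built on the Intersection over Union (IoU) between a predicted box and a ground-truth box. The sketch below shows the IoU computation; the box format and the 0.5 threshold mentioned in the comment are illustrative assumptions, not the official evaluation settings:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (xmin, ymin, xmax, ymax)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A prediction is typically counted as a true positive when its IoU with a
# ground-truth box exceeds a threshold (e.g. 0.5); mAP then averages the
# precision over recall levels for each class.
```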

Participants will be ranked on these final metrics.

Submission File

For each image in the test set, you must predict a list of boxes describing objects in the image. Each box is described as:

filename ....
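The full column layout is not reproduced above, so the fields other than the filename in the sketch below (class label, confidence score and box coordinates) are assumptions for illustration only; please follow the official submission template for the required format.

```python
import csv

# Hypothetical predictions: one list of boxes per test image.
# Each box here is (label, score, xmin, ymin, xmax, ymax) -- assumed fields.
predictions = {
    "image_001.jpg": [("ulcer", 0.92, 34, 58, 210, 240)],
}

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "class", "score", "xmin", "ymin", "xmax", "ymax"])
    for filename, boxes in predictions.items():
        for label, score, xmin, ymin, xmax, ymax in boxes:
            writer.writerow([filename, label, score, xmin, ymin, xmax, ymax])
```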