The challenges of image annotation

As we have seen so far, annotating images is a complex process even before automation is considered. There are two main challenges relating to automatic annotation: the first is to source examples of images annotated by humans which can be used as a starting point for automating image annotation; the second is how to extend annotation to a wider group of images beyond the examples used as a starting point in the process (for more details see ‘Approaches to image annotation’).

Here we focus on the first problem: identifying annotated or labelled image data, which is generally regarded as a major challenge for automatic image annotation (and is the one focused upon by PeriCoDe). There are several reasons for this. Automated image classification approaches use machine learning techniques which require large quantities of human-labelled image data in order to work well, and being able to acquire or generate such massive quantities of data is a huge bottleneck in the process. As well as the issue of quantity, there is also a quality requirement for the examples of labelled image data: they need to be accurately labelled (which, as we have already seen, can be problematic even in human annotation), as well as useful and relevant to the eventual automated annotation task.
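To make the quantity requirement concrete, the short sketch below trains a simple classifier on a small built-in collection of labelled images and shows how accuracy degrades as the number of labelled examples shrinks. It is a minimal illustration only; the choice of scikit-learn, the digits dataset and the training set sizes are our assumptions, not part of the original discussion.

```python
# A minimal sketch (assumed libraries/dataset, not from the original text):
# train a simple image classifier with progressively fewer labelled
# examples to show the dependence on human-labelled data.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 labelled 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.5, random_state=0)

for n_labelled in (50, 200, len(X_train)):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train[:n_labelled], y_train[:n_labelled])
    print(f"{n_labelled:4d} labelled examples -> "
          f"test accuracy {clf.score(X_test, y_test):.2f}")
```

Even on this tiny, clean dataset the classifier improves markedly with more labelled examples; real-world annotation tasks typically require orders of magnitude more.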

There are several possible ways in which annotated image data can be derived:

- creating custom annotations specifically for the task at hand;
- reusing pre-existing annotated resources (see the sketch following this list);
- human computation, in which annotations are obtained as a by-product of another activity, such as a game.
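As a concrete example of reusing a pre-existing resource, the sketch below reads labels from an annotation file in the widely used COCO format. The file name annotations.json is hypothetical, and the choice of COCO is our assumption rather than something specified here.

```python
# A minimal sketch of consuming a pre-existing annotated image resource,
# assuming a COCO-style annotations file (file name and format are
# illustrative assumptions).
import json
from collections import defaultdict

with open("annotations.json") as f:  # hypothetical file
    coco = json.load(f)

# Map category ids to human-readable names, then collect the labels
# attached to each image.
categories = {c["id"]: c["name"] for c in coco["categories"]}
labels_per_image = defaultdict(list)
for ann in coco["annotations"]:
    labels_per_image[ann["image_id"]].append(categories[ann["category_id"]])

for image_id, labels in list(labels_per_image.items())[:5]:
    print(image_id, labels)
```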

In general, these approaches tend to favour ‘quality’ in the case of custom annotations, and quantity in the case of pre-existing resources. We also note that the human computation approach, already mentioned in relation to digitizing documents (https://en.wikipedia.org/wiki/ReCAPTCHA), can be applied to gamify image annotation, as in the ESP Game (http://www.cs.cmu.edu/~biglou/ESP.pdf; an archived version of the game’s website can be found at https://web.archive.org/web/20090106145854/http://espgame.org/). For reference, see also the patent relating to Facebook’s image annotation method (https://www.google.com/patents/US7945653).
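To illustrate the gamified approach, here is a toy sketch of the agreement mechanic behind the ESP Game: a label is accepted for an image only when two independent players propose it, and labels agreed in earlier rounds become ‘taboo’ words that may no longer be used. The function and data below are illustrative assumptions, not code from the cited paper.

```python
# Toy sketch of the ESP Game agreement mechanic (names and data are
# illustrative assumptions, not taken from the paper).
def esp_round(player_a_guesses, player_b_guesses, taboo):
    """Return the first label both players agree on, or None."""
    allowed_a = [g.lower() for g in player_a_guesses if g.lower() not in taboo]
    allowed_b = {g.lower() for g in player_b_guesses if g.lower() not in taboo}
    for guess in allowed_a:
        if guess in allowed_b:
            return guess  # agreed label: record it for this image
    return None

taboo = {"dog"}  # labels already agreed in earlier rounds
label = esp_round(["puppy", "dog", "grass"], ["grass", "puppy"], taboo)
print(label)  # -> "puppy"
```

The agreement requirement is what gives this approach its quality guarantee: a label is recorded only when two players, who cannot communicate, independently converge on it.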