TY - GEN
T1 - Exploiting text-related features for content-based image retrieval
AU - Schroth, G.
AU - Hilsenbeck, S.
AU - Huitl, R.
AU - Schweiger, F.
AU - Steinbach, E.
PY - 2011
Y1 - 2011
N2 - Distinctive visual cues are of central importance for image retrieval applications, in particular in the context of visual location recognition. While indoor environments typically offer only few distinctive features, outdoor scenes suffer from dynamic objects and clutter that significantly impair retrieval performance. We present an approach that exploits text, a major source of information for humans during orientation and navigation, without the need for error-prone optical character recognition. To this end, characters are detected and described using robust feature descriptors such as SURF. By quantizing them into several hundred visual words, we consider the distinctive appearance of the characters rather than reducing the set of possible features to an alphabet. Writings in images are transformed into strings of visual words, termed visual phrases, which provide significantly improved distinctiveness compared to individual features. Approximate string matching is performed using N-grams, which can be efficiently combined with an inverted file structure to cope with large datasets. An experimental evaluation on three different datasets shows a significant improvement in retrieval performance while reducing the size of the database by two orders of magnitude compared to the state of the art. The low computational complexity of the approach makes it particularly suited for mobile image retrieval applications.
KW - CBIR
KW - text-related visual features
KW - visual location recognition
UR - http://www.scopus.com/inward/record.url?scp=84856326840&partnerID=8YFLogxK
U2 - 10.1109/ISM.2011.21
DO - 10.1109/ISM.2011.21
M3 - Conference contribution
AN - SCOPUS:84856326840
SN - 9780769545899
T3 - Proceedings - 2011 IEEE International Symposium on Multimedia, ISM 2011
SP - 77
EP - 84
BT - Proceedings - 2011 IEEE International Symposium on Multimedia, ISM 2011
T2 - 13th IEEE International Symposium on Multimedia, ISM 2011
Y2 - 5 December 2011 through 7 December 2011
ER -