ImageCLEF

Experimental Evaluation in Visual Information Retrieval

By Henning Müller, Paul Clough, Thomas Deselaers, Barbara Caputo

A collection of texts centered on the evaluation of image retrieval systems. It highlights issues and challenges in evaluating such systems and describes various initiatives that provide researchers with the necessary evaluation resources.

  • ISBN13: 978-3-6421-5180-4
  • 572 Pages
  • User Level: Science
  • Publication Date: August 20, 2010
  • Available eBook Formats: PDF
  • eBook Price: $159.00

Full Description
The creation and consumption of content, especially visual content, is ingrained in our modern world. This book contains a collection of texts centered on the evaluation of image retrieval systems. To enable reproducible evaluation, standardized benchmarks and evaluation methodologies must be created. The individual chapters in this book highlight major issues and challenges in evaluating image retrieval systems and describe various initiatives that provide researchers with the necessary evaluation resources. In particular, they describe activities within ImageCLEF, an initiative to evaluate cross-language image retrieval systems which has been running as part of the Cross Language Evaluation Forum (CLEF) since 2003. To this end, the editors collected contributions from a range of people: those involved directly with ImageCLEF, such as the organizers of specific image retrieval or annotation tasks; participants who have developed techniques to tackle the challenges set forth by the organizers; and people from industry and academia involved with image retrieval and evaluation more generally. Written mostly for researchers in academia and industry, the book stresses the importance of combining textual and visual information – a multimodal approach – for effective retrieval. It provides the reader with clear ideas about information retrieval and its evaluation in contexts and domains such as healthcare, robot vision, press photography, and the Web.
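As a concrete illustration of the multimodal approach the book stresses, the sketch below combines scores from a text retrieval engine and a visual similarity engine by a weighted sum (score-level "late fusion"). It is a minimal sketch only: the function names, the sample scores, and the weight alpha are hypothetical illustrations, not material from the book.

    # Minimal late-fusion sketch: combine text-based and content-based
    # retrieval scores for the same image collection. All names, sample
    # scores, and the weighting scheme are illustrative assumptions.

    def min_max_normalize(scores):
        """Rescale raw scores to [0, 1] so the two modalities are comparable."""
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        if hi == lo:
            return {doc: 0.0 for doc in scores}
        return {doc: (s - lo) / (hi - lo) for doc, s in scores.items()}

    def late_fusion(text_scores, visual_scores, alpha=0.7):
        """Linear score-level fusion; alpha weights the textual modality.

        Images missing from one modality contribute 0 for that modality.
        """
        text_n = min_max_normalize(text_scores)
        visual_n = min_max_normalize(visual_scores)
        docs = set(text_n) | set(visual_n)
        fused = {d: alpha * text_n.get(d, 0.0) + (1 - alpha) * visual_n.get(d, 0.0)
                 for d in docs}
        # Return images ranked by fused score, best first.
        return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

    # Hypothetical retrieval scores for three images.
    text_scores = {"img1": 12.3, "img2": 8.1, "img3": 2.4}
    visual_scores = {"img1": 0.42, "img2": 0.91, "img3": 0.10}
    print(late_fusion(text_scores, visual_scores))

With alpha = 0.7 the textual evidence dominates, reflecting the common finding reported in ImageCLEF campaigns that text retrieval provides a strong baseline which visual evidence refines.
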
Table of Contents

Part I: Introduction
  1. Seven Years of Image Retrieval Evaluation
  2. Data Sets Created in ImageCLEF
  3. Creating Realistic Topics for Image Retrieval Evaluation
  4. Relevance Judgments for Image Retrieval Evaluation
  5. Performance Measures Used in Image Information Retrieval
  6. Fusion Techniques for Combining Textual and Visual Information Retrieval
Part II: Track Reports
  7. Interactive Image Retrieval
  8. Photographic Image Retrieval
  9. The Wikipedia Image Retrieval Task
  10. The Robot Vision Task
  11. Object and Concept Recognition for Image Retrieval
  12. The Medical Image Classification Task
  13. The Medical Image Retrieval Task
Part III: Participant Reports
  14. Expansion and Re-ranking Approaches for Multi-modal Image Retrieval using Text-based Methods
  15. Revisiting Sub-topic Retrieval in the ImageCLEF 2009 Photo Retrieval Task
  16. Knowledge Integration using Textual Information for Improving ImageCLEF Collections
  17. Leveraging Image, Text and Cross-media Similarities for Diversity-focused Multimedia Retrieval
  18. University of Amsterdam at the Visual Concept Detection and Annotation Tasks
  19. Intermedia Conceptual Indexing
  20. Conceptual Indexing Contribution to ImageCLEF Medical Retrieval Tasks
  21. Improving Early Precision in the ImageCLEF Medical Retrieval Task
  22. Lung Nodule Detection
  23. Medical Image Classification at Tel Aviv and Bar Ilan Universities
  24. Idiap on Medical Image Classification
Part IV: External Views
  25. Press Association Images: Image Retrieval Challenges
  26. Image Retrieval in a Commercial Setting
  27. An Overview of Evaluation Campaigns in Multimedia Retrieval