Co-located with IEEE ICIP
Multimedia information retrieval technology is destined to become pervasive in almost every aspect of daily life and a pillar of key achievements in future scientific and technological developments. Although the foundations of information retrieval were laid many years ago, they were laid for text databases, and most of today's well-established retrieval tools are suitable only for text mining. Compared to text-based information retrieval, image and video retrieval is not only less advanced but also more challenging. The need for visual information retrieval tools is a consequence of the rapid growth in consumer-oriented electronic technologies, e.g. digital cameras, camcorders and mobile phones, along with the expansion and globalization of networking facilities. The immediate consequence of this trend is that generating digital content has become easy and cheap, while managing and structuring it to produce effective services has not.
To get closer to the vision of useful multimedia-based search and retrieval, annotation and search technologies need to be efficient and to use semantic concepts that are natural to the user. This requires tagging and/or annotating multimedia content with semantic concepts that describe digital objects and, in more appealing applications, with descriptions of emotions and related human expressions. However, semantic annotation mostly depends on human interaction, which is expensive, time-consuming and therefore infeasible for many applications. Even the annotation of a few hundred personal images captured during a single year by a single person is a tedious task that nobody wants to do. As a consequence, multimedia structuring, annotation and retrieval using semantic structures and descriptions of emotions that are natural to humans remains a critical challenge.
This workshop provides a forum within the ICIP community to highlight the field of multimedia information retrieval (MIR) and to discuss current bottlenecks and out-of-the-box ideas in the field. The workshop aims to cover most aspects of MIR research. Topics of interest include (but are not limited to):
- Social networks and collaborative filtering for MIR
- MIR and Arts
- Classification and semantic-based content structuring for MIR
- Novel methods to support and enhance social interaction, including integration of context in social, affective computing, and experience capture
- Contextual metadata extraction
- Models for temporal context, spatial context, imaging context (e.g., camera metadata) and social and cultural context
- Web context for online multimedia annotation, browsing, sharing and reuse
- Context tagging systems, e.g., geotagging, voice annotation
- Context-aware inference algorithms
- Context-aware multi-modal fusion systems (text, document, image, video, metadata, etc.)
- Context-aware collaboration
- Integration of content-based multimedia analysis for low- and medium-level signal processing with natural language and speech processing
- Knowledge assisted multimedia data mining
- Relevance feedback for semantic semi-automatic annotation
- Integration of multimedia processing and Semantic Web technologies to enable automatic content sharing, processing and interpretation by machines
- Multimedia ontology infrastructures for specific application domains
- Knowledge based inference for semantic media annotation
- Multimodal techniques, high-dimensionality reduction and low-level feature fusion
Submission Deadlines
Submission of papers: March 28, 2008
Notification of acceptance: April 25, 2008
Submission of camera-ready papers: June 6, 2008
Publication
Accepted papers will be published in the IEEE ICIP proceedings (DVD electronic publication), under the ICIP Workshop on Multimedia Information Retrieval: New Trends and Challenges.