
The Memoria-Mea project aims to develop a PIM (Personal Information Management) system for managing multimedia content. It focuses on supporting a person in organizing and retrieving information across the whole collection of multimedia documents he/she has accessed and collected during daily life activities.

The system will support the user in searching, browsing and visualizing “multimedia memories” (e.g. pictures, videos, audio files, text files, etc.) according to his/her preferences, context, etc., thanks to the use of personalized information indexing and classification techniques based on a semantic model. Whereas current multimedia search engines are designed for very large applications and large user communities, the novelty of the Memoria-Mea project is to support the individual memory, i.e. personal environments and data, by extending existing information retrieval and visualization techniques with cross-modal data mining and ontological models based on user behavior and needs.

The goal of Memoria-Mea is to investigate the design of a personal memory organizer. Memoria-Mea will support an individual person in organizing and retrieving all the multimedia information accessed during his/her daily life. It will automatically log, store, classify and index multimedia memories, and further support searching and browsing according to user preferences, context, etc.
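The log/store/classify/index pipeline described above can be illustrated with a minimal sketch. All names here (`MemoryItem`, `MemoryStore`) are hypothetical illustrations of the idea, not the project's actual components:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MemoryItem:
    """One logged multimedia memory (illustrative structure)."""
    path: str
    modality: str                      # e.g. "image", "video", "audio", "text"
    tags: set = field(default_factory=set)
    accessed: datetime = field(default_factory=datetime.now)

class MemoryStore:
    """Toy personal memory organizer: log items, index by tag, search."""
    def __init__(self):
        self.items = []
        self.tag_index = {}            # tag -> list of items

    def log(self, item):
        # Store the item and index it under each of its annotations.
        self.items.append(item)
        for tag in item.tags:
            self.tag_index.setdefault(tag, []).append(item)

    def search(self, tag, modality=None):
        # Retrieve by tag, optionally narrowed to one modality.
        hits = self.tag_index.get(tag, [])
        if modality is not None:
            hits = [i for i in hits if i.modality == modality]
        return hits

store = MemoryStore()
store.log(MemoryItem("trip/rome.jpg", "image", {"travel", "rome"}))
store.log(MemoryItem("trip/rome.mp3", "audio", {"travel", "rome"}))
print([i.path for i in store.search("rome", modality="image")])  # → ['trip/rome.jpg']
```

In the actual project, the tags would of course come from automatic classification and semantic annotation rather than being hand-assigned.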

The Memoria-Mea project is organized around the following major research challenges:

  • Semantic data annotation for personalizing search and visualization strategies: the goal is to annotate multimedia information using different levels of ontologies (domain ontology, personal ontology, etc.) in order to make that information meaningful to computers. This in turn will allow the development of more "intelligent" strategies for searching and visualizing personal multimedia data.
  • Cross-modal data analysis for extending personal search strategies: the goal is to enrich the previous annotations with a new indexing layer using cross-modal correlations based on specific user knowledge and behavior. This will allow future searching/browsing strategies that benefit from cross-media links and associations. Browsing works by association, as human memory does, and cross-media links will thus strengthen the process of remembering, which usually consists of associating something with a context or an object already well consolidated in our memory.
  • Innovative visualization and interaction techniques for intuitively browsing/searching personal memories: although the computer contributes to the information explosion, it is also potentially the magic lens for finding, sorting, filtering, and presenting the relevant items through visualization techniques and multimedia queries.
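The cross-media links mentioned in the second challenge can be sketched very simply: two items of different modalities that share an annotation become associated, so browsing one memory can surface related memories of another type. This is a toy illustration under assumed names, not the project's actual correlation model:

```python
from itertools import combinations

def cross_modal_links(items):
    """Return pairs of items with different modalities sharing at least one tag."""
    links = []
    for a, b in combinations(items, 2):
        if a["modality"] != b["modality"] and a["tags"] & b["tags"]:
            links.append((a["path"], b["path"]))
    return links

items = [
    {"path": "rome.jpg",  "modality": "image", "tags": {"rome", "travel"}},
    {"path": "rome.mp3",  "modality": "audio", "tags": {"rome"}},
    {"path": "notes.txt", "modality": "text",  "tags": {"work"}},
]
print(cross_modal_links(items))  # → [('rome.jpg', 'rome.mp3')]
```

A real system would derive such associations from richer signals (time, location, user behavior), but the browsing-by-association principle is the same.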

Our truly innovative goal is to combine these three research approaches to enrich and simplify the user experience of searching and browsing through personal multimedia data collected daily across various devices. The proof of concept of the project will be driven by two major use case scenarios (a traveller use case and a member-of-parliament use case), defined in the first project phase, which will be fully integrated in a final demonstrator.