GARSV - Automatic generation of video sequence summaries

The goal of the project is to develop a platform that enables semi-automatic generation of video summaries, as well as the alignment of video content with existing text documents. The platform will generate structured text documents based on videos, for example minutes of meetings. These documents can then be modified, completed, and validated before being made available to target groups. End users will be able to search for information, access the text documents that describe it, and navigate directly to and visualize the video sequences associated with their textual query.
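As a minimal sketch of one way such an alignment could work (the representation and data are assumptions for illustration, not the project's actual method), video segments carrying textual metadata such as transcript snippets could be matched to sections of a document by a simple metadata similarity measure, here a cosine similarity over bag-of-words vectors:

```python
import math
import re
from collections import Counter


def text_vector(text: str) -> Counter:
    """Bag-of-words term-frequency vector for a piece of text."""
    return Counter(re.findall(r"\w+", text.lower()))


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def align_segments_to_sections(segments, sections):
    """For each video segment, pick the document section whose text
    is most similar to the segment's metadata (e.g. its transcript)."""
    section_vectors = [text_vector(s) for s in sections]
    alignment = []
    for seg in segments:
        seg_vec = text_vector(seg)
        scores = [cosine_similarity(seg_vec, sv) for sv in section_vectors]
        best = max(range(len(sections)), key=scores.__getitem__)
        alignment.append((seg, best, scores[best]))
    return alignment


# Hypothetical data: transcript snippets from video segments and
# paragraphs from the corresponding written minutes.
segments = [
    "the deputy asks about the budget for road maintenance",
    "vote on the amendment concerning school funding",
]
sections = [
    "Question on the cantonal budget allocated to road maintenance.",
    "Amendment on the financing of primary schools; result of the vote.",
]

for seg, idx, score in align_segments_to_sections(segments, sections):
    print(f"segment {seg!r} -> section {idx} (similarity {score:.2f})")
```

In practice the platform would operate on richer extracted metadata (speakers, timestamps, topics) rather than raw word counts, but the matching principle is the same.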

Keywords

multimedia content alignment, semi-automatic summary management, metadata extraction, metadata similarity computation

Outcomes

Development of a demonstrator for the platform in collaboration with the IT service of the Neuchâtel entity (SIEN), which manages the broadcasting of the sessions of the Grand Council of Neuchâtel. The current application supports filming, archiving, and viewing the Grand Council's sessions. Although these recordings represent a rich and very useful information source for deputies and the general public, it is currently not possible to carry out targeted searches in the video archives. In addition, the proceedings of the Grand Council's sessions are stored separately, without any links to the videos. A tool for aligning video sequences with the proceedings is therefore needed to integrate and exploit all these valuable information sources. Another goal of the project in this context is the automatic generation of proceedings: human intervention in the annotation process will be minimized, and the rich content of the videos and the proceedings will be exploited jointly.

Website of the project

 

Project Information