Try placing your hands over your ears while following a conversation. Other people's speech immediately becomes muffled and hard to understand. You might pick up some of the words, but…
Audiovisual media content is an essential record of modern history. It is how we communicate and entertain. To fully benefit from multilingual audiovisual content, we need efficient tools that make that content accessible in words.
What is MeMAD providing?
We provide new methods that help translate moving images and sounds into words. MeMAD methods will help us manage large amounts of audiovisual data cost-efficiently.
Who will benefit from MeMAD?
Anyone who uses audiovisual content will benefit from MeMAD. Professionals in the creative industries will gain new methods for video management and digital storytelling. Visually and hearing impaired people, among others, will have better access to video content.
MeMAD is an EU-funded H2020 research project (2018–2020). MeMAD will develop methods for the efficient re-use and re-purposing of multilingual audiovisual content, aiming to revolutionize video management and digital storytelling in broadcasting and media production.
“We go far beyond the state-of-the-art automatic video description methods by making the machine learn from the human. The resulting description is thus not only a time-aligned semantic extraction of objects but makes use of the audio and recognizes action sequences.”
MeMAD’s main results are detailed, rich descriptions of moving images, speech, and audio. We integrate the latest research achievements in deep neural network techniques for computer vision with knowledge bases and with human and machine translation, in a continuously improving machine learning framework.
MeMAD’s experiences with data licensing presented at conferences in Helsinki and Tallinn, October–November 2018
MeMAD’s work on datasets and data licensing for research was presented at two conferences in late 2018. The Baltic audiovisual archives community gathered at the annual BAAC conference in…