Technological developments in comprehensive video understanding – detecting and identifying the visual elements of a scene, understanding the audio (music, speech), and aligning both with textual information such as captions and subtitles – have advanced rapidly in recent years. New scientific breakthroughs in AI-based video understanding, together with the growing volume of multimedia content and increased computational power, have led to significant improvements in automated video description and have opened fresh avenues for seamlessly combining the analysis of multiple modalities.
The workshop aims to bring together experts from academia and industry to discuss the latest research progress in multimodal information analysis, and in particular in the semantic analysis of video, audio, and textual information for smart digital TV content production, access, and delivery. Topics include, but are not limited to, the following multimedia analysis techniques for streamed TV and radio programmes as well as TV archives (recorded content):
- Multimodal content analysis: scene segmentation, people and concept recognition, topic identification using video, audio and/or (textual) metadata
- Embeddings for multimedia knowledge graphs
- Use or adaptation of multimedia description models or vocabularies for machine learning / neural networks
- Combination of AI and external knowledge (graphs) for improved multimedia analysis
- Automatic multimedia summarization and remixing
- Automatic deep captioning
- Interactive multimodal search and browsing in archives
- Hyperlinking and enrichment of TV content
- Breaking the language barrier of TV content using multimodal translation
- Comparative evaluations of AI techniques for multimodal analysis tasks
- Creation of multimedia benchmarks for AI evaluations
- Gender studies on TV and radio programmes
The main goal of the workshop is to promote AI techniques for multimedia analysis that enable smarter content production, access, and delivery, with an emphasis on large TV and radio programme archives. We therefore welcome submissions from both industry and academia, including interdisciplinary work and contributions from other relevant mainstream research areas.
- Submission Due: Monday 15 July 2019
- Acceptance Notification: Monday 12 August 2019
- Camera Ready: Monday 19 August 2019
Submissions are made via AI4TV2019’s EasyChair page.
Submitted papers (in .pdf format) must use the ACM Article Template (https://www.acm.org/publications/proceedings-template). Please remember to add Concepts and Keywords.
The AI4TV workshop welcomes two kinds of submissions:
- Research papers of 6 to 8 pages. Up to two additional pages may be added for references; the reference pages must contain only references. Optionally, you may upload supplementary material that complements your submission (100 MB limit).
- Demo papers of 2 pages.
Paper submissions must conform to the “double-blind” review policy: authors must not know the names of the reviewers of their papers, and reviewers must not know the names of the authors. Please prepare your paper in a way that preserves the authors’ anonymity.
- Do not put the authors’ names under the title.
- Avoid using phrases such as “our previous work” when referring to earlier publications by the authors.
- Remove information that may identify the authors in the acknowledgments (e.g., co-workers and grant IDs).
- Check supplemental material (e.g., titles in the video clips, or supplementary documents) for information that may identify the authors’ identities.
- Avoid providing links to websites that identify the authors.
Papers without appropriate blinding will be rejected without review.
Papers submitted to ACM Multimedia must be the original work of the authors. They may not be simultaneously under review elsewhere. Publications that have been peer-reviewed and have appeared at other conferences or workshops may not be submitted to ACM Multimedia (see also the arXiv/archive policy below). Authors should be aware that ACM has a strict policy with regard to plagiarism and self-plagiarism (https://www.acm.org/publications/policies/plagiarism). The authors’ prior work must be cited appropriately.
Please ensure that you submit your papers with the full and final list of authors in the correct order. The author list registered for each submission may not change in any way after the paper submission deadline. (Note that this rule concerns the identity of the authors; typos, for example, are correctable.)
Please proofread your submission carefully. It is essential that the language used in the paper is clear and correct so that it is easily understandable. (Either US English or UK English spelling conventions are acceptable.)
In accordance with ACM guidelines, all SIGMM-sponsored conferences adhere to the following policy regarding arXiv papers:
We define a publication as a written piece documenting scientific work that was submitted for review by peers for either acceptance or rejection, and, after review, has been accepted. Documentation of scientific work that is published in a not-for-profit archive without any form of peer-review (departmental Technical Report, arXiv.org, etc.) is not considered a publication. However, this definition of publication does include peer-reviewed workshop papers, even if they do not appear in formal proceedings. Any submission to ACM Multimedia must not have substantial overlap with prior publications or other work currently undergoing peer review.
Note that documents published on website archives are subject to change, so citing them is discouraged. Furthermore, ACM Multimedia will review only the documents formally submitted; any additional information in a web archive version will not affect the review.
The AI4TV 2019 workshop will take place on Monday, October 21st, 2019, in Nice, France.
The program is as follows:
- 9:30-9:40 Workshop opening and welcome by workshop chairs
- 9:40-10:30 Keynote 1: Annotation automation to support dynamic exploration and creative retrieval of audiovisual archives by Dr. Johan Oomen (NISV, The Netherlands).
- 10:30-11:00 Coffee break
- 11:00-12:30 Oral Session 1
- 11:00-11:20 L-STAP: Learned Spatio-Temporal Adaptive Pooling for Video Captioning by Danny Francis (EURECOM, France) and Benoit Huet (EURECOM, France)
- 11:20-11:40 A Stepwise, Label-based Approach for Improving the Adversarial Training in Unsupervised Video Summarization by Evlampios Apostolidis (CERTH, Greece; and, QMUL, UK), Alexandros Metsai (CERTH, Greece), Eleni Adamantidou (CERTH, Greece), Vasileios Mezaris (CERTH, Greece) and Ioannis Patras (QMUL, UK)
- 11:40-12:00 On the Robustness of Deep Learning Based Face Recognition by Werner Bailer (JOANNEUM RESEARCH, Austria) and Martin Winter (JOANNEUM RESEARCH, Austria)
- 12:00-12:20 Gender Representation in French Broadcast Corpora and Its Impact on ASR Performance by Mahault Garnerin (Univ. Grenoble Alpes, France), Solange Rossato (Univ. Grenoble Alpes, France) and Laurent Besacier (Univ. Grenoble Alpes, France)
- 12:30-14:00 Lunch
- 14:00-14:50 Keynote 2: AI gets creative by Dr. Marta Mrak (BBC, UK)
- 14:50-15:30 Oral Session 2
- 14:50-15:10 AI for Audience Prediction and Profiling to Power Innovative TV Content Recommendation Services by Lyndon Nixon (MODUL Technology, Austria), Krzysztof Ciesielski (Genistat AG, Switzerland) and Basil Philipp (Genistat AG, Switzerland)
- 15:10-15:30 Data-driven Summarization and Synchronized Second-screen Enrichment of Cycling Races by Steven Verstockt (Ghent University, Belgium), Erik Mannens (Ghent University, Belgium) and Jelle De Bock (Ghent University, Belgium)
- 15:30-16:00 Coffee break
- 16:00-17:00 Demo session
- 16:00-16:10 Demo pitches, 2 minutes each, by all demo session presenters
- 16:10-17:00 Examples of Uses of Artificial Intelligence in Video Archives by Antoine Mercier (Radio Télévision Suisse, Switzerland), Sébastien Ducret (Radio Télévision Suisse, Switzerland), Charlotte Bürki (Radio Télévision Suisse, Switzerland) and Léonard Bouchet (Radio Télévision Suisse, Switzerland)
- 16:10-17:00 Automatically Adapting and Publishing TV Content for Increased Effectiveness and Efficiency by Basil Philipp (Genistat AG, Switzerland), Krzysztof Ciesielski (Genistat AG, Switzerland; and Polish Academy of Sciences, Poland) and Lyndon Nixon (MODUL Technology, Austria)
- 16:10-17:00 Personalized Movie Trailer Using Thumbnail Containers by Ghulam Mujtaba (Gachon University, Korea) and Eun-Seok Ryu (Gachon University, Korea)
- 16:10-17:00 A Workstation for Real-time Processing of Multi-channel TV by Mathieu Delalandre (LIFAT Laboratory, France)
- 17:00-17:30 Wrap up, Next steps and Closing by all workshop participants
- Olivier Aubert, University of Nantes, France
- Werner Bailer, Joanneum Research, Austria
- Jean Carrive, INA, France
- Mikko Kurimo, Aalto University, Finland
- Tiina Lindh-Knuutila, Lingsoft, Finland
- Symeon Papadopoulos, CERTH, Greece
- Jörg Tiedemann, University of Helsinki, Finland
- Dieter Van Rijsselbergen, Limecraft, Belgium
- Miggi Zwicklbauer, Rundfunk Berlin-Brandenburg, Germany