2nd International Workshop on AI for Smart TV Content Production, Access and Delivery (AI4TV 2020)

Due to COVID-19 and potential travel restrictions, AI4TV 2020 will take place as a virtual workshop. Everything else remains unchanged: high-quality proceedings will be published in the ACM DL on schedule.

This workshop follows the very successful first edition of AI4TV, which took place at ACM Multimedia 2019 in Nice, France, https://memad.eu/ai4tv2019/ (~40 participants). It aims to bring together experts from academia and industry to discuss the latest research progress on topics related to multimodal information analysis, in particular semantic analysis of video, audio and textual information for intelligent digital TV content production, compliance, access and delivery. Such topics include, but are not limited to, the following multimedia analysis techniques for broadcast TV and radio programs as well as large TV archives:

  • Multimodal content analysis: scene segmentation, person recognition, object detection, speaker gender recognition, speaker diarization, topic identification using video, audio and metadata
  • Multimodal embeddings for multimedia (audio, visual, text, Knowledge Graph)
  • Automatic multimedia summarization
  • Automatic deep captioning
  • Automatic content description
  • Interactive multimodal search in archives
  • Hyperlinking and enrichment of TV content
  • Anomaly and violation detection in TV media contents
  • Automated TV content and camera compliance (emotion detection, fire detection, etc.)
  • Media-rich fake news detection
  • Breaking the language barrier of TV content using multimodal translation
  • Gender studies on TV and radio programs
Important dates:

  • Submission Due: Thursday 30 July 2020 (extended to Monday 10 August)
  • Acceptance Notification: Wednesday 26 August 2020
  • Camera Ready: Wednesday 2 September 2020

Submission page:

Submissions are made via https://cmt3.research.microsoft.com/MMW2020

Paper format:

Submitted papers (.pdf format) must use the ACM Article Template (https://www.acm.org/publications/proceedings-template). Please remember to add Concepts and Keywords.

Length:

The AI4TV workshop will welcome two kinds of submissions:

  • Research papers, which can be 6 to 8 pages long. Up to two additional pages may be added for references; the reference pages must contain only references. Optionally, you may upload supplementary material that complements your submission (100 MB limit).
  • Demo papers which can be 2 pages.

Blinding:

Paper submissions must conform to the “double-blind” review policy. This means that the authors should not know the names of the reviewers of their papers, and reviewers should not know the names of the authors. Please prepare your paper in a way that preserves the anonymity of the authors.

    • Do not put the authors’ names under the title.
    • Avoid using phrases such as “our previous work” when referring to earlier publications by the authors.
    • Remove information that may identify the authors in the acknowledgments (e.g., co-workers and grant IDs).
    • Check supplemental material (e.g., titles in the video clips, or supplementary documents) for information that may identify the authors’ identities.
    • Avoid providing links to websites that identify the authors.

Papers without appropriate blinding will be rejected without review.

Originality:

Papers submitted to ACM Multimedia must be the original work of the authors. They may not be simultaneously under review elsewhere. Publications that have been peer-reviewed and have appeared at other conferences or workshops may not be submitted to ACM Multimedia (see also the arXiv/archive policy below). Authors should be aware that ACM has a strict policy with regard to plagiarism and self-plagiarism (https://www.acm.org/publications/policies/plagiarism). The authors’ prior work must be cited appropriately.

Author list:

Please ensure that you submit your papers with the full and final list of authors in the correct order. The author list registered for each submission is not allowed to change in any way after the paper submission deadline. (Note that this rule concerns the identity of the authors; e.g., typos are correctable.)

Proofreading:

Please proofread your submission carefully. It is essential that the language used in the paper is clear and correct so that it is easily understandable. (Either US English or UK English spelling conventions are acceptable.)

ArXiv/archive policy:

In accordance with ACM guidelines, all SIGMM-sponsored conferences adhere to the following policy regarding arXiv papers:

We define a publication as a written piece documenting scientific work that was submitted for review by peers for either acceptance or rejection, and, after review, has been accepted. Documentation of scientific work that is published in a not-for-profit archive without any form of peer review (departmental Technical Report, arXiv.org, etc.) is not considered a publication. However, this definition of publication does include peer-reviewed workshop papers, even if they do not appear in formal proceedings. Any submission to ACM Multimedia must not have substantial overlap with prior publications or other work currently undergoing peer review.

Note that documents published on website archives are subject to change. Citing such documents is discouraged. Furthermore, ACM Multimedia will review the documents formally submitted and any additional information in a web archive version will not affect the review.

AI4TV 2020 will take place on Monday 12 October 2020, starting at 8am ET. In the program below, all times are in Eastern Time (UTC-4).

  • 08:00-08:15: Workshop opening and welcome by workshop chairs, [slides]
  • 08:15-09:00: Keynote 1: Alexandre Rouxel (EBU, Switzerland) – AI in the Media Spotlight [slides]

Alexandre Rouxel is a Data Scientist and Project Coordinator in the Technology and Innovation department at the EBU, where he coordinates projects on metadata, cloud computing and machine learning for media applications. Before joining the EBU, he accumulated 20 years of experience as an algorithms and systems design engineer at successful Nasdaq-listed companies. He has extensive experience in developing standards and innovative products, from research to market. He is a data enthusiast, eager to design and promote efficient algorithms and systems for extracting and valorising information from massive amounts of data.

  • 09:00-10:00: Session 1 – Video Analytics and Storytelling
    • Lyndon Nixon – Predicting your future audience’s popular topics to optimize TV content marketing success
    • Syeda Maryam Fatima Taqvi, Marina Shehzad, and Sami Murtaza – Neural Style Transfer Based Voice Mimicking for Personalized Audio Stories
    • Miggi Zwicklbauer, Willy Lamm, Martin Gordon, Konstantinos Apostolidis, Basil Philipp, and Vasileios Mezaris – Video Analysis for Interactive Story Creation: The Sandmännchen Showcase
  • 10:00-10:15: Coffee break
  • 10:15-11:00: Keynote 2: Natalie Parde (University of Illinois, USA) – And, Action!  Towards Leveraging Multimodal Patterns for Storytelling and Content Analysis [slides]

Natalie Parde is an Assistant Professor in the Department of Computer Science at the University of Illinois at Chicago, where she also co-directs UIC’s Natural Language Processing Laboratory.  Her research interests are in natural language processing, with emphases in interactive systems, multimodality, creative language, and grounded language learning.  She serves on the program committees of EMNLP, the Association for Computational Linguistics (ACL), and the North American Chapter of the ACL (NAACL), among other conferences and workshops, and as a review panelist for the National Science Foundation.  In her spare time, Dr. Parde enjoys engaging in mentorship and outreach for underrepresented CS students.

  • 11:00-12:00: Session 2 – Video Annotation and Summarization
    • Dejan Porjazovski, Juho Leinonen and Mikko Kurimo – Named Entity Recognition for Spoken Finnish
    • Yashaswi Rauthan, Vatsala Singh, Rishabh Agrawal, Satej Kadlay, Niranjan Pedanekar, Shirish Karande, Iaphi Tariang and Manasi Malik – Avoid Crowding in the Battlefield: Semantic Placement of Social Messages in Entertainment Programs
    • Vishal Kaushal, Suraj Kothawade, Rishabh Iyer, and Ganesh Ramakrishnan – Realistic Video Summarization through VISIOCITY: A New Benchmark and Evaluation Framework
  • 12:00-12:15: Wrap up, Next steps and Closing by all workshop participants
Program Committee:

  • Maryam Amiri, Ciena Telecom, Canada
  • Olivier Aubert, University of Nantes, France
  • Werner Bailer, Joanneum Research, Austria
  • Adrian Brasoveanu, MODUL Technology GmbH, Austria
  • Jean Carrive, INA, France
  • Francesco Cricri, Nokia Technologies, Finland
  • Jiang (John) Gao, Samsung Research America, USA
  • Emir Halepovic, AT&T Labs, USA
  • Mikko Kurimo, Aalto University, Finland
  • Tiina Lindh-Knuutila, Lingsoft Language Services, Finland
  • Jia-Yu Pan, Google, USA
  • Symeon Papadopoulos, CERTH, Greece
  • Shahin Sefati, Comcast Labs, USA
  • Guan-Ming Su, Dolby Labs, USA
  • Jörg Tiedemann, University of Helsinki, Finland
  • Zhe Wu, Comcast Labs, USA
  • Honglei Zhang, Nokia Technologies, Finland
[Logos: MeMAD (Methods for Managing Audiovisual Data) and ReTV projects]