Abstract: Human communication is inherently multimodal and asynchronous. Analyzing
human emotions and sentiment is an emerging field of artificial intelligence.
We are witnessing a growing volume of multimodal content about products and
other topics posted on social media in local languages. However, few
multimodal resources are available for under-resourced Dravidian languages. Our
study aims to create a multimodal sentiment analysis dataset for the
under-resourced Tamil and Malayalam languages. First, we downloaded product and
movie review videos in Tamil and Malayalam from YouTube. Next, we created
captions for the videos with the help of annotators. Then we labelled the
videos for sentiment and measured inter-annotator agreement using Fleiss's
kappa. This is the first multimodal sentiment analysis dataset for Tamil and
Malayalam created by volunteer annotators.
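For reference, Fleiss's kappa over $N$ items, each rated by $n$ annotators into one of $k$ sentiment categories, follows the standard definition (a general formula, not a detail specific to this dataset):

\[
\kappa = \frac{\bar{P} - \bar{P}_e}{1 - \bar{P}_e},
\qquad
\bar{P} = \frac{1}{N}\sum_{i=1}^{N}\frac{1}{n(n-1)}\Biggl(\sum_{j=1}^{k} n_{ij}^{2} - n\Biggr),
\qquad
\bar{P}_e = \sum_{j=1}^{k} p_j^{2},
\quad
p_j = \frac{1}{Nn}\sum_{i=1}^{N} n_{ij},
\]

where $n_{ij}$ is the number of annotators who assigned item $i$ to category $j$; $\kappa = 1$ indicates perfect agreement, and values at or below $0$ indicate agreement no better than chance.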