CheckThat! Lab at CLEF 2024


Task 4: Detecting hero, villain, and victim from memes

Definition

Given a meme (the image plus the text extracted from it) and a list of entities, the task is to predict the role of each entity: “hero”, “villain”, “victim”, or “other”. This is a multi-class classification task that requires modeling both the visual and the textual content of the meme.

This edition evaluates entity-role detection in three different language settings, encouraging knowledge transfer across languages.

Role labeling for memes:

This task focuses on detecting which entities are glorified, vilified, or victimized within a meme. Taking the meme author’s perspective as the frame of reference, the objective is to classify, for a given pair of a meme and an entity, whether the entity is referenced as Hero, Villain, Victim, or Other within that meme.

Definition of the entity classes:

  1. Hero: The entity is presented in a positive light and glorified for its actions, as conveyed by the meme or inferred from background context.
  2. Villain: The entity is portrayed negatively, e.g., associated with adverse traits such as wickedness, cruelty, or hypocrisy.
  3. Victim: The entity is portrayed as suffering the negative impact of someone else’s actions, whether stated explicitly or conveyed implicitly within the meme.
  4. Other: The entity is not a hero, a villain, or a victim.
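The four classes above make each (meme, entity) pair a four-way classification instance. A minimal sketch of that interface (the function name and the trivial majority-class baseline are illustrative, not part of the official task code):

```python
from typing import Literal

# The four roles defined by the task.
Role = Literal["hero", "villain", "victim", "other"]

def classify_entity(image_path: str, ocr_text: str, entity: str) -> Role:
    """Assign one of the four roles to `entity` as portrayed in the meme.

    Placeholder baseline: always predict "other", typically the majority
    class. A real system would fuse image and text features instead.
    """
    return "other"
```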

Datasets

Train/Dev Stage Data

  • Train + Dev + Dev-Test: The train, dev, and dev-test files can be obtained in .jsonl format from the given link, along with the required meme images.

Unseen Test Sets

Evaluation

The official evaluation measure for the shared task is the macro-averaged F1 score for the multi-class classification.
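To make the metric concrete, here is a self-contained sketch of macro-averaged F1 over the four roles, written in plain Python (an illustrative reimplementation, not the official scorer from the task repository):

```python
from collections import defaultdict

ROLES = ["hero", "villain", "victim", "other"]

def macro_f1(gold, pred, labels=ROLES):
    """Macro-averaged F1: compute F1 per class, then average unweighted."""
    tp = defaultdict(int)  # true positives per class
    fp = defaultdict(int)  # false positives per class
    fn = defaultdict(int)  # false negatives per class
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
    f1s = []
    for label in labels:
        prec = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
        rec = tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    # Every class contributes equally, so rare roles like "hero" matter
    # as much as the frequent "other" class.
    return sum(f1s) / len(labels)
```

Because the average is unweighted, a system that only ever predicts the majority class scores poorly even if its plain accuracy is high.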

Please Note: The final evaluations will be done for three independent language settings:

  • Bulgarian
  • English
  • Code-mixed

Submission

Scorers, Format Checkers, and Baseline Scripts

All scripts can be found on GitLab in the CheckThat! Lab Task 4 repository.

Submission guidelines

Instructions for prediction file submission at Codalab (to be shared soon):

  • The submission MUST be a .zip archive containing exactly two files: a ‘.jsonl’ file with your predicted results and a ‘description.txt’ file with a brief overview of your approach.
  • The prediction file may have any name, but it must use the ‘.jsonl’ extension, i.e., <samplefilename.jsonl>.
  • Do NOT include anything other than the single ‘.jsonl’ prediction file and ‘description.txt’.
  • Ensure that ALL test samples (meme images) AND all corresponding entities listed in the unseen set appear in your submission file; do NOT leave any entity unpredicted.
  • Be mindful of the submission limits: max submissions per day: 5; max submissions per user: 10.
  • E.g., for the samples shown for the unseen test set, the submission format would look like:

File format:

{"image": "memes_1486.png", "hero": ["Donald Trump"], "villain": ["Joe Biden"], "victim": [], "other": ["Democratic National Convention (DNC)", "Republican National Convention (RNC)"]}

{"image": "image_2.png", "hero": ["Vladimir Putin"], "villain": [], "victim": [], "other": ["the world", "Salman Khan", "vaccine"]}
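A short sketch of packaging a submission in the required shape: write one JSON object per meme to a ‘.jsonl’ file, add a ‘description.txt’, and zip exactly those two files. The file names and the example prediction are illustrative only:

```python
import json
import zipfile

ROLES = ("hero", "villain", "victim", "other")

# Hypothetical predictions: one dict per test meme, mapping each role to the
# list of entities assigned that role (contents here are illustrative).
predictions = [
    {"image": "memes_1486.png",
     "hero": ["Donald Trump"], "villain": ["Joe Biden"], "victim": [],
     "other": ["Democratic National Convention (DNC)",
               "Republican National Convention (RNC)"]},
]

# Write the JSONL prediction file, checking each row has exactly the
# expected keys so no entity list is accidentally dropped.
with open("predictions.jsonl", "w", encoding="utf-8") as f:
    for row in predictions:
        assert set(row) == {"image", *ROLES}, f"unexpected keys in {row}"
        f.write(json.dumps(row) + "\n")

with open("description.txt", "w", encoding="utf-8") as f:
    f.write("Brief overview of the approach.\n")

# Package exactly the two required files into the .zip submission.
with zipfile.ZipFile("submission.zip", "w") as zf:
    zf.write("predictions.jsonl")
    zf.write("description.txt")
```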

Submission Sites

Organizers

  • Tanmoy Chakraborty, Indian Institute of Technology Delhi, New Delhi, India
  • Shivam Sharma, Indian Institute of Technology Delhi, New Delhi, India
  • Palash Nandi, Indian Institute of Technology Delhi, New Delhi, India
  • Dimitar Dimitrov, Sofia University, Bulgaria
  • Preslav Nakov, Mohamed bin Zayed University of Artificial Intelligence, UAE