How MediaVerse employs Artificial Intelligence to mitigate the impact of disturbing imagery on viewers

Feb 14, 2023 | Content Moderation, Media Analysis

Professionals such as journalists and human rights investigators frequently examine digital content originating from wars (e.g., the recent war in Ukraine), natural disasters (e.g., the catastrophic earthquake in Turkey), accidents, and other events. This digital content may include disturbing or traumatising material that can have negative effects on mental well-being.

Protecting these investigators as much as possible has therefore become vital.

CERTH, as an expert in AI-based technologies, has collaborated with DW to implement a variety of filters intended to reduce the impact of imagery that includes graphic elements.

AI filter example: Applying a painting style to the input image.
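
To illustrate what such a filter could look like, here is a minimal sketch of a painting-style transformation built on OpenCV's stylization function (edge-preserving, non-photorealistic rendering). This is only one possible approach sketched for illustration; it is not the actual MediaVerse filter implementation, and the file names and parameter values below are hypothetical.

    # A minimal sketch of a painting-style filter using OpenCV's
    # built-in stylization. Illustrative only; not the MediaVerse
    # filter pipeline.
    import cv2

    def apply_painting_filter(input_path: str, output_path: str,
                              sigma_s: float = 60, sigma_r: float = 0.45) -> None:
        """Render an image with a painterly look.

        sigma_s (0-200) controls the size of the smoothing neighbourhood;
        sigma_r (0-1) controls how dissimilar colours are averaged.
        Higher values abstract the image more strongly, softening graphic
        detail while keeping the overall scene recognisable.
        """
        image = cv2.imread(input_path)
        if image is None:
            raise FileNotFoundError(f"Could not read image: {input_path}")
        # cv2.stylization applies edge-preserving smoothing that yields
        # a painting-like rendering of the input.
        stylized = cv2.stylization(image, sigma_s=sigma_s, sigma_r=sigma_r)
        cv2.imwrite(output_path, stylized)

    # Example usage (hypothetical file names):
    # apply_painting_filter("graphic_scene.jpg", "graphic_scene_painted.jpg")

The intuition behind a filter like this is that abstracting away fine detail softens graphic regions while the overall scene remains understandable, which is precisely the trade-off the study described below sets out to evaluate.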

To research how far Artificial Intelligence can go in detecting gruesome imagery that is potentially disturbing or traumatising, and to serve as a kind of 'early warning system' so that investigators do not encounter such imagery unprepared, CERTH and DW are carrying out a study entitled "Mitigating Viewer Impact from Disturbing Imagery using AI Filters".

A brief overview of the study (you can find details in the Google Form):

  • Purpose: Investigate how different AI filters can reduce viewers’ impact from disturbing imagery, while retaining critical information that allows for understanding what the images depict.
  • Participants: Anyone, whether a professional or not, who is frequently exposed to potentially disturbing imagery
  • Duration: around 15 minutes
  • Risks and benefits: Participants will be exposed to graphic content that may cause feelings of worry, concern, or anxiety.

For anyone who would like to contribute to this study, the questionnaire is available through Google Forms.

Please take the time to read the instructions at the outset of the linked document carefully. The goal of the study is to help investigators, but avoiding negative impacts on participants is paramount. If at any point you start feeling uncomfortable with the questionnaire or the examples used, do not feel obliged to continue or complete it.

The study is carried out in the context of the MediaVerse project. If AI filters prove helpful to investigators for transforming disturbing images, we will proceed with integrating these filters into the MediaVerse Asset Annotation Management platform.