How technology can detect fake news in videos


Social networks are a major channel for the spread of fake news and disinformation. The situation has been aggravated by recent advances in photo and video editing and artificial intelligence tools, which make it easy to manipulate audiovisual files, for example with so-called deepfakes, which combine and superimpose images, audio and video clips to create montages that look like real footage.

Researchers from the K-riptography and Information Security for Open Networks (KISON) and Communication Networks & Social Change (CNSC) groups of the Internet Interdisciplinary Institute (IN3) at the Open University of Catalonia (UOC) have launched a new project to develop innovative technology that, using artificial intelligence and data-hiding techniques, should help users automatically differentiate between original and manipulated multimedia content, thus helping to minimize the reposting of fake news. The project, DISSIMILAR, is an international initiative led by the UOC with participation from researchers at the Warsaw University of Technology (Poland) and Okayama University (Japan).

“The project has two objectives: on the one hand, to provide content creators with tools to watermark their creations, making any modification easily detectable; and on the other, to offer social network users tools based on state-of-the-art signal processing and machine learning methods to detect counterfeit digital content,” explains Professor David Megías, principal investigator at KISON and director of the IN3. In addition, DISSIMILAR intends to include “the cultural dimension and the point of view of the end user throughout the entire project,” from the design of the tools to the study of their usability at the different stages.

The danger of bias

Currently, there are basically two types of tools for detecting fake news. First, there are automatic ones based on machine learning, of which (currently) only a few prototypes exist. And second, there are the human-assisted detection systems found on platforms such as Facebook and Twitter, which require the involvement of people to determine whether specific content is genuine or fake. According to David Megías, this centralized solution could be affected by “different biases” and encourage censorship. “We believe that an objective evaluation based on technological tools may be a better option, as long as users have the last word on deciding, based on a prior evaluation, whether they can trust certain content or not,” he explains.

For Megías, there is no “single silver bullet” that can detect fake news: rather, detection must be carried out with a combination of different tools. “That is why we have opted to explore information hiding (watermarks), digital content forensic analysis techniques (largely based on signal processing) and, of course, machine learning,” he points out.

Automatic verification of multimedia files

Digital watermarking comprises a series of techniques in the field of data hiding that embed imperceptible information in the original file in order to verify a multimedia file “easily and automatically.” “It can be used to indicate the legitimacy of a piece of content, for example by confirming that a video or photo was distributed by an official news agency, and it can also be used as an authentication mark, which would be removed if the content were modified, or to trace the origin of the data. That is, it can tell you whether the source of the information (for example, a Twitter account) is spreading false content,” explained Megías.
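The article does not detail the project's actual watermarking scheme. As a purely illustrative sketch of the authentication idea, the following Python code embeds a fragile watermark in the least significant bit of each pixel: any edit to the marked image destroys the mark, so verification fails. All names and data here are hypothetical.

```python
import numpy as np

def embed_watermark(image: np.ndarray, mark: np.ndarray) -> np.ndarray:
    """Embed a binary mark in the least significant bit (LSB) of each pixel.

    This is a *fragile* watermark: any edit that changes pixel values
    corrupts the mark, which is exactly what makes tampering detectable.
    """
    assert image.shape == mark.shape
    return (image & ~np.uint8(1)) | mark.astype(np.uint8)

def verify_watermark(image: np.ndarray, mark: np.ndarray) -> bool:
    """Check whether the expected mark is still intact in the image."""
    return bool(np.array_equal(image & 1, mark))

# Hypothetical demo: a random 8-bit grayscale "photo" and a known binary mark.
rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
mark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)

marked = embed_watermark(photo, mark)
print(verify_watermark(marked, mark))    # True: content is intact

tampered = marked.copy()
tampered[10:20, 10:20] += 1              # simulate an edit to a small region
print(verify_watermark(tampered, mark))  # False: modification detected
```

A production scheme would additionally survive compression and rely on cryptographic keys; this toy only illustrates the detect-any-modification behavior described above.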

Digital content forensic analysis techniques

The project will combine the development of watermarks with the application of digital content forensic analysis techniques. The objective is to take advantage of signal processing technology to detect the intrinsic distortions produced by the devices and programs used to create or modify an audiovisual file. These processes give rise to a series of disturbances, such as sensor noise or optical distortion, which can be detected by machine learning models. “The idea is that the combination of all these tools improves the results compared to the use of any single solution,” says Megías.
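As a rough, hypothetical illustration of the forensic idea (not the project's actual pipeline), the sketch below computes a high-pass noise residual, the image minus a denoised copy of itself, in the spirit of sensor-noise analysis, and summarizes its energy per block; spliced or retouched regions often show anomalous residual statistics. A real system would feed such residuals to a trained machine learning model, which this toy omits.

```python
import numpy as np
from scipy.ndimage import median_filter

def noise_residual(image: np.ndarray) -> np.ndarray:
    """High-pass residual: the image minus a denoised version of itself.

    Sensor noise and processing artifacts live mostly in this residual,
    which is why forensic pipelines analyze it rather than raw pixels.
    """
    image = image.astype(np.float64)
    return image - median_filter(image, size=3)

def block_residual_energy(image: np.ndarray, block: int = 32) -> np.ndarray:
    """Variance of the residual in each block; tampered regions may stand out."""
    r = noise_residual(image)
    h, w = r.shape
    return np.array([
        [r[i:i + block, j:j + block].var()
         for j in range(0, w - block + 1, block)]
        for i in range(0, h - block + 1, block)
    ])

# Hypothetical demo: a synthetic "photo" with a patch pasted in from a source
# that has a different noise level, mimicking a splice.
rng = np.random.default_rng(1)
photo = rng.normal(128, 2, size=(128, 128))
photo[32:64, 32:64] = rng.normal(128, 8, size=(32, 32))
print(block_residual_energy(photo).round(1))  # the spliced block stands out
```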

Studies with users from Catalonia, Poland and Japan

One of the key features of DISSIMILAR is its “holistic” approach and its collection of the “perceptions and cultural components surrounding fake news.” To this end, several user-focused studies will be carried out, broken down into different stages. “First of all, we want to know how users interact with the news: what interests them, what media they consume, what they use as a basis for identifying certain content as fake news, and what they are willing to do to verify its veracity. If we can identify these things, it will be easier for the technological tools we design to help prevent the spread of fake news,” explained Megías.

These perceptions will be measured in different places and cultural contexts, through studies of user groups in Catalonia, Poland and Japan, so that their idiosyncrasies can be incorporated into the design of the solutions. “This is important because, for example, each country has governments and/or public authorities with a greater or lesser degree of credibility. This affects how people follow the news and whether they give credence to fake news: if I don’t believe the word of the authorities, why should I pay attention to news that comes from these sources? This could be seen during the COVID-19 crisis: in countries where there was less trust in public authorities, there was less compliance with the suggestions and rules on handling the pandemic and vaccination,” said Andrea Rosales, a researcher at the CNSC.

A product that is easy to use and understand

In stage two, users will participate in the design of the tool to “ensure that the product is well received, easy to use and understandable,” said Andrea Rosales. “We would like them to engage with us throughout the process, right up to the production of the final prototype, as this will help us better respond to their needs and priorities and do what other solutions have not been able to do,” added David Megías.

This user acceptance could, in the future, be a factor that leads social media platforms to adopt the solutions developed in this project. “If our experiments bear fruit, it would be great if they integrated these technologies. For now, we would be happy with a working prototype and a proof of concept that could encourage social media platforms to include these technologies in the future,” concluded David Megías.

The team's previous research was published in the special issue on the ARES Workshops 2021.


More information:
D. Megías et al., Architecture of a fake news detection system combining digital watermarks, signal processing and machine learning, Special Issue on the ARES Workshops 2021 (2022). DOI: 10.22667/JOWUA.2022.03.31.033

A. Qureshi et al., Detection of fake videos by digital watermarking, 2021 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) (2021). ieeexplore.ieee.org/document/9689555

D. Megías et al., DISSIMILAR: Towards fake news detection through information hiding, signal processing and machine learning, 16th International Conference on Availability, Reliability and Security (ARES 2021) (2021). doi.org/10.1145/3465481.3470088

Provided by the Open University of Catalonia (UOC)

Citation: How Technology Can Spot Fake News in Videos (June 29, 2022) Retrieved June 29, 2022 from https://techxplore.com/news/2022-06-technology-fake-news-videos.html

This document is subject to copyright. Other than any fair dealing for private study or research purposes, no part may be reproduced without written permission. The content is provided for informational purposes only.
