Lisbon, Sept. 24, 2025 (Lusa) - Instituto Superior Técnico (IST) researcher Sofia Pinto said that current fact-checking mechanisms are so time-consuming that they allow disinformation to proliferate unchecked, and suggested that automated fact-checking could be a solution.
"Manual journalistic fact-checking is the usual way of dealing with fake news, however, this labour-intensive task is regularly not compatible with the scale of the problem," so automated fact-checking (AFC) is presented "as a potential solution," reads the article "Automated fact-checking explained for journalists," by computer science researcher Sofia Pinto.
Automated fact-checking relies on Artificial Intelligence (AI) tools, specifically large language models (LLMs), to automatically generate fact-checking data that supports other verification approaches.
"Human verification is key to reducing the increasingly negative effects of fake news," but "the viral spread of some claims often renders this time-consuming activity ineffective," the document states.
In an interview with Lusa, Sofia Pinto, an AI researcher at IST, explains that one of the project's aims is to speed up the identification of disinformation, since "it takes a long time" between an online post going viral and the publication of a rebuttal by a news outlet.
"The journalistic task of verifying and creating explanations for fake or manipulated news involves time and resources that make it difficult to stop this viral sharing right at the beginning, in time to stop its indiscriminate proliferation," she says.
For the researcher, "the time it takes to debunk them means that fake or manipulated news, repeated indiscriminately, becomes 'truth' believed by those who read it, and that the denials produced by 'fact-checkers' will never have the same virality, because the subject is already dead, it's yesterday's news".
To help with this task, the lecturer proposes a "tool with which journalists can improve their ability to react more quickly to the verification of news".
The model in question aims to reduce the time required to evaluate potentially false content, accelerating the fact-checking of digital material and helping journalists combat disinformation.
The AI system automatically carries out all the fact-checking steps that professional journalists would manually perform, requiring only the identification of the statement to be verified.
The user has access to all the steps taken to obtain evidence, the sources used to compile the information, and a verdict based on that information. The model also generates an article explaining the reasons for the verdict in question.
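The article does not include the system's code, but as a rough illustration of the pipeline described above (a claim goes in; evidence, sources, a verdict and an explanation come out), the hypothetical Python sketch below uses stub functions in place of the real retrieval and LLM components.

```python
from dataclasses import dataclass, field

@dataclass
class FactCheckResult:
    claim: str
    evidence: list = field(default_factory=list)   # retrieved passages
    sources: list = field(default_factory=list)    # where each passage came from
    verdict: str = "unverified"                    # e.g. "supported", "refuted"
    explanation: str = ""                          # article-style rationale for the verdict

def retrieve_evidence(claim: str) -> list[tuple[str, str]]:
    """Stand-in for the evidence-retrieval step; a real system would query live sources."""
    return [("Example passage discussing the claim.", "https://example.org/article")]

def judge_claim(claim: str, evidence: list[str]) -> str:
    """Stand-in for the verdict step, where an LLM would weigh the claim against the evidence."""
    return "supported" if evidence else "not enough evidence"

def generate_explanation(claim: str, evidence: list[str], verdict: str) -> str:
    """Stand-in for the explanation step, where an LLM would draft the explanatory article."""
    return f"The claim '{claim}' was judged '{verdict}' based on {len(evidence)} passage(s)."

def fact_check(claim: str) -> FactCheckResult:
    """Run the full pipeline: retrieve evidence, reach a verdict, explain it."""
    pairs = retrieve_evidence(claim)
    evidence = [passage for passage, _ in pairs]
    sources = [source for _, source in pairs]
    verdict = judge_claim(claim, evidence)
    explanation = generate_explanation(claim, evidence, verdict)
    return FactCheckResult(claim, evidence, sources, verdict, explanation)

if __name__ == "__main__":
    result = fact_check("Statement flagged by a journalist for verification.")
    print(result.verdict)
    print(result.explanation)
```

In a working AFC system of the kind the article describes, each stub would be a real component: search or retrieval over news sources for evidence, and LLM calls to reach the verdict and draft the explanatory article, with every intermediate step exposed to the journalist.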
Sofia Pinto acknowledges that although "it was developed and tested in English (...), it would be very interesting to develop and check how a Portuguese variant behaves".
The work involved a study with the participation of more than 100 professional journalists, aimed at validating the model's functionality and assessing the quality of the explanations generated.
The initiative, developed by IST students, is part of the CIMPLE project, which aims to research and develop creative and innovative explanations for social and knowledge-based AI and to test them in the field of detecting and tracking manipulated information.
CIMPLE is based on models of human creativity, both in manipulating and understanding information, to design explanations that are more comprehensible, reconfigurable and customisable.
PYR/ADB // ADB.
Lusa