Lisbon, Nov. 2, 2025 (Lusa) - CMS Portugal lawyer João Leitão Figueiredo warned in an interview with Lusa that people should bear in mind that generative AI systems are liable to make mistakes and are only support tools.
There are many examples of wrong or false information produced by generative artificial intelligence models (systems able to create content), including those used in search engines. In Portugal, one of the most recent cases led Google to correct false content about who Anabela Natário was, after the journalist and writer complained to the technology company.
This is an issue "that's on the agenda. In fact, we still haven't solved previous problems related to search engines, and we already have a new problem that has to do with the hallucinations of artificial intelligence systems, particularly those of generative artificial intelligence," says the lawyer.
"I think we should all realise that this is a natural consequence of these systems, especially those that are [...] 'general purpose AI'," he continues, explaining why.
"Because the quality of the “input” [information] received by these generative artificial intelligence systems is often fragile, which means that the tool ends up producing equally fragile and erroneous “outputs”, what we normally and often call hallucinations," he said.
Now, "this is a reality that we have to live with, to the extent that when we ask someone something and the answer doesn't turn out to be correct, these systems are also subject to making mistakes," says João Leitão Figueiredo.
For this reason, "everyone should bear in mind that these are support tools and are not technical experts in any field," warns the CMS Portugal lawyer.
In the case of companies, he says, such errors in the information are less likely "insofar as the business name or denomination is, or tends to be, unique". There is also industrial property law, which has an impact not only on the registration of the name but also on the trademark or company logo.
"I'd say it's problematic with natural persons," he points out, noting that in these circumstances, the AI Act "focuses a lot on issues of transparency and risk management and not so much on liability," which leaves the scope somewhat open.
But "we can't forget at any point that we have the Product Liability Directive [PLD]".
"It's a directive that will come into force in 2026 on liability for defective products. So it's a new directive that also includes artificial intelligence in these matters, which had been a little left out," he said.
In this directive, "there are mechanisms that require the manufacturer of this type of product, when contacted through the channels made available to the consumer or the person affected, to correct the information disclosed by the tool," emphasises João Leitão Figueiredo.
The PLD rules specifically clarify that all types of software are covered by the new directive, including applications, operating systems and AI systems.
ALU/ADB // ADB.