By Foo Yun Chee and Supantha Mukherjee
BRUSSELS/STOCKHOLM (Reuters) – EU industry chief Thierry Breton has said new proposed artificial intelligence rules will aim to tackle concerns about the risks around the ChatGPT chatbot and AI technology, in the first comments on the app by a senior Brussels official.
Just two months after its launch, ChatGPT – which can generate articles, essays, jokes and even poetry in response to prompts – has been rated the fastest-growing consumer app in history.
Some experts have raised fears that the systems underpinning such apps could be misused for plagiarism, fraud and spreading misinformation, even as champions of artificial intelligence hail it as a technological leap.
Breton said the risks posed by ChatGPT – the brainchild of OpenAI, a private company backed by Microsoft Corp – and AI systems underscored the urgent need for rules which he proposed last year in a bid to set the global standard for the technology. The rules are currently under discussion in Brussels.
“As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks. This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data,” he told Reuters in written comments.
Microsoft declined to comment on Breton’s statement. OpenAI – whose app uses a technology called generative AI – did not immediately respond to a request for comment.
OpenAI has said on its website it aims to produce artificial intelligence that “benefits all of humanity” as it attempts to build safe and beneficial AI.
Under the EU draft rules, ChatGPT is considered a general purpose AI system which can be used for multiple purposes including high-risk ones such as the selection of candidates for jobs and credit scoring.
Breton wants OpenAI to cooperate closely with downstream developers of high-risk AI systems to enable their compliance with the proposed AI Act.
“Just the fact that generative AI has been newly included in the definition shows the speed at which technology develops and that regulators are struggling to keep up with this pace,” said a partner at a U.S. law firm.
‘HIGH RISK’ WORRIES
Companies are worried about getting their technology classified under the “high risk” AI category which would lead to tougher compliance requirements and higher costs, according to executives of several companies involved in developing artificial intelligence.
A survey by industry body appliedAI showed that 51% of respondents expect their AI development activities to slow down as a result of the AI Act.
Effective AI regulations should centre on the highest risk applications, Microsoft President Brad Smith wrote in a blog post on Wednesday.
“There are days when I’m optimistic and moments when I’m pessimistic about how humanity will put AI to use,” he said.
Breton said the European Commission is working closely with the EU Council and European Parliament to further clarify the rules in the AI Act for general purpose AI systems.
“People would need to be informed that they are dealing with a chatbot and not with a human being. Transparency is also important with regard to the risk of bias and false information,” he said.
Generative AI models need to be trained on huge amounts of text or images to produce appropriate responses, which has led to allegations of copyright violations.
Breton said forthcoming discussions with lawmakers about AI rules would cover these aspects.
Concerns about plagiarism by students have prompted some U.S. public schools and French university Sciences Po to ban the use of ChatGPT.
(Reporting by Foo Yun Chee in Brussels and Supantha Mukherjee in Stockholm. Editing by Jane Merriman, Matt Scuffham and Andrew Heavens)