European Union officials have discussed additional measures to enhance the transparency of artificial intelligence (AI) tools, specifically those capable of generating disinformation. The focus is on making AI-generated content more transparent to the public to combat the spread of “fake news.” The call for action comes amid growing concern that generative AI technology could be exploited by malicious actors and even governments.
Vera Jourova, the European Commission’s vice president for values and transparency, emphasized the need for companies deploying generative AI tools to take responsibility for the potential disinformation that their technology may produce. She stressed the importance of placing clear labels on content generated by these tools to enable users to distinguish between authentic information and potentially misleading or false content.
Safeguards against misuse
Jourova specifically mentioned prominent tech companies that integrate generative AI into their services, such as Microsoft’s Bing Chat and Google’s Bard, and highlighted the importance of safeguards against the malicious use of these tools. The goal is to ensure that these services cannot be exploited by individuals or organizations seeking to generate and propagate false narratives.
To address the challenges posed by AI-generated disinformation, Jourova urged signatories of the EU Code of Practice on Disinformation, including Google, Microsoft, and Meta Platforms, to report on the safeguards they have implemented to combat the dissemination of AI-generated false information. The upcoming July reports are expected to outline the measures taken by these companies to protect users and maintain the integrity of information shared through their platforms.
Twitter’s withdrawal from the EU Code of Practice on Disinformation has attracted significant attention and raised concerns about the company’s commitment to complying with EU law. Jourova warned that Twitter should accordingly anticipate heightened regulatory scrutiny, emphasizing the urgency and thoroughness with which the company’s actions will be assessed.
The future of AI regulation
While the EU is actively developing the forthcoming EU Artificial Intelligence Act, which will provide comprehensive guidelines for the public use of AI and its deployment by companies, officials have urged companies to adopt a voluntary code of conduct in the interim. Such a code would establish ethical standards and best practices for generative AI developers, addressing the challenges posed by AI-generated disinformation and protecting the public from its potentially harmful effects.
As the deployment of AI tools continues to grow, transparency and safeguards against disinformation become increasingly crucial. The EU’s push for transparency and accountability in AI-generated content sets the stage for a broader discussion on the ethical use of AI and the need for robust regulations to protect users and the integrity of information in the digital age.