European Commission wants tech platforms to be required to label AI-generated content
These suggestions are an attempt to lessen the impact of deepfakes and generative AI on democracy.
In an effort to prevent disinformation from infiltrating the next European elections, the European Commission plans to require digital platforms such as Facebook, TikTok, and X to identify material created by artificial intelligence (AI).
The commission has begun a public consultation on proposed standards for very large online platforms (VLOPs) and very large online search engines (VLOSEs) in an effort to improve election security.
The proposed rules lay out a number of steps to combat election-related risks, such as establishing explicit instructions for elections to the European Parliament, developing pre- and post-election plans to mitigate risks, and addressing particular methods linked to generative AI content.
The use of generative AI to create and spread false or misleading information about political figures, events, polls, and narratives could influence voting behavior and the outcome of elections.
Until March 7, the public in the European Union may comment on the proposed election security standards.
The commission considers it important for relevant platforms to warn users that generative AI content can be inaccurate.
The draft also states that the rules should encourage users to seek out credible sources of information and that tech companies should take measures to prevent the dissemination of deceptive material.
For AI-generated text, the draft recommends that VLOPs and VLOSEs indicate the concrete sources used as input data in the outputs, so that users can check the reliability of the information and contextualize it further.
The draft guidelines’ suggested “best practices” for risk reduction are based on the AI Act, the European Union’s recently agreed legislation, and its non-binding counterpart, the AI Pact.
Since the broad deployment of generative AI in 2023, tools like OpenAI’s ChatGPT have come under scrutiny amid growing concerns about sophisticated AI systems, including large language models.
Meta announced in a company blog post that it will introduce new guidelines for AI-generated content on Facebook, Instagram, and Threads in the coming months. The commission has not yet specified when companies must label manipulated content under the Digital Services Act, the EU’s content moderation law.
Such material will prominently display a label indicating that it is AI-generated, whether through metadata or deliberate watermarking.