Imagine a scenario where a generative AI responds to your prompt about Covid-19 by falsely claiming that vaccines have caused impotence in men. On the surface, this looks like traditional misinformation, making it easy to argue that AI-generated falsehoods are just another form of it.
Typically, false claims, whether online or offline, are regulated only in specific cases, such as defamation or threats to national security. The deluge of AI-generated misinformation, however, raises the question: should regulators treat it differently from human-created misinformation? The answer likely lies in AI’s persuasive power, its ability to present false information convincingly. Unlike an unreliable eyewitness account, AI-generated misinformation can appear authentic, complete with context and human-like mannerisms. For example, a study in the USA found that over 40 per cent of people believed false AI-generated content.
In a country like India, with low digital literacy and a fragile media system, people are even more likely to accept misleading AI outputs as truth, leading to harmful consequences. Therefore, it is crucial to ensure that AI models are transparent and provide accurate information.
Regulations on AI-generated content are still unfolding, and labelling, that is, attaching visible warnings to AI-generated content, has emerged as a potentially workable strategy.
MeitY advisories
In India, MeitY has issued advisories along similar lines and proposed amendments to the Information Technology Rules, 2021, focusing on deepfakes. However, poorly implemented labelling can be counterproductive: editing techniques such as filters may cause human-generated content to be mislabelled as AI-generated, and vice versa.
This problem is worse in local languages, where AI labelling tools are 25 per cent less accurate than in English.
Regulations should ensure that content labels are reliable, clear, and monitored, requiring social media and Gen-AI platforms to accurately differentiate between human-generated and AI-generated content. Platforms should quickly address mislabelled content through user-friendly reporting systems available in all languages, since Gen-AI produces content in local languages as well.
To limit the spread of harmful content, platforms can use techniques such as shadow banning for malicious AI-generated material. Further, platforms should share information about malicious AI-generated content with one another and with news agencies, following a framework similar to the U.S. Cybersecurity Information Sharing Act, to enable early detection and prevention.
Obligating transparency
While labelling is a welcome short-term regulatory measure, long-term solutions must address broader concerns about information integrity and public trust. Regulations should require Gen-AI providers to uphold truthfulness, especially given the unique nature of Gen-AI models.
Unlike social media platforms, which are treated as neutral intermediaries hosting content created by others, Gen-AI compiles and creates content from the datasets it is trained on. Since Gen-AI generates content rather than simply hosting or recommending it, it may not be considered neutral and should be held accountable for ensuring the truthfulness of the information it produces.
Similarly, given the potential harm of misleading AI-generated content, Gen-AI providers should be accountable for their outputs when they fail to ensure necessary disclosures and transparency. Providers should face liability if malicious content is produced due to negligence in the design and development of their models.
An independent expert committee should be established to monitor AI-generated content and oversee its creation, presentation, and labelling. This includes incorporating uncertainty indicators and fine-tuning models to prioritise accurate information from credible sources. The committee would ensure that generative AI platforms implement safeguards to prevent misleading content, with input from subject matter experts.
Gen-AI models are developed using data scraped from the web; if misleading AI-generated content proliferates online, it can feed back into future training data, further undermining the integrity of information and truth. The veracity of information is essential to upholding democratic values.
The writer is a technology policy researcher at CUTS International