The European Union Artificial Intelligence (AI) Act, approved by the EU Parliament in March and expected to come into force from next month, is a groundbreaking piece of legislation that could become a benchmark for other countries, including India, on the regulation and use of AI. The Act seeks to balance innovation with safety and ethical considerations. It categorises AI applications into different risk levels and lays down guardrails accordingly. This stratified approach allows for nuanced regulation that can foster innovation while safeguarding the public interest.

The EU Parliament website points out that the law limits the use of biometric identification systems to “narrowly defined situations”, and bans social scoring, predictive policing (based on profiling a person), emotion recognition in the workplace and schools, untargeted scraping of facial images to create facial recognition databases, and AI that manipulates human behaviour by exploiting vulnerabilities. High-risk AI systems, such as those used in critical infrastructure, education, vocational training, employment, healthcare, banking and law enforcement, will be subject to requirements of transparency, accountability (including the maintenance of use logs) and human oversight. Citizens will have the right to submit complaints about AI systems. The Act mandates transparency for AI systems interacting with humans, ensuring that users are aware when they are engaging with an AI. This requirement promotes trust and informed consent, crucial components in the responsible deployment of AI technologies.

Going forward, India too should develop its law and rules by matching them with levels of risk. The recently concluded Global Partnership on Artificial Intelligence (GPAI) summit in Delhi marked a significant milestone in shaping the global discourse in this respect. The summit underscored the need for international collaboration in balancing innovation and curbing risks.

Meanwhile, the Act is not without its critics. One significant concern is the compliance burden it places on companies, especially start-ups and small-to-medium enterprises (SMEs). The detailed documentation and rigorous testing required for high-risk AI systems could deter smaller entities from entering the market, thereby limiting diversity, competitiveness and technological advancement. The other big worry is the exemptions provided in the name of national security, under which governments have carte blanche to bypass crucial AI regulations.

Amid the global race for AI dominance, it is essential to establish international agreements and standards for AI development and use. The incoming government at the Centre must urgently establish guardrails for AI and avoid the delays seen in the formulation of the personal data protection rules. Proactive legislation will foster responsible innovation, protect citizens’ rights, and position India as a leader in the ethical use of AI.