Earlier this year, a reported data breach at DigiYatra put thousands of high-flying executives at risk, leaving their personal and facial recognition data in the hands of criminals. And while it is widely believed that AI poses no direct safety hazards, a National Institute of Standards and Technology report found that facial recognition algorithms from major tech companies exhibited significant racial and gender bias.

The average financial cost of a data breach is estimated at a minimum of ₹36 crore. In addition, enterprises must hire AI cybersecurity experts and probably upgrade their technology systems to ward off further damage. Even so, these costs are far smaller than the losses from the data theft itself. The stakes for government are higher still: bigger financial losses, plus the loss of credibility with citizens.

India’s burgeoning AI sector is well aware of the need for a robust AI safety policy. But are the policymakers equally clear on it? There are several challenges they must be cognisant of. Balancing innovation and regulation is the first. An evolving AI ecosystem like ours needs flexibility in regulation: overly restrictive policies could stifle innovation, while lax ones would leave the door open to serious risks.

The second challenge is data privacy and governance. Although Indians remain largely unaware of privacy risks, policymakers should adopt best practices from Europe. The data privacy bill needs further strengthening for better data governance, and the AI safety policy will have to align with this framework to ensure responsible data collection, use, and storage.

The third critical challenge is addressing algorithmic bias, a concern heightened by India’s diverse population. The policy should spell out how such bias is to be mitigated in areas like facial recognition, social benefits and loan approvals, ensuring fairness and inclusivity.
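To make this concrete: a bias audit typically compares outcome rates across demographic groups. The minimal Python sketch below is purely illustrative; the group labels, records and review threshold are hypothetical assumptions, not any agency’s actual pipeline.

```python
# Illustrative bias audit: compare positive-outcome rates (e.g., loan
# approvals) across demographic groups. Group labels, data and the
# review threshold are hypothetical assumptions for this sketch.

from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest group approval rate to the highest.
    Values well below 1.0 suggest the system favours some groups."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 0.0

# Hypothetical audit records: (group, approved)
data = [("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False)]

rates = approval_rates(data)
print(rates)                # {'A': 0.67, 'B': 0.33} (approx.)
print(f"disparate impact: {disparate_impact(rates):.2f}")  # 0.50 -> flag for review
```

A ratio this far below 1.0 would not prove discrimination on its own, but it is exactly the kind of measurable signal a safety policy could require operators to monitor and report.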

Shortage of skills

India faces a shortage of skilled professionals in AI safety and ethics. The government needs to invest in training programmes and capacity building to create a workforce equipped to implement and enforce AI safety policies.

One key challenge, as with most regulations in India, is the weak enforcement mechanism. Developing robust enforcement, establishing clear lines of accountability and outlining penalties for non-compliance are all crucial for AI safety regulation.

The last challenge policymakers must consider is alignment with other countries’ regulations. Global best practices and ongoing efforts towards AI governance frameworks should inform the policy’s finalisation. Collaboration with other countries can bring a more comprehensive and coordinated approach to AI safety, especially in areas like deepfakes. Governments worldwide are actively formulating policies, and India can take the lead just as it did in digital money. Policymakers should consider the following while developing the policy:

Focus on human values: Policies should ensure AI development aligns with fundamental human values like fairness, transparency, and accountability. This means establishing ethical guidelines for data collection and use, and for handling potential biases within AI algorithms.

Prioritise risk management: Consider frameworks for identifying, assessing, and mitigating potential risks associated with AI systems. Ensure developers conduct thorough safety assessments before deploying AI in high-risk domains like healthcare or banking.

Transparency and explainability: Promote the development of AI systems that are transparent in their decision-making processes. This can help stakeholders understand how AI systems arrive at conclusions and identify potential biases (a simple sketch of one such technique follows this list).

Human oversight and control: Policies should emphasise the importance of human oversight in development and deployment. Humans should be in the loop for critical decision-making processes, especially in high-risk scenarios.
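On transparency, one widely used starting point is to measure which inputs actually drive a model’s output. The sketch below assumes a scikit-learn style workflow; the synthetic dataset and the random-forest model are placeholders, an illustration of the idea rather than a prescribed implementation.

```python
# Illustrative transparency check using permutation importance:
# shuffle one input feature at a time and measure how much the
# model's accuracy drops. Dataset and model are placeholders.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

Because permutation importance treats the model as a black box, it applies to almost any classifier; heavy reliance on a proxy attribute (say, a pin code standing in for community) is exactly the kind of signal such checks can surface for human reviewers.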

The government can refer to the best practices outlined in the European Union’s Artificial Intelligence Act, South Korea’s Ethical Guidelines for AI Development and Use and, of course, Singapore’s Model AI Governance Framework to set the initial pace for the policy.

Developing robust AI safety policies is an ongoing challenge. Regulatory frameworks need to be flexible enough to adapt to the rapid evolution of AI technology, and international collaboration is crucial to ensure a level playing field and prevent a fragmented global approach to AI safety.

The writer is a Fortune-500 advisor, startup investor and co-founder of the non-profit Medici Institute