Artificial Intelligence (AI) has come a long way and is today making significant strides in specialised domains that demand strategic planning, mastery of extensive knowledge, and complex reasoning.

Some experts claim AI will soon excel at most complex thinking and rational decision-making tasks. Others contend that AI is beginning to exhibit human-like behaviour and emotions. Yet despite AI’s vast potential, several obstacles threaten its widespread adoption: a lack of trust, confusion about the respective capabilities of humans and AI, and insufficient collaboration between institutions and businesses to govern AI effectively.

The emerging risks

Public wariness towards AI has been building over time, exacerbated by events like the Cambridge Analytica scandal and the deepfakes circulated before elections. The risks of AI have become more apparent in recent years, including:

Discrimination and bias: AI systems rely on existing data, algorithms, and human decisions to expand their knowledge base. This can amplify existing human prejudices and harm marginalised or vulnerable groups.

Privacy and data protection: AI systems often collect and process large amounts of personal data without the data subjects’ consent or awareness, leading to potential data breaches, identity theft, and surveillance.

Safety and security: AI systems can malfunction, be hacked, or be misused, causing physical or psychological harm as well as environmental damage.

Transparency and explainability: The complexity of AI systems often makes it difficult to understand how they work, why certain decisions are made, or who is responsible for them. This can impede accountability; a simple way to probe an opaque model is sketched below.
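One family of remedies is model explainability. The following is a minimal sketch only, using scikit-learn with a synthetic dataset standing in for real data (none of this is mandated by any regulation): permutation importance asks which inputs a trained model actually leans on by shuffling each one and measuring the damage.

```python
# A minimal explainability sketch: permutation importance.
# The model and data here are illustrative assumptions; any fitted
# model with a .predict() method could be probed the same way.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance ~ {score:.3f}")
```

Summaries like this do not make a model fully transparent, but they give auditors and regulators a concrete starting point for the accountability questions raised above.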

A proactive approach to AI governance is essential to mitigate these risks. India recently approved the Digital Personal Data Protection (DPDP) Act, legislation that aspires to set and evolve a framework of best practices for businesses. To be compliant, companies must clearly define what counts as personal information, adopt a privacy-by-design approach, raise stakeholders’ awareness, and build users’ confidence.
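What privacy by design can mean in code is easiest to see with a toy example. The sketch below is purely illustrative; the ConsentRecord structure and field names are invented, not taken from the DPDP Act. It records the purpose for which consent was given and stores only the fields that purpose needs.

```python
# An illustrative privacy-by-design sketch: purpose limitation,
# explicit consent, and data minimisation. All names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str                 # the single, stated purpose of processing
    granted_at: datetime
    withdrawn: bool = False

def collect(record: dict, allowed_fields: set, consent: ConsentRecord) -> dict:
    """Store only consented, necessary fields (data minimisation)."""
    if consent.withdrawn:
        raise PermissionError("Consent withdrawn; processing must stop.")
    return {k: v for k, v in record.items() if k in allowed_fields}

consent = ConsentRecord("user-42", "order fulfilment",
                        granted_at=datetime.now(timezone.utc))
profile = {"name": "A. Kumar", "email": "a@example.com", "id_number": "XXXX"}
print(collect(profile, {"name", "email"}, consent))  # id_number is never stored
```

The point is structural: consent and purpose travel with the data, so minimisation is enforced at the moment of collection rather than remembered as an afterthought.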

Collaboration between regulatory bodies, institutions, and the private sector is crucial, and should focus on building public trust, establishing best practices for AI development and use, and integrating technical and legal requirements, particularly for privacy by design.

Artificial and human intelligence are distinct yet complementary. Governments and businesses need clear strategies to help citizens understand this distinction, alleviate concerns about job loss, and communicate education and training opportunities transparently.

Humans contribute unique strengths such as creativity, intuition, and ethical judgment, all valuable for decision-making. While humans face cognitive limits, AI can process vast amounts of data. Combining the two enhances strategic decision-making.

To maximise AI’s benefits, businesses need robust AI governance frameworks that go beyond compliance and address the ethical, legal, and social implications of AI solutions. Such frameworks should cover:

Education and training: Employees should be educated on AI fundamentals and provided continuous updates as the technology evolves.

Upskilling: Organisations should assess current skill gaps and upskill their teams, focusing on areas such as privacy in AI.

Accountability: Ensure transparency through documentation, metadata, annotations, and traceability to make algorithms more explainable (see the sketch after this list).

Stakeholder involvement: External stakeholders, including customers, suppliers, regulators, and civil society groups, should collaborate on designing and examining AI solutions.
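One concrete form the accountability item above can take is a model card persisted alongside every deployed model. Everything in this sketch is invented for illustration (the model, field names, and figures are assumptions, not a standard), but the shape it records (data lineage, ownership, intended use, known limitations) is the point:

```python
# An illustrative model-card sketch supporting accountability and
# traceability. All field names and values here are hypothetical.
import json
from datetime import datetime, timezone

model_card = {
    "model_name": "credit_risk_scorer",      # hypothetical model
    "version": "1.3.0",
    "trained_on": "applications_2023Q4",     # dataset lineage for traceability
    "owner": "risk-analytics-team",          # who is answerable for decisions
    "intended_use": "pre-screening only; final decisions reviewed by humans",
    "known_limitations": ["under-represents first-time applicants"],
    "evaluation": {"auc": 0.81, "bias_audit_passed": True},  # illustrative figures
    "created_at": datetime.now(timezone.utc).isoformat(),
}

# Persisting the record next to the model artefact gives auditors a
# durable trail of what was deployed, by whom, and with what caveats.
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```

Kept under version control with the model itself, such a record turns vague promises of transparency into something external stakeholders can actually examine.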

As AI evolves, new legislation will emerge. By championing awareness, advocating for strong data protection, and fostering a culture of privacy, we can create a safer digital landscape.

The writer is Chief Privacy and AI Governance Officer, Wipro Ltd