When most of us think about Artificial Intelligence (AI), our minds go straight to robots and sci-fi thrillers in which machines take over the world. But the fact is that AI already exists among us — in our smartphones, fitness trackers, refrigerators and more. It can also drive cars, trade stocks and shares, and even cook simply by watching YouTube videos. And this is just the beginning.
Machines are now programmed to ‘think’ like human beings and mimic the way humans act. The defining characteristic of AI, which sets it apart from conventional software programmes, is its ability to learn and reason on its own and then, where required, take actions that have the best chance of achieving a specific goal.
For AI systems to work, huge quantities of data are required. Though AI as a field of study has been around for close to 60 years, the shortage of data for much of that period, combined with limits in computational power, constrained AI’s growth until recently.
Today, with the explosion of the World Wide Web and social media, and the relative ease and affordability of Internet connection, the amount of data being generated and information being digitised have witnessed a quantum leap. This has set the stage for AI to become a disruptive force across the global economy.
According to a PwC report, AI could contribute up to $15.7 trillion to global GDP in 2030, with $9.1 trillion coming from consumption-side effects and $6.6 trillion from increased productivity. For context, that would add about 14 per cent to global GDP, or more than China and India’s combined output.
Ethical dilemmas
Everything has its pros and cons. While AI is recognised for the wealth of promising opportunities it could open up, it also brings with it a number of ethical dilemmas and threats. For example, critics have pointed out that, while our smart devices are designed to make our lives easier and healthier, they are also capable of working towards micro and macro goals that benefit their makers and designers rather than us, the users — even though we are the ones who own the devices in question. Another issue being avidly discussed around the world is how these technologies are putting our jobs in jeopardy.
Thanks to advances in both AI and its sibling field of robotics, automation is now sweeping across more industries than ever, putting many manual workers out of work.
The most unexpected shift in AI’s impact on employment, though, concerns white-collar jobs that do not require manual labour. AI has already made inroads into many of the information-based tasks that are traditionally the domain of high-cognition professionals such as doctors, lawyers and even high-level executives.
Other ethical problems worth noting here include potential bias or discrimination in decision-making processes involving AI (can you realistically appeal an AI-made rejection of your mortgage application?), and liability in technology-induced accidents (who is responsible when an autonomous vehicle causes a crash?). Or, even though this might sound far-fetched to many of us, could AI have civic rights?
Choices and privacy
There are also a couple of pertinent points from the perspective of consumer law and policy. The synergy between AI and Big Data enhances the power of businesses and their dominance over consumers. AI systems can use Big Data to anticipate consumer behaviour and to try to trigger desired reactions. As a result, consumers can be outwitted, manipulated and induced into suboptimal purchases or other unwitting choices.
Our privacy is also affected in multiple ways: consumer data are continuously collected through online and offline behaviour tracking, stored and merged with other data sources, and processed to elicit further information about consumers through profiling. The resulting indications about consumer attitudes, weaknesses and propensities are then deployed in decisions affecting individual consumers, or in attempts to influence their behaviour.
Last but not least, many of us are aware of the question of whether AI poses an existential risk to humanity — the scenario of superintelligent machines taking over the world. As the renowned physicist Stephen Hawking wrote in May 2014, “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” And that brings us to the final question: Are we prepared?
The writer is Deputy Executive Director, CUTS International.