Prominent experts have asked artificial intelligence labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. Hundreds of experts, technologists, scientists and philosophers signed an open letter to that effect, published by the Future of Life Institute, an American non-profit that says it works to reduce risks associated with transformative technologies.
While some experts say such a call for caution from prominent technologists could put the indiscriminate proliferation of AI in check, others argue that the open letter distracts from the risks posed by contemporary AI technologies.
The open letter has many notable signatories, including Tesla and Twitter CEO Elon Musk; Apple co-founder Steve Wozniak; Swedish-American physicist and popular-science writer Max Tegmark; Israeli historian Yuval Noah Harari; and many more. This comes even as reports suggest that big tech players have recently laid off employees from their ‘responsible AI’ teams.
The letter signed by these tech doyens states that AI labs are locked in an out-of-control race to develop ever more powerful digital minds, without the planning and management needed to understand and control the consequences of such technologies.
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones?” the letter said. Such questions, it argued, should not be delegated to unelected tech leaders; powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable.
Indian experts weighed in on the matter, indicating that the concerns raised in the open letter are valid. Internet researcher Srinivas Kodali said: “Technologists such as Elon Musk hold significant sway and influence in the tech landscape. Concerns voiced by him will be echoed by the public, who will appraise AI technologies more critically.” Kodali added that AI experts have voiced these concerns for years, even as technologists made great strides in artificial intelligence.
While the critique in the open letter echoes a litany of concerns expressed about AI in recent months, Internet Freedom Foundation founder Prateek Waghre warned that the letter could veer into hyperbole instead of addressing the more proximate harms associated with artificial intelligence.
“The tech hype train has a tendency to oversell criticism without addressing the proximate harm associated with it. The letter has not defined which AI technologies are more powerful than GPT-4. At the same time, it has failed to highlight the existing risks of AI, such as the iterations of generative AI and large language models that are already on the market. There is also an issue in the way we are indiscriminately rolling out existing AI technologies, and that has not been addressed in the open letter,” Waghre said.
These harms are already visible. Researchers have observed that generative AI makes disinformation easier and cheaper to produce, allowing it to spread at an unprecedented scale. Others have reported that these chatbots produce believable answers that are completely wrong. “The only way to know that these answers are wrong is by knowing the right answers yourself,” a researcher said. More sinister reports have also emerged: Bing’s chatbot, for example, has not only provided inaccurate information but also indulged in abusive behaviour, discussing personal matters and expressing its desire to be human.
Emily M. Bender, a Professor in the Department of Linguistics at the University of Washington and co-author of the first paper the letter cites, said in a Twitter thread that the letter itself contributes to AI hype and misuses her research. According to Bender, the letter is essentially a misdirection: it points at the vague future harms of an “all-powerful AI” instead of the more immediate concerns of generative AI that her paper addressed.
“We wrote a whole paper in late 2020 (Stochastic Parrots, published in 2021) pointing out that this head-long rush to ever larger language models without considering risks was a bad thing. But the risks and harms have never been about ‘too powerful AI’,” Bender tweeted. “Instead: They’re about the concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).”
Arvind Narayanan, an Associate Professor of Computer Science at Princeton, echoed this criticism, saying the open letter was full of AI hype that “makes it harder to tackle real, occurring AI harms.”
“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the open letter asks.
Narayanan dismissed these questions as “nonsense” and “ridiculous.” In his view, such far-out questions about whether computers will replace humans and take over civilisation reflect a long-termist mindset that distracts from current issues, which the letter should have focused on instead.