At GyanDhan, a non-banking financial company (NBFC) that offers education loans, the content writers found that their output trebled when they used the artificial intelligence (AI) tool ChatGPT. Co-founder and CEO Ankit Mehra is a devout fan of such AI tools — known as large language models (LLMs) — that generate text in response to a question or prompt.
However, what stops him from deploying them more widely, he says, are concerns ranging from prohibitive pricing to the tools’ propensity for generating erroneous content, and grey areas such as potential ethical and privacy violations in the use of proprietary data.
LLMs are built on machine learning, a branch of AI that allows computers to learn from training data. Apart from ChatGPT, from the US-based AI research company OpenAI, commercially available or enterprise LLMs include Gemini from Google and Copilot from Microsoft.
“Bias is a major concern; the datasets used to train these AI models are limited and can throw up errors,” Mehra says. He cites the recent instance of Google’s Gemini describing Indian Prime Minister Narendra Modi’s policies as fascist, while refraining from such definitive responses for other leaders.
“At this point, we see using LLMs as basic workplace hygiene,” Mehra says.
Around 15 per cent of GyanDhan’s workforce — which includes developers, marketing personnel, and content writers — currently uses an enterprise LLM.
Treading with caution
Privacy is a major concern when using commercial AI tools. “As an NBFC, we cannot analyse proprietary data using enterprise LLMs. We are developing our own LLMs using open-source models,” Mehra says.
On the face of it, fintechs — financial service companies powered by digital technology — may seem the ideal candidates to adopt generative AI the fastest.
However, in reality it’s a lot more complicated. A host of companies surveyed for this report — including a payments bank and a telecommunications company — reported limited deployment of generative AI. “The hype for AI tools is exaggerated,” an executive says. “There is no fire to the smoke.”
In a recent study by IBM, 74 per cent of the Indian IT professionals surveyed said their companies were exploring some form of AI deployment, but most of the projects were stuck in the pilot stage.
More importantly, even as the biggest AI players begin work on enterprise tools using their generative AI technology, the tech firms surveyed invested mainly in R&D, training and development of proprietary LLMs. Bias, lack of expertise, ethical concerns, and lack of data provenance continue to be impediments to enterprise LLM adoption, according to the report.
IT vs non-IT use
The IBM report is largely in tune with industry expert opinion, which places India at the very beginning of the purported hockey stick growth curve promised by AI.
Deployment of AI for enterprises is still at a nascent stage, says Sachin Arora, Partner and Head, Lighthouse–Data, AI and Emerging Technologies, KPMG India.
“Indian enterprises are using AI to make existing workflows more efficient through automation or chatbots; the second-level paradigm shift for IT and CRM [customer relationship management] firms is still a long way ahead, when entire workflows will be changed to accommodate the deployment of AI-led enterprise applications across the globe. We will see some of these changes in the next two to three years; but, for the moment, larger IT firms are dabbling with small ‘proof of concepts’ and learning, and training their workforce on AI, even as Silicon Valley works on building truly game-changing products for enterprises,” he says.
Arora believes India’s engagement with AI will mirror previous tech cycles, with the IT sector being its biggest user while non-tech enterprises will deploy it in a limited way.
Sanchit Vir Gogia, Chief Analyst at Greyhound Research, says that 2023 has been a year of pilot projects in India.
“But as with any technology, it will take 2-3 years for AI to become mainstream and find adoption across different teams in an organisation. This is largely because the companies offering AI tools are also maturing... moreover, firms are trying to understand the implications of the ever-evolving Indian data protection laws on AI deployments. AI needs a large amount of data,” he explains.
Value proposition
Companies such as Microsoft are trying to push AI into boardroom conversations. “I’m yet to come across a board or a CEO or a top leadership team that is not curious and excited about the potential of AI for their customers, their own business and employees. We did a study which found that for every dollar spent on AI, customers are getting 3.8x dollars back,” Puneet Chandok, President, Microsoft India and South Asia, had told businessline in a recent interview.
Users will wait to see the value proposition that AI can bring before they are ready to adopt it, says Abhigyan Modi, Senior Vice President, Document Cloud, Adobe. While the technology is perceived as cool on one hand, on the other is the question of how productive it can be, he says. “If our end users can see value in the technology and it gives them real time savings or value generation, I don’t see any barriers to adoption,” he says.