IBM’s AI governance platform, watsonx.governance, puts guardrails around generative AI (GenAI) and traditional AI, including ML models, said Siddhesh Naik, Country Leader, Data, AI & Automation Software, Technology Sales, IBM India and South Asia. The platform helps businesses understand AI models and removes the mystery around the data going in and the answers coming out.

The platform also covers three broad areas from an AI governance perspective: risk, compliance and lifecycle governance, he said. “The approach we take with watsonx.governance is not only to govern our stack but also pieces running on some hyperscalers, or even AI models customers themselves have built. Bringing in a trusted AI governance layer on top of this ensures the right guardrails are in place from an AI for business or responsible AI perspective.”

Naik observed that over the past year, there has been a significant surge in interest in GenAI. “The key questions are how enterprises can effectively implement this technology to achieve value at scale and how it can translate into AI for business conversations,” he added.

However, hallucination, a phenomenon where a large language model (LLM), such as a GenAI chatbot, or a computer vision tool perceives patterns or objects that are nonexistent or imperceptible to humans and produces incorrect outputs, is a cause for concern, as it can cost enterprises significantly.
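The hallucination risk described above is often screened for with groundedness checks: does the model’s answer actually appear in the source material it was given? The sketch below is a toy illustration of that idea using word overlap; the function names and the 0.5 threshold are illustrative assumptions, not any vendor’s method, and production guardrails use far more sophisticated detectors.

```python
def groundedness_score(answer: str, sources: list[str]) -> float:
    """Fraction of the answer's content words found in at least one source."""
    source_words = set()
    for doc in sources:
        source_words.update(doc.lower().split())
    # Ignore very short words ("a", "the", "of") when scoring.
    answer_words = [w for w in answer.lower().split() if len(w) > 3]
    if not answer_words:
        return 1.0
    supported = sum(1 for w in answer_words if w in source_words)
    return supported / len(answer_words)

def flag_hallucination(answer: str, sources: list[str], threshold: float = 0.5) -> bool:
    """Flag an answer as a possible hallucination if too little of it is grounded."""
    return groundedness_score(answer, sources) < threshold
```

A chatbot offer invented out of thin air, like the example Naik gives, would share almost no vocabulary with the company’s actual policy documents and be flagged; an answer paraphrasing those documents would pass.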

“If a customer-facing chatbot gives out an offer, the company is liable to honor it regardless of whether the foundation model hallucinated. There are financial, legal, and reputational implications. You have multiple elements of risk that come into play concerning how it can impact you,” commented Naik.

Other concerns include preventing bias, hate speech, profanity and the exposure of personal information, as well as drift, where AI models start out with a specific intent but, as datasets change, end up doing something else. “With the DPDP Act around the corner and very stringent penalties for exposing personal information, ensuring the right guardrails for AI models is crucial,” noted the IBM leader.
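The kind of personal-information guardrail mentioned above can be pictured as an output filter that scrubs obvious identifiers before a model’s response reaches the user. The sketch below redacts email addresses and Indian-style 10-digit mobile numbers with two regular expressions; a real deployment would use a dedicated PII-detection service, and the patterns here are simplifying assumptions.

```python
import re

# Illustrative patterns only: a basic email shape and a 10-digit Indian
# mobile number starting with 6-9. Real PII detection covers far more.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b[6-9]\d{9}\b")

def redact_pii(text: str) -> str:
    """Replace detected personal identifiers with redaction markers."""
    text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
    text = PHONE_RE.sub("[PHONE REDACTED]", text)
    return text
```

Running every chatbot response through such a filter is one concrete form of the “right guardrails” Naik refers to in the context of the DPDP Act.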

Addressing the reluctance around widespread adoption of AI across enterprises, he said that while AI may not top the charts from an investment perspective, AI-embedded workflows and AI-embedded automation are core to the enhancements clients are trying to drive.

Naik stated that traditional AI, including risk models, has been around for some time. “The BFSI segment has been ahead of the curve, adopting these for almost a decade. With GenAI coming in, the extensibility of AI across the organisation’s breadth is changing; it can be applied to business workflows, IT automation, application modernisation, code generation, asset management, IT security threat management, and customer-facing elements like customer care,” he said, adding that GenAI extends to every aspect of the business and to functions like HR, legal, marketing, and trade finance automation.

Challenges

Businesses are also exploring these models to enhance productivity, operational efficiency and customer experience, rather than adopting them fully at this point. From a GenAI perspective, one thing holding clients back from moving to the next level is that GenAI relies on foundation models, or large language models (LLMs), built on billions of parameters from unknown sources. “It’s built on everything available in the open domain. Trusting this foundation model, putting in the right governance guardrails, and preventing it from misbehaving are some concerns clients face.”

Another aspect holding them back is return on investment (ROI). “The market’s approach, including that of hyperscalers, has been a hammer-and-nail approach because of the large models out there, which range from 200 billion to 400 billion parameters. When running a simple AI for business use case like talking to my data, customer care or agent assist, I don’t need models requiring a massive GPU infrastructure, which means significant cost. Getting the right ROI with the right model for the right use case is a key factor.”

The other aspect he addressed is AI-ready data and having the data foundation to ensure correct outcomes. “In AI, junk in means junk out. Without quality data serving as a foundation for AI, you can’t get off the ground. Also, GenAI by itself means nothing unless I can integrate it into my workflow and get it to benefit my business,” he concluded.
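The “junk in means junk out” point can be made concrete with a simple data-screening step that drops incomplete records before they feed an AI pipeline. The sketch below is a minimal illustration; the field names (`customer_id`, `text`) are hypothetical examples, not from any IBM product.

```python
def validate_record(record: dict, required: tuple = ("customer_id", "text")) -> bool:
    """A record is usable only if every required field is present and non-empty."""
    return all(record.get(field) not in (None, "") for field in required)

def clean_dataset(records: list[dict]) -> list[dict]:
    """Keep only records that pass validation; the rest are 'junk in'."""
    return [r for r in records if validate_record(r)]
```

Even a screen this basic captures the principle: the data foundation is laid before the model ever sees an input.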