It’s barely two years since ChatGPT appeared on the scene and dazzled the world with glimpses of never-before-seen capabilities of artificial intelligence (AI). Yet, already, AI applications running on one’s phone or laptop no longer seem a novelty. However, much of this AI workload runs on the cloud, which offers vast storage but poses challenges such as slower connectivity, higher energy consumption and network congestion. As a result, the user experience in cloud-based AI apps remains subpar, spurring manufacturers of smartphones, computers and even automobiles to support the development of local AI inferencing. Chip majors like Qualcomm, too, have thrown their weight behind on-device AI.
Businessline caught up with Dr Vinesh Sukumar, senior director, product management, Qualcomm Technologies, at the company’s recent Snapdragon Summit in Hawaii. With over 20 years of industry experience, Sukumar leads the generative AI and machine learning (GenAI/ML) product management team across multiple business units. Edited excerpts from the interview:
What does running AI apps on devices mean for a market like India?
When it comes to AI deployment, we at Qualcomm have always thought about which user pain point we can solve, and what value we can add to the device to justify a customer refresh or upgrade. Improvements in photography and videography quality without much user intervention remain important use cases for OEM (original equipment manufacturer) partners. In the PC (personal computer) market, an instantaneous productivity increase is a key request. In India, there is a lot of innovation in mobile devices, but other form factors, such as automotive, are still challenging. For instance, ADAS (advanced driver assistance systems) is still difficult to get right, but the use case of car infotainment is in demand. When you introduce any of these new features, it’s not going to make a splash on the first try. It’s about more people getting used to it and making it better, depending on the maturity of the market.
How do you balance pricing with user experience, especially in the price-conscious Indian market?
There will always be a grading of experiences at different price points. You have to decide how much [AI processing] will be on-device, how much device-to-cloud, or which features you don’t enable at all. For instance, in video streaming, if something is at 30 frames per second, you may drop it to 15 or 24 frames, which the compute power can handle. Similarly, in on-device AI-based voice translation, we may go with fewer languages at lower price points... we continue to work with OEMs to enable more features even at lower price points.
Is India better equipped for AI workloads on-device rather than on cloud?
On-device AI is slowly growing in India, starting with smartphones, and will catch up on computers, wearables, etc. Given the population, utilisation of devices will be higher here than in other markets. Also, users in India interact with their phones a lot more for daily tasks like digital payments. That’s where data privacy and security concerns also come in, with a preference to keep data within the device and not send it to the cloud. The cloud computing infrastructure is still coming up in India, and the latency (the response time of going back and forth to the cloud for every task) could be higher. So, the preference for on-device AI inferencing suits India well. Moreover, on-device AI is less energy-intensive compared to the cloud.
Can one get these experiences on mid-premium or budget smartphones too?
The entire Generative AI boom started with a 70-billion parameter model. But with time, we have moved to smaller AI models. If these models can get down to a billion or fewer parameters while maintaining accuracy, you can deploy them on phones with less memory, storage or compute power. Qualcomm is pushing for smaller AI models along with our partners like Meta, Mistral and others.
Your competitors are eyeing AI software for data centres. What makes Qualcomm optimistic about on-device AI?
When Generative AI first came in, there was only talk of ChatGPT and DALL-E, and only through the lens of text or images. Qualcomm has shown performance improvements with on-device AI, whether it’s image generation on devices or smaller models such as Google Gemini Nano. We are aiming for more personalisation in the Generative AI experience — mapping the user’s style into the AI, which is where on-device helps. Next is going ‘app-less’. Can I use my phone’s voice command to buy a ticket for a trip, with the AI knowing my preference of airline, seats and other things? It is a difficult problem to solve, but we are marching towards it.
What about data privacy and sovereignty regulations for on-device AI?
Many governments are putting in standards to ensure AI engines are responsible and free of bias, and that generated synthetic content meets those standards. These regulations are still being defined. We then have to test them in the real world. Qualcomm is working closely with governments on this, and I expect more developments by 2025 or 2026.
(The writer’s visit to Snapdragon Summit, Hawaii, was sponsored by Qualcomm)