Hottest graphics processing unit maker’s India playbook

Bengaluru: Behind ChatGPT, the blockbuster generative artificial intelligence (AI) chatbot capable of creating human-like text based on context and past conversations, are about 20,000 graphics processing units (GPUs). Behind these GPUs is a 30-year-old American company, Nvidia.
You would know of central processing units (CPUs), the brain of computing devices like mobiles, desktops, laptops and servers. GPUs are built differently: where a CPU works through tasks largely one after another, a GPU is designed to handle thousands of tasks simultaneously, a capability known as parallel computing. That makes GPUs better suited to high-performance computing workloads such as running supercomputers and training large language models, the models that can learn from and comprehend text. Deploying a large number of high-performance GPUs shortens the training time for models such as the one behind ChatGPT.
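To make the parallel-computing difference concrete, here is a minimal, illustrative sketch, not code from Nvidia or any company in this story. It assumes Python with the PyTorch library and a machine with an Nvidia GPU, and it simply times one large matrix multiplication, the basic operation that AI training repeats billions of times, first on the CPU and then on the GPU.

```python
# Illustrative only: time one large matrix multiplication on CPU vs GPU.
# Assumes PyTorch is installed and an Nvidia GPU (CUDA) is available.
import time
import torch

x = torch.randn(8192, 8192)  # two large matrices of random numbers
y = torch.randn(8192, 8192)

# CPU: a handful of cores work through the arithmetic largely in sequence.
start = time.time()
_ = x @ y
print(f"CPU time: {time.time() - start:.2f}s")

# GPU: thousands of cores perform the same arithmetic in parallel.
if torch.cuda.is_available():
    x_gpu, y_gpu = x.cuda(), y.cuda()
    torch.cuda.synchronize()   # wait for the copy to finish before timing
    start = time.time()
    _ = x_gpu @ y_gpu
    torch.cuda.synchronize()   # wait for the multiplication to finish
    print(f"GPU time: {time.time() - start:.2f}s")
```

On most machines with a recent Nvidia card, the second number comes out dramatically smaller; scaled up to the billions of such operations in a training run, that is the speed-up the rest of this story turns on.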
The explosive growth of AI has now made Nvidia, a company known mostly for selling its processors to the gaming industry until as recently as five years ago, one of the hottest technology assets in the world. As of September, it is the fifth largest tech company measured by market capitalization, at $1.2 trillion, after Apple, Microsoft, Alphabet and Amazon. For the second quarter ended 30 July, Nvidia generated revenue of $13.51 billion, up 101% from a year ago.
Jensen Huang, Nvidia’s co-founder, president and chief executive officer, was in India earlier this month. What was he up to? Well, he has been visiting India since 2004, ever since the company started its India operations. But this trip was different. He signed deals with two of India’s largest conglomerates, agreements that could have far-reaching implications for Indian businesses. The deals, with Reliance Industries Ltd (RIL) and the Tata Group, will help the two conglomerates create AI infrastructure. The two groups, with their enormous resources, could have attempted to build the infrastructure themselves, buying GPUs as needed. But it isn’t that simple. Nvidia’s chips (the company designs them but doesn’t run its own fabs) are in massive demand across the world, and analysts expect its processors to remain in short supply all through 2024. For every company, accessing these chips to power its AI operations is a challenge. The deals secure access to computing power.
Nonetheless, it is a two-way relationship. While companies in India need Nvidia’s GPUs, the company also needs India. There are many reasons why. But first, let’s look at how Indian businesses are currently using the company’s processors.

Chip in a garden

Huang, who always makes public appearances in a black leather biker jacket, has long spoken about the versatility of GPUs beyond gaming. Today, the company has leveraged the GPU architecture to create platforms for scientific computing, autonomous vehicles, robotics, the metaverse and 3D internet applications, among others. The GPUs feed industries that are equally varied, from airports to food.
You may have flown out of, or landed at, the swanky new Terminal 2 of Bengaluru’s Kempegowda International Airport. Because of its green spaces, indoor gardens and waterfalls, it is also referred to as the ‘terminal in a garden’. This garden uses some of the latest that technology has to offer.
More than 500 live camera feeds across the terminal use AI, computer vision and the internet of things to detect unattended luggage, manage passenger queues and identify congestion points. When delays or crowding are anticipated, ground staff can redirect passengers to less crowded areas. The system can also detect speed violations by vehicles outside the terminal.
This AI platform was deployed by Gurugram-based Industry.AI, which uses Nvidia’s software development kit (SDK) and GPUs to train its AI models. An SDK is a set of software tools that helps developers build applications for a specific platform.
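Industry.AI has not published its code, so what follows is only an illustrative sketch of the kind of computer-vision step such a system performs on a single camera frame, not its actual pipeline. It assumes Python with the PyTorch and torchvision libraries, uses a generic pre-trained object detector as a stand-in for a model trained on airport footage, and the file name and confidence threshold are invented for the example.

```python
# Illustrative only: flag objects in one camera frame with a generic,
# pre-trained detector. A real deployment would use a model fine-tuned on
# airport footage, running on GPUs over hundreds of live video streams.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = Image.open("camera_frame.jpg").convert("RGB")  # hypothetical still from a feed
tensor = transforms.ToTensor()(frame)

with torch.no_grad():
    detections = model([tensor])[0]

# Each detection carries a bounding box, a class index and a confidence score;
# downstream logic would decide, say, whether a bag has sat unattended too long.
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.8:
        print(int(label), [round(v, 1) for v in box.tolist()], round(float(score), 2))
```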
Another company using Nvidia’s GPUs is Kolkata-registered Blu Cocoon Digital, which works on digitizing the food ecosystem, for instance, the autonomous management of a food production line or better crop management. AI can help food producers with sorting and inventory management, detect pests in granaries, and even forecast the shelf life of food products.
Earlier, Blu Cocoon used CPUs to train and run its AI models, but it would take the company almost two months to train a single model. Training an AI model involves millions of calculations and, as mentioned earlier, the parallel computing feature of GPUs lets many of those calculations run simultaneously, which speeds up the entire training process. The company began using Nvidia GPUs and the Nvidia Metropolis framework for computer vision (a set of developer and AI tools). Besides the time saved, Blu Cocoon is now able to test its AI models on larger and growing datasets. Its trained computer vision models, running in a data centre, have predicted disease about to hit a wheat farm in India from pictures of plants uploaded by farmers.
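The switch Blu Cocoon describes is often, in practice, a small change in code with a large effect on training time. The sketch below is hypothetical, not Blu Cocoon’s model or data: it assumes Python with PyTorch, uses a deliberately tiny stand-in classifier, and shows the one step that matters here, moving the model and each batch of plant images onto an Nvidia GPU.

```python
# Hypothetical sketch: the same training loop, pointed at a GPU when available.
import torch
from torch import nn, optim

device = "cuda" if torch.cuda.is_available() else "cpu"

# Tiny stand-in classifier over 224x224 RGB images and 5 made-up disease classes;
# a real crop-disease model would be a deep convolutional or transformer network.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 5)).to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_epoch(loader):
    """One pass over batches of (images, labels) from a PyTorch DataLoader."""
    for images, labels in loader:
        # The key line: put each batch on the same device as the model.
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The rest of the loop is unchanged; the parallel hardware does the heavy lifting, which is how a training job that took months on CPUs can shrink dramatically.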

Get the superchip

How will RIL and the Tata Group use Nvidia’s chips?
First, Nvidia will partner with the Tata Group to build an AI supercomputer, powered by the Nvidia ‘GH200 Grace Hopper Superchip’. According to Nvidia, this processor, announced in August, was “designed from the ground up for giant-scale AI and high-performance computing applications”. The deal may help the Tatas regain their supercomputing glory. One may recall Eka, the supercomputer the Tata Group built in 2007, which was then India’s fastest: it ranked fourth among the world’s top supercomputers that year and was even used by the Indian Space Research Organization in the launch of India’s moon mission, Chandrayaan.
Second, Nvidia said the deal will “catalyze the AI-led transformation across Tata Group companies ranging from manufacturing to consumer businesses”. Tata Communications and Nvidia will develop an AI cloud, while India’s largest IT services company, Tata Consultancy Services (TCS), will utilize Nvidia’s capabilities to build generative AI applications. Why is this important? TCS has to compete with aggressive IT services firms, and at the moment, global consulting firm Accenture is racing ahead in the market for AI services. Accenture plans to invest $3 billion in AI over the next three years. “We think this is another Cloud First moment where we were out early to invest at scale,” Accenture chair and CEO Julie Sweet said in a post-earnings call on 22 June.
TCS, which partners with Microsoft on data and AI, launched a ‘Generative AI Enterprise Adoption’ service in July to jointly ideate with clients on AI-led solutions and train AI models, among other things. The Nvidia tie-up will sweeten the deal further.
According to research firm Gartner, the market for AI software may touch $134.8 billion by 2025, a large part of which will be chatbot technology that uses AI and natural language processing to respond to user queries.
TCS will also upskill its 600,000-strong workforce leveraging the partnership. Jayanth Kolla, founder and partner at Convergence Catalyst, a consultancy firm, believes that the AI re-skilling of TCS employees will help develop front office applications. In effect, this would help put the vast Indian IT workforce at the forefront of the upcoming AI applications development era.
But this is not the first time the Tatas have announced a partnership with Nvidia. In February 2022, for instance, Tata Motors subsidiary Jaguar Land Rover had announced a multi-year strategic partnership with the GPU company to develop automated driving systems and AI-enabled services for customers.
Nvidia also struck a deal with RIL to build AI infrastructure that is “over an order of magnitude more powerful than the fastest supercomputer in India today”. This infrastructure will become the foundation for Reliance Jio Infocomm, RIL’s telecom arm, to provide different services. “Reliance will create AI applications and services for their 450 million Jio customers and provide energy-efficient AI infrastructure to scientists, developers and startups across India,” Nvidia stated in a release.
We don’t yet know what the nature of these services will be, but the same release from Nvidia hinted at three possibilities.
“AI can help rural farmers interact via cell phones in their local language to get weather information and crop prices. It can help provide, at massive scale, expert diagnosis of medical symptoms and imaging scans where doctors may not be immediately available. AI can better predict cyclonic storms using decades of atmospheric data, enabling those at risk to evacuate and find shelter,” the company stated on 8 September.

A $10 mn ChatGPT?

Recently, Sam Altman, the chief executive of OpenAI, the AI research company behind ChatGPT, visited India. During an interaction, Rajan Anandan, the managing director at PeakXV Partners (formerly Sequoia), a venture capital firm, asked Altman whether an Indian startup could build a ChatGPT-like model with just about $10 million. Altman dismissed the idea as “completely hopeless”.
Huang, though, may know better. After all, his GPUs power ChatGPT. In an interview with CNBC on 19 March, he said that a ChatGPT-like large language model could be built for as little as $10-20 million using Nvidia’s platforms.
Exactly how? Right now, it all looks very expensive.
According to a March report by market research firm TrendForce, ChatGPT required about 20,000 GPUs to process its training data in 2020. Going forward, the firm predicts, OpenAI’s GPT models and ChatGPT may require more than 30,000 Nvidia A100 GPUs as they are commercially deployed. The A100 is priced between $10,000 and $15,000, depending on the configuration and form factor.
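A quick back-of-the-envelope calculation, using only the figures above and counting the hardware alone (no power, networking, data-centre or engineering costs), shows the scale of the gap between such bills and Huang’s $10-20 million figure. The snippet assumes nothing beyond plain Python.

```python
# Hardware-only estimate from the TrendForce GPU count and A100 price range above.
gpus = 30_000                            # projected A100s for commercial deployment
price_low, price_high = 10_000, 15_000   # reported per-unit price range, in dollars

low = gpus * price_low / 1e6
high = gpus * price_high / 1e6
print(f"GPUs alone: ${low:.0f} million to ${high:.0f} million")  # $300-450 million
```

That works out to roughly $300-450 million for the chips alone.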
While Huang hadn’t elaborated on how the budget could shrink, one explanation is that GPUs will eventually become a volume game. With higher volume, prices can drop. Technology also tends to become cheaper over time.
If that is indeed the case, Indian companies can hope to build a cheaper version of ChatGPT someday—possibly with some association with Nvidia or other companies making GPUs.
The Indian government has enormous datasets, but not all of them would be available to private companies. Sharing his thoughts on India building a ChatGPT-like model at the Mint Digital Innovation Summit, held in Bengaluru on 9 June, Rajeev Chandrasekhar, India’s minister of state for electronics and IT, said the government would give Indian startups and researchers “curated access” to Indian datasets to build such a model, while allowing them to have “partnerships with foreign companies”. The AI models could be built primarily in areas such as language translation, healthcare and governance.

The hub

So, why does Nvidia need India?
Huang has often said that India, where it has about 4,000 employees, is “strategic to Nvidia”. Of course, it is a growing market for its GPUs. But the country is also a hub for engineering resources and chip design. Overall, the company currently has more than 320,000 India-based developers working on its different platforms.
In March this year, the chief executive announced the launch of an ‘Indian Design Centre’ in Bengaluru. “Globally, on an average, we spend half a billion dollars on research and development, and we will not shy away from investing 25% (about $125 million) of that in India,” he had said during the launch.
Nvidia also needs to tap India because of geopolitical tensions and conflicts, including those involving China, Hong Kong, Israel, Korea and Taiwan. Nvidia’s product components are manufactured, and its products assembled, by partners in these regions, and any disruption there could have an impact on its revenue.
During the third quarter of fiscal year 2023, the US government announced new export restrictions and export licensing requirements targeting China’s semiconductor and supercomputing industries. This impacts exports of certain chips, as well as the software, hardware, equipment, and technology used to develop and manufacture certain chips, to China (including Hong Kong and Macau) and Russia.
Further, Nvidia is not the only company powering the global AI drive; it faces competition from big tech companies. Nvidia designs its chips and has them manufactured by semiconductor foundries such as Taiwan Semiconductor Manufacturing Company and Samsung Electronics Co. Among its rivals, Intel Corp makes its own chips, while Advanced Micro Devices, like Nvidia, designs chips that foundries fabricate. Then there are a host of other fabless companies such as Qualcomm and Broadcom. Microsoft, too, is reportedly working on its own AI chips that can be used to train large language models, which could reduce its reliance on Nvidia.
For now, though, Nvidia has stolen a march in this space. According to a 27 May report by investment bank JPMorgan, Nvidia can garner about 60% of all AI deals this year on the back of its hardware products such as GPUs.
But competition will eventually eat into Nvidia’s market share. And Huang knows well that Nvidia will need to increasingly tap growing markets like India to offset any potential loss of revenue from other geographies.