Bhavish Aggarwal Invests $230M in Krutrim to Supercharge Indian AI

Startups AI Funding News 1 min read, February 4, 2025

Bhavish Aggarwal, the founder of Ola, is doubling down on AI with a massive $230 million investment in his startup, Krutrim. Using funds from his family office, Aggarwal is spearheading Krutrim’s ambitious goal of raising $1.15 billion by next year, with plans to bring in external investors for the remainder.

Krutrim, now a unicorn, is taking major strides in India’s AI landscape. The company has open-sourced its AI models and announced plans to build what it claims will be India’s largest supercomputer in partnership with Nvidia. The lab recently introduced Krutrim-2, a 12-billion-parameter language model optimized for Indian languages. Early benchmarks show impressive results: Krutrim-2 scored 0.95 in sentiment analysis, outperforming competitors at 0.70, and achieved an 80% success rate in code generation tasks.

Beyond language processing, Krutrim has also released specialized AI models for image processing, speech translation, and text search, all fine-tuned for Indian languages. Aggarwal, whose ventures have been backed by SoftBank, acknowledged the progress on X, saying, “We’re nowhere close to global benchmarks yet but have made good progress in one year. By open-sourcing our models, we hope the entire Indian AI community collaborates to create a world-class Indian AI ecosystem.”

India is making a concerted push to establish itself in AI, competing against US and Chinese tech giants. The recent launch of DeepSeek’s R1 “reasoning” model, built on a modest budget, has already disrupted the industry. India has embraced DeepSeek’s progress and will host the Chinese AI lab’s large language models on domestic servers. In response, Krutrim’s cloud arm began offering DeepSeek’s models on Indian servers last week.

Krutrim has also introduced BharatBench, an evaluation framework designed to measure AI proficiency in Indian languages—an area where existing benchmarks primarily cater to English and Chinese. With a 128,000-token context window, Krutrim’s models handle longer texts and complex conversations effectively. Krutrim-2’s performance data highlights strong scores in grammar correction (0.98) and multi-turn conversations (0.91).

This investment follows the January launch of Krutrim-1, India’s first large language model with 7 billion parameters. The supercomputer deployment with Nvidia is set to go live in March, with further expansion planned throughout the year. With these aggressive moves, Krutrim positions itself as a key player in India’s AI revolution.
