Opinion | Can DeepSeek’s Liang Wenfeng stay true to his AI ideals?
While OpenAI CEO Sam Altman’s altruistic vision has turned pragmatic, Liang, whose open-source tech is free to share, has loftier aims for now.
Almost 20 years ago, when I was a computing undergraduate, my seniors often said that for a good programmer, putting people first was the simplest, most fundamental principle. The software we write, whether for the back end of a complex system or a protocol used only by developers, ultimately serves people.
The recent emergence of R1, the open-source reasoning model from DeepSeek, the Chinese artificial intelligence (AI) start-up, made me think of this again.
DeepSeek’s team is small compared with many of its American AI peers, and reportedly entirely local. Using less-efficient graphics processing unit hardware, the team managed to train an AI model – the V3 unveiled last December – in less than two months, and both of DeepSeek’s large language models (LLMs) perform on a par with well-established models. DeepSeek’s achievements were nothing short of a Lunar New Year miracle.
Before this, AI was seen as an exclusive game for top-tier companies with unlimited resources. That perception has been overturned. Suddenly, the AI race is no longer just about who has the most resources but about who can innovate better. As DeepSeek founder Liang Wenfeng put it, “we accidentally became a catfish” – stirring up the industry.
What I find astonishing is not just their innovation under constraints or their ability to achieve more with less, but their decision to freely share their work. Not only is DeepSeek’s open-source code free to use, but the company has also published research papers explaining the development process.
Transparency and openness are the cornerstones of the internet. Yet people had become accustomed to treating proprietary LLMs as black boxes: we had no idea how they worked. DeepSeek is disrupting that.