
In recent years, the artificial intelligence industry has relied on a simple formula: the bigger, the better. The prevailing belief is that more data, larger models, and stronger computing power will automatically cause intelligence to emerge on its own. However, Meta’s chief AI scientist, Yann LeCun, sharply criticized this idea in a speech at the National University of Singapore, arguing that the future of AI requires a radical change in how we think about machine learning.
LeCun, one of the most respected figures in the AI world and the creator of several foundational technologies, criticized the industry’s growing dependence on “scaling laws,” which hold that simply enlarging an AI model will inevitably improve its performance. Companies worldwide are rushing to build ever-larger language models, such as OpenAI’s GPT-4 and Google’s Gemini, as if intelligence depended solely on size.
LeCun argued, however, that this logic has clear limitations. Increasing model size has yielded impressive results in a short time, such as more fluent text generation and better image recognition. But when it comes to dealing with the real, complex world, the approach falls short. True intelligence, LeCun emphasized, requires the ability to handle uncertainty, understand the physical environment, anticipate the consequences of actions, and plan: capabilities that current AI models still lack, no matter how large they grow.
Instead of endlessly enlarging models, LeCun believes the industry should invest in “world models,” that is, systems capable of simulating and understanding how the world works. While today’s AI mainly predicts the next word from past patterns, world models would let machines grasp cause-and-effect relationships more deeply. They would rely on common sense, adapt to unexpected situations, and act with foresight about long-term consequences.
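To make the distinction concrete, here is a minimal, purely illustrative Python sketch. It is not LeCun’s or Meta’s actual method; all function names, the toy 1-D world, and the greedy planner are assumptions invented for this example. The point is only the interface difference: a next-word predictor maps past text to the next token, while a world model maps a state and an action to a predicted next state, which a planner can then search over.

```python
# Illustrative sketch only (hypothetical names, toy 1-D world):
# a "world model" predicts the effect of actions (cause -> effect),
# and a planner uses those predictions to reach a goal.

from typing import Dict, List


def world_model(state: Dict[str, int], action: str) -> Dict[str, int]:
    """Predict the next state of the toy world given an action."""
    position = state["position"]
    if action == "right":
        position += 1
    elif action == "left":
        position -= 1
    # "wait" leaves the position unchanged
    return {"position": position}


def plan(state: Dict[str, int], goal: int, horizon: int) -> List[str]:
    """Greedy planner: imagine each action with the world model and keep
    the one whose predicted outcome lands closest to the goal."""
    actions: List[str] = []
    for _ in range(horizon):
        best = min(
            ["left", "right", "wait"],
            key=lambda a: abs(world_model(state, a)["position"] - goal),
        )
        state = world_model(state, best)
        actions.append(best)
        if state["position"] == goal:
            break
    return actions


if __name__ == "__main__":
    # Starting at position 0, plan a path to position 3.
    print(plan({"position": 0}, goal=3, horizon=10))  # ['right', 'right', 'right']
```

A next-word predictor, by contrast, has no notion of state or action at all; it only continues a text sequence, which is why critics argue it cannot, on its own, anticipate the consequences of acting in the world.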
LeCun is not alone in raising these concerns. A growing number of specialists in the AI industry question whether real progress can come from increasing model size alone. Alexandr Wang, CEO of Scale AI, has also voiced concern about the diminishing returns of ever-larger models. Aidan Gomez, CEO of Cohere and one of the authors of the Transformer architecture, put it even more sharply, calling simply enlarging models “the most foolish way to achieve intelligence.”
These criticisms come at a crucial time. Investors continue to pour billions of dollars into large models in the hope of quick results. Yet if leaders like LeCun are right, the next stage of artificial intelligence will demand not merely larger machines but a fundamentally new understanding of how machines learn and interact with the world.
For startups, researchers, and investors, this signals a period of significant change. Success will be determined not by who builds the biggest model, but by who designs the smartest system. As the AI race intensifies, the question is no longer how large AI can become, but whether it can truly understand the world it was created in, reason clearly, and act intelligently.
Prepared by Navruzakhon Burieva