The AI revolution didn’t start with a big conference or a shiny product launch. It started with a mistake.
In early 2023, everyone was focused on the sudden rise of generative AI. OpenAI stunned the world with ChatGPT, Google rushed to catch up, and it looked like a simple race between a few huge American tech companies. The idea was clear: bigger models, more data, more powerful chips—whoever had the most resources and smartest people would control the future.
But the real turning point didn’t happen inside those secret labs. It happened out in the open.
Inside Meta, engineers had spent huge amounts of money and computing power building a new family of large language models called LLaMA. These models were strong, flexible, and meant to be shared only with a small, carefully chosen group of researchers under strict rules. They represented years of work and billions in investment. Then, within days of that limited release, a version of LLaMA leaked onto the open internet.
A torrent file showed up in lesser-known corners of the web. Within hours, developers, hackers, and AI fans were downloading what was basically a multi‑billion‑dollar gift. What Meta had meant to keep tightly controlled suddenly became a resource for the whole world. Anyone with a decent computer and enough storage could now run and modify a model that was close to the cutting edge.
To most people, it looked like just another security mistake. To people working in AI, it felt like someone had quietly handed the plans for the future to anyone who wanted them. Teams that never could have afforded to train such a huge model now had access to its inner workings. They started trimming it down, quantizing it, tuning it for specific tasks, and running it on hardware that a year earlier would have been far too weak for the job.
The barrier to getting into serious AI work didn’t just get lower; it practically disappeared.
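To see what that looked like in practice, here is a minimal sketch of the trimming-and-tuning workflow, assuming the open-source Hugging Face stack (the transformers, peft, and bitsandbytes libraries). The checkpoint name is a placeholder, not any specific model: the weights are loaded in 4-bit precision so they fit on a consumer GPU, and small LoRA adapters are attached so only a sliver of the parameters needs training.

```python
# A sketch of "trimming and tuning" an open-weight model on modest hardware.
# Assumes the Hugging Face transformers, peft, and bitsandbytes libraries.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

MODEL_NAME = "example-org/open-7b-base"  # placeholder for any open checkpoint

# Quantize to 4-bit on load: roughly a 4x memory saving versus fp16,
# enough to fit a 7B-parameter model on a single consumer GPU.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=quant_config,
    device_map="auto",
)

# LoRA: instead of updating billions of weights, train small low-rank
# adapter matrices injected into the attention layers.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the total
```

The two tricks compound: quantization shrinks the memory footprint roughly fourfold, and LoRA cuts the trainable weights to a fraction of a percent, which is how a model that once demanded a data center could suddenly be adapted on a gaming PC.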
Around the same time, another leak hit a different tech giant. Inside Google, a senior engineer had written an internal note with a title that spread fast once it got out: “We Have No Moat, and Neither Does OpenAI.” For years, the comforting story in Silicon Valley had been that only a few companies could ever build the most powerful AI. You needed massive capital, huge data centers, and vast amounts of private data. That exclusivity—that “moat”—was supposed to protect the leaders forever.
The memo said the opposite. It argued that open‑source AI was catching up much faster than people expected. Small teams around the world were solving problems that used to be possible only in giant, well‑funded labs. The important advances—better ways to fine‑tune models, smarter training tricks, faster and cheaper ways to run them—were showing up in public code repositories, online forums, and open research papers, not just in private company documents. Meta’s leak of LLaMA was not just an embarrassment; it was proof that once a strong base model exists in the open, the global community can improve it faster than any one company.
The memo’s conclusion was harsh but clear: a moat based on secrecy and size was disappearing. If anyone could take a leaked or open model, improve it, and deploy it, then “we built a huge private model” stopped being a safe long‑term advantage. Real power would shift to those who built around the models—tools, communities, platforms—instead of those who tried to lock everything behind closed doors.
While these debates were bouncing around Silicon Valley, something quieter but just as important was happening in China. The United States had decided that the best way to slow China’s progress in AI was to limit its access to high‑end chips. Export rules targeted powerful GPUs and advanced hardware, especially from companies like Nvidia. The logic seemed simple: without the newest chips, you couldn’t train or run the biggest, most advanced models at scale.
China reacted, but in a way many people didn’t expect. Instead of trying to fight the U.S. directly in a hardware race it didn’t fully control, Chinese companies and researchers turned their focus to something the U.S. couldn’t easily block: software—especially open software.
With LLaMA and other models available as open‑source or semi‑open foundations, Chinese teams could skip some of the most expensive and slow parts of development. They didn’t need to train absolutely everything from zero; they could build on top of models the West had already paid for. If hardware was limited, they could put their energy into making these models smaller, more efficient, and better suited to their own needs.
From that environment came a new wave of Chinese AI models, and two names stood out quickly: Qwen and DeepSeek.
Qwen, created by Alibaba, grew into a large family of open‑source models focused not just on raw power but on real‑world use. These weren’t just numbers in a research paper. They were built to plug into actual products. They supported many languages, worked well with existing tools, and came with license terms that made them attractive to companies that didn’t want to be locked into a single provider. As Qwen improved, it quietly became the backbone for a huge range of applications—from chatbots and AI agents to internal tools inside large organizations.
DeepSeek followed a slightly different idea: doing more with less. Instead of racing to build the biggest possible model at any cost, DeepSeek focused on efficiency and smart thinking. Its models aimed to reach, or come close to, the performance of top Western systems while being much cheaper to run. For developers and companies, that difference in cost wasn’t just a nice bonus. It could be the difference between a cool demo and a working business.
If you can get similar quality at a tiny fraction of the price, spreadsheets—budgets, costs, margins—start to decide what you use. And that is exactly what happened.
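The arithmetic behind that shift is easy to sketch. The per-token prices below are illustrative assumptions, not real quotes from any provider; what matters is the shape of the calculation that product teams actually run:

```python
# Back-of-the-envelope monthly cost for one AI feature at modest scale.
# All prices are hypothetical, for illustration only.
requests_per_day = 100_000
tokens_per_request = 2_000  # assumed average, prompt plus completion

price_per_million_tokens = {  # USD, illustrative only
    "frontier_closed_model": 10.00,
    "efficient_open_model": 0.50,
}

monthly_tokens = requests_per_day * tokens_per_request * 30

for name, price in price_per_million_tokens.items():
    cost = monthly_tokens / 1_000_000 * price
    print(f"{name}: ${cost:,.0f} per month")

# frontier_closed_model: $60,000 per month
# efficient_open_model:  $3,000 per month
```

At a twentyfold gap, the difference is tens of thousands of dollars a month for a single feature, and the benchmark leaderboard stops being the deciding factor.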
A quiet shift began. Developers in the U.S. and Europe, who once assumed they would stick with OpenAI, Google, or a few other Western providers, started trying Chinese models. Some were attracted by lower prices, others by better support for multiple languages or by flexible ways to host the models. As people shared their results, this moved from simple curiosity to real adoption.
One of the clearest examples came from Airbnb, a major American tech company. When it built an AI‑powered customer support assistant, it didn’t rely only on the usual Western models. Instead, Airbnb publicly praised and heavily used Alibaba’s Qwen models. Company leaders described Qwen as fast, capable, and cost‑effective compared with some alternatives. This wasn’t a small startup taking a risky chance. It was a well‑known global platform choosing a Chinese open model for an important, customer‑facing system.
Airbnb was not alone. Across Silicon Valley and beyond, both new startups and large companies began sending more and more of their AI traffic to Chinese and open‑source models. They weren’t trying to make a political point. They were simply following performance and price. Many of these models were “good enough” or even better, and they were cheaper. The economic pull was too strong to ignore.
By late 2025, that shift showed up in the numbers. Chinese models had gone from barely noticeable in global usage to a major piece of the market. Estimates suggested their share had risen from around one percent to about thirty percent in less than a year. Much of this growth came from open‑source and low‑cost models like Qwen and DeepSeek, which set a new standard for how much AI power you could get for your money.
As all this happened, the story people told about AI started to change. For several years, AI had been treated like a rare, almost magical product. Access to top models was limited, and companies that controlled them could charge high prices because there weren’t many good alternatives. This sense of rarity supported huge valuations. Companies with relatively small current revenue but large, impressive models were valued in the tens or even hundreds of billions of dollars, based mostly on the belief that they alone would shape the future.
But when models become common—when open projects can reach performance close to heavily guarded systems—that story of rarity falls apart. AI begins to look less like a miracle and more like electricity.
Electricity used to be something special, owned by only a few providers and sold at a premium. Over time, standards were created, infrastructure was built, and supply grew. Electricity became cheap and everywhere. The real value moved away from “who can generate electricity” toward “who can use it to build the most valuable things”: factories, devices, data centers, whole industries. Power generation stayed important, but it was no longer the main place where money and influence sat.
The same thing is happening with AI. For a while, just being able to say “we have a huge model” was enough to attract investors and attention. Now, with open and low‑cost models all over the place, the key question is different: if almost anyone can get access to a strong model, where does real advantage come from?
The answer is moving “up the stack.” The main protective wall—the “moat”—is no longer the model itself. It is everything built around the model: the tools, the platforms, the communities, and the product experiences that keep users coming back.
This has big implications for what people call the “AI bubble.” Many observers worry that AI valuations have been driven more by hype than by real, proven value. If there is a correction, it might not come from a dramatic crash caused by a scandal or a sudden ban. Instead, it could come from something quieter: the core capability that was supposed to be rare—the ability to generate text, code, images, and decisions—becoming so common that it no longer justifies sky‑high prices on its own.
When dozens of different models can write decent emails, fix code, summarize long documents, and answer complex questions, simply owning one more model that can do those things isn’t special. The companies that win will be the ones that create the best experiences, the best tools, and the best businesses on top of this now basic layer of intelligence.
This is where open‑source AI shows itself as the deeper shift. Just as open‑source tools like Linux, Apache, and PostgreSQL quietly powered much of the internet, open models are now quietly powering a growing share of AI products. They let developers in any country build serious AI systems, even if they don’t have contracts with the biggest cloud providers. A startup in Lagos, a research group in São Paulo, or a small business in Warsaw can all work with powerful models without asking permission from a short list of U.S. tech giants.
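Part of what makes this permissionless adoption easy is that open models have converged on a shared interface: a self-hosted model is typically served behind the same OpenAI-compatible HTTP API that commercial providers use, so switching vendors, or dropping vendors entirely, is close to a one-line change. A minimal sketch, assuming a local server (for example, one started with vLLM) and a placeholder model name:

```python
# Talk to a self-hosted open model through the OpenAI-compatible API
# that servers like vLLM expose. No account with any provider is needed;
# the base_url points at your own machine and the key goes unused.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # your own server, not a vendor
    api_key="not-needed",                 # local servers ignore the key
)

response = client.chat.completions.create(
    model="example-org/open-7b-instruct",  # placeholder open checkpoint
    messages=[{"role": "user", "content": "Summarize this contract clause."}],
)
print(response.choices[0].message.content)
```

Because the interface is the same everywhere, that startup in Lagos or research group in São Paulo can move between hosted and self-hosted models without rewriting its application.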
China’s role in this new world is no longer just that of a “fast follower.” By adopting open‑source models and pushing hard on cost and practicality, Chinese companies have helped speed up the process of turning AI into a common utility. They have shown that huge budgets are not the only way to build strong systems. That, in turn, has pushed the rest of the world to rethink what it means to “lead in AI.”
There is a clear irony in how this played out. Attempts to limit China’s access to advanced hardware pushed its ecosystem to become stronger on the software side—leaner, more inventive, and more open. Meta’s LLaMA, originally meant as a controlled release to a handful of researchers, ended up giving everyone a shared base to build on once it leaked. Others used that base to compete directly with the very company that created it. Google’s internal memo, meant as a warning, now looks like it accurately predicted the direction of the field.
The story of AI from 2023 to 2025 is not just a story of ever‑larger models. It is the story of how control moved from a few organizations to many. It is the story of how open‑source communities turned leaked and published models into a living, constantly improving ecosystem. It is the story of how Chinese firms used that ecosystem to jump over some limits and win real customers, even in Silicon Valley’s backyard. And it is the story of how a technology once seen as rare and exclusive has started to look more like a basic service—powerful, everywhere, and slowly becoming something people simply expect to have.
In the end, the key lesson is straightforward: the future of AI will not belong only to those who build the biggest, most secret models. It will belong to those who know how to use an increasingly open, global, and widely available layer of intelligence to create products, services, and experiences that people truly rely on.
That is the AI revolution most people did not see coming—not one company’s victory, but the moment when the protective wall around intelligence itself began to fade.
“With artificial intelligence we are summoning the demon… I think we should be very careful about AI.” (Elon Musk)