OpenAI’s o3 vs DeepSeek R1: Open Source or Full Control

Is OpenAI scared of DeepSeek? Not in the slightest. DeepSeek’s move to open-source their R1 model is a clever play, one that leverages the massive, mostly Western open source community (with over 50% of contributions coming from Europe and North America). But it’s clear that OpenAI is heading in an entirely different direction. In the immediate future, I firmly believe OpenAI will continue to dominate the AI landscape.

Open Source Advantage

DeepSeek’s decision to open-source their model is smart. The open source model isn’t just an idealistic dream anymore; it has become a legitimate development process, providing what amounts to free labor from a global community of contributors. Open source has proven incredibly reliable over the past few years, and DeepSeek is making excellent use of it. Yet, as innovative as this strategy is, it’s only one part of a broader story.

Two Responses

OpenAI isn’t just sitting on the sidelines. Their strategy can be broken down into two major moves:

  1. Train for Cheap:
    While it’s not legal for third parties to “distill” OpenAI’s outputs, OpenAI can legally distill its own models. This “train for cheap” approach means that if outsiders can train a capable model for as little as $30, as researchers at Berkeley showed in a recent experiment, imagine what becomes possible at OpenAI’s scale of compute. Even if efficiency gains don’t scale linearly with compute, advanced parallelization techniques can bridge that gap, letting them squeeze far more capability out of their existing infrastructure.

  2. o3-mini-high – The Game Changer:
    The second, and in my view more significant, reason OpenAI is poised to come out on top in the short term is o3-mini-high. This isn’t just another iteration; it’s the first model OpenAI has classified as medium-risk for model autonomy. And it feels different. If you’ve seen some of the examples, o3-mini-high’s coding prowess is remarkable, arguably unmatched in the complexity and completeness of what it produces, even though it’s still “mini” in size. OpenAI has published the technical details themselves.
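The “distill” idea in point 1 can be sketched in a few lines. This is a minimal, illustrative example of knowledge distillation in general (training a small student to match a large teacher’s softened output distribution), not OpenAI’s actual pipeline; the logits and temperature below are made-up numbers.

```python
import math

def softmax(logits, temperature=1.0):
    # Softened probabilities: a higher temperature flattens the distribution,
    # exposing the teacher's "dark knowledge" about non-top answers.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions: the
    # student is pushed toward the teacher's full output distribution,
    # not just its single top prediction.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher incurs zero loss;
# one that disagrees incurs a positive loss to minimize.
teacher = [4.0, 1.0, 0.5]           # hypothetical teacher logits
aligned = distillation_loss(teacher, [4.0, 1.0, 0.5])
diverged = distillation_loss(teacher, [0.5, 1.0, 4.0])
```

Minimizing this loss over the student’s parameters is the “free labor” in reverse: the expensive teacher does the hard reasoning once, and the cheap student inherits it.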

The implications of o3-mini-high are interesting. It signals a future where AI isn’t just a supportive tool but an adaptive partner, one capable of organizing and even writing its own tools. Imagine an AI that can generate a tailored neural network for a specific task, train it, and then deploy it to complete that task seamlessly. We’re talking about a system that might operate with what I call “tool usage without tools”, a scenario where file format limits and traditional constraints no longer hold sway.
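To make the “generate, train, deploy” loop concrete, here is a toy sketch under heavy assumptions: the “sub-model” is just a one-parameter linear model fit by gradient descent, standing in for a full generated network. Every name here is illustrative, not any real system’s API.

```python
def spawn_and_train(task_data, steps=200, lr=0.01):
    """Spawn a fresh sub-model (a single weight w) and train it on the task."""
    w = 0.0
    for _ in range(steps):
        # Mean-squared-error gradient over the task's (x, y) pairs.
        grad = sum(2 * (w * x - y) * x for x, y in task_data) / len(task_data)
        w -= lr * grad
    return lambda x: w * x  # the "deployed" sub-model

# The orchestrating AI hands off a task it has never solved directly:
task = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # hidden rule: y = 2x
model = spawn_and_train(task)
```

The point of the sketch is the shape of the loop, not the model: the orchestrator never solves the task itself; it manufactures and trains a disposable specialist, which is the “tool usage without tools” idea in miniature.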

I think OpenAI, along with government entities (Project Stargate didn’t appear from nowhere), has been working toward this vision for a while now. We heard rumors about Orion and Strawberry, and got o1. It’s clear that with o3-mini-high, OpenAI has reached a new threshold in model autonomy, a level that could redefine the way we think about AI’s role in research, development, and real-world applications. I can only wonder what o3 is like without guardrails.

While DeepSeek is carving out its niche in the open source space with small models capable of running on almost anything, OpenAI is playing a different game.

They’re not just iterating on efficiency and compute; my guess is they are working toward a model designed not to run on anything, but to run anything. An AI that writes its own tools, adapts on the fly, and even spawns and trains sub-AIs to tackle complex tasks. It’s a bold leap toward a future where AI evolves almost autonomously.

In the short term, OpenAI’s strategic moves suggest they will maintain their lead. But the long-term vision is different. And very unclear.

What an interesting year this is going to be.
