February 3, 2025
Deep Research Is a Fundamental Shift in AI
The evolution from AI as a productivity tool to AI as an autonomous knowledge generator marks a pivotal moment in how we work and compete.
OpenAI launched Deep Research yesterday. They're calling it their second agent (which is a little confusing because they didn't put it in the same place as their first agent, Operator, but such is the UX strategy of ChatGPT). It seems to be a reasoning model, the unreleased full o3, with access to the internet in a continuous research loop. Unlike traditional AI tools that provide single responses, Deep Research iteratively explores, analyzes, and refines its findings—similar to how a human researcher would work through multiple rounds of investigation. The result is an experience unlike any other in an off-the-shelf LLM, including Google's own "Deep Research" feature.
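To make that "continuous research loop" idea concrete, here's a rough conceptual sketch of what an iterative research agent could look like. This is not OpenAI's implementation, and every name in it (web_search, analyze, the Analysis structure, the stopping rule) is a hypothetical stand-in for the real search and reasoning components; the point is only the shape of the loop: search, analyze, decide whether to keep digging, refine the query, then write up the findings.

```python
from dataclasses import dataclass

# Conceptual sketch of an iterative research loop. Not OpenAI's actual
# implementation; all helpers below are hypothetical placeholders.

@dataclass
class Analysis:
    summary: str          # what the model learned this round
    next_query: str       # a refined query targeting remaining gaps
    is_sufficient: bool   # whether the model believes coverage is adequate

def web_search(query: str) -> list[str]:
    # Hypothetical stand-in for a real search-and-browse step.
    return [f"(source text retrieved for: {query})"]

def analyze(question: str, sources: list[str], prior_findings: list[str]) -> Analysis:
    # Hypothetical stand-in for a reasoning-model pass over the gathered sources.
    summary = f"Round {len(prior_findings) + 1} notes on '{question}' from {len(sources)} source(s)."
    done = len(prior_findings) >= 2  # toy stopping rule; a real agent judges actual coverage
    return Analysis(summary, next_query=f"{question} (follow-up)", is_sufficient=done)

def deep_research(question: str, max_rounds: int = 5) -> str:
    # Iteratively search, analyze, and refine until the agent decides it has enough.
    findings: list[str] = []
    query = question
    for _ in range(max_rounds):
        sources = web_search(query)
        analysis = analyze(question, sources, findings)
        findings.append(analysis.summary)
        if analysis.is_sufficient:
            break
        query = analysis.next_query
    return "\n".join(findings)  # a real agent would synthesize a full, cited report

print(deep_research("current best practices for LLM prompting"))
```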
Before Deep Research, AI tools were largely an extension of human capabilities. You got more done, faster, but it was still you at the helm. Deep Research and, to a lesser extent, Operator are the first AI agents that can generate work you wouldn't have been able to produce on your own. What would have taken me weeks, maybe months, to complete took me 22 minutes. And, to be honest, the result was better than anything I could have done.
We're witnessing a fundamental shift in how AI tools operate and integrate into our work. We knew AI agents would be a hot topic this year, but the pace is going to be mind-bending. We got o1, OpenAI's first specialized reasoning model, as a preview on September 12th, then the full version on December 5th. That was followed by o3-mini, a more compact but more advanced reasoning model, eight weeks later on January 31st, and now Deep Research just two days after that. Four major releases in less than five months, each one pushing the boundaries of AI's analytical capabilities further.
Testing Deep Research
The first task I put Deep Research through was secondary research on prompting methods. At Mostly Serious, we need to stay ahead of the curve on AI prompting best practices to effectively deliver AI solutions to enterprise clients. The agent returned an 8,000-word, academic-style paper that not only answered my immediate questions but uncovered insights I hadn't even considered. That report now serves as a foundation I can feed into other models, like o1-pro, to digest the findings and transform them into practical guidance for our clients.
Here's another example from Mostly Serious: we've been working on a salary study, and I had invested around five hours researching the first position using a hodgepodge of the best AI models and tools. The result was quite good. I gave Deep Research one prompt and it returned a significantly more comprehensive report, including cited market data I hadn't found in my own research, in about seven minutes. The quality gap was huge, and a little scary.
The Future of Work
The implications of this shift are profound. Not in the bullshit AI hype way, but in a very real way. We've just moved past doing things faster to doing things that were previously impossible. With these agents, you're accessing levels of insight and efficiency previously available only to large teams of experts. Organizations that don't adapt to this new reality will be left behind.
I'm typically hesitant to buy into claims about OpenAI's products before I use them. But I think they've actually undersold Deep Research. It's that good. In just a couple of examples, without thinking about how to get the most out of the tool, I was able to save dozens of hours while gaining insights I wouldn't have been able to come up with on my own.
If you're not integrating these tools deeply into your process, you're already falling behind the curve. Organizations will either be AI-first or they will become obsolete, and the timeline is getting shorter every day.