AI agents in 2025
AI agents are all the rage right now, but they’re really just an implementation detail. Chasing them is like skating to where the puck is instead of where it’s going.
The transitory nature of “Build-Your-Own” AI Agents
We’re in the thick of the AI agent hype cycle. Companies are trying to build and deploy their own, and startups are promising platforms to do just that. But does anyone really want to manage an army of custom AI agents?
What companies truly want is an “API to a brain”—a digital employee they can direct. This “steerable, safe intelligence on tap” is the real prize.
From my experience with ML and MLOps, I understand why some think building agents in-house matters. But there is a key difference: with traditional ML, most models had to be company-specific, trained on each company’s unique data. Not so with today’s generative AI, which is general purpose. As with human employees, the same digital employee could work for company A or company B. Expecting to build and manage your own AI agents is like running a private university just to hire its graduates. No one would do that.
Where Will the Puck Be Next December?
Instead of DIY AI agents, we’ll see “API-to-a-brain” services—intelligent, contextual, secure endpoints. These will be domain-specific “brains” with superhuman skills in areas like engineering, operations, or customer service—endpoints that you steer rather than armies you must recruit, train, and manage.
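To make the contrast concrete, here is a minimal sketch of what “steering an endpoint” rather than “managing an agent fleet” might look like. The names (`BrainClient`, `steer`, `ask`) and the interface shape are purely illustrative assumptions, not a real product API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "API to a brain" idea: a steerable endpoint you
# direct with policies, instead of agents you recruit, train, and manage.
# BrainClient, steer, and ask are invented names for illustration only.

@dataclass
class BrainClient:
    """A domain-specific intelligence endpoint that accepts steering policies."""
    domain: str
    policies: list[str] = field(default_factory=list)

    def steer(self, policy: str) -> None:
        # Steering is declarative direction, not retraining or agent plumbing.
        self.policies.append(policy)

    def ask(self, task: str) -> str:
        # A real service would route this to a hosted, continuously learning
        # model; here we just echo the task with its governing policies.
        rules = "; ".join(self.policies) or "default policy"
        return f"[{self.domain} brain | {rules}] handling: {task}"

ops_brain = BrainClient(domain="operations")
ops_brain.steer("escalate vulnerable users to a human")
print(ops_brain.ask("triage overnight alerts"))
```

The point of the shape: the company supplies direction (policies, context), while the provider supplies the intelligence, exactly as it would with a human hire.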
To get there, innovators have discovered that LLMs and data/action layers are just two of the required components. True “steerable intelligence” requires a lot more. Here are a few examples from the ops domain that we needed to build on our journey:
Continuous learning algorithms: Systems must learn like humans do, continuously assimilating new information, staying current, and unlearning outdated or incorrect facts, distilling knowledge autonomously as it arrives. This goes beyond traditional fine-tuning and requires robust, dynamic knowledge updating.
Deeper understanding: Standard Retrieval-Augmented Generation (RAG) won’t cut it for genuine comprehension. We need proprietary knowledge graphs and richer context models that let these brains grasp nuanced user requests and handle intricate domain concepts.
Reasoning for specialist skills: Just as humans are, intelligent systems need to be taught how to handle deception, manage objections gracefully, deal with vulnerable users, and perform domain-specific assessments. This is not trivial, given that systems with more reasoning capability are progressively harder to steer. It is, however, the most important property.
And there are many more examples across other domains like autonomous research systems or fully autonomous engineers where action taking and raw LLM output are just the tip of the iceberg.
Conclusion: AI agents are an implementation detail, and a highly transitory one. Building your own agents is like running your own university just to hire its future graduates. Instead, by next December, the puck will have moved toward out-of-the-box, steerable “brains” that deliver real intelligence on demand.
Great post, Dimitri.
Helps mere mortals like me understand the dominant factors at play here and how well placed Gradient is to become a major player in this space.
All the best to the team, and I hope you have time to relax and enjoy your Xmas/New Year celebrations and get ready for a great 2025.
Bob Bassi