Creating a Culture for AI Builders
The day-to-day at Gradient Labs
If you’ve ever worked in a startup, you’ve heard the common phrases: it’s fast-paced, it’s high-autonomy, it’s impact-oriented. Anyone can recite these buzzwords. This blog post goes into actual detail about how we work at Gradient Labs.
Momentum over detailed planning
One of the earliest conversations we had when we were starting Gradient Labs was about momentum. In previous roles, we had felt that the real cost of large planning exercises like quarterly OKRs was a loss of velocity and an inability to quickly update plans when circumstances changed. We therefore started out with a simple mantra: “keep things moving.”
If you go into our archives, you’ll find a #one-line-updates Slack channel made up of bullet-point updates: “___ is now done.” It was early; the majority of things were yet to be done. As we grew and updates started spreading out, we switched over to a :glabs-heartbeat: emoji reaction, which automatically cross-posts the update to a channel. These posts celebrate anything that moves us forward: shipping new product features, winning bake-offs, launching a marketing campaign, or anything else.
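As an illustration, the heart of that cross-posting automation is just a routing decision. The emoji name comes from the post itself, but the event shape and the destination channel below are assumptions for the sketch, not our real setup:

```python
# Minimal sketch of the :glabs-heartbeat: cross-posting logic.
# The event dict shape and HEARTBEAT_CHANNEL are hypothetical;
# a real bot would receive Slack reaction_added events and look
# up the reacted-to message via the Slack API.
from typing import Optional

HEARTBEAT_EMOJI = "glabs-heartbeat"
HEARTBEAT_CHANNEL = "#heartbeat"  # assumed destination channel


def route_reaction(event: dict) -> Optional[dict]:
    """Return the message to cross-post to the heartbeat channel,
    or None if this reaction should be ignored."""
    if event.get("type") != "reaction_added":
        return None
    if event.get("reaction") != HEARTBEAT_EMOJI:
        return None
    return {"channel": HEARTBEAT_CHANNEL, "text": event["text"]}
```

Any other reaction falls through and returns `None`, so only heartbeat-tagged updates reach the shared channel.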
Our planning process is high level and is oriented towards driving focus and momentum. We decide on outcomes we are aiming for, identify the bottlenecks, and set intentional time limits. At the same time, we retain the ability to change course. Some of our most interesting product features have come from the team spotting patterns, having ideas, or taking a day to run an experiment. When outcomes need to be sequenced or mapped out in more detail, we use proposals: documents akin to RFCs or product specifications.
From the outside, some of this might appear a little chaotic. At any one time, we have a set of problems that we are tackling, and what we need is to have ideas, validate them, and quickly discard the ones that don’t work in order to move the needle forward. Making in-depth plans often stands in the way of fast iteration; deciding on the next step to take is easier than deciding on the next ten.
Fluid teams
Today, just over 50% of Gradient Labs is Engineering (AI, Backend, Product) and Design. We also have a very special team that we call AI Delivery, with Product & Ops specialists who run the entire lifecycle of landing and expanding our AI agent’s capabilities with each of our partners. The rest of the company spans Sales, Marketing, and Ops; all of them work deeply with everyone in tech.
There’s a blueprint that has been synonymous with startups and scale-ups for a long time: people get organised into cross-functional teams (squads, pods) and teams are clustered thematically into groups (tribes, collectives). This model assumes that you can decide that structure relatively up-front, by knowing what needs to be done. Once you create teams, re-structuring them is not a frictionless exercise, and people start thinking about their team’s remit.
We’ve adopted a more fluid structure:
Strike teams are cross-functional groups who are working on a top-level company priority. The team’s continued existence means that the goal isn’t fully reached yet. Membership in a strike team is decided based on what the team currently needs to achieve that outcome; people are expected to drop out when no longer needed.
Pair projects. In some cases, we have people huddle on a project to get it over the line. Usually these are discipline-specific: a change that is only AI, product, or backend-related.
One-person quests. Every company has had that inevitable moment of receiving a customer request or idea, and then needing to figure out how to slot it into a wider set of priorities that a cross-functional squad is working on. We have countless examples of a single Engineer having an idea, building it, and then seeing it through all the way to customers.
On demand over recurring meetings
When I asked the team for ideas for this blog post, “meeting-free calendar” was one of the first replies; it is highly valued across the entire Engineering team.
There’s a natural cadence of meetings that tech teams usually fall into: stand-ups, planning, cross-team syncs, retros. In these environments, builders might realistically get only a couple of hours a day of pure focus. We do not impose any required meetings; we’ve found that standing meetings quickly become formulaic and lose their meaning. In the spirit of fast collaboration, meetings happen on demand: when a thread is becoming too long, when someone is blocked, or when multiple people are running in parallel and want to check in.
The only standing meeting that we currently have is a weekly company-wide All Hands which ranges from 20 to 45 minutes, and spotlights recent progress, reinforces our current goals, or gives us a forum to share updates.
Customers actively contribute to our roadmap
When adopting an AI agent, the most natural questions from our partners are around understanding, steering, and extending the AI agent’s performance. A lot of amazing ideas come from their feedback and the constraints that they are working with.
It’s very common at Gradient Labs for customer meetings to include folks from AI Delivery and Engineering, so that we can unblock customers as quickly as possible. Given that we focus on financial services, it’s also common that new features we build from one customer’s feedback are immediately useful to others: this means that we can stage our feature rollouts by customer and their interest, rather than needing to trade off against or design around competing requests.
Process & tooling, insofar as it helps
When joining a company, a natural question to ask is “what is the process for ___?” At an early-stage startup, the answer is usually “we don’t have one yet.” Having a process does help in some cases: it brings clarity and safety. Other times, it stands in the way.
At Gradient Labs, we’ve intentionally avoided adding process unless it helps us move faster.
What is our process for raising an incident? If you are thinking about whether or not to raise an incident, just do it.
What is our process for tracking bugs? Whatever the current team moves faster on—sometimes Linear, sometimes a Notion page; sometimes, just squash the bug right now.
What is the process for onboarding a new customer? We have the milestones that need to be achieved, not the strict sequence of steps.
All of these are ways-of-working processes; the process & bar we set for the product itself remains very high.
Owning the quality bar & not hiding behind the data
The truth we face today is that, over time, it is becoming easier to build an AI agent. What is not becoming easier is building a great one. Although the entire founding team has a background in Data Science, we are very intentional about not hiding behind “what the data says.” After all, an AI agent can deliver the same resolution in many ways: it might be okay, mediocre, frustrating, or amazing, even though the resolution was reached in all cases. The data is an important driver of where to look, but not the ultimate decider on whether things are good enough.
Coming to a shared understanding of “great” is not easy. When we embarked on building our voice agent, the first thing we did was scour the Internet to find any and all agents we could talk to, and asked questions like: “would I enjoy speaking to this?” and “would I buy this?” When we launched, we reviewed all of the phone calls and iterated heavily. And as we scale, we’re keeping that ethos, even though the number of conversations we’re having far exceeds our ability to review them all.
Scaling quality
We believe that when building AI agents (with AI), the bottleneck has largely moved away from the time it takes to create something and into the time it takes to ensure it works well. We know that every part of our stack, from the support platform connection through to the Temporal cache, has a meaningful impact on the outcomes we achieve.
And we have seen that nimble, fluid teams that live & breathe a problem while driving it towards the best possible outcome can build products that outperform others.



