Being an engineer means a lot more than building; it means constantly learning new tools — and using those tools to ship smarter and faster.
AI is one such tool, accelerating development and deployment like never before. In fact, the majority of dev teams are now using AI, according to a GitHub survey. And three in four coders use AI at least once a week.
But AI isn’t the only tool transforming the engineer’s toolbox. New techniques in data visualization, lightweight app development and continuous integration and delivery are all helping dev teams ship faster than ever.
Which is why Built In spoke with Vijay Ramamurthy, founding engineer at Oso, about how his team designs, builds and ships fast — without sacrificing quality.
Oso’s authorization platform is designed to help engineering teams streamline the way they build access control into their AI-native apps so they can deliver new features more efficiently.
What’s your rule for releasing fast without chaos — and what KPI proves it?
There’s no one “silver bullet” rule for moving fast without breaking things; we approach it in layers. First, we have a variety of tools we use to validate our changes as we work, ranging from quick ones like unit tests and microbenchmarks to more thorough ones like mirroring production traffic onto a shadow deployment of a risky candidate branch for observation. Next, we use automated checks in CI and CD to alert us to problems before they go live. And for when something does make it all the way into production, we’ve designed our architecture, tooling and processes to enable quick rollbacks; in the most recent example, we rolled back a performance regression within five minutes of being alerted to the issue.
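To make the traffic-mirroring layer concrete, here’s a minimal sketch of what duplicating a request onto a shadow deployment and flagging mismatches can look like. The endpoints, payload handling and comparison logic here are illustrative assumptions, not Oso’s actual tooling.

```python
# Hypothetical sketch: mirror a request onto a shadow deployment of a
# candidate branch and compare its response against production.
# The URLs and endpoint below are illustrative, not Oso's real services.
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

PROD_URL = "https://authz.example.com/check"     # assumed production endpoint
SHADOW_URL = "https://shadow.example.com/check"  # assumed shadow deployment

def post(url: str, payload: dict) -> dict:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return json.load(resp)

def mirror(payload: dict) -> dict:
    """Send the same request to prod and shadow; serve prod, log any diff."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        prod_future = pool.submit(post, PROD_URL, payload)
        shadow_future = pool.submit(post, SHADOW_URL, payload)
    prod_result = prod_future.result()
    try:
        if shadow_future.result() != prod_result:
            print("shadow mismatch:", payload)  # flag for observation
    except Exception as exc:
        print("shadow error:", exc)  # shadow failures never affect prod traffic
    return prod_result  # only the production response is ever served
```

Only the production response is returned to callers; the shadow deployment exists purely for observation, so a risky candidate branch can be evaluated against real traffic before it goes live.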
A couple of metrics bear this out: more than five nines of uptime (despite the recent AWS outage) and sub-10ms P90 latency across billions of requests per month. Having multiple layers of tooling to prevent and mitigate problems lets us build and ship quickly with peace of mind, because it reduces the amount of manual scrutiny required to know a change is safe.
What standard or metric defines “quality” in your toolchain?
We don’t track things like the number of story points or pull requests we’re shipping per week. We set deadlines based on what we need to get done for our customers, and we hold regular engineering retros to identify anything that slowed us down or kept us from achieving our goals the way we wanted, then prioritize fixing those problems. We regularly ship changes that improve our own development experience and velocity.
Recent examples include splitting our code into smaller modules to cut incremental recompilation times and building launch templates for spinning up debug servers with all the tools necessary to reproduce issues observed in production. Things we’re currently working on include internal tooling for investigating performance and improvements to logging so we can diagnose what we observe in production more quickly and easily.
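As a rough illustration of the debug-server idea, and assuming the launch templates in question are AWS EC2 launch templates (the interview doesn’t specify), spinning up a preconfigured debug server with boto3 might look something like this; the template name, version and tags are hypothetical.

```python
# Hypothetical sketch: spin up a debug server from a pre-built launch template.
# Assumes AWS EC2 launch templates via boto3; the template name and tags are
# illustrative, not Oso's actual configuration.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    LaunchTemplate={
        "LaunchTemplateName": "debug-server",  # assumed template preloaded with debug tooling
        "Version": "$Latest",
    },
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "reproduce-prod-issue"}],
    }],
)

print(response["Instances"][0]["InstanceId"])
```

The appeal of this kind of setup is that reproducing a production issue starts from a known-good, fully tooled environment rather than one assembled by hand.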
Share one recent adoption and its measurable impact.
Adopting the Model Context Protocol and providing our customers with an MCP frontend for our API has been surprisingly impactful to their experience and their ability to move fast when building with our product. Many of our customers use our product in fairly complex ways, and when they encounter an unexpected result in development or production, it has historically been a slow and painful process to manually issue the various API queries necessary to pinpoint the source of the issue. With MCP, I’ve seen certain debugging processes go from taking half an hour to less than a minute, because the customer’s agent of choice can query for all the information it needs. One thing we appreciate about MCP is that it lets our customers get a better experience with our product within whichever agent they’re familiar with; some of our customers use Cursor or Claude, and they’re all able to benefit from this. There are many parallels to when we adopted the Language Server Protocol in the past.
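For readers unfamiliar with the pattern, here’s a minimal sketch of what an MCP frontend over a vendor API can look like, using the FastMCP helper from the official Python MCP SDK. The server name, tool and endpoint are hypothetical stand-ins, not Oso’s actual MCP frontend.

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper.
# The tool below is hypothetical; it stands in for "query the vendor API so an
# agent (Cursor, Claude, etc.) can pull the data it needs while debugging."
import json
import urllib.request

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("authz-debugger")  # assumed server name

@mcp.tool()
def explain_decision(request_id: str) -> str:
    """Fetch the details behind an authorization decision by request ID."""
    url = f"https://api.example.com/decisions/{request_id}"  # hypothetical endpoint
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.dumps(json.load(resp), indent=2)

if __name__ == "__main__":
    mcp.run()  # serves over stdio so a local agent can call the tool
```

An agent connected to a server like this can call the tool on its own while helping a developer debug, instead of the developer issuing each API query by hand, which is the kind of time savings described above.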
