As AI rapidly evolves, so do the ways software engineers harness its power.
Many engineers use what’s referred to as the 30 percent rule — meaning AI and automation handle roughly 30 percent of tasks while engineers’ hands-on guidance steers the other 70 percent.
Increasingly, technologists are building tools to help that automation run smoothly.
Take agentic AI, for example: it’s now regularly used by 23 percent of engineering teams, according to a 2025 McKinsey report. Or look to the engineers who see smaller, more tailored models as the future. In both cases, engineers lead successful product teams by applying the parts of AI that work well to help execute their visions.
Built In spoke with engineering leaders and technologists across industries to hear directly how their teams are using and building AI to bring their cutting-edge product visions to life.
Powered by its built-in data platform and AI, Klaviyo combines marketing automation, analytics and customer service into one unified solution.
How do your teams stay ahead of emerging technologies or frameworks?
We take a three-pronged approach to staying at the forefront of innovation: internal knowledge-sharing, external industry engagement and intentional research. Teams regularly share insights from sources like Hacker News, LinkedIn and academic papers in Slack and meetings. Our Boston and Silicon Valley teams stay connected to academia and industry leaders — including OpenAI, Anthropic and Meta — to exchange ideas and track trends. When exploring new domains, we organize focused reading groups, tap into recent academic research and empower interns and new grads to lead learning sessions — ensuring our teams remain informed, agile and ahead of the curve.
Can you share a recent example of an innovative project or tech adoption?
We’re innovating in product recommendations for email marketing and customer service by moving beyond static, history-based models. Our approach integrates conversational context, allowing agents to handle open-ended prompts like “a gift for my mom” in real time — bridging the gap between search and recommendation. While we use proven technologies like deep neural networks, the real innovation lies in how we apply AI to structure messy customer data. By cleaning and interpreting this data first, we turn Klaviyo’s data scale and depth into a competitive edge for precision machine learning applications — solving challenges that traditional search engines aren’t built to handle.
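Klaviyo hasn’t published its implementation, but the underlying pattern is easy to sketch: use an LLM to turn an open-ended conversational prompt into structured filters an ordinary recommender can consume. Everything below, the model choice, the schema and the catalog call, is an illustrative assumption:

```python
# Hypothetical sketch: turn a messy conversational request into clean,
# structured filters for a product recommender. Not Klaviyo's implementation.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_intent(prompt: str) -> dict:
    """Ask an LLM to map open-ended shopping language to search filters."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "Extract product-search filters from the user's request. "
                'Return JSON like {"recipient": ..., "occasion": ..., '
                '"categories": [...], "max_price": ...}.'
            )},
            {"role": "user", "content": prompt},
        ],
    )
    return json.loads(response.choices[0].message.content)

filters = extract_intent("a gift for my mom under $50")
# The filters then drive an ordinary catalog query, e.g.:
# candidates = catalog.search(categories=filters["categories"], ...)
```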
How does your culture support experimentation and learning?
We foster an engineering mindset grounded in curiosity, experimentation and continuous learning. Hackathons, quick prototypes and open knowledge-sharing help us explore ideas efficiently and collaboratively. On the tactical side, we’ve built deep observability into our stack to monitor and refine model performance, and we run our own Statsig experimentation capability — enabling rigorous A/B testing to validate impact. This combination of culture and infrastructure empowers our teams to move fast, test boldly and deliver real value.
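The statistical core of that kind of rigorous A/B validation is straightforward. Here is a generic two-proportion z-test of the sort an experimentation platform like Statsig automates; the numbers and metric are invented:

```python
# Generic A/B significance check: did variant B really lift conversion?
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

lift, p = two_proportion_z(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"lift={lift:.4f}, p={p:.3f}")  # ship only if the lift holds up
```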
iManage develops an intelligent, cloud-enabled, secure knowledge work platform.
How do your teams stay ahead of emerging technologies or frameworks?
Applied research is how we ensure our models are performant and cost-optimized. The team keeps a pulse on the latest research — publications, industry seminars and professional networks — and experiments with a strategic focus so that the work gets integrated and adopted. In the last several years, infrastructure design and AI governance have become as important as getting the algorithm right, so they have been a priority focus for the team as well. This all aligns with an iManage core value that we call “hunger for learning.”
Can you share a recent example of an innovative project or tech adoption?
Recently, the applied AI team researched fine-tuning open-weights models for handling specific legal tasks and deployed a small language model built with Llama 3.2 into our platform.
Rather than defaulting to large, general-purpose models like OpenAI’s GPT or Anthropic’s Claude, the team evaluated where small language models perform better on targeted tasks. By tuning and deploying the model on iManage-controlled infrastructure, our AI dev team has full design and tuning control, which simply isn’t possible with closed, hosted APIs.
The benefits of this approach include significantly improved accuracy, security and privacy, since customer data and context never leave the iManage ecosystem; cost efficiencies at scale, as inference costs and infrastructure become a design choice; and architectural flexibility. In other words, we can optimize, fine-tune and evolve the model as product needs change rather than being constrained by a vendor.
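For readers curious what this looks like in practice, here is a minimal parameter-efficient fine-tuning sketch on an open-weights Llama 3.2 checkpoint using Hugging Face’s transformers and peft libraries. The model size, hyperparameters and dataset are placeholders, not iManage’s actual recipe:

```python
# Sketch: LoRA fine-tuning of a small open-weights Llama 3.2 checkpoint.
# Hyperparameters and dataset are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.2-1B"  # small open-weights model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # adapt attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of weights

# From here, train with transformers.Trainer (or trl's SFTTrainer) on a
# task-specific legal dataset, then serve the tuned model on
# infrastructure you control.
```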
How does your culture support experimentation and learning?
Our applied AI team culture is built on the foundation that continuous experimentation and learning are essential to our success. We’ve institutionalized learning through four core pillars: structured knowledge sharing via our biweekly applied AI series, mentorship, active participation in the broader AI community through conference attendance, and embedding experimentation directly into our project workflows.
Grainger is a distributor of maintenance, repair and operating products, serving more than 4.5 million customers worldwide.
How do your teams stay ahead of emerging technologies or frameworks?
At Grainger, we foster a culture where learning and exploration are built into our workdays and continuous improvement is encouraged. Forums across our Grainger Technology Group function provide spaces for learning and growth. For example, the ML organization hosts monthly demo hours and academic lab sessions where team members come together to share new ideas, review the latest literature and discuss upcoming projects. Additionally, our broader communities of practice empower engineers to experiment with emerging frameworks and build targeted skill sets across the organization.
Further, we ‘compete with urgency.’ My team, the ML platform and operations team, makes intentional updates to our technology roadmap quarterly and continuously evaluates our stack to ensure we’re providing best-in-class tools to our users. It’s a balance of keeping our eyes on emerging technology while staying very focused on our day-to-day work. While we love getting our hands on new tech, we ground ourselves by talking with our users about real needs and running early proofs of concept before committing anything to the platform. By combining structured learning with hands-on exploration, we ensure our talent and our roadmaps stay in step with the next wave of AI innovation.
Can you share a recent example of an innovative project or tech adoption?
Driving innovation is core to our operating principles. During Grainger’s annual hackathon — a week-long sprint that encourages teams to move quickly and experiment — our ML platform and operations team built TicketSmith, an agentic support bot designed for internal user support channels. Leveraging cutting-edge frameworks like LangGraph, TicketSmith connects with GitHub, analyzes user logs and uses large language models to provide instant, actionable support directly in chat. Since our team receives dozens of support tickets each week, a thoughtful and performant use of AI has the potential to drive meaningful time savings for both our team and our users. We also see clear potential for teams across Grainger who manage similar support workflows to benefit from a tool like this. What began as a proof of concept is now being woven into our roadmap for broader deployment. In each of the three years our team has participated in the hackathon, we’ve built PoCs that later matured into platform tools leveraged by users, demonstrating that the hackathon enables teams not only to experiment but to deliver real, lasting impact.
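TicketSmith’s internals aren’t public, but a LangGraph agent of this shape can be sketched in a few lines. The state schema, node names and stubbed tools below are illustrative assumptions:

```python
# Minimal LangGraph-style sketch of a support agent that pulls context for
# a ticket and drafts a reply. Nodes and tools are invented for illustration.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class TicketState(TypedDict):
    ticket: str
    logs: str
    reply: str

def fetch_logs(state: TicketState) -> dict:
    # A real bot would call GitHub / log APIs here for context.
    return {"logs": f"(logs relevant to: {state['ticket']})"}

def draft_reply(state: TicketState) -> dict:
    # A real bot would make an LLM call grounded in the fetched logs.
    return {"reply": f"Suggested fix based on {state['logs']}"}

graph = StateGraph(TicketState)
graph.add_node("fetch_logs", fetch_logs)
graph.add_node("draft_reply", draft_reply)
graph.set_entry_point("fetch_logs")
graph.add_edge("fetch_logs", "draft_reply")
graph.add_edge("draft_reply", END)

bot = graph.compile()
result = bot.invoke({"ticket": "CI job failing with OOM", "logs": "", "reply": ""})
print(result["reply"])
```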
How does your culture support experimentation and learning?
Grainger is a place where technologists thrive. Our culture is the engine behind our innovation — and it’s built on the Grainger Edge Principles, including embracing curiosity and competing with urgency. These principles shape every aspect of how our teams work, learn and grow. Our leadership actively urges team members to pursue new knowledge and upskill, whether through formal education, online courses or hands-on workshops. For example, on my team we encourage dedicating 10 percent of working hours to personal development, like diving into new AI/ML frameworks and building proofs of concept. Our larger GTG function also hosts shared learning forums my team participates in, such as focused communities of practice and monthly sessions where we review emerging academic research. These spaces help us maintain a collective pulse on both industry and academic trends. The annual GTG Technology Conference and Hackathon provide more opportunities to connect, learn and grow. Our leadership supports career advancement through mentorship and access to conferences, and offers a tuition reimbursement program for continuing education.
LogicMonitor provides IT and business teams with visibility and predictability across on-prem and multi-cloud environments.
How do your teams stay ahead of emerging technologies or frameworks?
We make it a core part of how we build. The Edwin AI team runs a continuous discovery loop where engineers evaluate new models, agent frameworks and tooling every week. We prototype fast, measure against real customer workflows and ship only what outperforms our current baselines on latency, accuracy, cost and safety.
Everyone contributes to our internal tech radar to track shifts in LLM capabilities, multimodal models, evaluation methods and agent orchestration patterns. Strong foundations in observability, evaluation harnesses and safety let us explore without risking product stability.
Most importantly, we learn in the open. Engineers demo experiments weekly, share wins and failures and influence the roadmap. Customer feedback from Edwin AI pilots grounds our choices so we focus on tech that solves real problems, not hype. This mix of autonomy, fast cycles and real impact keeps us on the cutting edge of AI.
Our team is involved in various industry events and meet-ups as participants and speakers. We also conduct internal hackathons to bring new, innovative ideas to life. Some team members even teach AI-related college courses and learn from other academic and industry experts.
Can you share a recent example of an innovative project or tech adoption?
Our data science team is pushing the boundaries of AIOps by applying advanced data science to real-world infrastructure challenges. This year, we developed vector embedding models for alert data that fundamentally improve how incidents are detected and understood in real time. These models employ state-of-the-art scientific methods to automatically group related alerts, strengthening the performance and accuracy of our correlation engine.
What makes this especially impactful is that the models learn directly from each customer’s environment. This generates highly tailored insights previously unattainable with static rules or generic AI, delivering earlier signal discovery, richer context and faster paths from insight to action.
This novel approach establishes the foundation for future predictive capabilities. By recognizing emerging alert patterns, the platform can anticipate what is likely to happen next and empower teams to resolve issues before they escalate into major incidents. This is a substantial leap forward in operational intelligence and incident prevention and is exemplary of our commitment to push the boundaries of AIOps and deliver transformative customer value.
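A toy version of the core idea, embedding alert text and letting related alerts cluster together, might look like this. The model, threshold and alert strings are assumptions rather than LogicMonitor’s customer-tuned production setup:

```python
# Illustrative sketch: group related alerts by embedding their text and
# clustering on cosine distance. Model and threshold are placeholders.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

alerts = [
    "CPU saturation on db-prod-3",
    "High CPU usage db-prod-3",
    "Disk latency spike on storage-7",
    "db-prod-3 load average critical",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode(alerts, normalize_embeddings=True)

# With a distance threshold, the number of clusters emerges from the data.
labels = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.6,
    metric="cosine", linkage="average",
).fit_predict(vectors)

for label, alert in sorted(zip(labels, alerts)):
    print(label, alert)  # alerts sharing a label correlate to one incident
```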
How does your culture support experimentation and learning?
Our culture is built on rapid experimentation and continuous learning from real-world behavior. We run small, controlled tests often, from early root-cause reasoning prototypes to low-risk remediation attempts in sandboxed environments. Each run generates practical lessons, like how to tune context windows and better combine signals from logs, metrics and topology.
We validate ideas through structured offline evaluation, iterating on prompts, comparing before-and-after results and incorporating feedback from customers and domain experts. We invest heavily in tooling that accelerates prototyping and makes results visible to everyone. Tools like promptfoo, Lovable and Langfuse give shared clarity into what is working and why, encouraging broad participation in experimentation.
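Stripped of the tooling, an offline prompt evaluation reduces to a loop like the one below: score each prompt variant against labeled cases before anything ships. The cases and scoring stub are invented; tools like promptfoo and Langfuse layer datasets, tracing and dashboards on top of the same idea:

```python
# Toy offline evaluation: compare prompt variants on labeled cases.
# The model call is stubbed out; replace run_model() with a real LLM call.
CASES = [
    {"input": "disk full on node-12", "expected_root_cause": "disk"},
    {"input": "OOM killed payment svc", "expected_root_cause": "memory"},
]

def run_model(prompt_template: str, case: dict) -> str:
    # Placeholder for an LLM call using the rendered prompt.
    return "disk" if "disk" in case["input"] else "memory"

def score(prompt_template: str) -> float:
    hits = sum(
        run_model(prompt_template, c) == c["expected_root_cause"] for c in CASES
    )
    return hits / len(CASES)

for variant in ("v1: terse root-cause prompt", "v2: chain-of-thought prompt"):
    print(variant, score(variant))  # pick the variant that wins offline
```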
Cross-functional reviews are routine. Product, engineering and customer teams analyze real incident transcripts to improve correlation speed, reasoning clarity and explanation quality.
We also protect time for deeper technical exploration. The focus is on safe, lightweight experimentation that keeps the culture curious, fast-moving and focused on steadily increasing Edwin’s intelligence and reliability.
Quantum Metric’s digital intelligence platform helps companies improve their user experience with continuous product design.
How do your teams stay ahead of emerging technologies or frameworks?
We treat AI adoption as a learning exercise, not a finish line. Early on, we pushed for broad usage across the company, not because we had it all mapped out but because real insights only show up once something becomes part of day-to-day work.
What we look for isn’t “what’s the newest tool,” it’s “what changes when this is used at scale?” Where does it reduce friction? Where does trust break down? Where do bottlenecks move? We extended our engineering metrics to track AI usage and impact alongside traditional delivery measures so we’re not relying on intuition alone. We want to know what’s actually working, not just what feels productive.
That internal operating model has directly shaped our product direction. What we’ve learned internally now informs Felix AI agentic, the evolution of AI within our platform, where the focus is on redefining experiences for our customers, not bolting on standalone features.
Can you share a recent example of an innovative project or tech adoption?
As AI adoption increased, we measured usage across different groups and saw cycle time improve overall. But our heaviest AI users showed a subtle increase in cycle time. We started calling it the “power user paradox.”
The lesson wasn’t that AI made people slower. It exposed a new constraint. With AI, we saw more pull requests and often larger ones but review capacity didn’t scale at the same rate. The bottleneck shifted from writing code to reviewing and coordinating changes.
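The cohort analysis behind a finding like this is simple to sketch. Assuming a PR-level dataset with invented column names and values, comparing cycle time and review wait across AI-usage tiers might look like:

```python
# Sketch of the cohort comparison behind the "power user paradox".
# Columns and numbers are made up for illustration.
import pandas as pd

prs = pd.DataFrame({
    "author_ai_tier": ["low", "low", "heavy", "heavy", "heavy"],
    "cycle_time_hours": [30.0, 26.0, 22.0, 41.0, 38.0],
    "review_wait_hours": [6.0, 5.0, 14.0, 20.0, 18.0],
})

summary = prs.groupby("author_ai_tier")[
    ["cycle_time_hours", "review_wait_hours"]
].median()
print(summary)
# If review wait dominates cycle time for heavy AI users, the bottleneck
# has moved from writing code to reviewing and coordinating it.
```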
That discovery reframed how we are thinking about AI in engineering. Instead of focusing only on generating code faster, we’ve been experimenting with AI in the later stages of delivery: assisted code review, suggested fixes, build failure triage, flaky test detection and keeping documentation current. Those are the areas that determine whether speed turns into shipped value.
How does your culture support experimentation and learning?
Experimentation and practicing what we preach have always been foundational to how we work at Quantum Metric, and AI just gave us a new surface to apply that mindset. Leadership framed it as an opportunity, not a mandate, and we intentionally avoided standardizing too early. That gave people room to try tools, share what worked and be honest about what didn’t.
We also make learning easy to spread. We have dedicated channels where people share practical use cases, plus regular sessions where teams demo what they’ve tried. The goal is simple: turn one person’s experiment into something other teams can build on, whether they’re in engineering, product or operations.
What ties it together is using AI in our own workflows, not just evaluating it in a vacuum. That tight feedback loop — what builds trust, what reduces noise, what genuinely helps people move faster — has become a major input into what we build for customers.
Lessen is a tech-enabled, end-to-end property service provider trying to change how commercial and residential real estate services are managed at scale through a data-driven platform and a vetted network of vendor partners.
How do your teams stay ahead of emerging technologies or frameworks?
We’ve designed both our platform and our culture to support continuous evolution.
From a technology standpoint, our modular, service-oriented architecture allows engineers to experiment with new tools, frameworks and AI models in isolation, validate them quickly and scale successful approaches into production without disrupting core systems. Just as important, we intentionally create space for learning. Engineers are encouraged to build depth in their domain while exploring adjacent areas, take on new challenges and grow into emerging problem spaces like AI, data and platform engineering. That flexibility helps our teams stay current while building skills that remain relevant as the technology landscape changes.
Can you share a recent example of an innovative project or tech adoption?
One of our most impactful innovations has been Aiden, our AI-powered platform that applies generative AI and agent-based systems across the entire facilities maintenance lifecycle.
Rather than building a single assistant, our engineering teams designed Aiden as a multi-agent system that can reason over structured and unstructured data, coordinate across workflows and take action when appropriate. It combines retrieval-augmented generation using our proprietary maintenance dataset with workflow-aware agents that support intake, triage, proposal review, invoicing and ongoing asset intelligence. Aiden has already driven significant business impact: a more than 10 percent improvement in first-time fix rates via better work intake, three times faster invoice processing via the Aiden invoice assistant and a nearly 40 percent decrease in rejected proposals through the Aiden proposal assistant for vendors.
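Aiden’s design isn’t public, but the routing layer of a multi-agent system like this can be illustrated with a toy dispatcher. The classifier and agents below are stand-ins for LLM-backed components:

```python
# Toy sketch of multi-agent routing across a maintenance workflow.
# Agent names and the keyword classifier are illustrative inventions.
def classify(request: str) -> str:
    # In production this would be an LLM or intent model; keywords stand in.
    if "invoice" in request:
        return "invoicing"
    if "proposal" in request:
        return "proposal_review"
    return "intake"

def intake_agent(request: str) -> str:
    return f"Created work order from: {request!r}"

def proposal_agent(request: str) -> str:
    return f"Reviewed proposal against pricing benchmarks: {request!r}"

def invoicing_agent(request: str) -> str:
    return f"Validated invoice line items: {request!r}"

AGENTS = {
    "intake": intake_agent,
    "proposal_review": proposal_agent,
    "invoicing": invoicing_agent,
}

request = "HVAC unit rattling in suite 400"
print(AGENTS[classify(request)](request))
```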
How does your culture support experimentation and learning?
Experimentation is deeply embedded in how we operate. We intentionally create safe spaces for engineers to try, fail and iterate, whether that’s through hackathons, proof-of-concept work or controlled pilots inside production workflows. Our AI hackathon is a standout example: engineers work side by side with product, design and operations counterparts to prototype new agents, workflows and user experiences in a short, focused window and present them to real customers for judging. Several ideas that started as hackathon experiments are now live in production, delivering measurable efficiency gains and reducing manual work across our platform. It’s a great example of how we move from experimentation to real-world impact quickly.
Enova International is a financial technology company providing online financial services through its machine learning-powered lending platform.
How do your teams stay ahead of emerging technologies or frameworks?
We stay ahead of emerging technologies through a thoughtful mix of research, experimentation and smart use of industry-leading tools. A dedicated group helps guide our focus, keeping close watch on new developments by reading the latest research, reviewing industry papers and attending conferences. This gives us a clear view of where the field is heading, such as the move toward more agentic AI, and helps us understand how these shifts could shape our future work. Alongside this, we maintain an active experimentation pipeline where we explore new capabilities and push our systems beyond basic search. For example, we test advanced methods like retrieval-augmented generation and different ways of structuring data so our AI can better understand the relationships within our code, leading to more accurate and helpful assistance. To move quickly from ideas to real solutions, we build on top of strong vendor platforms such as Google Gemini and AWS Bedrock, giving us the ability to prototype and launch new tools faster without reinventing the wheel.
Can you share a recent example of an innovative project or tech adoption?
A great example of a recent project is our AI-powered code reviewer, ArchBot. We built it ourselves using Go on AWS Bedrock and it’s integrated directly into our GitHub environment to handle many of the routine tasks in the code review process. The benefits are clear: it keeps our code clean and consistent, speeds up development by giving senior developers more time back and can be customized for different team workflows. ArchBot is a prime example of how we use AI for “human assistance” — creating practical tools that help us today while also building our understanding of AI for the future. It’s a real, in-production tool that our engineering teams rely on every day.
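ArchBot itself is written in Go; the same pattern, sending a diff to a Bedrock-hosted model and returning review notes, looks like this as a Python sketch. The model ID and prompt are illustrative, not Enova’s configuration:

```python
# Sketch of a Bedrock-backed code reviewer: send a diff, get review notes.
# Model ID, region and prompt are placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def review_diff(diff: str) -> str:
    response = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
        messages=[{
            "role": "user",
            "content": [{"text": (
                "Review this diff for style, consistency and obvious bugs. "
                "Reply with short, actionable comments.\n\n" + diff
            )}],
        }],
    )
    return response["output"]["message"]["content"][0]["text"]

# A GitHub webhook handler would call review_diff() on each pull request
# and post the result back as a review comment.
```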
How does your culture support experimentation and learning?
Our culture is built on encouraging new ideas and learning and we support this in several key ways. We have a clear path for taking ideas from small experiments to full products: We identify a specific problem, build a small test — like a tool for financial document processing — and if it works, we roll it out for everyone to use. At the same time, we strike a balance between quick wins and bigger bets. Tools like ArchBot deliver real value, build trust and give us the freedom to explore more cutting-edge research. Above all, our approach is people-focused. We develop AI tools to augment our teams, helping them work smarter on tasks like diagnosing complex issues and modeling financial scenarios, always with a focus on real-world value.
SmartBear provides a portfolio of trusted tools that give software development teams around the world visibility into end-to-end quality through test management and automation, API development lifecycle and application stability.
How do your teams stay ahead of emerging technologies or frameworks?
At SmartBear, staying ahead of new technologies starts with the people we hire and the culture we build. We look for employees who are curious, motivated and eager to make an impact. Those qualities are reinforced in a quarterly awards program where we celebrate people who demonstrate openness and curiosity. This sends a clear message that learning, asking questions and trying new things are part of the job, not something extra.
Open source is another big way for us to stay close to what’s coming next. SmartBear has a long history of contributing to projects like OpenAPI, and we currently support open-source projects including Swagger, Pact, SoapUI and Stoplight. The BugSnag team contributes to projects such as KSCrash, which helps us improve the tools we rely on ourselves. Working in the open keeps our teams connected to real developer needs, emerging standards and how technologies are actually used in practice.
Can you share a recent example of an innovative project or tech adoption?
A recent example is our early work with the Model Context Protocol (MCP). As this AI-focused standard gained traction, we added MCP generation to Swagger and released the SmartBear MCP server across several products. This enabled teams to experiment with AI-driven workflows early, without waiting for the ecosystem to mature.
We also hosted an MCP hackathon, uniting teams from our global offices. We worked with GitHub Copilot, Claude, Gemini and ChatGPT, combined with the SmartBear MCP server, to quickly solve real customer problems and turn disconnected workflows into autonomous, intelligent systems.
One standout project was an MCP server tool that automatically detects and resolves discrepancies between live API implementations, documentation and contracts. The team built an MCP-powered workflow capable of automating the entire reconciliation process with a single prompt. Another standout was a QA intelligence assistant designed to unify data from across platforms and tools into a single, actionable view of product quality. The assistant created a single, trusted view of risk and quality — showing what’s being built, tracking what’s tested and revealing what fails in production.
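For context, a minimal MCP server exposing one tool of that shape takes only a few lines with the official Python SDK. The drift-checking tool below is a made-up stub, not part of the actual SmartBear MCP server:

```python
# Minimal MCP server using the official Python SDK's FastMCP helper.
# The tool is a hypothetical stub for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("api-quality-demo")

@mcp.tool()
def diff_spec_vs_live(spec_url: str, live_url: str) -> str:
    """Compare a published OpenAPI spec against a live endpoint (stubbed)."""
    # A real tool would fetch both, diff paths/schemas and report drift.
    return f"No drift detected between {spec_url} and {live_url} (stub)."

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so AI clients can call the tool
```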
How does your culture support experimentation and learning?
Experimentation and learning are built into how we work at SmartBear. We value curiosity and initiative and it’s visible through hiring, leadership support and recognition. We remove barriers by giving teams flexibility, budgets for experimentation and regular hackathons, making it easier to try new tools and approaches. In 2025, this was strengthened with a greater focus on AI, encouraging the company to get hands-on with new tools and workflows.
Experiments at SmartBear can start small. One employee started a side project that grew into an internal AI assistant. Available in Slack, BrainBear mines our internal wiki and helps employees find information about policies and projects. After proving its value, it gained the support of CEO Dan Faulkner, was shared companywide and saw rapid adoption. It’s a great example of how ideas are encouraged to grow when they solve real problems.
Most importantly, learning is expected to lead to real outcomes. When experiments show value, we invest in and scale them. Whether it’s MCP support across our products or internal tools like BrainBear, the message is consistent: try things, learn quickly and if it works, we take it further.
Luxury Presence is a growth platform for high-performing real estate agents, teams and brokerages.
How do your teams stay ahead of emerging technologies or frameworks?
We stay ahead by encouraging and in many cases requiring experimentation across our biggest initiatives. For every major project, we go through a technical planning process we call ARM documents, which includes a dedicated phase for exploration. It’s a collaborative process and we actively encourage team members to prototype and build rather than stay theoretical. That creates regular opportunities to evaluate new approaches and challenge existing ones. Just because something worked in the past doesn’t mean it’s the right way forward.
We also made a deliberate investment in Auggie Academy after recognizing that AI is fundamentally changing how work gets done. We trained our entire engineering organization on how to work with AI early, which allowed us to rethink norms around team huddles, planning and autonomy and give teams more freedom to move quickly.
On top of that, we run Hack Week twice a year, shutting down engineering, product and design so teams can experiment with any technology as long as it improves the experience for our customers. Many Hack Week projects make it into production, making it a strong mechanism for learning and adoption.
Can you share a recent example of an innovative project or tech adoption?
One of the most innovative initiatives we’ve launched is our suite of AI teammates, including fully AI-driven teammates for SEO and blogging. These products represent a major shift in how we think about marketing technology and how we support customers at scale.
Internally, we’ve also adopted entirely new ways of working through Auggie Academy. We now operate with AI-first workflows where AI is embedded directly into planning, development and execution. AI acts as a development partner and a planning partner and in many cases authors significant portions of code. This is a completely new way of working for us.
We’re actively overhauling our R&D process to be AI-native so we can capture the gains AI enables from faster iteration to quicker production releases. This isn’t a surface-level adoption of tools. It’s a fundamental shift in how we build and ship software.
How does your culture support experimentation and learning?
Experimentation and learning are core parts of our culture. Programs like Auggie Academy and Hack Week give teams the time and space to learn new skills, prototype ideas and rethink how work gets done. We invest intentionally in training and exploration and we design our processes to support that.
Psychological safety is just as important. If someone tries something and it doesn’t work, that’s not held against them. We regularly ask what bets we’re making and whether they paid off and when they don’t, learning itself is considered a valuable outcome.
We also emphasize knowledge sharing. Senior engineers meet regularly to align on standards and translate those discussions into practical workflows including how we codify them into our AI-native way of working. Throughout the year, we run trainings and lunch-and-learns where engineers share what they’re building and learning. There’s a shared mindset across the organization to continuously challenge how we work and stay open to better ways of building.
Sprout Social is a global leader in social media management and analytics software.
How do your teams stay ahead of emerging technologies or frameworks?
As an AI manager, I feel especially grateful to be at Sprout during this technological shift. Sprout has institutionalized intellectual whitespace and a culture of curiosity, which make it easier to foresee and ride the AI waves instead of letting them crash over us. We operationalize this through three primary channels.
We create structured play space and incentivize teams to get familiar with new technologies. Our biannual hackathons, for example, judge teams both on the customer problem they’re solving and on the technical sophistication of the novel frameworks they use.
We scout, then we scale. We empower small, focused teams to scout emerging technologies and architectures. Once vetted, the teams create enablement tools necessary to unlock adoption across the entire R&D organization.
We encourage and enable T-shaped development. Sprout has a dedicated budget and resources that allow teams to build expertise in new, adjacent technologies.
These channels, combined with a truly fantastic team of curious, talented software engineers, designers, product managers and applied AI and machine learning scientists, result in products that the team is excited to build and our customers are excited to use.
Can you share a recent example of an innovative project or tech adoption?
A recent standout is Trellis, Sprout’s proprietary AI agent. This project was born at the nexus of Sprout’s curious culture, leadership support and a team of high-agency folks. The team practiced a change-the-game mindset that allowed them to successfully navigate technical gaps in emerging protocols, social media-specific reasoning, state management and state-of-the-art AI changing almost daily. This project has been a blast because the team has been able to engineer the agent’s personality and skill set all while experimenting with, embracing and contributing back to the agentic landscape.
Innovation at this scale demands an equal obsession with security. We balanced our focus on user experience with rigorous internal partnerships to deliver a secure agent that manages exposure to untrusted content and access to private data, two of the three capabilities in the lethal trifecta of AI agents. We’ve built an agent that moves at the speed of social without compromising the integrity of our customers’ data. It’s been rewarding to see customers leverage Trellis as a strategist that transforms overwhelming social signals into clear direction.
How does your culture support experimentation and learning?
Sprout has an unwavering focus on developer experience and on the joy of building. One component we excel at is creating psychologically safe environments where folks are given space to experiment and candidly articulate potential challenges. In the AI space, we are constantly navigating unknown unknowns and we have intentionally built a culture where teams feel empowered to articulate technical risks and potential failures early.
This environment was the foundation for Trellis. We call out what’s broken and fix it fast and it is truly something special when your team is empowered to say “this model isn’t performing as expected,” “the latency on this architecture won’t scale” or a general “I don’t know and this is what I need to figure it out.” This transparency allows us to move at the speed of a startup while maintaining the rigor of an enterprise leader.
Furthermore, Sprout fosters a culture of learning through a robust growth program. Everyone in R&D has a dedicated annual budget for specialized courses, certifications and technical subscriptions. We incentivize teams to stay ahead of the curve then give them the space, time and resources to ensure they do.
SciPlay is a developer and publisher of digital casino games.
How do your teams stay ahead of emerging technologies or frameworks?
We don’t chase every shiny object but when something real shows up, we move fast. SciPlay engineers get space to tinker, prototype and share what works and we build the minimum viable process to turn sparks into fire.
Can you share a recent example of an innovative project or tech adoption?
We’ve been iteratively building an internal developer platform that gives our partners fast and secure access to the tools and tech they need while minimizing red tape. It’s opinionated where we think it should be, flexible where it matters and it’s already shifting how partner teams deliver value.
How does your culture support experimentation and learning?
We reward taking chances, not just outcomes. From demos where half-built ideas get real feedback to curious nerds spending time just learning about potential solutions, SciPlay’s culture views curiosity as a leadership trait. Our best problem solvers don’t just ship features, they explore patterns worth repeating.
GitLab is an open core software company that develops a comprehensive DevSecOps platform.
How do your teams stay ahead of emerging technologies or frameworks?
We stay ahead through a combination of hands-on practice and continuous learning. Working on the GitLab AI Duo Agents platform means we’re building the tools we use daily: Agentic Chat, code completions and automated MR creation from issues. This creates a tight feedback loop between development and real-world usage. This dogfooding approach helps us quickly identify what works and which emerging patterns matter.
Can you share a recent example of an innovative project or tech adoption?
We recently implemented a front-end island architecture using custom elements, which allows us to build isolated, self-contained components that ship through our existing continuous integration and continuous delivery pipelines. This solved a key challenge for GitLab AI Duo features: We needed to iterate quickly without being constrained by our monolithic front end but couldn’t afford separate deployment infrastructure. By leveraging web standards, we achieved architectural isolation with seamless integration. The islands deploy independently but operate as part of the unified application, enabling rapid experimentation while maintaining production reliability.
How does your culture support experimentation and learning?
Experimentation is actively encouraged through both resources and culture. We have access to various AI tools and platforms to try them out hands-on. There are regular reminders in all-hands meetings and written communications from leadership. Knowledge-sharing is built into our workflow. Engineers create learning videos showing how they use certain tools and what they’ve achieved, making it easy to learn from each other’s experiments. Whether it’s a new framework, prompting technique or unexpected use case, there’s a strong culture of “learn something, share something” that turns individual experimentation into collective knowledge.
Carbon Robotics is an agricultural tech company that builds autonomous robots, like the LaserWeeder.
How do your teams stay ahead of emerging technologies or frameworks?
It’s a hard question to answer. We mostly rely on engineers to keep an eye out and bring fresh and interesting ideas to the table. Tech evolves too rapidly to really stay ahead of everything. But for particular tasks or applications, we typically start with a more naive approach and in the course of researching, designing and iterating on a feature, we try to make good forward-thinking decisions. Our mantra is to build the wrong thing and learn lessons to build the right thing.
Can you share a recent example of an innovative project or tech adoption?
Sure, I think we have a great example of this. I’m not sure how public it is, so I don’t want to unintentionally expose too much. But we have been using WebRTC for live video and communication in some of our applications. One of the ideas that came up was to use WebRTC connections as a bus for gRPC API calls. This effectively allows remote use of applications with minimal code changes, and it scales very well because it’s peer to peer.
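The idea can be sketched with aiortc: treat the data channel as a raw byte bus and tunnel a local gRPC server’s TCP stream across it. Signaling is omitted, the port is a placeholder and this is an illustration of the concept, not Carbon Robotics’ code:

```python
# Concept sketch: tunnel a local gRPC server's TCP bytes over a WebRTC
# data channel so remote peers can call its APIs peer to peer.
import asyncio
from aiortc import RTCPeerConnection

GRPC_HOST, GRPC_PORT = "127.0.0.1", 50051  # existing local gRPC server

async def bridge(channel):
    """Relay raw bytes both ways between the channel and the gRPC socket."""
    reader, writer = await asyncio.open_connection(GRPC_HOST, GRPC_PORT)

    @channel.on("message")
    def on_message(data: bytes):
        writer.write(data)      # remote peer -> local gRPC server

    async def pump():
        while True:
            data = await reader.read(4096)
            if not data:
                break
            channel.send(data)  # local gRPC server -> remote peer
    asyncio.ensure_future(pump())

pc = RTCPeerConnection()

@pc.on("datachannel")
def on_datachannel(channel):
    asyncio.ensure_future(bridge(channel))

# After the usual offer/answer signaling, each gRPC byte stream rides the
# peer-to-peer data channel instead of a directly routable TCP connection.
```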
How does your culture support experimentation and learning?
Experimentation is the lifeblood of Carbon Robotics. We try to build proofs of concept fast in order to fail faster. We love to joke about all the mistakes we’ve made in the past. Some have been costly but we understand that a long design cycle hinders innovation. It’s always better to build something than to have a huge plan with no result. This doesn’t mean we build lots of garbage and see what sticks but it does mean that we make lots of mistakes. The end result is a lot of learning and a lot of laughing at the past. If we can’t compare ourselves to the past and see how much we have improved, then we’ve already lost our momentum of growth.
Braze is a customer engagement platform that allows any marketer to collect and take action on any amount of data from any source.
How do your teams stay ahead of emerging technologies or frameworks?
The teams are encouraged to experiment with new technologies during quarterly hackathons. This allows them to explore and learn new approaches, and at the same time contribute to Braze’s products and goals. An example of an innovative project from the last hackathon is an observability pipeline from our Kubernetes-based job runner through Datadog to a custom visualization tool in Streamlit that shows overprovisioned ML pipeline jobs and allows us to more accurately right-size our infrastructure.
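The last hop of a pipeline like that is easy to picture: query Datadog’s metrics API and chart requested-versus-used resources in Streamlit. The metric names, tags and keys below are placeholders, not Braze’s actual dashboard:

```python
# Sketch: pull utilization metrics from Datadog and surface right-sizing
# candidates in Streamlit. Queries and env vars are illustrative.
import os, time, requests, streamlit as st

def dd_query(query: str) -> dict:
    now = int(time.time())
    return requests.get(
        "https://api.datadoghq.com/api/v1/query",
        headers={
            "DD-API-KEY": os.environ["DD_API_KEY"],
            "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        },
        params={"from": now - 3600, "to": now, "query": query},
    ).json()

st.title("ML job right-sizing")
used = dd_query("avg:kubernetes.cpu.usage.total{job:ml-*} by {job}")
requested = dd_query("avg:kubernetes.cpu.requests{job:ml-*} by {job}")
# Jobs where requested far exceeds used are candidates for right-sizing.
st.json({"used": used.get("series", []), "requested": requested.get("series", [])})
```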
Our product and engineering team is actively experimenting with applying agentic coding techniques in their daily work. This is a fast-moving field, with technologies such as the Model Context Protocol, retrieval-augmented generation and agent skills developing quickly. We have a biweekly AI lunch-and-learn where we share experiences and best practices: e.g., multi-agent workflows, different RAG variants and experiences with applying the latest models (via Cursor and the command line) to our codebase.
Finally, it’s helpful to simply experiment with agentic tools hands-on, both in personal “toy projects” and applied to our core products. We have a very active Slack channel (#vibe-coding) where engineers learn from each other and share experiences and resources.
Can you share a recent example of an innovative project or tech adoption?
BrazeAI Decisioning Studio™ is a platform that leverages reinforcement learning to automate and optimize customer interactions. In simple terms, instead of a marketer manually guessing which message version is best or running slow, manual A/B tests, an RL agent continuously learns from user engagement (clicks, conversions) to dynamically serve the optimal content at the optimal time via the optimal channel. However, configuring RL environments is notoriously difficult; it requires precise definitions of state, actions and reward functions.
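To make the learning loop concrete, here is a toy Beta-Bernoulli Thompson-sampling bandit over message variants, the simplest member of the family of methods such a system runs at scale. The variants, priors and reward signal are invented; this is not Braze’s production algorithm:

```python
# Toy Thompson-sampling bandit: learn which message variant converts best
# by updating beliefs after every send. All values are illustrative.
import random

variants = ["subject_a", "subject_b", "subject_c"]
wins = {v: 1 for v in variants}    # Beta prior: successes
losses = {v: 1 for v in variants}  # Beta prior: failures

def choose() -> str:
    """Sample a plausible conversion rate per variant; pick the best draw."""
    return max(variants, key=lambda v: random.betavariate(wins[v], losses[v]))

def record(variant: str, converted: bool) -> None:
    if converted:
        wins[variant] += 1
    else:
        losses[variant] += 1

# Each send: pick a variant, observe engagement, update beliefs.
v = choose()
record(v, converted=True)
```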
To help our forward-deployed data scientists configure Decisioning Studio for new customers, we have developed the BrazeAI Decisioning Assistant, an internal agentic application designed to act as a co-pilot for setting up and maintaining these complex ML configurations. Unlike a standard RAG chatbot that only retrieves documentation, this assistant creates a bridge between LLMs and our runtime environment. It can actively verify proposed configurations against known standards, execute SQL queries to analyze model performance and autonomously diagnose issues by interpreting real-time data logs. The goal is to shift our forward-deployed service posture from manual configuration and troubleshooting to automated, intelligent verification.
How does your culture support experimentation and learning?
Primarily, by tackling tough problems and building exciting AI products. Productizing reinforcement learning at scale for AI decisioning is a cutting-edge challenge that requires significant research and experimentation. That makes our product interesting for engineers to work on, in addition to being valuable for our customers. Outside of the core product work, we encourage learning via regular hackathons, provide a generous learning stipend for materials, courses and conferences and try to match engineers and applied scientists to areas of work that are particularly interesting to them. Right now, we’re tackling scalability by optimizing contextual bandit algorithms in Spark + Scala, building a next-generation marketer UI for Decisioning Studio inside the Braze platform and investigating how to apply causal inference techniques to make our AI models learn faster from limited data. Simply executing on our ambitious roadmap requires a lot of learning and growth, which my team and I enjoy.
