Deepgram is the leading platform underpinning the emerging trillion-dollar Voice AI economy, providing real-time APIs for speech-to-text (STT) and text-to-speech (TTS), and for building production-grade voice agents at scale. More than 200,000 developers and 1,300+ organizations build voice offerings that are ‘Powered by Deepgram’, including Twilio, Cloudflare, Sierra, Decagon, Vapi, Daily, Cresta, Granola, and Jack in the Box. Deepgram’s voice-native foundation models are accessed through cloud APIs or as self-hosted and on-premises software, and deliver unmatched accuracy, low latency, and cost efficiency. Backed by a recent Series C led by leading global investors and strategic partners, Deepgram has processed over 50,000 years of audio and transcribed more than 1 trillion words. There is no organization in the world that understands voice better than Deepgram.
Company Operating Rhythm
At Deepgram, we expect an AI-first mindset—AI use and comfort aren’t optional, they’re core to how we operate, innovate, and measure performance.
Every team member at Deepgram is expected to actively use and experiment with advanced AI tools, and even to build their own, as part of everyday work. We measure how effectively AI is applied to deliver results, and consistent, creative use of the latest AI capabilities is key to success here. Candidates should be comfortable adopting new models and modes quickly, integrating AI into their workflows, and continuously pushing the boundaries of what these technologies can do.
Additionally, we move at the pace of AI. Change is rapid, and you can expect your day-to-day work to evolve just as quickly. This may not be the right role if you’re not excited to experiment, adapt, think on your feet, and learn constantly, or if you’re seeking something highly prescriptive with a traditional 9-to-5.
The Opportunity
Deepgram's infrastructure spans bare metal GPU clusters, multi-cloud deployments, and global edge presence -- all serving real-time voice AI at massive scale while simultaneously powering large-scale model training. As a Systems Architect, you will own the end-to-end infrastructure architecture that makes this possible. You will design the compute, storage, and networking systems that serve both production inference and research training workloads, build multi-cloud strategies that balance performance with cost, and create burstable infrastructure that scales with Deepgram's rapidly growing demands. This is a senior technical leadership role where your architectural decisions shape the foundation that everything at Deepgram runs on.
Define and drive the end-to-end infrastructure architecture for Deepgram's AI/ML workloads across production inference and research training
Design multi-cloud and hybrid infrastructure strategies that balance performance, reliability, cost, and vendor flexibility
Architect compute orchestration systems that efficiently schedule and manage GPU and CPU workloads across heterogeneous infrastructure
Design storage architectures that handle the massive datasets required for speech and audio ML -- from high-throughput training data pipelines to low-latency model serving
Lead capacity planning across all infrastructure dimensions, modeling growth and ensuring Deepgram can scale ahead of demand
Drive cost optimization and FinOps practices, identifying opportunities to reduce infrastructure spend without compromising performance or reliability
Design burstable, elastic training infrastructure that can scale up for large training runs and scale down to minimize idle cost
Architect research compute infrastructure that gives ML teams the resources they need while maintaining operational efficiency
Establish architectural standards, design review processes, and technical documentation practices for infrastructure decisions
Collaborate with engineering leadership to align infrastructure strategy with product roadmap and business objectives
Evaluate emerging hardware, cloud services, and infrastructure technologies for potential adoption
Think in systems -- you naturally see the connections between compute, storage, and network, and how they interact under load
Are motivated by designing infrastructure that operates at the intersection of real-time production systems and large-scale ML training
Enjoy making architectural trade-offs where cost, performance, reliability, and velocity are all in tension
Want to work across the full infrastructure stack -- from bare metal and GPUs to cloud services and container orchestration
Are excited about building cost-effective, burstable infrastructure that enables world-class AI research
Like operating at a strategic level while staying technically deep enough to validate designs and debug complex issues
7+ years of experience in infrastructure engineering, systems architecture, or a senior technical role focused on large-scale infrastructure
Proven experience designing multi-cloud architectures spanning AWS and at least one other major cloud provider or on-premises environment
Deep expertise in storage system design -- block, object, and file storage, including performance tuning for large-scale data workloads
Strong experience with compute orchestration using Kubernetes, and an understanding of how to schedule diverse workloads efficiently
Hands-on experience with GPU infrastructure -- procurement considerations, cluster design, driver and runtime management
Track record of capacity planning and infrastructure scaling for high-growth environments
Ability to communicate complex architectural decisions clearly to both technical and non-technical stakeholders
Strong understanding of networking fundamentals as they relate to infrastructure architecture (see our Network Engineer role for the deep specialist)
Direct experience architecting infrastructure for ML training workloads -- distributed training, large dataset management, experiment infrastructure
Background in cost optimization and FinOps practices for large-scale cloud and bare metal infrastructure
Experience operating and managing bare metal infrastructure in colocation facilities
Expertise in network architecture design, including high-bandwidth GPU interconnects and global traffic routing
Experience with infrastructure modeling and simulation for capacity planning
Familiarity with Slurm, Ray, or other HPC/ML job scheduling systems
Understanding of power, cooling, and physical infrastructure considerations for GPU-dense deployments
Medical, dental, vision benefits
Annual wellness stipend
Mental health support
Life, short-term disability (STD), and long-term disability (LTD) insurance plans
Unlimited PTO
Generous paid parental leave
Flexible schedule
12 Paid US company holidays
Quarterly personal productivity stipend
One-time stipend for home office upgrades
401(k) plan with company match
Tax Savings Programs
Learning / Education stipend
Participation in talks and conferences
Employee Resource Groups
AI enablement workshops / sessions
*For candidates outside of the US, we use an Employer of Record model in many countries, which means benefits are administered locally and governed by country-specific regulations. Because of this, benefits will differ by region — in some cases international employees receive benefits US employees do not, and vice versa. As we scale, we will continue to evaluate where we can create more alignment, but a 1:1 global benefits structure is not always legally or operationally possible.
Backed by prominent investors including Y Combinator, Madrona, Tiger Global, Wing VC and NVIDIA, Deepgram has raised over $215M in total funding. If you're looking to work on cutting-edge technology and make a significant impact in the AI industry, we'd love to hear from you!
Deepgram is an equal opportunity employer. We want all voices and perspectives represented in our workforce. We are a curious bunch focused on collaboration and doing the right thing. We put our customers first, grow together and move quickly. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate.
We are happy to provide accommodations for applicants who need them.
What We Do
Deepgram is the leading voice AI platform for developers building speech-to-text (STT), text-to-speech (TTS), and full speech-to-speech (STS) offerings. 200,000+ developers build with Deepgram’s voice-native foundation models – accessed through APIs or as self-managed software – due to our unmatched accuracy, latency, and pricing. Customers include software companies building voice products, co-sell partners working with large enterprises, and enterprises solving internal voice AI use cases. The company ended 2024 cash-flow positive with 400+ enterprise customers, 3.3x annual usage growth across the past 4 years, over 50,000 years of audio processed, and over 1 trillion words transcribed. There is no organization in the world that understands voice better than Deepgram.
Why Work With Us
Our culture, like our product, is constantly learning and evolving, but the heart of our team is enduring. We are a self-motivated, positive, passionate, and competitive group of people. At Deepgram, we put an emphasis on being ourselves, being curious, growing together, and being human. We are a unique bunch who celebrate our differences.
Remote Workspace
Employees work remotely.