Hedra is a pioneering generative media company backed by top investors at Index, A16Z, and Abstract Ventures. We're building Hedra Studio, a multimodal creation platform capable of control, emotion, and creative intelligence.
At the core of Hedra Studio is our Character-3 foundation model, the first omnimodal model in production. Character-3 jointly reasons across image, text, and audio for more intelligent video generation — it’s the next evolution of AI-driven content creation.
Summary
As a Senior/Staff Infrastructure Engineer, you will own the reliability, availability, and operability of our core Python web services running at scale on AWS.
You will be responsible for designing, maintaining, and improving the production infrastructure that keeps Hedra online: Kubernetes for orchestration, AWS as the core cloud platform, and Postgres on RDS as a key managed data service. Your work will focus on building a highly available runtime environment for our services, ensuring we can ship quickly while staying resilient through incidents, traffic spikes, and growth.
You will design robust deployment patterns on Kubernetes, optimize our use of AWS (networking, load balancing, scaling, resilience), and put in place the observability and alerting we’re currently missing — from system-level metrics to product health signals. You’ll also partner with product engineers to make Python a great place to build: smoothing out CI/CD, runtime configuration, and production debugging.
This is a hands-on infrastructure role, not a product feature role. You will work closely with engineering leadership and product teams, but your primary mandate is to keep our services healthy, observable, and ready to scale. We're looking for a full-time hire in our San Francisco office.
Experience
We’re looking for candidates who have:
4+ years in infrastructure / SRE / platform / backend operations roles at technology companies
3+ years running a critical Python web application in production on AWS
Strong experience operating services on Kubernetes, including:
Designing deployment strategies (rolling, blue/green, canary)
Autoscaling, resource limits/requests, capacity planning
Debugging pod/node issues and cluster-level problems
Solid experience with AWS for high availability, such as:
Multi-AZ architectures, load balancers, security groups, IAM basics
Using managed services (RDS, S3, queues, caches, etc.) effectively
Understanding maintenance windows, failure modes, and regional/AZ considerations
Experience improving observability for production systems:
Implementing or refining system metrics (CPU, memory, disk, network, pod/node health)
Adding application and product health metrics (latency, error rates, key business KPIs)
Standing up useful dashboards, traces, structured logging, and actionable alerts
Comfort working with Python services at scale:
CI/CD pipelines, dependency management, runtime configuration
Performance tuning, concurrency models, and production debugging
Practical experience with Postgres on RDS:
Running it reliably in production (backups, restores, monitoring, failover)
Coordinating version upgrades and schema changes with minimal disruption
A developer experience mindset:
Making it easier and safer for engineers to deploy and operate services
Improving tooling, scripts, and workflows around our infrastructure and observability
A pragmatic approach to reliability and incident response:
Participating in or leading on-call rotations and incidents
Running postmortems, designing runbooks, and putting guardrails around risky operations
Strong communication skills and the ability to collaborate with product engineers and other stakeholders on tradeoffs between speed, reliability, and complexity
Benefits
Competitive compensation and equity.
401(k).
Healthcare (Silver PPO Medical, Vision, Dental).
Lunch and snacks at the office.
We encourage you to apply even if you don't fully meet all the listed requirements; we value potential and diverse perspectives, and your unique skills could be a great asset to our team.
What We Do
Hedra is an AI-native platform for multimodal creation. The platform is built around its own cutting-edge proprietary video model, Character-3, the first multimodal model in production. Alongside Character-3, the platform brings other leading foundation models into one ecosystem spanning generative images, video, and audio. Prosumer and enterprise users leverage Hedra to generate content ranging from viral social media posts to branded marketing content.
Why Work With Us
We're an early-stage team that moves very fast and is building at the leading edge of AI/Media. Every employee takes on a lot of ownership and has an opportunity to learn and grow rapidly.
Hedra Offices
On-site workspace
Hedra's main office is in San Francisco, and its secondary hub is in New York.