Staff Technical Lead for Inference & ML Performance

Reposted 23 Days Ago
San Francisco, CA, USA
In-Office
Expert/Leader
Cloud • Digital Media • Information Technology
Generative media platform for developers.
The Role
Lead the optimization of inference systems, guiding a team to enhance model performance, drive innovation, and mentor performance engineers in a high-growth environment.

fal is pioneering the next generation of generative-media infrastructure. We're pushing the boundaries of model inference performance to power seamless creative experiences at unprecedented scale. We're looking for a Staff Technical Lead for Inference & ML Performance: someone who blends deep technical expertise with strategic vision, guiding a team to build and optimize state-of-the-art inference systems. This role is intense yet deeply impactful. Apply if you're ready to lead the future of inference performance at a fast-paced, high-growth company on the frontier of generative media.

Why this role matters

You’ll shape the future of fal’s inference engine and ensure our generative models achieve best-in-class performance. Your work directly impacts our ability to rapidly deliver cutting-edge creative solutions to users, from individual creators to global brands.

What you'll do
Each responsibility below pairs the day-to-day work with what success looks like:
  • Set technical direction. Guide your team (kernels, applied performance, ML compilers, distributed inference) to build high-performance inference solutions. Success: fal's inference engine consistently outperforms industry benchmarks on throughput, latency, and efficiency.
  • Hands-on IC leadership. Personally contribute critical inference performance enhancements and optimizations. Success: you regularly ship code that significantly improves model-serving performance.
  • Collaborate closely with research & applied ML teams. Influence model inference strategies and deployment techniques. Success: inference innovations move rapidly and seamlessly from research to production deployment.
  • Drive advanced performance optimizations. Implement model parallelism, kernel optimization, and compiler strategies. Success: performance bottlenecks are quickly identified and eliminated, dramatically improving inference speed and scalability.
  • Mentor and scale your team. Coach and grow a team of performance-focused engineers. Success: your team innovates independently, proactively solves complex performance challenges, and consistently levels up its skills.
You might be a fit if you
  • Are deeply experienced in ML performance optimization. You've optimized inference for large-scale generative models in production environments.
  • Understand the full ML performance stack. From PyTorch, TensorRT, TransformerEngine, and Triton down to CUTLASS kernels, you've navigated and optimized every layer.
  • Know inference inside-out. Expert-level familiarity with advanced inference techniques: quantization, kernel authoring, compilation, model parallelism (tensor, context/sequence, and expert parallelism), distributed serving, and profiling.
  • Lead from the front. You're a respected IC who enjoys getting hands-on with the toughest problems, demonstrating excellence to inspire your team.
  • Thrive in cross-functional collaboration. Comfortable interfacing closely with applied ML teams, researchers, and stakeholders.
Nice-to-haves
  • Experience building inference engines specifically for diffusion and generative media models
  • Track record of industry-leading performance improvements (papers, open-source contributions, benchmarks)
  • Leadership experience in scaling technical teams
What you'll get

One of the highest-impact roles at one of the fastest-growing companies: revenue is growing 40% month over month, run-rate revenue is 60x+ what it was a year ago, and we raised our Series A, B, and C within the last 12 months. All in service of a world-changing vision: hyperscaling human creativity.

Sound like your calling? Share your proudest optimization breakthrough, open-source contribution, or performance milestone with us. Let's set new standards for inference performance, together.

Top Skills

CUTLASS
PyTorch
TensorRT
TransformerEngine
Triton

The Company
73 Employees

What We Do

Generative Media Cloud
