Head of Inference, Stealth Edge AI Co

Posted 9 Days Ago
New York City, NY, USA
Hybrid
Expert/Leader
Angel or VC Firm • Energy

Head of Inference

Full Time, Remote, NYC Preferred (US Based)

About Montauk Capital

Montauk Capital builds and backs companies at the forefront of the Electron Economy: the generational shift toward electrified, intelligent technologies that is reshaping industries and driving unprecedented demand for energy. Our team combines deep investing acumen with decades of operating experience, giving founders the strategic clarity and hands-on support to build enduring companies of consequence.

About Stealth Edge AI Co

Co-founded by Montauk Capital, Stealth Edge AI Co is a pre-seed venture specializing in modular, metro-edge AI capabilities. By leveraging existing infrastructure for inference deployment, Edge AI provides low-latency, SLA-guaranteed performance across diverse GPU SKUs and colocation environments. Our technology intelligently routes traffic based on demand proximity and real-world network limitations, bypassing the heavy power and infrastructure requirements of traditional hyperscalers. We are currently initiating operations with pilot nodes in NYC and executing a city-by-city expansion strategy, with plans for a broader multi-metro rollout.

About the Role

We are seeking a visionary, execution-oriented Head of Inference. As a senior, hands-on technical leader, you will be the technical authority on inference in the room: you'll define the inference architecture, make the foundational technical decisions, and serve as the internal and external expert on inference. You will own the core inference capability that drives the platform and customer experience, and you will have a strong voice in the technical foundation of the company. You'll evolve the vision into a viable proof of concept, then design and implement the distributed inference systems that follow. Alongside the CEO, you'll represent the company with top-tier partners, early customers, and investors, and you will own this domain end to end, with the support of the CEO, a team of strong advisors, and the initial founding team.

What You’ll Do

  • Create the inference strategy and define the inference architecture for Edge AI

  • Own the inference serving layer end-to-end: vLLM, TensorRT-LLM, Triton, or equivalent

  • Build a credible POC fast, one that proves the platform works to NVIDIA, cloud providers, and customers

  • Drive cost-per-token optimization

  • Optimize GPU utilization, KV-cache management, and batching for production workloads

  • Own observability and reliability SLAs

  • Build distributed inference pipelines across multi-GPU, multi-node edge deployments

  • Set performance baselines and SLAs for inference latency and throughput

  • Define quantization strategy

  • Translate complex inference requirements into infrastructure designs

  • Define the software access layer architecture and oversee integration efforts

  • Engage credibly with investors, partners, and technical stakeholders, representing the company externally
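
To ground the GPU-utilization and KV-cache bullets above, here is a minimal back-of-the-envelope sketch of KV-cache sizing, the kind of arithmetic this role lives in. The model configuration (a Llama-3-8B-like model with grouped-query attention) and the GPU memory figures are illustrative assumptions, not numbers from this posting.

```python
def kv_cache_bytes_per_token(num_layers: int, num_kv_heads: int,
                             head_dim: int, dtype_bytes: int = 2) -> int:
    """Bytes of KV cache one token occupies (factor 2 = keys + values)."""
    return 2 * num_layers * num_kv_heads * head_dim * dtype_bytes

def max_concurrent_sequences(gpu_mem_gib: float, weights_gib: float,
                             context_len: int, bytes_per_token: int) -> int:
    """How many full-context sequences fit in the memory left after weights."""
    free_bytes = (gpu_mem_gib - weights_gib) * 1024**3
    return int(free_bytes // (context_len * bytes_per_token))

# Illustrative config: 32 layers, 8 KV heads (GQA), head_dim 128, fp16,
# on an assumed 80 GiB GPU holding ~16 GiB of weights.
per_token = kv_cache_bytes_per_token(32, 8, 128)           # 131072 B = 128 KiB
batch = max_concurrent_sequences(80, 16, 8192, per_token)  # 64 full-context seqs
print(per_token, batch)
```

Numbers like these (roughly 1 GiB of cache per 8K-token sequence under these assumptions) are why paged KV-cache management and continuous batching dominate utilization at scale.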

What You’ll Bring

You have a passion for inference and a background as a hands-on technical builder who has directly implemented inference systems, ideally in production or near-production environments. You have deep knowledge of, and genuine excitement about, model serving and the practical engineering required to make an inference system work on real hardware. You can quickly translate a vision and initial concept into a viable POC, and you are comfortable making foundational technical decisions in ambiguity while building first-of-a-kind systems.

If inference is your craft and you've built systems in production, we want to talk.

  • Production inference serving (vLLM, TensorRT-LLM, Triton Inference Server, or equivalent), distributed at scale

  • Quantization, SGLang, containerization, and cost-per-token optimization

  • Observability tooling: distributed tracing, latency profiling, alerting. Able to instrument and debug complex distributed systems, with a focus on building world-class observability and debuggability tools

  • C++/CUDA/Rust

  • GPU utilization and CUDA kernel optimization: you've pushed hardware to its limits

  • Batching, KV-cache, speculative decoding expertise

  • Scale systems using Kubernetes, Ray, custom load balancing, multi-GPU/multi-node inference

  • You've built a serving system that NVIDIA and cloud providers respect

  • Model deployment and serving

  • Systems engineering

  • Technical leadership experience, either over teams or outcomes

  • Startup / 0→1 DNA: You ship fast and communicate clearly
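
As a concrete example of the cost-per-token lens called out above, here is a sketch of the basic unit economics. The GPU hourly price and sustained throughput are hypothetical placeholders, not figures from this posting.

```python
def cost_per_million_tokens(gpu_cost_per_hour_usd: float,
                            tokens_per_second: float) -> float:
    """USD per 1M generated tokens for one GPU at a sustained throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_cost_per_hour_usd / tokens_per_hour * 1_000_000

# Hypothetical: a $2.00/hr GPU sustaining 2,500 tokens/s across all requests.
print(f"${cost_per_million_tokens(2.00, 2500):.3f} per 1M tokens")
```

Doubling sustained throughput (better batching, quantization, speculative decoding) halves cost per token, which is why those levers appear throughout this posting.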

Why Join Us

  • Category-Defining Opportunity: Solve the AI inference bottleneck without the burden of power and infrastructure constraints, and own metro-edge inference across heterogeneous, disparate compute nodes

  • Massive Market Opportunity: AI spending is projected to exceed hundreds of billions of dollars annually, with 54 GW of AI inference demand expected by 2030

  • Studio Support: Leverage Montauk Capital's resources, network, and operational expertise during critical early stages

  • Competitive compensation + equity: True ownership over what you build


The Company
HQ: New York City, NY
17 Employees
Year Founded: 2023



