Applied Machine Learning Engineer

Posted 18 Days Ago
San Francisco, CA
In-Office
$220K-$320K Annually
Mid level
Artificial Intelligence • Software
Affordable, Scalable LLM Inference API – Seamlessly Compatible with OpenAI.
The Role
Build and enhance ML systems for custom model training, oversee projects from data intake to model delivery, ensure model quality at scale, create evaluation frameworks, and collaborate with engineers.

Help us build the systems that train specialized AI models for the fastest-growing companies in the world. If you love taking cutting-edge ML techniques and turning them into products that ship, we'd love to meet you.

About Inference.net

Inference.net trains and hosts specialized language models for companies who want frontier-quality AI at a fraction of the cost. The models we train match GPT-5 accuracy but are smaller, faster, and up to 90% cheaper. Our platform handles everything end-to-end: distillation, training, evaluation, and planet-scale hosting.

We are a well-funded ten-person team of engineers who work in-person in downtown San Francisco on difficult, high-impact engineering problems. Everyone on the team has been writing code for over 10 years, and has founded and run their own software companies. We are high-agency, adaptable, and collaborative. We value creativity alongside technical prowess and humility. We work hard, and deeply enjoy the work that we do. Most of us are in the office 4 days a week in SF; hybrid works for Bay Area candidates.

About the Role

You will be responsible for building and improving the core ML systems that power our custom model training platform, while also applying these systems directly for customers. Your role sits at the intersection of applied research and production engineering. You'll lead projects from data intake to trained model, building the infrastructure and tooling along the way.

Your north star is model quality at scale, measured by how well our custom models match frontier performance, how efficiently we can train and serve them, and how smoothly we can deliver results to our customers. You'll own the full training lifecycle: processing data, creating dashboards for visibility, training models using our frameworks, running evaluations, and shipping results. This role reports directly to the founding team. You'll have the autonomy, a large compute budget and GPU reservation, and the technical support to push the boundaries of what's possible in custom model training.

Key Responsibilities

  • Lead projects from data intake through the full training pipeline, including processing, cleaning, and preparing datasets for model training

  • Build and maintain data processing pipelines for aggregating, transforming, and validating training data

  • Create dashboards and visualization tools to display training metrics, data quality, and model performance

  • Train models using our internal frameworks and iterate based on evaluation results

  • Develop robust benchmarks and evaluation frameworks that ensure custom models match or exceed frontier performance

  • Build systems to automate portions of the training workflow, reducing manual intervention and improving consistency

  • Take research features and ship them into production settings

  • Apply the latest techniques in SFT, RL, and model optimization to improve training quality and efficiency (see the sketch after this list)

  • Collaborate with infrastructure engineers to scale training across our GPU fleet

  • Deeply understand customer use cases to inform training strategies and surface edge cases
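
To make the training work above concrete, here is a minimal, hedged sketch of a supervised fine-tuning (SFT) run using off-the-shelf tooling (Hugging Face Transformers and Datasets). This is not Inference.net's internal framework; the base model, dataset file, and hyperparameters below are placeholder assumptions for illustration only.

    # Minimal SFT sketch with Hugging Face Transformers and Datasets.
    # Assumptions: placeholder base model, a local JSONL file with a "text" field,
    # and illustrative hyperparameters; not Inference.net's internal setup.
    from datasets import load_dataset
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    model_name = "meta-llama/Llama-3.2-1B"  # placeholder base model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Each record is assumed to hold a fully formatted prompt+response string in "text".
    dataset = load_dataset("json", data_files="train.jsonl", split="train")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=2048)

    tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="checkpoints",
            per_device_train_batch_size=4,
            gradient_accumulation_steps=8,
            learning_rate=2e-5,
            num_train_epochs=1,
            bf16=True,
            logging_steps=10,
        ),
        train_dataset=tokenized,
        # Causal-LM collation: labels are the input IDs, shifted inside the model.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

In practice, a run like this would be followed by evaluation against the project's benchmarks and another iteration on data and hyperparameters, which is the loop described in the responsibilities above.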

Requirements

  • 2+ years of experience training AI models using PyTorch

  • Hands-on experience with post-training LLMs using SFT or RL

  • Strong understanding of transformer architectures and how they're trained

  • Experience with LLM-specific training frameworks (e.g., Hugging Face Transformers, DeepSpeed, Axolotl, or similar)

  • Experience training on NVIDIA GPUs

  • Strong data processing skills; comfortable building ETL pipelines and working with large datasets

  • Track record of creating benchmarks and evaluations

  • Ability to take research techniques and apply them to production systems

Nice-to-Have

  • Experience with model distillation or knowledge transfer

  • Experience building dashboards and data visualization tools

  • Familiarity with vision encoders and multimodal models

  • Experience with distributed training at scale

  • Contributions to open-source ML projects

You don't need to tick every box. Curiosity and the ability to learn quickly matter more.

Compensation

We offer competitive compensation, equity in a high-growth startup, and comprehensive benefits. The base salary range for this role is $220,000 - $320,000, plus equity and benefits, depending on experience.

Equal Opportunity

Inference.net is an equal opportunity employer. We welcome applicants from all backgrounds and don't discriminate based on race, color, religion, gender, sexual orientation, national origin, genetics, disability, age, or veteran status.

If you're excited about building the future of custom AI infrastructure, we'd love to hear from you. Please send your resume and GitHub to [email protected] and/or apply here on Ashby.

Top Skills

Axolotl
DeepSpeed
Hugging Face Transformers
NVIDIA GPUs
PyTorch
The Company
HQ: San Francisco, California
8 Employees
Year Founded: 2023

What We Do

At inference.net, we empower developers and startups to seamlessly integrate advanced AI capabilities into their applications. Our API is fully compatible with OpenAI's, ensuring a smooth transition without code changes. With our pay-as-you-go pricing model, you can enjoy flexibility without long-term contracts or pushy sales tactics. Our platform is designed for speed and ease of use, offering access to cutting-edge open-source models, with new models added regularly.
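
As a rough illustration, an OpenAI-compatible endpoint can typically be used by pointing the official OpenAI Python SDK at a different base URL and API key. The base URL and model ID below are illustrative assumptions, not documented inference.net values; check the product docs for the real ones.

    # Minimal sketch of calling an OpenAI-compatible API with the OpenAI Python SDK (v1+).
    # The base_url and model below are placeholder assumptions, not documented values.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.inference.net/v1",  # hypothetical endpoint
        api_key="YOUR_API_KEY",
    )

    response = client.chat.completions.create(
        model="llama-3.1-8b-instruct",  # placeholder open-source model ID
        messages=[{"role": "user", "content": "Classify the sentiment of: 'Great service!'"}],
    )
    print(response.choices[0].message.content)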

Use Cases:
• OCR with Vision Language Models: Extract text from images and PDFs effortlessly.
• Intelligent Chatbots: Build chatbots powered by your own data to enhance customer interactions.
• Enhanced Functionalities: Implement features like address validation, sentiment analysis, and more. Essentially, anything achievable with an LLM can be done with inference.net at significantly lower costs.

Join the growing community of developers who trust inference.net to deliver reliable, scalable, and cost-effective AI solutions.
