Software Engineer, Systems Generalist

Posted 7 Days Ago
San Francisco, CA
In-Office
$350K-$475K Annually
Mid level
Artificial Intelligence • Information Technology
The Role
As a Software Engineer, you'll develop and maintain systems for AI models, enhance infrastructure, and collaborate across teams on product development.

Thinking Machines Lab's mission is to empower humanity through advancing collaborative general intelligence. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals. 

We are scientists, engineers, and builders who’ve created some of the most widely used AI products, including ChatGPT and Character.ai; open-weights models like Mistral; and popular open source projects like PyTorch, OpenAI Gym, Fairseq, and Segment Anything.

About the Role

We’re looking for generalist infrastructure and systems engineers to build the systems that power our foundation models and to support our research and product teams in creating those models and shipping the products they power.

You'll join a small, high-impact team responsible for architecting and scaling the core infrastructure behind everything we do. You’ll work across the full technical stack, solving complex distributed systems problems and building robust, scalable platforms.

Infrastructure is critical to us: it's the bedrock that enables every breakthrough. You'll work directly with researchers to accelerate experiments, improve infrastructure efficiency, and enable key insights across our models, products, and data assets.

Note: This is an "evergreen" role that we keep open on an ongoing basis so candidates can express interest. We receive many applications, and there may not always be an immediate opening that aligns perfectly with your experience and skills. Still, we encourage you to apply. We continuously review applications and reach out to applicants as new opportunities open. You're welcome to reapply as you gain more experience, but please avoid applying more than once every six months. You may also see postings for individual roles tied to specific project or team needs; in those cases, you're welcome to apply to them directly in addition to this evergreen role.

What You’ll Do

We interview generally, but during project selection we’ll take into account your interests and experience alongside organizational needs. This flexible approach allows us to match talented engineers with the infrastructure teams where they'll have the greatest impact and growth potential.

Here are example areas you may contribute to depending on your area of expertise and interest:

  • Core Infrastructure: We support the teams that train, research, and ultimately serve AI models, and we build the underlying infrastructure for the clusters that reliably and safely train frontier models. Examples include building and running large Kubernetes clusters with GPU workloads, or building infrastructure to support Tinker.
  • Data Infrastructure: We build and maintain the data systems for our research and products. You'll design and optimize data pipelines using Spark and other modern data infrastructure technologies, and you'll build scalable, reliable data infrastructure while embedding governance best practices (a brief illustrative sketch follows this list).
  • Developer Productivity: We care deeply about research and engineering productivity and our ability to keep shipping quickly. We build tooling, systems, and frameworks to make sure everyone gets well-configured, optimized developer environments.
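
To make the Data Infrastructure bullet a little more concrete, here is a minimal sketch of the kind of Spark batch job that work can involve, written in PySpark. The dataset paths, schema, and column names are hypothetical placeholders, not references to any real internal system.

    # Illustrative PySpark batch job: read raw JSON event logs, roll them up
    # by day and event type, and write partitioned Parquet. All paths and
    # column names below are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    def run(input_path: str, output_path: str) -> None:
        spark = SparkSession.builder.appName("example-event-rollup").getOrCreate()

        events = spark.read.json(input_path)  # raw event logs

        daily_counts = (
            events
            .filter(F.col("event_type").isNotNull())
            .withColumn("day", F.to_date("timestamp"))
            .groupBy("day", "event_type")
            .count()
        )

        # Partitioning by day lets downstream consumers prune their reads.
        daily_counts.write.mode("overwrite").partitionBy("day").parquet(output_path)
        spark.stop()

    if __name__ == "__main__":
        run("s3://example-bucket/raw-events/", "s3://example-bucket/rollups/daily/")
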
Skills and Qualifications

Minimum qualifications:

  • Bachelor’s degree or equivalent experience in computer science, engineering, or similar.
  • Proficiency in at least one backend language (we use Python or Rust).
  • Experience operating large‑scale clusters and container orchestration systems (e.g. Kubernetes or Slurm).
  • Comfort operating across the stack and owning projects end-to-end.
  • Ability to thrive in a highly collaborative environment with many different cross-functional partners and subject matter experts.
  • A bias for action: the initiative to work across different stacks and teams wherever you spot an opportunity to make sure something ships.

Preferred qualifications — we encourage you to apply if you meet some but not all of these:

  • Strong debugging across application, OS, and network layers.
  • Proficiency in Python or Rust (or similar), containers, and modern CI.
  • Experience with Kubernetes, controllers/operators, or performance profiling.
  • Familiarity with GPU/ML workflows or large‑scale data/eval pipelines.
Logistics
  • Location: This role is based in San Francisco, California. 
  • Compensation: Depending on background, skills and experience, the expected annual salary range for this position is $350,000 - $475,000 USD.
  • Visa sponsorship: We sponsor visas. While we can't guarantee success for every candidate or role, if you're the right fit, we're committed to working through the visa process together.
  • Benefits: Thinking Machines offers generous health, dental, and vision benefits, unlimited PTO, paid parental leave, and relocation support as needed.

As set forth in Thinking Machines' Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.

Top Skills

Kubernetes
Python
Rust
Slurm
Spark
The Company
HQ: San Francisco, CA
91 Employees

What We Do

Thinking Machines Lab is an artificial intelligence research and product company. We're building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals.

While AI capabilities have advanced dramatically, key gaps remain. The scientific community's understanding of frontier AI systems lags behind rapidly advancing capabilities. Knowledge of how these systems are trained is concentrated within the top research labs, limiting both the public discourse on AI and people's abilities to use AI effectively. And, despite their potential, these systems remain difficult for people to customize to their specific needs and values. To bridge the gaps, we're building Thinking Machines Lab to make AI systems more widely understood, customizable and generally capable.

