Manager – AI Infrastructure Operations

Reposted 16 Days Ago
Sunnyvale, CA
In-Office
Senior level
Artificial Intelligence
The Role
The Manager of AI Infrastructure Operations leads the reliability and performance of AI compute infrastructure, manages escalations, and drives reliability initiatives.

Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.  

Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.

About The Role

As a senior leader on our team, you will be responsible for the overall health, performance, and reliability of our infrastructure, driving initiatives that maximize compute capacity and directly support our critical AI objectives. This role is a blend of strategic leadership and hands-on technical ownership. You will leverage your deep Site Reliability Engineering (SRE) expertise to build robust systems, lead high-stakes technical escalations, and champion customer success.

We're seeking a proactive problem-solver with extensive experience in large-scale distributed systems, a track record of leading high-performing teams, and a passion for tackling the most challenging technical problems.

Responsibilities
  • Lead and Manage Infrastructure: Oversee the operation and reliability of our advanced AI compute infrastructure, defining strategy and setting a high bar for operational excellence.
  • Drive Technical Ownership: Act as the primary owner for critical infrastructure systems, ensuring uptime, performance, and capacity are consistently optimized.
  • Handle High-Stakes Escalations: Serve as the final point of contact for complex customer and engineering escalations, providing expert-level, hands-on support and driving issues to a rapid and complete resolution.
  • Champion Reliability and Automation: Leverage your SRE experience to develop and implement robust monitoring, alerting, and automation solutions, reducing manual toil and preventing future issues.
  • Collaborate and Strategize: Partner with cross-functional teams, including engineering and product, to align on long-term infrastructure strategy and support future AI initiatives.
  • Innovate and Improve: Continuously evaluate and improve existing processes, tools, and technologies to enhance system reliability and operational efficiency.
Skills & Qualifications
  • Technical Leadership: 15+ years of experience in managing and operating complex compute infrastructure, with a minimum of 5 years in a senior or leadership role.
  • SRE and Operations Expertise: A strong background as a Site Reliability Engineer or in a similar role, with a proven track record of managing large-scale, mission-critical systems.
  • Deep Systems Knowledge: Expert-level proficiency in Linux-based systems, Python scripting, and command-line tools for system administration and automation.
  • Troubleshooting Acumen: Exceptional ability to lead and resolve complex technical challenges under pressure, especially during customer or engineering escalations.
  • On-Call Leadership: Proven experience managing an on-call rotation and responding to 24/7 technical incidents.
  • Communication: Excellent communication and leadership skills, with the ability to effectively mentor junior team members and communicate complex technical concepts to a diverse audience.

Preferred Skills

  • Prior experience operating large-scale GPU/accelerator clusters.
  • Knowledge of networking protocols (Ethernet, RoCE, TCP/IP).
  • Familiarity with ML frameworks and AI/ML workflows.
  • Exposure to cloud platforms (AWS, GCP, Azure).
Location

Sunnyvale, CA

Why Join Cerebras

People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:

  1. Build a breakthrough AI platform beyond the constraints of the GPU.
  2. Publish and open source their cutting-edge AI research.
  3. Work on one of the fastest AI supercomputers in the world.
  4. Enjoy job stability with startup vitality.
  5. Experience a simple, non-corporate work culture that respects individual beliefs.

Read our blog: Five Reasons to Join Cerebras in 2025.

Apply today and join us at the forefront of groundbreaking advancements in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.


Top Skills

AWS
Azure
GCP
Linux
ML frameworks
Networking protocols (Ethernet, RoCE, TCP/IP)
Python

The Company
HQ: Sunnyvale, CA
402 Employees
Year Founded: 2016

What We Do

Cerebras Systems is a team of pioneering computer architects, computer scientists, deep learning researchers, functional business experts and engineers of all types. We have come together to build a new class of computer to accelerate artificial intelligence work by three orders of magnitude beyond the current state of the art.

The CS-2 is the fastest AI computer in existence. It contains a collection of industry firsts, including the Cerebras Wafer Scale Engine (WSE-2). The WSE-2 is the largest chip ever built. It contains 2.6 trillion transistors and covers more than 46,225 square millimeters of silicon. The largest graphics processor on the market has 54 billion transistors and covers 815 square millimeters. In artificial intelligence work, large chips process information more quickly, producing answers in less time. As a result, neural networks that once took months to train can now train in minutes on the Cerebras CS-2 powered by the WSE-2.

Join us: https://cerebras.net/careers/
