Infrastructure Engineer

Posted 19 Days Ago
2 Locations
Remote
100K-175K Annually
Mid level
Artificial Intelligence • App development
Frontier alignment research to ensure the safe development and deployment of advanced AI systems.
About FAR.AI

FAR.AI is a non-profit AI research institute dedicated to ensuring advanced AI is safe and beneficial for everyone. Our mission is to facilitate breakthrough AI safety research, advance global understanding of AI risks and solutions, and foster a coordinated global response.

Since our founding in July 2022, we've grown quickly to 30+ staff, produced over 40 influential academic papers, and established the leading AI safety events for research and international cooperation. Our work is recognized globally, with publications at premier venues such as NeurIPS, ICML, and ICLR, and features in the Financial Times, Nature News, and MIT Technology Review.

We drive practical change through red-teaming with frontier model developers and government institutes. Additionally, we help steer and grow the AI safety field by developing research roadmaps with renowned researchers such as Yoshua Bengio; running FAR.Labs, an AI safety-focused co-working space in Berkeley housing 40 members; and supporting the community through targeted grants to technical researchers.

About FAR.Research

Our research team likes to move fast. We explore promising research directions in AI safety and scale up only those showing a high potential for impact. Unlike other AI safety labs that take a bet on a single research direction, FAR.AI aims to pursue a diverse portfolio of projects.

Our current focus areas include:

  • Investigating deception in AI (e.g. lie detectors can induce either honesty or evasion)

  • Building a science of robustness (e.g. finding vulnerabilities in superhuman Go AIs)

  • Advancing model evaluation techniques (e.g. inverse scaling, codebook features, and learned planning)

We also put our research into practice through red-teaming engagements with frontier AI developers, and collaborations with government institutes.

About the Role

We’re seeking an Infrastructure Engineer to develop and manage scalable infrastructure to support our research workloads. You will own our existing Kubernetes cluster, deployed on top of bare-metal H100 cloud instances. You will oversee and enhance the cluster to 1) support new workloads, such as multi-node LoRA training; 2) serve new users, as we double the size of our research team in the next twelve to eighteen months; and 3) add new features, such as fine-grained experiment compute usage tracking.

You will be the point-person for cluster-related work. You will work on the Foundations team alongside experienced engineers, including those who built and designed the cluster, who can provide guidance and backup. However, as our first dedicated infrastructure hire, you will need to work autonomously, design solutions to varied and complex problems, and communicate with researchers who are technically skilled but less knowledgeable about our cluster and infrastructure.

This is an opportunity to build the technical foundations of the largest independent AI safety research institute, with one of the most varied research agendas. You will be working directly with both the Foundations team and researchers across the organization to enable bleeding-edge research workloads across our research portfolio.

Responsibilities

Build and Maintain

You will deliver a scalable, easy-to-use compute cluster to support impactful research by:

  • Empowering the research team to solve their own day-to-day compute problems, such as debugging simple issues and streamlining recurring tasks (e.g. running batch experiments, launching an interactive devbox); a minimal example of the latter is sketched after this list.

  • Maintaining and developing in-cluster services, such as backups, experiment tracking, and our in-house LLM-based cluster support bot.

  • Maintaining adequate cluster stability to avoid interfering with research workloads (currently >95% uptime outside of planned maintenance windows).

  • Maintaining situational awareness of the cloud GPU market and assisting leadership with vendor comparisons to ensure we are using the most effective compute platforms.
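
The exact tooling here is internal to FAR.AI, but for a flavor of the work, here is a minimal sketch of launching an interactive GPU devbox with the official Kubernetes Python client. The namespace, image, and resource values are illustrative placeholders, not FAR.AI's actual configuration.

```python
# Minimal sketch: launch an interactive GPU "devbox" pod with the official
# Kubernetes Python client. Namespace, image, and GPU count are placeholders.
from kubernetes import client, config


def launch_devbox(user: str, gpus: int = 1, namespace: str = "research") -> None:
    config.load_kube_config()  # or load_incluster_config() when running in-cluster
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(
            name=f"devbox-{user}",
            labels={"app": "devbox", "owner": user},
        ),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="devbox",
                    image="nvcr.io/nvidia/pytorch:24.04-py3",  # placeholder image
                    command=["sleep", "infinity"],  # keep the pod alive for kubectl exec
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": str(gpus)},
                    ),
                )
            ],
            restart_policy="Never",
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)


if __name__ == "__main__":
    launch_devbox("alice", gpus=1)
```

A researcher could then attach with `kubectl exec -it devbox-alice -- bash`; in practice this would be wrapped in the cluster's own launcher tooling rather than run by hand.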

Support Security

We often collaborate with partners with stringent security requirements (e.g. governments, frontier developers) and handle sensitive information (e.g. non-public exploits, CBRN datasets). You will implement security measures, including:

  • Securing the cluster against insider threats (architecting it with adequate isolation to provide data confidentiality and integrity for sensitive workloads) and external threats (by minimizing the attack surface and ensuring security updates are promptly installed).

  • Making secure workflows the default, e.g. streamlining the deployment of internal web dashboards behind an OAuth reverse proxy (sketched after this list).

  • Championing security across the FAR.AI team, including maintaining and extending our mobile device management (MDM) system.
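
As an illustration of the OAuth reverse proxy pattern mentioned above (not FAR.AI's actual setup), the sketch below publishes an internal dashboard through ingress-nginx with oauth2-proxy authentication annotations. Hostnames, service names, and the namespace are placeholders, and it assumes ingress-nginx and oauth2-proxy are already deployed.

```python
# Minimal sketch: put an internal dashboard behind oauth2-proxy using
# ingress-nginx auth annotations. All names and hosts are placeholders.
from kubernetes import client, config

DASHBOARD_INGRESS = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {
        "name": "experiment-dashboard",
        "annotations": {
            # Requests are authenticated against oauth2-proxy before reaching the
            # dashboard; unauthenticated users are redirected to sign in.
            "nginx.ingress.kubernetes.io/auth-url": "https://oauth.example.org/oauth2/auth",
            "nginx.ingress.kubernetes.io/auth-signin": "https://oauth.example.org/oauth2/start?rd=$escaped_request_uri",
        },
    },
    "spec": {
        "ingressClassName": "nginx",
        "rules": [
            {
                "host": "dashboard.example.org",
                "http": {
                    "paths": [
                        {
                            "path": "/",
                            "pathType": "Prefix",
                            "backend": {"service": {"name": "dashboard", "port": {"number": 80}}},
                        }
                    ]
                },
            }
        ],
    },
}

if __name__ == "__main__":
    config.load_kube_config()
    client.NetworkingV1Api().create_namespaced_ingress(
        namespace="monitoring", body=DASHBOARD_INGRESS
    )
```

Baking the annotations into a shared template like this is what makes the secure path the default: deploying a new dashboard inherits authentication rather than requiring it to be bolted on afterwards.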

Bleeding-edge Workloads

You will work with the Foundations team and specific research teams to support novel ML workloads (e.g. fine-tuning a new open-weight model release) by:

  • Architecting our Kubernetes cluster to flexibly support novel workloads.

  • Assisting projects with bespoke requirements, designing and implementing effective infrastructure solutions, and sharing your infrastructure wisdom with ML researchers.

  • Improving observability over cluster resources and GPU utilization to allow us to rapidly diagnose and work around hardware issues or software bugs that may only arise on novel workloads.
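
For example, with a standard Prometheus plus NVIDIA DCGM exporter stack (an assumption; the actual monitoring setup may differ), a small script can flag GPUs that have sat idle over the past hour. The Prometheus URL, metric name, and label names below all depend on that assumed stack.

```python
# Minimal sketch: flag idle GPUs from Prometheus, assuming the NVIDIA DCGM
# exporter is scraped. URL, metric, and labels are assumptions about the stack.
import requests

PROMETHEUS_URL = "http://prometheus.monitoring.svc:9090"  # placeholder


def idle_gpus(threshold_pct: float = 5.0, window: str = "1h") -> list[dict]:
    """Return GPUs whose mean utilization over `window` is below `threshold_pct`."""
    query = f"avg_over_time(DCGM_FI_DEV_GPU_UTIL[{window}]) < {threshold_pct}"
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query", params={"query": query}, timeout=10
    )
    resp.raise_for_status()
    return [
        {
            "node": r["metric"].get("Hostname", "unknown"),
            "gpu": r["metric"].get("gpu", "?"),
            "util_pct": float(r["value"][1]),
        }
        for r in resp.json()["data"]["result"]
    ]


if __name__ == "__main__":
    for gpu in idle_gpus():
        print(f"{gpu['node']} GPU {gpu['gpu']}: {gpu['util_pct']:.1f}% avg utilization")
```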

About You

It is essential that you

  • Have Kubernetes or other system administration experience.

  • Have the curiosity and willingness to rapidly learn the needs of a new space.

  • Are self-directed and comfortable with ambiguous or rapidly evolving requirements.

  • Are willing to be on-call during waking hours for cluster issues ahead of major deadlines (for a few weeks a quarter).

  • Are interested in improving our security posture through identifying, implementing, and administering security policies.

It is preferable that you

  • Have experience supporting ML/AI workloads.

  • Have previously worked in research environments or startups.

  • Are experienced in administering compute or GPU clusters.

  • Are able to adopt a security mindset.

  • Are willing to be part of an eventual on-call rotation, if required.

Example Projects
  • Configure the cluster and user-space development environments to support InfiniBand nodes for high-performance multi-node training.

  • Improve our default devbox K8s pod template to incorporate best-practice workflows for our researchers.

  • Roll out a new mobile device management system to ensure corporate devices meet our security requirements.

  • Streamline onboarding to the cluster for new starters (possibly in different timezones) and candidates on time-limited work trials; a possible starting point is sketched after this list.

  • Be “holder of the keys”, managing permissions and access control for FAR.AI’s team members to technical systems, including streamlining/automating (e.g. via SAML, SCIM) where appropriate.

  • Analyze storage patterns and propose infrastructure improvements for backups, disaster recovery, and usability.
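
As one hypothetical starting point for the onboarding project (names and RBAC choices are placeholders, not FAR.AI's actual policy), the sketch below creates a per-user namespace and grants the new starter the built-in `edit` role in it, assuming cluster authentication already maps the username.

```python
# Minimal sketch: onboard a new starter by creating a personal namespace and
# granting the built-in "edit" role in it. Names and role choice are placeholders.
from kubernetes import client, config


def onboard(user: str) -> None:
    config.load_kube_config()
    ns = f"user-{user}"

    # Personal namespace for the new starter's workloads.
    client.CoreV1Api().create_namespace(
        body={
            "apiVersion": "v1",
            "kind": "Namespace",
            "metadata": {"name": ns, "labels": {"owner": user}},
        }
    )

    # Grant read/write access within the namespace only (no RBAC changes allowed).
    client.RbacAuthorizationV1Api().create_namespaced_role_binding(
        namespace=ns,
        body={
            "apiVersion": "rbac.authorization.k8s.io/v1",
            "kind": "RoleBinding",
            "metadata": {"name": f"{user}-edit"},
            "roleRef": {
                "apiGroup": "rbac.authorization.k8s.io",
                "kind": "ClusterRole",
                "name": "edit",
            },
            "subjects": [
                {"apiGroup": "rbac.authorization.k8s.io", "kind": "User", "name": user}
            ],
        },
    )


if __name__ == "__main__":
    onboard("new-starter")
```

A production version would likely also set resource quotas, GPU limits, and access to shared storage, and would be driven from the identity provider (e.g. via SAML/SCIM) rather than run by hand.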

Logistics

You will be a full-time employee of FAR AI, a 501(c)(3) research non-profit.

  • Location: Both remote and in-person (Berkeley, CA) are possible, though 2 hours of overlap with Berkeley working hours are required. We sponsor visas for CA in-person employees, and can also hire remotely in most countries.

  • Hours: Full-time (40 hours/week).

  • Compensation: $100,000-$175,000/year depending on experience and location. We will also pay for work-related travel and equipment expenses. We offer catered lunch and dinner at our offices in Berkeley.

  • Application process: A programming assessment, a short screening call, two 1-hour interviews, and a 1-week paid work trial.

If you have any questions about the role, please reach out at [email protected]. If you don't have questions, the best way to ensure a proper review of your skills and qualifications is by applying directly via the application form. Please don't email us to share your resume (it won't have any impact on our decision). Thank you!

Top Skills

Cloud Infrastructure
GPU Clusters
Kubernetes
Machine Learning Frameworks

The Company
HQ: Berkeley, California
41 Employees
Year Founded: 2022

What We Do

FAR.AI is a technical AI research and education non-profit, dedicated to ensuring the safe development and deployment of frontier AI systems.

FAR.Research: Explores a portfolio of promising technical AI safety research directions.

FAR.Labs: Supports the San Francisco Bay Area AI safety research community through a coworking space, events and programs.

FAR.Futures: Delivers events and initiatives bringing together global leaders in AI academia, industry and policy.
