Principal, Special Projects

Posted One Month Ago
San Francisco, CA, USA
In-Office
$150K-$250K Annually
Senior level
Artificial Intelligence
The Center for AI Safety (CAIS) is a leading research and advocacy organization focused on mitigating societal-scale risks from AI. Our past achievements include releasing the most widely used measure of AI capabilities, adopted by all major AI companies; running a large compute cluster whose supported AI safety research has been cited over 16,000 times; and publishing a global statement on AI risk signed by Geoffrey Hinton, Yoshua Bengio, and top AI CEOs.
 
The Role
We’re hiring senior operators to own high-stakes projects and initiatives. You will identify high-impact opportunities, define strategy, and drive execution end-to-end. Above all, we need people who can operate autonomously, with the judgment to navigate complex decisions and the track record to be trusted with significant responsibility.
 
The scope is broad by design. Example projects include: partnering with the team behind #TeamTrees to run a public campaign on AGI risk, supporting researchers in building benchmarks for deception and weaponization risk, finding ways to engage YouTubers and longform creators on AI safety, and standing up an AI safety hub in Washington DC. What unites these is the need for someone with the judgment and ability to take an ambiguous mandate and turn it into a concrete outcome, without needing to be managed closely.
 
Who We're Looking For
You’ve operated at a high level in fast-moving, high-stakes environments, and you have the track record to prove it. Example profiles include former startup founders and COOs: people with both exceptional ability and judgment. Your specific background may look very different.

What You'll Do

  • Own projects and initiatives end-to-end: identify opportunities, set strategy, build plans, and execute.
  • Scope new projects by defining objectives, deliverables, timelines, and budgets.
  • Coordinate across researchers, vendors, policy partners, and external collaborators to move complex work forward.
  • Stay agile when priorities shift: re-scope, re-prioritize, and adjust without losing momentum.
  • Monitor risks and surface critical issues early, always with a recommended path forward.

What We're Looking For

  • A track record of owning complex, ambiguous initiatives and delivering outsized results.
  • The ability to scope new problem spaces quickly: defining goals, success metrics, and constraints through research, interviews, and good judgment.
  • Consistently good judgment under uncertainty. You make sound calls with incomplete information, know when to move fast and when to slow down, and leadership can trust your decisions without reviewing every detail.
  • Comfort operating with high autonomy: you find the path forward even when one isn’t obvious, and you escalate the right things at the right time.
  • Strong analytical skills for evaluating feasibility, impact, and risk across very different domains.
  • Excellent written and verbal communication. You can present complex ideas clearly to both technical and non-technical audiences.
  • Genuine interest in AI safety and the willingness to develop deep domain knowledge.

The Center for AI Safety is an Equal Opportunity Employer. We consider all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, national origin, ancestry, age, disability, medical condition, marital status, military or veteran status, or any other protected status in accordance with applicable federal, state, and local laws. In alignment with the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records for employment.
 
If you require a reasonable accommodation during the application or interview process, please contact [email protected].
 
We value diversity and encourage individuals from all backgrounds to apply.

The Company
25 Employees
