Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
As part of Anthropic's security department, the compliance team tracks the security requirements for protecting customer information, AI systems, and corporate data established by regulators, customers, and (nascent) industry norms, which we also seek to influence. The team uses this understanding to help internal partners prioritize the security requirements they must meet, and it assures regulators and customers that those expectations are met by earning security credentials and responding to direct inquiries about Anthropic's security program from auditors, customers, regulators, and partners.
This opportunity is unique: to secure today's most novel and valuable assets, we must build a new kind of compliance program, one that safeguards artificial intelligence capabilities.
Responsibilities:
- Plan and lead engagements with independent security assessors to earn certifications and attestations important to Anthropic’s customers in the EU.
- Understand the breadth of Anthropic’s security capabilities and how those capabilities address common security requirements in EU-specific security and privacy regulations.
- Support current and prospective EU customers who have questions about, or need commitments from, Anthropic's security program.
- Drive programs to improve the ease and rigor of Anthropic's compliance with its security controls and standards.
- Contribute updates to policies capturing security and AI safety requirements specific to EU customers.
- Support maintenance of Anthropic’s system of controls through audit, record keeping, and communication.
You may be a good fit if you:
- Have been responsible for audit planning, evidence curation, document generation, and other procedures for compliance with industry-standard security assessments such as ISO 27001.
- Engage confidently with customers, partners, and regulators to respond to their inquiries in written or verbal form.
- Have built or significantly improved a common controls framework.
- Write clear and useful security documentation.
- Thrive in a fast-paced and growing organization.
- Organize time-bounded tasks in delegated work streams across a diverse organization.
- Are comfortable working in a distributed team.
- Have 7+ years of experience in a role with similar responsibilities.
Strong candidates may also:
- Be familiar with AWS / GCP security capabilities, especially identity and access management features.
- Understand the development of large language models (LLMs).
- Have experience implementing automated enforcement of security controls.
Deadline to apply: None. Applications will be reviewed on a rolling basis.
The expected salary for this position is:
Annual Salary:
€200.000 EUR
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, though, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification; not all strong candidates will meet every qualification listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts, and we value impact, advancing our long-term goals of steerable, trustworthy AI, over work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
What We Do
Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems. Our research interests span multiple areas including natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability.