Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
Anthropic's Integrity & Compliance (I&C) function is building the systems that let us scale responsibly as our products reach more people, more enterprises, and more regulated industries. Our global compliance program is bespoke, reflecting our unique mission and position as one of the leading AI labs operating on the frontier.
Within Integrity & Compliance, the Privacy Programs pillar owns how we operationalize privacy across the company — from how we handle personal data in our products and research, to how we meet our obligations under the GDPR, CCPA, and the growing patchwork of global privacy law. We work closely with our Privacy Legal team on all privacy-related matters.
We're hiring a Privacy Governance Lead to own the governance backbone of that work. You'll set the strategy for how privacy governance operates at Anthropic, define the policies and controls that translate privacy principles into operating practice, and help manage relationships with the internal and external stakeholders who depend on that framework holding up under scrutiny.
This is a foundational role with significant scope. You'll be shaping a privacy governance function from a relatively early stage, with the autonomy to set the standard and the mandate to drive cross-functional change. You'll partner closely with Privacy Legal, Security, Product, Research, and the wider I&C team, and you'll contribute directly to reporting that reaches the Audit Committee and boards. You'll report to the Head of Integrity & Compliance.
Key responsibilities
Set the strategy and roadmap for Anthropic's privacy governance framework, including the policies, standards, and internal controls that map to GDPR, CCPA/CPRA, and other applicable global privacy regimes
Own the privacy documentation lifecycle end-to-end — Data Protection Impact Assessments, Records of Processing, Transfer Impact Assessments, and other accountability artifacts — including the methodology, the tooling, and the quality bar
Establish governance forums and approval workflows for privacy-significant product, research, and vendor decisions, and chair the forums where novel or high-risk questions are resolved
Own the privacy controls testing program: define what "good" looks like, set the testing cadence, and present results to the Head of Integrity & Compliance and other leadership forums
Partner with Privacy Legal to anticipate emerging privacy law and translate new obligations into concrete control changes ahead of enforcement
In partnership with Legal, co-lead privacy regulator engagement on governance matters, including responses to inquiries, audits, and complaints
Oversee the management of inputs for regulatory responses within the Privacy Programs pillar
Drive privacy training and awareness strategy for engineering, product, research, and go-to-market teams, calibrated to the actual decisions those teams make
Represent the privacy governance function in Internal Audit reporting, and in cross-functional risk and compliance forums
Build and develop the privacy governance team over time
You may be a good fit if you have
Deep working knowledge of GDPR and at least one major US state privacy regime (CCPA/CPRA, or equivalent), including how their requirements translate into operational controls at scale
Demonstrated track record building, scaling, or transforming a privacy governance program end-to-end — policies, DPIAs, ROPAs, controls libraries, governance forums, and the operating model that supports them
Strong written communication, with the ability to produce clear policies, board-ready reporting, and practical guidance that engineering and product teams will actually use
Comfort owning hard cross-functional decisions and operating across legal, technical, and operational boundaries
A privacy certification such as CIPP/E, CIPP/US, or CIPM, or equivalent demonstrated expertise
Strong candidates may also have
Senior privacy governance leadership experience at a technology company operating under multiple privacy regimes simultaneously, ideally including one with novel data processing (AI/ML, large-scale platforms, or similar)
Direct experience engaging privacy regulators, particularly EU data protection authorities or the Irish DPC, on governance matters such as inquiries, audits, or complaints
Familiarity with AI-specific privacy considerations: training data governance, model memorization, output filtering, and the intersection with emerging AI regulation
Experience standing up governance functions in a high-growth environment, including building from a blank page
Demonstrated experience presenting to Audit Committees, boards, or equivalent senior governance bodies on privacy matters
Background that bridges privacy and broader compliance disciplines (security, regulatory, ABAC, enterprise risk management)
Role-specific policy: For this role, we expect staff to be able to work from our Washington, DC, San Francisco, or New York City office at least 3 days a week, though we encourage you to apply even if you might need some flexibility for an interim period.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
What We Do
Anthropic is an AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems. Our research interests span multiple areas including natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability.