Integrity Operations Specialist

London, England, GBR

(Located in either London or remote in the UK, Germany, or Spain)
About the Role: 

Our Product Integrity Operations team is responsible for ensuring we comply with responsible AI laws and regulations, enforcing our Acceptable Use Policy (AUP) and Terms of Service, and performing related red team testing of our models and safeguards. As an Integrity Operations Specialist, you will review user activity on our platform and the use of our various models to determine whether policy-violating actions are occurring. You will also perform red team testing of our models and safeguards, helping address safety issues before those models are finalized and shipped. You will work closely with internal teams (product, engineering, legal, etc.) and external organizations globally to prevent bad actors from misusing our products and services. The ideal candidate is comfortable with technology, has experience working in fraud, abuse, or similar investigative work, and is comfortable working at a fast pace on a variety of tasks. Please note that this role may involve exposure to explicit content and topics, including those of a sexual, violent, or psychologically disturbing nature.

Responsibilities:

  • Perform reviews of user activity on our platform, ensuring consistent and accurate enforcement action is taken.
  • Conduct red teaming exercises, developing and executing offensive tactics and techniques to identify and exploit vulnerabilities in new models.
  • Respond to concerns related to our products while also conducting root cause analysis to prevent recurrence of confirmed violative incidents.
  • Adhere to metrics and KPIs related to workflow, volume, response times, accuracy, and overall operational effectiveness, providing data and content-driven insights.
  • Stay current on industry best practices for trust and safety enforcement programs, and continue developing skills and understanding of new technologies.
  • Partner with internal and external stakeholders to identify best practices and new approaches, and provide thought leadership for AI safety operational practices.

Qualifications:

  • 3+ years of experience working in Trust & Safety, Fraud, Abuse Prevention, or similar investigative roles, preferably in the technology industry.
  • Solid understanding of bad actor/adversarial behavior and abuse prevention mechanisms.
  • Basic SQL programming skills (a sketch of the kind of query this involves follows this list).
  • Ability to analyze large volumes of transactional and user data to identify suspicious patterns or behaviors, and to recommend controls when exceptions or trends are identified.
  • Highly organized and able to manage competing priorities in time-sensitive situations.
  • Experience working in fast-paced environments, where you have had to adapt to rapid changes, navigate ambiguity, and take ownership of solving any problems that arise.
  • Experience handling highly sensitive topics and customer concerns.
  • Experience collaborating with cross-functional teams including legal, compliance, product, and engineering.
  • Excellent written and verbal communication skills.
  • A true passion for AI.
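As a rough illustration of the SQL and data-analysis skills listed above, the sketch below surfaces accounts whose recent request volume or rate of policy flags looks anomalous. It is a minimal example only: the table and column names (user_events, user_id, event_time, flagged) and the thresholds are hypothetical placeholders, not Stability AI's actual schema or review criteria.

```sql
-- Hypothetical sketch: flag users with unusually high hourly request volume
-- or repeated policy flags over the past week. Names and thresholds are illustrative.
SELECT
    user_id,
    DATE_TRUNC('hour', event_time)           AS activity_hour,
    COUNT(*)                                 AS requests,
    SUM(CASE WHEN flagged THEN 1 ELSE 0 END) AS flagged_requests
FROM user_events
WHERE event_time >= NOW() - INTERVAL '7 days'
GROUP BY user_id, DATE_TRUNC('hour', event_time)
HAVING COUNT(*) > 500                                   -- unusually high volume for one hour
    OR SUM(CASE WHEN flagged THEN 1 ELSE 0 END) >= 5    -- repeated policy flags
ORDER BY flagged_requests DESC, requests DESC;
```

In practice, thresholds like these would be derived from baseline metrics rather than fixed constants; the point is simply that straightforward aggregation and filtering in SQL covers much of the day-to-day pattern-spotting the role requires.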
More Information on Stability AI
Stability AI operates in the Generative AI industry and has 149 total employees, with 8 open roles.