About 10a Labs: 10a Labs is the safety and threat-intelligence layer trusted by frontier AI labs, AI unicorns, Fortune 10 companies, and leading global technology platforms. Our adversarial red teaming, model evaluations, and intelligence collection enable engineering, safety, and security teams to stay ahead of evolving threats and deploy AI systems safely.
- Develop and run adversarial test suites—both manual and scripted—for LLMs and image/video models.
- Craft multilingual prompts, jailbreaks, and escalation chains targeting policy edge cases.
- Analyze outputs, triage failures, and write concise vulnerability reports.
- Contribute to internal tooling (e.g., prompt libraries, scenario generators, dashboards).
- Has 2–4 years of experience in red-teaming, security research, trust & safety, or related fields.
- Is comfortable scripting basic tests (Python, Bash, or similar) and working in Jupyter or prompt-engineering tools.
- Communicates clearly in English and at least one additional language (ideally a major non-English language relevant to global threat landscapes).
- Thinks like an adversary, documents findings crisply, and iterates quickly.
- Bachelor’s degree—or equivalent experience—in CS, data science, linguistics, international studies, or security.
- Basic proficiency with Python and command-line tools.
- Demonstrated interest in AI safety, adversarial ML, or abuse detection.
- Strong writing skills for short vulnerability reports and long-form analyses.
- Ability to rapidly context switch across domains, modalities, and abuse areas.
- Excited to work in a fast-paced and ambiguous space.
- Full professional proficiency in Arabic, Chinese, Farsi, Portuguese, Russian, or Spanish, as well as English.
- Prior work in content moderation, disinformation analysis, or cyber-threat intelligence.
- Experience with prompt-automation frameworks (e.g., Promptfoo, LangChain, Garak).
- Familiarity with vector search or LLM fine-tuning workflows.
- Formal training or certification in red-teaming or penetration testing.
- Salary range: $70K–$90K depending on experience.
- Opportunity for spot bonuses and annual performance-based bonus.
- Fully remote (U.S.-based) with flexible hours.
- Comprehensive health, dental, and vision.
- Generous PTO and paid holidays.
- 401(k) plan.
- Professional-development stipend for courses, conferences, or language study.
- We reward excellence with growth—team members who excel have clear paths for promotion and skill development.
Skills Required
- Bachelor's degree in CS, data science, linguistics, international studies, or security
- Basic proficiency with Python and command-line tools
- Demonstrated interest in AI safety, adversarial ML, or abuse detection
- Strong writing skills for reports and analyses
- Ability to rapidly context switch across domains, modalities, and abuse areas
10a Labs Compensation & Benefits Highlights
The following summarizes recurring compensation and benefits themes drawn from responses generated by popular LLMs to common candidate questions about 10a Labs; it has not been reviewed or approved by 10a Labs.
- Healthcare Strength — Benefits include comprehensive medical, dental, and vision coverage for full-time roles, listed across multiple postings. Coverage is presented as a core part of the package rather than a role-specific perk.
- Leave & Time Off Breadth — Positions frequently advertise generous PTO and paid holidays, with some roles noting unlimited PTO and flexible hours. This indicates substantial time-off provisions alongside remote work arrangements.
- Strong & Reliable Incentives — Compensation commonly includes performance-based annual bonuses and occasional spot bonuses. These incentives are presented as standard components for many roles.
10a Labs Insights
What We Do
10a Labs is an applied research and technology company specializing in AI security. We deliver intelligence collection, investigative research, and analysis for AI unicorns, Fortune 10 companies, and U.S. tech leaders.