aitrainer.work - AI Training Jobs Platform

AI Red-Teamer — Adversarial AI Testing; English

Mercor · Remote · Posted 78 days ago

Education: Any
Type: Full-time or Part-time
Pay Rate: $80.50/task


About this Role

Location: Remote-friendly (US time zones); geography restricted to the US, UK, and Canada
Type: Full-time or Part-time

At Mercor, we believe the safest AI is the one that’s already been attacked — by us. That’s why we’re building a pod of AI Red-Teamers: human data experts who probe AI models with adversarial inputs, surface vulnerabilities, and generate the red-team data that makes AI safer for our customers.

Why This Role Exists

This role may include reviewing AI outputs that touch on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional and supported by clear guidelines and wellness resources.

What You’ll Do

  • Red-team AI models and agents: jailbreaks, prompt injections, misuse cases, exploits
  • Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks
  • Apply structure: follow taxonomies, benchmarks, and playbooks to keep testing consistent
  • Document reproducibly: produce reports, datasets, and attack cases customers can act on
  • Flex across projects: support different customers, from LLM jailbreaks to socio-technical abuse testing

Who You Are

  • You bring prior red-teaming experience (AI adversarial work, cybersecurity, socio-technical probing)
  • You’re curious and adversarial: you instinctively push systems to their breaking points
  • You’re structured: you use frameworks and benchmarks, not just random hacks
  • You’re communicative: you explain risks clearly to technical and non-technical stakeholders
  • You’re adaptable: you thrive on moving across projects and customers

Nice-to-Have Specialties

  • Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction
  • Cybersecurity: penetration testing, exploit development, reverse engineering
  • Socio-technical risk: harassment/disinformation probing, abuse analysis
  • Creative probing: psychology, acting, or writing backgrounds for unconventional adversarial thinking

What Success Looks Like

  • You uncover vulnerabilities that automated tests miss
  • You deliver reproducible artifacts that strengthen customer AI systems
  • Evaluation coverage expands: more scenarios tested, fewer surprises in production
  • Mercor customers trust the safety of their AI because you’ve already probed it like an adversary

Why Join Mercor

  • Build experience in human data-driven AI red-teaming at the frontier of safety
  • Play a direct role in making AI systems more robust, safe, and trustworthy

The pay rate for this role may vary by project, customer, and content category. Compensation will be aligned with the level of expertise required, the sensitivity of the material, and the scope of work for each engagement. We consider all qualified applicants without regard to legally protected characteristics and provide reasonable accommodations upon request.

Requirements

  • Must be eligible to work in the US, UK, or Canada
  • Fluent proficiency in English (Written & Verbal)
  • Reliable high-speed internet connection
  • Bachelor's degree or equivalent professional experience
  • Demonstrated expertise in Software Engineering

Eligible Languages

English (fluent proficiency required)

Compensation Analysis

This fully remote position pays $80.50 per task, offering US-competitive rates to candidates in the eligible regions regardless of their local market. It is a solid stepping stone for building a career in the data-labeling and AI-training ecosystem.



Frequently Asked Questions

Is this for freelancers or full-time employees?

Both. Mercor tries to match you with clients who want long-term contractors. Unlike platforms where you log in and grab small tasks, Mercor matches you with one company for a steady role (e.g., "Python Tutor for 3 months").

I'm not comfortable on camera. Can I still apply?

No. The application requires a video interview with an AI avatar. The AI asks you questions about your resume, and the video is shared with potential clients to prove your communication skills.

Does it cost money to join?

No. You should never pay to join these platforms. Mercor makes money by charging the client a fee on top of your hourly rate.

What does the work actually look like?

It is practical, hands-on data work. You might be recording short videos, categorizing images, rating text responses, or analyzing data. The tasks are designed to be short and distinct—typically 5-60 minutes per task.

How flexible is the schedule?

Extremely. This is true "log in and work" flexibility. You can usually work for 20 minutes or 4 hours depending on your availability. There are rarely minimum hour requirements, making it ideal for side income.

Is there an interview?

There is usually no live interview with a human. Beyond the AI video screen used during the application, hiring is almost entirely based on passing an automated assessment or "qualification" task. If you pass the test, you get access to the work.

How soon will I start?

Important: Mercor is a talent marketplace, not a task queue. Applying puts you in a pool of candidates. You will only start working when a specific client (like a major AI lab) selects your profile. This matching process can take weeks.