AI Red Team Tester
Alignerr • Remote • Posted 25 days ago
Education: Any
Type: Hourly Contract
Pay Rate: $45/task
What You'll Do
- Design and execute red-teaming exercises to uncover security weaknesses in AI systems
- Craft adversarial prompts and edge-case scenarios to probe model guardrails and safety filters
- Evaluate AI outputs for unsafe behavior, bias, and policy violations
- Document vulnerabilities, exploits, and unexpected behaviors in clear, structured reports
- Collaborate with engineering teams to recommend practical mitigations and improvements
- Stay current on emerging AI security threats, jailbreak techniques, and adversarial research
- Help define and refine security evaluation rubrics and testing protocols
About the Role
If you've ever looked at a system and immediately started thinking about how to break it — this role was made for you. We're looking for security-minded professionals to stress-test AI systems, expose their weaknesses, and help build the next generation of safe, reliable AI. This is a fully remote, flexible contract role where your findings directly shape how AI behaves in the real world.
- Organization: Alignerr
- Type: Hourly Contract
- Location: Remote
- Commitment: 10–40 hours/week
Who You Are
- Strong understanding of cybersecurity concepts — threat modeling, penetration testing, or ethical hacking
- Hands-on experience with AI/ML systems, large language models, or prompt engineering
- Creative and analytical — you enjoy finding the edge cases others miss
- Excellent written communication and documentation skills
- Comfortable working independently in an asynchronous, task-based environment
- Familiarity with open-source AI platforms or LLM ecosystems is a plus
- Background in infosec, AI safety research, or adversarial ML is a bonus — but not required
Why Join Us
- Work at the cutting edge of AI security alongside top research labs
- See your work directly improve the safety of AI products used by millions of people
- Full autonomy and a flexible schedule — work when and how you work best
- Build deep, marketable expertise in one of the fastest-growing fields in tech
- Ongoing contract potential with opportunities to expand scope and responsibility
- Be part of a global community of experts shaping the future of responsible AI
Requirements
- Fluent English (written and verbal)
- Reliable high-speed internet connection
- Bachelor's degree or equivalent professional experience
- Demonstrated expertise in Software Engineering
Frequently Asked Questions
What is the assessment actually like?
Notoriously strict. Alignerr uses TestGorilla for role-specific timed tests — a blank coding environment for engineers, rigorous grammar and fact-checking for writers. There is almost no hand-holding. The critical catch: this is essentially a one-shot process. Fail or abandon the assessment, and you are typically locked out of that role permanently with no option to retake.
How quickly can I start earning after I pass?
Not immediately. Even after passing the assessment and completing identity verification (via Persona) and billing setup (via Deel), you may sit in a waiting pool for weeks or months. You only start earning when a project matching your specific skills launches and you are officially assigned. Do not plan around Alignerr income until you are actively on a project.
Is there a community?
Yes — and it is one of Alignerr's genuine strengths. Once assigned to a project, you are added to Slack channels where you can ask questions, get rubric clarifications from admins, and talk to other AI trainers. This is rare in AI training and makes a real difference when guidelines are ambiguous or change mid-project.
What does the work actually look like?
It is practical, hands-on data work. You might be recording short videos, categorizing images, rating text responses, or analyzing data. The tasks are designed to be short and distinct — typically 5–60 minutes each.
How flexible is the schedule?
Extremely. This is true "log in and work" flexibility. You can usually work for 20 minutes or 4 hours depending on your availability. There are rarely minimum hour requirements, making it ideal for side income.
Is there an interview?
Usually, no. Hiring for these roles is almost entirely based on passing an automated assessment or "qualification" task. If you pass the test, you get access to the work.
What is the barrier to entry?
Alignerr is known for difficult technical assessments. You must pass a timed test in your specific domain (e.g., Python, Physics, or Language) before you are eligible for any paid projects.