AI / Emerging Tech Security Analyst
Alignerr • Remote • Posted 0 days ago
Education: Any
Type: Hourly Contract
Pay Rate: $50/task
This position is hosted on an external talent platform. Please only apply for this position if it fits your skills and interests.
What You'll Do
- Analyze real-world AI and LLM security scenarios to understand how models behave under adversarial or unexpected conditions
- Review and evaluate cases involving prompt injection, data leakage, model abuse, and system misuse
- Classify security issues by real-world impact and likelihood, and recommend appropriate mitigations
- Apply threat modeling principles to emerging AI technologies and architectures
- Help evaluate and improve AI system behavior so it remains safe, aligned, and robust against attack
- Complete task-based assignments independently on your own schedule
About the Role
What if your security instincts could directly shape how the world's most powerful AI systems defend themselves against attack? We're looking for AI Security Analysts to stress-test frontier models — probing for weaknesses, evaluating adversarial scenarios, and helping ensure that cutting-edge AI remains safe, reliable, and resistant to misuse. This is a fully remote, flexible contract role built for security professionals who are curious about how modern AI systems can be exploited, manipulated, or pushed beyond their intended boundaries. If you've ever wondered what happens when someone tries to break an LLM — this is your chance to find out, get paid for it, and make AI safer in the process.
- Organization: Alignerr
- Type: Hourly Contract
- Location: Remote
- Commitment: 10–40 hours/week
Who You Are
- Background in cybersecurity, information security, or a closely related technical field
- Solid understanding of security threat modeling — and genuine curiosity about how it applies to AI
- Analytical and precise when evaluating complex systems, edge cases, and potential failure modes
- Comfortable working through ambiguous, open-ended scenarios with a structured mindset
- Self-motivated and reliable when working independently without supervision
Nice to Have
- Hands-on experience with penetration testing, red teaming, or vulnerability research
- Familiarity with large language models, AI APIs, or prompt engineering
- Background in application security, cloud security, or ML systems
- Prior exposure to AI safety, alignment research, or responsible disclosure
- Experience writing clear, structured security reports or risk assessments
Why Join Us
- Work directly on frontier AI systems alongside leading research labs
- Fully remote and flexible — work when and where it suits you
- Freelance autonomy with the structure of meaningful, task-based work
- Contribute to AI safety work that has a real impact on how the world's most advanced models behave
- Potential for ongoing work and contract extension as new projects launch
Requirements
- Fluent English (written and verbal)
- Reliable high-speed internet connection
- Bachelor's degree or equivalent professional experience
- Demonstrated expertise in software engineering
Frequently Asked Questions
What is the assessment actually like?
Notoriously strict. Alignerr uses TestGorilla for role-specific timed tests — a blank coding environment for engineers, rigorous grammar and fact-checking for writers. There is almost no hand-holding. The critical catch: this is essentially a one-shot process. Fail or abandon the assessment, and you are typically locked out of that role permanently with no option to retake.
How quickly can I start earning after I pass?
Not immediately. Even after passing the assessment and completing identity verification (via Persona) and billing setup (via Deel), you may sit in a waiting pool for weeks or months. You only start earning when a project matching your specific skills launches and you are officially assigned. Do not plan around Alignerr income until you are actively on a project.
Is there a community?
Yes — and it is one of Alignerr's genuine strengths. Once assigned to a project, you are added to Slack channels where you can ask questions, get rubric clarifications from admins, and talk to other AI trainers. This is rare in AI training and makes a real difference when guidelines are ambiguous or change mid-project.
What does the work actually look like?
It is practical, hands-on data work. You might be recording short videos, categorizing images, rating text responses, or analyzing data. The tasks are designed to be short and distinct, typically 5 to 60 minutes each.
How flexible is the schedule?
Extremely. This is true "log in and work" flexibility. You can usually work for 20 minutes or 4 hours depending on your availability. There are rarely minimum hour requirements, making it ideal for side income.
Is there an interview?
Usually, no. Hiring for these roles is almost entirely based on passing an automated assessment or "qualification" task. If you pass the test, you get access to the work.
What is the barrier to entry?
Alignerr is known for difficult technical assessments. You must pass a timed test in your specific domain (e.g., Python, Physics, or Language) before you are eligible for any paid projects.