Privacy & Data Protection Law Expert
Turing • USA • Posted 3 days ago
Education: Any
Pay Rate: $50/task
This position is hosted on an external talent platform. Please only apply for this position if it fits your skills and interests.
About this Role
About Turing:
Based in San Francisco, California, Turing is the world's leading research accelerator for frontier AI labs and a trusted partner for global enterprises deploying advanced AI systems. Turing supports customers in two ways: first, by accelerating frontier research with high-quality data, advanced training pipelines, and top AI researchers who specialize in coding, reasoning, STEM, multilinguality, multimodality, and agents; and second, by applying that expertise to help enterprises transform AI from proof of concept into proprietary intelligence, with systems that perform reliably, deliver measurable impact, and drive lasting results on the P&L.
Role Overview:
As an AI Safety & Policy Analyst, you will be on the front lines of developing safe and responsible AI. You will be responsible for challenging our models' safeguards, identifying new vulnerabilities, and creating the detailed evaluation rubrics used to train and test our next generation of large language models.
This role requires a unique blend of creativity, analytical rigor, and a deep understanding of policy. You will not just follow instructions; you will actively design the tests, using an adversarial mindset to discover how models fail. You will then use your analytical skills to articulate why they failed, creating the precise rubrics and rationales that teach our models to be safer and more helpful.
*NOTE: This role may involve reviewing or encountering disturbing, sensitive, or otherwise potentially distressing content as part of AI safety evaluations. Candidates selected for this position may be required to sign an acknowledgment form confirming their understanding and consent.
What does day-to-day look like:
In this role, you will be part of a dynamic team focused on LLM safety and alignment. Your day-to-day work will involve:
- Designing and executing creative, multi-turn conversational prompts that test model compliance with complex safety policies (e.g., Discriminatory Content, Abetting, Copyrighted Content, Harmful Advice).
- Identifying, analyzing, and documenting model failures, including successful jailbreaks and subtle policy violations.
- Developing detailed, objective, and independent rubrics for new safety prompts, assigning priority scores (e.g., Crucial, Important, Less Important) to define and weight desired model behavior.
- Rigorously evaluating and stack-ranking multiple model responses to a single prompt, using the rubrics you created to ensure clear discrimination between good, bad, and nuanced failures.
- Writing clear, defensible "Single Rationales" for your rankings that explain the "why" behind your evaluation, focusing on both safety and quality.
- Collaborating with researchers and policy-makers to understand new risks and refine the safety taxonomy.
Education & Experience
- BS/BA degree or equivalent experience in a relevant field (e.g., Policy, Law, Ethics, Linguistics, Journalism, Computer Science, or a related analytical field).
- Experience in content moderation, policy analysis, AI safety evaluation, or a related role is strongly preferred.
Requirements:
- English Proficiency: Ability to read and write in English with a high degree of comprehension.
- Exceptional Analytical Thinking: A proven ability to research and evaluate nuanced, complex, and ambiguous information against a defined set of policy criteria.
- Creative & Adversarial Mindset: Experience in "red teaming," prompt engineering, or designing creative challenge prompts intended to test and bypass AI safety filters.
- Strong Policy & Taxonomy Acumen: A strong understanding of Trust & Safety principles, particularly in relation to LLMs (e.g., categories like misinformation, abetting, bias/stereotypes, jailbreaks, and dual-use). We welcome candidates with expertise in at least one of the following domains:
  - Cyberharm
  - Violence and terrorism
  - Bias and stereotypes
  - Mental health and self-harm
  - Child safety
  - Nudity and sexually explicit content
  - Misinformation
  - Fraud
  - Sycophancy
  - Regulated goods
  - Privacy and identity rights
  - Copyright
  - Legal, medical, and financial information
- Meticulous Attention to Detail: The ability to design and author precise, self-contained, and independent evaluation rubrics that can clearly discriminate between models.
- Excellent Written Communication: Superior ability to articulate complex rationales for model rankings clearly and concisely, providing a strong training signal for engineers.
- RLHF Familiarity: Familiarity with RLHF (Reinforcement Learning from Human Feedback) workflows and data annotation is a significant plus.
- Feedback: Ability to provide constructive feedback and detailed annotations.
- Communication: Excellent communication and collaboration skills.
- Independence: Self-motivated and able to work independently in a remote setting.
- Technical Setup: Desktop/laptop setup with a good internet connection.
Benefits:
- Flexible working hours and remote work environment.
- Opportunity to work on cutting-edge AI projects with leading LLM companies.
- Potential for contract extension based on performance and project needs.
Offer Details:
- Commitment required: at least 4 hours per day and a total of 40 hours per week, with 2-4 hours of daily overlap with PST (UTC-8:00, America/Los_Angeles).
- Engagement type: Contractor assignment/freelancer (no medical/paid leave).
- Duration of contract: 1 month.
Application Process:
Shortlisted candidates will be sent automated analytical challenges. Once you clear them, you are ready to go.
Requirements
- Must be eligible to work in USA
- Fluent proficiency in English (Written & Verbal)
- Reliable high-speed internet connection
- Bachelor's degree or equivalent professional experience
- Demonstrated expertise in STEM
Eligible Languages
Fluent proficiency in English
Compensation Analysis
Rare opportunity for top 1% experts. Earn $50/hr contributing to the world's most advanced AI labs. This is one of the few roles where academic precision is valued as highly as commercial output.
Frequently Asked Questions
Do I need to be a software engineer?
Not anymore. Turing built its reputation matching senior engineers with Silicon Valley companies, but they have heavily pivoted into AGI infrastructure. They now hire non-engineering domain experts, technical writers, and researchers for post-training data annotation and RLHF. A strong analytical background and excellent English are required, but you do not need to code.
How does matching work?
Turing calls it the 'Intelligent Talent Cloud.' You build a profile and go through deep vetting — automated tests, an AI-powered interview, and practical skill assessments. Once vetted, Turing's algorithm automatically surfaces you to partner companies (Fortune 500s and top AI labs). You don't browse job boards or bid on work — matches come to you.
How does payment work?
You are hired as an independent contractor, responsible for your own local taxes. Turing collects payment from the client and pays you monthly in USD via Deel, Payoneer, or direct bank/wire transfer. Monthly pay is standard for long-term contract roles, so if you need weekly cash flow, plan accordingly.
Is this just labeling data?
No. This is closer to academic research. You will likely be writing or verifying complex proofs, solving advanced equations, or checking the logic of a model's step-by-step reasoning. The goal is to teach AI systems to reason deeply in your field.
Do I need a PhD?
For the highest pay tiers in this category, a PhD (or current enrollment) is usually expected. However, the most important factor is your ability to pass the domain assessment. If you can solve the problems, the degree is secondary.
Is the work continuous?
Work in niche fields is often project-based. A specific "campaign" (e.g., training a model on Quantum Mechanics) might last for a few weeks. It is best to treat this as a high-paying fellowship or grant rather than a permanent daily job.