Generalist - English & Arabic
Mercor • Remote • Posted 108 days ago
Education: Any
Type: Full-time or part-time contract
Pay Rate: $0/task
This position is hosted on an external talent platform. Please only apply for this position if it fits your skills and interests.
About this Role
Location: Egypt, Saudi Arabia, UAE, or USA (geography restricted)
Type: Full-time or part-time contract work
Fluent Language Skills Required: English & Arabic
Why This Role Exists
Mercor partners with leading AI teams to improve the quality, usefulness, and reliability of general-purpose conversational AI systems. These systems are used across a wide range of everyday and professional scenarios, and their effectiveness depends on how clearly, accurately, and helpfully they respond to real user questions.
This project focuses on evaluating and improving general chat behavior in large language models (LLMs). You will assess model-generated responses across diverse topics, provide high-quality human feedback, and help ensure AI systems communicate in ways that are accurate, well-reasoned, and aligned with human expectations.
What You’ll Do
- Evaluate LLM-generated responses on how effectively they answer user queries
- Conduct fact-checking using trusted public sources and external tools
- Generate high-quality human evaluation data by annotating response strengths, areas for improvement, and factual inaccuracies
- Assess reasoning quality, clarity, tone, and completeness of responses
- Ensure model responses align with expected conversational behavior and system guidelines
- Apply consistent annotations by following clear taxonomies, benchmarks, and detailed evaluation guidelines
Who You Are
- You hold a Bachelor’s degree
- You are a native speaker of Arabic or have ILR 5/primary fluency (C2 on the CEFR scale)
- You have significant experience using large language models (LLMs) and understand how and why people use them
- You have excellent writing skills and can clearly articulate nuanced feedback
- You have strong attention to detail and consistently notice subtle issues others may overlook
- You are adaptable and comfortable moving across topics, domains, and customer requirements
- You have a background in domains requiring structured analytical thinking (e.g., research, policy, analytics, linguistics, engineering)
- You have excellent college-level mathematics skills
Nice-to-Have Specialties
- Prior experience with RLHF, model evaluation, or data annotation work
- Experience writing or editing high-quality written content
- Experience comparing multiple outputs and making fine-grained qualitative judgments
- Familiarity with evaluation rubrics, benchmarks, or quality scoring systems
What Success Looks Like
- You identify factual inaccuracies, reasoning errors, and communication gaps in model responses
- You produce clear, consistent, and reproducible evaluation artifacts
- Your feedback leads to measurable improvements in response quality and user experience
- Mercor customers trust the quality of their AI systems because your evaluations surface issues before public release
Why Join Mercor
At Mercor, you’ll work at the frontier of human-in-the-loop AI development, directly shaping how advanced language models behave in the real world. This role offers flexible, remote contract work and the opportunity to contribute meaningfully to AI systems used by millions of people. Contract rates are competitive and aligned with the level of expertise required and the scope of work. We consider all qualified applicants without regard to legally protected characteristics and provide reasonable accommodations upon request.
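To make the evaluation workflow above concrete, here is a minimal sketch of what a single structured annotation record might look like. This is purely illustrative: the rubric dimensions, field names, and 1–5 scale are assumptions, not Mercor's actual taxonomy, which is provided in the project's evaluation guidelines.

```python
from dataclasses import dataclass

# Assumed rubric dimensions; real projects define their own taxonomy.
RUBRIC_DIMENSIONS = ("accuracy", "reasoning", "clarity", "tone", "completeness")

@dataclass
class Annotation:
    response_id: str
    scores: dict      # rubric dimension -> rating from 1 (poor) to 5 (excellent)
    strengths: str    # free-text note on what the response did well
    issues: str       # free-text note on factual or reasoning problems found

    def validate(self) -> bool:
        """Check every rubric dimension is scored and each score is in range."""
        missing = [d for d in RUBRIC_DIMENSIONS if d not in self.scores]
        out_of_range = [d for d, s in self.scores.items() if not 1 <= s <= 5]
        return not missing and not out_of_range

ann = Annotation(
    response_id="resp-001",
    scores={d: 4 for d in RUBRIC_DIMENSIONS},
    strengths="Cites a verifiable source for the key claim.",
    issues="Final paragraph overstates certainty.",
)
print(ann.validate())  # True: every dimension is scored and within range
```

The point of a record like this is consistency: when every annotator scores the same dimensions on the same scale and runs the same validation, the resulting evaluation data is comparable and reproducible across the project.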
Requirements
- Must be eligible to work remotely from one of the listed locations (Egypt, Saudi Arabia, UAE, USA)
- Fluent proficiency in English (Written & Verbal)
- Reliable high-speed internet connection
Eligible Languages
Fluent proficiency in both English and Arabic
Compensation Analysis
Work from anywhere, at any time. This fully remote position breaks down geographic barriers, allowing you to earn US-competitive rates regardless of your local market. It is a perfect stepping stone for building a career in the data labeling and AI training ecosystem.
Frequently Asked Questions
Is this for freelancers or full-time employees?
Both. Mercor tries to match you with clients who want long-term contractors. Unlike other platforms where you log in and grab small tasks, Mercor matches you with one company for a steady role (e.g., 'Python Tutor for 3 months').
I'm not comfortable on camera. Can I still apply?
No. The application requires a video interview with an AI avatar. The AI asks you questions about your resume, and the video is shared with potential clients to prove your communication skills.
Does it cost money to join?
No. You should never pay to join these platforms. Mercor makes money by charging the client a fee on top of your hourly rate.
What does the work actually look like?
It is practical, hands-on data work. You might be recording short videos, categorizing images, rating text responses, or analyzing data. The tasks are designed to be short and distinct—typically 5-60 minutes per task.
How flexible is the schedule?
Extremely. This is true "log in and work" flexibility. You can usually work for 20 minutes or 4 hours depending on your availability. There are rarely minimum hour requirements, making it ideal for side income.
Is there an interview?
Not usually a live one. Beyond the automated video screening, hiring for these roles is almost entirely based on passing an automated assessment or "qualification" task. If you pass the test, you get access to the work.
How soon will I start?
Important: Mercor is a talent marketplace, not a task queue. Applying puts you in a pool of candidates. You will only start working when a specific client (like a major AI lab) selects your profile. This matching process can take weeks.