aitrainer.work - AI Training Jobs Platform

Software Engineering, Data Science, and Systems Design Experts, Ruby (5+ YOE)

Mercor • Remote • Posted 29 days ago

Education: Any

Type: Full-time or Part-time Contract Work

Pay Rate: $80/task

Posted: 29 days ago


About this Role

Location: US-Based and Non-US-Based

Type: Full-time or Part-time Contract Work

Fluent Language Skills Required: English

Why This Role Exists

Mercor partners with leading AI teams to improve the quality, usefulness, and reliability of general-purpose conversational AI systems. These systems are used across a wide range of everyday and professional scenarios, and their effectiveness depends on how clearly, accurately, and helpfully they respond to real user questions. In coding and software engineering contexts, conversational AI systems must demonstrate correct reasoning, strong problem-solving ability, and adherence to real-world engineering best practices. This project focuses on evaluating and improving how models reason about code, generate solutions, and explain technical concepts across a variety of programming tasks and complexity levels.

What You’ll Do

  • Evaluate LLM-generated responses to coding and software engineering queries for accuracy, reasoning, clarity, and completeness
  • Conduct fact-checking using trusted public sources and authoritative references
  • Conduct accuracy testing by executing code and validating outputs using appropriate tools
  • Annotate model responses by identifying strengths, areas of improvement, and factual or conceptual inaccuracies
  • Assess code quality, readability, algorithmic soundness, and explanation quality
  • Ensure model responses align with expected conversational behavior and system guidelines
  • Apply consistent evaluation standards by following clear taxonomies, benchmarks, and detailed evaluation guidelines

Who You Are

  • You hold a BS, MS, or PhD in Computer Science or a closely related field
  • You have significant (5+ years) real-world experience in software engineering or related technical roles
  • You are an expert in Ruby
  • You are able to solve HackerRank or LeetCode Medium and Hard–level problems independently
  • You have experience contributing to well-known open-source projects, including merged pull requests
  • You have significant experience using LLMs while coding and understand their strengths and failure modes
  • You have strong attention to detail and are comfortable evaluating complex technical reasoning, identifying subtle bugs or logical flaws

Nice-to-Have Specialties

  • Prior experience with RLHF, model evaluation, or data annotation work
  • Track record in competitive programming
  • Experience reviewing code in production environments
  • Familiarity with multiple programming paradigms or ecosystems
  • Experience explaining complex technical concepts to non-expert audiences

What Success Looks Like

  • You identify incorrect logic, inefficiencies, edge cases, or misleading explanations in model-generated code, technical concepts, and system design discussions
  • Your feedback improves the correctness, robustness, and clarity of AI coding outputs
  • You deliver reproducible evaluation artifacts that strengthen model performance
  • Mercor customers trust AI systems to assist reliably with real-world coding tasks

Why Join Mercor

At Mercor, experienced software engineers play a direct role in shaping how AI systems reason about and generate code. This remote role allows you to apply your technical expertise to high-impact AI development work, improving systems used by developers around the world. We consider all qualified applicants without regard to legally protected characteristics and provide reasonable accommodations upon request.
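To make the accuracy-testing duty concrete: a minimal, hypothetical sketch of how an evaluator might execute a model-generated Ruby solution and validate its outputs against trusted reference cases. The task, function name, and test cases here are invented for illustration and are not part of the job description.

```ruby
# Hypothetical model-generated solution under review (a FizzBuzz-style task).
def candidate_fizzbuzz(n)
  return "FizzBuzz" if n % 15 == 0
  return "Fizz" if n % 3 == 0
  return "Buzz" if n % 5 == 0
  n.to_s
end

# Reference input/output pairs the evaluator trusts.
REFERENCE_CASES = { 3 => "Fizz", 5 => "Buzz", 15 => "FizzBuzz", 7 => "7" }.freeze

# Run the candidate code against every case and collect mismatches.
failures = REFERENCE_CASES.reject do |input, expected|
  candidate_fizzbuzz(input) == expected
end

puts failures.empty? ? "All cases pass" : "Failing inputs: #{failures.keys.inspect}"
```

In practice the evaluation guidelines and tooling would define the actual harness; the point is that verdicts about model-generated code rest on executed, reproducible checks rather than reading alone.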


Requirements

  • Must be eligible to work remotely
  • Fluent proficiency in English (Written & Verbal)
  • Reliable high-speed internet connection
  • Bachelor's degree or equivalent professional experience
  • Demonstrated expertise in Software Engineering

Eligible Languages

English (fluent proficiency)

Compensation Analysis

Shape the "brain" of future AI. By working as a Software Engineering, Data Science, and Systems Design expert (Ruby, 5+ YOE), you help ensure that future models understand the nuances of your field. At $80 per task, it's a lucrative way to preserve the integrity of your profession in the digital age.



Frequently Asked Questions

Is this for freelancers or full-time employees?

Both. Mercor tries to match you with clients who want long-term contractors. Unlike other platforms where you log in and grab small tasks, Mercor matches you with one company for a steady role (e.g., 'Python Tutor for 3 months').

I'm not comfortable on camera. Can I still apply?

No. The application requires a video interview with an AI avatar. The AI asks you questions about your resume, and the video is shared with potential clients to prove your communication skills.

Does it cost money to join?

No. You should never pay to join these platforms. Mercor makes money by charging the client a fee on top of your hourly rate.

What does the work actually look like?

It is practical, hands-on data work. You might be recording short videos, categorizing images, rating text responses, or analyzing data. The tasks are designed to be short and distinct—typically 5-60 minutes per task.

How flexible is the schedule?

Extremely. This is true "log in and work" flexibility. You can usually work for 20 minutes or 4 hours depending on your availability. There are rarely minimum hour requirements, making it ideal for side income.

Is there an interview?

Not a live one with a human. The application includes an automated AI video interview, and hiring is otherwise based on passing an automated assessment or "qualification" task. If you pass the test, you get access to the work.

How soon will I start?

Important: Mercor is a talent marketplace, not a task queue. Applying puts you in a pool of candidates. You will only start working when a specific client (like a major AI lab) selects your profile. This matching process can take weeks.