aitrainer.work - AI Training Jobs Platform

Math (PhD)

Mercor · Remote · Posted 110 days ago

Education: Any
Type: Contract (full-time or part-time)
Pay Rate: $51/task


About this Role

Location: Geography restricted to the USA, UK, Canada, and the EU
Type: Full-time or part-time contract work
Fluent Language Skills Required: English



Why This Role Exists

Mercor partners with leading AI teams to improve the quality, usefulness, and reliability of general-purpose conversational AI systems. These systems are used across a wide range of everyday and professional scenarios, and their effectiveness depends on how clearly, accurately, and helpfully they respond to real user questions.

In mathematics-related contexts, conversational AI systems must demonstrate precise formal reasoning, mathematical rigor, and conceptual clarity. This project focuses on evaluating and improving how models reason about mathematical problems, explanations, and proofs across both foundational and advanced areas of mathematics.

What You’ll Do

  • Write and refine prompts to guide model behavior in mathematical contexts
  • Evaluate LLM-generated responses to mathematics-related queries for correctness, rigor, and logical coherence
  • Verify mathematical claims, derivations, and proofs using domain expertise
  • Conduct fact-checking using authoritative public sources and domain knowledge
  • Annotate model responses by identifying strengths, areas for improvement, and factual or conceptual inaccuracies
  • Assess clarity, structure, and appropriateness of explanations for different audiences
  • Ensure model responses align with expected conversational behavior and system guidelines
  • Apply consistent evaluation standards by following clear taxonomies, benchmarks, and detailed evaluation guidelines

Who You Are

  • You hold (or are currently pursuing) a PhD in Mathematics or a closely related field, or have demonstrated exceptional achievement in mathematics (e.g., IMO medalist or comparable distinction)
  • You have strong experience across core areas of mathematics, such as:
      • Algebra & Number Theory
      • Calculus & Analysis
      • Geometry & Topology
      • Discrete Mathematics, Logic & Computation
      • Probability & Statistics
  • You have significant experience using large language models (LLMs) and understand how and why people use them
  • You have excellent writing skills and can clearly explain complex mathematical concepts
  • You have strong attention to detail and consistently notice subtle issues others may overlook
  • You have experience reviewing or editing technical or academic writing

Nice-to-Have Specialties

  • Prior experience with RLHF, model evaluation, or data annotation work
  • Experience teaching, mentoring, or explaining mathematical concepts to non-expert audiences
  • Familiarity with evaluation rubrics, benchmarks, or structured review frameworks

What Success Looks Like

  • You identify inaccuracies or weak reasoning in mathematics-related model outputs
  • Your feedback improves the rigor, clarity, and correctness of AI explanations
  • You deliver consistent, reproducible evaluation artifacts that strengthen model performance
  • Mercor customers trust their AI systems in mathematical contexts because you’ve rigorously evaluated them

Why Join Mercor

Mercor provides mathematicians with the opportunity to apply deep theoretical expertise to the evaluation and improvement of advanced AI systems. This flexible, remote role allows you to influence how mathematical reasoning is represented and communicated at scale.

We consider all qualified applicants without regard to legally protected characteristics and provide reasonable accommodations upon request.


Requirements

  • Must be located in an eligible region for remote work (USA, UK, Canada, or EU)
  • Fluent proficiency in English (Written & Verbal)
  • Reliable high-speed internet connection
  • Bachelor's degree or equivalent professional experience
  • Demonstrated expertise in Mathematics

Eligible Languages

English (fluent proficiency)

Compensation Analysis

Monetize your niche expertise without the billable hours. At $51 per task, this role offers elite compensation for pure intellectual work, with no client management or administrative overhead.



Frequently Asked Questions

Is this for freelancers or full-time employees?

Both. Mercor tries to match you with clients who want long-term contractors. Unlike other platforms where you log in and grab small tasks, Mercor matches you with one company for a steady role (e.g., 'Python Tutor for 3 months').

I'm not comfortable on camera. Can I still apply?

No. The application requires a video interview with an AI avatar. The AI asks you questions about your resume, and the video is shared with potential clients to prove your communication skills.

Does it cost money to join?

No. You should never pay to join these platforms. Mercor makes money by charging the client a fee on top of your hourly rate.

What does the work actually look like?

It is practical, hands-on data work. You might be recording short videos, categorizing images, rating text responses, or analyzing data. The tasks are designed to be short and distinct—typically 5-60 minutes per task.

How flexible is the schedule?

Extremely. This is true "log in and work" flexibility. You can usually work for 20 minutes or 4 hours depending on your availability. There are rarely minimum hour requirements, making it ideal for side income.

Is there an interview?

Usually not, beyond the AI video screening in the application. Hiring for these roles is almost entirely based on passing an automated assessment or "qualification" task. If you pass the test, you get access to the work.

How soon will I start?

Important: Mercor is a talent marketplace, not a task queue. Applying puts you in a pool of candidates. You will only start working when a specific client (like a major AI lab) selects your profile. This matching process can take weeks.