aitrainer.work - AI Training Jobs Platform

MLOps Engineer (JAX, PyTorch, Pallas/Triton)

Mercor β€’ Remote β€’ Posted 0 days ago

Education

Any

Type

Pay Rate

$110/task

Posted

0d ago

βœ… Applying through this link gives you a verified candidate referral.

Referrals from verified candidates give your profile a visibility boost and help support our platform at no cost to you.

This position is hosted on an external talent platform. Please only apply for this position if it fits your skills and interests.

Apply Now β†’

About this Role

Join a leading AI lab's cutting-edge GenAI team at the core of the AI revolution, where your expertise fuels the development of the most advanced Large Language Models. We're seeking talented MLOps Engineers with deep, hands-on expertise in modern ML frameworks, specifically JAX, PyTorch, and kernel-level programming (Pallas/Triton), to help build foundational AI models from the ground up and elevate the quality of our AI training data.

Location Requirements

Remote.

About Cincinnatus LLC

This is a W-2 employment position with Cincinnatus LLC (or the appropriate international entity), with the opportunity to be placed at a leading AI lab as part of its extended workforce. Cincinnatus LLC is an enterprise staffing company that partners with leading technology companies to source and employ highly skilled professionals for contingent and contract-based opportunities. Cincinnatus serves as the employer of record for these engagements, providing W-2 employment, payroll, benefits, and compliance, while placing employees directly within client teams to work on high-impact initiatives.

Equal Employment Opportunity

Cincinnatus is proud to be an Equal Employment Opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or any other legally protected characteristic. We consider all qualified applicants without regard to legally protected characteristics and provide reasonable accommodations upon request.

Responsibilities

  • Guide research and engineering teams to close knowledge gaps and improve AI model performance in MLOps, training infrastructure, and ML framework-level topics.
  • Design challenging, domain-relevant tasks across multiple specializations, and write accurate, well-structured solutions to MLOps and ML systems problems.
  • Evaluate MLOps tasks and solutions and provide clear, written technical feedback.
  • Develop guidelines and detailed rubrics/evaluation frameworks to assess training pipeline design, distributed systems reasoning, and kernel-level optimization across tasks.
  • Collaborate with other subject matter experts to ensure consistency and accuracy in training data.

Qualifications

  • 5+ years of dedicated professional experience in ML infrastructure, MLOps, or ML systems engineering at a recognized, top-tier organization.
  • Hands-on production experience with JAX and/or PyTorch at scale, including distributed training strategies (FSDP, tensor parallelism, pipeline parallelism), memory optimization, and framework-level debugging.
  • Experience writing or optimizing custom GPU kernels using Pallas (JAX) or Triton, including tiling strategies, memory layout design, and kernel fusion.
  • Demonstrable career progression.
  • Ability to engage reliably for at least 30 hours/week during weekdays.
  • Strong written communication skills and the ability to explain complex technical decisions clearly.
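The distributed-training strategies named above (FSDP, tensor parallelism, pipeline parallelism) all revolve around sharding parameters and work across devices. As a framework-agnostic illustration only, here is a minimal NumPy sketch of column-wise tensor parallelism for a linear layer; the function name, shapes, and "device" count are hypothetical, not from the posting:

```python
import numpy as np

def tensor_parallel_linear(x, weight, n_devices):
    """Column-parallel linear layer: each simulated 'device' holds a slice
    of the weight's output columns, computes its partial result
    independently, and the shards are concatenated (the stand-in here for
    an all-gather collective in a real distributed system)."""
    # Split the weight matrix column-wise, one shard per device.
    shards = np.array_split(weight, n_devices, axis=1)
    # Each device multiplies the full input by its own weight shard.
    partial_outputs = [x @ w_shard for w_shard in shards]
    # "All-gather": concatenate partial outputs along the feature axis.
    return np.concatenate(partial_outputs, axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))    # batch of 4, 8 input features
w = rng.standard_normal((8, 16))   # 16 output features
out = tensor_parallel_linear(x, w, n_devices=4)
assert np.allclose(out, x @ w)     # matches the unsharded computation
```

In JAX this partition-compute-gather pattern maps to sharded parameters with `shard_map`/`pmap`; in PyTorch, to `torch.distributed` collectives; the sketch only shows the data movement being reasoned about.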
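Kernel-level work in Pallas or Triton centers on tiling: computing the output in fixed-size blocks that fit in fast on-chip memory. This is a hardware-free NumPy sketch of the blocked loop structure, not actual Pallas/Triton code; the tile size is illustrative, whereas a real kernel would choose it to match SRAM or shared-memory limits:

```python
import numpy as np

def tiled_matmul(a, b, tile=4):
    """Blocked matrix multiply: each (tile x tile) output block is
    accumulated from tile-sized slices of A and B, mimicking how a
    Pallas/Triton kernel streams tiles through on-chip memory."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((m, n), dtype=a.dtype)
    for i in range(0, m, tile):          # rows of output blocks
        for j in range(0, n, tile):      # columns of output blocks
            # Accumulator lives in "fast memory" for the whole k-loop.
            acc = np.zeros((min(tile, m - i), min(tile, n - j)),
                           dtype=a.dtype)
            for p in range(0, k, tile):  # march along the inner dimension
                acc += a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
            out[i:i+tile, j:j+tile] = acc
    return out
```

In a real kernel the two outer loops become the launch grid (one program instance per output tile), and keeping the accumulator resident while streaming tiles of A and B is exactly the memory-layout decision the role's "tiling strategies" bullet refers to.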

Requirements

  • Must be eligible to work remotely
  • Fluent proficiency in English (Written & Verbal)
  • Reliable high-speed internet connection
  • Bachelor's degree or equivalent professional experience
  • Demonstrated expertise in STEM

Compensation Analysis

This role offers a powerful combination of high pay ($110 per task) and flexibility: within the expected commitment of at least 30 hours/week on weekdays, you generally set your own schedule. It can serve as a strong second income stream for professionals who want to stay sharp in their field while gaining exposure to the booming AI industry.



Frequently Asked Questions

Is this for freelancers or full-time employees?

Both. Mercor tries to match you with clients who want long-term contractors. Unlike other platforms where you log in and grab small tasks, Mercor matches you with one company for a steady role (e.g., 'Python Tutor for 3 months').

I'm not comfortable on camera. Can I still apply?

No. The application requires a video interview with an AI avatar. The AI asks you questions about your resume, and the video is shared with potential clients to prove your communication skills.

Does it cost money to join?

No. You should never pay to join these platforms. Mercor makes money by charging the client a fee on top of your hourly rate.

Is this traditional consulting?

Not exactly. You act as a "Teacher" for advanced AI. Instead of client deliverables, you are given complex scenarios to evaluate. You grade the AI's logic, correct its hallucinations, and provide expert-level reasoning. Your job is to train the model to think like you do.

Why is the pay so high?

This role requires deep, verified expertise. General knowledge isn't enough; the model is specifically being trained on "edge cases"β€”the rare, difficult, or highly technical nuances that only a senior professional would know.

What is the workload like?

This is cognitive, deep work. Unlike simple data labeling, you might spend 45-60 minutes on a single task, researching citations or verifying complex calculations. Quality is prioritized over speed.

How soon will I start?

Important: Mercor is a talent marketplace, not a task queue. Applying puts you in a pool of candidates. You will only start working when a specific client (like a major AI lab) selects your profile. This matching process can take weeks.