aitrainer.work - AI Training Jobs Platform

Fluent English Software Engineer – AI Testing

Alignerr · Remote · Posted 27 days ago

Education: Any
Type: Hourly Contract
Pay Rate: $75/task
Posted: 27d ago


About this Role

What You'll Do

  • Evaluate frontier AI models on complex software engineering tasks — from algorithm design to system architecture
  • Hunt for bugs, logical errors, hallucinations, and reliability issues in AI-generated code
  • Design and review prompts, test cases, and evaluation scenarios that expose model weaknesses
  • Write precise, structured feedback documenting model strengths, failure modes, and edge cases
  • Work across multiple languages and codebases to assess how well AI generalizes across real engineering contexts
  • Think like a senior reviewer — not a user — and push models beyond surface-level correctness

About the Role

What if your engineering expertise could directly shape how the next generation of AI writes, reasons about, and debugs code? We're looking for experienced software engineers to evaluate cutting-edge AI models on complex, real-world coding tasks — finding the failure modes, hallucinations, and edge cases that only a seasoned engineer would catch. This is a fully remote, flexible contract role built for engineers who think critically, debug instinctively, and know what good code actually looks like.

  • Organization: Alignerr
  • Type: Hourly Contract
  • Location: Remote
  • Commitment: 10–40 hours/week

Who You Are

  • 3+ years of professional software engineering experience
  • Strong proficiency in at least one of: TypeScript, Ruby, Java, or C++
  • Excellent written and spoken English — you communicate complex technical reasoning clearly
  • Sharp debugging instincts — you notice when something is subtly wrong, not just obviously broken
  • Familiar with modern development workflows: Git, CLI tooling, testing frameworks, and IDEs
  • Able to critically evaluate AI output rather than simply accept it at face value

Nice to Have

  • Experience across multiple programming languages or paradigms
  • Background in code review, QA engineering, or technical writing
  • Prior exposure to LLMs, AI evaluation, or prompt engineering workflows
  • Comfort working with ambiguous tasks and defining your own evaluation criteria

Why Join Us

  • Work on frontier AI projects alongside leading research labs
  • Fully remote and flexible — set your own hours and work from anywhere
  • Freelance autonomy with the structure of meaningful, task-based engineering work
  • Make a direct, tangible impact on how AI understands and produces real-world code
  • Potential for ongoing work and contract extension as new projects launch

Requirements

  • Fluent proficiency in English (Written & Verbal)
  • Reliable high-speed internet connection
  • Bachelor's degree or equivalent professional experience
  • Demonstrated expertise in Software Engineering

Eligible Languages

English (fluent proficiency)




Frequently Asked Questions

What is the assessment actually like?

Notoriously strict. Alignerr uses TestGorilla for role-specific timed tests — a blank coding environment for engineers, rigorous grammar and fact-checking for writers. There is almost no hand-holding. The critical catch: this is essentially a one-shot process. Fail or abandon the assessment, and you are typically locked out of that role permanently with no option to retake.

How quickly can I start earning after I pass?

Not immediately. Even after passing the assessment and completing identity verification (via Persona) and billing setup (via Deel), you may sit in a waiting pool for weeks or months. You only start earning when a project matching your specific skills launches and you are officially assigned. Do not plan around Alignerr income until you are actively on a project.

Is there a community?

Yes — and it is one of Alignerr's genuine strengths. Once assigned to a project, you are added to Slack channels where you can ask questions, get rubric clarifications from admins, and talk to other AI trainers. This is rare in AI training and makes a real difference when guidelines are ambiguous or change mid-project.

What does the work actually look like?

It is practical, hands-on data work. You might be recording short videos, categorizing images, rating text responses, or analyzing data. The tasks are designed to be short and distinct — typically 5–60 minutes per task.

How flexible is the schedule?

Extremely. This is true "log in and work" flexibility. You can usually work for 20 minutes or 4 hours depending on your availability. There are rarely minimum hour requirements, making it ideal for side income.

Is there an interview?

Usually, no. Hiring for these roles is almost entirely based on passing an automated assessment or "qualification" task. If you pass the test, you get access to the work.

What is the barrier to entry?

Alignerr is known for difficult technical assessments. You must pass a timed test in your specific domain (e.g., Python, Physics, or Language) before you are eligible for any paid projects.