AI Infrastructure Engineer, Model Serving Platform


Scale AI

Software Engineer, ML Infrastructure

As a software engineer on the ML Infrastructure team, you will work on developing the platform for orchestrating post-training and model evaluation jobs. At Scale, we are constantly developing new data sources and running experiments to understand their impact on ML models. To support this effort, we are looking for engineers who are comfortable navigating cloud infrastructure challenges as well as research challenges in benchmarking and tuning LLMs.

The ideal candidate has strong fundamentals in machine learning and backend system design, along with prior ML infrastructure experience. They should also be comfortable with large-scale system design and with diagnosing both model performance issues and system failures.

Responsibilities

  • Develop reusable platforms for running in-house and open-source LLM benchmarks.
  • Ensure correctness and performance of post-training and evaluation jobs on the platform.
  • Improve APIs for managing ML workflows.
  • Contribute to foundational infrastructure at the company for model inference and training.
  • Participate in our team’s on-call process to ensure the availability of our services.
  • Own projects end-to-end, from requirements and scoping through design and implementation, in a highly collaborative and cross-functional environment.

Requirements

  • 4+ years of experience developing ML platforms.
  • Passion for working closely with researchers to drive business impact.
  • Experience training and/or benchmarking LLMs.
  • Experience with Python, Docker, Kubernetes, and infrastructure as code (e.g., Terraform).

Nice to Have

  • Experience building, deploying, and monitoring complex microservice architectures.
  • Experience working with a cloud technology stack (e.g. AWS or GCP).

Compensation and Benefits

Compensation packages at Scale for eligible roles include base salary, equity, and benefits. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position, determined by work location and additional factors, including job-related skills, experience, interview performance, and relevant education or training. Scale employees in eligible roles are also granted equity-based compensation, subject to Board of Director approval.

Your recruiter can share more about the specific salary range for your preferred location during the hiring process and confirm whether the hired role will be eligible for an equity grant.

You will also receive benefits including, but not limited to:

  • Comprehensive health, dental, and vision coverage
  • Retirement benefits
  • Learning and development stipend
  • Generous PTO
  • Additional benefits such as a commuter stipend

For pay transparency purposes, the base salary range for this full-time position in the locations of San Francisco, New York, and Seattle is:
$175,000—$220,000 USD

About Scale

At Scale, we believe that the transition from traditional software to AI is one of the most important shifts of our time. Our mission is to make that happen faster across every industry, and our team is transforming how organizations build and deploy AI. Our products power the world's most advanced LLMs, generative models, and computer vision models.

We are trusted by generative AI companies such as OpenAI, Meta, and Microsoft, government agencies like the U.S. Army and U.S. Air Force, and enterprises including GM and Accenture. We are expanding our team to accelerate the development of AI applications.

We believe that everyone should be able to bring their whole selves to work, which is why we are proud to be an inclusive and equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability status, gender identity, or Veteran status.

Location

    New York, San Francisco, US

Job type

  • Full-time

Role

Engineering

Keywords

  • LLMs
  • Kubernetes