Senior Backend Engineer, Inference Platform

About the Team
Together AI is building the Inference Platform that brings the most advanced generative AI models to the world. Our platform powers multi-tenant serverless workloads and dedicated endpoints, enabling developers, enterprises, and researchers to harness the latest LLMs, multimodal models, and image, audio, video, and speech models at scale. If you get a thrill from optimizing latency down to the last millisecond, this is your playground. You'll work hands-on with tens of thousands of GPUs (H100s, H200s, GB200s, and beyond), figuring out how to fully utilize every FLOP and every gigabyte of memory. You'll collaborate directly with research teams to bring frontier models into production, making breakthroughs usable in the real world. Our team also works closely with the open-source community, contributing to and leveraging projects like SGLang, vLLM, and NVIDIA Dynamo to push the boundaries of inference performance and efficiency.

Some of What You'll Work On
- Build and optimize global and local request routing, ensuring low-latency load balancing across data centers and model engine pods.
- Develop auto-scaling systems to dynamically allocate resources and meet strict SLOs across dozens of data centers.
- Design systems for multi-tenant traffic shaping, tuning both resource allocation and request handling - including smart rate limiting and regulation - to ensure fairness and a consistent experience across all users.
- Engineer trade-offs between latency and throughput to serve diverse workloads efficiently.
- Optimize prefix caching to reduce model compute and speed up responses.
- Collaborate with ML researchers to bring new model architectures into production at scale.
- Continuously profile and analyze system-level performance to identify bottlenecks and implement optimizations.

What We're Looking For
- 5+ years of demonstrated experience building large-scale, fault-tolerant distributed systems and API microservices.
- Strong background in designing, analyzing, and improving the efficiency, scalability, and stability of complex systems.
- Excellent understanding of low-level OS concepts: multi-threading, memory management, networking, and storage performance.
- Expert-level programming in one or more of: Rust, Go, Python, or TypeScript.
- Knowledge of modern LLMs and generative models, and how they are served in production, is a plus.
- Experience working with the open-source ecosystem around inference is highly valuable; familiarity with SGLang, vLLM, or NVIDIA Dynamo will be especially handy.
- Experience with Kubernetes or container orchestration is a strong plus.
- Familiarity with GPU software stacks (CUDA, Triton, NCCL) and HPC technologies (InfiniBand, NVLink, MPI) is a plus.
- Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field, or equivalent practical experience.

Why Join Us?
- Shape the core inference backbone that powers Together AI's frontier models.
- Solve performance-critical challenges in global request routing, load balancing, and large-scale resource allocation.
- Work with state-of-the-art accelerators (H100s, H200s, GB200s) at global scale.
- Partner with world-class researchers to bring new model architectures into production.
- Collaborate with and contribute to the open-source community, shaping the tools that advance the industry.
- Enjoy a culture of deep technical ownership and high impact - where your work makes models faster, cheaper, and more accessible.
- Competitive compensation, equity, and benefits.

About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models.
We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama.

Compensation
We offer competitive compensation, startup equity, health insurance, and other benefits. The US base salary range for this full-time position is $160,000 - $250,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
04/02/2026
Full time
A pioneering AI company in San Francisco is seeking a Senior Backend Engineer for their Inference Platform. The role involves optimizing latency, developing auto-scaling systems, and collaborating with ML researchers to scale architectures. Ideal candidates will have extensive experience in distributed systems and expertise in languages like Rust, Go, Python, or TypeScript. Competitive compensation ranges from $160,000 to $250,000 annually, plus equity and benefits.
04/02/2026
Full time
A leading AI firm in San Francisco seeks a skilled engineer to build large-scale, fault-tolerant distributed systems. You will optimize for performance, work with Kubernetes, and contribute both software and Infrastructure as Code solutions. A strong background in programming languages such as Python or Golang is essential. We offer a competitive salary of $160,000 - $250,000 plus equity and benefits in a dynamic startup environment.
04/02/2026
Full time
Our team focuses on enabling custom models and dedicated inference on Together. We are responsible for building a container platform, optimizing autoscaling, minimizing cold starts, achieving the best end-to-end model performance, and providing a best-in-class developer experience with great tooling. We often focus on video or audio generation across the stack: CUDA kernels, PyTorch optimization, inference engines, container orchestration, queueing theory, etc. An ideal candidate will either be great at profiling/optimization (even if they barely know the word Kubernetes), or be intimately familiar with multi-cluster scheduling and have some sense of ML bottlenecks.

Requirements
- 5+ years of demonstrated experience in building large-scale, fault-tolerant distributed systems.
- Experience running serverless inference platforms, doing model bring-up on short notice, being on call, or running a cloud provider is a very big plus.
- Good taste and the ability to thoughtfully discuss how what you've built has failed over time.
- Experience designing, analyzing, and improving the efficiency, scalability, and stability of various system resources.
- Excellent understanding of low-level operating systems concepts, including concurrency, networking and storage, performance, and scale.
- Expert-level programmer in one or more of Python, Golang, Rust, C++, or Haskell.
- Proficiency in writing and maintaining Infrastructure as Code (IaC) using tools like Terraform.
- Experience with Kubernetes internals or other container orchestration systems.
- Sound judgment for when to use, and when not to use, LLMs for code.
- Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related technical field, or equivalent practical experience.
- Writing-heavy roles or companies are a plus.

Responsibilities
New hires may work on multi-cluster orchestration, portfolio optimization, predictive autoscaling, control planes, model bring-up, model optimization, APIs for managing deployments, inference worker SDKs, and CLI tools.
- Analyze and improve the robustness and scalability of existing distributed systems, APIs, databases, and infrastructure.
- Partner with product teams to understand functional requirements and deliver solutions that meet business needs.
- Write clear, well-tested, and maintainable software and IaC for both new and existing systems.
- Conduct design and code reviews, create developer documentation, and develop testing strategies for robustness and fault tolerance.

About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers and engineers on our journey to build the next generation of AI infrastructure.

Compensation
We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $160,000 - $250,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
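The autoscaling side of this role can be given a flavor with a simplified target-tracking rule: provision enough replicas that each runs at a fraction of its sustainable throughput, leaving headroom so latency SLOs survive bursts. This is a rule-of-thumb sketch, not Together's actual policy; real predictive autoscaling and queueing-theoretic sizing (e.g. Erlang-C) refine it considerably, and all parameter names here are hypothetical:

```python
import math

def desired_replicas(arrival_rate: float,
                     per_replica_throughput: float,
                     target_utilization: float = 0.7,
                     min_replicas: int = 1,
                     max_replicas: int = 100) -> int:
    """Target-tracking sizing: enough replicas that each handles only
    `target_utilization` of its max sustainable request rate, clamped
    to the [min_replicas, max_replicas] range."""
    effective_capacity = per_replica_throughput * target_utilization
    needed = math.ceil(arrival_rate / effective_capacity)
    return max(min_replicas, min(max_replicas, needed))
```

For example, at 500 req/s of demand and replicas that sustain 40 req/s each, a 0.7 utilization target yields 18 replicas rather than the bare-minimum 13, and the extra 5 are the headroom that absorbs bursts and cold-start lag.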
04/02/2026
Full time
As a Site Reliability Engineer (SRE) at Together, you are responsible for keeping all user-facing services and production systems running smoothly. You are a blend of pragmatic operator and software engineer who applies sound engineering principles, operational discipline, and mature automation to our operating environments and codebase. You specialize in systems (operating systems, storage subsystems, networking) while implementing best practices for availability, reliability, and scalability, with varied interests in algorithms and distributed systems.

Requirements
- 2+ years of professional SRE or related experience.
- Bachelor's degree in Computer Science or a related field, or equivalent work experience.
- Knowledge of Ansible (roles, playbooks), Terraform, and Kubernetes.
- Proficiency in programming/scripting languages.
- Direct experience with monitoring and observability practices.
- Knowledge of cloud services.
- Ability to thrive in a collaborative environment involving different stakeholders and subject matter experts.

Responsibilities
- Participate in the on-call rotation (PagerDuty) to respond to production incidents.
- Build and run our infrastructure with Ansible, Terraform, and Kubernetes to enable scaling to a massive number of concurrent users.
- Build monitoring systems to ensure the highest quality of service for our customers.
- Design and implement operational processes (such as deployments and upgrades).
- Debug production issues across all services and levels of the stack.
- Identify improvements to the product architecture from the reliability, performance, and availability perspectives.
- Plan the growth of Together AI's infrastructure.

About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models.
We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers and engineers on our journey to build the next generation of AI infrastructure.

Compensation
We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $150,000 - $200,000 + equity + benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
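Monitoring work of the kind described in this role commonly revolves around SLOs and error budgets: alert not on raw error counts but on how fast the error budget is being consumed. The sketch below uses the conventional fast-burn threshold from the SRE literature (14.4x, i.e. a 30-day budget exhausted in about two days); it is an illustration of the idea, not Together's actual alerting policy:

```python
def burn_rate(error_ratio: float, slo_target: float) -> float:
    """Rate of error-budget consumption: 1.0 means the budget is spent
    exactly over the SLO window; above 1.0 means it runs out early."""
    budget = 1.0 - slo_target   # allowed error fraction, e.g. 0.001 for 99.9%
    return error_ratio / budget

def should_page(error_ratio: float,
                slo_target: float = 0.999,
                threshold: float = 14.4) -> bool:
    """Fast-burn check: page a human only when errors are consuming the
    budget far faster than sustainable (14.4x is the common convention)."""
    return burn_rate(error_ratio, slo_target) >= threshold
```

At a 99.9% SLO, a steady 0.1% error ratio burns the budget at exactly 1.0x (sustainable, no page), while a 2% error ratio burns it at 20x and should wake someone up; production setups evaluate this over multiple time windows to balance detection speed against false alarms.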
04/02/2026
Full time