Fluidstack
San Francisco, California
Overview

About Fluidstack: At Fluidstack, we're building the infrastructure for abundant intelligence. We partner with top AI labs, governments, and enterprises to unlock compute at the speed of light. We're working with urgency to make AGI a reality and are looking for motivated individuals committed to delivering world-class infrastructure. If you're motivated by purpose, obsessed with excellence, and ready to work hard to accelerate the future of intelligence, join us in building what's next.

Role

As a System Engineer, GPU Fleet, you will manage, operate, and optimize hyperscale GPU compute infrastructure supporting AI/ML training and inference workloads. You will ensure high availability, performance, and reliability of the GPU server fleet through automation, monitoring, troubleshooting, and collaboration with hardware engineering, platform teams, and datacenter operations.

Responsibilities

- Operate and maintain a large-scale GPU server fleet (H100, B200, GB200) supporting AI/ML workloads; monitor system health, performance, and utilization to maximize uptime and ensure SLA compliance.
- Perform hands-on troubleshooting and root-cause analysis of complex hardware, firmware, OS, and application issues across GPU clusters; coordinate with vendors and hardware teams to resolve systemic failures.
- Develop and maintain automation scripts for provisioning, configuration management, monitoring, and remediation at scale.
- Build and improve tooling for GPU health checks, performance diagnostics, driver validation, and automated recovery.
- Execute server provisioning, configuration, firmware updates, and OS installation using automation frameworks; manage lifecycle operations including deployment, maintenance, and decommissioning.
- Participate in a 24x7 on-call rotation; respond to production incidents and coordinate resolution with cross-functional teams including datacenter operations, network engineering, and application teams.
- Lead post-incident reviews, document root causes, and drive continuous-improvement initiatives focused on automation, reliability, monitoring, and operational efficiency.

Basic Qualifications

- Bachelor's degree in Computer Science, Engineering, or a related technical field (or equivalent practical experience).
- 3+ years (System Engineer) or 5+ years (Senior System Engineer) in Linux system administration, datacenter operations, or infrastructure engineering.
- Strong Linux/Unix fundamentals, including system administration, scripting (Bash, Python), troubleshooting, and performance tuning.
- Experience with server hardware architecture and troubleshooting techniques, plus a solid understanding of compute, memory, storage, and networking components.
- Experience with automation and configuration management tools (Ansible, Puppet, Chef, Terraform).
- Strong analytical and problem-solving skills, with the ability to diagnose complex technical issues under pressure.
- Excellent communication and collaboration skills; ability to work effectively with cross-functional teams.

Preferred Qualifications

- Experience managing large-scale GPU infrastructure (NVIDIA H100, A100, B200, GB200) in production environments supporting AI/ML workloads.
- Deep knowledge of GPU architecture, the CUDA toolkit, GPU drivers, and monitoring tools (nvidia-smi, DCGM).
- Experience with HPC cluster management, job schedulers (Slurm, PBS, LSF), and container orchestration (Kubernetes, Docker).
- Proficiency with out-of-band management (IPMI, Redfish, BMC) and firmware management for server hardware.
- Experience with high-performance networking (InfiniBand, RoCE, RDMA) and network troubleshooting in GPU cluster environments.
- Familiarity with datacenter operations, including rack installations, cabling, power management, and thermal considerations.

Salary & Benefits

- Competitive total compensation package (salary + equity).
- Retirement or pension plan, in line with local norms.
- Health, dental, and vision insurance.
- Generous PTO policy, in line with local norms.

The base salary range for this position is $200,000 - $300,000 per year, depending on experience, skills, qualifications, and location. This range represents our good-faith estimate of the compensation for this role at the time of posting. Total compensation may also include equity in the form of stock options. We are committed to pay equity and transparency.

Fluidstack is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veteran status, or any other characteristic protected by law. Fluidstack will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

You will receive a confirmation email once your application has been successfully accepted. If there is an error with your submission and you did not receive a confirmation email, please email with your resume/CV, the role you applied for, and the date you submitted your application. Someone from our recruiting team will be in touch.
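As a purely illustrative sketch (this is a job posting, not Fluidstack tooling): the GPU health-check automation described in the responsibilities above could start as simply as parsing `nvidia-smi` query output. The query field names are real `nvidia-smi` options, but the health policy, thresholds, and function names below are hypothetical.

```python
import subprocess

# Fields queried from nvidia-smi. The query names are real nvidia-smi options;
# the thresholds and health policy below are illustrative assumptions.
FIELDS = "index,temperature.gpu,utilization.gpu,ecc.errors.uncorrected.volatile.total"

def parse_gpu_csv(csv_text):
    """Parse `nvidia-smi --query-gpu=... --format=csv,noheader,nounits` output."""
    gpus = []
    for line in csv_text.strip().splitlines():
        idx, temp, util, ecc = [field.strip() for field in line.split(",")]
        gpus.append({
            "index": int(idx),
            "temp_c": int(temp),
            "util_pct": int(util),
            # ECC reads "[N/A]" on GPUs without ECC enabled.
            "ecc_uncorrected": 0 if ecc in ("[N/A]", "N/A") else int(ecc),
        })
    return gpus

def unhealthy(gpus, max_temp_c=85):
    """Flag GPUs that run too hot or report uncorrected ECC errors (hypothetical policy)."""
    return [g["index"] for g in gpus
            if g["temp_c"] > max_temp_c or g["ecc_uncorrected"] > 0]

def sample_fleet():
    """Query the local node; requires nvidia-smi on PATH."""
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader,nounits"],
        text=True)
    return parse_gpu_csv(out)

if __name__ == "__main__":
    print("unhealthy GPUs:", unhealthy(sample_fleet()))
```

In a real fleet this per-node check would feed a central remediation loop (e.g. via DCGM or a telemetry pipeline) rather than printing locally.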
Senior / Staff Network Reliability Engineer

About Fluidstack

At Fluidstack, we're building the infrastructure for abundant intelligence. We partner with top AI labs, governments, and enterprises - including Mistral, Poolside, Black Forest Labs, Meta, and more - to unlock compute at the speed of light.

About The Role

Our Network Reliability Engineers are the backbone of Fluidstack's platform. You'll apply deep networking expertise and software engineering to keep our high-performance network fabrics fast, reliable, and cost-efficient at scale. Our NREs operate RDMA fabrics, the datacenter network, and our WAN backbones.

Focus

- Supercharge the network stack. Tune TCP/IP, RDMA (primarily RoCE congestion control), kernel-bypass frameworks (DPDK, XDP, eBPF), and NIC offloads to squeeze microseconds off packet latency for AI and HPC workloads.
- Deploy and optimize at scale. Roll out new ToR/spine switches (from NVIDIA, Arista, Juniper, and others), validate SmartNIC and BlueField networking, configure BGP/EVPN fabrics, and optimize flow control (PFC, ECN) for zero-loss transport.
- Automate observability. Build NIC-to-orchestrator telemetry pipelines, packet-loss detection bots, and real-time throughput/latency dashboards.
- Root-cause the gnarly stuff. Lead packet captures, congestion analyses, and latency regressions; turn insights into switch firmware patches, kernel tuning, and topology optimizations.
- Drive vendor collaboration. Pair with networking vendors to debug hardware, accelerate RDMA paths, validate optics, and integrate emerging network hardware (800G/1.6T, LPO/CPO).
- Continuously improve. Inject link failures, run game days simulating network partitions, and codify post-mortem learnings into SLIs/SLOs that matter to customers.

About You

- 7+ years in network-heavy SRE, performance engineering, or datacenter networking.
- Mastery of the Linux networking stack and protocol-level debugging (TCP, IB, RoCE).
- Production experience with multiple vendors (Mellanox/NVIDIA, Arista, Juniper, etc.), multi-layer fabrics, and network overlays (VXLAN, Geneve).
- Fluency in Python, Go, or Rust; solid infrastructure-as-code and CI/CD skills.
- Familiarity with DPDK, XDP, eBPF, and InfiniBand/RoCE.
- Proven track record scaling low-latency, high-throughput networks for AI/ML or HPC clusters.

Salary & Benefits

- Competitive total compensation package (salary + equity).
- Retirement or pension plan, in line with local norms.
- Health, dental, and vision insurance.
- Generous PTO policy, in line with local norms.

The base salary range for this position is $250,000 - $400,000 per year, depending on experience, skills, qualifications, and location. This range represents our good-faith estimate of the compensation for this role at the time of posting. Total compensation may also include equity in the form of stock options. We are committed to pay equity and transparency.

Fluidstack is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veteran status, or any other characteristic protected by law. Fluidstack will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Engineering and Information Technology
Industries: Technology, Information and Internet
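As an illustrative sketch only (not Fluidstack's actual tooling): a minimal packet-loss detection bot of the kind described under "Automate observability" could poll the standard Linux per-interface counters in `/sys/class/net/<iface>/statistics` and alert on deltas. Which counters matter and the alert threshold are fabric-specific assumptions here.

```python
import time
from pathlib import Path

# Sysfs counters polled per interface. /sys/class/net/<iface>/statistics is a
# standard Linux interface; which counters to watch is an assumption here.
COUNTERS = ("rx_dropped", "tx_dropped", "rx_errors")

def read_counters(iface, root="/sys/class/net"):
    """Read the watched counters for one interface from sysfs."""
    stats = Path(root) / iface / "statistics"
    return {c: int((stats / c).read_text()) for c in COUNTERS}

def detect_loss(prev, curr, threshold=0):
    """Return counters whose delta since the last poll exceeds the (hypothetical) threshold."""
    return {c: curr[c] - prev[c]
            for c in COUNTERS if curr[c] - prev[c] > threshold}

def watch(iface, interval_s=10):
    """Poll forever and report loss; a real bot would page or post to a dashboard."""
    prev = read_counters(iface)
    while True:
        time.sleep(interval_s)
        curr = read_counters(iface)
        alerts = detect_loss(prev, curr)
        if alerts:
            print(f"{iface}: loss detected {alerts}")
        prev = curr

if __name__ == "__main__":
    watch("eth0")
```

For an RDMA fabric the same loop would also watch NIC hardware counters (e.g. RoCE pause/ECN statistics exposed by the vendor driver) rather than sysfs drops alone.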