Machine Learning Development
- Design, build, and deploy robust ML models using Python and industry-standard ML frameworks (TensorFlow, PyTorch, scikit-learn, XGBoost, etc.).
- Collaborate with data scientists to translate prototypes into production-ready systems.
- Perform feature engineering, data preprocessing, model selection, hyperparameter tuning, and performance optimization.

MLOps & Productionization
- Develop and maintain ML pipelines using AWS SageMaker, MLflow, H2O.ai, and other automation tools.
- Implement best practices for model versioning, lineage tracking, model performance monitoring, and retraining.
- Set up CI/CD pipelines for ML services and automate deployment workflows.

Cloud & Distributed Systems
- Architect and operate scalable ML workflows in AWS, including SageMaker, Step Functions, S3, ECR, CloudWatch, IAM, etc.
- Build and optimize distributed data processing pipelines using PySpark and AWS EMR.
- Ensure reliability, scalability, and cost efficiency of ML environments.

Data Engineering Integration
- Work closely with data engineering teams to build robust data ingestion and transformation pipelines.
- Improve data quality, reliability, and observability for ML use cases.
- Perform heavy hands-on coding with PySpark, SQL, and Python-based ETL workflows.

Collaboration & Leadership
- Provide technical mentorship and guidance to junior ML engineers and data scientists.
- Lead architectural discussions and participate in design reviews.
- Partner with cross-functional teams to scope and deliver ML-driven products.

Required Qualifications
- 5-8+ years of professional experience in ML engineering, data engineering, or related fields.
- Expert-level proficiency in Python and ML frameworks (scikit-learn, TensorFlow, PyTorch, H2O, XGBoost, etc.).
- Hands-on experience with AWS SageMaker for training, tuning, deployment, and pipeline automation.
- Strong knowledge of H2O.ai (Driverless AI or H2O-3), AutoML frameworks, and enterprise ML workflows.
- Proficiency with MLflow for experiment tracking, model packaging, and deployment.
- Advanced experience with PySpark and distributed data processing.
- Experience with AWS EMR for Spark cluster management and large-scale data transformations.
- Solid understanding of MLOps concepts: CI/CD for ML, feature stores, monitoring, drift detection, and model governance.
- Strong background in object-oriented programming, algorithm design, and software engineering best practices.
- Experience with Docker and containerized ML workloads.

Preferred Qualifications
- Knowledge of Kubernetes (EKS) for ML deployment.
- Experience implementing model monitoring systems (e.g., Neptune, SageMaker Model Monitor, custom solutions).
- Familiarity with microservices, REST APIs, and event-driven architectures.
- Experience with large language models (LLMs) and vector databases is a plus.

Soft Skills
- Excellent problem-solving and analytical skills.
- Strong communication and documentation abilities.
- Ability to operate in a fast-paced, cross-functional environment.
- Passion for experimentation, innovation, and continuous improvement.
05/02/2026
Full time
Be a part of our mission! As a world leader in creating comfortable, sustainable, and efficient climate solutions for buildings, homes, and transportation, it's our responsibility to put the planet first. For us at Trane Technologies, and through our businesses including Trane and Thermo King, sustainability is not just how we do business; it is our business. Do you dare to look at the world's challenges and see impactful possibilities? Do you want to contribute to making a better future? If the answer is yes, we invite you to consider joining us in boldly challenging what's possible for a sustainable world. Learn about our benefits designed for you to Thrive at work and at home. We boldly go.

Where is the work: Monday to Thursday, work onsite with your colleagues. Fridays, choose your work location, balancing what your work requires.

What's in it for you: A sustainable future demands ongoing digital advancement. Our digital solutions team leads the way in developing next-generation climate technology focused on reducing demand-side energy consumption and emissions. Our team, including BrainBox AI, Nuvolo, and more, combines technical expertise with advanced analytics to create data-driven solutions that add real value for customers, communities, and the planet. Whether you're advancing AI in HVAC or driving analytics for greater efficiency, your ideas will help engineer solutions for stronger communities and a sustainable world.

Trane Technologies is currently seeking a Senior Software Engineer for our Digital Solutions team. In this cloud-first, AI-driven engineering environment, you'll play a key role in transforming our flagship proprietary software. You'll help modernize a core technology platform, introducing advanced cloud architectures and unlocking the power of machine learning and artificial intelligence to deliver smarter, more sustainable climate solutions.
This is an opportunity to directly influence the future of our digital product portfolio, shaping solutions at the intersection of data, cloud, and AI for critical applications in buildings, homes, and transportation. You'll collaborate across global, cross-functional teams, leveraging cutting-edge development practices and participating in a culture of innovation and continuous learning. We're looking for a hands-on, team-oriented engineer who thrives on complex problem solving, architectural design, and creating intelligent logic for real-world customer needs. Success in this role will rely on your curiosity, diligence, technical leadership, awareness of industry best practices, and your commitment to quality, security, and customer success. You'll be empowered to help define and deliver truly next-generation technologies for a more sustainable world.

What you will do:
- Design, develop, and deploy highly scalable and reliable cloud-based applications, integrating AI and machine learning to deliver innovative solutions for customers.
- Build, maintain, and iterate on both front-end (React) and back-end (Python or Node.js) components as part of modern, user-focused web applications.
- Collaborate with product owners, data scientists, and UI/UX designers to create seamless, intuitive, and visually appealing interfaces.
- Architect and implement robust, secure microservices and APIs on AWS or similar cloud platforms.
- Develop and optimize data pipelines for big data and analytics, leveraging modern data stores such as columnar databases.
- Apply best practices for application security, scalability, and performance in a cloud-centric environment.
- Champion DevOps methodologies, including CI/CD, automated testing, monitoring, and infrastructure as code, to ensure rapid and reliable delivery.
- Work closely with global teams in an Agile environment, mentoring peers and contributing to code reviews.
- Stay up to date with emerging technologies, frameworks, and trends in AI, cloud, and full-stack development.

What you will bring:
- Bachelor's or Master's degree in Computer Science, Engineering, or a STEM-related field.
- 5 years of hands-on software development experience, including building, testing, and deploying cloud-native solutions.
- Proven full-stack engineering expertise with React for front-end and Python or Node.js for back-end development.
- Strong UI development skills, with a demonstrated ability to deliver user-friendly, accessible, and responsive web interfaces.
- Extensive experience with AWS or other major cloud platforms (Azure, GCP), including leveraging managed services for scaling, security, and automation.
- Working knowledge of big data, analytics platforms, and columnar databases.
- Solid background in application security best practices within a cloud environment.
- Proficiency with DevOps tools and practices (CI/CD, Docker, Kubernetes, infrastructure as code, cloud monitoring).
- Experience collaborating within cross-functional Agile teams and effectively communicating technical concepts.
- Experience integrating and deploying AI/ML models into production applications is a plus.
- Passion for continuous learning and driving innovation through technology.

Annual Base Salary Range or Hourly Base Pay Range: $127,110.00 - $177,870.00
Compensation Type: Salary
Incentive Eligible: No
Sales Commission Eligible: No

Disclaimer: We strive to provide competitive compensation for this position, tailored to a variety of factors. The actual compensation will depend on elements such as seniority, merit, geographic location, education, experience, travel requirements, and union designation. Our compensation range is generally based on the national average for the country. Additionally, benefits may vary depending on the region, business alignment, union involvement, and employee status.

Safety Sensitive Role: No
The company designates certain roles as Safety Sensitive.
Safety Sensitive roles may require that you pass additional drug screening. We offer competitive compensation and comprehensive benefits and programs. We are an equal opportunity employer; all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, pregnancy, age, marital status, disability, status as a protected veteran, or any legally protected status.
05/02/2026
Full time
JT4, LLC provides engineering and technical support to multiple western test ranges for the U.S. Air Force, Space Force, and Navy under the Joint Range Technical Services Contract, better known as J-Tech II. JT4 develops and maintains realistic, integrated test and training environments and prepares our nation's war-fighting aircraft, weapons systems, and aircrews for today's missions and tomorrow's global challenges.

JOB SUMMARY
The Management Systems Computer Scientist will be the technical visionary responsible for designing and leading our enterprise data strategy in a cloud environment. You will conduct critical analyses of alternatives and provide data-driven recommendations that shape the future of our core platforms. Your primary focus will be to guide the "build vs. buy" decisions for our next-generation data manager, mission manager, and user manager systems to ensure they are scalable, secure, and mission-ready.

JOB DUTIES & EXPERIENCE
- Strong Docker, Windows, and Linux skills
- Hands-on experience with a major cloud provider (AWS, Azure, GCP), including services like RDS, Aurora, DynamoDB, Azure SQL, or Cloud Spanner
- Knowledge of both SQL (e.g., PostgreSQL, MySQL, SQL Server) and NoSQL (e.g., MongoDB, Cassandra, Redis) database technologies
- Experience conducting formal technology trade-off analyses (AoAs) and cost-benefit analyses
- Ability to articulate complex technical concepts to non-technical stakeholders and provide clear "build vs. buy" recommendations
- Design and implement scalable, secure, and highly available cloud database solutions on platforms like AWS and Azure
- Develop and maintain data models, database schemas, and data flow diagrams that align with architecture standards
- Lead formal Analyses of Alternatives (AoAs) to compare various database technologies (e.g., SQL vs. NoSQL, graph databases), platforms, and vendors
- Serve as the subject matter expert for database systems, providing guidance to development teams on best practices for performance, security, and scalability
- Establish and oversee database performance tuning, optimization, security protocols, and monitoring strategies

REQUIREMENTS - EDUCATION, TECHNICAL, AND WORK EXPERIENCE
A bachelor's degree in an associated discipline is required for this position. In addition, a Computer Scientist I must possess the following qualifications:
- Knowledge of computer-based systems and applications
- Programming skills in languages used for job-specific programming tasks
- Familiarity with systems engineering and software development lifecycles
- Effective verbal and written communication skills
- Good planning/organizational skills
- Ability to work under deadlines

The candidate must possess a valid, state-issued driver's license. Must be able to obtain and maintain a security clearance. Must be a U.S. citizen.

SALARY
The expected salary range for this position is $123,000 to $140,000 annually. Note: The salary range offered for this position is a good-faith description of the expected salary range this role will pay. JT4, LLC considers factors such as (but not limited to) the responsibilities of the position, the candidate's work experience, education/training, key skills, internal peer equity, and market and business considerations when extending an offer.

BENEFITS
- Medical, Dental, and Vision Insurance (benefits active on Day 1)
- Life Insurance
- Health Savings Accounts/FSAs
- Disability Insurance
- Paid Time Off
- 401(k) Plan Options with Employer Match (JT4 will match 50%, up to an 8% contribution; 100% immediate vesting)
- Tuition Reimbursement

OTHER RESPONSIBILITIES
Each employee must read, understand, and implement the general and specific operational, safety, quality, and environmental requirements of all plans, procedures, and policies pertaining to their job.

WORKING CONDITIONS
Work is performed in a typical office environment with no unusual hazards. Occasional lifting (up to 20 pounds), constant sitting while using the computer terminal, constant use of sight abilities while reviewing documents, constant use of speech/hearing abilities for communication, and constant mental alertness are required. Travel to remote company work locations may be required.

DISCLAIMER
The above statements are intended to describe the general nature and level of work being performed by personnel assigned to this classification. They are not intended to be construed as an exhaustive list of all responsibilities, duties, and skills required of persons so classified. Tasking is in support of a Federal Government contract that requires U.S. citizenship. Some jobs may require a candidate to be eligible for a government security clearance, a state-issued driver's license, or other licenses/certifications, and the inability to obtain and maintain the required clearance, license, or certification may affect an employee's ability to maintain employment.

SCC: JSD12, A1412TW
05/01/2026
Full time
Principal Solutions Architect: Drive AI Growth + Build Enterprise ML Solutions

This Jobot Job is hosted by: Robert Donohue
Are you a fit? Easy Apply now by clicking the "Apply" button and sending us your resume.
Salary: $170,000 - $225,000 per year

A bit about us:
We are a premier modern data and AI consultancy helping enterprise organizations unlock value through cloud, analytics, machine learning, and intelligent automation. Our partnerships include leading platforms such as Snowflake, Amazon Web Services, Microsoft Azure, Google Cloud, Databricks, dbt Labs, and Fivetran. We are a remote-first organization with team members across the United States, Latin America, and India. Our culture is built around ownership, innovation, trust, curiosity, and delivering measurable outcomes for clients.

Why join us?
- Competitive Compensation: $170,000 - $225,000 base salary + bonus
- Remote-First: Work from anywhere in the US with occasional customer-site travel nationwide
- Massive Growth: Be part of a company growing 40% YoY, creating career advancement opportunities
- Principal-level role combining technical leadership + client growth
- Lead enterprise AI/ML transformations for top-tier customers
- Award-Winning Culture: Collaborative, inclusive, and committed to professional development
- Learning & Development: Accelerated training, advanced certifications, and exposure to AI/ML innovation
- Time Off & Benefits: 4 weeks PTO, 10 paid holidays, health/dental/vision insurance, 401(k), and additional perks

Job Details
We are seeking a Principal Solutions Architect - Machine Learning to serve as a trusted advisor to clients while leading the design and delivery of enterprise AI/ML solutions. This role is ideal for someone with strong hands-on architecture experience who has also succeeded in consulting, pre-sales, post-sales, account growth, or strategic client leadership environments.
What You'll Do

Client Engagement & Account Growth
- Build executive relationships with client stakeholders
- Understand long-term AI/ML priorities and create strategic roadmaps
- Identify expansion opportunities where additional AI or data solutions add value
- Partner with internal leadership to grow strategic accounts
- Lead RFPs, RFIs, solution proposals, and executive presentations
- Translate technical capabilities into business value and ROI

Technical Leadership & Delivery
- Architect end-to-end AI/ML solutions focused on scalability and performance
- Lead teams of ML Engineers, Data Scientists, and Architects
- Oversee AI platforms, production pipelines, and model deployment strategies
- Mentor teams on best practices in machine learning architecture and delivery
- Ensure successful execution across design, testing, deployment, and optimization
- Stay current on emerging AI technologies and recommend innovation paths

Required Background
- 8+ years in Solutions Architecture, Machine Learning Engineering, Data Science, or Software Engineering
- 3+ years leading consulting engagements, client relationships, or account growth initiatives
- Experience designing and deploying AI/ML solutions in production
- Strong track record in pre-sales, proposal development, or account expansion
- Deep expertise with Amazon Web Services, Google Cloud, Microsoft Azure, Databricks, Vertex AI, or Amazon SageMaker
- Strong coding skills in Python, Java, Scala, or similar
- Experience leading cross-functional technical teams
- Strong client-facing communication and executive presence
- Bachelor's degree in Computer Science or a related field

Nice to Have
- Master's degree in Data Science or a related discipline
- Experience with TensorFlow, Keras, scikit-learn, MLflow, H2O
- Docker/Kubernetes experience
- Experience building enterprise ML products and APIs
- Open-source contributions or side projects
- Expertise with MLOps frameworks and production model monitoring

Interested in hearing more? Easy Apply now by clicking the "Apply" button.
Jobot is an Equal Opportunity Employer. We provide an inclusive work environment that celebrates diversity, and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information, or any other basis protected by applicable federal, state, or local laws. Jobot also prohibits harassment of applicants or employees based on any of these protected categories. It is Jobot's policy to comply with all applicable federal, state, and local laws respecting consideration of unemployment status in making hiring decisions.

Sometimes Jobot is required to perform background checks with your authorization. Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance.

Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit, is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools, which are available at By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot and/or its agents and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here:
04/27/2026
Full time
Principal Solutions Architect Drive AI Growth + Build Enterprise ML Solutions This Jobot Job is hosted by: Robert Donohue Are you a fit? Easy Apply now by clicking the "Apply" button and sending us your resume. Salary: $170,000 - $225,000 per year A bit about us: We are a premier modern data and AI consultancy helping enterprise organizations unlock value through cloud, analytics, machine learning, and intelligent automation. Our partnerships include leading platforms such as Snowflake, Amazon Web Services, Microsoft Azure, Google Cloud, Databricks, dbt Labs, and Fivetran. We are a remote-first organization with team members across the United States, Latin America, and India. Our culture is built around ownership, innovation, trust, curiosity, and delivering measurable outcomes for clients. Why join us? Competitive Compensation: $170,000 - $225,000 base salary + Bonus Remote-First: Work from anywhere in the US with occasional customer-site travel nationwide Massive Growth: Be part of a company growing 40% YOY, creating career advancement opportunities Principal-level role combining technical leadership + client growth Lead enterprise AI / ML transformations for top-tier customers Award-Winning Culture: Collaborative, inclusive, and committed to professional development Learning & Development: Accelerated training, advanced certifications, and exposure to AI/ML innovation Time Off & Benefits: 4 weeks PTO, 10 paid holidays, health/dental/vision insurance, 401(k), and additional perks Job Details We are seeking a Principal Solutions Architect - Machine Learning to serve as a trusted advisor to clients while leading the design and delivery of enterprise AI / ML solutions. This role is ideal for someone with strong hands-on architecture experience who has also succeeded in consulting, pre-sales, post-sales, account growth, or strategic client leadership environments. 
What You'll Do Client Engagement & Account Growth Build executive relationships with client stakeholders Understand long-term AI / ML priorities and create strategic roadmaps Identify expansion opportunities where additional AI or data solutions add value Partner with internal leadership to grow strategic accounts Lead RFPs, RFIs, solution proposals, and executive presentations Translate technical capabilities into business value and ROI Technical Leadership & Delivery Architect end-to-end AI / ML solutions focused on scalability and performance Lead teams of ML Engineers, Data Scientists, and Architects Oversee AI platforms, production pipelines, and model deployment strategies Mentor teams on best practices in machine learning architecture and delivery Ensure successful execution across design, testing, deployment, and optimization Stay current on emerging AI technologies and recommend innovation paths Required Background 8+ years in Solutions Architecture, Machine Learning Engineering, Data Science, or Software Engineering 3+ years leading consulting engagements, client relationships, or account growth initiatives Experience designing and deploying AI / ML solutions in production Strong pre-sales, proposal development, or expansion success Deep expertise with Amazon Web Services, Google Cloud, Microsoft Azure, Databricks, Vertex AI, or Amazon SageMaker Strong coding skills in Python, Java, Scala, or similar Experience leading cross-functional technical teams Strong client-facing communication and executive presence Bachelor's degree in Computer Science or related field Nice to Have Master's degree in Data Science or related discipline Experience with TensorFlow, Keras, scikit-learn, MLflow, h2o Docker / Kubernetes experience Experience building enterprise ML products and APIs Open-source contributions or side projects Expertise with MLOps frameworks and production model monitoring Interested in hearing more? Easy Apply now by clicking the "Apply" button. 
Jobot is an Equal Opportunity Employer. We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws. Jobot also prohibits harassment of applicants or employees based on any of these protected categories. It is Jobot's policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions. Sometimes Jobot is required to perform background checks with your authorization. Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance. Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here:
Principal Solutions Architect Drive AI Growth + Build Enterprise ML Solutions This Jobot Job is hosted by: Robert Donohue Are you a fit? Easy Apply now by clicking the "Apply" button and sending us your resume. Salary: $170,000 - $225,000 per year A bit about us: We are a premier modern data and AI consultancy helping enterprise organizations unlock value through cloud, analytics, machine learning, and intelligent automation. Our partnerships include leading platforms such as Snowflake, Amazon Web Services, Microsoft Azure, Google Cloud, Databricks, dbt Labs, and Fivetran. We are a remote-first organization with team members across the United States, Latin America, and India. Our culture is built around ownership, innovation, trust, curiosity, and delivering measurable outcomes for clients. Why join us? Competitive Compensation: $170,000 - $225,000 base salary + Bonus Remote-First: Work from anywhere in the US with occasional customer-site travel nationwide Massive Growth: Be part of a company growing 40% YOY, creating career advancement opportunities Principal-level role combining technical leadership + client growth Lead enterprise AI / ML transformations for top-tier customers Award-Winning Culture: Collaborative, inclusive, and committed to professional development Learning & Development: Accelerated training, advanced certifications, and exposure to AI/ML innovation Time Off & Benefits: 4 weeks PTO, 10 paid holidays, health/dental/vision insurance, 401(k), and additional perks Job Details We are seeking a Principal Solutions Architect - Machine Learning to serve as a trusted advisor to clients while leading the design and delivery of enterprise AI / ML solutions. This role is ideal for someone with strong hands-on architecture experience who has also succeeded in consulting, pre-sales, post-sales, account growth, or strategic client leadership environments. 
What You'll Do Client Engagement & Account Growth Build executive relationships with client stakeholders Understand long-term AI / ML priorities and create strategic roadmaps Identify expansion opportunities where additional AI or data solutions add value Partner with internal leadership to grow strategic accounts Lead RFPs, RFIs, solution proposals, and executive presentations Translate technical capabilities into business value and ROI Technical Leadership & Delivery Architect end-to-end AI / ML solutions focused on scalability and performance Lead teams of ML Engineers, Data Scientists, and Architects Oversee AI platforms, production pipelines, and model deployment strategies Mentor teams on best practices in machine learning architecture and delivery Ensure successful execution across design, testing, deployment, and optimization Stay current on emerging AI technologies and recommend innovation paths Required Background 8+ years in Solutions Architecture, Machine Learning Engineering, Data Science, or Software Engineering 3+ years leading consulting engagements, client relationships, or account growth initiatives Experience designing and deploying AI / ML solutions in production Strong pre-sales, proposal development, or expansion success Deep expertise with Amazon Web Services, Google Cloud, Microsoft Azure, Databricks, Vertex AI, or Amazon SageMaker Strong coding skills in Python, Java, Scala, or similar Experience leading cross-functional technical teams Strong client-facing communication and executive presence Bachelor's degree in Computer Science or related field Nice to Have Master's degree in Data Science or related discipline Experience with TensorFlow, Keras, scikit-learn, MLflow, h2o Docker / Kubernetes experience Experience building enterprise ML products and APIs Open-source contributions or side projects Expertise with MLOps frameworks and production model monitoring Interested in hearing more? Easy Apply now by clicking the "Apply" button. 
Jobot is an Equal Opportunity Employer. We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws. Jobot also prohibits harassment of applicants or employees based on any of these protected categories. It is Jobot's policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions. Sometimes Jobot is required to perform background checks with your authorization. Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance. Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here:
04/27/2026
Full time
Principal Solutions Architect Drive AI Growth + Build Enterprise ML Solutions This Jobot Job is hosted by: Robert Donohue Are you a fit? Easy Apply now by clicking the "Apply" button and sending us your resume. Salary: $170,000 - $225,000 per year A bit about us: We are a premier modern data and AI consultancy helping enterprise organizations unlock value through cloud, analytics, machine learning, and intelligent automation. Our partnerships include leading platforms such as Snowflake, Amazon Web Services, Microsoft Azure, Google Cloud, Databricks, dbt Labs, and Fivetran. We are a remote-first organization with team members across the United States, Latin America, and India. Our culture is built around ownership, innovation, trust, curiosity, and delivering measurable outcomes for clients. Why join us? Competitive Compensation: $170,000 - $225,000 base salary + Bonus Remote-First: Work from anywhere in the US with occasional customer-site travel nationwide Massive Growth: Be part of a company growing 40% YOY, creating career advancement opportunities Principal-level role combining technical leadership + client growth Lead enterprise AI / ML transformations for top-tier customers Award-Winning Culture: Collaborative, inclusive, and committed to professional development Learning & Development: Accelerated training, advanced certifications, and exposure to AI/ML innovation Time Off & Benefits: 4 weeks PTO, 10 paid holidays, health/dental/vision insurance, 401(k), and additional perks Job Details We are seeking a Principal Solutions Architect - Machine Learning to serve as a trusted advisor to clients while leading the design and delivery of enterprise AI / ML solutions. This role is ideal for someone with strong hands-on architecture experience who has also succeeded in consulting, pre-sales, post-sales, account growth, or strategic client leadership environments. 
What You'll Do Client Engagement & Account Growth Build executive relationships with client stakeholders Understand long-term AI / ML priorities and create strategic roadmaps Identify expansion opportunities where additional AI or data solutions add value Partner with internal leadership to grow strategic accounts Lead RFPs, RFIs, solution proposals, and executive presentations Translate technical capabilities into business value and ROI Technical Leadership & Delivery Architect end-to-end AI / ML solutions focused on scalability and performance Lead teams of ML Engineers, Data Scientists, and Architects Oversee AI platforms, production pipelines, and model deployment strategies Mentor teams on best practices in machine learning architecture and delivery Ensure successful execution across design, testing, deployment, and optimization Stay current on emerging AI technologies and recommend innovation paths Required Background 8+ years in Solutions Architecture, Machine Learning Engineering, Data Science, or Software Engineering 3+ years leading consulting engagements, client relationships, or account growth initiatives Experience designing and deploying AI / ML solutions in production Strong pre-sales, proposal development, or expansion success Deep expertise with Amazon Web Services, Google Cloud, Microsoft Azure, Databricks, Vertex AI, or Amazon SageMaker Strong coding skills in Python, Java, Scala, or similar Experience leading cross-functional technical teams Strong client-facing communication and executive presence Bachelor's degree in Computer Science or related field Nice to Have Master's degree in Data Science or related discipline Experience with TensorFlow, Keras, scikit-learn, MLflow, h2o Docker / Kubernetes experience Experience building enterprise ML products and APIs Open-source contributions or side projects Expertise with MLOps frameworks and production model monitoring Interested in hearing more? Easy Apply now by clicking the "Apply" button. 
Jobot is an Equal Opportunity Employer. We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws. Jobot also prohibits harassment of applicants or employees based on any of these protected categories. It is Jobot's policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions. Sometimes Jobot is required to perform background checks with your authorization. Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance. Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here:
Principal Solutions Architect Drive AI Growth + Build Enterprise ML Solutions This Jobot Job is hosted by: Robert Donohue Are you a fit? Easy Apply now by clicking the "Apply" button and sending us your resume. Salary: $170,000 - $225,000 per year A bit about us: We are a premier modern data and AI consultancy helping enterprise organizations unlock value through cloud, analytics, machine learning, and intelligent automation. Our partnerships include leading platforms such as Snowflake, Amazon Web Services, Microsoft Azure, Google Cloud, Databricks, dbt Labs, and Fivetran. We are a remote-first organization with team members across the United States, Latin America, and India. Our culture is built around ownership, innovation, trust, curiosity, and delivering measurable outcomes for clients. Why join us? Competitive Compensation: $170,000 - $225,000 base salary + Bonus Remote-First: Work from anywhere in the US with occasional customer-site travel nationwide Massive Growth: Be part of a company growing 40% YOY, creating career advancement opportunities Principal-level role combining technical leadership + client growth Lead enterprise AI / ML transformations for top-tier customers Award-Winning Culture: Collaborative, inclusive, and committed to professional development Learning & Development: Accelerated training, advanced certifications, and exposure to AI/ML innovation Time Off & Benefits: 4 weeks PTO, 10 paid holidays, health/dental/vision insurance, 401(k), and additional perks Job Details We are seeking a Principal Solutions Architect - Machine Learning to serve as a trusted advisor to clients while leading the design and delivery of enterprise AI / ML solutions. This role is ideal for someone with strong hands-on architecture experience who has also succeeded in consulting, pre-sales, post-sales, account growth, or strategic client leadership environments. 
What You'll Do Client Engagement & Account Growth Build executive relationships with client stakeholders Understand long-term AI / ML priorities and create strategic roadmaps Identify expansion opportunities where additional AI or data solutions add value Partner with internal leadership to grow strategic accounts Lead RFPs, RFIs, solution proposals, and executive presentations Translate technical capabilities into business value and ROI Technical Leadership & Delivery Architect end-to-end AI / ML solutions focused on scalability and performance Lead teams of ML Engineers, Data Scientists, and Architects Oversee AI platforms, production pipelines, and model deployment strategies Mentor teams on best practices in machine learning architecture and delivery Ensure successful execution across design, testing, deployment, and optimization Stay current on emerging AI technologies and recommend innovation paths Required Background 8+ years in Solutions Architecture, Machine Learning Engineering, Data Science, or Software Engineering 3+ years leading consulting engagements, client relationships, or account growth initiatives Experience designing and deploying AI / ML solutions in production Strong pre-sales, proposal development, or expansion success Deep expertise with Amazon Web Services, Google Cloud, Microsoft Azure, Databricks, Vertex AI, or Amazon SageMaker Strong coding skills in Python, Java, Scala, or similar Experience leading cross-functional technical teams Strong client-facing communication and executive presence Bachelor's degree in Computer Science or related field Nice to Have Master's degree in Data Science or related discipline Experience with TensorFlow, Keras, scikit-learn, MLflow, h2o Docker / Kubernetes experience Experience building enterprise ML products and APIs Open-source contributions or side projects Expertise with MLOps frameworks and production model monitoring Interested in hearing more? Easy Apply now by clicking the "Apply" button. 
Jobot is an Equal Opportunity Employer. We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws. Jobot also prohibits harassment of applicants or employees based on any of these protected categories. It is Jobot's policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions. Sometimes Jobot is required to perform background checks with your authorization. Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance. Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here:
04/27/2026
Full time
Principal Solutions Architect Drive AI Growth + Build Enterprise ML Solutions This Jobot Job is hosted by: Robert Donohue Are you a fit? Easy Apply now by clicking the "Apply" button and sending us your resume. Salary: $170,000 - $225,000 per year A bit about us: We are a premier modern data and AI consultancy helping enterprise organizations unlock value through cloud, analytics, machine learning, and intelligent automation. Our partnerships include leading platforms such as Snowflake, Amazon Web Services, Microsoft Azure, Google Cloud, Databricks, dbt Labs, and Fivetran. We are a remote-first organization with team members across the United States, Latin America, and India. Our culture is built around ownership, innovation, trust, curiosity, and delivering measurable outcomes for clients. Why join us? Competitive Compensation: $170,000 - $225,000 base salary + Bonus Remote-First: Work from anywhere in the US with occasional customer-site travel nationwide Massive Growth: Be part of a company growing 40% YOY, creating career advancement opportunities Principal-level role combining technical leadership + client growth Lead enterprise AI / ML transformations for top-tier customers Award-Winning Culture: Collaborative, inclusive, and committed to professional development Learning & Development: Accelerated training, advanced certifications, and exposure to AI/ML innovation Time Off & Benefits: 4 weeks PTO, 10 paid holidays, health/dental/vision insurance, 401(k), and additional perks Job Details We are seeking a Principal Solutions Architect - Machine Learning to serve as a trusted advisor to clients while leading the design and delivery of enterprise AI / ML solutions. This role is ideal for someone with strong hands-on architecture experience who has also succeeded in consulting, pre-sales, post-sales, account growth, or strategic client leadership environments. 
What You'll Do Client Engagement & Account Growth Build executive relationships with client stakeholders Understand long-term AI / ML priorities and create strategic roadmaps Identify expansion opportunities where additional AI or data solutions add value Partner with internal leadership to grow strategic accounts Lead RFPs, RFIs, solution proposals, and executive presentations Translate technical capabilities into business value and ROI Technical Leadership & Delivery Architect end-to-end AI / ML solutions focused on scalability and performance Lead teams of ML Engineers, Data Scientists, and Architects Oversee AI platforms, production pipelines, and model deployment strategies Mentor teams on best practices in machine learning architecture and delivery Ensure successful execution across design, testing, deployment, and optimization Stay current on emerging AI technologies and recommend innovation paths Required Background 8+ years in Solutions Architecture, Machine Learning Engineering, Data Science, or Software Engineering 3+ years leading consulting engagements, client relationships, or account growth initiatives Experience designing and deploying AI / ML solutions in production Strong pre-sales, proposal development, or expansion success Deep expertise with Amazon Web Services, Google Cloud, Microsoft Azure, Databricks, Vertex AI, or Amazon SageMaker Strong coding skills in Python, Java, Scala, or similar Experience leading cross-functional technical teams Strong client-facing communication and executive presence Bachelor's degree in Computer Science or related field Nice to Have Master's degree in Data Science or related discipline Experience with TensorFlow, Keras, scikit-learn, MLflow, h2o Docker / Kubernetes experience Experience building enterprise ML products and APIs Open-source contributions or side projects Expertise with MLOps frameworks and production model monitoring Interested in hearing more? Easy Apply now by clicking the "Apply" button. 
Jobot is an Equal Opportunity Employer. We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws. Jobot also prohibits harassment of applicants or employees based on any of these protected categories. It is Jobot's policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions. Sometimes Jobot is required to perform background checks with your authorization. Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance. Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here:
Principal Solutions Architect Drive AI Growth + Build Enterprise ML Solutions This Jobot Job is hosted by: Robert Donohue Are you a fit? Easy Apply now by clicking the "Apply" button and sending us your resume. Salary: $170,000 - $225,000 per year A bit about us: We are a premier modern data and AI consultancy helping enterprise organizations unlock value through cloud, analytics, machine learning, and intelligent automation. Our partnerships include leading platforms such as Snowflake, Amazon Web Services, Microsoft Azure, Google Cloud, Databricks, dbt Labs, and Fivetran. We are a remote-first organization with team members across the United States, Latin America, and India. Our culture is built around ownership, innovation, trust, curiosity, and delivering measurable outcomes for clients. Why join us? Competitive Compensation: $170,000 - $225,000 base salary + Bonus Remote-First: Work from anywhere in the US with occasional customer-site travel nationwide Massive Growth: Be part of a company growing 40% YOY, creating career advancement opportunities Principal-level role combining technical leadership + client growth Lead enterprise AI / ML transformations for top-tier customers Award-Winning Culture: Collaborative, inclusive, and committed to professional development Learning & Development: Accelerated training, advanced certifications, and exposure to AI/ML innovation Time Off & Benefits: 4 weeks PTO, 10 paid holidays, health/dental/vision insurance, 401(k), and additional perks Job Details We are seeking a Principal Solutions Architect - Machine Learning to serve as a trusted advisor to clients while leading the design and delivery of enterprise AI / ML solutions. This role is ideal for someone with strong hands-on architecture experience who has also succeeded in consulting, pre-sales, post-sales, account growth, or strategic client leadership environments. 
What You'll Do Client Engagement & Account Growth Build executive relationships with client stakeholders Understand long-term AI / ML priorities and create strategic roadmaps Identify expansion opportunities where additional AI or data solutions add value Partner with internal leadership to grow strategic accounts Lead RFPs, RFIs, solution proposals, and executive presentations Translate technical capabilities into business value and ROI Technical Leadership & Delivery Architect end-to-end AI / ML solutions focused on scalability and performance Lead teams of ML Engineers, Data Scientists, and Architects Oversee AI platforms, production pipelines, and model deployment strategies Mentor teams on best practices in machine learning architecture and delivery Ensure successful execution across design, testing, deployment, and optimization Stay current on emerging AI technologies and recommend innovation paths Required Background 8+ years in Solutions Architecture, Machine Learning Engineering, Data Science, or Software Engineering 3+ years leading consulting engagements, client relationships, or account growth initiatives Experience designing and deploying AI / ML solutions in production Strong pre-sales, proposal development, or expansion success Deep expertise with Amazon Web Services, Google Cloud, Microsoft Azure, Databricks, Vertex AI, or Amazon SageMaker Strong coding skills in Python, Java, Scala, or similar Experience leading cross-functional technical teams Strong client-facing communication and executive presence Bachelor's degree in Computer Science or related field Nice to Have Master's degree in Data Science or related discipline Experience with TensorFlow, Keras, scikit-learn, MLflow, h2o Docker / Kubernetes experience Experience building enterprise ML products and APIs Open-source contributions or side projects Expertise with MLOps frameworks and production model monitoring Interested in hearing more? Easy Apply now by clicking the "Apply" button. 
Jobot is an Equal Opportunity Employer. We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws. Jobot also prohibits harassment of applicants or employees based on any of these protected categories. It is Jobot's policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions. Sometimes Jobot is required to perform background checks with your authorization. Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance. Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here:
04/27/2026
Full time
Biomedical Cloud Engineer
School of Medicine, Stanford, California, United States
Information Technology Services
Post Date: Aug 08, 2025
Requisition: 107025

The Stanford Center for Genomics and Personalized Medicine (SCGPM) has an exciting opportunity available for a motivated Biomedical Cloud Engineer to create innovative data architectures that will automate the process of turning big genomic data into biomedical insights. The ideal person for this position is a keen listener who can interpret biological questions, assess the value and relevance of different technologies and methods, and deliver actionable technical solutions.

Background:
The Department of Veterans Affairs (VA) has commissioned the sequencing of hundreds of thousands of whole genomes from participants in the Million Veteran Program (MVP). This data is currently being delivered to the SCGPM's cloud computing environment and constitutes one of the largest repositories of whole-genome sequencing data in the world. The scale and richness of this data make it an incredible resource for biomedical research. Our goal is to turn this data lake into a data commons: a dynamic computing environment where researchers bring questions and get answers, all without having to go through the ordeal of manually collecting, cleaning, massaging, scrubbing, sorting, transforming, and filtering data. As an example of a publication from this group, see this reference describing the early design of our data processing system: Ross, P.B., Song, J., Tsao, P.S. et al. Trellis for efficient data and task management in the VA Million Veteran Program. Scientific Reports 11, 23229 (2021).

Position:
In this position, you would be the system developer of the cloud-based MVP data management system that we have created, called Trellis.
Trellis stores the petabytes of sequence data contributed to the MVP by veterans and orchestrates its processing while keeping track of what programs were used, maintaining a detailed record of data provenance. To manage the enormous volumes of biomedical research data that the MVP generates, we have built Trellis on the Google Cloud Platform. The Trellis architecture takes advantage of many serverless cloud services, such as Cloud Functions, Dataproc, Cloud SQL, and Pub/Sub, to make a workflow that responds to the arrival of new data by initiating pipeline processes automatically and at scale. A production version of Trellis has already processed the whole-genome sequences of over 150,000 veterans, and we plan to process at least as many more in the coming year. You would take the lead in keeping this production system running and optimized, and you would interface with our SecOps team, which maintains the system in a FedRAMP-secure environment.

Our Team:
Our SCGPM bioinformatics team is a multi-disciplinary group of about a dozen scientists, engineers, and software developers with complementary backgrounds, each contributing their own expertise in managing and analyzing complex biomedical data. Other projects supported by this team include the NCI Human Tumor Atlas Network, the Human BioMolecular Atlas Program, and the Stanford Metabolic Health Center. This position can be on-site in Palo Alto, fully remote, or hybrid.
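The event-driven pattern described above (new data lands in storage, a serverless function reacts and launches the right pipeline) can be sketched in a few lines of Python. This is an illustrative sketch only, not Trellis code: the event shape, file suffixes, and routing rules are all assumptions.

```python
from typing import Optional

# Illustrative sketch of an event-driven dispatch step, in the spirit of the
# pattern described above: a "new object" event arrives from cloud storage
# and is routed to a pipeline task message. All names, suffixes, and routing
# rules here are hypothetical, not taken from the actual Trellis system.

# Hypothetical mapping from file suffix to the pipeline that processes it.
PIPELINES = {
    ".fastq.gz": "alignment",
    ".bam": "variant-calling",
    ".vcf.gz": "annotation",
}

def route_new_object(event: dict) -> Optional[dict]:
    """Given a storage "object finalized"-style event, build a task message
    for the matching pipeline, or return None if no pipeline applies."""
    name = event["name"]
    for suffix, pipeline in PIPELINES.items():
        if name.endswith(suffix):
            return {
                "pipeline": pipeline,
                "input": f"gs://{event['bucket']}/{name}",
            }
    return None  # unrecognized object type: nothing to launch
```

In a deployment like the one described, logic of this shape would run inside a Cloud Function subscribed to bucket notifications, and the returned message would be published to a Pub/Sub topic that downstream workers consume, which is what lets the workflow scale automatically with arriving data.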
Duties include:
- Maintaining the smooth execution of our production Trellis system
- Working with our Security Operations team to respond to any security incidents
- Constructing queries to our graph database to gain insights from pipeline run data
- Implementing population-level genomic analyses (GWAS, PCA) to verify data integrity
- Designing and integrating novel bioinformatics pipelines into our Trellis system
- Troubleshooting data flow in our state-driven Trellis architecture
- Building containers for bioinformatics tools and integrating them with our internal data management system to automate workflows
- Collaborating with researchers to explore solutions to relevant biological questions and maximize the value of our whole-genome sequencing dataset to the public
- Other duties may also be assigned.

DESIRED QUALIFICATIONS:
- Four-year degree in Genetics, Computer Science, Bioinformatics, Computational Physics, or a related field
- Experience with biomedical data formats (FASTQ, FASTA, BAM, CRAM, Hail MatrixTable, et al.)
- Comfort programming in Python
- Experience with cloud computing, especially Google Cloud
- Experience with databases, especially graph databases
- Experience with big data technologies (e.g., BigQuery, Spark, Hail, Terra)
- Familiarity with issues in computer data security
- Familiarity with FedRAMP cloud security
- Familiarity with FAIR principles of data management
- Excellent verbal and written communication skills
- An ability to independently grasp the objectives of research projects and assemble solutions from a range of technologies, standards, and approaches
- A desire to learn new methods and technologies and to adapt to the demands of fast-paced research

EDUCATION & EXPERIENCE (REQUIRED):
Bachelor's degree and five years of relevant experience, or a combination of education and relevant experience.
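For the graph-database duty above, the typical shape of the work is a parameterized query over pipeline-run metadata. A minimal Python sketch follows; the node labels, relationship, and properties are invented for illustration and the real Trellis schema may well differ.

```python
# Hypothetical example of building a parameterized Cypher-style query over
# pipeline-run metadata in a graph database. The labels (Job, Genome), the
# PROCESSED relationship, and the properties are invented for illustration.

def failed_jobs_query(pipeline: str, since: str):
    """Return a Cypher query string plus its parameter map, finding failed
    jobs of one pipeline started on or after a given date."""
    cypher = (
        "MATCH (j:Job {pipeline: $pipeline})-[:PROCESSED]->(g:Genome) "
        "WHERE j.status = 'failed' AND j.started >= $since "
        "RETURN g.sample_id, j.started, j.error "
        "ORDER BY j.started DESC"
    )
    return cypher, {"pipeline": pipeline, "since": since}

# With a Neo4j-style driver this would run roughly as:
#   with driver.session() as session:
#       records = session.run(*failed_jobs_query("alignment", "2025-01-01"))
# Passing parameters separately, rather than formatting them into the query
# string, keeps query plans cacheable and avoids injection problems.
```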
KNOWLEDGE, SKILLS AND ABILITIES (REQUIRED):
- Expertise in designing, developing, testing, and deploying applications.
- Proficiency with application design and data modeling.
- Ability to define and solve logical problems for highly technical applications.
- Strong communication skills with both technical and non-technical clients.
- Ability to lead activities on structured team development projects.
- Ability to select, adapt, and effectively use a variety of programming methods.
- Knowledge of the application domain.

CERTIFICATIONS & LICENSES:
None

PHYSICAL REQUIREMENTS:
- Constantly perform desk-based computer tasks.
- Frequently sit, grasp lightly/fine manipulation.
- Occasionally stand/walk, write by hand.
- Rarely use a telephone; lift/carry/push/pull objects that weigh up to 10 pounds.

Consistent with its obligations under the law, the University will provide reasonable accommodation to any employee with a disability who requires accommodation to perform the essential functions of his or her job.

WORKING CONDITIONS:
May work extended hours, evenings and weekends.

WORK STANDARDS (from JDL):
- Interpersonal Skills: Demonstrates the ability to work well with Stanford colleagues and clients and with external organizations.
- Promote Culture of Safety: Demonstrates commitment to personal responsibility and value for safety; communicates safety concerns; uses and promotes safe behaviors based on training and lessons learned.
- Subject to and expected to comply with all applicable University policies and procedures, including but not limited to the personnel policies and other policies found in the University's Administrative Guide.

The job duties listed are typical examples of work performed by positions in this job classification and are not designed to contain or be interpreted as a comprehensive inventory of all duties, tasks, and responsibilities.
Specific duties and responsibilities may vary depending on department or program needs without changing the general nature and scope of the job or level of responsibility. Employees may also perform other duties as assigned. Consistent with its obligations under the law, the University will provide reasonable accommodations to applicants and employees with disabilities. Applicants requiring a reasonable accommodation for any part of the application or hiring process should contact Stanford University Human Resources. For all other inquiries, please submit a contact form. Stanford is an equal employment opportunity and affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other characteristic protected by law.

Additional Information
Schedule: Full-time
Job Code: 4822
Employee Status: Regular
Grade: J
Requisition ID: 107025
Work Arrangement: On Site
01/14/2026
Full time
Biomedical Cloud Engineer School of Medicine, Stanford, California, United States Information Technology Services Aug 08, 2025 Post Date 107025 Requisition Stanford Center for Genomics and Personalized Medicine (SCGPM) has an exciting opportunity available for a motivated Biomedical Cloud Engineer to create innovative data architectures that will automate the process of turning big genomic data into biomedical insights. The ideal person for this position is a keen listener who can interpret biological questions, assess the value and relevance of different technologies and methods, and deliver actionable technical solutions. Background: The Department of Veterans Affairs (VA) has commissioned the sequencing of hundreds of thousands of whole genomes from participants in the Million Veteran Program (MVP) . This data is currently being delivered to the SCGPM's cloud computing environment and constitutes one of the largest repositories of whole-genome sequencing data in the world. The scale and richness of this data make it an incredible resource for biomedical research. Our goal is to turn this data lake into a data commons: a dynamic computing environment where researchers bring questions and get answers, all without having to go through the ordeal of manually collecting, cleaning, massaging, scrubbing, sorting, transforming, and filtering data. As an example of a publication from this group, see this reference describing the early design of our data processing system: Ross, P.B., Song, J., Tsao, P.S. et al. Trellis for efficient data and task management in the VA Million Veteran Program. Scientific Reports 11, 23229 (2021). Position: In this position, you would be the system developer of the cloud-based MVP data management system that we have created called Trellis. 
Trellis stores the petabytes of sequence data contributed to the MVP by veterans and orchestrates its processing while keeping track of what programs were used, maintaining a detailed record of data provenance. To manage the enormous volumes of biomedical research data that the MVP generates, we have built Trellis in the Google Cloud Platform. The Trellis architecture takes advantage of many serverless cloud services, such as Cloud Functions, Dataproc, Cloud SQL, and Pub/Sub, to make a workflow which responds to the arrival of new data by initiating pipeline processes automatically and at scale. A production version of Trellis has already processed the whole genomic sequences of over 150,000 veterans and we plan to process at least as many more in the coming year. You would take the lead in keeping this production system running and optimized, and you would interface with our SecOps team which maintains that system in a FedRAMP-secure environment. Our Team: Our SCGPM bioinformatics team is a multi-disciplinary group composed of about a dozen scientists, engineers, and software developers with complementary backgrounds, each contributing their own expertise in managing and analyzing complex biomedical data . Other projects supported by this team include the NCI Human Tumor Atlas Network, Human BioMolecular Atlas Program, and the Stanford Metabolic Health Center. This position can be on-site in Palo Alto, fully remote, or hybrid. 
Duties include: Maintaining the smooth execution of our production Trellis systemWorking with our Security Operations team to respond to any security incidentsConstructing queries to our graph database to gain insights from pipeline run dataImplementing population-level genomic analyses (GWAS, PCA) to verify data integrityDesigning and integrating novel bioinformatics pipelines into our Trellis systemTroubleshooting data flow in our state-driven Trellis architectureBuilding containers for bioinformatics tools and integrating them with our internal data management system to automate workflowsCollaborating with researchers to explore solutions to relevant biological questions and maximize the value of our whole-genome sequencing dataset to the public - Other duties may also be assigned. DESIRED QUALIFICATIONS: Four-year degree in Genetics, Computer Science, Bioinformatics, Computational Physics, or a related fieldExperience with biomedical data formats (FASTQ, FASTA, BAM, CRAM, Hail MatrixTable, et al.)Comfortable in programming with PythonExperience with cloud computing, especially Google CloudExperience with databases, especially graph databasesExperience with big data technologies (e.g., BigQuery, Spark, Hail, Terra)Familiarity with issues in computer data securityFamiliarity with FedRAMP cloud securityFamiliarity with FAIR principles of data managementExcellent verbal and written communication skillsAn ability to independently grasp the objectives of research projects and assemble solutions from a range of technologies, standards, and approachesA desire to learn new methods and technologies and to adapt to demands of fast-paced research EDUCATION & EXPERIENCE (REQUIRED):Bachelor's degree and five years of relevant experience, or a combination of education and relevant experience. 
KNOWLEDGE, SKILLS AND ABILITIES (REQUIRED): Expertise in designing, developing, testing, and deploying applications.Proficiency with application design and data modeling.Ability to define and solve logical problems for highly technical applications.Strong communication skills with both technical and non-technical clients.Ability to lead activities on structured team development projects.Ability to select, adapt, and effectively use a variety of programming methods.Knowledge of application domain. CERTIFICATIONS & LICENSES:None PHYSICAL REQUIREMENTS : Constantly perform desk-based computer tasks.Frequently sit, grasp lightly/fine manipulation.Occasionally stand/walk, writing by hand.Rarely use a telephone, lift/carry/push/pull objects that weigh up to 10 pounds. - Consistent with its obligations under the law, the University will provide reasonable accommodation to any employee with a disability who requires accommodation to perform the essential functions of his or her job. WORKING CONDITIONS:May work extended hours, evening and weekends. WORK STANDARDS (from JDL): Interpersonal Skills: Demonstrates the ability to work well with Stanford colleagues and clients and with external organizations.Promote Culture of Safety: Demonstrates commitment to personal responsibility and value for safety; communicates safety concerns; uses and promotes safe behaviors based on training and lessons learned.Subject to and expected to comply with all applicable University policies and procedures, including but not limited to the personnel policies and other policies found in the University's Administrative Guide, . The job duties listed are typical examples of work performed by positions in this job classification and are not designed to contain or be interpreted as a comprehensive inventory of all duties, tasks, and responsibilities. 
Specific duties and responsibilities may vary depending on department or program needs without changing the general nature and scope of the job or level of responsibility. Employees may also perform other duties as assigned. Consistent with its obligations under the law, the University will provide reasonable accommodations to applicants and employees with disabilities. Applicants requiring a reasonable accommodation for any part of the application or hiring process should contact Stanford University Human Resources at . For all other inquiries, please submit a contact form. Stanford is an equal employment opportunity and affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other characteristic protected by law.

Additional Information Schedule: Full-time Job Code: 4822 Employee Status: Regular Grade: J Requisition ID: 107025 Work Arrangement: On Site
ML Data Engineer - Healthcare Data Curation & Cleaning (1 Year Fixed Term) School of Medicine, Stanford, California, United States Information Analytics Jun 03, 2025 Post Date 106579 Requisition

Stanford University is seeking a Big Data Architect 1 for a 1-year fixed term (possibility of renewal) to design and develop applications, test and build automation tools, and support the development of Big Data architecture and analytical solutions.

About Us: The Department of Biomedical Data Science merges the disciplines of biomedical informatics, biostatistics, computer science, and advances in AI. The intersection of these disciplines is applied to precision health, leveraging data across the entire medical spectrum, including molecular, tissue, medical imaging, EHR, biosensory, and population data.

About the Position: We are seeking an experienced ML Data Engineer to drive the programmatic curation, cleaning, and generation of healthcare data. In this role, you will focus exclusively on developing and maintaining automated, ML-accelerated pipelines that ensure high-quality data ready for machine learning applications. Your work will be pivotal in shaping the integrity of our data and supporting downstream predictive models in a complex healthcare environment.

You Will Find This Position a Good Fit If: You are passionate about transforming raw healthcare data into valuable insights. You believe in the critical role of robust data curation in advancing machine learning in healthcare. You thrive in environments where you can work independently on complex data challenges while collaborating with multidisciplinary teams. You are excited to work with patient-level data and embrace challenges related to data diversity and complexity.

Duties include: Design Big Data systems that are scalable, optimized, and fault-tolerant. Work closely with scientific staff, IT professionals, and project managers to understand their data requirements for existing and future projects involving Big Data.
Develop, test, implement, and maintain database management applications. Optimize and tune the system, perform software review and maintenance to ensure that data design elements are reusable, repeatable, and robust. Contribute to the development of guidelines, standards, and processes to ensure data quality, integrity, and security of systems and data appropriate to risk. Participate in and/or contribute to setting strategy and standards through data architecture and implementation, leveraging Big Data, analytics tools and technologies. Work with IT and data owners to understand the types of data collected in various databases and data warehouses. Research and suggest new toolsets/methods to improve data ingestion, storage, and data access.

Key Responsibilities:

Data Pipeline Engineering: Design, implement, and maintain robust pipelines for the programmatic cleaning, transformation, and curation of healthcare data. Develop automated processes to curate and validate data, ensuring accuracy and compliance with healthcare standards (e.g., OMOP CDM, FHIR).

ML Data Engineering: Leverage core machine learning techniques to generate datasets, clean existing health records, join heterogeneous data sources, and enhance data quality for model training. Implement innovative solutions to detect and correct data inconsistencies and anomalies in large-scale healthcare datasets.

Healthcare Data Expertise: Work extensively with patient-level health data, ensuring that data handling practices adhere to industry regulations and ethical standards. Utilize the OMOP Common Data Model (OMOP CDM) to standardize and harmonize disparate healthcare data sources, enhancing interoperability and scalability.

Collaboration & Continuous Improvement: Collaborate closely with data scientists, clinical informaticians, and engineers to align data engineering practices with analytical and clinical requirements.
Continuously monitor, troubleshoot, and optimize data workflows to support dynamic research and operational needs.

The expected pay range for this position is $157,945 to $177,385 per annum. Stanford University provides pay ranges representing its good faith estimate of what the university reasonably expects to pay for a position. The pay offered to a selected candidate will be determined based on factors such as (but not limited to) the scope and responsibilities of the position, the qualifications of the selected candidate, departmental budget availability, internal equity, geographic location, and external market pay for comparable jobs. At Stanford University, base pay represents only one aspect of the comprehensive rewards package. The Cardinal at Work website ( ) provides detailed information on Stanford's extensive range of benefits and rewards offered to employees. Specifics about the rewards package for this position may be discussed during the hiring process. Consistent with its obligations under the law, the University will provide reasonable accommodations to applicants and employees with disabilities. Applicants requiring a reasonable accommodation for any part of the application or hiring process should contact Stanford University Human Resources at . For all other inquiries, please submit a contact form. Stanford is an equal employment opportunity and affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other characteristic protected by law. Stanford welcomes applications from all who would bring additional dimensions to the University's research mission.

DESIRED QUALIFICATIONS: 3+ years of experience in software development and data engineering with a strong focus on data cleaning, transformation, and creation.
Proficiency in Python and experience with data processing libraries (e.g., Pandas, Polars, NumPy). Hands-on experience in building and maintaining automated data pipelines for large-scale data processing. Familiarity with machine learning frameworks (e.g., PyTorch, JAX, scikit-learn) as applied to data quality and augmentation tasks. Expertise in working with healthcare data, including familiarity with the OMOP Common Data Model (OMOP CDM). Strong experience in a Linux environment and comfort with UNIX command-line tools. Proven ability to work collaboratively in multidisciplinary teams and communicate technical concepts effectively.

PREFERRED QUALIFICATIONS: Experience with cloud platforms (e.g., GCP, AWS, or Azure) and distributed computing frameworks. Proficiency with version control systems (e.g., Git) and containerization tools (e.g., Docker). Familiarity with healthcare data standards and regulatory requirements.

EDUCATION & EXPERIENCE (REQUIRED): Bachelor's degree in a scientific or analytic field and five years of relevant experience, or a combination of education and relevant experience.

KNOWLEDGE, SKILLS AND ABILITIES (REQUIRED): • Knowledge of key data structures, algorithms, and techniques pertinent to systems that support high volume, velocity, or variety datasets (including data mining, machine learning, NLP, data retrieval). • Experience with relational, NoSQL, or NewSQL database systems and data modeling, structured and unstructured. • Experience in parallel and distributed data processing techniques and platforms (MPI, Map/Reduce, Batch). • Experience with scripting languages and debugging them; experience with high-performance/systems languages and techniques. • Knowledge of benchmark software development and programmable fields/systems; ability to analyze systems and data pipelines and propose solutions that leverage emerging technologies.
• Ability to use and integrate security controls for web applications, mobile platforms, and backend systems. • Experience deploying reliable data systems and data quality management. • Ability to research, evaluate, architect, and deploy new tools, frameworks, and patterns to build scalable Big Data platforms. • Ability to document use cases, solutions, and recommendations. • Demonstrated excellence in written and verbal communication skills.

CERTIFICATIONS & LICENSES: None

PHYSICAL REQUIREMENTS: • Frequently sit, grasp lightly, use fine manipulation and perform desk-based computer tasks; lift/carry/push/pull objects that weigh up to ten pounds. • Occasionally sit, use a telephone, or write by hand. • Rarely kneel, crawl, climb, twist, bend, stoop, squat, reach or work above shoulders, sort/file paperwork or parts, operate foot and hand controls. - Consistent with its obligations under the law, the University will provide reasonable accommodation to any employee with a disability who requires accommodation to perform the essential functions of his or her job.

Additional Information Schedule: Full-time Job Code: 4734 Employee Status: Fixed-Term Grade: K Requisition ID: 106579 Work Arrangement: Hybrid Eligible
01/14/2026
Full time
Bioinformatics Engineer II (18 Month Fixed-Term) (Hybrid Opportunity) School of Medicine, Stanford, California, United States Information Analytics Oct 14, 2025 Post Date 107526 Requisition #

The Department of Medicine, Division of Cardiovascular Medicine at Stanford University is seeking a talented Bioinformatics Engineer II to join the Bioinformatics Core (BIC) of the Molecular Transducers of Physical Activity Consortium (MoTrPAC). As part of this groundbreaking national research consortium, you will help unravel the molecular mechanisms underlying the benefits of physical activity. Under the supervision of co-PIs Dr. Euan Ashley and Dr. Matthew Wheeler, you will play a crucial role in shaping the future of personalized exercise science and public health. Dr. Ashley's research focuses on applying genomics and other omics data to improve clinical care, with an emphasis on cardiovascular disease and personalized medicine. Dr. Wheeler's research centers on integrating large-scale molecular and clinical data to understand the genetic basis of diseases and to develop novel therapeutic strategies.

In this role, you will focus on genome, epigenome, and transcriptome (GET) analyses (specifically WGS, ATAC-seq, and RNA-seq), running the pipelines and tools that convert raw sequencing data into clean, analysis-ready results for the consortium. You will architect and operate scalable workflows that adhere to best practices, including WGS variant calling and joint genotyping, RNA-seq quantification and differential expression, and ATAC-seq peak calling and differential accessibility. You will adapt rigorous QC frameworks across modalities; produce integrated multi-omics analyses (e.g., linking genetic variation, chromatin accessibility, and gene expression through eQTL/caQTL/colocalization); and deliver clear visualizations, genome browser tracks, and interactive dashboards that enable collaborative interpretation across teams.
Your work will span data engineering and software development: building reproducible pipelines with Nextflow and/or WDL/Cromwell, containerizing and testing them for reliable deployment on cloud and HPC environments; leveraging GCP services such as Cloud Storage and BigQuery; and designing robust schemas for omics metadata and results. You will apply software engineering best practices (version control, code review, automated testing, and documentation) while implementing data governance aligned with FAIR principles and secure handling of controlled-access human genomic data. As a key contributor to our public-facing portal ( ), you will help push the boundaries of biomedical data analytics to accelerate discovery and translation. You will collaborate closely with wet-lab scientists, clinicians, and data engineers to translate biological questions into robust computational analyses and to communicate findings in reports, presentations, and publications. Working within our multidisciplinary team, you will be at the forefront of understanding how physical activity preserves and improves health, ultimately making a lasting impact on human well-being. This is an 18-month fixed-term position. This is a hybrid-eligible position.

Why Join Us? Work on a highly exciting and innovative multi-omics project with the potential to revolutionize our understanding of physical activity and health. Be part of a world-class research team at Stanford University, led by Dr. Euan Ashley, a pioneer in personalized medicine. Contribute to groundbreaking research with a significant impact on public health and the prevention of diseases. Enjoy a collaborative and stimulating work environment at one of the top universities in the world. If you are a passionate and dedicated professional with the required qualifications and a strong interest in advancing scientific research, we encourage you to apply for this exciting opportunity.
Join us in unraveling the mysteries of physical activity and making a lasting impact on human health. A complete application will include a cover letter.

Duties include: Prioritize and extract data from a variety of sources such as notes, survey results, medical reports, and laboratory data, and maintain its accuracy and completeness. Determine additional data collection and reporting requirements. Design and customize reports based upon data in the database. Oversee and monitor regulatory compliance for utilization of the data. Use system reports and analyses to identify potentially problematic data, make corrections, and eliminate root causes for data problems or justify solutions to be implemented by others. Create complex charts and databases, perform statistical analyses, and develop graphs and tables for publication and presentation. Serve as a resource for non-routine inquiries such as requests for statistics or surveys. Test prototype software and participate in the approval and release process for new software. Provide documentation based on audit and reporting criteria to investigators and research staff.

DESIRED QUALIFICATIONS: Transcriptomics and Gene Expression Analysis: Comprehensive RNA-seq workflows including read alignment (STAR, HISAT2), quantification (Salmon, Kallisto), QC (FastQC, MultiQC, RSeQC, Picard), normalization and differential expression (DESeq2, edgeR, limma), and pathway enrichment. Isoform analysis (StringTie) and fusion detection (STAR-Fusion) are a plus.

Advanced Genomics Data Analysis Expertise: Extensive experience with WGS data from raw FASTQ through variant calling, joint genotyping, annotation, cohort-level QC, and interpretation. Proficiency with BWA-MEM/BWA, GATK Best Practices (BQSR, HaplotypeCaller, joint calling, VQSR), DeepVariant, bcftools, and scalable VCF/BCF/CRAM handling.
Structural and Copy Number Variation: Experience with SV/CNV calling and QC (e.g., Manta, Delly, LUMPY, CNVnator), sample-level QC (coverage, duplication, contamination via VerifyBamID/Peddy/Somalier), and cohort metrics (Ti/Tv, call rate, Hardy-Weinberg). Epigenomics and Chromatin Accessibility Analysis: Expertise in ATAC-seq processing and analysis, including alignment (BWA/Bowtie2), Tn5 shifting, peak calling (MACS2), replicate concordance (IDR), QC metrics (FRiP, TSS enrichment, nucleosome signal), differential accessibility (DiffBind/DESeq2), footprinting (HINT-ATAC), motif enrichment (HOMER/MEME), and browser tracks (bigWig/bigBed for IGV/UCSC). Regulatory element annotation using ENCODE/Roadmap resources. Multi-omics Data Integration: Experience integrating WGS, ATAC-seq, and RNA-seq to identify regulatory relationships (eQTL/aseQTL/caQTL, colocalization), linking chromatin accessibility to gene expression and variant effects. Advanced Python and R for Genomics: Deep proficiency in Python and R/Bioconductor with strong statistical and reproducible analysis skills. Genomics Workflow Development: Proven experience designing, testing, and deploying complex workflows using Nextflow and/or WDL/Cromwell (or Snakemake) in cloud or HPC environments, with containerization (Docker, Singularity) and CI/CD for reproducibility. Specialized Cloud and Database Skills: Hands-on experience with GCP (Cloud Storage, BigQuery), and genomics platforms (Terra, AnVIL). SQL skills and experience designing schemas for omics metadata/results; familiarity with gnomAD, ClinVar, Ensembl/RefSeq, dbSNP, UCSC. Genome Browser and Visualization Expertise: Proficiency creating custom track hubs and sessions for IGV/UCSC; ability to produce publication-quality visualizations and interactive dashboards for large-scale genomics data. 
Software Engineering Best Practices: Version control (Git/GitHub), code review, issue tracking, semantic versioning, packaging (setuptools/Poetry), automated testing (pytest), and comprehensive documentation (Sphinx/MkDocs). Data Governance and FAIR Principles: Demonstrated experience with data lineage, provenance, audit trails, and adherence to FAIR; secure handling of controlled-access human genomic data (HIPAA/IRB compliance, DUAs), and submissions to dbGaP/GEO/SRA. Cross-functional Collaboration and Communication: Proven ability to work with wet-lab scientists, clinicians, and data engineers to translate biological questions into robust, actionable computational analyses.

EDUCATION & EXPERIENCE (REQUIRED): Bachelor's degree and three years of relevant experience, or a combination of education and relevant experience. Experience in a quantitative discipline such as economics, finance, statistics, or engineering.

KNOWLEDGE, SKILLS AND ABILITIES (REQUIRED): Substantial experience with MS Office and analytical programs. Excellent writing and analytical skills. Ability to prioritize workload.

CERTIFICATIONS & LICENSES: None

PHYSICAL REQUIREMENTS: Sitting in place at a computer for long periods of time with extensive keyboarding/dexterity. Occasionally use a telephone. Rarely writing by hand.

WORKING CONDITIONS: Some work may be performed in a laboratory or field setting.

WORKING STANDARDS: Interpersonal Skills: Demonstrates the ability to work well with Stanford colleagues and clients and with external organizations. Promote Culture of Safety: Demonstrates commitment to personal responsibility and value for safety; communicates safety concerns; uses and promotes safe behaviors based on training and lessons learned. Subject to and expected to comply with all applicable University policies and procedures.
01/14/2026
Full time
Bioinformatics Engineer II (18 Month Fixed-Term) (Hybrid Opportunity) School of Medicine, Stanford, California, United States Information Analytics Oct 14, 2025 Post Date 107526 Requisition # The Department of Medicine , Division of Cardiovascular Medicine at Stanford University is seeking a talented Bioinformatics Engineer II to join the Bioinformatics Core (BIC) of the Molecular Transducers of Physical Activity Consortium (MoTrPAC). As part of this groundbreaking national research consortium, you will help unravel the molecular mechanisms underlying the benefits of physical activity. Under the supervision of co-PIs Dr. Euan Ashley and Dr. Matthew Wheeler, you will play a crucial role in shaping the future of personalized exercise science and public health. Dr. Ashley's research focuses on applying genomics and other omics data to improve clinical care, with an emphasis on cardiovascular disease and personalized medicine. Dr. Wheeler's research centers on integrating large-scale molecular and clinical data to understand the genetic basis of diseases and to develop novel therapeutic strategies. In this role, you will focus on genome, epigenome, and transcriptome (GET) analyses (specifically WGS, ATAC-seq, and RNA-seq) running the pipelines and tools that convert raw sequencing data into clean, analysis-ready results for the consortium. You will architect and operate scalable workflows that adhere to best practices, including WGS variant calling and joint genotyping, RNA-seq quantification and differential expression, and ATAC-seq peak calling and differential accessibility. You will adapt rigorous QC frameworks across modalities; produce integrated multi-omics analyses (e.g., linking genetic variation, chromatin accessibility, and gene expression through eQTL/caQTL/colocalization); and deliver clear visualizations, genome browser tracks, and interactive dashboards that enable collaborative interpretation across teams. 
Your work will span data engineering and software development: building reproducible pipelines with Nextflow and/or WDL/Cromwell, containerizing and testing them for reliable deployment on cloud and HPC environments; leveraging GCP services such as Cloud Storage and BigQuery; and designing robust schemas for omics metadata and results. You will apply software engineering best practices (version control, code review, automated testing, and documentation) while implementing data governance aligned with FAIR principles and secure handling of controlled-access human genomic data. As a key contributor to our public-facing portal ( ), you will help push the boundaries of biomedical data analytics to accelerate discovery and translation. You will collaborate closely with wet-lab scientists, clinicians, and data engineers to translate biological questions into robust computational analyses and to communicate findings in reports, presentations, and publications. Working within our multidisciplinary team, you will be at the forefront of understanding how physical activity preserves and improves health, ultimately making a lasting impact on human well-being. This is an 18-month fixed term position. This is a hybrid eligible position. Why Join Us? Work on a highly exciting and innovative multi-omics project with the potential to revolutionize our understanding of physical activity and health. Be part of a world-class research team at Stanford University, led by Dr. Euan Ashley, a pioneer in personalized medicine Contribute to groundbreaking research with a significant impact on public health and the prevention of diseases. Enjoy a collaborative and stimulating work environment at one of the top universities in the world. If you are a passionate and dedicated professional with the required qualifications and a strong interest in advancing scientific research, we encourage you to apply for this exciting opportunity. 
Join us in unraveling the mysteries of physical activity and making a lasting impact on human health. A complete application will include a cover letter. Duties include: Prioritize and extract data from a variety of sources such as notes, survey results, medical reports, and laboratory data, and maintain its accuracy and completeness. Determine additional data collection and reporting requirements. Design and customize reports based upon data in the database. Oversee and monitor regulatory compliance for utilization of the data. Use system reports and analyses to identify potentially problematic data, make corrections, and eliminate root cause for data problems or justify solutions to be implemented by others. Create complex charts and databases, perform statistical analyses, and develop graphs and tables for publication and presentation. Serve as a resource for non-routine inquiries such as requests for statistics or surveys. Test prototype software and participate in approval and release process for new software. Provide documentation based on audit and reporting criteria to investigators and research staff. DESIRED QUALIFICATIONS: Transcriptomics and Gene Expression Analysis: Comprehensive RNA-seq workflows including read alignment (STAR, HISAT2), quantification (Salmon, Kallisto), QC (FastQC, MultiQC, RSeQC, Picard), normalization and differential expression (DESeq2, edgeR, limma), and pathway enrichment. Isoform analysis (StringTie) and fusion detection (STAR-Fusion) are a plus. Advanced Genomics Data Analysis Expertise: Extensive experience with WGS data from raw FASTQ through variant calling, joint genotyping, annotation, cohort-level QC, and interpretation. Proficiency with BWA-MEM/BWA, GATK Best Practices (BQSR, HaplotypeCaller, joint calling, VQSR), DeepVariant, bcftools, and scalable VCF/BCF/CRAM handling. 
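As a rough illustration of the normalization and differential-expression step named in the qualifications above, here is a minimal pure-Python sketch of counts-per-million (CPM) scaling followed by a naive log2 fold change. This is only a toy: production analyses use DESeq2, edgeR, or limma, which model count dispersion properly, and the count matrix below is invented for illustration.

```python
import math

def cpm(counts):
    # Counts-per-million: scale each sample (column) by its library size.
    n_samples = len(counts[0])
    lib = [sum(row[j] for row in counts) for j in range(n_samples)]
    return [[row[j] / lib[j] * 1e6 for j in range(n_samples)] for row in counts]

def log2_fold_change(counts, group_a, group_b, pseudo=1.0):
    # Naive per-gene log2 fold change between two groups of sample indices.
    out = []
    for row in cpm(counts):
        mean_a = sum(row[j] for j in group_a) / len(group_a)
        mean_b = sum(row[j] for j in group_b) / len(group_b)
        out.append(math.log2((mean_b + pseudo) / (mean_a + pseudo)))
    return out

# Invented 3-gene x 4-sample count matrix; samples 0-1 vs samples 2-3.
counts = [[100, 120, 400, 380],
          [50, 60, 55, 45],
          [10, 8, 9, 12]]
lfc = log2_fold_change(counts, group_a=[0, 1], group_b=[2, 3])
```

The pseudocount guards against division by zero for genes with zero counts, a common convention before proper statistical modeling.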
Structural and Copy Number Variation: Experience with SV/CNV calling and QC (e.g., Manta, Delly, LUMPY, CNVnator), sample-level QC (coverage, duplication, contamination via VerifyBamID/Peddy/Somalier), and cohort metrics (Ti/Tv, call rate, Hardy-Weinberg). Epigenomics and Chromatin Accessibility Analysis: Expertise in ATAC-seq processing and analysis, including alignment (BWA/Bowtie2), Tn5 shifting, peak calling (MACS2), replicate concordance (IDR), QC metrics (FRiP, TSS enrichment, nucleosome signal), differential accessibility (DiffBind/DESeq2), footprinting (HINT-ATAC), motif enrichment (HOMER/MEME), and browser tracks (bigWig/bigBed for IGV/UCSC). Regulatory element annotation using ENCODE/Roadmap resources. Multi-omics Data Integration: Experience integrating WGS, ATAC-seq, and RNA-seq to identify regulatory relationships (eQTL/aseQTL/caQTL, colocalization), linking chromatin accessibility to gene expression and variant effects. Advanced Python and R for Genomics: Deep proficiency in Python and R/Bioconductor with strong statistical and reproducible analysis skills. Genomics Workflow Development: Proven experience designing, testing, and deploying complex workflows using Nextflow and/or WDL/Cromwell (or Snakemake) in cloud or HPC environments, with containerization (Docker, Singularity) and CI/CD for reproducibility. Specialized Cloud and Database Skills: Hands-on experience with GCP (Cloud Storage, BigQuery), and genomics platforms (Terra, AnVIL). SQL skills and experience designing schemas for omics metadata/results; familiarity with gnomAD, ClinVar, Ensembl/RefSeq, dbSNP, UCSC. Genome Browser and Visualization Expertise: Proficiency creating custom track hubs and sessions for IGV/UCSC; ability to produce publication-quality visualizations and interactive dashboards for large-scale genomics data. 
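Among the ATAC-seq QC metrics listed above, FRiP (Fraction of Reads in Peaks) is simple enough to sketch directly: the share of sequenced reads that fall inside called peak intervals, with higher values indicating better signal-to-noise. The read positions and peak coordinates below are invented; real pipelines compute this from BAM files and MACS2 peak calls.

```python
def frip(read_positions, peaks):
    # Fraction of Reads in Peaks: share of reads inside any peak interval.
    in_peaks = sum(
        1 for pos in read_positions
        if any(start <= pos < end for start, end in peaks)
    )
    return in_peaks / len(read_positions)

# Toy data: read midpoints and peak intervals on one chromosome.
reads = [100, 150, 500, 950, 1200]
peaks = [(90, 200), (900, 1000)]
score = frip(reads, peaks)  # 3 of 5 reads fall in peaks
```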
Software Engineering Best Practices: Version control (Git/GitHub), code review, issue tracking, semantic versioning, packaging (setuptools/Poetry), automated testing (pytest), and comprehensive documentation (Sphinx/MkDocs). Data Governance and FAIR Principles: Demonstrated experience with data lineage, provenance, audit trails, and adherence to FAIR; secure handling of controlled-access human genomic data (HIPAA/IRB compliance, DUAs), and submissions to dbGaP/GEO/SRA. Cross-functional Collaboration and Communication: Proven ability to work with wet-lab scientists, clinicians, and data engineers to translate biological questions into robust, actionable computational analyses. EDUCATION & EXPERIENCE (REQUIRED): Bachelor's degree and three years of relevant experience or combination of education and relevant experience. Experience in a quantitative discipline such as economics, finance, statistics or engineering. KNOWLEDGE, SKILLS AND ABILITIES (REQUIRED): Substantial experience with MS Office and analytical programs. Excellent writing and analytical skills. Ability to prioritize workload. CERTIFICATIONS & LICENSES: None PHYSICAL REQUIREMENTS: Sitting in place at computer for long periods of time with extensive keyboarding/dexterity. Occasionally use a telephone. Rarely writing by hand. WORKING CONDITIONS: Some work may be performed in a laboratory or field setting. WORKING STANDARDS: Interpersonal Skills: Demonstrates the ability to work well with Stanford colleagues and clients and with external organizations. Promote Culture of Safety: Demonstrates commitment to personal responsibility and value for safety; communicates safety concerns; uses and promotes safe behaviors based on training and lessons learned. Subject to and expected to comply with all applicable University policies and procedures.
Department: Sch of Nursing - 440100 Career Area: Information Technology Posting Open Date: 10/23/2025 Application Deadline: 01/11/2026 Position Type: Temporary Staff (SHRA) Position Title: AI/LLM Developer/Engineer Position Number: Vacancy ID: S026301 Full-time/Part-time: Full-Time Temporary Hours per week: 40 Position Location: North Carolina, US Hiring Range: $26.04 - $33.85 per hour Estimated Duration of Appointment: 6 months not to exceed 11 months Be a Tar Heel!: A global higher education leader in innovative teaching, research and public service, the University of North Carolina at Chapel Hill consistently ranks as one of the nation's top public universities. Known for its beautiful campus, world-class medical care, commitment to the arts and top athletic programs, Carolina is an ideal place to teach, work and learn. One of the best college towns and best places to live in the United States, Chapel Hill has diverse social, cultural, recreation and professional opportunities that span the campus and community. University employees can choose from a wide range of professional training opportunities for career growth, skill development and lifelong learning and enjoy exclusive perks that include numerous retail and restaurant discounts, savings on local child care centers and special rates for performing arts events. Primary Purpose of Organizational Unit: The mission of the School of Nursing is to enhance and improve the health and well-being of the people of North Carolina and the nation, and, as relevant and appropriate, the people of other nations, through its programs of education, research, and scholarship, and through clinical practice and community service. The University of North Carolina at Chapel Hill School of Nursing has been the leader in nursing education in North Carolina throughout its history. 
Established in 1950, it was the first school of nursing in North Carolina to offer a four-year baccalaureate nursing degree program, followed by the first master's degree program, the first continuing education program for nurses, the first doctoral program, and the first accelerated BSN option for students with college degrees. Today, the School is renowned for its academic programs, its research and its commitment to clinical and community service within state, national and global communities. Position Summary: The Center for Virtual Care Value and Excellence (ViVE), led by Dr. Saif Khairat, is seeking an AI/LLM Developer/Engineer to join our AI research team. This is an exciting opportunity to contribute to innovative projects at the forefront of healthcare delivery improvement, leveraging Large Language Models (LLMs) and clinical data analysis. About the Position We are looking for individuals with a strong theoretical and practical background in large language models, machine learning, and natural language processing, combined with a collaborative spirit and a drive for problem-solving. You'll join a multidisciplinary team that values diversity and brings together expertise in software engineering, big data, clinical informatics, and medicine. Key Responsibilities Design, fine-tune, and evaluate large language models (LLMs) tailored to domain-specific applications using techniques such as transfer learning, LoRA, and reinforcement learning with human feedback (RLHF). Build intelligent applications powered by LLMs, including chatbots, virtual agents, clinical decision tools, or document analyzers, using frameworks like LangChain, LlamaIndex, or semantic search pipelines. Develop scalable LLM pipelines and infrastructure, including data ingestion, preprocessing, model serving (via GPU/TPU), and continuous performance monitoring. 
Integrate commercial and open-source LLMs (e.g., OpenAI GPT, Claude, Mistral, LLaMA) via APIs or local deployment into digital health or enterprise systems. Craft and iterate prompts using advanced prompt engineering and chain-of-thought strategies to improve output relevance, tone, factuality, and task completion. Implement retrieval-augmented generation (RAG) architectures to enhance context awareness using vector databases (e.g., Pinecone, FAISS, Weaviate). Evaluate LLM performance using automated and human-in-the-loop methods to assess accuracy, hallucination, safety, and user satisfaction. Collaborate across disciplines with data scientists, UX designers, domain experts, and MLOps engineers to ensure usability, performance, and alignment with real-world needs. Monitor and optimize system performance, including latency, throughput, token usage, and model cost-effectiveness across deployment environments. Stay current with advancements in generative AI, contributing to the internal knowledge base and driving adoption of best practices for ethical and responsible LLM use. Minimum Education and Experience Requirements: Bachelor's degree in Computer Science, Computer Information Systems, Computer Engineering, or closely related degree from an appropriately accredited institution and three years of experience in operations analysis and design, systems programming, or closely related area; or a Bachelor's degree from an appropriately accredited institution and four years of experience in operations analysis and design, systems programming or closely related area; or an Associate's degree in Computer Information Technology, Computer Engineering Technology, or Networking Technology from an appropriately accredited institution and five years of experience in operations analysis and design, systems programming, or closely related area; or an equivalent combination of education and experience. Journey level requires an additional one year of education or experience. 
Advanced level requires an additional two years of education or experience. Required Qualifications, Competencies, and Experience: Bachelor's degree in Computer Science, Electrical Engineering, or related fields. Expertise in Retrieval-Augmented Generation (RAG), Natural Language Processing (NLP), and deep learning frameworks. Proficiency in Python and frameworks such as PyTorch, TensorFlow, Hugging Face Transformers, or LangChain. Familiarity with clinical or healthcare data (e.g., EHRs, clinical notes, structured claims data). Proven research record with peer-reviewed publications in relevant fields. Strong problem-solving skills and the ability to work in a collaborative environment. Preferred Qualifications, Competencies, and Experience: Distributed parallel training and parameter-efficient tuning. Familiarity with multi-modal foundation models, HITL techniques, and prompt engineering. Experience with LLM fine-tuning, prompt engineering, or retrieval-augmented generation (RAG). Experience deploying large-scale machine learning models in cloud environments. Campus Security Authority Responsibilities: Not Applicable.
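The retrieval step at the heart of the RAG architectures described in this posting can be sketched in a few lines: embed the query and each document, rank documents by similarity, and pass the top hits to the LLM as context. The sketch below is a deliberately simplified stand-in, using a toy bag-of-words "embedding" and cosine similarity in place of a learned embedding model and a vector database such as Pinecone, FAISS, or Weaviate; the document strings are invented examples.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; real RAG uses a learned embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query; top-k become LLM context.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = ["telehealth visit scheduling policy",
        "clinical note summarization guidelines",
        "cafeteria menu for the week"]
top = retrieve("how do I schedule a telehealth visit", docs, k=1)
```

In a production system the retrieved passages would be concatenated into the prompt sent to the LLM, which is what gives RAG its improved context awareness and factual grounding.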
Job Description At Boeing, we innovate and collaborate to make the world a better place. We're committed to fostering an environment for every teammate that's welcoming, respectful and inclusive, with great opportunity for professional growth. Find your future with us. The Boeing Company is currently seeking a Software Engineer - Artificial Intelligence to join the Software Engineering Organization located in Tukwila, Seattle, or Everett, Washington. This Software Engineer - Artificial Intelligence position focuses on LLMs and works closely with the Boeing Artificial Intelligence (AI) team. In this position, you will collaborate with other world-class scientists, researchers, and engineers innovating on a range of technologies such as Artificial Intelligence & Machine Learning (AI/ML), Automation, and Autonomy. As part of the Boeing Software Engineering Organization, which supports Boeing Technology Innovation (BTI), our software engineers use their expertise to dream up next-generation software capabilities for amazing aerospace, satellite, and autonomy platforms. We develop cutting edge software applications that will improve the future of software capabilities, airplane and flight controls, artificial intelligence, machine learning, and much more. Our products help solve Boeing's most challenging problems across Commercial Airplanes, Defense Space & Security, and Global Services businesses. The projects can range from new software products for the revolutionary 787 Dreamliner to innovative aircraft across several commercial, autonomy, defense, satellite, and space platforms. Boeing is committed to your development. As part of the software engineering organization, you will have the opportunity to be trained and equipped with the software technology and tools to be successful. In the software capability, you will be encouraged and resourced to pursue your passions, explore different technology domains, and advance your career. 
Boeing is the world's largest aerospace company and leading manufacturer of commercial jetliners and defense, space and security systems. Here, you'll work alongside more than 170,000 exceptional people focused on bringing great products and services to market. Located in more than 70 countries, Boeing is comprised of one of the most diverse, talented, and innovative workforces you will find anywhere. More than 140,000 of our people hold degrees from approximately 2,700 colleges and universities worldwide. Their expertise and knowledge represent virtually every business and technical field. By building a career at Boeing, you'll have the opportunity to grow your skills, create relationships around the world and help shape the future of aerospace. Our teams are currently hiring for a broad range of experience levels including Associate and Experienced Level Software Engineers. Position Responsibilities: Model Development: Support senior engineers in designing, fine-tuning, and implementing large language models for specific applications. Data Management: Collect, clean, and preprocess large datasets to ensure high quality for model training and evaluation. Prompt Engineering: Design and optimize prompts to improve model performance and efficiency for various tasks. Testing and Evaluation: Conduct model evaluations, analyze results, and perform testing and debugging of AI systems to identify strengths and weaknesses. Code Implementation: Write clean, maintainable, and efficient Python code, adhering to software engineering best practices. Collaboration: Work closely with cross-functional teams, including product managers, system engineers, and other software engineers, to integrate AI solutions into various programs. Documentation: Create clear and thorough documentation for code, models, and evaluation processes. 
Continuous Learning: Stay at the forefront of natural language processing and generative AI to recommend and implement state-of-the-art or innovative solutions. This position is expected to be 100% onsite. The selected candidate will be required to work onsite at one of the listed location options. This position must meet export control compliance requirements. To meet export control compliance requirements, a "U.S. Person" as defined by 22 C.F.R. 120.15 is required. "U.S. Person" includes U.S. Citizen, lawful permanent resident, refugee, or asylee. To be considered for this position you will be required to complete a technical assessment as part of the selection process. Failure to complete the assessment will remove you from consideration. Basic Qualifications (Required Skills/Experience): 1+ years of experience designing and developing software using any of the listed programming languages - Python, C/C++, or Java 1+ years of experience working with backend development and/or cloud environments Preferred Qualifications (Desired Skills/Experience): Bachelor's degree from an accredited course of study in computer science, data science, mathematics, engineering, engineering technology (includes manufacturing engineering technology), chemistry, or physics. Level 2: 1 or more years' related work experience or an equivalent combination of education and experience Level 3: 3 or more years' related work experience or an equivalent combination of education and experience Experience with full software development lifecycle as part of an agile team Proficient in C++/Python/Java Experience with cloud platforms (e.g., AWS), Linux, backend, and containerization (e.g., Docker, Kubernetes, OpenShift). Experience with modern microservices architecture, implementation, and operations. Experience with relational databases and AI/ML techniques Familiarity with Large Language Models (LLMs) MS or PhD in Computer Science or an Engineering-related field. 
Drug Free Workplace: Boeing is a Drug Free Workplace (DFW) where post offer applicants and employees are subject to testing for marijuana, cocaine, opioids, amphetamines, PCP, and alcohol when criteria is met as outlined in our policies. Union: This is a union-represented position. Travel: This position may require up to 10% travel. CodeVue Coding Challenge: To be considered for this position you will be required to complete a technical assessment as part of the selection process. Failure to complete the assessment will remove you from consideration. Pay & Benefits: At Boeing, we strive to deliver a Total Rewards package that will attract, engage and retain the top talent. Elements of the Total Rewards package include competitive base pay and variable compensation opportunities. The Boeing Company also provides eligible employees with an opportunity to enroll in a variety of benefit programs, generally including health insurance, flexible spending accounts, health savings accounts, retirement savings plans, life and disability insurance programs, and a number of programs that provide for both paid and unpaid time away from work. The specific programs and options available to any given employee may vary depending on eligibility factors such as geographic location, date of hire, and the applicability of collective bargaining agreements. Pay is based upon candidate experience and qualifications, as well as market and business considerations. Summary pay range: $119,850 - $203,550 Applications for this position will be accepted until Jan. 21, 2026. Relocation: This position offers relocation based on candidate eligibility. Visa Sponsorship: Employer will not sponsor applicants for employment visa status. Shift: This position is for 1st shift. Equal Opportunity Employer: Boeing is an Equal Opportunity Employer. 
Employment decisions are made without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, physical or mental disability, genetic factors, military/veteran status or other characteristics protected by law.
01/07/2026
Full time
Job Description At Boeing, we innovate and collaborate to make the world a better place. We're committed to fostering an environment for every teammate that's welcoming, respectful and inclusive, with great opportunity for professional growth. Find your future with us. The Boeing Company is currently seeking a Software Engineer - Artificial intelligence to join the Software Engineering Organization l o cated in Tukwila, Seattle, or Everett Washington. This is a Software Engineer - Artificial intelligence position, focusing on LLMs who will work closely with Boeing Artificial Intelligence (AI) team. In this position, you will collaborate with other world-class scientists, researchers, and engineers innovating on a range of technologies such as Artificial Intelligence & Machine Learning (AI/ML), Automation, and Autonomy, etc. As part of Boeing Software Engineering Organization who supports Boeing Technology Innovation (BTI), our software engineers use their expertise to dream up next-generation software capabilities for amazing aerospace, satellite, and autonomy platforms. We develop cutting edge software applications that will improve the future of software capabilities, airplane and flight controls, artificial intelligence, machine learning, and much more. Our products help solve Boeing's most challenging problems across Commercial Airplanes, Defense Space & Security, and Global Services businesses. The projects can range from new software products for the revolutionary 787 Dreamliner to innovative aircrafts across several commercial, autonomy, defense, satellite, and space platforms. Boeing is committed to your development. As part of the software engineering organization, you will have the opportunity to be trained and equipped with the software technology and tools to be successful. In the software capability, you will be encouraged and resourced to pursue your passions, explore different technology domains, and advance your career. 
Boeing is the world's largest aerospace company and leading manufacturer of commercial jetliners and defense, space and security systems. Here, you'll work alongside more than 170,000 exceptional people focused on bringing great products and services to market. Located in more than 70 countries, Boeing is comprised of one of the most diverse, talented, and innovative workforces you will find anywhere. More than 140,000 of our people hold degrees from approximately 2,700 colleges and universities worldwide. Their expertise and knowledge represent virtually every business and technical field. By building a career at Boeing, you'll have the opportunity to grow your skills, create relationships around the world and help shape the future of aerospace. Our teams are currently hiring for a broad range of experience levels including Associate and Experienced Level Software Engineers. Position Responsibilities: Model Development: Support senior engineers in designing, fine-tuning, and implementing large language models for specific applications. Data Management: Collect, clean, and preprocess large datasets to ensure high quality for model training and evaluation. Prompt Engineering: Design and optimize prompts to improve model performance and efficiency for various tasks. Testing and Evaluation: Conduct model evaluations, analyze results, and perform testing and debugging of AI systems to identify strengths and weaknesses. Code Implementation: Write clean, maintainable, and efficient Python code, adhering to software engineering best practices. Collaboration: Work closely with cross-functional teams, including product managers, system engineers, and other software engineers, to integrate AI solutions into various programs. Documentation: Create clear and thorough documentation for code, models, and evaluation processes. 
Continuous Learning: Stay forefront in natural language processing and generative AI to recommend and implement state-of-the-art solutions or innovative solutions. This position is expected to be 100% onsite . The selected candidate will be required to work onsite at one of the listed location options. This position must meet export control compliance requirements. To meet export control compliance requirements, a "U.S. Person" as defined by 22 C.F.R. 120.15 is required . "U.S. Person" includes U.S. Citizen, lawful permanent resident, refugee, or asylee. To be considered for this position you will be required to complete a technical assessment as part of the selection process . Failure to complete the assessment will remove you from consideration . Basic Qualifications (Required Skills/ Experience): 1 + years of experience designing and developing software using any of the listed programming languages - Python, C/C++, or Java 1 + years of experience working with backend development and/ or cloud environments Preferred Qualifications (Desired Skills/Experience): Bachelor degree from an accredited course of study in computer science, data science, mathematics, engineering, engineering technology (includes manufacturing engineering technology), chemistry, or physics. Level 2: 1 or more years' related work experience or an equivalent combination of education and experience Level 3: 3 or more years' related work experience or an equivalent combination of education and experience Experience with full software development lifecycle as part of the agile team Proficient in C++ / python / Java Experience with Cloud platforms (e.g., AWS, etc.), Linux, backend, and containerization (e.g., Docker, Kubernetes, OpenShift). Experience with modern microservices architecture, implementation, and operations. Experience with relational database and AI/ML techniques Familiarity with Large Language Models (LLM) MS or PhD in Computer Science or Engineering related field. 
Drug Free Workplace: Boeing is a Drug Free Workplace (DFW) where post-offer applicants and employees are subject to testing for marijuana, cocaine, opioids, amphetamines, PCP, and alcohol when criteria are met as outlined in our policies.

Union: This is a union-represented position.

Travel: This position may require up to 10% travel.

CodeVue Coding Challenge: To be considered for this position you will be required to complete a technical assessment as part of the selection process. Failure to complete the assessment will remove you from consideration.

Pay & Benefits: At Boeing, we strive to deliver a Total Rewards package that will attract, engage, and retain top talent. Elements of the Total Rewards package include competitive base pay and variable compensation opportunities. The Boeing Company also provides eligible employees with an opportunity to enroll in a variety of benefit programs, generally including health insurance, flexible spending accounts, health savings accounts, retirement savings plans, life and disability insurance programs, and a number of programs that provide for both paid and unpaid time away from work. The specific programs and options available to any given employee may vary depending on eligibility factors such as geographic location, date of hire, and the applicability of collective bargaining agreements. Pay is based upon candidate experience and qualifications, as well as market and business considerations.

Summary pay range: $119,850 - $203,550

Applications for this position will be accepted until Jan. 21, 2026.

Relocation: This position offers relocation based on candidate eligibility.

Visa Sponsorship: Employer will not sponsor applicants for employment visa status.

Shift: This position is for 1st shift.

Equal Opportunity Employer: Boeing is an Equal Opportunity Employer.
Employment decisions are made without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, physical or mental disability, genetic factors, military/veteran status or other characteristics protected by law.
Job Description

At Boeing, we innovate and collaborate to make the world a better place. We're committed to fostering an environment for every teammate that's welcoming, respectful and inclusive, with great opportunity for professional growth. Find your future with us.

The Boeing Company is currently seeking a Software Engineer - Artificial Intelligence to join the Software Engineering Organization located in Tukwila, Seattle, or Everett, Washington. This is a Software Engineer - Artificial Intelligence position focusing on LLMs, working closely with the Boeing Artificial Intelligence (AI) team. In this position, you will collaborate with other world-class scientists, researchers, and engineers innovating on a range of technologies such as Artificial Intelligence & Machine Learning (AI/ML), Automation, and Autonomy. As part of the Boeing Software Engineering Organization, which supports Boeing Technology Innovation (BTI), our software engineers use their expertise to dream up next-generation software capabilities for amazing aerospace, satellite, and autonomy platforms. We develop cutting-edge software applications that will improve the future of software capabilities, airplane and flight controls, artificial intelligence, machine learning, and much more. Our products help solve Boeing's most challenging problems across the Commercial Airplanes, Defense, Space & Security, and Global Services businesses. Projects can range from new software products for the revolutionary 787 Dreamliner to innovative aircraft across several commercial, autonomy, defense, satellite, and space platforms. Boeing is committed to your development. As part of the software engineering organization, you will have the opportunity to be trained and equipped with the software technology and tools to be successful. In the software capability, you will be encouraged and resourced to pursue your passions, explore different technology domains, and advance your career.
01/07/2026
Full time
Company Description

Adtalem Global Education is a national leader in post-secondary education and a leading provider of professional talent to the healthcare industry. Adtalem educates and empowers students with the knowledge and skills to become leaders in their communities and make a lasting impact on public health, well-being and beyond. Through equitable access to education, environments that nurture student success, and a focus on expanding and diversifying the talent pipeline in healthcare, Adtalem is building a brighter future for communities and the world. Adtalem is the parent organization of American University of the Caribbean School of Medicine, Chamberlain University, Ross University School of Medicine, Ross University School of Veterinary Medicine and Walden University. We operate on a hybrid schedule with four in-office days per week (Monday-Thursday). This approach enhances creativity, innovation, communication, and relationship-building, fostering a dynamic and collaborative work environment. Visit for more information and follow us on LinkedIn and Instagram.

Job Description

Adtalem is a data-driven organization. The Data Engineering team builds data solutions that power strategic and tactical business decisions and support Analytics and Artificial Intelligence operations. By implementing the data platform, data pipelines, and data governance policies, this team provides the basis for decision-making at Adtalem. Adtalem is looking for a Senior Data Engineer to design, build, and maintain robust data engineering solutions that support our company's innovation initiatives and growth objectives.

- Architect, develop, and optimize scalable data pipelines handling real-time, unstructured, and synthetic datasets.
- Collaborate with cross-functional teams, including data scientists, analysts, and product owners, to deliver innovative data solutions that drive business growth.
- Design, develop, deploy, and support high-performance data pipelines, both inbound and outbound.
- Model the data platform by applying business logic and building objects in the platform's semantic layer.
- Leverage streaming technologies and cloud platforms to enable real-time data processing and analytics.
- Optimize data pipelines for performance, scalability, and reliability.
- Implement CI/CD pipelines to ensure continuous deployment and delivery of our data products.
- Ensure the quality of critical data elements, prepare data quality remediation plans, and collaborate with business and system owners to fix quality issues at their root.
- Document the design and support strategy of the data pipelines.
- Capture, store, and socialize data lineage and operational metadata.
- Troubleshoot and resolve data engineering issues as they arise.
- Develop REST APIs to expose data to other teams within the company.
- Stay current with emerging technologies and industry trends related to big data, streaming data, and synthetic data generation.
- Mentor and guide junior data engineers.

Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, Software Engineering, or another related technical field.
- Master's degree in Computer Science, Computer Engineering, Software Engineering, or another related technical field.
- Two (2) or more years' experience in Google Cloud with services such as BigQuery, Composer, GCS, Datastream, Dataflow, BQML, and Vertex AI.
- Six (6) or more years' experience in data engineering solutions such as data platforms, ingestion, data management, or publication/analytics.
- Hands-on experience working with real-time, unstructured, and synthetic data; you will be instrumental in advancing our data platform capabilities.
- Experience in real-time data ingestion using GCP Pub/Sub, Kafka, Spark, or similar.
- Expert knowledge of Python programming and SQL.
- Experience with cloud platforms (AWS, GCP, Azure) and their data services.
- Experience working with Airflow as a workflow management tool, including building operators to connect, extract, and ingest data as needed.
- Familiarity with synthetic data generation and unstructured data processing.
- Experience with AI/ML data pipelines and frameworks.
- Excellent organizational, prioritization, and analytical abilities.
- Proven experience with incremental execution through successful launches.
- Excellent problem-solving and critical-thinking skills to recognize and comprehend complex data issues affecting the business environment.
- Experience working in an agile environment.

Additional Information

In support of the pay transparency laws enacted across the country, the expected salary range for this position is between $84,835.61 and $149,076.17. Actual pay will be adjusted based on job-related factors permitted by law, such as experience and training; geographic location; licensure and certifications; market factors; departmental budgets; and responsibility. Our Talent Acquisition Team will be happy to answer any questions you may have, and we look forward to learning more about your salary requirements. The position qualifies for the below benefits. Adtalem offers a robust suite of benefits, including:
- Health, dental, vision, life, and disability insurance
- 401k Retirement Program + 6% employer match
- Participation in Adtalem's Flexible Time Off (FTO) Policy
- 12 Paid Holidays

For more information related to our benefits please visit: You are also eligible to participate in an annual incentive program, subject to the rules governing the program, whereby an award, if any, depends on various factors, including, without limitation, individual and organizational performance.

Equal Opportunity - Minority / Female / Disability / V / Gender Identity / Sexual Orientation
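The data-quality responsibility above (validating critical data elements and preparing remediation plans) can be sketched in plain Python: each rule validates one field of incoming records, and violations are tallied into a per-field report that could feed a remediation plan. The field names and validation rules here are invented for illustration and are not Adtalem's actual schema.

```python
# Hypothetical data-quality check on "critical data elements": count rule
# violations per field across a batch of records. All fields/rules invented.
from typing import Callable

RULES: dict[str, Callable[[object], bool]] = {
    "student_id": lambda v: isinstance(v, str) and v.strip() != "",
    # Naive length check standing in for a real YYYY-MM-DD date parse.
    "enrollment_date": lambda v: isinstance(v, str) and len(v) == 10,
    "gpa": lambda v: isinstance(v, (int, float)) and 0.0 <= v <= 4.0,
}

def quality_report(records: list[dict]) -> dict[str, int]:
    """Count rule violations per critical field across all records."""
    failures = {field: 0 for field in RULES}
    for rec in records:
        for field, rule in RULES.items():
            if not rule(rec.get(field)):
                failures[field] += 1
    return failures

records = [
    {"student_id": "S001", "enrollment_date": "2025-09-01", "gpa": 3.4},
    {"student_id": "",     "enrollment_date": "2025-9-1",   "gpa": 4.7},
]
print(quality_report(records))  # {'student_id': 1, 'enrollment_date': 1, 'gpa': 1}
```

In a production pipeline the same rule table would typically live in a governance catalog and run as a pipeline step, with the report driving alerts back to the business and system owners named above.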
01/07/2026
Full time
Company Description Adtalem Global Education is a national leader in post-secondary education and leading provider of professional talent to the healthcare industry. Adtalem educates and empowers students with the knowledge and skills to become leaders in their communities and make a lasting impact on public health, well-being and beyond. Through equitable access to education, environments that nurture student success, and a focus on expanding and diversifying the talent pipeline in healthcare, Adtalem is building a brighter future for communities and the world. Adtalem is the parent organization of American University of the Caribbean School of Medicine, Chamberlain University, Ross University School of Medicine, Ross University School of Veterinary Medicine and Walden University. We operate on a hybrid schedule with four in-office days per week (Monday-Thursday). This approach enhances creativity, innovation, communication, and relationship-building, fostering a dynamic and collaborative work environment. Visit for more information and follow us on LinkedIn and Instagram . Job Description Adtalem is a data driven organization. The Data Engineering team builds data solutions that powers strategic and tactical business decisions and supports the Analytics and Artificial Intelligence operations. By implementing the data platform, data pipelines and data governance policies this team provides the basis for decision-making in Adtalem. Adtalem is looking for a Senior Data Engineer who design, build, and maintain robust data engineering solutions that support our company's innovation initiatives and growth objectives. Architect, develop, and optimize scalable data pipelines handling real-time, unstructured, and synthetic datasets Collaborate with cross-functional teams, including data scientists, analysts, and product owners, to deliver innovative data solutions that drive business growth. 
Design, develop, deploy and support high performance data pipelines both inbound and outbound. Model data platform by applying the business logic and building objects in the semantic layer of the data platform. Leverage streaming technologies and cloud platforms to enable real-time data processing and analytics Optimize data pipelines for performance, scalability, and reliability. Implement CI/CD pipelines to ensure continuous deployment and delivery of our data products. Ensure quality of critical data elements, prepare data quality remediation plans and collaborate with business and system owners to fix the quality issues at its root. Document the design and support strategy of the data pipelines Capture, store and socialize data lineage and operational metadata Troubleshoot and resolve data engineering issues as they arise. Develop REST APIs to expose data to other teams within the company. Stay current with emerging technologies and industry trends related to big data, streaming data, and synthetic data generation Mentor and guide junior data engineers. Qualifications Bachelor's Degree Computer Science, Computer Engineering, Software Engineering, or other related technical field. Master's Degree Computer Science, Computer Engineering, Software Engineering, or other related technical field. Two (2) plus years experience in Google cloud with services like BigQuery, Composer, GCS, DataStream, Dataflows,BQML, Vertex AI. Six (6) plus years experience in data engineering solutions such as data platforms, ingestion, data management, or publication/analytics. Hands-on experience working with real-time, unstructured, and synthetic data, and will be instrumental in advancing our data platform capabilities. Experience in Real Time Data ingestion using GCP PubSub, Kafka, Spark or similar. Expert knowledge on Python programming and SQL. 
Experience with cloud platforms (AWS, GCP, Azure) and their data services.
Experience working with Airflow as a workflow management tool, including building operators to connect, extract, and ingest data as needed.
Familiarity with synthetic data generation and unstructured data processing.
Experience with AI/ML data pipelines and frameworks.
Excellent organizational, prioritization, and analytical abilities.
Proven experience delivering through incremental execution and successful launches.
Excellent problem-solving and critical-thinking skills to recognize and comprehend complex data issues affecting the business environment.
Experience working in an agile environment.

Additional Information
In support of the pay transparency laws enacted across the country, the expected salary range for this position is between $84,835.61 and $149,076.17. Actual pay will be adjusted based on job-related factors permitted by law, such as experience and training; geographic location; licensure and certifications; market factors; departmental budgets; and responsibility. Our Talent Acquisition Team will be happy to answer any questions you may have, and we look forward to learning more about your salary requirements.
The position qualifies for the benefits below. Adtalem offers a robust suite of benefits, including:
Health, dental, vision, life, and disability insurance
401(k) Retirement Program + 6% employer match
Participation in Adtalem's Flexible Time Off (FTO) Policy
12 Paid Holidays
For more information related to our benefits please visit:
You are also eligible to participate in an annual incentive program, subject to the rules governing the program, whereby an award, if any, depends on various factors, including, without limitation, individual and organizational performance.
Equal Opportunity - Minority / Female / Disability / V / Gender Identity / Sexual Orientation
01/07/2026
Full time
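The posting above lists data-quality remediation for critical data elements among the pipeline responsibilities. As a minimal, hypothetical sketch (pure Python, with invented records and field names rather than any actual schema), such a pre-publication check might look like:

```python
# Hypothetical records; "student_id" and "email" are illustrative field names.
records = [
    {"student_id": "S1", "email": "a@example.com"},
    {"student_id": "S2", "email": None},              # missing critical element
    {"student_id": "S1", "email": "a@example.com"},   # duplicate key
]

# Null check on a critical data element.
null_emails = [r for r in records if r["email"] is None]

# Duplicate-key detection.
seen, dupes = set(), []
for r in records:
    if r["student_id"] in seen:
        dupes.append(r["student_id"])
    seen.add(r["student_id"])

print(len(null_emails), dupes)  # → 1 ['S1']
```

In a production pipeline these checks would typically run as a validation step before publishing to the semantic layer, with failures feeding a remediation plan rather than a print statement.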
Principal Associate, Data Scientist - AI Software Engineering
Data is at the center of everything we do. As a startup, we disrupted the credit card industry by individually personalizing every credit card offer using statistical modeling and the relational database, cutting-edge technology in 1988! Fast-forward a few years, and this little innovation and our passion for data have skyrocketed us to a Fortune 200 company and a leader in the world of data-driven decision-making. As a Data Scientist at Capital One, you'll be part of a team that's leading the next wave of disruption at a whole new scale, using the latest in computing and machine learning technologies and operating across billions of customer records to unlock the big opportunities that help everyday people save money, time, and agony in their financial lives.

Team Description
The AI Foundations - AI Software Engineering Data Science team designs, builds, and delivers state-of-the-art, scalable AI architectures that transform how software is developed at Capital One. We partner closely with product and engineering teams to create multi-agent solutions across the software development lifecycle, including code generation, migration, troubleshooting, root-cause analysis, and documentation, leveraging technologies such as LangGraph, MCP, agent-to-agent protocols, and advanced model customization techniques.

Role Description
In this role, you will:
Partner with a cross-functional team of data scientists, software engineers, and product managers to deliver a product customers love.
Leverage a broad stack of technologies, including Python, Conda, AWS, H2O, Spark, and more, to reveal the insights hidden within huge volumes of numeric and textual data.
Build machine learning models through all phases of development, from design through training, evaluation, validation, and implementation.
Flex your interpersonal skills to translate the complexity of your work into tangible business goals.

The Ideal Candidate is:
Innovative.
You continually research and evaluate emerging technologies. You stay current on published state-of-the-art methods, technologies, and applications and seek out opportunities to apply them. Creative. You thrive on bringing definition to big, undefined problems. You love asking questions and pushing hard to find answers. You're not afraid to share a new idea. Technical. You're comfortable with open-source languages and are passionate about developing further. You have hands-on experience developing data science solutions using open-source tools and cloud computing platforms. Statistically-minded. You've built models, validated them, and backtested them. You know how to interpret a confusion matrix or a ROC curve. You have experience with clustering, classification, sentiment analysis, time series, and deep learning. A data guru. "Big data" doesn't faze you. You have the skills to retrieve, combine, and analyze data from a variety of sources and structures. You know understanding the data is often the key to great data science. 
Basic Qualifications: Currently has, or is in the process of obtaining, one of the following, with an expectation that the required degree will be obtained on or before the scheduled start date: A Bachelor's Degree in a quantitative field (Statistics, Economics, Operations Research, Analytics, Mathematics, Computer Science, or a related quantitative field) plus 5 years of experience performing data analytics A Master's Degree in a quantitative field (Statistics, Economics, Operations Research, Analytics, Mathematics, Computer Science, or a related quantitative field) or an MBA with a quantitative concentration plus 3 years of experience performing data analytics A PhD in a quantitative field (Statistics, Economics, Operations Research, Analytics, Mathematics, Computer Science, or a related quantitative field) Preferred Qualifications: A Master's Degree in a "STEM" field (Science, Technology, Engineering, or Mathematics) plus 3 years of experience in data analytics, or a PhD in a "STEM" field (Science, Technology, Engineering, or Mathematics) Experience working with AWS Experience building production-grade agentic platforms, including RAG and graph-augmented systems and MCP or tool-calling integrations Demonstrated expertise in advanced model customization techniques, such as fine-tuning, parameter-efficient tuning (LoRA/QLoRA), reinforcement learning, or preference optimization Prior research and publications in AI/ML conferences Capital One will consider sponsoring a new qualified applicant for employment authorization for this position. The minimum and maximum full-time annual salaries for this role are listed below, by location. Please note that this salary information is solely for candidates hired to perform work within one of these locations, and refers to the amount Capital One is willing to pay at the time of this posting. Salaries for part-time roles will be prorated based upon the agreed-upon number of hours to be regularly worked. 
McLean, VA: $158,600 - $181,000 for Princ Associate, Data Science New York, NY: $173,000 - $197,400 for Princ Associate, Data Science San Jose, CA: $173,000 - $197,400 for Princ Associate, Data Science Candidates hired to work in other locations will be subject to the pay range associated with that location, and the actual annualized salary amount offered to any candidate at the time of hire will be reflected solely in the candidate's offer letter. This role is also eligible to earn performance-based incentive compensation, which may include cash bonus(es) and/or long-term incentives (LTI). Incentives could be discretionary or non-discretionary depending on the plan. Capital One offers a comprehensive, competitive, and inclusive set of health, financial, and other benefits that support your total well-being. Learn more at the Capital One Careers website. Eligibility varies based on full or part-time status, exempt or non-exempt status, and management level. This role is expected to accept applications for a minimum of 5 business days. No agencies please. Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace. Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections ; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. 
If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1- or via email at . All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to . Capital One does not provide, endorse, or guarantee, and is not liable for, third-party products, services, educational tools, or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe, and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).
12/17/2025
Full time