Job Description

Benefits: 401(k), 401(k) matching, bonus based on performance, competitive salary, dental insurance, health insurance, opportunity for advancement, paid time off, parental leave, signing bonus, training & development, vision insurance.

Role Overview
Join a high-impact team as a Technology Advisory Consultant and place yourself at the heart of enterprise digitalization. In this role you will help organizations align technology with business goals, driving digital transformation, operational efficiency, and competitive differentiation, with a focus on consumption, subscription, event, and usage-driven business models for market leaders and high-growth companies in sectors that include media, high-tech, telecom, life sciences, shipping and logistics, mobility, travel, and more. The position blends hands-on digital engineering, transformative program delivery, and client advisory, and offers a clear career pathway from consultant to architecture leadership. It requires hybrid/remote work flexibility and readiness for targeted domestic and global travel.

Role and Impact
You will join a team that translates operational objectives into transformation roadmaps and delivers technology that scales to today's high-volume, data-driven business world. As a consultant you will contribute to solution design workshops and implement technology that reliably handles business transactions (e.g., EV charging transactions, membership subscriptions, media streaming events, parcel shipping, travel bookings, expressway usage, medical device usage). Your work will be visible to executive stakeholders and will materially influence client transformation programs and platform roadmaps.
Key Responsibilities
- Design, build, and operate orchestration workflows that convert heterogeneous event formats into auditable, analytics-ready datasets
- Assist in and facilitate client workshops to capture requirements, map value, and define transition strategies
- Deliver advisory services, including architecture reviews, governance frameworks, and operational playbooks
- Implement continuous integration and delivery pipelines, deploy in Kubernetes and cloud environments, and tune throughput under load
- Author runbooks and dashboards that enable sustainable operations
- Support scoping and contribute to thought leadership that positions clients for long-term digital advantage

Candidate Profile and Skills
We're seeking candidates with a bachelor's degree in computer science, software engineering, or a closely related field.

Preferred consulting strengths:
- Workshop facilitation and participation
- Strong stakeholder communication
- Ability to map technical tradeoffs against business value
- Ability to engage with international stakeholders and travel domestically/globally for strategic engagements (on an infrequent, targeted basis)

Preferred technical skills:
- Mediation platform knowledge
- Scripting and automation (shell, Python, or similar)
- Kubernetes and Helm proficiency
- CI/CD pipeline experience
- Observability tooling familiarity (e.g., Prometheus, Grafana, ELK)

Growth Path
This role is a fast track to solution architecture: you'll receive mentorship, milestone-based progression, and a professional development budget to pursue certifications and advanced training. We offer a competitive compensation package, including a market-leading base salary and performance bonuses, and a full benefits suite covering medical, dental, vision, retirement match, generous PTO, and parental leave. The role is based in Cincinnati, OH or Indianapolis, IN, with hybrid work options and expectations for domestic and international travel.
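The first responsibility above, converting heterogeneous event formats into auditable, analytics-ready records, can be sketched in a few lines of Python. The field names and formats here (a JSON payload and a pipe-delimited line, normalized to `account_id`/`event_type`/`quantity`) are purely illustrative assumptions, not a real client schema; the audit fields show one common way of making the output traceable back to its raw input:

```python
import json
import hashlib
from datetime import datetime, timezone

def normalize_event(raw: str, fmt: str) -> dict:
    """Parse one raw usage event (JSON or pipe-delimited) into a common schema.

    Field names and formats are hypothetical illustrations.
    """
    if fmt == "json":
        src = json.loads(raw)
        record = {
            "account_id": src["account"],
            "event_type": src["type"],
            "quantity": float(src["qty"]),
        }
    elif fmt == "pipe":  # e.g. "ACCT42|ev_charge|3.5"
        account, event_type, qty = raw.split("|")
        record = {
            "account_id": account,
            "event_type": event_type,
            "quantity": float(qty),
        }
    else:
        raise ValueError(f"unknown format: {fmt}")
    # Audit fields: processing timestamp plus a hash tying the
    # normalized record back to the exact raw input it came from.
    record["processed_at"] = datetime.now(timezone.utc).isoformat()
    record["source_hash"] = hashlib.sha256(raw.encode()).hexdigest()[:12]
    return record
```

In a real engagement this logic would run inside the orchestration workflow (e.g., a mediation platform or pipeline stage) rather than as a standalone function, but the normalize-then-stamp pattern is the core of it.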
Be part of the team that turns event streams into strategic assets. If you thrive on complexity, enjoy advising leaders, and are eager to grow from consultant into a cutting-edge architect, this role is your launchpad. Flexible work-from-home options available.
04/24/2026
Full time
Job Description

Data is only as powerful as the foundation beneath it. At SEP, we're building that foundation, and we're looking for a Senior Data Engineer who's ready to lead the way. We've been partnering with companies to build software that matters since 1988. From Fortune 100 enterprises to fast-moving scale-ups, our clients depend on us to tackle complexity, deliver with craft, and stay engaged through every stage of the work. Now, we're growing our data practice, and we want someone who's as excited about clean, connected, trustworthy data as we are.

What we have to offer
- Variety on every axis: tools, technologies, market sectors, methodologies
- Flexible, reasonable work schedules
- Extensive opportunities to learn and develop yourself
- A community of friendly, talented, and effective peers
- Opportunities to try out different roles with minimal risk
- Gorgeous facilities

What you'll be doing
- Provide technical direction on data engagements and mentor other engineers
- Design and implement data pipelines and transformations (batch and streaming)
- Develop data models for analytics use cases
- Implement data quality checks and testing strategies for pipelines
- Configure and manage orchestration and workflow tooling
- Build and maintain infrastructure as code for data platforms
- Translate architectural direction into implementation plans
- Communicate progress, risks, and technical tradeoffs to stakeholders
- Support client meetings in a technical capacity

Key attributes for applicants
- A passion for great products, software development, and learning
- Expert-level SQL skills with strong proficiency in Python; experience with Spark, Scala, R, or C# is a plus
- Deep expertise in data pipeline development, including batch and streaming patterns
- Solid understanding of data modeling patterns for analytics (dimensional modeling, data vault, lakehouse architectures)
- Experience with cloud data platforms: Azure preferred, AWS experience also valued
- Familiarity with modern data platforms such as Databricks, Snowflake, Microsoft Fabric, or Redshift
- Understanding of orchestration and workflow management (Apache Airflow, Databricks Workflows, Temporal, or similar)
- Experience with data quality frameworks and testing strategies for pipelines
- Familiarity with infrastructure as code tooling (Terraform, ARM/Bicep) and CI/CD pipelines (GitHub Actions or similar) is a plus
- Experience with analytics engineering tools like dbt and data catalog tools (Unity Catalog, Microsoft Purview) is a plus
- Ability to evaluate architectural tradeoffs in data systems (OLAP vs. OLTP, batch vs. streaming, warehouse vs. lakehouse)
- Comfortable with ambiguity; can clarify requirements through conversation
- Interest in mentoring and developing less experienced engineers
- Professional data engineering experience (8+ years desired)
- Must be legally authorized to work in the United States
- Must not require visa sponsorship or have work authorization based on OPT or CPT
- Must be able to work from our office in Westfield, IN without relocation financial assistance

SEP is a software product design and development company located in Westfield, IN. We provide powerful teams of thoughtful developers and designers to bring ideas to life. Founded in 1988, SEP is one of Indiana's largest software development firms with 180 employees. Our clients span from Fortune 100 to scale-up companies. We are 100% employee-owned through an ESOP and are consistently recognized for our great culture (Top Workplaces, Best Place to Work in Indiana, Techpoint Mira Exceptional Employer). We are an equal opportunity employer as to all protected groups, including protected veterans and individuals with disabilities.
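The "data quality checks and testing strategies for pipelines" called out above might, at their simplest, look like the Python sketch below. The column names (`order_id`, `amount`) and the three rules (non-null key, no duplicates, no negative amounts) are hypothetical; in practice, tools like dbt tests or Great Expectations express the same checks declaratively:

```python
def run_quality_checks(rows: list[dict]) -> list[str]:
    """Run basic data quality checks on a batch of pipeline rows.

    Returns a list of human-readable failure messages (empty = clean batch).
    Column names and rules are illustrative, not a real schema.
    """
    failures = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Rule 1: the key column must be non-null.
        if row.get("order_id") is None:
            failures.append(f"row {i}: null order_id")
        # Rule 2: the key column must be unique within the batch.
        elif row["order_id"] in seen_ids:
            failures.append(f"row {i}: duplicate order_id {row['order_id']}")
        else:
            seen_ids.add(row["order_id"])
        # Rule 3: amounts, when present, must be non-negative.
        amount = row.get("amount")
        if amount is not None and amount < 0:
            failures.append(f"row {i}: negative amount {amount}")
    return failures
```

A pipeline stage would typically run such checks after each transformation and either quarantine failing rows or fail the run, depending on severity.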
04/24/2026
Full time
Job Description

We're seeking a Data & AI Engineer to develop intelligent data pipelines and analytics solutions that power smarter decisions across silicon design, verification, and manufacturing. You'll transform engineering data into actionable insights through automation, modeling, and visualization.

Responsibilities
- Build and maintain data pipelines to support machine learning and analytics workflows.
- Collect, clean, and transform large, complex datasets from engineering environments.
- Develop and train predictive models for yield, performance, and anomaly detection.
- Automate recurring data analysis tasks and integrate models into engineering processes.
- Collaborate with design and software teams to embed AI-driven insights into products.
- Create dashboards and visualization tools for reporting and decision-making.
- Document code, models, and processes for transparency and reproducibility.

Qualifications
- Proficiency in Python and ML frameworks (TensorFlow, PyTorch, scikit-learn).
- Strong skills in data manipulation (pandas, NumPy, SQL).
- Experience with workflow orchestration (Airflow, Spark, or similar).
- 3-5 years of experience in data engineering or applied AI.
- Bachelor's degree in Electrical Engineering, Computer Science, or a related field.

Preferred / Plus
- Familiarity with semiconductor design, verification, or manufacturing datasets.
- Understanding of statistical modeling and predictive maintenance.
- Experience with cloud environments (AWS, Azure, GCP) and version control (Git).
- Knowledge of MLOps principles (deployment, monitoring, CI/CD).
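As a rough illustration of the anomaly-detection responsibility above: a baseline z-score detector in pure Python. This is far simpler than the trained models the role calls for (and than what a framework like scikit-learn provides), but it captures the same underlying idea of flagging measurements that deviate sharply from the batch:

```python
import statistics

def flag_anomalies(values: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of values more than `threshold` standard deviations
    from the batch mean. A minimal z-score baseline, not a production model."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []  # all values identical: nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]
```

For example, in a run of twenty nominal yield readings around 10.0, a single reading of 100.0 is flagged while the rest pass. Real yield and performance models would account for process drift, lot structure, and multivariate signals rather than a single univariate cutoff.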
04/24/2026
Full time
Job Description

We are looking for a Product Analyst to support a growing product initiative focused on improving pre-sales decision-making and helping the business secure new opportunities. This long-term contract position is based in Boston, Massachusetts, and will partner closely with product leadership, technical teams, and business stakeholders to shape requirements for an AI-driven solution in its early stages. The person in this role will turn business needs into clear delivery plans, contribute to testing and refinement, and help guide cross-functional work across CRM and cloud-based platforms.

Responsibilities:
• Translate commercial and operational needs into well-defined user stories, functional details, and actionable requirements for development teams.
• Work closely with product managers, product owners, engineers, QA professionals, and UX partners to move features from concept through validation.
• Support an emerging AI-enabled risk assessment product by helping define how it connects with Dynamics and related business platforms.
• Maintain and refine work items in project and product management tools, ensuring stories, acceptance criteria, and priorities are clearly documented.
• Participate in testing activities, including user acceptance support and quality review, to confirm solutions meet business expectations.
• Facilitate collaboration across cross-functional teams and contribute to discussions with stakeholders when business input or alignment is needed.
• Help break down complex business challenges into technical requirements that can be efficiently executed within the software development lifecycle.
• Contribute to cloud-connected solution planning involving platforms such as Microsoft Azure, AWS, or Google Cloud, depending on project needs.
• Support integration-oriented workstreams, including event-driven or real-time data flow concepts, where product changes must be reflected across connected systems promptly.

Requirements:
• At least 3 years of experience in a Product Analyst, Business Analyst, or similar role supporting technology or platform-based initiatives.
• Hands-on experience gathering requirements and converting business objectives into user stories, workflows, and development-ready documentation.
• Working knowledge of Microsoft Dynamics CRM and related Dynamics environments within product or project delivery settings.
• Experience using Jira or comparable tools to manage backlog items, track progress, and document delivery artifacts.
• Familiarity with QA processes, UAT coordination, and broader SDLC practices in cross-functional delivery teams.
• Exposure to cloud environments such as Microsoft Azure, AWS, or Google Cloud Platform.
• Strong written and verbal communication skills with the ability to work effectively across technical teams and business stakeholders.
• Previous experience in a large organization or consulting-style environment is preferred, especially on integration or data orchestration projects.
04/24/2026
Full time
Job Description

Description: We are looking for a Senior AI Engineer to join our Belfast-based Data & AI Platform Team in building a next-generation data platform that will leverage aPriori's proprietary data to deliver powerful insights and AI capabilities. As a Senior AI Engineer, you will play a critical role in shaping and delivering AI-powered features within aPriori's Data Platform, delivered via APIs and our product surfaces. You will design and implement production-grade solutions that leverage both generative AI with LLMs and traditional AI/ML. Working closely with Product Management and product teams, you will integrate AI capabilities into our products, enabling AI-powered insights, automation, and new user experiences that increase aPriori's value to our customers. This role is hands-on and highly collaborative: you will design and implement LLM-driven features, leverage and extend a modern data pipeline, and contribute to building scalable AI infrastructure using cloud-based AI/ML services and frameworks such as AWS Bedrock/SageMaker, GCP Vertex AI, TensorFlow, and PyTorch.

Location: Belfast, NI (Hybrid) or Remote UK

Responsibilities
- Design and build production-ready AI/ML systems, with an emphasis on standard-model LLM-powered product and platform features.
- Leverage LLM tooling/APIs such as LangChain and MCP connectors to implement retrieval-augmented generation (RAG), copilot assistants, and agentic workflows.
- Partner with Product Management and product teams to translate requirements into AI-powered capabilities that surface directly in user-facing products.
- Apply MLOps and LLMOps best practices: monitoring, evaluation, prompt versioning, cost/performance optimization.
- Combine traditional AI/ML with modern GenAI approaches to deliver hybrid solutions where appropriate.
- Collaborate with Data Engineers to establish a scalable data pipeline that serves structured data shaped for LLM consumption, feature store data for traditional AI, and traditional/GenAI-enhanced insights for internal and customer-facing BI use cases.
- Mentor and upskill peers in core AI/ML and LLMOps practices, raising the overall AI/ML competency of the team.
- Stay current with developments in GenAI, LLMOps, and generative AI safety frameworks, and evaluate their potential for adoption within the platform.

Requirements:
- Hands-on familiarity with prompt engineering using LLM frameworks such as LangChain.
- Strong programming skills in Python and familiarity with modern data/ML pipelines.
- Solid understanding of data engineering practices (ETL/ELT, streaming, orchestration with Airflow/Temporal, dbt, Kafka, etc.).
- Knowledge of LLMOps/MLOps practices: CI/CD for ML, model monitoring, drift detection, evaluation metrics, governance.
- Strong collaboration and communication skills: able to partner with Product, Platform, and Data teams to drive AI features from concept to production.
- Demonstrated ability to mentor and upskill engineers, particularly in data/ML workflows.
- Skilled in designing, building, deploying, and operating LLM standard-model-powered features in production (chatbots, copilots, RAG systems, agents).
- Proficient with traditional cloud AI/ML platforms such as Amazon SageMaker, GCP Vertex AI, or Azure ML and frameworks such as TensorFlow, PyTorch, and scikit-learn.

Education and Experience
- 7+ years of professional software engineering experience, including 3+ years in traditional AI/ML and 1+ year building LLM applications on standard models.
- Bachelor's or Master's in Computer Science, AI/ML, Data Science, or a related field (or equivalent experience).

aPriori Offers
A team environment where your experience is valued, your voice is heard, and the work that you do makes an impact for our customers and employees.
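As a rough, dependency-free sketch of the RAG pattern named in the requirements: the word-overlap `retrieve` function below stands in for the vector-store similarity search a framework like LangChain would provide, and the assembled prompt is what would be sent to an LLM. All document text and function names are hypothetical:

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query.

    A toy stand-in for the embedding-based similarity search
    (vector store) a real RAG pipeline would use.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble retrieved context and the user question into one prompt,
    ready to pass to an LLM call (e.g., via a LangChain chain)."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The production version would embed documents, store them in a vector index, and add the LLMOps layers the posting mentions (evaluation, prompt versioning, monitoring), but the retrieve-then-assemble structure is the same.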
aPriori offers competitive compensation and unique benefits including pension match, private medical & dental, flexible time off, aPriori Days, and more in a dynamic, growing, innovative environment!

About aPriori
Founded in 2003, aPriori is disrupting the industry's status quo with groundbreaking work helping manufacturers digitally transform their businesses. Through our unique, patented intellectual property, we enable manufacturers to accelerate product design and bring products to market faster while providing visibility into the sustainability of their design and manufacturing choices. Our impact is profound: our customers save millions of dollars each year and accelerate time to market, all while creating a better world for future generations. Though we are an established software firm, through our continued growth we have maintained the dynamic, collaborative nature of a start-up. With a global presence including North America, Europe, Asia, and India, we encourage an inclusive work environment and support employees' growth through education, training, wellness, and other programs. As our greatest asset, employees' contributions are acknowledged through monthly company-wide meetings, often with promotions and awards. We promote a positive work culture, employee-friendly policies, flexible work schedules, pub nights, and an additional day off each quarter known as "aPriori Day".

Interested in joining our team? We continue to build an organisation of highly talented, self-motivated individuals. Our unique environment empowers employees to bring their best selves each day, asking, "How can I do better?" and then exceeding expectations. We work together towards a common goal. We nurture and celebrate each other's successes.
Employees embrace opportunities to build new skills as well as step into leadership positions where they are supported and mentored by the Senior Leadership team to grow into impactful individual contributor roles or to effectively manage teams. Innovation, adaptability, and a desire to increase your value are essential. If you possess these qualities, we want to hear from you! GDPR Notice:
04/24/2026
Full time
Job DescriptionJob DescriptionDescription: We are looking for a Senior AI Engineer to join our Belfast-based Data & AI Platform Team in building a next-generation data platform which will leverage aPriori's proprietary data to deliver powerful insights and AI capabilities. As a Senior AI Engineer, you will play a critical role in shaping and delivering AI-powered features within aPriori's Data Platform, delivered via APIs and our product surfaces. You will design and implement production-grade solutions that leverage both generative AI with LLMs and traditional AI/ML. Working closely with Product Management and product teams, you will integrate AI capabilities into our products, enabling AI-powered insights, automation, and new user experiences that increase aPriori's value to our customers. This role is hands-on and highly collaborative: you will design and implement LLM-driven features, leverage and extend a modern data pipeline, and contribute to building scalable AI infrastructure using cloud-based AI/ML services such as AWS Bedrock/Sagemaker, GCP Vertex AI, TensorFlow, PyTorch. Location: Belfast, NI (Hybrid) or Remote UK Responsibilities Design and build production-ready AI/ML systems, with an emphasis on Standard Model LLM-powered product and platform features. Leverage LLM tooling/APIs such as LangChain and MCP connectors to implement retrieval-augmented generation (RAG), copilot-assistants and agentic workflows. Partner with Product Management and product teams to translate requirements into AI-powered capabilities that surface directly in user-facing products. Apply MLOps and LLMOps best practices: monitoring, evaluation, prompt versioning, cost/performance optimization. Combine traditional AI/ML with modern GenAI approaches to deliver hybrid solutions where appropriate. 
Collaborate with Data Engineers to establish a scalable data pipeline that serves structured data shaped for LLM consumption, feature store data for traditional AI, and Trad/GenAI-enhanced insights for internal and customer-facing BI use cases.
Mentor and upskill peers on core AI/ML and LLMOps practices, raising the overall AI/ML competency of the team.
Stay current with developments in GenAI, LLMOps, and generative AI safety frameworks, and evaluate their potential for adoption within the platform.

Requirements:
Hands-on familiarity with prompt engineering, leveraging LLM frameworks such as LangChain.
Strong programming skills in Python and familiarity with modern data/ML pipelines.
Solid understanding of data engineering practices (ETL/ELT, streaming, orchestration with Airflow/Temporal, dbt, Kafka, etc.).
Knowledge of LLMOps/MLOps practices: CI/CD for ML, model monitoring, drift detection, evaluation metrics, governance.
Strong collaboration and communication skills: able to partner with Product, Platform, and Data teams to drive AI features from concept to production.
Demonstrated ability to mentor and upskill engineers, particularly in data/ML workflows.
Skilled in designing, building, deploying, and operating LLM standard model-powered features in production (chatbots, copilots, RAG systems, agents).
Proficient in working with traditional cloud AI/ML platforms such as Amazon SageMaker, GCP Vertex AI, or Azure ML, and frameworks such as TensorFlow, PyTorch, and scikit-learn.

Education and Experience
7+ years of professional software engineering experience, including 3+ years of experience in traditional AI/ML and 1+ year of experience building LLM applications on standard models.
Bachelor's or Master's in Computer Science, AI/ML, Data Science, or related field (or equivalent experience).

aPriori Offers
A team environment where your experience is valued, your voice is heard, and the work that you do makes an impact for our customers and employees.
aPriori offers competitive compensation and unique benefits including pension match, private medical & dental, flexible time off, aPriori days, and more in a dynamic, growing, innovative environment!

About aPriori
Founded in 2003, aPriori is disrupting the industry's status quo with groundbreaking work helping manufacturers digitally transform their businesses. Through our unique, patented intellectual property, we enable manufacturers to accelerate product design and bring products to market faster, while providing visibility into the sustainability of their design and manufacturing choices. Our impact is profound: our customers save millions of dollars each year and accelerate time to market, all while creating a better world for future generations. Though we are an established software firm, through our continued growth we have maintained the dynamic, collaborative nature of a start-up. With a global presence, including North America, Europe, Asia, and India, we encourage an inclusive work environment and support employees' growth through education, training, wellness, and other programs. As our greatest asset, employees' contributions are acknowledged through monthly company-wide meetings, often with promotions and awards. We promote a positive work culture, employee-friendly policies, flexible work schedules, pub nights, and an additional day off each quarter known as "aPriori Day". Interested in joining our team? We continue to build an organisation of highly talented, self-motivated individuals. Our unique environment empowers employees to bring their best selves each day, asking, "How can I do better?" and then exceeding expectations. We work together towards a common goal. We nurture and celebrate each other's successes.
Employees embrace opportunities to build new skills as well as step into leadership positions where they are supported and mentored by the Senior Leadership team to grow into impactful individual contributor roles or to effectively manage teams. Innovation, adaptability, and a desire to increase your value are essential. If you possess these qualities, we want to hear from you! GDPR Notice:
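The retrieval-augmented generation (RAG) work described in the role above pairs a retriever with a prompt that grounds the model in fetched context before asking the question. A minimal, self-contained sketch of that flow, with a toy bag-of-words similarity standing in for a real embedding model (the `embed` function, sample corpus, and all names are illustrative assumptions, not aPriori's API):

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would call a model endpoint.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Ground the model's answer in retrieved context before the question.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "aPriori cost models estimate manufacturing cost from CAD geometry.",
    "The data platform exposes insights via APIs.",
]
print(build_prompt("manufacturing cost", corpus))
```

Production RAG replaces the toy similarity with dense embeddings and a vector index, but the retrieve-then-prompt shape is the same.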
JOB DESCRIPTION: The Software Integration & Operations (SIO) group turns frontier autonomy into mission-ready aircraft. We own the commit-to-flight pipeline: deterministic simulation at cluster scale, HIL/VIL integration, CI/CD, automated testing, and release engineering. Our goal is simple: make AI fly, safely, repeatably, and fast. As a Staff Simulation Engineer, you will be dedicated to Shield AI's next-generation aircraft program, adapting our existing infrastructure to advance new capabilities. You'll design, build, and scale C++-based simulation tools that test and validate the full aircraft software stack, from autonomy to avionics, before it ever flies. Your simulation infrastructure will enable developers to test faster, system engineers to evaluate performance earlier, and release engineers to ship with confidence.

WHAT YOU'LL DO:
Build and scale simulation frameworks for integrated testing of autonomy, GNC, and embedded systems in C++.
Design deterministic, high-performance simulation tools capable of faster-than-real-time execution for development, testing, and release.
Integrate core physics, dynamics, and system models (aerodynamics, propulsion, controls) into a unified simulation environment.
Collaborate across autonomy, embedded, GNC, and test engineering to ensure the simulation mirrors real aircraft behavior and mission scenarios.
Develop infrastructure for CI integration, parallel simulation execution, and automated regression testing.
Profile, optimize, and validate C++ codebases for performance, determinism, and fidelity.
Contribute to architecture decisions that define the next generation of aircraft simulation tools within Shield AI.
Mentor engineers and guide best practices in C++, simulation architecture, and performance engineering.
REQUIRED QUALIFICATIONS:
BS or MS in Computer Science, Aerospace, Robotics, or related field.
5+ years of experience in software development, with emphasis on modern C++ and performance optimization.
Proven experience developing or integrating simulation systems for robotics, aerospace, or autonomous systems.
Strong grasp of real-time and deterministic software design, including multi-threading, synchronization, and memory management.
Understanding of rigid-body dynamics, kinematics, and basic flight mechanics.
Familiarity with DevOps-integrated simulation workflows, including CI/CD and containerized environments.
Ability to debug complex build and runtime environments (CMake, dependency management, logging, profiling tools).
Strong collaboration and communication skills across software, hardware, and systems disciplines.

PREFERRED QUALIFICATIONS:
Experience with distributed or cloud-based simulation (e.g., cluster orchestration, Kubernetes).
Working knowledge of Python for data analysis, test automation, or simulation orchestration.
Familiarity with sensor and actuator modeling, and integrating avionics or autonomy software within simulation.

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
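Determinism is the crux of the simulation qualifications above: the same build, seed, and inputs must reproduce the same trajectory exactly, or regression results are meaningless. A minimal sketch of the pattern in Python (the listing's tooling is C++; the point-mass dynamics and noise model here are assumptions purely for illustration):

```python
import random

def simulate(seed: int, steps: int, dt: float = 0.01) -> float:
    # Fixed timestep plus an explicitly seeded RNG: same inputs -> identical output.
    rng = random.Random(seed)
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        thrust = 1.0 + rng.gauss(0.0, 0.05)  # noisy actuator model (assumed)
        vel += thrust * dt                   # Euler integration of a point mass
        pos += vel * dt
    return pos

# Determinism check: two runs with the same seed must agree exactly.
a = simulate(seed=42, steps=1000)
b = simulate(seed=42, steps=1000)
assert a == b
```

The same discipline (no wall-clock time, no unseeded randomness, fixed dt) is what makes faster-than-real-time and parallel CI execution safe.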
04/24/2026
Full time
Description: Zylo is the enterprise leader in SaaS Management, enabling companies to discover, manage, and optimize their SaaS applications. Zylo helps companies reduce costs and minimize risk by centralizing SaaS inventory, license, and renewal management. Trusted by industry leaders, Zylo's AI-powered platform provides unmatched visibility into SaaS usage and spend. Powered by the industry's most intelligent discovery engine, Zylo continuously uncovers hidden SaaS applications, giving companies greater control over their SaaS portfolio. With more than 30 million SaaS licenses and $75 billion in SaaS spend under management, Zylo delivers the deepest insights, backed by more data than any other provider.

Overview
We are seeking an experienced Senior AI Engineer to lead the evolution of our enterprise SaaS platform's agentic AI capabilities. You'll drive strategic AI initiatives that solve complex client problems while working with large-scale datasets for global enterprise customers.
This role combines deep technical expertise in AI agents, RAG systems, and enterprise integration with strategic thinking about how AI can transform our platform and deliver exceptional business value.

What you will do
Drive strategic AI initiatives that directly impact client success and business growth, defining technical roadmaps and influencing product strategy to solve complex enterprise problems
Architect and enhance our agentic processes for enterprise-scale deployments, building sophisticated multi-agent orchestration patterns for complex workflows
Design advanced agent memory systems and context management solutions that maintain coherence across long-running conversations and extended enterprise tasks
Build and implement RAG (Retrieval-Augmented Generation) systems to dramatically improve AI accuracy, including knowledge retrieval pipelines and semantic search optimization for large-scale datasets
Develop enterprise-grade MCP (Model Context Protocol) services enabling seamless client agent integration with standardized APIs, security protocols, and comprehensive documentation
Leverage AWS technologies (Bedrock, Lambda, etc.) to architect AI solutions with optimal performance, cost efficiency, and enterprise-scale LLM integration
Design and optimize schemas for storing LLM interactions, agent state, and conversation history while building monitoring systems for AI operations
Lead cross-functional initiatives to integrate AI throughout our platform ecosystem, partnering with product and engineering teams to deliver measurable business value
Translate complex technical AI concepts into business value, working directly with enterprise clients to understand their needs and influence strategic platform decisions
Mentor engineering teams on AI best practices, emerging technologies, and enterprise AI governance while maintaining high engineering standards for production AI systems.
Requirements: What you need
Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field
5+ years of experience in AI/ML engineering with at least 2 years in a senior role
Proven experience building and deploying AI agents or conversational AI systems in production
Experience working with large-scale enterprise datasets and SaaS platforms
Expertise in design patterns for memory systems, context management solutions, and optimization for AI workloads
Experience with Amazon Bedrock and AWS Lambda for serverless AI deployments
Experience with RAG systems, vector databases, and semantic search
Understanding of Model Context Protocol (MCP) and AI agent integration patterns
Proficiency in programming languages such as Python, PySpark, and SQL, and ML frameworks such as TensorFlow and PyTorch
Knowledge of enterprise security patterns and compliance requirements
Ability to articulate technical concepts to technical and non-technical stakeholders
Ability to thrive in a fast-paced, dynamic environment
Flexibility to adapt to changing priorities and requirements

Nice to have
Experience in SaaS Management or Software Asset Management
Ph.D. in Data Science, Computer Science, Statistics, Mathematics, or a related field
Knowledge of ethical AI, bias mitigation, and AI safety best practices
Experience with LangChain and LangGraph frameworks

At Zylo, we're committed to Growing Stronger Together by fostering a diverse and inclusive workplace. We believe that a variety of perspectives not only fuels innovation, but also allows us to better serve our diverse customer base. If you meet the essential qualifications, we encourage you to apply and join us on this journey. Still growing in your career? Connect with our talent community: we're always looking for future Zylos who share our passion for continuous learning.
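The agent memory and context-management work described above comes down to deciding which past conversation survives into the next model call. A hedged sketch of one common pattern, a rolling window trimmed to a budget (word counts stand in for real token counts; the class and field names are illustrative, not Zylo's schema):

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    role: str
    text: str

@dataclass
class ConversationMemory:
    # Rolling window bounded by a budget; illustrative, not a production schema.
    budget: int = 50
    turns: list = field(default_factory=list)

    def add(self, role: str, text: str) -> None:
        self.turns.append(Turn(role, text))
        self._trim()

    def _trim(self) -> None:
        # Evict oldest turns until the (word-count) budget is respected.
        while sum(len(t.text.split()) for t in self.turns) > self.budget:
            self.turns.pop(0)

    def context(self) -> str:
        # Serialize surviving turns for inclusion in the next prompt.
        return "\n".join(f"{t.role}: {t.text}" for t in self.turns)

memory = ConversationMemory(budget=6)
memory.add("user", "one two three")
memory.add("assistant", "four five six")
memory.add("user", "seven eight")   # pushes total past budget; oldest turn evicted
print(memory.context())
```

Production systems typically layer summarization or retrieval over eviction so that dropped turns remain recoverable rather than simply lost.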
04/24/2026
Full time
JOB DESCRIPTION: The Software Integration & Operations (SIO) group turns frontier autonomy into mission-ready aircraft. We own the commit-to-flight pipeline: deterministic simulation at cluster scale, HIL/VIL integration, CI/CD, automated testing, and release engineering. Our goal is simple: make AI fly, safely, repeatably, and fast. As a Senior Modeling & Simulation Engineer, you will be dedicated to Shield AI's next-generation aircraft program, advancing our infrastructure to enable a streamlined simulation test pipeline. You'll design, build, and scale the models and infrastructure needed to use Shield's simulation platform for qualified software validation efforts. Your simulation infrastructure will enable developers to test faster, system engineers to evaluate performance earlier, and release engineers to ship with confidence.

What You'll Do
Build and scale simulation frameworks for integrated testing of autonomy, GNC, and embedded systems in C++.
Design deterministic, high-performance simulation tools capable of faster-than-real-time execution for development, testing, and release.
Implement scenario simulation tooling and formal test infrastructure.
Collaborate across autonomy, embedded, GNC, and test engineering to ensure the simulation mirrors real aircraft behavior and mission scenarios.
Profile, optimize, and validate C++ codebases for performance, determinism, and fidelity.
Perform verification & validation (V&V) analysis activities on model tools.
Contribute to architecture decisions that define the next generation of aircraft simulation tools within Shield AI.
Mentor engineers and guide best practices in C++, simulation architecture, and performance engineering.

Required Qualifications
BS or MS in Computer Science, Aerospace, Robotics, or related field.
5+ years of experience in software development, with emphasis on modern C++ and performance optimization.
Proven experience developing or integrating simulation systems for robotics, aerospace, or autonomous systems.
Strong grasp of real-time and deterministic software design, including multi-threading, synchronization, and memory management.
Experience with DevOps-integrated simulation workflows, including CI/CD and automated hardware testing environments.

Preferred Qualifications
Understanding of rigid-body dynamics, kinematics, and basic flight and sensor mechanics.
Familiarity with sensor and actuator modeling and integrating avionics or autonomy software within simulation.
Working knowledge of Python for data analysis, test automation, or simulation orchestration.
Ability to debug complex build and runtime environments (CMake, CPM, dependency management, logging, profiling tools).
Strong collaboration and communication skills across software, hardware, and systems disciplines.

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
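The verification & validation work described above often reduces to comparing a run's key output channels against golden values within tolerance, and flagging drift. A small sketch of that check (channel names, values, and the tolerance are illustrative assumptions):

```python
import math

def regression_failures(result: dict, golden: dict, rel_tol: float = 1e-6) -> list:
    # Return the channels whose values drift from the golden run beyond tolerance.
    # Channels missing from the result compare as NaN, which isclose() rejects.
    return [name for name, expected in golden.items()
            if not math.isclose(result.get(name, float("nan")), expected,
                                rel_tol=rel_tol)]

golden = {"max_altitude_m": 1200.0, "touchdown_speed_mps": 2.5}
run = {"max_altitude_m": 1200.0000003, "touchdown_speed_mps": 2.9}
print(regression_failures(run, golden))  # ['touchdown_speed_mps']
```

Combined with a deterministic simulator, a golden-run comparison like this can gate every commit in CI: any non-empty failure list blocks the release.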
04/24/2026
Full time
Application Deadline: March 5, 2026, or until the position is filled.

General Summary
Network Operations Center (NOC) Technicians monitor critical network elements and engage in proactive network systems monitoring, providing 24/7 operational monitoring and support for customers' networks and devices throughout our industry. This position is responsible for technical support and issues that come into the NOC via customers and/or monitoring software. This role responds to escalations and resolves issues that arise from hardware and software failures on our network. The NOC is responsible for resolving all manageable incidents within its scope and capabilities, supporting first-call resolution and escalating only complex or unresolved issues beyond its operational authority. This position is the central communication coordinator for the Company's Field and Network Operations group. The NOC Technician maintains the network and responds as necessary to incidents across several field divisions.

Essential Duties and Responsibilities
Monitor, support, and troubleshoot network issues throughout the Company's field divisions.
Answer inbound calls and resolve incidents for field service technicians; provide proactive network outage notification to the Network Field Infrastructure team; manage closure and document resolution; perform required maintenance to contain, eradicate, and recover from network outages and infrastructure security incidents.
Perform triage and escalation of network events; utilize advanced diagnostic tools to isolate and remediate incidents.
Diagnose issues with routers, switches, and wireless uplinks; document and escalate issues.
Provide proactive network outage notification and coordinate restoration with Field Infrastructure and Engineering teams.
Access network equipment remotely to identify and resolve issues.
Create and maintain accurate incident and change tickets; document resolution steps and lessons learned.
Maintain the Company's network environment; utilize all provided utilities, tools, and applications; ensure a high level of service availability.
Respond to network alarms and execute the proper classification, prioritization, and escalation workflows.
Provide critical service outage notification and escalate issues for timely resolution; notify appropriate Company personnel as appropriate.
Collaborate with DevOps and NetOps teams to identify automation opportunities in monitoring, ticketing, and workflow orchestration.
Maintain and comply with standard Company technical and administrative procedures.
Monitor network status and provide support for issues related to the network infrastructure, hardware, and applications.
Develop and maintain excellent customer relationships.
Act as a command center for the Network and Field groups during network crises.
Work with service providers to resolve circuit problems.
Manage the on-call and escalation directory.
Communicate and liaise with all other Company departments; notify appropriate parties immediately of any issues which may affect efficient operations, including, but not limited to, outages, service disruptions, tower volume, and repeated customer complaints.
Shift work is required, inclusive of weekends, holidays, and/or evenings, and some travel may be required.
Other duties as assigned.

Job Requirements
Previous experience in a Network Operations Center preferred.
Strong familiarity with PC, networking, and electronics systems.
Basic understanding of networking concepts including Internet Protocol (IP) addressing/subnetting, Virtual Local Area Network (VLAN), Layer 2 switching, Layer 3 routing, Dynamic Host Configuration Protocol (DHCP), and Domain Name System (DNS).
Knowledge of IP networking, VLANs, routing/switching, and RF/wireless infrastructure.
Ability to interpret logs, metrics, and telemetry data for proactive issue detection.
Basic experience with Network Management System (NMS) and monitoring applications.
Strong network problem isolation, resolution, analytical, and diagnostic skills.
Basic understanding of electrical theory, electrical operation, and broadcast radio frequency (RF).
Ability to work independently and accommodate various shifts in a 24x7x365 environment.
Must be comfortable working in a high-stress, fast-paced environment with shifting priorities.
Excellent communication skills (verbal and written).
Exceptional attention to detail.
Self-starter who can find and resolve issues as they are identified.
Proficient with Microsoft Office Suite and other job-related software; basic understanding of Microsoft Visio.
Contribute to internal knowledge-sharing and peer mentorship to improve team capability and efficiency.

Working Conditions
Employee remains in the sitting position for prolonged hours. Employee is occasionally required to stand, walk, use hands to handle or feel objects, tools, or controls; reach with hands and arms; and talk and hear. Employee must occasionally lift and/or move up to 15 pounds. Specific vision abilities required by the job include close vision, distance vision, color vision, peripheral vision, depth perception, and the ability to adjust focus. Working conditions may include being in an open (shared) cubicle/workspace area. Daily travel within the Company's geographical footprint and external destinations may be required.

Disclaimer
This job description is not meant to be an all-inclusive statement of every duty and responsibility which will ever be required of an employee in this position; however, the employee will be held responsible for all duties assigned. Please feel free to review our Benefits at the following link:
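The IP addressing and subnetting fundamentals listed in the requirements above are easy to check programmatically; Python's standard ipaddress module handles the arithmetic. The addressing plan and VLAN names below are illustrative examples, not the Company's:

```python
import ipaddress

# Map VLANs to their subnets (example addressing plan only).
vlans = {
    "mgmt":  ipaddress.ip_network("10.10.0.0/24"),
    "field": ipaddress.ip_network("10.10.1.0/24"),
}

def locate(host: str):
    """Return the name of the VLAN whose subnet contains this host, if any."""
    addr = ipaddress.ip_address(host)
    for name, net in vlans.items():
        if addr in net:
            return name
    return None

print(locate("10.10.1.37"))           # field
print(vlans["field"].netmask)         # 255.255.255.0
print(vlans["field"].num_addresses)   # 256
```

The same module computes broadcast addresses, supernets, and overlap checks, which is handy when triaging misconfigured DHCP scopes or VLAN assignments from the NOC.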
04/24/2026
Full time
Job Description
The Aircraft Simulation team turns frontier autonomy into mission-ready aircraft. We own the commit-to-flight pipeline: deterministic aircraft and mission simulation, HITL/SITL integration, CI/CD, and tooling for automated flight qualification testing. Our goal is simple: make AI fly, safely, reliably, and fast.

As a Senior Modeling & Simulation Engineer, you will be dedicated to Shield AI's next-generation aircraft program, contributing to our modeling and simulation tooling pipeline. You'll design, build, and scale novel aircraft subsystem models, develop infrastructure that enables automated testing for our XBAT product line, and perform verification and validation of simulation pipelines. You will also conduct system performance analysis to evaluate expected and actual flight and mission performance using simulation tools and publish results for consumption by customers.

What You'll Do
- Develop models and infrastructure for the integrated simulation pipeline in C++.
- Design deterministic, high-performance simulation tools capable of faster-than-real-time execution for development, testing, and release.
- Implement test scenarios and write unit, system, and regression tests.
- Collaborate across autonomy, embedded, GNC, and test engineering to ensure the simulation mirrors real aircraft behavior and mission scenarios.
- Contribute to platform-agnostic simulation tooling to accelerate future development efforts.
- Perform verification and validation (V&V) analysis on modeling tools.
- Conduct system performance analysis and generate reports and visualizations.
- Apply best practices in C++, simulation architecture, and performance engineering.

Required Qualifications
- BS or MS in Computer Science, Aerospace, Robotics, or a related field.
- 5+ years of experience in software development, with an emphasis on modern C++ (C++11 or later) and performance optimization.
- Strong understanding of rigid-body dynamics, kinematics, and basic flight and sensor mechanics.
- Proven experience developing or integrating simulation systems for robotics, aerospace, or autonomous systems.
- Ability to debug complex build and runtime environments (CMake, CPM, package management, logging, and profiling tools).
- Experience with software testing tools (GTest, etc.).
- Experience with model V&V.
- Strong collaboration and communication skills across software, hardware, and systems disciplines.

Preferred Qualifications
- Grasp of real-time and deterministic software design, including multi-threading, synchronization, and memory management.
- Experience with DevOps-integrated simulation workflows, including CI/CD and automated hardware testing environments.
- Working knowledge of Python for data analysis, test automation, or simulation orchestration.
- Familiarity with aircraft and flight physics modeling.

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
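To illustrate the deterministic, faster-than-real-time simulation the listing describes, here is a minimal, hypothetical fixed-timestep integrator sketch (in Python for brevity; all names are illustrative, not code from Shield AI). Because the loop uses a fixed `dt` and never touches the wall clock, repeated runs produce identical results and can execute much faster than real time:

```python
# Minimal fixed-timestep point-mass simulation: a deterministic loop like this
# (fixed dt, no wall-clock dependence) can run faster than real time and is
# reproducible run-to-run. Illustrative sketch only, not code from the posting.

def simulate_fall(height_m: float, dt: float = 0.001, g: float = 9.81) -> float:
    """Return the time (s) for a point mass to fall height_m, via semi-implicit Euler."""
    t, y, v = 0.0, height_m, 0.0
    while y > 0.0:
        v -= g * dt   # update velocity first (semi-implicit Euler)
        y += v * dt   # then position, using the new velocity
        t += dt
    return t

t_fall = simulate_fall(10.0)
print(t_fall)  # close to the analytic sqrt(2h/g) ≈ 1.43 s
```

Semi-implicit (symplectic) Euler is chosen here because it is stable for this kind of mechanical system at modest timesteps; regression tests can then assert bit-identical outputs across runs.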
04/24/2026
Full time
Job Description
Job Summary
Location: Austin, TX (Hybrid, 3 days in office)

Software engineering is changing fast. AI tools can already help with small tasks. But the real opportunity isn't in isolated prompts; it's in redesigning how mission-critical systems are built, maintained, and scaled using AI as a first-class engineering capability. We're looking for an exceptional Applied AI Engineer to help answer a fundamental question: what does it look like for a high-performance engineering team building institutional-grade financial software to use AI to its full potential, without compromising quality, security, or reliability?

What You Will Do
This is a hands-on engineering role. You will ship production code while simultaneously embedding AI deeply into how we operate. You will:
- Build AI-powered engineering workflows that meaningfully accelerate delivery without sacrificing reliability or correctness.
- Design and deploy agent-based systems, orchestration layers, and AI-assisted tooling that operate in real production environments.
- Contribute directly to core product code, using AI to amplify your own impact and prove out the workflows you introduce.
- Systematize AI usage across the team, turning ad hoc experimentation into consistent, high-quality processes with guardrails and observability.
- Prototype AI-enabled product features, partnering closely with product and engineering.
- Continuously evaluate emerging models and tools, exercising strong judgment about what to adopt and what to ignore.
- Engineer AI infrastructure as a production system, including CI/CD integration, configuration management, and safe rollout strategies.

You won't be leading a team of people (yet). You'll be leading a team of AI agents.

Who You Are
You're a strong engineer first. You've already shipped and owned meaningful production systems. You think in architectures, trade-offs, and failure modes, not just features. System design comes naturally to you. You're deeply excited about how AI is reshaping software engineering. Not casually interested; genuinely obsessed. You experiment constantly. You've likely built your own agents, internal tooling, or AI-powered workflows simply because you couldn't resist. You:
- Code daily and enjoy it.
- Have strong production instincts and know what good looks like.
- Care about code quality, correctness, and reliability.
- Take pride in unblocking others and increasing team leverage.
- Think about optimization, efficiency, and long-term engineering velocity.
- Are comfortable operating in ambiguity and turning emerging ideas into production-ready systems.
- Treat AI infrastructure like any other critical backend system: versioned, observable, reproducible, and safe.

You likely have open-source contributions, side projects, or public experiments demonstrating how you use AI in real engineering environments. You don't just use AI tools; you engineer workflows around them.

This Role Is Not For You If
- You're primarily interested in AI research or training new foundation models. This is about applying existing models to real engineering systems.
- You prefer advisory or short-term strategy work. We are looking for someone embedded deeply with the team who owns outcomes over time.
- You enjoy automation disconnected from real production constraints. Everything here touches real code and real systems.
- You're looking to manage a team. This is an individual contributor role focused on hands-on building.
- You need fully remote work. This role is highly collaborative and based in Austin (3 days per week in office).
04/24/2026
Full time
Job Description
We are sharing a specialised part-time consulting opportunity for experienced software engineers with strong backgrounds in distributed systems, backend infrastructure, cloud environments, and production-grade systems engineering. This role supports current and upcoming remote consulting opportunities focused on structured technical workflows, distributed systems engineering, technical evaluation, and high-quality project execution. Selected professionals will apply their software engineering expertise to design and optimize backend infrastructure, support scalable distributed services, collaborate across technical workflows, follow technical instructions with precision, and contribute to high-quality technical deliverables. This opportunity is especially well suited to engineers with strong backend and systems expertise who are comfortable building reliable distributed services and supporting high-performance runtime environments.

Key Responsibilities
Professionals in this role may contribute to:

Distributed Systems & Infrastructure Engineering
- Design, build, and optimize distributed infrastructure across high-performance compute environments.
- Evaluate and improve system performance across compute, networking, storage, and service layers, identifying and resolving bottlenecks.
- Implement monitoring, observability, and fault-tolerance mechanisms for long-running processes and distributed workflows.

Backend Systems Development
- Develop core backend systems, including services, APIs, and orchestration layers that support complex technical workflows.
- Build and maintain runtime infrastructure, including task scheduling, state management, inter-service communication, and execution reliability.
- Support production-grade infrastructure for scalable backend systems and distributed application behavior.

Technical Collaboration & Iteration
- Collaborate closely with technical teams to integrate backend services, infrastructure workflows, and production systems.
- Participate in synchronous collaboration sessions to review architecture decisions, troubleshoot distributed systems, and iterate on design improvements.
- Help shape foundational systems that support scalable, reliable technical operations.

Ideal Profile
Strong candidates may have:
- A strong foundation in Computer Science, Software Engineering, or Systems Design, with experience building large-scale distributed systems.
- Proficiency in one or more backend or systems programming languages, such as Go, Rust, Python, C++, Java, Scala, C#, Kotlin, or TypeScript/JavaScript.
- Experience with cloud infrastructure such as AWS, GCP, or Azure.
- Experience with containerization and orchestration tools such as Docker and Kubernetes.
- Strong experience designing production-grade backend services, APIs, and distributed systems.
- Excellent collaboration and communication skills.

Preferred Qualifications
- Familiarity with advanced backend infrastructure, workflow orchestration, or high-scale runtime environments.
- Knowledge of networking, data streaming, caching, and performance optimization in distributed systems.
- Strong comfort working across both systems engineering and infrastructure contexts.
- Ability to operate effectively in fast-moving, technically demanding environments.

Why This Opportunity
- Apply specialised distributed systems and backend engineering expertise to high-impact technical work.
- Contribute to scalable infrastructure, backend services, and production engineering workflows.
- Work on practical, detail-oriented assignments with strong real-world relevance.
- Collaborate across technical teams on challenging problems in scalability, reliability, and system performance.

Contract Details
- Independent contractor role.
- Fully remote with flexible scheduling.
- Hourly compensation of $110-$175 per hour.
- Expected commitment of 30-40 hours per week.
- Participation in synchronous collaboration sessions is required, with 4-hour windows 2-3 times per week.
- Projects may be extended, shortened, or concluded early depending on project needs and performance.
- Weekly payments via Stripe or Wise.
- Work will not involve access to confidential or proprietary information from any employer, client, or institution.
- Please note: we are unable to support H-1B or STEM OPT candidates at this time.
- Start date: immediate.

About the Platform
This opportunity is available through 24-MAG LLC. We connect experienced professionals with remote consulting opportunities across technical, evaluation, and project-based workstreams. By submitting this application, you acknowledge that your information may be processed by 24-MAG LLC for recruitment and opportunity matching in accordance with our Privacy Policy:
04/24/2026
Full time
Job Description
Looking for an innovative, high-growth, multi-award-winning company in one of the hottest segments of the security market? Look no further than Veracode! Veracode is a global leader in Application Risk Management for the AI era. Powered by trillions of lines of code scans and a proprietary AI-generated remediation engine, the Veracode platform is trusted by organizations worldwide to build and maintain secure software from code creation to cloud deployment. Learn more at , on the Veracode blog, and on LinkedIn and Twitter.

We are looking for a motivated DevOps Co-op to join our engineering team and gain hands-on experience in cloud infrastructure, automation, and CI/CD pipelines. This role will allow you to work closely with experienced DevOps engineers, software developers, and IT teams to help improve the reliability, scalability, and security of our systems.

Responsibilities:
- Assist in managing and automating cloud infrastructure (AWS, Azure, or GCP).
- Work with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI, etc.) to streamline software deployments.
- Support containerization and orchestration efforts using Docker and Kubernetes.
- Monitor and troubleshoot system performance, logs, and security incidents.
- Write scripts and automation tools in Python, Bash, or Terraform to improve workflow efficiency.
- Collaborate with development teams to improve deployment processes and infrastructure reliability.
- Participate in infrastructure as code (IaC) development and configuration management.
- Document processes, configurations, and best practices for internal use.

Qualifications:
- Currently enrolled in a Master's degree program in Computer Science, Software Engineering, or a related field.
- Understanding of Linux/Unix systems and networking fundamentals.
- Familiarity with cloud platforms (AWS, Azure, or GCP) is a plus.
- Basic experience with scripting languages (Python, Bash, etc.).
- Knowledge of version control systems (Git) and CI/CD concepts.
- Exposure to containerization tools like Docker and Kubernetes is a plus.
- Strong problem-solving skills and a willingness to learn new technologies.

Why Join Us?
- Hands-on experience with modern DevOps tools and best practices.
- Work on real-world projects in a collaborative environment.
- Mentorship and learning opportunities from experienced engineers.
- Opportunity to contribute to meaningful infrastructure improvements.

Fraudulent Recruitment Alert: Be Aware and Stay Informed
At Veracode, we prioritize a secure recruitment process. Unfortunately, fake recruitment and job offer scams are on the rise. They aim to deceive candidates through emails and calls to obtain sensitive information. Here's our recruitment promise to you:
- Comprehensive interview process: we never extend job offers without a comprehensive interview process involving our recruitment team and hiring managers.
- Offer communications: our job offers are not sent solely through email, and we will never ask you to pay for your own hardware.
- Email verification: recruiting emails from Veracode will always originate from an email address. If you have any doubts about the authenticity of an email, letter, or telephone communication claiming to be from Veracode, please reach out to us at before taking any further action.
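For a flavor of the log-monitoring scripting mentioned in the responsibilities, here is a small, hypothetical Python sketch that counts error lines per service and flags noisy ones; the log format, names, and threshold are all invented for illustration:

```python
# Tiny log-triage helper: count ERROR lines per service and flag noisy ones.
# The log format, service names, and threshold are invented for illustration.
from collections import Counter

def noisy_services(log_lines, threshold=2):
    """Return services whose ERROR count meets the threshold, sorted by name."""
    errors = Counter()
    for line in log_lines:
        level, service, _msg = line.split(" ", 2)  # e.g. "ERROR auth token expired"
        if level == "ERROR":
            errors[service] += 1
    return sorted(s for s, n in errors.items() if n >= threshold)

logs = [
    "ERROR auth token expired",
    "INFO api request ok",
    "ERROR auth upstream timeout",
    "ERROR api 502 from gateway",
]
print(noisy_services(logs))  # ['auth']
```

In practice a co-op might extend a script like this to read from a real log stream and emit an alert, rather than printing to stdout.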
04/24/2026
Full time
STAND 8 provides end-to-end IT solutions to enterprise partners across the United States, with offices in Los Angeles, New York, New Jersey, Atlanta, and more, including internationally in Mexico and India. We're looking for an experienced, forward-thinking engineer to strengthen our pipeline delivery capabilities by architecting robust CI/CD workflows within our Platform Engineering team. In this role, you will drive the design and evolution of scalable, secure, and automated pipelines, primarily utilizing GitLab CI/CD and Python. You'll work closely with diverse technology teams to standardize deployment patterns, embed security scanning, and champion pipeline-as-code methodologies. This position plays a key role in improving platform efficiency, accelerating software delivery, and advancing automation practices across the organization.

Location & Work Type
Location: Carrollton, Texas
Work Type: Onsite

Key Responsibilities
Design, implement, and manage scalable and resilient CI/CD pipelines using GitLab CI/CD to support microservices and monolithic applications.
Develop and maintain advanced automation scripts and tooling using Python to streamline build, test, and release processes.
Architect and maintain reusable pipeline templates and libraries to ensure standardization and ease of adoption across development teams.
Integrate Infrastructure-as-Code (IaC) workflows (Terraform/OpenTofu) into application pipelines for automated environment provisioning.
Implement and enforce security best practices within the CI/CD lifecycle, including SAST/DAST scanning, dependency checking, and secret management.
Collaborate closely with diverse teams to optimize build times, manage artifact lifecycles, and provide Pipeline Engineering expertise.
Troubleshoot and resolve complex pipeline failures, build errors, and deployment issues across Windows and Linux environments.
Implement and manage pipeline observability and metrics to ensure deployment visibility and proactive issue detection.
Contribute clearly and concisely to the development and documentation of Pipeline Engineering standards and GitOps best practices.
Stay up to date with the latest industry trends and technologies in CI/CD, DevSecOps, and build automation.
Provide mentorship and guidance to junior team members on pipeline architecture and Python automation.

Qualifications
Required:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
5+ years of experience in a Platform, DevOps, Release, or Pipeline Engineer role.
Extensive hands-on experience designing and implementing complex CI/CD pipelines using GitLab CI/CD.
Strong scripting and software development skills, specifically with Python, for automation and API integration.
Solid understanding of Windows/Linux server administration as it relates to build agents and deployment targets.
Proven experience integrating infrastructure-as-code (IaC) tools, specifically Terraform (OpenTofu) and AWS CDK, into automated pipelines.
Experience deploying and managing applications in cloud environments, particularly Amazon Web Services (AWS).
Deep understanding of security best practices (DevSecOps) and their implementation in CI/CD pipelines (e.g., SonarQube).
Solid understanding of version control strategies (Git branching models) and artifact management (e.g., Artifactory).
Excellent problem-solving and troubleshooting skills related to build and deployment failures.
Strong communication and collaboration skills.

Preferred (Optional):
Experience with containerization & orchestration technologies (e.g., Docker, Kubernetes/EKS).
Relevant AWS or Platform/DevOps certifications.
Strong background with .NET/Core build processes and deployment patterns.
Experience migrating legacy pipelines (e.g., Jenkins) to GitLab CI/CD.
Understanding of Windows server build processes using tools like Packer and Chocolatey.
Experience with monitoring tools integrated into deployment workflows (e.g., New Relic, CloudWatch).

Benefits
Medical coverage and Health Savings Account (HSA) through Anthem
Dental/Vision/Various Ancillary coverages through Unum
401(k) retirement savings plan
Paid-time-off options
Company-paid Employee Assistance Program (EAP)
Discount programs through ADP WorkforceNow

Additional Details
The base range for this contract position is $62.11 - $72.11 per hour, depending on experience. Our pay ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hires of this position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Qualified applicants with arrest or conviction records will be considered.

About Us
STAND 8 provides end-to-end IT solutions to enterprise partners across the United States and globally, with offices in Los Angeles, Atlanta, New York, Mexico, Japan, India, and more. STAND 8 focuses on the "bleeding edge" of technology and leverages automation, process, marketing, and over fifteen years of success and growth to provide a world-class experience for our customers, partners, and employees. Our mission is to impact the world positively by creating success through PEOPLE, PROCESS, and TECHNOLOGY. Reach out today to explore opportunities to grow together! By applying to this position, your data will be processed in accordance with the STAND 8 Privacy Policy.
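To make the "troubleshoot and resolve complex pipeline failures" and "reusable pipeline templates" responsibilities concrete, here is a hypothetical sketch (the job/stage names and the checker itself are illustrative, not part of the role): a tiny Python pre-flight lint that catches two classic pipeline-as-code mistakes before a run ever fails, a job depending on an undefined job and a job depending on one from a later stage.

```python
# Hypothetical pre-flight check for a pipeline-as-code definition: every
# job's "needs" entry must refer to a job that exists and that belongs to
# an earlier stage. Stage and job names below are illustrative.
def lint_pipeline(stages, jobs):
    """jobs: mapping of job name -> {"stage": str, "needs": [job names]}"""
    order = {stage: i for i, stage in enumerate(stages)}
    problems = []
    for name, spec in jobs.items():
        if spec["stage"] not in order:
            problems.append(f"{name}: unknown stage {spec['stage']!r}")
            continue
        for dep in spec.get("needs", []):
            if dep not in jobs:
                problems.append(f"{name}: needs undefined job {dep!r}")
            elif order[jobs[dep]["stage"]] >= order[spec["stage"]]:
                problems.append(f"{name}: needs {dep!r} from a later or same stage")
    return problems

stages = ["build", "test", "deploy"]
jobs = {
    "compile": {"stage": "build", "needs": []},
    "unit":    {"stage": "test", "needs": ["compile"]},
    "release": {"stage": "deploy", "needs": ["unit", "integration"]},
}
print(lint_pipeline(stages, jobs))  # ["release: needs undefined job 'integration'"]
```

Standardized templates plus cheap static checks like this are one common way a platform team keeps dozens of product pipelines consistent and debuggable.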
04/24/2026
Full time
Pay Rate: $45 per hour
Location: Irvine, CA
Work Mode: Not specified

Responsibilities:
Designs and builds scalable cloud-native applications, integrating AI and machine learning capabilities.
Leads development of microservices, APIs, and full-stack solutions, ensuring reliability, security, and performance.
Collaborates with cross-functional teams to implement DevOps practices, automated delivery pipelines, and strong observability.
Contributes to AI-driven features such as document processing, chatbots, and AI agents.
Architects, designs, and develops microservices-based applications for scalability.
Utilizes containerization technologies for consistent deployment across environments.
Implements and manages container orchestration for automated deployment and scaling.
Architects solutions that scale horizontally and optimize resource utilization.
Designs resilient and fault-tolerant applications for high availability.
Implements robust monitoring and logging practices using tools like Prometheus and Grafana.
Champions API-first design principles for seamless communication between microservices.
Mentors and guides other developers, fostering a culture of engineering excellence.
Assists in building AI/ML solutions, including document understanding and chatbot functionality.
Supports development of AI agents and workflows integrating with internal systems.
Tests and evaluates AI models, including large language models and computer vision systems.
Gains exposure to MLOps practices such as model versioning and deployment automation.
Embraces agile methodologies for rapid and iterative development cycles.
Documents work clearly and adheres to established coding standards and practices.

Requirements:
Bachelor's degree in Computer Science or related field, or 5 years of experience.
10 years of experience in software development, including designing and deploying cloud-native applications.
Proficiency in Python and libraries such as TensorFlow and PyTorch.
Expertise in multiple programming languages such as C, Rust, Java, Python, or similar.
Expertise in backend development and working knowledge of databases (SQL, NoSQL).
Expertise in JavaScript frameworks like Angular.js, React.js, or Vue.js.
Experience with scalable and resilient system design.
Experience in enterprise software design principles and event-driven architecture.
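The "resilient and fault-tolerant applications" line points at a family of well-known patterns. As one illustrative sketch (the helper and the simulated dependency are hypothetical, not from the posting), here is retry with exponential backoff, a building block behind many high-availability designs:

```python
import time

# Illustrative fault-tolerance pattern: retry a flaky call with
# exponentially growing delays before giving up. Names are hypothetical.
def retry_with_backoff(call, attempts=4, base_delay=0.01, sleep=time.sleep):
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the failure
            sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...

# Simulated dependency that fails twice before succeeding.
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry_with_backoff(flaky))  # ok
```

In production this is usually paired with jitter and a circuit breaker so that a struggling downstream service is not hammered by synchronized retries.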
04/24/2026
Full time
Data Engineer, Senior
REMOTE, 12 Months
Pay: $80-110 per hour
Open to all US candidates who can work 8 AM - 5 PM PST

Description:
The Data Engineering Platform Lead is responsible for the overall design, operation, and evolution of enterprise data platforms supporting analytics, integration, and business intelligence. This role owns the data engineering platform stack, including Snowflake, Informatica, dbt, AWS, and CI/CD tooling, and serves as the primary interface between data engineering teams and partner organizations such as Cyber Security, Cloud Infrastructure, DevOps, and Enterprise Architecture. This is a hands-on technical leadership role with strong platform management and communication responsibilities, ensuring platforms are secure, scalable, cost-effective, and aligned with enterprise standards.

Key Responsibilities

Platform Ownership & Strategy
Own the end-to-end data engineering platform, including data storage, ingestion, transformation, orchestration, and DevOps tooling.
Define and maintain platform standards, reference architectures, and best practices for:
Data ingestion (e.g., Informatica)
Data transformation and modeling (dbt)
Data warehousing (Snowflake)
Drive platform roadmaps and adoption of modern data engineering patterns.
Evaluate and recommend platform enhancements, tooling upgrades, and new capabilities.

Cross-Functional Collaboration
Act as the primary point of contact with:
Cyber Security (data protection, access controls, audits)
Cloud Infrastructure (AWS services, scalability, resiliency)
DevOps (CI/CD, environment management, automation)
Enterprise Architecture and Data Governance teams
Translate data engineering needs into platform and infrastructure requirements and ensure alignment across teams.

Technical Leadership
Provide technical leadership and guidance to data engineers using the platform.
Establish and enforce dbt standards, including:
Project structure and layering
Naming conventions and documentation
Testing, freshness, and data quality practices
Guide solution designs to ensure efficient, secure, and scalable use of Snowflake and dbt.
Promote software engineering and DevOps best practices for analytics engineering.

DevOps & Automation
Lead implementation of CI/CD pipelines for data platforms, including dbt deployments.
Integrate Git-based workflows, automated testing, and controlled promotion across environments.
Partner with DevOps teams to improve reliability, repeatability, and release confidence.

Operational Excellence
Ensure platform reliability, performance, and cost optimization.
Define monitoring, alerting, and support models for platform services.
Support compliance, audit, and risk remediation activities.
Maintain platform documentation, runbooks, and onboarding materials.

Required Technical Skills
Strong experience managing and operating enterprise data platforms, including:
Snowflake (architecture, administration, performance, security)
Informatica (PowerCenter and/or Informatica Cloud)
dbt (core concepts, modeling patterns, tests, documentation, environments)
AWS (foundational services supporting data platforms)
Hands-on experience with DevOps and CI/CD, including:
Git / GitHub (branching and version control)
Jenkins (or equivalent CI/CD tools)
Strong understanding of:
Data modeling and transformation best practices
Cloud security and access controls
Platform scalability, reliability, and cost management

Communication & Leadership Skills
Excellent verbal and written communication skills.
Proven ability to work across multiple teams with differing priorities.
Comfortable leading design reviews, architecture discussions, and platform governance forums.
Ability to influence without direct authority and drive adoption of standards.

Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field.
7+ years of experience in data engineering or data platform engineering.
3+ years in a technical lead, platform lead, or principal-level role.
Experience in large enterprise or regulated environments preferred.

Preferred Qualifications
Experience supporting federated development teams using shared dbt and Snowflake platforms.
Familiarity with data governance, metadata management, and analytics catalogs.
Experience defining operating and support models for shared platforms.
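"Establish and enforce dbt standards, including... naming conventions" often ends up as an automated check in CI rather than a written rule. As a sketch under assumptions (the `stg_`/`int_`/`fct_`/`dim_` prefixes follow a widely used dbt layering convention; the exact standard here is hypothetical, not this client's), a platform lead might gate merges with something like:

```python
import re

# Hypothetical naming gate for dbt models: require a layer prefix
# (staging, intermediate, fact, dimension) plus lowercase snake_case.
# The convention is an assumed example of a common dbt style.
ALLOWED = re.compile(r"^(stg|int|fct|dim)_[a-z][a-z0-9_]*$")

def check_model_names(names):
    """Return the model names that violate the assumed convention."""
    return [n for n in names if not ALLOWED.match(n)]

models = ["stg_orders", "fct_revenue", "Orders_Final", "dim_customer"]
print(check_model_names(models))  # ['Orders_Final']
```

Wiring a check like this into the CI/CD pipeline (alongside dbt's own tests and documentation coverage checks) is how "enforce" scales across federated development teams.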
04/24/2026
Full time
DivIHN (pronounced "divine") is a CMMI ML3-certified Technology and Talent solutions firm. Driven by a unique Purpose, Culture, and Value Delivery Model, we enable meaningful connections between talented professionals and forward-thinking organizations. Since our formation in 2002, organizations across commercial and public sectors have been trusting us to help build their teams with exceptional temporary and permanent talent. Visit us to learn more and view our open positions. Please apply or call one of us to learn more. For further inquiries about this opportunity, please contact our Talent Specialist, Sri.

Title: DevOps Engineer
Duration: 7 Months
Location: Sylmar, CA
Only W2 candidates are eligible for this position. Third-party or C2C candidates will not be considered.

Job Description:
Skills:
Strong verbal and written communications, with the ability to effectively communicate at multiple levels in the organization
Multitasks, prioritizes, and meets deadlines in a timely manner
Ability to maintain regular and predictable attendance

Duties:
Passion for DevOps, DevSecOps, Agile, and Security
Working knowledge of Azure and Azure PaaS services
Working knowledge of Waterfall, Agile, and primarily DevOps development methodologies
Normally determines technical objectives of assignments; exercises latitude in approach to solutioning
Knowledgeable in managing software code projects
Experience with automation in testing or orchestration
Experience with tools such as Terraform, Git, and GitLab
Knowledgeable in CI/CD in relation to GitLab, Azure DevOps, and/or similar platforms
Exposure to security checks in CI/CD pipelines
Understanding of virtualization and container technologies (Docker, AKS/Kubernetes, etc.)
Experience with REST APIs
Experience managing vendor relationships
Ability to contribute to DevOps workflows via scripting and other regular system administration activities
Demonstrated knowledge of scripting languages
General exposure to how key networking components operate, primarily firewalls and load balancers/reverse proxies such as F5 LTM/GTM or Azure native services
Provide system performance optimization, maintenance, and production support (if escalated to)
Refine conceptual system requirements into a technical design consisting of job flows and program specifications
Understand customers' business objectives and system requirements, and work closely with customers to determine their strategic requirements and measure performance against expectations; assist in customer resolutions
Responsible for staying abreast of new developments in technologies and making recommendations as appropriate
Applies enterprise security policies and standards when performing all operational duties

Experience and Education Required:
Bachelor's degree in Computer Science, Computer, Electrical, or Biomedical Engineering.
Knowledge of software coding.
Knowledge of software development lifecycle management tools.
Organized, on-time, quick learner, and detail-oriented.
Excellent documentation skills in delivering information that adds value to management's decision-making process.
Experienced in quantitative, analytical, organizational, and follow-up skills.
Polished communicator - written documentation and oral presentations/discussions/meetings.
Excellent reputation for building relationships across various levels of an organization.
Energized attitude, proactive thinker, and self-starter.
We are primarily looking for somebody who has good coding skills; no testing experience is required. Education in computer science is preferred. 0-2 years of experience. Looking for candidates who are organized and proficient thinkers with excellent communication skills.
Medical device experience is nice to have but not required. Prefer local candidates, but relocation at own expense is fine as long as it's within two weeks and onboarding is not interrupted. Potential to extend and/or convert to FTE for the right candidate, but no guarantees.
Builds and maintains CI/CD pipelines
Manages cloud infrastructure and containerized environments
Implements monitoring, logging, and alerting
Automates repetitive operational tasks
Ensures smooth, reliable deployments

About us:
DivIHN, the 'IT Asset Performance Services' organization, provides Professional Consulting, Custom Projects, and Professional Resource Augmentation services to clients in the Mid-West and beyond. The strategic characteristics of the organization are Standardization, Specialization, and Collaboration. DivIHN is an equal opportunity employer. DivIHN does not and shall not discriminate against any employee or qualified applicant on the basis of race, color, religion (creed), gender, gender expression, age, national origin (ancestry), disability, marital status, sexual orientation, or military status.

Keywords: REST APIs, scripting languages, CI/CD pipelines
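The "implements monitoring, logging, and alerting" duty can be pictured with a minimal sketch (the metric format, service names, and threshold are all hypothetical, chosen only to illustrate the idea): an alerting rule that flags services whose error rate crosses a limit.

```python
# Illustrative alerting rule: flag services whose error rate exceeds a
# threshold. The metric shape and the 5% default are assumed examples.
def services_over_threshold(metrics, max_error_rate=0.05):
    """metrics: mapping of service -> (error count, total request count)."""
    alerts = []
    for service, (errors, total) in metrics.items():
        if total and errors / total > max_error_rate:  # skip idle services
            alerts.append(service)
    return sorted(alerts)

metrics = {"auth": (2, 100), "billing": (12, 150), "search": (0, 80)}
print(services_over_threshold(metrics))  # ['billing']
```

In a real Azure environment this logic would live in the monitoring stack (e.g., alert rules fed by platform metrics) rather than a standalone script, but the thresholding idea is the same.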
04/24/2026
Full time
DivIHN (pronounced "divine") is a CMMI ML3-certified Technology and Talent solutions firm. Driven by a unique Purpose, Culture, and Value Delivery Model, we enable meaningful connections between talented professionals and forward-thinking organizations. Since our formation in 2002, organizations across commercial and public sectors have trusted us to help build their teams with exceptional temporary and permanent talent. Visit us at to learn more and view our open positions. Please apply or call one of us to learn more. For further inquiries about this opportunity, please contact our Talent Specialist, Sri at .
Title: DevOps Engineer
Duration: 7 Months
Location: Sylmar, CA
Only W2 candidates are eligible for this position. Third-party or C2C candidates will not be considered.
Job Description:
Skills:
Strong verbal and written communication, with the ability to communicate effectively at multiple levels of the organization
Multitasks, prioritizes, and meets deadlines in a timely manner
Ability to maintain regular and predictable attendance
Duties:
Passion for DevOps, DevSecOps, Agile, and Security
Working knowledge of Azure and Azure PaaS services
Working knowledge of Waterfall, Agile, and primarily DevOps development methodologies
Normally determines technical objectives of assignments; exercises latitude in approach to solutioning
Knowledgeable in managing software code projects
Experience with automation in testing or orchestration
Experience with tools such as Terraform, Git, and GitLab
Knowledgeable in CI/CD on GitLab, Azure DevOps, and/or similar platforms
Exposure to security checks in CI/CD pipelines
Understanding of virtualization and container technologies (Docker, AKS/Kubernetes, etc.) 
Experience with REST APIs
Experience managing vendor relationships
Ability to contribute to DevOps workflows via scripting and other routine system administration activities
Demonstrated knowledge of scripting languages
General exposure to how key networking components operate, primarily firewalls and load balancers/reverse proxies such as F5 LTM/GTM or Azure-native services
Provide system performance optimization, maintenance, and production support (if escalated to)
Refine conceptual system requirements into a technical design consisting of job flows and program specifications
Understand customers' business objectives and system requirements; work closely with customers to determine their strategic requirements and measure performance against expectations; assist in customer resolutions
Responsible for staying abreast of new developments in technologies and making recommendations as appropriate
Applies enterprise security policies and standards when performing all operational duties
Experience and Education Required:
Bachelor's degree in Computer Science or Computer, Electrical, or Biomedical Engineering
Knowledge of software coding
Knowledge of software development lifecycle management tools
Organized, on time, a quick learner, and detail-oriented
Excellent documentation skills, delivering information that adds value to management's decision-making process
Strong quantitative, analytical, organizational, and follow-up skills
Polished communicator: written documentation and oral presentations/discussions/meetings
Excellent reputation for building relationships across various levels of an organization
Energized attitude, proactive thinker, and self-starter
We are primarily looking for somebody with good coding skills; no testing experience is required. Education in computer science is preferred. 0-2 years of experience. Looking for candidates who are organized, proficient thinkers with excellent communication skills. 
Medical device experience is nice to have but not required. Local candidates are preferred, but relocation at the candidate's own expense is acceptable as long as it happens within two weeks and onboarding is not interrupted. Potential to extend and/or convert to FTE for the right candidate, but no guarantees.
Builds and maintains CI/CD pipelines
Manages cloud infrastructure and containerized environments
Implements monitoring, logging, and alerting
Automates repetitive operational tasks
Ensures smooth, reliable deployments
About us: DivIHN, the 'IT Asset Performance Services' organization, provides Professional Consulting, Custom Projects, and Professional Resource Augmentation services to clients in the Mid-West and beyond. The strategic characteristics of the organization are Standardization, Specialization, and Collaboration. DivIHN is an equal opportunity employer. DivIHN does not and shall not discriminate against any employee or qualified applicant on the basis of race, color, religion (creed), gender, gender expression, age, national origin (ancestry), disability, marital status, sexual orientation, or military status.
Keywords: REST APIs, scripting languages, CI/CD pipelines
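For candidates gauging the scripting bar behind the "REST APIs, scripting languages" and "automates repetitive operational tasks" requirements above, here is a minimal stdlib-only Python sketch of that kind of task automation: polling a REST endpoint with retries and backoff. The endpoint name, retry policy, and injectable `opener` parameter are illustrative assumptions, not anything specified by the posting.

```python
import json
import time
import urllib.request
from urllib.error import URLError

def check_endpoint(url: str, retries: int = 3, backoff_s: float = 1.0,
                   opener=urllib.request.urlopen) -> dict:
    """Poll a JSON REST endpoint, retrying with linear backoff on failure.

    `opener` is injectable so the function can be exercised in CI
    without a live service (a common DevOps testing pattern).
    """
    last_err = None
    for attempt in range(1, retries + 1):
        try:
            with opener(url, timeout=5) as resp:
                return json.loads(resp.read().decode("utf-8"))
        except (URLError, TimeoutError) as err:
            last_err = err
            if attempt < retries:
                time.sleep(backoff_s * attempt)  # back off before retrying
    raise RuntimeError(f"{url} unreachable after {retries} attempts") from last_err
```

A script like this would typically run on a schedule or inside a CI/CD job, feeding its result into monitoring or alerting.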
AI Infrastructure Engineer (Python)
Full Time, 5 Days Onsite in NYC
About the Role
We are seeking an AI Infrastructure Engineer (Python) to support, scale, and enhance a production AI and data platform. This role sits at the intersection of AI infrastructure, cloud engineering, and agent-based systems. You will be responsible for ensuring the reliability, scalability, and performance of AI-driven systems operating in production environments across multi-cloud platforms (Azure and GCP). This is not a modeling or research role; it's focused on building and maintaining the infrastructure that powers AI systems at scale. This is an excellent opportunity for someone with strong foundational engineering skills who is eager to deepen their expertise in AI platforms and cloud-native systems.
What You'll Do
Systems Engineering & Agent Operations
Develop, maintain, and optimize production-grade Python code supporting data pipelines, agent workflows, and platform tooling
Own the full lifecycle of Python services (containerization, deployment, versioning, runtime management)
Manage environment configurations, secrets injection, and dependency management across containerized services
Build internal Python tooling and shared libraries to accelerate development workflows
Troubleshoot production issues end-to-end across application and infrastructure layers
AI Platform & Scaling
Operate and scale AI-driven agent systems in production environments
Ensure high availability, performance, and resilience under load
Support integrations between AI agents and data platforms
Build observability tools (logging, monitoring, tracing, alerting)
Implement auto-scaling strategies for containerized workloads
Contribute to evaluation frameworks and quality standards for AI systems
Infrastructure & Cloud Operations
Develop and manage infrastructure using Terraform across Azure and GCP
Manage cloud services including container registries, identity systems, secrets management, and networking
Deploy and maintain workflow orchestration tools (e.g., Prefect)
Maintain CI/CD pipelines and release workflows
Document systems, workflows, and data lineage with clear runbooks
What We're Looking For
Required
3-5 years of experience in Software Engineering, DevOps, or MLOps
Strong Python skills with experience building production systems
Experience with Docker and containerized applications in cloud environments (Azure and/or GCP)
Hands-on experience with Terraform
Experience with secrets management tools and secure configuration practices
Familiarity with CI/CD pipelines and Git-based workflows
Strong troubleshooting and systems-thinking mindset
Interest in AI systems and infrastructure
Preferred
Experience with Azure services (Container Apps, ACR, Key Vault, Managed Identities, VNets)
Experience with GCP services (Cloud Run, GKE, Vertex AI, IAM, Secret Manager)
Familiarity with workflow orchestration tools (e.g., Prefect)
Exposure to AI/agent frameworks (e.g., LangChain, MCP)
Experience with observability tools (e.g., MLflow, Langfuse)
Experience with data tools such as dbt or Snowflake
Familiarity with multi-cloud environments
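To make the "environment configurations, secrets injection" responsibility above concrete, here is a minimal Python sketch of the fail-fast configuration pattern commonly used in containerized services: settings arrive as environment variables (injected by the platform from a secret store such as Key Vault or Secret Manager), and startup fails immediately if a required value is missing. The variable names and `ServiceConfig` fields are illustrative assumptions.

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceConfig:
    # Field names are hypothetical; real services define their own.
    registry_url: str
    db_password: str
    log_level: str = "INFO"

def load_config(env=None) -> ServiceConfig:
    """Build config from environment variables, failing fast when a
    required secret is missing so a misconfigured container never
    starts half-working."""
    env = dict(os.environ) if env is None else env
    missing = [k for k in ("REGISTRY_URL", "DB_PASSWORD") if not env.get(k)]
    if missing:
        raise RuntimeError(f"missing required settings: {', '.join(missing)}")
    return ServiceConfig(
        registry_url=env["REGISTRY_URL"],
        db_password=env["DB_PASSWORD"],
        log_level=env.get("LOG_LEVEL", "INFO"),  # optional, with a default
    )
```

The frozen dataclass keeps configuration immutable after startup, which makes behavior under container restarts and scaling events easier to reason about.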
04/24/2026
Full time
Full-Stack Agentic AI Developer (Hybrid with travel) with Product Ownership Responsibilities
Position Summary
We are looking for a Full-Stack Agentic AI Developer who can build autonomous, goal-driven AI systems and ship them as products that solve real business problems. This is not a traditional developer role. The ideal candidate blends deep technical fluency in agentic frameworks and AI-native development tools with a product owner's instinct for prioritization, stakeholder management, and outcome-driven delivery. You will design, build, and orchestrate multi-agent systems that plan, reason, use tools, recover from errors, and collaborate with humans when the stakes are high. Equally important, you will own the product roadmap for the solutions you build: translating business objectives into technical requirements, managing backlogs, and ensuring what ships creates measurable value.
What You Will Do
Agentic AI Development
Design, develop, and deploy autonomous and semi-autonomous AI agent systems that interpret goals, gather context, select tools, and execute end-to-end workflows
Build and maintain custom skills, plugins, and tool integrations that extend agent capabilities across enterprise environments
Architect multi-agent orchestration patterns, including agent-to-agent delegation, parallel task execution, and human-in-the-loop escalation paths
Implement agentic memory frameworks, context management strategies, and guardrails that ensure reliable, safe, and auditable agent behavior
Develop and refine system prompts, reasoning chains, and evaluation pipelines to continuously improve agent performance
Full-Stack Engineering
Build robust, scalable web applications and APIs that serve as the interface and backbone for AI-powered solutions
Work across front-end and back-end technologies to deliver complete, production-ready systems
Design and implement data pipelines, integrations, and infrastructure to support agent operations at scale
Write clean, maintainable, well-tested code and conduct thorough code reviews
Product Ownership & Business Alignment
Own the product vision and roadmap for the AI solutions you build, from discovery through delivery and iteration
Translate business objectives and stakeholder needs into prioritized backlogs, user stories, and acceptance criteria
Collaborate directly with clients, executives, and cross-functional teams to define scope, manage expectations, and communicate progress
Measure and report on product outcomes using business-relevant KPIs, not just technical metrics
Make build-vs-buy and technology selection decisions grounded in ROI, time-to-value, and strategic fit
Facilitate sprint planning, demos, and retrospectives, functioning as a player-coach who both builds and leads
Required Qualifications
Experience & Education
5-10 years of software development experience with progressive responsibility
Bachelor's degree in Computer Science, Software Engineering, or a related field (or equivalent practical experience)
2+ years of hands-on experience building AI-powered applications, including agent-based systems
Demonstrated experience functioning as a product owner, product manager, or technical lead with direct business-facing accountability
Agentic AI & LLM Expertise
Proven experience with agentic coding tools and AI-native development environments. Proficiency in one or more of the following is required:
Claude Code: terminal-native agentic development, sub-agents, skills authoring, MCP server integration
OpenAI Codex: autonomous cloud-based coding agents, background task execution, PR workflows
Cursor: AI-native IDE, multi-model routing, Composer multi-file editing, background agents
GitHub Copilot: agent mode, code review automation, workspace integration
Experience building custom skills, tool definitions, and structured prompt architectures for AI agents
Proficiency with LLM orchestration frameworks such as LangChain, LangGraph, CrewAI, AutoGen, or Semantic Kernel
Experience with agentic memory and context management (e.g., Mem0, Letta, custom RAG pipelines)
Strong understanding of prompt engineering, chain-of-thought reasoning, and evaluation frameworks for agent behavior
Programming & Architecture
Strong proficiency in Python and JavaScript/TypeScript; additional languages (C#, Go, Rust) are a plus
Experience with front-end frameworks (React, Next.js, Angular) and back-end frameworks (FastAPI, Express.js, Flask)
Expertise with Git, GitHub workflows, branching strategies, CI/CD pipelines, and infrastructure-as-code
Experience building and consuming REST APIs, GraphQL endpoints, and microservices architectures
Database expertise spanning relational (PostgreSQL), vector (Pinecone, Zilliz, Chroma), and graph (Neo4j) databases
Experience with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes)
Preferred Qualifications
Experience deploying LLM agents in production using serving frameworks such as vLLM, e2B, or Daytona
Familiarity with Model Context Protocol (MCP) servers and building custom tool integrations
Experience with agent evaluation, red-teaming, and safety testing methodologies
Background in AI governance, responsible AI practices, or ISO 42001 / SOC 2 compliance frameworks
Experience with AgentOps practices: monitoring, observability, and telemetry for autonomous systems
Certified Scrum Product Owner (CSPO), SAFe Product Owner, or equivalent agile certification
Prior consulting or professional services experience with client-facing delivery accountability
Mobile application development experience
Experience mentoring junior developers and building team capability
What Sets You Apart
The best candidate for this role doesn't just write code; they think in systems, products, and outcomes. You understand that the highest-value AI work often happens upstream of production: in defining the right problem, designing the right agent architecture, and ensuring the solution actually moves a business metric. You are equally comfortable whiteboarding an agent orchestration pattern with engineers and presenting a product roadmap to a C-suite audience. You have strong opinions, loosely held, about how autonomous AI systems should be built, tested, and governed. You stay current not because you're told to, but because you're genuinely fascinated by the pace of change in agentic AI, and you bring that energy to your team every day.
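The posting describes agents that "interpret goals, gather context, select tools, and execute end-to-end workflows" with "human-in-the-loop escalation paths". Stripped of any particular framework, that loop can be sketched in a few lines of Python. Everything here is a framework-free illustration: `llm_plan` stands in for a model call, and the tool and function names are hypothetical.

```python
def run_agent(goal, tools, llm_plan, max_steps=5, escalate=None):
    """Minimal agent loop: plan -> pick tool -> execute -> recover.

    `llm_plan(goal, history)` is a placeholder for the planner (an LLM
    call in a real system) and must return either ("done", result) or
    (tool_name, kwargs) naming a tool to invoke next.
    """
    history = []
    for _ in range(max_steps):
        action, payload = llm_plan(goal, history)
        if action == "done":
            return payload
        try:
            observation = tools[action](**payload)
        except Exception as err:
            # Recover from tool failure: record it so the planner can retry
            observation = f"error: {err}"
        history.append((action, observation))
    if escalate:
        # Human-in-the-loop fallback once the step budget is exhausted
        return escalate(goal, history)
    raise RuntimeError("step budget exhausted")
```

Real frameworks (LangGraph, CrewAI, AutoGen, etc.) add structured state, memory, and multi-agent delegation on top of essentially this shape, which is why the loop is a useful mental model even though production systems should not be hand-rolled this way.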
04/23/2026
Full time
V2Soft is a global leader in IT services and business solutions, delivering innovative and cost-effective technology solutions worldwide since 1998. We are headquartered in Bloomfield Hills, MI, and have 16 offices spread across six countries. We partner with Fortune 500 companies to address complex business challenges. Our services span AI, IT staffing, cloud computing, engineering, mobility, testing, and more. Certified with CMMI Level 3 and ISO standards, V2Soft is committed to quality and security. Beyond our work, we actively support local communities and non-profits, reflecting our core values. Join us to be part of a dynamic and impactful global company! Please visit us at to know more.
Remote Role, Only W2, No C2C.
Skills Required: Java
Skills Preferred: PostgreSQL, Angular, GCP, Kubernetes
Experience Required:
• 7+ years of hands-on IBM Sterling OMS development/implementation experience in enterprise environments
• Strong expertise in OMS functional and technical areas, particularly around orchestration workflows and core OMS modules: Participant Modeling; Process Modeling (Sales, Returns & Exchanges, Purchase/Drop-Ship, Transfer Orders); Sourcing; Payment Processing; Inventory Management; Agents, event-driven automation, and background processes; SDF, Catalog, Pricing/Repricing, and tax-related recalculations (as applicable)
• Experience working in containerized versions of IBM Sterling OMS
• Experience in customizing IBM Sterling OrderHub and CallCenter modules
• Strong hands-on development skills in Core Java (modern Java versions; advanced concepts preferred)
• Strong experience with RESTful services, JSON, and integration patterns
• Solid working knowledge of SQL and data troubleshooting
• Strong understanding of software engineering fundamentals: OOP, design principles, debugging, performance considerations, and code maintainability
• Demonstrated commitment to code quality and unit testing (e.g., JUnit/Mockito or equivalent testing frameworks)
• Good communication skills and the ability to work effectively with multiple stakeholders across teams
Experience Preferred:
• Experience with Spring Boot or similar frameworks for building supporting services
• Experience with transformations such as XSL (where applicable in OMS customizations)
• Familiarity with containerized development concepts (e.g., Docker) and CI/CD fundamentals
• Exposure to front-end technologies such as React and/or TypeScript (helpful but not the primary focus)
• Familiarity with modern engineering productivity tooling and/or AI-assisted development concepts is a plus
Education Required: Bachelor's Degree
Additional Information: Remote Position
V2Soft is an Equal Opportunity Employer (EOE). We welcome applicants from all backgrounds, including individuals with disabilities and veterans. - to view all of our open opportunities and to learn more about our benefits.
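The "orchestration workflows", "Process Modeling", and "event-driven automation" items above all revolve around one idea: an order moves through a pipeline of statuses, and each legal transition can fire events. As a conceptual Python sketch only (real Sterling OMS pipelines are configured in the platform and customized in Java/XML, not hand-coded like this, and the status names here are invented):

```python
# Hypothetical status graph: which transitions are legal from each status.
VALID_TRANSITIONS = {
    "CREATED":   {"SCHEDULED", "CANCELLED"},
    "SCHEDULED": {"RELEASED", "CANCELLED"},
    "RELEASED":  {"SHIPPED"},
    "SHIPPED":   set(),
    "CANCELLED": set(),
}

def advance(order, new_status, listeners=()):
    """Apply a status transition and fire event listeners, mirroring the
    agent/event-driven style of OMS pipelines. Returns a new order dict
    rather than mutating the input, so history stays auditable."""
    current = order["status"]
    if new_status not in VALID_TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {new_status}")
    updated = {**order, "status": new_status}
    for listen in listeners:
        listen(updated)  # e.g. notify inventory, payment, or analytics
    return updated
```

The same shape, validated transitions plus listeners, underlies pipeline determination and agent triggers in order-management systems generally, which is why interviewers in this space often probe it.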
04/23/2026
Full time