At Dematic, we are revolutionizing our data landscape in support of cutting-edge Artificial Intelligence (AI) use cases. We are forming multiple teams that will spearhead the creation of the platform's foundational components. These teams go beyond traditional data ingestion; they are architects of a microservices-driven platform, providing abstractions that empower other teams to seamlessly extend the platform. We are seeking a dynamic and highly skilled Senior Data Engineer with extensive experience building self-service, enterprise-scale data platforms on a microservices architecture and leading these foundational efforts. This role demands someone who not only possesses a profound understanding of the data engineering landscape but also has experience with software engineering design patterns and microservices frameworks. The ideal candidate will be a 100% hands-on, deep-in-the-code individual contributor who will contribute significantly to platform development and actively shape our data ecosystem.

We offer:
• Career Development
• Competitive Compensation and Benefits
• Pay Transparency
• Global Opportunities

Dematic provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.

The base pay range for this role is estimated to be $133,125 - $204,125 at the time of posting. Final compensation will be determined by various factors such as work location, education, experience, knowledge, and skills.
Tasks and Qualifications:

What You Will Do in This Role:
• As a senior data engineer, you will be responsible for the ideation, architecture, design, and development of our key data platform components.
• Create and maintain essential data platform SDKs and libraries, adhering to industry best practices.
• Design and develop connector frameworks and modern connectors to source data from disparate systems, both on-prem and cloud.
• Design and optimize data storage, processing, and querying performance for large-scale datasets using industry best practices while keeping costs in check.
• Design and develop data quality frameworks and processes to ensure the accuracy and reliability of data.
• Collaborate with data scientists, analysts, and cross-functional teams to design data models, database schemas, and data storage solutions.
• Proactively identify and contribute toward platform resiliency.
• Design and develop observability and data governance frameworks and practices.
• Stay up to date with the latest data engineering trends, technologies, and best practices.
• Drive the deployment and release cycles, ensuring a robust and scalable platform.
• Partner with AI enablement teams across the organization, as well as KION IT Cloud Infrastructure, AI Platform, and security teams, to ensure AI/ML capabilities align with IT frameworks and guidelines within KION.
• Partner with Business Transformation Data Management teams to align on master data.

What We Are Looking For:
• 5+ years of proven experience in modern cloud data engineering and software engineering.
• Proven ability to build end-to-end data platforms and data services (beyond ETL).
• Strong cloud experience, preferably GCP; Azure (ADLS) and Databricks are strong pluses.
• Hands-on experience with Databricks (Delta Lake, Spark, ML/ETL workflows).
• Experience working in multi-cloud environments (GCP / Azure).
• Proficiency with platforms such as BigQuery, Dataflow, Dataform, Cloud Run, dbt, Dataproc, SQL, Python, Airflow, and Pub/Sub, plus equivalent Azure tooling such as ADLS, Azure Functions, and Databricks.
• Experience with microservices architectures (Kubernetes, Docker).
• Deep experience with batch and streaming data infrastructures.
• Strong hands-on experience with metadata management, data catalogs, data lineage, data quality, and data observability frameworks.
• Strong understanding of data modeling, data architecture, and data governance.
• Solid experience with DataOps, CI/CD, and test automation.
• Excellent experience with observability tooling.
• Experience building data platforms supporting AI and machine learning use cases.
• Production-level experience with universal semantic layers.
• Production-level experience implementing third-party or open-source metadata management platforms.
• Production-level experience with data platform resiliency.
• Experience building large-scale data platforms on Azure, including Azure Data Lake and Databricks.
03/03/2026
City/State: Virginia Beach, VA
Work Shift: First (Days)

Overview: Sentara is hiring for a Senior Data Scientist! This position is fully remote.

We are seeking a highly skilled and experienced Data Science ML Operations and Gen AI Engineer (or Senior) to join us and help advance our current and future work applying machine learning, deep learning, and NLP to deliver better healthcare. The Senior Data Scientist will leverage data to improve healthcare outcomes and drive data-driven decision-making. Leveraging expertise in statistical analysis and machine learning, this role will collaborate with cross-functional teams to solve complex healthcare challenges and enhance patient care. This role will directly contribute to advancing medical research, optimizing healthcare processes, and delivering innovative solutions in the healthcare industry.

As a Senior ML Engineer on our team, you will play a crucial role in identifying gaps in our existing ML platform and architecting and building solutions to address those gaps. You will also collaborate with the AI team's ML Scientists and our partner data engineering and software development teams to bring ML and Gen AI models to production and maintain their health and integrity while in production. Your expertise in machine learning and Gen AI, coupled with a strong background in software development, will be instrumental in driving the success of Sentara's AI/ML initiatives.

Qualifications:
• 5+ years building production software/ML systems, including 1+ years of experience with LLMs/GenAI.
• Proficient in Python and one major DL/LLM stack (e.g., PyTorch/Transformers); experience with LangChain/LlamaIndex, vector DBs, and cloud (AWS/Azure/GCP).
• Demonstrated delivery of RAG, prompt engineering, evaluation frameworks, and guardrails in production.
• Strength in APIs, distributed systems, and ML Ops (K8s, CI/CD, monitoring).
• Experience with the Epic health platform is highly preferred.
• Experience with ML platforms and ML Ops: demonstrated experience in assessing and improving ML platforms, identifying gaps, and architecting solutions to address them. Strong familiarity with ML platform components such as data ingestion, preprocessing, feature stores, model training, deployment, and monitoring.
• Experience with SQL and big data platforms such as Postgres, Redshift, and Snowflake.
• Experience with Agile/Scrum methodology and best practices.

Preferred:
• Previous work experience with Generative AI and ML Ops in a healthcare Epic environment
• Understanding of the use and implementation of vector databases
• Kubernetes container orchestration experience

Responsibilities:
• Responsible for the design and development of production-grade Machine Learning Ops and Gen AI solutions.
• Lead hands-on delivery of scalable GenAI solutions from problem framing and prototyping through evaluation and production monitoring.
• Build internal copilots/assistants (knowledge search, code/content generation) and client-facing products (conversational analytics, summarization, recommendations, workflow automation).
• Design RAG pipelines, embedding strategies, vector search, and model orchestration; evaluate fine-tuning vs. prompt engineering.
• Implement guardrails, safety filters, prompt/version management, latency/throughput optimizations, and cost controls.
• ML platform and ML Ops: identify areas that require improvements or additional functionality and use your expertise in machine learning and software engineering to architect and develop solutions that fill gaps in our ML platform and development ecosystem. Analyze system performance, scalability, and reliability to pinpoint opportunities for enhancement. Develop tools and solutions that help the team build, deploy, and monitor AI/ML solutions efficiently.
• System scalability and reliability: optimize the scalability, performance, and reliability of AI team solutions by implementing best practices and leveraging industry-standard technologies. Collaborate with infrastructure teams to ensure smooth integration and deployment of ML solutions. Design scalable and efficient systems that leverage the power of machine learning for enhanced performance and capabilities.
• Data processing and workflow pipelines: streamline data ingestion, preprocessing, feature engineering, and model training workflows to improve efficiency and reduce latency. Work with data engineering and data platform teams to design and implement robust data pipelines that support the AI team's needs.
• Model deployment and monitoring: evaluate and optimize model prototypes for real-world performance. Work with infrastructure and development teams to integrate ML models into production systems. Work closely with partner teams to communicate and understand technical requirements and challenges.
• As part of Sentara's Data Science team, you will be responsible for the implementation and operationalization of AI/ML models. You will work with other machine learning engineers, data scientists, software engineers, and platform engineers to ensure the success of AI/ML implementations. Specific responsibilities will include:
• Apply software engineering rigor and best practices to machine learning, including AI/MLOps, CI/CD, automation, etc.
• Take offline models that data scientists build and turn them into real machine learning production systems.

Education: Bachelor's Degree (Required)
Certification/Licensure: No specific certification or licensure requirements

Experience: Required to have 5+ years of experience as a Data Scientist with a strong focus on Azure and Microsoft Data Science, AI, and machine learning toolsets. Required to have strong problem-solving skills and the ability to tackle complex healthcare challenges using data-driven approaches.
Can help build up the Data Science infrastructure, working with the ML Ops team on model implementation and mentoring and developing junior staff. Required to have strong proficiency in data analysis, data manipulation, and data visualization using Python. Required to have familiarity with healthcare-related datasets, medical terminologies, and electronic health records (EHR) data. Required to have knowledge of statistical techniques, hypothesis testing, and experimental design for healthcare research. Required to have strong machine learning expertise: proficient in machine learning algorithms, statistical modeling, and data analysis, with hands-on experience with standard ML frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn, XGBoost, or Keras). Required to have a solid understanding of data engineering principles, data structures, and algorithms, and proficiency in Python and/or other programming languages commonly used in ML development. Experience with technologies, frameworks, and architectures such as Java or Python, Angular, React, JSON, application servers, and CI/CD is preferred. Required to have experience with one or more AI automation platforms and tools, such as Kubeflow Pipelines, MLflow, Azure Pipelines, AWS SageMaker Pipelines, Airflow, Jenkins, Spark, Hadoop, Kafka, Jira, and Git.

We provide market-competitive compensation packages, inclusive of base pay, incentives, and benefits. The base pay rate for full-time employment is $91,416.00 - $152,380.80. Additional compensation may be available for this role, such as shift differentials, standby/on-call, overtime, premiums, extra shift incentives, or bonus opportunities.
Benefits: Caring For Your Family and Your Career
• Medical, Dental, Vision plans
• Adoption, Fertility and Surrogacy Reimbursement up to $10,000
• Paid Time Off and Sick Leave
• Paid Parental & Family Caregiver Leave
• Emergency Backup Care
• Long-Term, Short-Term Disability, and Critical Illness plans
• Life Insurance
• 401k/403B with Employer Match
• Tuition Assistance - $5,250/year and discounted educational opportunities through Guild Education
• Student Debt Pay Down - $10,000
• Reimbursement for certifications and free access to complete CEUs and professional development
• Pet Insurance
• Legal Resources Plan
• Colleagues have the opportunity to earn an annual discretionary bonus if established system and employee eligibility criteria are met.

Sentara Health is an equal opportunity employer and prides itself on the diversity and inclusiveness of its nearly 30,000-member workforce. Diversity, inclusion, and belonging are guiding principles of the organization to ensure its workforce reflects the communities it serves. In support of our mission "to improve health every day," this is a tobacco-free environment. For positions that are available as remote work, Sentara Health employs associates in the following states: Alabama, Delaware, Florida, Georgia, Idaho, Indiana, Kansas, Louisiana, Maine, Maryland, Minnesota, Nebraska, Nevada, New Hampshire, North Carolina, North Dakota, Ohio, Oklahoma, Pennsylvania, South Carolina, South Dakota, Tennessee, Texas, Utah, Virginia, Washington, West Virginia, Wisconsin, and Wyoming.
03/03/2026
Full time
High-impact, senior-level role building scalable cloud applications for a fast-growing, PE-backed technology company transforming logistics and warehousing.

This Jobot Job is hosted by: Jamie Beene
Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.
Salary: $120,000 - $140,000 per year

A bit about us: Founded over a decade ago and based in Lakewood, Colorado, with teams operating nationwide, we are redefining how businesses access and manage warehouse space through a modern, technology-driven platform. We operate at the intersection of logistics, real estate, and software, building scalable solutions that support rapid growth, operational efficiency, and an exceptional customer experience. Our environment is fast-moving, collaborative, and built for engineers who want real ownership and visibility into the systems they build.

Why join us?
• Competitive Compensation: Senior-level base salary + performance-based incentives
• Equity Exposure: Join a PE-backed, high-growth technology platform
• Comprehensive Benefits: Medical, Dental, Vision, Life Insurance, 401(k) with Match, Generous PTO & Paid Holidays
• Remote-First Culture: Work from anywhere in the US
• High Impact Role: Own complex systems end-to-end with direct influence on product direction
• Modern Tech Stack: .NET, React, Azure, cloud-native architecture
• Collaborative Environment: Partner closely with DevOps, Product, and Business leaders

Job Details

Key Responsibilities and Duties

Application Development & Architecture
• Design, develop, and maintain full-stack applications using .NET (C#) and React
• Build and consume RESTful APIs and event-driven services with performance and security in mind
• Translate business requirements into scalable, maintainable technical solutions
• Lead and contribute to technical design and architectural decisions

Frontend Engineering
• Build modern, responsive UIs using React, TypeScript, and contemporary UI frameworks
• Implement reusable components, state management, and frontend performance optimizations
• Collaborate closely with UX/UI partners to deliver high-quality user experiences

Backend & Data
• Develop backend services using ASP.NET Core and Web APIs
• Design and optimize relational data models (SQL Server and similar)
• Implement authentication, authorization, and role-based access controls

Cloud, DevOps & Quality
• Work within Azure environments including App Services, Functions, CI/CD pipelines, and monitoring tools
• Partner with DevOps to ensure reliable deployments and production readiness
• Write automated unit, integration, and end-to-end tests
• Participate in code reviews and uphold engineering best practices

Collaboration & Leadership
• Serve as a senior technical contributor within an agile team
• Mentor mid-level and junior engineers
• Participate in sprint planning, backlog refinement, and estimation
• Identify opportunities for refactoring, automation, and continuous improvement

Qualifications Needed:
• Bachelor's degree in Computer Science or a related technical field, or equivalent professional experience
• Minimum 7+ years of professional software development experience
• Strong background in .NET / C#, including ASP.NET Core and Web APIs
• Strong experience with React and TypeScript
• Solid understanding of RESTful API design, authentication, and security best practices
• Experience with relational databases and data modeling
• Cloud experience, Azure strongly preferred
• Familiarity with CI/CD pipelines and DevOps workflows
• Proven ability to work independently on complex, production-grade systems

Nice to Have:
• Experience with Azure services (App Services, Functions, Service Bus, Storage)
• Microservices or modular monolithic architectures
• Exposure to low-code/no-code platforms (Power Platform) and integrations
• Performance tuning and automated testing frameworks

Interested in hearing more? Easy Apply now by clicking the "Apply Now" button. Jobot is an Equal Opportunity Employer.
We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws. Jobot also prohibits harassment of applicants or employees based on any of these protected categories. It is Jobot's policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions. Sometimes Jobot is required to perform background checks with your authorization. Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance. Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here:
03/03/2026
Full time
High-impact, senior-level role building scalable cloud applications for a fast-growing, PE-backed technology company transforming logistics and warehousing. This Jobot Job is hosted by: Jamie Beene Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume. Salary: $120,000 - $140,000 per year A bit about us: Founded over a decade ago and based in Lakewood, Colorado, with teams operating nationwide, we are redefining how businesses access and manage warehouse space through a modern, technology-driven platform. We operate at the intersection of logistics, real estate, and software, building scalable solutions that support rapid growth, operational efficiency, and an exceptional customer experience. Our environment is fast-moving, collaborative, and built for engineers who want real ownership and visibility into the systems they build. Why join us? Competitive Compensation: Senior-level base salary + performance-based incentives Equity Exposure: Join a PE-backed, high-growth technology platform Comprehensive Benefits: Medical, Dental, Vision, Life Insurance 401(k) with Match Generous PTO & Paid Holidays Remote-First Culture: Work from anywhere in the US High Impact Role: Own complex systems end-to-end with direct influence on product direction Modern Tech Stack: .NET, React, Azure, cloud-native architecture Collaborative Environment: Partner closely with DevOps, Product, and Business leaders Job Details Key Responsibilities and Duties Application Development & Architecture Design, develop, and maintain full-stack applications using .NET (C#) and React Build and consume RESTful APIs and event-driven services with performance and security in mind Translate business requirements into scalable, maintainable technical solutions Lead and contribute to technical design and architectural decisions Frontend Engineering Build modern, responsive UIs using React, TypeScript, and contemporary UI frameworks Implement reusable components, state 
management, and frontend performance optimizations Collaborate closely with UX/UI partners to deliver high-quality user experiences Backend & Data Develop backend services using ASP.NET Core and Web APIs Design and optimize relational data models (SQL Server and similar) Implement authentication, authorization, and role-based access controls Cloud, DevOps & Quality Work within Azure environments including App Services, Functions, CI/CD pipelines, and monitoring tools Partner with DevOps to ensure reliable deployments and production readiness Write automated unit, integration, and end-to-end tests Participate in code reviews and uphold engineering best practices Collaboration & Leadership Serve as a senior technical contributor within an agile team Mentor mid-level and junior engineers Participate in sprint planning, backlog refinement, and estimation Identify opportunities for refactoring, automation, and continuous improvement Qualifications Needed: Bachelor's degree in Computer Science or related technical field, or equivalent professional experience Minimum 7+ years of professional software development experience Strong background in .NET / C#, including ASP.NET Core and Web APIs Strong experience with React and TypeScript Solid understanding of RESTful API design, authentication, and security best practices Experience with relational databases and data modeling Cloud experience, Azure strongly preferred Familiarity with CI/CD pipelines and DevOps workflows Proven ability to work independently on complex, production-grade systems Nice to Have: Experience with Azure services (App Services, Functions, Service Bus, Storage) Microservices or modular monolithic architectures Exposure to low-code/no-code platforms (Power Platform) and integrations Performance tuning and automated testing frameworks Interested in hearing more? Easy Apply now by clicking the "Apply Now" button. Jobot is an Equal Opportunity Employer. 
We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws. Jobot also prohibits harassment of applicants or employees based on any of these protected categories. It is Jobot's policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions. Sometimes Jobot is required to perform background checks with your authorization. Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance. Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here:
Credit Acceptance is proud to be an award-winning company with local and national workplace recognition in multiple categories! Our world-class culture is shaped by dedicated Team Members who share a drive to succeed as professionals and together as a company. A great product, amazing people and our stable financial history have made us one of the largest used car finance companies nationally. Our Engineering and Analytics Team Members utilize the latest technology to develop, monitor, and maintain complex practices that help optimize our success. Our Team Members value being challenged, are encouraged to express their ideas, and have the flexibility to enjoy work life balance. We build intrinsic value by partnering with all functions of our business to support their success and make strategic business decisions. We focus on professional development and continuous improvement while enjoying a casual work environment and Great Place to Work culture! Outcomes and Activities: This position will work from home; occasional planned travel to an assigned Southfield, Michigan office location may be required. However, this position is permitted to work at a Southfield, Michigan office location if requested by the team member. Design and implement core components of the data platform (e.g., data lake, streaming infrastructure, DaaS, catalog), emphasizing scalability, reliability, and observability. Balance hands-on delivery with architectural foresight, contributing to cross-functional initiatives that strengthen the platform. Partner with data and engineering stakeholders to understand requirements and deliver effective, efficient solutions for data acquisition, transformation, and integration. Write unit and integration tests, validating software against acceptance criteria to ensure platform reliability. Apply and promote team standards for coding, documentation, and testing, ensuring maintainable and high-quality engineering practices. 
Conduct impact analysis to identify dependencies and assess potential risks of changes across applications and services. Develop a strong understanding of platform use cases and business processes to align technical solutions with organizational needs. Experiment with new tools and approaches, validate assumptions, and recommend solutions that improve the platform's capabilities. Participate in design and code reviews, providing constructive feedback and communicating changes effectively. Document platform components and designs, ensuring projects are maintainable and understandable by others. Troubleshoot and resolve production issues, proposing effective solutions to restore platform stability. Contribute to sprint commitments and actively engage in Agile practices, including retrospectives and process improvements. Engage in continuous learning, deepening knowledge of modern data platform technologies, distributed systems, and engineering best practices. Competencies: The following items detail how you will be successful in this role. Customer Empathy: Customer Empathy is the ability to understand the perspectives, pain points, and experiences of customers. It involves actively putting oneself in the customer's shoes, comprehending their needs and challenges, and using that understanding to provide a better, more customer-centric experience. Engineering Excellence: Engineering Excellence is about bringing great craftsmanship and thought leadership to deliver an outstanding product that delights customers and solves for the business. This involves the pursuit and achievement of high standards, best practices, innovation, and superior solutions. One Team: A One Team mindset refers to a collaborative approach across the organization, where individuals work together seamlessly, without boundaries, as a single, cohesive team. Shared goals, open communication and mutual support create a sense of collective purpose. 
This enables teams to navigate challenges and pursue shared objectives more effectively. Owner's Mindset: Owner's Mindset involves adopting a set of behaviors that reflect a sense of responsibility, accountability, strategic thinking, and a proactive approach to managing your domain. As an owner, you understand the business and your domain(s) deeply and solve for the right outcome for the domain(s) and the business. Requirements: Bachelor's degree in Computer Science, Information Systems, or a closely related field; or equivalent work experience Minimum 5 years of software engineering experience, with recent hands-on experience building and maintaining data platforms or distributed systems in cloud environments Strong knowledge of software engineering best practices, with practical experience building and operating data platforms, products, or solutions Experience building and operating applications on cloud platforms (e.g., AWS, Azure, or GCP), including deploying and supporting containerized services (Docker, Kubernetes, ECS/EKS) Familiarity with lakehouse principles (Delta Lake, Iceberg, or Hudi) and best practices for schema evolution, versioning, and performance optimization Experience with observability practices (metrics, logs, tracing, alerting) and tools (e.g., Dynatrace, Splunk, CloudWatch) to ensure platform reliability Knowledge of data storage technologies relevant to data platforms, including object stores (S3, ADLS, GCS), relational databases, and NoSQL systems Awareness of data governance and security practices (e.g., access controls, encryption, compliance considerations), with the ability to design platform components that align with organizational standards Solid understanding of distributed systems concepts (scalability, reliability, consistency, partitioning) and their application to data platforms Experience working with enterprise-class applications where uptime, reliability, and scalability are essential Strong programming skills in one or 
more languages commonly used for platform engineering (e.g., Python, Java, Scala, Go) Demonstrated ability to mentor and coach less experienced engineers, contributing to team growth and technical maturity Familiarity with Agile delivery practices and other software development lifecycle methodologies Preferred: Hands-on experience with lakehouse technologies (Delta, Iceberg, Hudi), beyond conceptual familiarity Exposure to workflow orchestration frameworks (Airflow, Dagster, Prefect, Databricks Workflows) Experience with CI/CD pipelines for automated testing and deployment Exposure to observability tooling (Datadog, Prometheus, Grafana, ELK, Dynatrace, Splunk) beyond basics Experience debugging performance issues and optimizing systems for cost and scale Financial services or FinTech industry experience Knowledge and Skills: Designs and implements platform components with a focus on scalability, reliability, and maintainability, following established standards Collaborates with team members and partners to deliver high-quality solutions Explores new tools or practices under guidance and contributes ideas to improve the platform Participates in code reviews, shares knowledge with peers, and supports team-level improvements Applies knowledge of cloud, data, and platform technologies to build effective solutions Understands how technical work supports business outcomes and aligns with platform goals Communicates clearly in technical discussions, design reviews, and documentation Works independently on well-defined tasks and projects while seeking guidance on complex or ambiguous problems Target Compensation: A competitive base salary range from $130,047 - $190,735. This position is eligible for an annual variable cash bonus, between 7.5 - 15%. Bonus amounts are based on individual performance. 
Final compensation within the range is influenced by many factors including role-specific skills, depth and experience level, industry background, relevant education and certifications. Candidates who reside in the following major metropolitan areas may be eligible for a premium on top of the posted range based on their specific zone: San Francisco, Seattle, Boston, New York City, Los Angeles and San Diego. Benefits Excellent benefits package that includes 401(k) match, adoption assistance, parental leave, tuition reimbursement, comprehensive medical/dental/vision and many nonstandard benefits that make us a Great Place to Work Our Company Values: To be successful in this role, Team Members need to be: Positive by maintaining resiliency and focusing on solutions Respectful by collaborating and actively listening Insightful by cultivating innovation, accumulating business and role-specific knowledge, demonstrating self-awareness and making quality decisions Direct by effectively communicating and conveying courage Earnest by taking accountability, applying feedback and effectively planning and setting priorities Expectations: Remain compliant with our policies, processes and legal guidelines All other duties as assigned Attendance as required by department Advice! We understand that your career search may look different than others'. Our hiring team wants to make sure that this would be a fit not just for us, but for you long term. If you are actively looking or starting to explore new opportunities, send us your application! P.S. We have great details around our stats, success, history and more. We're proud of our culture and are happy to share why - let's talk! Required degrees must have been earned at institutions of Higher Education which are accredited by the Council for Higher Education Accreditation or equivalent. Credit Acceptance is dedicated to providing a safe and inclusive working environment for all.
As part of our Culture of Compliance, we are proud to be an Equal Opportunity Employer and value our culturally diverse workforce.
03/03/2026
Full time
Position: DevOps Engineer II Position Duties: Design & Implementation (70%): Design and implementation of the company's internal build and release management system, mainly aimed at putting new processes in place to improve the current architecture of the continuous integration and continuous delivery systems in order to keep up with new requirements of the SaaS transition. Security (20%): Automation of the security penetration testing process, as well as continuously working with development team senior leadership to patch security vulnerabilities as soon as they are detected, ensuring COGNIRA's SaaS product and infrastructure meet the required security standards. Technical Support (5%): Technical support of the production systems to provide coverage during U.S. working hours; assisting the development teams located at the Atlanta office with DevOps-related matters and requests. Non-technical (5%): Hiring & team growth; assist with technical knowledge in interviews of DevOps candidates for both full-time positions and internships. Education/Experience Requirements: Bachelor's degree in Computer Engineering or a related field and 24 months of experience as a DevOps Engineer or related occupation. Special Requirements: 2 years of experience working with (1) Kubernetes, Prometheus & Grafana, NGINX, Elastic Stack (OpenSearch), SonarCloud, OWASP ZAP, WIZ, Vanta, and Cassandra (deep understanding of managing distributed NoSQL databases), and (2) Terraform, ArgoCD, PostgreSQL, and Azure Databricks (exposure to cloud-based analytics platforms and data pipeline integration). Employer will accept experience gained while the candidate was completing a bachelor's degree, provided the experience was not part of the academic program, not required for course credit, and was obtained through bona fide employment. Location of Employment: 1349 W Peachtree St NW, Suite 1750, Atlanta, GA 30309 Email resumes and job history to
03/03/2026
SENIOR SOLUTIONS ARCHITECT WHAT IS THE OPPORTUNITY? The Senior Solutions Architect is responsible for designing, developing and maintaining on-premises and cloud-native applications. Participates in improving our SDLC, Engineering, and DevOps practices. Develops both front-end and back-end solutions. Designs, develops and maintains solutions, and provides technical guidance to other Team Members. Participates in creating SDLC, Architecture, Design and Coding standards. Identifies complex business problems and participates in generating solutions. Identifies performance issues, analyzes root causes and participates in performance improvement activities. Participates in remedying compliance issues. Documents and communicates problems, designs and solutions. WHAT WILL YOU DO? Lead the design and development of cross-functional, multi-platform application systems Write high-quality code, driving toward automated testing and validation Perform performance analysis; plan and execute activities for performance tuning, monitoring, deployment and production support Guide and support the implementation, maintenance and updates of CI/CD pipelines in a cloud environment Collaborate with business partners, architects and other groups to identify complex technical and functional needs of systems Collaborate with multiple, enterprise-wide distributed performing teams to deliver new capabilities in business applications Design and develop APIs for omni-channel clients Look for opportunities to simplify code, existing or new architectures and vendor dependencies Recommend rationalization opportunities throughout the portfolio of applications and systems Provide technical guidance to team members and solution architects Build APIs and UIs that handle large volumes of transactions and large data sets Own the full lifecycle for software development, from ideation to production Provide programming expertise and business analysis skills within broad business areas; as a 
senior member of a project team, participate in analyzing, designing, modifying, and developing business applications Participate in solution designs to meet technical specifications - guide team members and solution architects Create and recommend changes in development standards including design, coding and testing standards Analyze and develop data models, logical database designs and data definitions across multiple computing environments (e.g., host-based, distributed systems, client-server, etc.) Comply with architectural standards and established methodologies and practices WHAT DO YOU NEED TO SUCCEED? Required Qualifications Bachelor's Degree or equivalent Minimum of 10+ years of software development experience, including UI as well as middle tier and backend Minimum of 6+ years of solution architecture Additional Qualifications 6+ years of Cloud Solution Architecture experience (Azure a plus) - primary focus will be Microsoft technologies and .NET; however, experience with Java and/or AWS is a plus 6+ years of experience with Microservices Architecture 6+ years of experience designing REST APIs 6+ years of experience working in a fully DevOps-enabled environment 6+ years of experience with databases and data modeling/design (SQL; NoSQL a plus) 4+ years of experience designing in the context of workflows and rules Strong communication skills are needed - in addition to designing solutions, this role will be the primary driver of getting our technology and designs approved for use. 
This will require close partnership with Enterprise Architecture, InfoSec and other teams, and strong communication, presentation and collaboration skills
Experience designing with containers and a container orchestration platform (AKS) a plus
4+ years of experience designing asynchronous/event-driven as well as synchronous systems
4+ years of experience incorporating security into the application architecture
Experience with developing architecture, coding standards and patterns
Large application design and implementation experience, including architecture and design of modern web, mobile, and integration (cloud/on-premises) platforms
Excellent verbal and written communication, interpersonal, and analytical skills are required
Experience with Agile development methodology, including Scrum, XP, FDD, TDD, and SAFe
Extensive experience with API management toolsets, DevOps, server infrastructure, network infrastructure, caching methodologies, information security, and database technologies
Proven track record of generating alternative solution approaches and driving a pragmatic trade-off solution
Proven track record of optimizing development activities with a strong focus on DevOps and automation
Significant experience documenting solution designs using a variety of approaches (e.g., UML)
Solid experience with DDD, TDD and BDD; ability to direct and guide the team through these approaches
Proven ability to learn new technologies and evaluate them for fitness in a specific business context through POCs and other evaluations
Deep experience reviewing code and suggesting refactoring for performance, quality, maintainability and other attributes
Experience designing, scaling, and securing APIs
Ability to integrate with internal and external systems and secure those integrations
Experience integrating systems and applications with CRM systems like Salesforce
Ability to operate and guide the team in all areas of the technology stack: front end, middle tier and backend
Ability to quickly learn new technologies and evaluate them for fit in a specific business context through POCs and other evaluations; ability to orchestrate POCs and evaluations
Ability to review code, suggest refactoring for performance, quality and maintainability, and guide the team through related activities
Experience with the following technologies: C#/.NET, Java, Python, JavaScript, TypeScript, Angular/React, CSS, HTML, SQL, AKS, Azure DevOps, NoSQL, Azure, AWS, Serverless, OAuth, SAML, APIM
WHAT'S IN IT FOR YOU?
Compensation
Starting base salary: $122,535 - $208,715 per year. Exact compensation may vary based on skills, experience, and location. This job is eligible for bonus and/or commissions.
Benefits and Perks
At City National, we strive to be the best at whatever we do, including the benefits and perks we offer our colleagues:
Comprehensive healthcare coverage, including Medical, Dental and Vision plans, available the first of the month following start date
Generous 401(k) company matching contribution
Career development through tuition reimbursement and other internal upskilling and training resources
Valued time-away benefits, including vacation, sick and volunteer time
Specialized health and family planning benefits, including fertility benefits and cancer, diabetes and musculoskeletal support programs
Career mobility support from a dedicated recruitment team
Colleague Resource Groups to support networking and community engagement
Get a more detailed look at our Benefits and Perks.
ABOUT US
Since day one we've always gone further than the competition to help our clients, colleagues and communities flourish.
City National Bank was founded in 1954 by entrepreneurs for entrepreneurs, and that legacy of integrity, community and unparalleled client relationships continues today. City National is a subsidiary of Royal Bank of Canada, one of North America's leading diversified financial services companies. To learn more about City National and our dynamic company culture, visit us at About Us.
INCLUSION AND EQUAL OPPORTUNITY EMPLOYMENT
City National Bank fosters an inclusive environment where all forms of diversity are valued and leveraged to make us a better company and employer. We are an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sexual orientation, gender identity, national origin, disability, veteran status or other basis protected by law.
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
This posting represents basic qualifications for the position. To be considered for this position, you must at least meet the required qualifications. City National Bank accepts applications on an ongoing basis, until the position is filled. Unless otherwise indicated as fully remote, reporting into a designated City National location is an essential function of the job.
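This posting repeatedly stresses designing, scaling and securing REST APIs. As a concrete illustration only (the endpoint, resource shape, and bearer-token check are all invented for the sketch, not this employer's stack), here is a minimal token-guarded JSON endpoint using nothing but the Python standard library:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

API_TOKEN = "demo-token"  # illustrative only; a real service would validate OAuth 2.0 / JWT tokens

class AccountHandler(BaseHTTPRequestHandler):
    """Minimal JSON API: one resource, one auth check."""

    def do_GET(self):
        # Reject any request that lacks the expected bearer token.
        if self.headers.get("Authorization") != f"Bearer {API_TOKEN}":
            self.send_response(401)
            self.end_headers()
            return
        if self.path == "/accounts":
            body = json.dumps({"accounts": [{"id": 1, "status": "open"}]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

# To serve: HTTPServer(("127.0.0.1", 8000), AccountHandler).serve_forever()
```

In a production design the auth check, routing, and serialization would live in a framework (ASP.NET, Spring, etc.); the sketch only shows where each concern sits.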
03/03/2026
Full time
Newcastle Associates, Inc.
San Francisco, California
Architect/Sr. Software Engineer
We are seeking an experienced software professional to join a forward-looking engineering team at a financial services company. This position involves providing architectural and technical leadership in building the applications and services used to facilitate customer interaction with their mortgage service. The role sits with both the software engineering and data services teams. Fully remote position on a distributed team.
Architecture & Technical Leadership
Design and architect scalable, high-performance web applications
Define technical standards, best practices, and development workflows
Lead architectural decisions across front-end, back-end, and data layers
Guide the team in adopting modern development patterns and tools
Full Stack Development
Develop rich, responsive user interfaces using React + Redux and/or Angular
Build scalable backend services using Node.js and Express
Implement real-time functionality using Socket.IO
Design and maintain RESTful APIs and web services
Develop dynamic client-side functionality using JavaScript, jQuery, HTML, CSS, and AJAX
Data & Integration
Design and manage data models using MongoDB, SQL, and Mongoose
Optimize data queries and ensure data integrity and performance
Integrate third-party systems and services through RESTful APIs
Engineering Excellence
Practice and promote Test-Driven Development (TDD)
Write automated tests using Mocha/Chai, Enzyme, and Protractor
Maintain and enhance CI/CD pipelines and build processes
Utilize modern build tools including npm, bower, grunt, gulp, and webpack
Manage version control using Git and structured Git workflows
Required Qualifications
10-15+ years of professional software development experience
Proven experience as a Senior Engineer or Technical Architect
Strong expertise in: JavaScript (ES6+), React + Redux and/or Angular, Node.js/Express, MongoDB and SQL databases
Deep understanding of RESTful services and API design
Strong knowledge of TDD and automated testing frameworks
Experience working in Agile development environments
Excellent written and verbal communication skills
Preferred Qualifications
Experience leading technical teams or projects
Experience designing microservices-based architectures
Knowledge of performance optimization and scalability strategies
Cloud platform experience (AWS, Azure, or GCP)
What We're Looking For
A hands-on technical leader who enjoys solving complex problems
Someone who values clean, maintainable code and engineering rigor
A collaborative team player who thrives in pair programming environments
A strong communicator who can bridge business and technology
03/01/2026
Senior Data Engineer Needed - $225K-$300K - Supply Chain Software Pioneer
This Jobot Job is hosted by: Steven Zacharias
Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.
Salary: $225,000 - $300,000 per year
A bit about us: We are a growing software development firm that's looking for a seasoned Data Engineer! If interested, please apply or email me your resume directly.
Why join us?
$225,000-$300,000 base salary
Health/Dental/Vision
401k w/ employer match
PTO
Job Details
Scope of Responsibilities:
Data ingestion pipelines for existing and new data sources (both batch and real-time streaming)
Experience with all forms of multi-modal data types
Monitoring frameworks to protect data quality and consistency
Data modeling design and data architecture skills to support reporting and analytics requirements
Experience in the latest generation of data lake architectures, stream and batch processing, and managed cloud services to support scale
Qualifications
Degree in Computer Science, Mathematics, Statistics, or other data-intensive discipline with substantive engineering experience
5+ years demonstrated development experience using SQL, Scala, Spark, Flink, Beam, and/or Python
5+ years demonstrated experience in data management (structured and unstructured) and modern database technologies
Demonstrated experience developing data pipelines to support machine learning, LLMs or other analytical solutions
Experience working with multi-cloud providers such as AWS and Azure
Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.
Jobot is an Equal Opportunity Employer.
We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws. Jobot also prohibits harassment of applicants or employees based on any of these protected categories. It is Jobot's policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions. Sometimes Jobot is required to perform background checks with your authorization. Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance. Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here:
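The data engineer posting above pairs ingestion pipelines with monitoring frameworks that protect data quality. As a rough, library-free illustration (the field names and validation rules are invented, not this employer's actual framework), a batch ingestion step with a quality gate might look like:

```python
import csv
import io

def ingest_batch(raw_csv, required=("id", "amount")):
    """Parse a CSV batch and split rows into accepted and rejected.

    A row is rejected when any required field is missing/empty or when
    'amount' is not numeric -- a stand-in for a real quality framework,
    which would also track metrics and alert on rejection rates.
    """
    accepted, rejected = [], []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        if any(not row.get(field) for field in required):
            rejected.append(row)
            continue
        try:
            row["amount"] = float(row["amount"])
        except ValueError:
            rejected.append(row)
            continue
        accepted.append(row)
    return accepted, rejected

batch = "id,amount\n1,10.5\n2,\n3,abc\n4,7\n"
good, bad = ingest_batch(batch)
# 2 accepted (ids 1 and 4); 2 rejected (missing amount, non-numeric amount)
```

At scale the same split-and-quarantine pattern is what engines like Spark or Flink apply per partition; the gate logic is the part the posting's "monitoring frameworks" would own.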
03/01/2026
Full time
A financial firm is looking for a Senior Analytics Engineer to join their team in Jersey City, NJ.
Compensation: $150-170K
US Citizens/GC Holders only; no visa sponsorship
5 days onsite; candidates must be local
Domain experience preferred; finance experience required
Responsibilities:
Design and develop next-generation equity and credit analytics platforms
Work closely with business partners and analysts to understand data requirements and analyze and design the necessary data pipeline and database design
Design, implement and maintain data pipelines that can efficiently and reliably ingest and store data from a variety of internal and external sources
Develop robust quality control processing, monitoring, and workflow dashboards
Handle backend database work; well-versed in Python or Java, with the expertise to process and load data seamlessly into databases
Integrate risk and quantitative models
Simplify and automate existing manual data processes
Provide support for overnight batch jobs
Participate in strategic discussions
Work closely with a team of frontend and backend engineers, product managers, and analysts
Qualifications:
Bachelor's or master's degree in Computer Science, Engineering, Physics, Math, or related work experience
10+ years of expertise in application design, coding, testing, maintenance, and debugging
Experience creating and maintaining conceptual, logical, and physical data models
Experience in building data pipelines, designing data models, and architecting data systems from the ground up
Skilled in developing Python APIs and writing code for loading and processing data
Experience writing complex SQL queries, stored procedures, and functions, and in query optimization and performance tuning
Strong proficiency in Java, REST, microservices, Spring Boot, and API gateways
Experience working with various cloud technologies, including AWS, Azure, GCP, Snowflake, Spark, and their associated tools
Ability to identify opportunities to reuse data and reduce redundancy across the enterprise
Experience with Git/GitHub
Experience with DevOps tools like Jira, Confluence, and CI/CD pipelines (Jenkins)
Experience with messaging technologies such as Kafka, as well as queuing technologies and other related tools
Must be willing to take full ownership of projects, covering discovery, analysis, technical design and implementation, testing, and deployment tasks
Strong communication skills; comfortable working closely with senior quantitative analysts, risk analysts and business partners
A strong desire to document and share work done to aid in long-term support
Experience working in the finance industry
Experience with market data vendors: Bloomberg, Markit, ICE/Client, S&P, Moody's, Fitch, Russell, Intex, JPM, FactSet, State Street, CRD, and Yieldbook
Experience working on distributed systems and handling and processing large-scale data (trades, risk, market data, etc.)
Knowledge of fixed income analytics for asset types such as corporate bonds, Treasuries, derivatives, sovereigns, bank loans, MBS, ABS, and CLOs
Proficiency in managing foundational data, including security master, entity master, and account master
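Several of the analytics engineer qualifications above (Python APIs, loading and processing data, complex SQL with performance tuning) boil down to an ingest-store-query loop. A toy sketch of that loop using Python's built-in sqlite3; the trades table, tickers, and aggregation are illustrative, not the firm's schema:

```python
import sqlite3

def load_and_aggregate(trades):
    """Load trade rows into SQLite and aggregate notional per ticker.

    `trades` is a list of (ticker, qty, price) tuples. A production
    pipeline would target Snowflake or another warehouse, but the
    load-then-query shape is the same.
    """
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE trades (ticker TEXT, qty REAL, price REAL)")
    con.executemany("INSERT INTO trades VALUES (?, ?, ?)", trades)
    rows = con.execute(
        """SELECT ticker, SUM(qty * price) AS notional
           FROM trades GROUP BY ticker ORDER BY notional DESC"""
    ).fetchall()
    con.close()
    return rows

trades = [("AAPL", 100, 190.0), ("MSFT", 50, 400.0), ("AAPL", 10, 200.0)]
# AAPL notional: 100*190 + 10*200 = 21000; MSFT: 50*400 = 20000
```

Pushing the aggregation into SQL rather than Python is also where the posting's "query optimization and performance tuning" requirement bites: the database can use indexes and avoid shipping raw rows to the client.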
03/01/2026
Full time
Job Description
Position Title: Senior Backup Storage Engineer
Duration: 12+ months contract with possible conversion and extension
Location: Downey, CA, 90242 (Hybrid)
Position Description
A Senior Backup Storage Engineer is responsible for leading and/or working on the most complex IT infrastructure modification, installation, testing, implementation, and support of new or existing system hardware and software products. The Senior Backup Storage Engineer will plan, install, configure, test, implement, and manage core system hardware and software products in support of an organization's IT architecture and business needs. Special organizational or functional industry position titles for backup storage engineers include, but are not limited to, Backup/Recovery Engineer, Storage Engineer, Backup Engineer, Backup Administrator, and Storage Administrator. In maintaining and supporting various backup and storage systems (including CommVault, Dell/EMC, IBM, NetApp, and Pure Storage), the Senior Backup Storage Engineer is responsible for the administration, installation, configuration, and maintenance of all backup and storage systems, including creating CommVault backup jobs for Linux, UNIX, and Windows hosts, Network Attached Storage (NAS) NDMP, and databases (DB2, SQL, Oracle, PostgreSQL); masking/zoning/provisioning of storage to UNIX/Windows/VMware servers; performing data recoveries; and general troubleshooting. The Sr. Backup Storage Engineer will be responsible for all data backup/recovery support for cloud service providers (Amazon Web Services - AWS, Microsoft Azure, Google Cloud Platform - GCP, and IBM Cloud), planning and upgrading the systems on a regular basis, checking the health of all the systems, and making necessary changes as required.
They will analyze issues reported by internal/external teams and make necessary recommendations; plan and design systems architecture; work with customers to test applications after changes; upgrade hardware in a timely manner with a minimum downtime window; evaluate new application software technologies; and ensure security products are patched on a regular basis.
Skills Required
The Sr. Backup Storage Engineer will possess in-depth understanding, knowledge, and experience of CommVault Complete Data Protection and VMware data protection and recovery using VADP (VMware vStorage API for Data Protection); be proficient in configuring storage arrays with business continuity solutions; perform upgrades; execute storage migration and data center consolidation projects; and administer Pure Storage solutions, including both hardware and software components. The Sr. Backup Storage Engineer will have a comprehensive understanding of IBM storage systems, including configuration and management; be skilled in managing Dell storage systems, with capabilities in setup, troubleshooting, and optimization; be proficient with NetApp storage solutions such as ONTAP, SnapMirror, and SnapVault; use scripting languages such as PowerShell and Python for automating storage management tasks and workflows; be proficient with UNIX systems (including IBM AIX, Linux, and HP-UX) and TCP/IP, along with administering networking hardware; and be able to design, deploy, and manage scalable and reliable cloud solutions across various platforms, including AWS, Azure, GCP, and IBM Cloud. They will possess strong oral and written communication skills, effective time management, attention to detail, and the ability to translate technical information for non-technical audiences.
Experience Required
This classification requires, within the last five (5) years, a minimum of three (3) years of experience as a backup administrator of a data backup system similar to CommVault, including configuration, implementation, and troubleshooting in enterprise environments; three (3) years of experience working with storage solutions from Dell/EMC, IBM, NetApp, and Pure Storage, covering both midrange and enterprise-class storage arrays; and two (2) years of experience using, configuring, implementing, and troubleshooting cloud-based backup, servers, and storage systems in enterprise settings.
Education Required
This classification requires the possession of a bachelor's degree in an IT-related or Engineering field. Additional qualifying experience may be substituted for the required education on a year-for-year basis.
03/01/2026
Full time
Senior Software Engineer Needed - $120K-$180K - .NET Shop - 100% REMOTE
This Jobot Job is hosted by: Steven Zacharias
Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.
Salary: $120,000 - $180,000 per year
A bit about us: We are a growing .NET shop that's actively looking for a Senior Software Engineer to work 100% remote! If interested, please apply or email me your resume directly.
Why join us?
- $120,000-$180,000 Base Salary
- Health / Dental / Vision
- 401k
- PTO
- 100% REMOTE
Job Details
Qualifications:
- Proficient with .NET Core, ASP.NET, MVC, Web API, C# (or PHP, MySQL, Laravel, Ruby on Rails, PostgreSQL, Ember)
- Proficient with JavaScript
- Understanding of SOLID design principles
- Experience with unit tests and testable code
- Proficient with source code control tools and techniques
- Professional experience developing highly scalable APIs and integrations
- Solid understanding of web application architecture and operations
- Experience with React JS (preferred) or another front-end development ecosystem
- Experience with SQL, document databases, or other data persistence tools
- Familiarity with design patterns
- Familiarity with Azure or other cloud platforms
Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.
Jobot is an Equal Opportunity Employer. We provide an inclusive work environment that celebrates diversity, and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information, or any other basis protected by applicable federal, state, or local laws. Jobot also prohibits harassment of applicants or employees based on any of these protected categories. It is Jobot's policy to comply with all applicable federal, state, and local laws respecting consideration of unemployment status in making hiring decisions.
Sometimes Jobot is required to perform background checks with your authorization. Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance. Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here:
03/01/2026
Full time
Credit Acceptance is proud to be an award-winning company with local and national workplace recognition in multiple categories! Our world-class culture is shaped by dedicated Team Members who share a drive to succeed as professionals and together as a company. A great product, amazing people, and our stable financial history have made us one of the largest used car finance companies nationally. Our Engineering and Analytics Team Members utilize the latest technology to develop, monitor, and maintain complex practices that help optimize our success. Our Team Members value being challenged, are encouraged to express their ideas, and have the flexibility to enjoy work-life balance. We build intrinsic value by partnering with all functions of our business to support their success and make strategic business decisions. We focus on professional development and continuous improvement while enjoying a casual work environment and Great Place to Work culture!
Outcomes and Activities:
This position will work from home; occasional planned travel to an assigned Southfield, Michigan office location may be required. However, this position is permitted to work at a Southfield, Michigan office location if requested by the team member.
- Own architecture and implementation of key components of the modern data platform (e.g., data lake, streaming infrastructure, DaaS, DAL, data catalog), ensuring production reliability and technical soundness.
- Drive technical innovation by contributing to system design, implementation, and operational excellence in high-impact areas of the platform.
- Model strong engineering practices through hands-on work and code contributions, demonstrating how engineers should approach problems and uphold quality.
- Collaborate with peers across data and engineering teams to influence technology and architecture decisions, providing well-reasoned perspectives.
- Advocate for adoption of new technologies and demonstrate their value through prototypes, proofs of concept, and integration into team workflows.
- Align project execution with broader strategies by working with senior engineers and engineering leadership to support the company's technical and business direction.
- Conduct impact analysis to proactively identify the effects of a change across services and systems.
- Evaluate third-party technologies and solutions through technical assessments and provide recommendations that balance technical fit with business needs.
- Experiment and validate ideas by testing assumptions, analyzing results, and recommending practical solutions to improve platform capabilities.
- Contribute to documentation of standards and best practices, making platform engineering approaches clear and maintainable for other teams.
- Debug and resolve complex production issues, applying technical expertise to restore stability across services and systems.
- Participate in continuous learning and improvement efforts, helping refine processes, design practices, and team workflows for better engineering outcomes.
- Grow talent by participating in hiring and mentoring team members.
Competencies: The following items detail how you will be successful in this role.
Customer Empathy: Customer Empathy is the ability to understand the perspectives, pain points, and experiences of customers. It involves actively putting oneself in the customer's shoes, comprehending their needs and challenges, and using that understanding to provide a better, more customer-centric experience.
Engineering Excellence: Engineering Excellence is about bringing great craftsmanship and thought leadership to deliver an outstanding product that delights customers and solves for the business. This involves the pursuit and achievement of high standards, best practices, innovation, and superior solutions.
One Team: A One Team mindset refers to a collaborative approach across the organization, where individuals work together seamlessly, without boundaries, as a single, cohesive team. Shared goals, open communication, and mutual support create a sense of collective purpose. This enables teams to navigate challenges and pursue shared objectives more effectively.
Owner's Mindset: Owner's Mindset involves adopting a set of behaviors that reflect a sense of responsibility, accountability, strategic thinking, and a proactive approach to managing your domain. As an owner, you understand the business and your domain(s) deeply and solve for the right outcome for the domain(s) and the business.
Requirements:
- Bachelor's degree in Computer Science, Information Systems, or a closely related field; or equivalent work experience
- Minimum 5 years of software engineering experience, with recent hands-on experience building and maintaining data platforms or distributed systems in cloud environments
- Strong knowledge of software engineering best practices, with practical experience building and operating data platforms, products, or solutions
- Experience developing and supporting cloud-native applications (AWS, Azure, or GCP), including containerized services (Docker, Kubernetes, ECS/EKS)
- Working knowledge of lakehouse technologies (Delta Lake, Iceberg, Hudi) with hands-on experience in schema evolution and optimization
- Strong understanding of observability practices (metrics, logging, tracing, alerting) and experience applying them with tools such as Dynatrace, Splunk, or CloudWatch to ensure platform reliability and performance
- Applied experience with data storage and processing technologies, including object stores (S3, ADLS, GCS), relational databases, and NoSQL systems
- Awareness of data governance and security practices (e.g., access controls, encryption, compliance considerations), with the ability to design platform components that align with organizational standards
- Strong knowledge of distributed systems concepts (scalability, reliability, consistency, partitioning) and their application to large-scale data platforms
- Experience working with enterprise-class applications where uptime, reliability, and scalability are essential
- Strong programming skills in one or more languages commonly used for platform engineering (e.g., Python, Java, Scala, Go)
- Demonstrated ability to mentor and coach less experienced engineers, contributing to team growth and technical maturity
- Familiarity with Agile delivery practices and other software development lifecycle methodologies
Preferred:
- Advanced expertise in lakehouse technologies (Delta, Iceberg, Hudi), including performance tuning and reliability at scale
- Hands-on experience with workflow orchestration frameworks (Airflow, Dagster, Prefect, Databricks Workflows)
- Strong background in CI/CD pipelines for platform services
- Deep familiarity with observability and SRE practices (SLAs/SLOs/SLIs, distributed tracing, advanced monitoring tools)
- Experience with performance tuning and cost optimization for large-scale data platforms
- Financial services or FinTech industry experience
Knowledge and Skills:
- Designs and implements major components of the data platform that are scalable, reliable, and aligned with platform strategy
- Provides technical direction for a team or project, mentors less experienced engineers, and helps raise engineering standards
- Identifies gaps in current practices and proposes improvements that strengthen platform quality and delivery
- Collaborates with peers and cross-functional teams, encouraging diverse perspectives to inform decisions
- Applies strong knowledge of distributed systems, cloud-native services, and data storage technologies to deliver impactful solutions
- Connects platform initiatives to business value, making tradeoffs and outcomes visible
- Communicates technical decisions effectively to engineers and stakeholders, both verbally and in writing
- Operates with autonomy on complex projects, anticipating risks and dependencies while contributing to team-level planning and execution
Target Compensation: A competitive base salary range from $154,837 - $227,09. This position is eligible for an annual variable bonus of cash and equity of 10-20%. Bonus amounts are based on individual performance. Final compensation within the range is influenced by many factors, including role-specific skills, depth and experience level, industry background, and relevant education and certifications. Candidates who reside in the following major metropolitan areas may be eligible for a premium on top of the posted range based on their specific zone: San Francisco, Seattle, Boston, New York City, Los Angeles, and San Diego. INDENGLP
Benefits: Excellent benefits package that includes 401(K) match, adoption assistance, parental leave, tuition reimbursement, comprehensive medical/dental/vision, and many nonstandard benefits that make us a Great Place to Work.
Our Company Values: To be successful in this role, Team Members need to be:
- Positive by maintaining resiliency and focusing on solutions
- Respectful by collaborating and actively listening
- Insightful by cultivating innovation, accumulating business and role-specific knowledge, demonstrating self-awareness, and making quality decisions
- Direct by effectively communicating and conveying courage
- Earnest by taking accountability, applying feedback, and effectively planning and priority setting
Expectations:
- Remain compliant with our policies, processes, and legal guidelines
- All other duties as assigned
- Attendance as required by department
Advice!
We understand that your career search may look different than others'. Our hiring team wants to make sure that this would be a fit not just for us, but for you long term. If you are actively looking or starting to explore new opportunities, send us your application! P.S. We have great details around our stats, success, history, and more. We're proud of our culture and are happy to share why - let's talk.
03/01/2026
Full time
Credit Acceptance is proud to be an award-winning company with local and national workplace recognition in multiple categories! Our world-class culture is shaped by dedicated Team Members who share a drive to succeed as professionals and together as a company. A great product, amazing people and our stable financial history have made us one of the largest used car finance companies nationally. Our Engineering and Analytics Team Members utilize the latest technology to develop, monitor, and maintain complex practices that help optimize our success. Our Team Members value being challenged, are encouraged to express their ideas, and have the flexibility to enjoy work life balance. We build intrinsic value by partnering with all functions of our business to support their success and make strategic business decisions. We focus on professional development and continuous improvement while enjoying a casual work environment and Great Place to Work culture! Outcomes and Activities: This position will work from home; occasional planned travel to an assigned Southfield, Michigan office location may be required. However, this position is permitted to work at a Southfield, Michigan office location if requested by the team member. Own architecture and implementation of key components of the modern data platform (e.g., data lake, streaming infrastructure, DaaS, DAL, data catalog), ensuring production reliability and technical soundness. Drive technical innovation by contributing to system design, implementation, and operational excellence in high-impact areas of the platform. Model strong engineering practices through hands-on work and code contributions, demonstrating how engineers should approach problems and uphold quality. Collaborate with peers across data and engineering teams to influence technology and architecture decisions, providing well-reasoned perspectives. 
Advocate for adoption of new technologies and demonstrate their value through prototypes, proofs of concept, and integration into team workflows. Align project execution with broader strategies by working with senior engineers and engineering leadership to support the company's technical and business direction. Conduct impact analysis to proactively identify impact of a change across services and systems Evaluate third-party technologies and solutions through technical assessments and provide recommendations that balance technical fit with business needs. Experiment and validate ideas by testing assumptions, analyzing results, and recommending practical solutions to improve platform capabilities. Contribute to documentation of standards and best practices, making platform engineering approaches clear and maintainable for other teams. Debug and resolve complex production issues, applying technical expertise to restore stability across services and systems. Participate in continuous learning and improvement efforts, helping refine processes, design practices, and team workflows for better engineering outcomes. Grow talent by participating in hiring and mentoring team members Competencies: The following items detail how you will be successful in this role. Customer Empathy: Customer Empathy is the ability to understand the perspectives, pain points, and experiences of customers. It involves actively putting oneself in the customer's shoes, comprehending their needs and challenges, and using that understanding to provide a better, more customer-centric experience. Engineering Excellence: Engineering Excellence is about bringing great craftsmanship and thought leadership to deliver an outstanding product that delights customers and solves for the business. This involves the pursuit and achievement of high standards, best practices, innovation, and superior solutions. 
One Team: A One Team mindset refers to a collaborative approach across the organization, where individuals work together seamlessly, without boundaries, as a single, cohesive team. Shared goals, open communication and mutual support create a sense of collective purpose. This enables teams to navigate challenges and pursue shared objectives more effectively. Owner's Mindset: Owner's Mindset involves adopting a set of behaviors that reflect a sense of responsibility, accountability, strategic thinking, and a proactive approach to managing your domain. As an owner, you understand the business and your domain(s) deeply and solve for the right outcome for the domain(s) and the business. Requirements: Bachelor's degree in Computer Science, Information Systems, or a closely related field; or equivalent work experience Minimum 5 years of software engineering experience, with recent hands-on experience building and maintaining data platforms or distributed systems in cloud environments Strong knowledge of software engineering best practices, with practical experience building and operating data platforms, products, or solutions Experience developing and supporting cloud-native applications (AWS, Azure, or GCP), including containerized services (Docker, Kubernetes, ECS/EKS) Working knowledge of lakehouse technologies (Delta Lake, Iceberg, Hudi) with hands-on experience in schema evolution and optimization Strong understanding of observability practices (metrics, logging, tracing, alerting) and experience applying them with tools such as Dynatrace, Splunk, or CloudWatch to ensure platform reliability and performance. 
- Applied experience with data storage and processing technologies, including object stores (S3, ADLS, GCS), relational databases, and NoSQL systems
- Awareness of data governance and security practices (e.g., access controls, encryption, compliance considerations), with the ability to design platform components that align with organizational standards
- Strong knowledge of distributed systems concepts (scalability, reliability, consistency, partitioning) and their application to large-scale data platforms
- Experience working with enterprise-class applications where uptime, reliability, and scalability are essential
- Strong programming skills in one or more languages commonly used for platform engineering (e.g., Python, Java, Scala, Go)
- Demonstrated ability to mentor and coach less experienced engineers, contributing to team growth and technical maturity
- Familiarity with Agile delivery practices and other software development lifecycle methodologies

Preferred:
- Advanced expertise in lakehouse technologies (Delta, Iceberg, Hudi), including performance tuning and reliability at scale
- Hands-on experience with workflow orchestration frameworks (Airflow, Dagster, Prefect, Databricks Workflows)
- Strong background in CI/CD pipelines for platform services
- Deep familiarity with observability and SRE practices (SLAs/SLOs/SLIs, distributed tracing, advanced monitoring tools)
- Experience with performance tuning and cost optimization for large-scale data platforms
- Financial services or FinTech industry experience

Knowledge and Skills:
- Designs and implements major components of the data platform that are scalable, reliable, and aligned with platform strategy
- Provides technical direction for a team or project, mentors less experienced engineers, and helps raise engineering standards
- Identifies gaps in current practices and proposes improvements that strengthen platform quality and delivery
- Collaborates with peers and cross-functional teams, encouraging diverse perspectives to inform decisions
- Applies strong knowledge of distributed systems, cloud-native services, and data storage technologies to deliver impactful solutions
- Connects platform initiatives to business value, making tradeoffs and outcomes visible
- Communicates technical decisions effectively to engineers and stakeholders, both verbally and in writing
- Operates with autonomy on complex projects, anticipating risks and dependencies while contributing to team-level planning and execution

Target Compensation: A competitive base salary range from $154,837 - $227,09. This position is eligible for an annual variable bonus of cash and equity, between 10-20%. Bonus amounts are based on individual performance. Final compensation within the range is influenced by many factors, including role-specific skills, depth and experience level, industry background, and relevant education and certifications. Candidates who reside in the following major metropolitan areas may be eligible for a premium on top of the posted range based on their specific zone: San Francisco, Seattle, Boston, New York City, Los Angeles, and San Diego.

Benefits: Excellent benefits package that includes 401(k) match, adoption assistance, parental leave, tuition reimbursement, comprehensive medical/dental/vision, and many nonstandard benefits that make us a Great Place to Work.

Our Company Values: To be successful in this role, Team Members need to be:
- Positive by maintaining resiliency and focusing on solutions
- Respectful by collaborating and actively listening
- Insightful by cultivating innovation, accumulating business and role-specific knowledge, demonstrating self-awareness, and making quality decisions
- Direct by effectively communicating and conveying courage
- Earnest by taking accountability, applying feedback, and effectively planning and setting priorities

Expectations:
- Remain compliant with our policies, processes, and legal guidelines
- All other duties as assigned
- Attendance as required by department

Advice!
We understand that your career search may look different than others'. Our hiring team wants to make sure that this would be a fit not just for us, but for you long term. If you are actively looking or starting to explore new opportunities, send us your application! P.S. We have great details around our stats, success, history, and more. We're proud of our culture and are happy to share why - let's talk.
Job Summary
Worksite: Northbrook, IL
Hybrid Schedule: Onsite Tues - Thurs; Remote: Mon & Fri

JOB SUMMARY
We are seeking a seasoned Senior SAP Integration Developer with deep expertise in SAP Cloud Platform Integration (CPI), SAP Integration Suite, SAP Business Technology Platform (BTP), and Advanced Event Mesh. In this role, you'll design, develop, and maintain scalable integration solutions that bridge cloud and on-premise applications across global enterprise environments. You will also be instrumental in mentoring junior developers and driving best practices for integration design, security, and performance.

Job Description

MAJOR RESPONSIBILITIES
- Design, develop, and deploy complex integration flows using SAP BTP Integration Suite, CPI, and Advanced Event Mesh.
- Build and manage synchronous and asynchronous interfaces between cloud and on-premise systems.
- Configure and administer SAP BTP environments, including connectivity (Cloud Connector), security roles, and API provisioning.
- Leverage SAP Integration Suite components such as API Management, Open Connectors, CPI-DS, and SAP Event Mesh.
- Configure and manage Solace messaging infrastructure: topics, queues, and durable/non-durable subscriptions.
- Migrate existing integrations from legacy platforms (e.g., SAP PI) to SAP Integration Suite.
- Work on API-first integration designs, building secure, scalable APIs using REST/SOAP standards.
- Monitor and troubleshoot integration issues using SAP Solution Manager, CPI monitoring dashboards, and SAP BTP admin tools.
- Collaborate with cross-functional teams to gather business requirements and translate them into technical designs.
- Guide junior developers and lead technical knowledge transfer sessions.
- Optimize existing integrations for performance, maintainability, and error handling.
- Participate in Agile/Scrum ceremonies and lead initiatives for continuous improvement.
- Interface with tools like Elasticsearch and Splunk for monitoring and logging integration behavior.
- Support the production instance and on-call issue resolution.

MINIMUM JOB REQUIREMENTS
Education: Bachelor's degree in computer science, IT, or a related discipline.
Work Experience:
- 4+ years of hands-on software development experience in SAP BTP, SAP Integration Suite, and SAP CPI.
- Strong background in ABAP development (especially with IDocs, BAPI, RFC, ALE).
- Extensive hands-on experience with REST/SOAP web services, XML, JSON, XSLT, and mapping/transformation logic.
- Deep understanding of SAP PI/PO architecture and migration best practices.
- Hands-on experience with Advanced Event Mesh.
- Proficiency in at least one of the following: Java, Groovy, Python.
- Experience with SAP Fiori and S/4HANA integration scenarios.
- Proficiency in OAuth, SAML, SSL, and other authentication/authorization protocols.
- Familiarity with SQL and working knowledge of SAP HANA, Oracle, or other relational databases.

Preferred Qualifications
- 8+ years of experience in SAP integration development.
- SAP certifications in Integration Suite, BTP, or Cloud Platform Integration.
- Experience with DevOps tools like Git, Jenkins, Docker, Kubernetes, and CI/CD pipelines.
- Knowledge of cloud infrastructure platforms (AWS, Azure, GCP).
- Experience with event-driven architecture and microservices.
- Familiarity with Agile development practices and project management tools (JIRA, Confluence).
- Prior exposure to SAP Open Connectors, Graph API, and CAPM (Cloud Application Programming Model).

TECHNICAL SKILLS
- Integration Tools: SAP CPI, SAP PI/PO, SAP BTP, SAP API Management, SAP Event Mesh, Cloud Connector
- Languages: Java, ABAP, Groovy, Python, JavaScript, XML/XSLT
- Protocols: REST, SOAP, IDoc, OData, RFC, BAPI, SFTP, HTTP
- Platforms: SAP S/4HANA, SAP ECC, SAP Fiori, SAP HANA
- Tools: Eclipse, NetWeaver Developer Studio, Postman, SoapUI, GitHub
- Cloud & DevOps: Docker, Kubernetes, Jenkins, Git, CI/CD pipelines

SOFT SKILLS
- Excellent verbal and written communication skills.
- Strong problem-solving and analytical mindset.
- Ability to mentor, coach, and lead junior team members.
- Comfortable collaborating across global, cross-functional teams.
- Highly organized and detail-oriented.
- Passion for innovation and continuous learning.

Medline Industries, LP, and its subsidiaries offer a competitive total rewards package, continuing education & training, and tremendous potential with a growing worldwide organization. The anticipated salary range for this position: $101,000.00 - $152,000.00 Annual. The actual salary will vary based on the applicant's location, education, experience, skills, and abilities. This role is bonus and/or incentive eligible. Medline will not pay less than the applicable minimum wage or salary threshold. Our benefit package includes health insurance, life and disability, 401(k) contributions, paid time off, etc., for employees working 30 or more hours per week on average. For a more comprehensive list of our benefits, please click here. For roles where employees work less than 30 hours per week, benefits include 401(k) contributions as well as access to the Employee Assistance Program, Employee Resource Groups, and the Employee Service Corp. We're dedicated to creating a Medline where everyone feels they belong and can grow their career. We strive to do this by seeking diversity in all forms, acting inclusively, and ensuring that people have tools and resources to perform at their best. Explore our Belonging page here. Medline Industries, LP is an equal opportunity employer. Medline evaluates qualified individuals without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, age, disability, neurodivergence, protected veteran status, marital or family status, caregiver responsibilities, genetic information, or any other characteristic protected by applicable federal, state, or local laws.
03/01/2026
Full time
Title: Programmer Analyst VI - Full Stack Developer
Location: Lansing, MI (Hybrid)
Note: This is a W2 contract role - C2C, 1099, and 3rd-party candidates WILL NOT be considered.

This Programmer Analyst position will act as a Senior Developer in a hybrid waterfall/agile environment on a small development team, writing and testing code to implement the user stories and requirements for the Long Term Care Reimbursement project. Resources filling this position must have at least 5 recent years of experience working with Angular, C# .NET, JavaScript, SSRS, and SQL Server, and working in an environment utilizing hybrid agile/waterfall project management methodologies.

Position Duties:
- Design, develop, and maintain applications using C# .NET and Angular
- Write user acceptance test plans, creating required test data and assisting users with running tests
- Participate in requirements gathering sessions to document scoping, definition, analysis, business design, and technical design phases
- Coordinate application development and scheduling interfaces with cross-functional teams
- Assist with debugging complex coding issues
- Author technical standards, choose technology, and create technical solutions
- Develop and maintain SSRS reports
- Participate in artifact reviews with peers, system specialists, Enterprise Security, and other entities to ensure IT solutions and applications adhere to agency policies, standards, and guidelines
- Coordinate with security resources to ensure systems are properly designed according to agency security requirements and standards
- Participate in Solutions Design Team (SDT) meetings and assist in the creation of Enterprise Architecture Solution Assessments (EASA), infrastructure Service Requests (ISR), hosting documents, and firewall rules, as needed
- Develop database objects, including stored procedures, functions, triggers, and packages using SQL and PL/SQL
- Troubleshoot issues using SQL and PL/SQL scripts
- Ensure proper change management is followed and documented for all changes to system designs and production changes
- Develop training content and facilitate training
- Actively participate in the development and implementation of the assigned client agency's strategic direction/plan
- Serve as a technical resource to the Project Manager and liaison to the PMO to assist with resolving project issues

Position Qualifications:
- 10+ years of experience developing complex systems using C#/.NET and Java (Eclipse IDE)
- 10+ years of advanced experience in SQL and PL/SQL development
- 8+ years of programming experience using JavaScript, SSRS, and Microsoft SQL Server
- 7+ years of experience working with Git code repository software, and 5+ years of experience working with Git for version control and source code management
- 5+ years of hands-on experience developing web applications using Angular and modern JavaScript frameworks
- 5+ years of recent experience writing, compiling, modifying, and debugging complex SQL Server database configuration items, including stored procedures, functions, triggers, views, tables, and linked servers
- 5+ years of experience using Azure DevOps (ADO) for backlog management, sprint planning, task tracking, and Agile progress reporting
- 5+ years of experience developing and executing unit and regression tests to ensure application reliability and stability
- 2+ years of experience with React.js and modern JavaScript (ES6+)
- Strong experience developing secure web applications, implementing industry best practices to prevent vulnerabilities such as cross-site scripting (XSS) and SQL injection, including secure logging practices
- Exposure to DevOps practices and cloud platforms, including AWS and Microsoft Azure
- Hands-on experience integrating software components into a fully functional software system
- Hands-on experience using GitHub Copilot to accelerate daily coding tasks, including code generation, refactoring, and documentation; proven ability to integrate GitHub Copilot into development workflows to enhance productivity, code quality, and team collaboration
- A minimum of a Bachelor's Degree in Information Technology or another relevant field

Note: This is a W2 contract role - C2C, 1099, and 3rd-party candidates WILL NOT be considered.
03/01/2026
Full time
Job Summary
Generalist architect assigned to a corporate functions portfolio (Logistics tech, Sales tech, Ecom, etc.), responsible for creating solution designs for new capabilities/projects that the portfolio implements and for leading, troubleshooting, and supporting operational initiatives within the portfolio, such as technical upgrades of systems, patches, and support for outages.

Job Description

MAJOR RESPONSIBILITIES
- Acts as a consultant on a broad range of technologies, platforms, and vendor offerings to drive targeted business outcomes while maintaining alignment with the overall enterprise architecture.
- Collaborates with management, business leaders, IT architecture, and other stakeholders to connect the business roadmap with operational decisions.
- Identifies the organizational impact (for example, on skills, processes, structures, or culture) and financial impact of the solutions architecture.
- Produces expert analysis, research, and designs for technical solutions that enable business capabilities, processes, and functions.
- Stays abreast of current technology trends, both in the industry as a whole and with the applications deployed in the landscape.
- Leads technical design, analysis, development, and delivery.
- Leads evaluation, design, and analysis for the implementation of a solutions architecture across a group of specific business applications or technologies, based on enterprise business strategy, business capabilities, value streams, business requirements, and enterprise standards.
- Translates business and technical requirements into an architectural blueprint to achieve business objectives, and documents all architecture design and analysis work.
- Creates architectural designs to guide and contextualize solution development across products, services, projects, and systems (including applications, technologies, processes, and information) in a way that is directly actionable, clear, and unambiguous.
- Works closely with the product owners and project/product managers to ensure a robust architectural runway that can support future business requirements throughout the product lifecycle.
- Provides consulting support to application teams to ensure the project/product is aligned with the overall enterprise, solution, and application architectures.

MINIMUM JOB REQUIREMENTS
Education: Bachelor's degree in computer science, information technology, systems engineering, or a related study.
Work Experience:
- At least 5 years of experience with multiple IT solution development disciplines, including technical or infrastructure architecture, network management, application development, middleware, database management, or cloud development.
- Experience with cloud technologies; designing and building applications for the cloud (Azure, AWS, GCP).
- Experience with ERP and middleware applications (such as SAP, Oracle, SAP IS, SAP AEM, Talend).

Knowledge / Skills / Abilities
- Experience delivering presentations to senior-level executives and technical audiences.
- Experience with various software development technologies (such as JavaScript, HTML, CSS, Java, .NET, PHP, ABAP).
- Experience with various database technologies (such as MSSQL, MongoDB, Oracle, HANA).
- Experience developing system designs, strategies, evaluations, and roadmaps.
- Excellent written and verbal communication skills, with the ability to effectively communicate with technical and non-technical staff at all levels of the organization.
- Solid understanding of product management, agile principles, and development methodologies, with the capability to support agile teams by providing advice and guidance on opportunities, impact, and risks, taking account of technical and architectural debt.
- Excellent interpersonal skills in areas such as teamwork, facilitation, and negotiation.
- Strong leadership skills.
- Excellent analytical and technical skills.
- Excellent planning and organizational skills.
- Skilled at influencing, guiding, and facilitating stakeholders and peers in decision making.
- Knowledge of various aspects of an enterprise technology architecture, such as business, information, data, network, and security.
- Ability to understand and apply various diagramming and modeling techniques.
- Good understanding of strategic and emerging technology trends, and the practical application of those technologies to evolving business and operating models.
- Ability to work effectively in a team environment and lead cross-functional teams.
- Strong understanding of the company's processes, organization, customers, and business models.

PREFERRED JOB REQUIREMENTS
Work Experience
- Experience working with multi-site global teams.
- 4+ years of DevSecOps experience, including unit testing, CI/CD, security compliance, and functional/performance/stress test automation.
- Experience architecting secure applications for healthcare, with familiarity with PHI, PII, and HIPAA compliance requirements.

Knowledge / Skills / Abilities
- Ability to apply multiple technical solutions to enable future-state business capabilities that, in turn, drive targeted business outcomes.
- Ability to balance the long-term (big picture) and short-term implications of individual decisions.
- Ability to remain unbiased toward any specific technology or vendor choice, being more interested in results than personal preferences.
- Understanding and knowledge of an Agile system development life cycle methodology (such as Scrum at Scale, SAFe, Kanban, etc.).
- Trusted and respected as a thought leader who can influence and persuade business and IT leaders and IT development teams.
- Ability to understand the long-term ("big picture") and short-term perspectives of situations and how they relate to achieving targeted business outcomes.
- Ability to estimate the financial impact of technology alternatives.
- Ability to quickly comprehend the functions and capabilities of existing, new, and emerging technologies that enable and drive new business designs and models.

Medline Industries, LP, and its subsidiaries offer a competitive total rewards package, continuing education & training, and tremendous potential with a growing worldwide organization. The anticipated salary range for this position: $132,600.00 - $199,160.00 Annual. The actual salary will vary based on the applicant's location, education, experience, skills, and abilities. This role is bonus and/or incentive eligible. Medline will not pay less than the applicable minimum wage or salary threshold. Our benefit package includes health insurance, life and disability, 401(k) contributions, paid time off, etc., for employees working 30 or more hours per week on average. For a more comprehensive list of our benefits, please click here. For roles where employees work less than 30 hours per week, benefits include 401(k) contributions as well as access to the Employee Assistance Program, Employee Resource Groups, and the Employee Service Corp. We're dedicated to creating a Medline where everyone feels they belong and can grow their career. We strive to do this by seeking diversity in all forms, acting inclusively, and ensuring that people have tools and resources to perform at their best. Explore our Belonging page here. Medline Industries, LP is an equal opportunity employer. Medline evaluates qualified individuals without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, age, disability, neurodivergence, protected veteran status, marital or family status, caregiver responsibilities, genetic information, or any other characteristic protected by applicable federal, state, or local laws.
03/01/2026
Full time
Job Summary Generalist architect assigned to a corporate functions portfolio (Logistics tech, Sales tech, Ecom etc,.) for creating solution designs for new capabilities/projects that the Portfolio implements and for leading/trouble shooting and supporting operational initiatives within the portfolio like technical upgrades of systems, patches, support for outages etc. Job Description MAJOR RESPONSIBILITIES Acts as a consultant on a broad range of technologies, platforms and vendor offerings and drive targeted business outcomes while maintaining alignment with the overall enterprise architecture. Collaborates with management, business leaders, IT architecture, and other stakeholders to connect the business roadmap with operational decisions. Identifies the organizational impact (for example, on skills, processes, structures or culture) and financial impact of the solutions architecture. Produces expert analysis, research, and designs for technical solutions that enable business capabilities, processes, and functions. Stays abreast of current technology trends both in the industry as a whole and with the applications deployed in the landscape. Leads Technical Design, Analysis, Development and Delivery. Leads evaluation, design, and analysis for the implementation of a solutions architecture across a group of specific business applications or technologies based on enterprise business strategy, business capabilities, value-streams, business requirements and enterprise standards. Translates business and technical requirements into an architectural blueprint to achieve business objectives and documents all architecture design and analysis work. Creates architectural designs to guide and contextualize solution development across products, services, projects, and systems (including applications, technologies, processes and information) in a way that is directly actionable, clear and unambiguous. 
Works closely with the product owners and project/product managers to ensure a robust architectural runway that can support future business requirements throughout the product lifecycle. Provides consulting support to application teams to ensure the project/product is aligned with the overall enterprise, solution, and application architectures. MINIMUM JOB REQUIREMENTS Education Bachelor's degree in computer science, information technology, systems engineering or a related study. Work Experience At least 5 years of experience with multiple IT solution development disciplines, including technical or infrastructure architecture, network management, application development, middleware, database management or cloud development. Experience with cloud technologies: designing and building applications for the cloud (Azure, AWS, GCP). Experience with ERP and middleware applications (such as SAP, Oracle, SAP IS, SAP AEM, Talend). Knowledge / Skills / Abilities Experience delivering presentations to senior-level executives and technical audiences. Experience with various software development technologies (such as JavaScript, HTML, CSS, Java, .NET, PHP, ABAP). Experience with various database technologies (such as MSSQL, MongoDB, Oracle, HANA). Experience developing system designs, strategies, evaluations, and roadmaps. Excellent written and verbal communication skills, with ability to effectively communicate with technical and non-technical staff at all levels of the organization. Solid understanding of product management, agile principles and development methodologies and capability of supporting agile teams by providing advice and guidance on opportunities, impact and risks, taking account of technical and architectural debt. Excellent interpersonal skills in areas such as teamwork, facilitation and negotiation. Strong leadership skills. Excellent analytical and technical skills. Excellent planning and organizational skills. 
Skilled at influencing, guiding and facilitating stakeholders and peers with decision making. Knowledge of various aspects of an enterprise technology architecture like business, information, data, network and security. Ability to understand and apply various diagramming and modeling techniques. Good understanding of strategic and emerging technology trends, and the practical application of those technologies to evolving business and operating models. Ability to work effectively in a team environment and lead cross-functional teams. Strong understanding of the company's processes, organization, customers, and business models. PREFERRED JOB REQUIREMENTS Work Experience Experience in working with multi-site global teams. 4+ years of DevSecOps experience, including unit testing, CI/CD, security compliance, functional/performance/stress test automation. Experience architecting secure applications for healthcare, with familiarity with PHI, PII, and HIPAA compliance requirements. Knowledge / Skills / Abilities Ability to apply multiple technical solutions to enable future-state business capabilities that, in turn, drive targeted business outcomes. Ability to balance the long-term (big picture) and short-term implications of individual decisions. Ability to remain unbiased toward any specific technology or vendor choice, remaining more interested in results than personal preferences. Understanding and knowledge of an Agile system development life cycle methodology (such as Scrum at Scale, SAFe, Kanban, etc.). Trusted and respected as a thought leader who can influence and persuade business and IT leaders and IT development teams. Ability to understand the long-term ("big picture") and short-term perspectives of situations and how they relate to achieving targeted business outcomes. Ability to estimate the financial impact of technology alternatives. 
Ability to quickly comprehend the functions and capabilities of existing, new and emerging technologies that enable and drive new business designs and models. Medline Industries, LP, and its subsidiaries, offer a competitive total rewards package, continuing education & training, and tremendous potential with a growing worldwide organization. The anticipated salary range for this position: $132,600.00 - $199,160.00 Annual The actual salary will vary based on applicant's location, education, experience, skills, and abilities. This role is bonus and/or incentive eligible. Medline will not pay less than the applicable minimum wage or salary threshold. Our benefit package includes health insurance, life and disability, 401(k) contributions, paid time off, etc., for employees working 30 or more hours per week on average. For a more comprehensive list of our benefits please click here . For roles where employees work less than 30 hours per week, benefits include 401(k) contributions as well as access to the Employee Assistance Program, Employee Resource Groups and the Employee Service Corp. We're dedicated to creating a Medline where everyone feels they belong and can grow their career. We strive to do this by seeking diversity in all forms, acting inclusively, and ensuring that people have tools and resources to perform at their best. Explore our Belonging page here . Medline Industries, LP is an equal opportunity employer. Medline evaluates qualified individuals without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, age, disability, neurodivergence, protected veteran status, marital or family status, caregiver responsibilities, genetic information, or any other characteristic protected by applicable federal, state, or local laws.
DivIHN (pronounced "divine") is a CMMI ML3-certified Technology and Talent solutions firm. Driven by a unique Purpose, Culture, and Value Delivery Model, we enable meaningful connections between talented professionals and forward-thinking organizations. Since our formation in 2002, organizations across commercial and public sectors have been trusting us to help build their teams with exceptional temporary and permanent talent. Visit us at to learn more and view our open positions. Please apply or call one of us to learn more. For further inquiries regarding the following opportunity, please contact one of our Talent Specialists, Vijay, at (or) Saravanakumar, at (or) Abdul, at Title: Senior Data Engineer (Remote) Duration: 6 Months Location: Remote. Only W2 candidates are eligible for this position. Third-party or C2C candidates will not be considered. Position Overview The Senior Data Engineer is responsible for designing, developing, and maintaining scalable data infrastructure and pipelines that ensure clean, organized, secure, and timely data is available to downstream users. This role supports analytics, reporting, business intelligence, and advanced data initiatives to drive child- and family-centered decision-making for senior leadership and the administration. The Senior Data Engineer works closely with the client's leadership, the Illinois Department of Innovation and Technology (DoIT), data architects, analysts, software engineers, and other stakeholders to deliver reliable, compliant, and high-performing data systems. Key Responsibilities Data Architecture and Pipeline Development Design, develop, evaluate, and maintain scalable ETL/ELT data pipelines. Build and manage structured and unstructured data workflows to move Early Childhood program data from source systems to warehouses, data lakes, and analytics platforms. Develop scalable data ingestion and transformation processes to support increasing data volumes and complex analytics needs. 
Optimize data partitioning, indexing strategies, compression techniques, and distributed processing frameworks to improve storage and query performance. Cloud and Infrastructure Management Design and maintain cloud-based data platforms (e.g., AWS, Azure, Google Cloud, IBM Cloud). Manage data warehouses (e.g., BigQuery, Azure Synapse), data lakes (e.g., Amazon S3, Google Cloud Storage), lakehouses, and related infrastructure. Collaborate on infrastructure management, database administration, and modern data architecture approaches including data mesh and cloud-native solutions. Data Quality, Governance and Compliance Ensure data accuracy, consistency, reliability, and performance. Enforce compliance with FERPA, HIPAA, GDPR, COPPA, and other state/federal data governance requirements. Develop and implement industry-standard data security practices including encryption, access controls, breach notification protocols, and audit readiness. Implement validation rules, anomaly detection, monitoring frameworks, and quality assurance automation. Tools and Technical Expertise Develop and optimize data processing using SQL for transformation and querying. Use Python for scripting, automation, and data engineering tasks. Leverage Databricks and distributed computing frameworks for scalable data processing. Utilize tools such as Airflow, DBT, Splunk, Tableau, and Power BI for orchestration, monitoring, validation, and visualization. Implement CI/CD pipelines, version control strategies, automated testing, and pipeline orchestration best practices. Integration and Advanced Analytics Direct integration of data from APIs, third-party systems, and internal platforms. Support machine learning model deployment and real-time data processing environments. Enable advanced analytics and operationalize data science initiatives to improve decision-making, efficiency, and risk mitigation. 
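The data-quality duties above (validation rules, anomaly detection, quality-assurance automation) can be sketched in plain Python. This is a minimal illustration of the kind of check the role describes, not the client's actual implementation; the field names, age threshold, and sample rows are hypothetical.

```python
# Illustrative data-quality check: apply validation rules (required fields)
# plus a simple range-based anomaly flag. Field names and thresholds are
# hypothetical examples, not taken from the posting.

def validate_records(records, required_fields, max_age=18):
    """Return a list of (record_index, reason) tuples for records that fail checks."""
    violations = []
    for i, rec in enumerate(records):
        # Rule 1: required fields must be present and non-empty.
        for field in required_fields:
            if rec.get(field) in (None, ""):
                violations.append((i, f"missing {field}"))
        # Rule 2: flag obviously anomalous ages for an early-childhood dataset.
        age = rec.get("age")
        if age is not None and not (0 <= age <= max_age):
            violations.append((i, "age out of range"))
    return violations

rows = [
    {"child_id": "A1", "age": 4},
    {"child_id": "", "age": 3},
    {"child_id": "A3", "age": 42},
]
print(validate_records(rows, ["child_id"]))
# → [(1, 'missing child_id'), (2, 'age out of range')]
```

In a production pipeline a check like this would typically run as an orchestrated task (e.g., in Airflow or DBT tests) that quarantines failing rows and emits metrics to a monitoring framework.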
Reporting and Stakeholder Communication Coordinate development of dashboards, reports, publications, briefings, and presentations. Ensure timely and accurate submission of mandated state and federal reports. Communicate complex technical findings to both technical and non-technical audiences. Promote interactive data exploration to improve transparency, accountability, and strategic decision-making. Documentation and Process Improvement Develop and maintain detailed documentation of data pipelines, architecture, workflows, and procedures. Establish industry-standard best practices for data engineering, governance, automation, and reproducibility. Lead continuous improvement initiatives to enhance reliability, scalability, and operational efficiency. Leadership and Collaboration Lead and manage collaboration within the Data Engineering section. Partner with leadership, analysts, data scientists, and engineers to build scalable, trusted data systems. Provide technical oversight, mentorship, and strategic direction for data engineering initiatives. Required Qualifications 5 years of experience designing and maintaining scalable data pipelines and modern data infrastructure. Strong proficiency in SQL and Python. Experience with distributed data engineering tools such as Databricks or Spark. Experience with cloud platforms (AWS, Azure, Google Cloud, or similar). Deep understanding of data governance, security, compliance, and regulatory requirements. Experience optimizing storage, partitioning strategies, and query performance. Knowledge of ETL/ELT methodologies and orchestration tools. 
Preferred Qualifications Experience working in State government or education data systems Experience with Databricks, Airflow, DBT, or similar data orchestration tools Experience supporting machine learning deployment in production Experience with real-time data processing frameworks About us: DivIHN, the 'IT Asset Performance Services' organization, provides Professional Consulting, Custom Projects, and Professional Resource Augmentation services to clients in the Mid-West and beyond. The strategic characteristics of the organization are Standardization, Specialization, and Collaboration. DivIHN is an equal opportunity employer. DivIHN does not and shall not discriminate against any employee or qualified applicant on the basis of race, color, religion (creed), gender, gender expression, age, national origin (ancestry), disability, marital status, sexual orientation, or military status.
03/01/2026
Full time
Senior Software Engineer Needed - $120K-$180K - .NET Shop - 100% REMOTE This Jobot Job is hosted by: Steven Zacharias Are you a fit? Easy Apply now by clicking the "Apply" button and sending us your resume. Salary: $120,000 - $180,000 per year A bit about us: We are a growing .NET shop that's actively looking for a Senior Software Engineer to work 100% remote! If interested, please apply or email me your resume directly at - ! Why join us? $120,000-$180,000 Base Salary Health / Dental / Vision 401k PTO 100% REMOTE Job Details Qualifications: Proficient with .NET Core, ASP.NET, MVC, Web API, C# (or PHP, MySQL, Laravel, Ruby on Rails, PostgreSQL, Ember) Proficient with JavaScript Understanding of SOLID design principles Experience with unit tests and testable code Proficient with source code control tools and techniques Professional experience developing highly scalable APIs and integrations Solid understanding of Web application architecture and operations Experience with React JS (preferred) or other front-end development ecosystem Experience with SQL, document databases, or other data persistence tools Familiarity with design patterns Familiarity with Azure or other cloud platforms Interested in hearing more? Easy Apply now by clicking the "Apply" button. Jobot is an Equal Opportunity Employer. We provide an inclusive work environment that celebrates diversity and all qualified candidates receive consideration for employment without regard to race, color, sex, sexual orientation, gender identity, religion, national origin, age (40 and over), disability, military status, genetic information or any other basis protected by applicable federal, state, or local laws. Jobot also prohibits harassment of applicants or employees based on any of these protected categories. It is Jobot's policy to comply with all applicable federal, state and local laws respecting consideration of unemployment status in making hiring decisions. 
Sometimes Jobot is required to perform background checks with your authorization. Jobot will consider qualified candidates with criminal histories in a manner consistent with any applicable federal, state, or local law regarding criminal backgrounds, including but not limited to the Los Angeles Fair Chance Initiative for Hiring and the San Francisco Fair Chance Ordinance. Information collected and processed as part of your Jobot candidate profile, and any job applications, resumes, or other information you choose to submit is subject to Jobot's Privacy Policy, as well as the Jobot California Worker Privacy Notice and Jobot Notice Regarding Automated Employment Decision Tools which are available at By applying for this job, you agree to receive calls, AI-generated calls, text messages, or emails from Jobot, and/or its agents and contracted partners. Frequency varies for text messages. Message and data rates may apply. Carriers are not liable for delayed or undelivered messages. You can reply STOP to cancel and HELP for help. You can access our privacy policy here:
03/01/2026
Full time
Senior Technology Development Operations Manager Cooley is seeking a Senior DevOps Manager to join the Infrastructure & Development Operations team. Position summary: The Senior Technology Development Operations (DevOps) Manager is responsible for leading the team that designs, architects, deploys, tests, maintains, and documents the DevOps technology stack. This stack is responsible for facilitating a secure, CI/CD-enabled, and highly available SaaS-based delivery and hosting environment for Cooley's custom applications. The Senior DevOps Manager will build and deploy green-field solutions where needed, and otherwise will primarily work to improve the efficiency, security, and availability/reliability of the enterprise DevOps and related infrastructure. This role will work in an integrated fashion with the development teams to build in-depth knowledge of the products and code, attending daily stand-ups as needed. In addition to being technically advanced, this position will use a high degree of emotional intelligence and the ability to work as a team towards complex and layered objectives. 
Specific duties and responsibilities include, but are not limited to, the following: Position responsibilities: Provide experienced leadership in developing solutions for highly scalable, highly available, hybrid cloud (IaaS, PaaS, SaaS) infrastructure patterns and platform integrations across physical colocations and hyperscalers (AWS and Azure) Manage, build, configure, administer, operate, and maintain all components that comprise our DevOps environment Leverage industry-standard frameworks and blueprints as a foundation to create best-in-class Terraform IaC module libraries Lead the evolution of our DevOps and DevSecOps practice maturity Act as a key member of the infrastructure architecture team to identify optimization opportunities throughout the infrastructure Define, document, and enforce configuration standards and governance through IaC Develop, test, deploy, and optimize DevOps IaC code deployment pipelines and practices Provision automation using CI/CD (DevOps Pipelines) and IaC (Terraform) tooling Serve as a technical escalation point Work with our development and data teams to integrate products into a DevOps-managed environment Develop and maintain scripts to automate tool/service deployments to our Hybrid Cloud environment through DevOps Pipelines and Releases Participate in software releases and deployments Contribute to the design, update, refinement, and documentation of operational processes Provide technical mentorship and educate team members as a subject matter expert on IaC, containerization, and CI/CD Brainstorm new ideas and ways to improve product delivery and efficiency Consult peer teams for feedback during the design, testing, and implementation stages Serve as direct supervisor and mentor to direct reports Provide day-to-day supervision of direct reports, ensure compliance with assigned work hours and monitor for compliance with all firm and department policies. 
Manage staffing coverage, review and process time logs/time off requests Support business professional development and continued educational opportunities In collaboration with immediate supervisor and HR, participate in hiring, performance appraisals, counseling, termination and other employee lifecycle events All other duties as assigned or required. Skills and experience: Required: After orientation at Cooley LLP, exhibit proficiency in the Microsoft Office suite, iManage and other firm applications Ability to work extended and/or weekend hours, as required Ability to travel, as required 7+ years of relevant experience in cloud infrastructure and DevOps with 2+ years of exempt/management experience in relevant roles Proficiency in AWS or Azure architecture, configuration, and security Skilled in CI/CD pipeline design using Azure DevOps, Jenkins, or GitHub Actions Strong Terraform expertise, including advanced workflows and tools like Terragrunt Experience with Docker, Kubernetes, Helm, and GitOps tools (Flux, ArgoCD) Familiarity with microservices deployment and release automation Hands-on with .NET Core containers on Linux and scripting in Linux/Windows Knowledge of open-source and NoSQL databases (e.g., MS SQL, MongoDB, Elasticsearch) Experience with APM tools (Datadog, New Relic, etc.) 
and IaC security tools (Snyk, tfsec) Preferred: Bachelor's Degree in Computer Science, Information Technology, Engineering, or associated discipline Experience working with advanced ETL data workflows including technologies such as AWS EMR, Azure Synapse, Azure Data Factory, or Apache Hive/Spark/Airflow Supervisory experience Experience with IaC deployment of AKS/EKS/GKE architecture is highly desired Experience with enterprise Data Lake environments using technologies such as Databricks or Snowflake Competencies: Expert analytical/quantitative, problem-solving, and deductive reasoning skills, with experience performing advanced troubleshooting and root cause analysis of complex technical issues Excellent organizational, planning, and time management skills and ability to work either independently or in a team environment to manage competing priorities and meet deadlines Advanced verbal and written communication skills with the ability to present findings, conclusions, alternatives, and information clearly and concisely Experience working with all levels of staff, management, stakeholders, and vendors with ability to build effective relationships through trust and diplomacy. Cooley offers a competitive compensation and excellent benefits package and is committed to fair and equitable employment practices. EOE. The expected annual pay range for this position with a full-time schedule is $180,000 - $255,000. Please note that final offer amount will be dependent on geographic location, applicable experience and skillset of the candidate. We offer a full range of elective benefits including medical, health savings account (with applicable medical plan), dental, vision, health and/or dependent care flexible spending accounts, pre-tax commuter benefits, life insurance, AD&D, long-term care coverage, backup care for children and/or adults and other parental support benefits. 
In addition to elective benefit options, benefited employees receive firm-paid life insurance, AD&D, LTD, short term medical benefits as well as 21 days of Paid Time Off ("PTO") and 10 paid holidays each year. We provide generous parental leave and fertility benefits. New employees will attend a detailed benefit orientation to learn more about our many benefits and resources.
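The IaC governance duties in the posting above ("define, document, and enforce configuration standards and governance through IaC") can be illustrated with a minimal policy gate over a Terraform plan. The required tags and the inlined plan fragment are hypothetical, though the dict shape mirrors Terraform's documented JSON plan output (`terraform show -json plan.out`, `resource_changes` → `change` → `after`):

```python
# Minimal sketch of an IaC policy gate: flag planned resources missing
# required tags. REQUIRED_TAGS and the sample plan are hypothetical; the
# structure follows Terraform's JSON plan format.
REQUIRED_TAGS = {"owner", "cost_center"}

def untagged_resources(plan):
    """Return addresses of planned resources missing any required tag."""
    failures = []
    for rc in plan.get("resource_changes", []):
        # "after" holds the planned post-apply attributes of the resource.
        after = (rc.get("change") or {}).get("after") or {}
        tags = set(after.get("tags") or {})
        if not REQUIRED_TAGS <= tags:
            failures.append(rc["address"])
    return failures

plan = {
    "resource_changes": [
        {"address": "aws_s3_bucket.logs",
         "change": {"after": {"tags": {"owner": "infra", "cost_center": "42"}}}},
        {"address": "aws_instance.app",
         "change": {"after": {"tags": {"owner": "infra"}}}},
    ]
}
print(untagged_resources(plan))  # → ['aws_instance.app']
```

In a CI/CD pipeline a check like this would run after `terraform plan` and fail the build when the returned list is non-empty, which is one way the "enforce configuration standards through IaC" responsibility is commonly automated.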
03/01/2026
Senior Technology Development Operations ManagerCooley is seeking a Senior DevOps Manager to join the Infrastructure & Development Operations team.Position summary: The Senior Technology Development Operations (DevOps)Manageris responsible for leading the team thatdesigns, architects,deploys, tests,maintains,and documents the DevOps technology stack. This stack is responsible for facilitating a secure, CI/CD-enabled, and highly availableSaaS-baseddelivery and hosting environment for Cooley's custom applications. The Senior DevOps Manager will build and deploy green-field solutions where needed, and otherwise willprimarilywork to improve theefficiency,security,and availability/reliability of the enterprise DevOps and related infrastructure. This role will workin an integrated fashion with the development teams to build in-depth knowledge of the products and code, attending daily stand-ups as needed. In addition to being technically advanced, this position will use a high degree of emotional intelligence and the ability to work as a team towards complex and layered objectives. 
Specific duties and responsibilities include, but are not limited to, the following:Position responsibilities: Provide experienced leadership in developing solutions for highly scalable, highly available, hybrid cloud (IaaS, PaaS, SaaS) infrastructure patterns and platform integrations across physical colocations and hyperscalers (AWS and Azure) Manage, build, configure, administer, operate, and maintain all components that comprise our DevOps environment Leverage industry standard Frameworks and Blueprints as a foundation to create best-in-class Terraform IaC module libraries Lead the evolution of our DevOps and DevSecOps practice maturity Act as a key member of the infrastructure architecture team to identify optimization opportunities throughout the infrastructure Define, document, and enforce configuration standards and governance through IaC Develop, test, deploy, and optimize DevOps IaC code deployment pipelines and practices Provision automation using CI/CD (DevOps Pipelines) and IaC (Terraform) tooling Serve as a technical escalation point Work with our development and data teams to integrate products into a DevOps-managed environment Develop and maintain scripts to automate tool/service deployments to our Hybrid Cloud environment through DevOps Pipelines and Releases Participate in software releases and deployments Contribute to the design, update, refinement, and documentation of operational processes Provide technical mentorship and educate team members as a subject matter expert on IaC, containerization, and CI/CD Brainstorm new ideas and ways to improve product delivery and efficiency Consult peer teams for feedback during the design, testing, and implementation stages Serve as direct supervisor and mentor to direct reports Provide day-to-day supervision of direct reports, ensure compliance with assigned work hours and monitor for compliance with all firm and department policies. 
Manage staffing coverage, review and process time logs/time off requests Support business professional development and continued educational opportunities In collaboration with immediate supervisor and HR, participate in hiring, performance appraisals, counseling, termination and other employee lifecycle events All other duties as assigned or required. Skills and experience: Required: After orientation at Cooley LLP, exhibit proficiency in the Microsoft Office suite, iManage and other firm applications Ability to work extended and/or weekend hours, as required Ability to travel, as required 7+ years of relevant experience in cloud infrastructure and DevOps with 2+ years of exempt/management experience in relevant roles Proficiency in AWS or Azure architecture, configuration, and security Skilled in CI/CD pipeline design using Azure DevOps, Jenkins, or GitHub Actions Strong Terraform expertise, including advanced workflows and tools like Terragrunt Experience with Docker, Kubernetes, Helm, and GitOps tools (Flux, ArgoCD) Familiarity with microservices deployment and release automation Hands-on with .NET Core containers on Linux and scripting in Linux/Windows Knowledge of open-source and NoSQL databases (e.g., MS SQL, MongoDB, Elasticsearch) Experience with APM tools (Datadog, New Relic, etc.) 
and IaC security tools (Snyk, tfsec) Preferred: Bachelor's Degree in Computer Science, Information Technology, Engineering, or associated discipline Experience working with advanced ETL data workflows including technologies such as AWS EMR, Azure Synapse, Azure Data Factory, or Apache Hive/Spark/Airflow Supervisory experience Experience with IaC deployment of AKS/EKS/GKE architecture is highly desired Experience with enterprise Data Lake environments using technologies such as Databricks or Snowflake. Competencies: Expert analytical/quantitative, problem-solving, and deductive reasoning skills, with experience performing advanced troubleshooting and root cause analysis of complex technical issues Excellent organizational, planning, and time management skills and ability to work either independently or in a team environment to manage competing priorities and meet deadlines Advanced verbal and written communication skills with the ability to present findings, conclusions, alternatives, and information clearly and concisely Experience working with all levels of staff, management, stakeholders, and vendors with ability to build effective relationships through trust and diplomacy. Cooley offers a competitive compensation and excellent benefits package and is committed to fair and equitable employment practices. EOE. The expected annual pay range for this position with a full-time schedule is $180,000 - $255,000. Please note that final offer amount will be dependent on geographic location, applicable experience and skillset of the candidate. We offer a full range of elective benefits including medical, health savings account (with applicable medical plan), dental, vision, health and/or dependent care flexible spending accounts, pre-tax commuter benefits, life insurance, AD&D, long-term care coverage, backup care for children and/or adults and other parental support benefits. 
In addition to elective benefit options, benefited employees receive firm-paid life insurance, AD&D, LTD, short term medical benefits as well as 21 days of Paid Time Off ("PTO") and 10 paid holidays each year. We provide generous parental leave and fertility benefits. New employees will attend a detailed benefit orientation to learn more about our many benefits and resources.
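The posting's call to "define, document, and enforce configuration standards and governance through IaC" can be sketched as a small policy check over Terraform's machine-readable plan output (`terraform show -json plan.out`). The required-tag set and the inline plan fragment below are hypothetical, invented for illustration:

```python
# Hypothetical tagging standard: every managed resource must carry these tags.
REQUIRED_TAGS = {"owner", "environment", "cost-center"}

def missing_tags(plan: dict) -> dict:
    """Return {resource_address: set_of_missing_tags} for resources in a
    Terraform JSON plan that lack any of the required tags."""
    violations = {}
    for rc in plan.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}
        tags = set((after.get("tags") or {}).keys())
        absent = REQUIRED_TAGS - tags
        if absent:
            violations[rc["address"]] = absent
    return violations

# Tiny inline fragment standing in for real `terraform show -json` output.
plan = {
    "resource_changes": [
        {"address": "aws_s3_bucket.logs",
         "change": {"after": {"tags": {"owner": "devops",
                                       "environment": "prod",
                                       "cost-center": "1234"}}}},
        {"address": "aws_s3_bucket.tmp",
         "change": {"after": {"tags": {"owner": "devops"}}}},
    ]
}
print(missing_tags(plan))
```

A check like this typically runs as a CI/CD pipeline gate after `terraform plan`, failing the build when violations are non-empty.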
We're Hiring: Dotnet Senior Architect Location: Onsite Chicago Engagement: FTE / Contract / C2H Rate: $70/hr or $130K annually Shift: Day Visa: No H1B Profiles: Max 20 years of experience Educational Qualification: B.Tech or Equivalent Experience Range: 18+ years overall experience, 8+ years in an architectural role Primary Skills (Must Have TA Screening): 18+ years in web application development (C#, ASP.NET, Web API, .NET Core/Framework) 12+ years in Angular or React 6+ years in Cloud (Azure or GCP) 4+ years in Microservices 10+ years in NUnit 8+ years in Design Patterns 14+ years with relational & NoSQL databases (SQL Server, PostgreSQL, Cosmos DB) 6+ years defining, designing, developing, and deploying web apps on Azure Job Responsibilities (RNR Technical Panel Evaluation): Lead architectural design & development of .NET applications (scalability, performance, security). Design & implement cloud architectures on Azure (App Services, Functions, SQL DB, AKS). Develop & promote microservices-based solutions. Drive application modernization, legacy transformation, and cloud adoption. Apply design patterns effectively. Oversee RESTful API design & implementation. Translate business requirements into technical solutions with product & customer teams. Provide technical guidance & mentorship to development teams. Establish architectural standards, coding practices, and security protocols. Stay updated with emerging technologies. Lead teams with strong communication & mentoring skills. Implement performance, security, and scalability best practices. Optimize performance (code, caching, DB indexing). Collaborate with UX/UI designers & product teams. Conduct code reviews & enforce standards. Manage relational & NoSQL databases. Work with HTML5, CSS3, jQuery, JSON, Bootstrap. Apply software design principles & patterns. 5+ years in unit testing (xUnit, NUnit). Deep understanding of architecture patterns. Excellent communication skills. 
Collaborate effectively onsite with client & offshore teams. Soft Skills (Hiring Manager Evaluation): Strong oral, written & presentation skills Ability to align technical decisions with business goals Confident in architectural decision-making Strong interpersonal & relationship-building skills Constructive feedback culture in code reviews Organization, collaboration & time management skills Analytical mindset with proactive problem-solving Expected Outcome: We are seeking a highly skilled Technical Architect with hands-on expertise in the .NET stack to lead end-to-end architecture for a legal domain project. The ideal candidate will design scalable, secure, performance-optimized solutions using C#, ASP.NET, Web API, .NET Core/Framework, Angular/React, NUnit, and Azure. This role requires strong collaboration onsite with product owners, BAs, and cross-functional teams to translate complex business requirements into robust technical designs. You will establish governance models, conduct design reviews, and ensure adherence to architectural standards. Apply Now Interested candidates can send their resumes to:
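The "optimize performance (code, caching, DB indexing)" responsibility above is stack-agnostic; as an illustrative sketch (in Python rather than the posting's .NET stack), memoizing a slow lookup avoids repeated round-trips for the same key:

```python
from functools import lru_cache

calls = 0  # counts how many times the slow path actually runs

@lru_cache(maxsize=None)
def expensive_lookup(key: str) -> str:
    """Stand-in for a slow database or API call; the cache decorator
    returns the stored result on repeat invocations with the same key."""
    global calls
    calls += 1
    return key.upper()

expensive_lookup("client-42")
expensive_lookup("client-42")  # served from cache; slow path not re-run
print(calls)
```

The same idea appears in .NET as `IMemoryCache` or a distributed cache in front of the database; the trade-off in either stack is cache invalidation when the underlying data changes.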
01/15/2026
Job Title: Data Infrastructure Engineer Location: Kennesaw, Georgia Regular/Temporary: Regular Full/Part Time: Full-Time Job ID: 292336 About Us Are you ready to transform lives through academic excellence, innovative research, strong community partnerships and economic opportunity? Kennesaw State University is one of the 50 largest public institutions in the country. With growing enrollment and global reach, we continue to expand our institutional influence and prominence beyond the state of Georgia. We offer more than 190 undergraduate, graduate, and doctoral degrees to empower our 47,000 students to become thought leaders, lifelong learners, and informed global citizens. Our entrepreneurial spirit, high-impact research, and Division I athletics draw students from throughout the region and from more than 100 countries across the globe. Our university's vibrant culture, career opportunities, rich benefits, and values of respect, integrity, collaboration, inclusivity, and accountability make us an employer of choice. We are part of the University System of Georgia . We are searching for talented people to join Kennesaw State University in our vision . Come Take Flight at KSU! Location (Primary Location for Job Responsibilities) Our Kennesaw campus is located at 1000 Chastain Road NW, Kennesaw, GA 30144. Our Marietta campus is located at 1100 South Marietta Parkway, Marietta, GA 30060. Job Summary Focuses on building and maintaining the data infrastructure, including the extraction, loading, and staging of data from various data sources, both on-premises and in the cloud. Ensures the seamless and secure transfer of data, optimizing for performance, integration, and reliability to enable subsequent data transformation and modeling processes for enterprise reporting and analytics. Responsibilities KEY RESPONSIBILITIES: 1. 
Develops, tests, and implements data pipelines to efficiently, securely, and reliably ingest data from various data sources into the enterprise data lake or data warehouse 2. Collaborates with data architects and data source SMEs to understand requirements and designs data pipelines accordingly 3. Develops and maintains robust data integrity checks, ensuring data accuracy, timeliness, and consistency 4. Utilizes cloud tools and custom scripts for validation, sets up automated anomaly detection, and collaborates with stakeholders to align quality checks with data requirements 5. Enables integration of data from various data sources, such as databases, cloud services and APIs, to facilitate seamless data flow for enterprise and self-service reporting and analytics 6. Monitors and optimizes data pipelines, identifies and troubleshoots issues using tools such as Azure Monitor or similar systems 7. Implements automated alerts and collaborates with architects to adopt best practices in pipeline design for enhanced performance 8. Maintains compliance with data security policies and regulations in the data infrastructure 9. Implements encryption, manages access controls, conducts security audits, and stays updated with security best practices to safeguard data integrity and privacy 10. Manages the storage structure within the data lake or data warehouse, optimizing resource utilization and ensuring efficient integration with data transformation and modeling processes 11. Supports senior technical staff in project planning and the development of standard operating procedures 12. Contributes to the establishment of best practices, ensuring project alignment with technical standards and organizational goals 13. Creates and regularly updates technical documentation for data pipeline processes 14. 
Ensures clear, comprehensive, and accessible documentation is available, covering all aspects of pipeline design, operation, and maintenance Required Qualifications Educational Requirements High school diploma or equivalent Required Experience Five (5) years of related IT experience. Preferred Qualifications Preferred Educational Qualifications An undergraduate or advanced degree from an accredited institution of higher education in Computer Science, Information Systems, Business Administration or related field Preferred Experience Experience working with reporting tools such as Power BI, Tableau, etc. Previous work experience in Higher Education Knowledge, Skills, & Abilities ABILITIES Commitment to continuous learning and staying updated with the latest trends and best practices in data engineering Able to handle multiple tasks or projects at one time meeting assigned deadlines KNOWLEDGE Familiarity with data protection and privacy laws and regulations (e.g., FERPA, HIPAA) Knowledge of various file formats used in data storage, such as Parquet, Avro, and CSV, and their implications on performance and storage Understanding of cloud storage services, such as blob storage, data lakes, data lakehouses, and data warehouses Understanding of data warehouse architecture patterns (e.g., Medallion Architecture, star schema, One Big Table (OBT), materialized views) Knowledge of data warehousing principles, including data quality, data enrichment and standardization, and data modeling. 
Knowledge of best practices in data pipelines orchestration Knowledge of data security practices, including encryption/decryption, and compliance with data governance policies and guidelines SKILLS Excellent interpersonal, initiative, teamwork, problem solving, independent judgment, organization, communication (verbal and written), time management, project management and presentation skills Demonstrated skills in relational databases (e.g., SQL Server, Oracle, MySQL, etc.) Demonstrated skills in data engineering tools (e.g., Azure Data Factory, SSIS, Informatica, Pentaho, Oracle Data Integrator, etc.) Skills in identifying bottlenecks in data pipelines and optimizing for efficiency and scalability Skills in developing efficient and scalable data ingestion pipelines between cloud-based and on-premises data sources and destinations Proficient with computer applications and programs associated with the position (e.g., Microsoft Office suite and other collaboration tools) Proficiency with SQL and its variants (e.g., PL/SQL, T-SQL, etc.) Proficiency in programming/scripting languages (e.g., Python, Java, PowerShell, etc.) Proficiency in data engineering technologies and tools (e.g., Azure Data Factory, Apache Spark, Azure Synapse Analytics, Python, Airflow, etc.) Strong attention to detail and organization skills Strong customer service skills and phone and email etiquette USG Core Values The University System of Georgia comprises our 26 institutions of higher education and learning as well as the System Office. Our USG Statement of Core Values comprises Integrity, Excellence, Accountability, and Respect. These values serve as the foundation for all that we do as an organization, and each USG community member is responsible for demonstrating and upholding these standards. More details on the USG Statement of Core Values and Code of Conduct are available in USG Board Policy 8.2.18.1.2 and can be found on-line at . 
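The star-schema pattern named in the knowledge requirements above can be illustrated with a toy warehouse: a single fact table joined to its dimension tables. The table and column names below are invented for illustration, using Python's built-in sqlite3 module:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# One fact table keyed to two dimension tables -- the classic star layout.
cur.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, term TEXT);
CREATE TABLE dim_course (course_key INTEGER PRIMARY KEY, dept TEXT);
CREATE TABLE fact_enrollment (
    date_key INTEGER REFERENCES dim_date(date_key),
    course_key INTEGER REFERENCES dim_course(course_key),
    enrolled INTEGER);
INSERT INTO dim_date VALUES (1, 'Fall'), (2, 'Spring');
INSERT INTO dim_course VALUES (10, 'CS'), (11, 'MATH');
INSERT INTO fact_enrollment VALUES (1, 10, 30), (1, 11, 25), (2, 10, 40);
""")
# Analytical queries join the fact to its dimensions and aggregate.
rows = cur.execute("""
    SELECT d.term, c.dept, SUM(f.enrolled)
    FROM fact_enrollment f
    JOIN dim_date d USING (date_key)
    JOIN dim_course c USING (course_key)
    GROUP BY d.term, c.dept
    ORDER BY d.term, c.dept
""").fetchall()
print(rows)
```

The same join-and-aggregate shape carries over to T-SQL or Synapse; the star layout keeps facts narrow and pushes descriptive attributes into the dimensions.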
Additionally, USG supports Freedom of Expression as stated in Board Policy 6.5 Freedom of Expression and Academic Freedom found on-line at . Equal Employment Opportunity Kennesaw State University is an Equal Employment Opportunity Employer. The University is committed to maintaining a fair and respectful environment for living, work and study. To that end, and in accordance with federal and state law, Board of Regents policy, and University policy, the University prohibits harassment of or discrimination against any person because of race, color, sex (including sexual harassment, pregnancy, and medical conditions related to pregnancy), sexual orientation, gender identity, gender expression, ethnicity or national origin, religion, age, genetic information, disability, or veteran or military status by any member of the KSU Community on campus, in connection with a University program or activity, or in a manner that creates a hostile environment for members of the KSU community. For additional information on this policy, or to file a complaint under the provisions of this policy, students, employees, applicants for employment or admission or other third parties should contact the Office of Institutional Equity at English Building, Suite 225, . Other Information This is not a supervisory position. This position does not have any financial responsibilities. This position will not be required to drive. This role is considered a position of trust. This position does not require a purchasing card (P-Card). This position may travel 1% - 24% of the time This position does not require security clearance. Background Check Credit Report Standard Enhanced Per the University System of Georgia background check policy, all final candidates will be required to consent to a criminal background investigation. Final candidates may be asked to disclose criminal record history during the initial screening process and prior to a conditional offer of employment. 
Applicants for positions of trust with screening .
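The data integrity checks described in the responsibilities above (null checks plus automated anomaly detection on an ingested batch) can be sketched as follows. The field names, batch, and drop threshold are hypothetical:

```python
def check_batch(rows, required_fields, prev_count, max_drop=0.5):
    """Return a list of integrity issues for an ingested batch:
    missing required fields, plus a row-count anomaly versus the
    previous load (a drop below max_drop * prev_count is flagged)."""
    issues = []
    for i, row in enumerate(rows):
        for f in required_fields:
            if row.get(f) in (None, ""):
                issues.append(f"row {i}: missing {f}")
    if prev_count and len(rows) < prev_count * max_drop:
        issues.append(f"row count dropped from {prev_count} to {len(rows)}")
    return issues

# Hypothetical two-row batch where the previous load had 100 rows.
batch = [{"id": 1, "email": "a@x.edu"}, {"id": 2, "email": ""}]
print(check_batch(batch, ["id", "email"], prev_count=100))
```

In practice a check like this would run as a pipeline step (e.g., in Azure Data Factory or Airflow) and raise an automated alert rather than print.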
01/14/2026
Full time