Job Description

Candidates must possess work authorization which does not require sponsorship by the employer for a visa.

About Infinitive:
Infinitive is a data and AI consultancy that enables its clients to modernize, monetize, and operationalize their data to create lasting and substantial value. We possess deep industry and technology expertise to drive and sustain adoption of new capabilities. We match our people and personalities to our clients' culture while bringing the right mix of talent and skills to enable high return on investment. Infinitive has been named "Best Small Firms to Work For" by Consulting Magazine 7 times, most recently in 2024. Infinitive has also been named a Washington Post "Top Workplace", a Washington Business Journal "Best Place to Work", and a Virginia Business "Best Place to Work."

Job Summary:
We are seeking a highly skilled Data Architect with extensive experience in Databricks on AWS or Azure to join our dynamic team. The ideal candidate will be responsible for designing, building, and maintaining scalable data architectures, ensuring data integrity, and optimizing data flow and collection. This role requires a deep understanding of cloud-based data solutions and strong expertise in leveraging Databricks to drive data initiatives.

Key Responsibilities:
- Design, develop, and implement data architectures and solutions on Databricks in both AWS and Azure.
- Collaborate with stakeholders to understand business requirements and translate them into technical specifications.
- Develop and maintain ETL processes to ensure efficient data flow and integration across systems.
- Ensure data quality, integrity, and security in all data-related activities.
- Optimize data storage and retrieval strategies to improve performance and cost-efficiency.
- Lead the implementation of data governance policies and procedures.
- Provide technical guidance and mentorship to data engineering teams.
- Stay current with industry trends and best practices in data architecture, Databricks, cloud data warehouses, AWS, and Azure.

Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Architect, with a strong focus on Databricks.
- Proficiency in Databricks, including the ability to build and optimize Spark applications.
- Extensive experience with AWS services such as S3, Redshift, EMR, Lambda, and Glue.
- Strong knowledge of Unity Catalog and its application in managing and securing data assets.
- Strong knowledge of data modeling, ETL processes, and data warehousing concepts.
- Proficiency in SQL and experience with big data technologies (e.g., Hadoop, Spark).
- Familiarity with data governance and data security practices.
- Databricks Certified Associate certification.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.

Preferred Qualifications:
- AWS Certified Solutions Architect or similar certification.
- Databricks Certified Professional.
- Experience with other cloud platforms (e.g., Azure, GCP).
- Knowledge of machine learning and data science concepts.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Prior experience in a leadership or mentorship role.

Powered by JazzHR 5vQYkVGXC4
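The ETL responsibilities named in this posting boil down to a repeatable extract-transform-load cycle with quality rules applied mid-pipeline. A minimal, stdlib-only sketch of that shape (sqlite3 stands in for a cloud warehouse; the CSV data, table, and column names are all hypothetical, not from any Infinitive project):

```python
import csv
import io
import sqlite3

# Hypothetical input: a CSV extract from an upstream system.
RAW = """order_id,amount,region
1,120.50,us-east
2,80.00,us-west
3,,us-east
"""

def extract(text):
    """Parse the raw CSV extract into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows with missing amounts and normalize types."""
    clean = []
    for row in rows:
        if not row["amount"]:
            continue  # simple data-quality rule: reject incomplete records
        clean.append((int(row["order_id"]), float(row["amount"]), row["region"]))
    return clean

def load(rows, conn):
    """Load the cleaned rows into a warehouse-style table."""
    conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, region TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # (2, 200.5): the incomplete third record was filtered out
```

At Databricks scale the same three stages would typically be Spark DataFrame reads, transformations, and Delta table writes, but the pipeline shape is unchanged.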
04/26/2026
Full time
Job Description

About Infinitive:
Infinitive is a data and AI consultancy that enables its clients to modernize and operationalize their data to create lasting and substantial value. We bring deep industry and technology expertise to drive and sustain adoption of new capabilities. We match our people and personalities to our clients' culture while bringing the right mix of talent and skills to enable measurable value. Infinitive has been named Best Small Firms to Work For by Consulting Magazine 8 times, most recently in 2025. Infinitive has also been named a Washington Post Top Workplace, a Washington Business Journal Best Place to Work, and a Virginia Business Best Place to Work.

Position Summary:
We are seeking a highly skilled Data Architect to design, implement, and optimize our enterprise data landscape. You will serve as the blueprint designer for our data systems, ensuring that our infrastructure is scalable, secure, and aligned with business objectives. The ideal candidate bridges the gap between high-level business requirements and technical execution, possessing deep expertise in both cloud engineering and sophisticated data modeling.

Key Responsibilities:
- Architectural Design: Lead the design and implementation of end-to-end data architecture, from ingestion and storage to transformation and consumption layers.
- Data Modeling: Develop and maintain conceptual, logical, and physical data models (star schema, snowflake schema, Data Vault) to support diverse analytics and reporting needs.
- Cloud Strategy: Design scalable, secure, and high-performance data solutions within the AWS ecosystem.
- Platform Integration: Oversee the architectural synergy between Databricks (for processing and lakehouse capabilities) and Snowflake (for cloud data warehousing).
- Engineering Oversight: Collaborate with Data Engineers to build robust ETL/ELT pipelines, ensuring best practices in CI/CD, data quality, and governance.
- Strategy & Governance: Establish data standards, tool selection criteria, and best practices for metadata management and data security.

Required Qualifications:
- Cloud Expertise: Proven experience architecting data solutions on AWS (S3, Glue, Athena, Lambda, Redshift, etc.).
- Data Modeling: Extensive experience in multi-dimensional modeling and designing complex schemas for large-scale data environments.
- Platform Proficiency: Hands-on experience and architectural knowledge of Databricks (Spark, Delta Lake) and Snowflake.
- Data Engineering: Strong background in SQL, Python, or Scala, with the ability to vet and guide complex data pipeline development.
- Communication: Ability to translate complex technical concepts into actionable insights for executive stakeholders and non-technical teams.

Preferred Qualifications:
- Relevant certifications (e.g., AWS Certified Data Engineer, Snowflake SnowPro Core, or Databricks Certified Data Engineer Professional).
- Experience with Infrastructure as Code (Terraform, CloudFormation).
- Familiarity with data orchestration tools (Airflow, Dagster) and dbt (data build tool).
- Knowledge of data privacy regulations (GDPR, CCPA) and security frameworks.

Why Join Us?
At Infinitive, you will work on high-impact projects that transform how our clients leverage their most valuable asset: data. We offer a collaborative environment where innovation is encouraged and professional growth is a priority.

Powered by JazzHR ztYGfGZJ2F
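The dimensional-modeling skills this role calls for center on the star schema: a fact table of measures joined to dimension tables of descriptive attributes. A small illustrative sketch using stdlib sqlite3 (the tables, keys, and sample rows are hypothetical, standing in for a Snowflake or Databricks warehouse schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension tables hold descriptive attributes keyed by a surrogate key...
cur.execute("CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT)")
cur.execute("CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, iso_date TEXT, quarter TEXT)")
# ...and the central fact table holds numeric measures keyed to the dimensions.
cur.execute("""CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    date_key    INTEGER REFERENCES dim_date(date_key),
    units INTEGER, revenue REAL)""")

cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(20250101, "2025-01-01", "Q1"), (20250401, "2025-04-01", "Q2")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(1, 20250101, 10, 100.0), (2, 20250101, 5, 250.0), (1, 20250401, 3, 30.0)])

# A typical analytic query: revenue by quarter, resolved through the date dimension.
rows = cur.execute("""
    SELECT d.quarter, SUM(f.revenue)
    FROM fact_sales f JOIN dim_date d ON f.date_key = d.date_key
    GROUP BY d.quarter ORDER BY d.quarter""").fetchall()
print(rows)  # [('Q1', 350.0), ('Q2', 30.0)]
```

A snowflake schema would further normalize the dimensions (e.g., category into its own table), and Data Vault would split the same content into hubs, links, and satellites; the fact/dimension join pattern above is the common starting point.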
04/26/2026
Full time
Job Description

Candidates must be local to the Washington D.C. metro area.

About Infinitive:
Infinitive is a data and AI consultancy that enables its clients to modernize, monetize, and operationalize their data to create lasting and substantial value. We possess deep industry and technology expertise to drive and sustain adoption of new capabilities. We match our people and personalities to our clients' culture while bringing the right mix of talent and skills to enable high return on investment. Infinitive has been named "Best Small Firms to Work For" by Consulting Magazine 6 times, most recently in 2023. Infinitive has also been named a Washington Post "Top Workplace", a Washington Business Journal "Best Place to Work", and a Virginia Business "Best Place to Work."

We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining our clients' data infrastructure. Your expertise in Python, PySpark, ETL processes, and CI/CD (Jenkins or GitHub Actions), along with experience in both streaming and batch workflows, will be essential in ensuring the efficient flow and processing of data to support our clients.

Responsibilities:
- Data Architecture and Design: Collaborate with cross-functional teams to understand data requirements and design robust data architecture solutions. Develop data models and schema designs to optimize data storage and retrieval.
- ETL Development: Implement ETL processes to extract, transform, and load data from various sources. Ensure data quality, integrity, and consistency throughout the ETL pipeline.
- Python and PySpark Development: Use your expertise in Python and PySpark to develop efficient data processing and analysis scripts. Optimize code for performance and scalability, keeping up to date with the latest industry best practices.
- Data Integration: Integrate data from different systems and sources to provide a unified view for analytical purposes. Collaborate with data scientists and analysts to implement solutions that meet their data integration needs.
- Streaming and Batch Workflows: Design and implement streaming workflows using PySpark Structured Streaming or other relevant technologies. Develop batch processing workflows for large-scale data processing and analysis.
- CI/CD Implementation: Implement and maintain continuous integration and continuous deployment (CI/CD) pipelines using Jenkins or GitHub Actions. Automate testing, code deployment, and monitoring processes to ensure the reliability of data pipelines.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer or in a similar role.
- Strong programming skills in Python and expertise in PySpark for both batch and streaming data processing.
- Hands-on experience with ETL tools and processes.
- Familiarity with CI/CD tools such as Jenkins or GitHub Actions.
- Solid understanding of data modeling, database design, and data warehousing concepts.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.

Preferred Skills:
- Knowledge of cloud platforms such as AWS, Azure, or Google Cloud.
- Experience with version control systems (e.g., Git).
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Understanding of data security and privacy best practices.

Applicants for employment in the U.S. must possess work authorization which does not require sponsorship by the employer for a visa. Infinitive is an Equal Opportunity Employer.

Powered by JazzHR y32jmDQPg7
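The streaming workflows this role describes typically reduce to windowed aggregation over an unbounded event source. A stdlib-only Python sketch of the tumbling-window idea (the event data and keys are hypothetical; Spark Structured Streaming applies the same concept at cluster scale, with watermarks and fault tolerance on top):

```python
from collections import Counter

def event_stream():
    """Hypothetical bounded stand-in for an unbounded source of (timestamp, region) events."""
    yield from [(0, "us-east"), (1, "us-west"), (2, "us-east"),
                (5, "us-east"), (6, "us-west"), (7, "us-west")]

def tumbling_windows(events, width):
    """Group events into fixed-width, non-overlapping time windows and count
    occurrences per key: the core of a windowed streaming aggregation."""
    windows = {}
    for ts, key in events:
        start = ts // width * width          # window start time for this event
        windows.setdefault(start, Counter())[key] += 1
    return windows

result = tumbling_windows(event_stream(), width=5)
print(result[0])  # window [0, 5): us-east twice, us-west once
print(result[5])  # window [5, 10): us-west twice, us-east once
```

A batch workflow runs the same aggregation once over a complete dataset; the streaming version applies it incrementally as events arrive, which is why the two share most of their transformation code in well-designed pipelines.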
04/26/2026
Full time
Job Description

Candidates must be local to the Washington D.C. metro area.

About Infinitive:
Infinitive is a data and AI consultancy that enables its clients to modernize, monetize, and operationalize their data to create lasting and substantial value. We possess deep industry and technology expertise to drive and sustain adoption of new capabilities. We match our people and personalities to our clients' culture while bringing the right mix of talent and skills to enable high return on investment. Infinitive has been named "Best Small Firms to Work For" by Consulting Magazine 8 times, most recently in 2025. Infinitive has also been named a Washington Post "Top Workplace", a Washington Business Journal "Best Place to Work", and a Virginia Business "Best Place to Work."

About the Role:
We are seeking a highly skilled Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with expertise in Databricks, DevOps tools (Jenkins, Terraform), and data modeling concepts (3NF, dimensional, Data Vault). As a Senior Data Engineer, you will play a critical role in designing, implementing, and maintaining our client's data infrastructure while ensuring scalability, reliability, and efficiency.

Responsibilities:
- Data Engineering: Design, build, and maintain scalable data pipelines and ETL processes using Databricks and other relevant technologies.
- DevOps Integration: Implement continuous integration and continuous deployment (CI/CD) pipelines using Jenkins and Terraform to automate deployment, monitoring, and scaling of data infrastructure.
- Data Modeling: Develop and implement data models based on business requirements, including 3NF, dimensional, and Data Vault models. Ensure data models adhere to best practices for efficiency, scalability, and maintainability.
- Performance Optimization: Identify and address performance bottlenecks in data pipelines and queries. Optimize data processing and storage to improve overall system performance.
- Data Quality Assurance: Implement data quality checks and monitoring processes to ensure data accuracy, completeness, and consistency.
- Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and deliver high-quality solutions.
- Documentation and Best Practices: Document data pipelines, infrastructure configurations, and data models. Define and enforce best practices for data engineering and DevOps processes.
- Training and Mentorship: Provide guidance and mentorship to junior team members. Conduct training sessions to promote knowledge sharing and skill development within the team.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred.
- Proven experience as a Data Engineer, preferably in a cloud-based environment.
- Strong proficiency in Databricks for data processing and analytics.
- Hands-on experience with DevOps tools such as Jenkins and Terraform for infrastructure automation.
- In-depth knowledge of data modeling concepts, including 3NF, dimensional, and Data Vault.
- Proficiency in SQL and programming languages such as Python or Scala.
- Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.

Applicants for employment in the U.S. must possess work authorization which does not require sponsorship by the employer for a visa. Infinitive is an Equal Opportunity Employer.

Powered by JazzHR tSyB06zCqs
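The data quality assurance duties above usually take the form of rule-based checks for completeness, uniqueness, and validity, reported per rule rather than failing the whole batch. A minimal sketch in plain Python (the records and field names are hypothetical; on Databricks the same rules would commonly run as DataFrame expectations or constraint queries):

```python
# Hypothetical records from a pipeline stage; all field names are illustrative.
records = [
    {"id": 1, "email": "a@example.com", "age": 34},
    {"id": 2, "email": None,            "age": 29},
    {"id": 2, "email": "c@example.com", "age": -5},
]

def quality_report(rows):
    """Apply completeness, uniqueness, and validity checks, returning
    a count of failures per rule for monitoring dashboards or alerts."""
    report = {"missing_email": 0, "duplicate_id": 0, "invalid_age": 0}
    seen = set()
    for r in rows:
        if r["email"] is None:
            report["missing_email"] += 1      # completeness check
        if r["id"] in seen:
            report["duplicate_id"] += 1       # uniqueness / consistency check
        seen.add(r["id"])
        if not 0 <= r["age"] <= 130:
            report["invalid_age"] += 1        # validity check
    return report

print(quality_report(records))
# {'missing_email': 1, 'duplicate_id': 1, 'invalid_age': 1}
```

Counting failures instead of raising on the first bad record lets a monitoring process set thresholds, e.g. alert when more than 1% of a batch fails any rule.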
04/26/2026
Full time
Job Description

Candidates must be local to the Washington D.C. metro area.

About Infinitive:
Infinitive is a data and AI consultancy that enables its clients to modernize, monetize, and operationalize their data to create lasting and substantial value. We possess deep industry and technology expertise to drive and sustain adoption of new capabilities. We match our people and personalities to our clients' culture while bringing the right mix of talent and skills to enable high return on investment. Infinitive has been named "Best Small Firms to Work For" by Consulting Magazine 7 times, most recently in 2024. Infinitive has also been named a Washington Post "Top Workplace", a Washington Business Journal "Best Place to Work", and a Virginia Business "Best Place to Work."

We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining our clients' data infrastructure. Your expertise in Python, PySpark, ETL processes, and CI/CD (Jenkins or GitHub Actions), along with experience in both streaming and batch workflows, will be essential in ensuring the efficient flow and processing of data to support our clients.

Responsibilities:
- Data Architecture and Design: Collaborate with cross-functional teams to understand data requirements and design robust data architecture solutions. Develop data models and schema designs to optimize data storage and retrieval.
- ETL Development: Implement ETL processes to extract, transform, and load data from various sources. Ensure data quality, integrity, and consistency throughout the ETL pipeline.
- Python and PySpark Development: Use your expertise in Python and PySpark to develop efficient data processing and analysis scripts. Optimize code for performance and scalability, keeping up to date with the latest industry best practices.
- Data Integration: Integrate data from different systems and sources to provide a unified view for analytical purposes. Collaborate with data scientists and analysts to implement solutions that meet their data integration needs.
- Streaming and Batch Workflows: Design and implement streaming workflows using PySpark Structured Streaming or other relevant technologies. Develop batch processing workflows for large-scale data processing and analysis.
- CI/CD Implementation: Implement and maintain continuous integration and continuous deployment (CI/CD) pipelines using Jenkins or GitHub Actions. Automate testing, code deployment, and monitoring processes to ensure the reliability of data pipelines.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 7+ years of proven experience as a Data Engineer or in a similar role.
- Strong programming skills in Python and expertise in PySpark for both batch and streaming data processing.
- Hands-on experience with ETL tools and processes.
- Familiarity with CI/CD tools such as Jenkins or GitHub Actions.
- Solid understanding of data modeling, database design, and data warehousing concepts.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.

Preferred Skills:
- Knowledge of cloud platforms such as AWS, Azure, or Google Cloud.
- Experience with version control systems (e.g., Git).
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Understanding of data security and privacy best practices.

Applicants for employment in the U.S. must possess work authorization which does not require sponsorship by the employer for a visa. Infinitive is an Equal Opportunity Employer.

Powered by JazzHR B6XnkdgurR
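The "automate testing" part of the CI/CD responsibility above typically means unit tests over pure transformation logic that a Jenkins or GitHub Actions job runs on every commit. A minimal sketch (the transform function and inputs are hypothetical, not from any actual pipeline):

```python
def normalize_amount(raw):
    """Hypothetical transform: parse a currency string like '$1,200.50' into a float.
    Malformed input raises ValueError, so bad records fail loudly in CI."""
    return float(raw.replace("$", "").replace(",", ""))

def run_tests():
    """The kind of checks a CI job would execute before deploying the pipeline."""
    assert normalize_amount("$80.00") == 80.0
    assert normalize_amount("$1,200.50") == 1200.5
    try:
        normalize_amount("N/A")
    except ValueError:
        pass  # malformed input is rejected, as intended
    else:
        raise AssertionError("expected ValueError for malformed input")
    return "ok"

print(run_tests())  # ok
```

Keeping transformations as small pure functions like this is what makes them testable outside a Spark cluster, which in turn is what makes the CI step fast enough to run on every push.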
04/25/2026
Full time