A fast-moving healthcare technology team is looking for a hands-on Data Engineer who knows how to move, transform, and scale data. This role focuses on building reliable ETL pipelines in AWS, using Python and SQL to turn raw data into clean, trusted datasets that power analytics, reporting, and product decisions.
This is a high-impact engineering role where you'll work closely with analytics, product, and engineering teams to build scalable data infrastructure.
What You'll Be Doing
- Design, build, and maintain scalable ETL/ELT data pipelines
- Ingest data from multiple sources including APIs, databases, files, and streaming platforms
- Optimize pipelines for performance, reliability, and cost efficiency
- Work directly with stakeholders to translate data requirements into production pipelines
- Implement data quality checks, monitoring, and logging
- Support analytics, reporting, and downstream data products
Tech Stack
- Cloud: AWS (S3, Glue, Lambda, Redshift, Athena, EMR)
- Languages: Python
- Data Tools: AWS Glue, Airflow, dbt, and CDK or custom ETL frameworks
- Databases: SQL (Redshift, Snowflake, Postgres)
What We're Looking For
- Strong experience as a Data Engineer building production pipelines
- Solid Python for data processing, orchestration, and automation
- Strong SQL and data modeling fundamentals
- Experience working in a cloud-first AWS data environment
- Proven ability to build and maintain reliable ETL pipelines
Details
- Location: Remote
- Rate: Flexible for the right candidate
- Employment Type: Contract