Job Description:
We are seeking a highly skilled and experienced *Data Engineer* to help shape and scale our supply chain and operations analytics infrastructure. In this role, you will work closely with cross-functional teams—including Operations, Finance, and Analytics—to design, build, and monitor scalable, production-grade data pipelines. Your work will be critical to driving data-informed decisions across the business.
---
What You’ll Do:
- Develop and maintain automated ETL pipelines using Python, Snowflake SQL, and related technologies.
- Ensure robust data quality through unit testing, validation, and continuous monitoring.
- Collaborate with stakeholders to ingest and transform large healthcare datasets with accuracy and efficiency.
- Leverage AWS services such as S3, DynamoDB, Batch, and Step Functions for data integration and deployment.
- Optimize the performance of pipelines that process large-scale datasets (1GB+).
- Translate business requirements into reliable, scalable data solutions.
---
What You Bring:
- 4+ years of hands-on experience as a Data Engineer or in a similar role.
- Proven expertise in Python, SQL, and Snowflake for data engineering tasks.
- Strong experience building and maintaining production-grade ETL pipelines.
- Solid understanding of data validation, transformation, and debugging practices.
- Prior experience with *healthcare or claims datasets* is highly preferred.
- Practical knowledge of AWS technologies: S3, DynamoDB, Batch, Step Functions.
- Experience working with large datasets and complex data environments.
- Excellent verbal and written English communication skills.
---
Work Schedule:
- Full-time remote position (40 hours/week).
- Working hours must align with the U.S. Central Time Zone (CT).