Key Responsibilities:
- Design and manage data pipelines to transform and integrate structured and unstructured data.
- Ensure high data quality and reliable pipeline performance.
- Support analytics, reporting, and business intelligence needs by preparing reliable data sets and models for stakeholders.
- Collaborate with Analysts, Digital Project Managers, Developers, and business teams to ensure data accessibility and usefulness.
- Enforce standards for data governance, security, and cost-effective operations.
Ideal candidates will thrive in a collaborative, mission-focused environment and excel in ETL/ELT engineering, with experience using modern data engineering technologies to build scalable data solutions that improve organizational outcomes.
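To give a flavor of that ETL/ELT work, the sketch below shows a minimal extract-transform-load step in Python. It is illustrative only: the file name, table name, and schema are hypothetical, and sqlite3 stands in for a real warehouse connection.

```python
# Minimal ETL sketch: extract raw CSV records, normalize them, and load
# the result into a warehouse staging table. The file path, table name,
# and columns below are hypothetical placeholders.
import csv
import sqlite3  # stand-in for a real warehouse connection

def extract(path: str) -> list[dict]:
    """Read raw rows from a CSV export."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    """Normalize types and drop rows missing a primary key."""
    cleaned = []
    for row in rows:
        if not row.get("order_id"):
            continue  # skip unusable rows rather than loading bad data
        cleaned.append((row["order_id"].strip(), float(row.get("amount") or 0.0)))
    return cleaned

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Bulk-insert into a staging table; idempotent via INSERT OR REPLACE."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS stg_orders (order_id TEXT PRIMARY KEY, amount REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO stg_orders VALUES (?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect("warehouse.db")
    load(transform(extract("orders.csv")), conn)
```

The same extract/transform/load separation carries over to production frameworks; only the connectors and orchestration change.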
Required Qualifications:
- Strong proficiency in Structured Query Language (SQL) and at least one programming language such as Python or Scala.
- Hands-on experience developing ETL or ELT pipelines.
- Experience with cloud-native data services (e.g., AWS Glue, Amazon Redshift, Azure Data Factory, Azure Synapse, Databricks).
- Good understanding of data modeling and data warehousing concepts.
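To make the modeling expectation concrete, here is a minimal sketch of a star schema, the pattern most commonly meant by "data warehousing concepts." Table and column names are hypothetical, and sqlite3 again stands in for a warehouse.

```python
# Sketch of a tiny star schema: one fact table referencing one dimension.
# All names are hypothetical.
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS dim_customer (
    customer_key INTEGER PRIMARY KEY,   -- surrogate key
    customer_id  TEXT NOT NULL,         -- natural key from the source system
    segment      TEXT
);
CREATE TABLE IF NOT EXISTS fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    customer_key INTEGER NOT NULL REFERENCES dim_customer(customer_key),
    sale_date    TEXT NOT NULL,         -- ISO-8601; a date dimension is also common
    amount       REAL NOT NULL
);
"""

conn = sqlite3.connect("warehouse.db")
conn.executescript(DDL)  # facts reference dimensions via surrogate keys
conn.close()
```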
Desired Qualifications:
- Demonstrated ability to design, build, and optimize scalable ETL or ELT pipelines handling both structured and unstructured data.
- Experience ingesting and integrating data from internal and external sources into data lakes or data warehouses.
- A track record of ensuring that processed data is accurate, complete, and secure.
The expected outcome is a set of well-documented, automated pipelines that support downstream analytics without bottlenecks or data errors.
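As one hedged sketch of what "accurate and complete" can look like in practice, the snippet below runs an automated quality gate after a load. The table name and threshold are hypothetical, and identifiers are interpolated directly into the SQL only because this is a sketch.

```python
# Sketch of an automated data-quality gate run after a pipeline load.
# The table name and null-rate threshold are hypothetical.
import sqlite3

def check_completeness(
    conn: sqlite3.Connection, table: str, column: str, max_null_rate: float = 0.01
) -> None:
    """Fail loudly if too many rows are missing a required column."""
    total = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    nulls = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL"
    ).fetchone()[0]
    rate = nulls / total if total else 0.0
    if rate > max_null_rate:
        raise ValueError(
            f"{table}.{column}: null rate {rate:.2%} exceeds {max_null_rate:.2%}"
        )

conn = sqlite3.connect("warehouse.db")
check_completeness(conn, "stg_orders", "amount")  # blocks bad data from promotion
```

Failing the gate before data reaches downstream consumers is what keeps bottlenecks and data errors out of analytics.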