Job Summary:
We are seeking a Snowflake Data Engineer to join our Data & Analytics team. This role involves
designing, implementing, and optimizing Snowflake-based data solutions. The ideal candidate will have
proven, hands-on data engineering expertise in Snowflake, cloud data platforms, ETL/ELT processes,
and Medallion data architecture best practices. Day to day, the data engineer focuses on
implementation, performance optimization, and scalability. This is a tactical role requiring independent
data analysis and data discovery to understand our existing source systems and fact and dimension data
models, and to implement an enterprise data warehouse solution in Snowflake. This role takes direction
from the Lead Snowflake Data Engineer and the Director of Data Engineering, while the engineer brings
their own domain expertise and experience to the work.
Essential Functions and Tasks:
Participate in the design, development, and maintenance of a scalable Snowflake data solution serving
our enterprise Data & Analytics team.
Implement data pipelines, ETL/ELT workflows, and data warehouse solutions using Snowflake and
related technologies.
Optimize Snowflake database performance.
Collaborate with cross-functional teams of data analysts, business analysts, data scientists, and software
engineers to define and implement data solutions.
Ensure data quality, integrity, and governance.
Troubleshoot and resolve data-related issues, ensuring high availability and performance of the data
platform.
Education and Experience Requirements:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
4+ years of in-depth data engineering experience, including at least 1 year of dedicated
experience engineering solutions in an enterprise-scale Snowflake environment.
Tactical expertise in ANSI SQL, performance tuning, and data modeling techniques.
Strong experience with cloud platforms (Azure preferred) and their data services.
Experience in ETL/ELT development using tools such as Azure Data Factory, dbt, Matillion, Talend, or
Fivetran.
Hands-on experience with scripting languages like Python for data processing.
Snowflake SnowPro certification; preference for the data engineering certification path.
Experience with CI/CD pipelines, DevOps practices, and Infrastructure as Code (IaC).
Knowledge of streaming data processing frameworks such as Apache Kafka or Spark Streaming.
Familiarity with BI and visualization tools such as Power BI.
Knowledge, Skills, and Abilities:
Familiarity working in an agile Scrum team, including sprint planning, daily stand-ups, backlog
grooming, and retrospectives.
Ability to self-manage medium-complexity deliverables and document user stories and tasks
through Azure DevOps.
Personal accountability for committed sprint user stories and tasks.
Strong analytical and problem-solving skills with the ability to handle complex data challenges.
Ability to read, understand, and apply state/federal laws, regulations, and policies.
Ability to communicate with diverse personalities in a tactful, mature, and professional manner.
Ability to remain flexible and work within a collaborative and fast-paced environment.
Understand and comply with company policies and procedures.
Strong oral, written, and interpersonal communication skills.
Strong time management and organizational skills.
Physical Demands:
40 hours per week
Occasional Standing
Occasional Walking
Sitting for prolonged periods of time
Frequent hand and finger movement
Communicate verbally and in writing
Extensive use of computer keyboard and viewing of computer screen
Specific vision abilities required by this job include close vision