Role Description:
We are seeking an experienced data engineering specialist to join our dynamic team. The ideal candidate should have a strong background in Python and Spark, with a minimum of 3 years of relevant experience. The primary focus of this role is to design, develop, and maintain robust data pipelines and data integration solutions. The candidate should have a solid understanding of big data fundamentals, including Hive, Hadoop, and related technologies. Experience in Agile delivery and working with cloud-based platforms is highly desirable, and knowledge of the insurance domain and data modeling is a definite plus.
Responsibilities:
- Design, develop, and maintain scalable data pipelines and data integration solutions.
- Collaborate with cross-functional teams to gather and analyze data requirements.
- Extract, transform, and load (ETL) data from various sources into the data warehouse or data lake.
- Develop and optimize data processing jobs using Python and Spark.
- Implement and maintain data governance and data quality standards.
- Perform data profiling and analysis to identify data quality issues and provide solutions.
- Work closely with stakeholders to understand business needs and translate them into technical requirements.
- Collaborate with data scientists and analysts to support their data needs and ensure data availability.
- Monitor and optimize the performance of data processing workflows.
Requirements:
- Bachelor’s degree in computer science, engineering, or a related field.
- Minimum of 3 years of experience in data engineering, with a focus on Python, SQL, and Spark.
- Experience in Agile software development methodologies and delivering data engineering projects in an Agile environment.
- Familiarity with cloud-based platforms (e.g., AWS, Azure, Google Cloud) and hands-on experience deploying data solutions in the cloud is a plus.
- Understanding of data modeling concepts and experience working with relational databases and data warehouses.