Job Summary
We are seeking a skilled Data Architect to lead the design and implementation of high-performance, scalable data platforms. This role involves architecting modern data lakes, warehouses, and streaming systems using Databricks and cloud technologies. If you enjoy solving complex data challenges and enabling data-driven decision-making, this role is for you.
Key Responsibilities
Design and implement scalable data lakes, data warehouses, and real-time streaming architectures
Build, optimize, and manage Databricks solutions using Spark, Delta Lake, Workflows, and Databricks SQL
Develop cloud-native data platforms on Azure (Synapse, Data Factory, Data Lake) and AWS (Redshift, Glue, S3)
Create and automate ETL/ELT pipelines using Apache Spark, PySpark, and cloud tools
Design and maintain data models (dimensional, normalized, star schemas) to support analytics and reporting
Leverage big data technologies such as Hadoop, Kafka, and Scala for large-scale data processing
Ensure data governance, security, and compliance with standards like GDPR and HIPAA
Optimize Spark workloads and storage for performance and cost efficiency
Collaborate with engineering, analytics, and business teams to align data solutions with organizational goals
Required Skills & Qualifications
8+ years of experience in Data Architecture, Data Engineering, or Analytics
Strong hands-on experience with Databricks (Delta Lake, Spark, MLflow, Pipelines)
Expertise in Azure (Synapse, Data Factory, Data Lake) and AWS (Redshift, S3, Glue)
Proficiency in SQL and either Python or Scala
Experience with NoSQL databases (e.g., MongoDB) and streaming platforms (e.g., Kafka)
Solid understanding of data governance, security, and compliance best practices
Excellent problem-solving, communication, and cross-functional collaboration skills
We look forward to receiving applications from qualified candidates.