Job Summary

We are seeking a skilled Data Architect to lead the design and implementation of high-performance, scalable data platforms. This role involves architecting modern data lakes, warehouses, and streaming systems using Databricks and cloud technologies. If you enjoy solving complex data challenges and driving data-driven decision-making, this role is for you.

Key Responsibilities

- Design and implement scalable data lakes, data warehouses, and real-time streaming architectures
- Build, optimize, and manage Databricks solutions using Spark, Delta Lake, Workflows, and SQL Analytics
- Develop cloud-native data platforms on Azure (Synapse, Data Factory, Data Lake) and AWS (Redshift, Glue, S3)
- Create and automate ETL/ELT pipelines using Apache Spark, PySpark, and cloud tools
- Design and maintain data models (dimensional, normalized, star schemas) to support analytics and reporting
- Leverage big data technologies such as Hadoop, Kafka, and Scala for large-scale data processing
- Ensure data governance, security, and compliance with standards such as GDPR and HIPAA
- Optimize Spark workloads and storage for performance and cost efficiency
- Collaborate with engineering, analytics, and business teams to align data solutions with organizational goals

Required Skills & Qualifications

- 8+ years of experience in data architecture, data engineering, or analytics
- Strong hands-on experience with Databricks (Delta Lake, Spark, MLflow, Pipelines)
- Expertise in Azure (Synapse, Data Factory, Data Lake) and AWS (Redshift, S3, Glue)
- Proficiency in SQL and in Python or Scala
- Experience with NoSQL databases (e.g., MongoDB) and streaming platforms (e.g., Kafka)
- Solid understanding of data governance, security, and compliance best practices
- Excellent problem-solving, communication, and cross-functional collaboration skills

We look forward to receiving suitable profiles at your earliest convenience.