Role: Bigdata Developer - Scala Spark
Exp: 5+ Yrs
Mode of Work: WFO – All 5 days
Location: Chennai/Bangalore/Pune
Interview: any one level, conducted face-to-face (F2F)
Job Description:
• Total IT/development experience of 3+ years
• Experience in Spark (Scala-Spark) developing Big Data applications on Hadoop, Hive, and/or Kafka, HBase, and MongoDB
• Deep knowledge of the Scala-Spark libraries to develop and debug solutions to complex data engineering challenges
• Experience developing sustainable, data-driven solutions with current and next-generation data technologies to drive our business and technology strategies
• Exposure to deploying applications on cloud platforms
• At least 2 years of development experience designing and developing data pipelines for data ingestion or transformation using Spark-Scala (see the pipeline sketch after this list)
• At least 2 years of development experience with the following Big Data frameworks: file formats (Parquet, Avro, ORC), resource management, distributed processing, and RDBMS
• At least 2 years of experience developing applications in Agile environments with monitoring, build tools, version control, unit testing, Unix shell scripting, TDD, CI/CD, and change management to support DevOps (see the unit-test sketch after this list)
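For illustration only, a minimal sketch of the kind of Spark-Scala ingestion/transformation pipeline the role involves. The paths, the events schema, and the EventPipeline name are hypothetical assumptions, not details of this role:

    // Minimal sketch of a Spark-Scala ingestion/transformation pipeline.
    // Paths and the "events" schema are hypothetical assumptions.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object EventPipeline {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("EventPipeline")
          .getOrCreate()

        // Ingest: read raw events stored as Parquet (hypothetical path).
        val raw = spark.read.parquet("hdfs:///data/raw/events")

        // Transform: drop invalid rows, derive a date column, and
        // aggregate event counts per user per day.
        val daily = raw
          .filter(col("user_id").isNotNull)
          .withColumn("event_date", to_date(col("event_ts")))
          .groupBy("user_id", "event_date")
          .agg(count("*").as("event_count"))

        // Load: write the curated dataset as ORC, partitioned by date.
        daily.write
          .mode("overwrite")
          .partitionBy("event_date")
          .orc("hdfs:///data/curated/daily_event_counts")

        spark.stop()
      }
    }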
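Likewise, a hedged sketch of unit-testing a Spark transformation in the TDD spirit of the last bullet, assuming ScalaTest as the test framework (any JUnit-style runner works similarly); Transforms and dropNullUsers are hypothetical names:

    // Sketch of unit-testing a Spark transformation, assuming ScalaTest;
    // Transforms and dropNullUsers are hypothetical names for illustration.
    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.functions.col
    import org.scalatest.funsuite.AnyFunSuite

    object Transforms {
      // Transformation under test: drop rows with a null user_id.
      def dropNullUsers(df: DataFrame): DataFrame =
        df.filter(col("user_id").isNotNull)
    }

    class TransformsSpec extends AnyFunSuite {
      private val spark = SparkSession.builder()
        .appName("TransformsSpec")
        .master("local[2]") // local mode: the test runs without a cluster
        .getOrCreate()
      import spark.implicits._

      test("dropNullUsers removes rows with null user_id") {
        val input = Seq("u1", null, "u2").toDF("user_id")
        assert(Transforms.dropNullUsers(input).count() == 2)
      }
    }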