Role: Big Data Developer (Scala Spark)
Experience: 5+ years
Mode of work: Work from office (all 5 days)
Location: Chennai / Bangalore / Pune
Interview: any one round face-to-face
Job description:
• Total IT/development experience of 3+ years
• Experience in Spark (Scala) developing big data applications on Hadoop, Hive, and/or Kafka, HBase, MongoDB
• Deep knowledge of Scala and Spark libraries to develop and debug solutions to complex data engineering challenges
• Experience developing sustainable, data-driven solutions with current and next-generation data technologies to drive our business and technology strategies
• Exposure to deploying on cloud platforms
• At least 2 years of development experience designing and building data pipelines for data ingestion or transformation using Spark (Scala)
• At least 2 years of development experience with the following big data frameworks: file formats (Parquet, Avro, ORC), resource management, distributed processing, and RDBMS
• At least 2 years of experience developing applications in Agile, with monitoring, build tools, version control, unit testing, Unix shell scripting, TDD, CI/CD, and change management to support DevOps