Big Data Engineer (Spark and Scala)

Key Skills

Scala, Spark, PySpark

Job Description

Role: Big Data Developer – Scala Spark

Experience: 5+ years

Mode of Work: Work from Office (WFO) – all 5 days

Location: Chennai/Bangalore/Pune

Interview: at least one round face-to-face (F2F)

Job Description:

• Total IT / development experience of 3+ years

• Experience in Spark (Scala) developing Big Data applications on Hadoop, Hive, and/or Kafka, HBase, MongoDB

• Deep knowledge of Scala and Spark libraries for developing and debugging solutions to complex data engineering challenges

• Experience in developing sustainable, data-driven solutions with current and next-generation data technologies to drive our business and technology strategies

• Exposure to deploying applications on cloud platforms

• At least 2 years of development experience in designing and developing data pipelines for data ingestion or transformation using Spark and Scala

• At least 2 years of development experience in the following Big Data areas: file formats (Parquet, Avro, ORC), resource management, distributed processing, and RDBMS

• At least 2 years of experience developing applications in an Agile environment with monitoring, build tools, version control, unit testing, Unix shell scripting, TDD, CI/CD, and change management to support DevOps
  • Experience: 5 Years
  • No. of Openings: 3
  • Education: Any Bachelor's Degree
  • Role: Big Data Engineer
  • Industry Type: IT-Hardware & Networking / IT-Software / Software Services
  • Gender: Male / Female
  • Job Country: India
  • Type of Job: Full Time
  • Work Location Type: Work from Office
