Big Data Engineer (Spark and Scala)

Key Skills

Scala, Spark, PySpark

Job Description

Role: Big Data Developer – Scala/Spark

Experience: 5+ years

Mode of Work: Work from Office – all 5 days

Location: Chennai / Bangalore / Pune

Interview: any one level, face-to-face (F2F)

Job Description:

• Total IT / development experience of 3+ years

• Experience in Spark (Scala-Spark), developing big data applications on Hadoop, Hive and/or Kafka, HBase, MongoDB

• Deep knowledge of Scala-Spark libraries to develop and debug solutions to complex data engineering challenges

• Experience in developing sustainable, data-driven solutions with current and new-generation data technologies to drive our business and technology strategies

• Exposure to deploying on cloud platforms

• At least 2 years of development experience designing and building data pipelines for data ingestion or transformation using Spark-Scala

• At least 2 years of development experience with the following big data frameworks: file formats (Parquet, Avro, ORC), resource management, distributed processing, and RDBMS

• At least 2 years of experience developing applications in Agile with monitoring, build tools, version control, unit testing, Unix shell scripting, TDD, CI/CD, and change management to support DevOps
  • Experience

    5 Years

  • No. of Openings

    3

  • Education

    Any Bachelor's Degree

  • Role

    Big Data Engineer

  • Industry Type

    IT-Hardware & Networking / IT-Software / Software Services

  • Gender

    [ Male / Female ]

  • Job Country

    India

  • Type of Job

    Full Time

  • Work Location Type

    Work from Office
