Hadoop Job Vacancies in Delhi

Opening For Data Engineer

Cynosure Corporate Solutions

  • 3 - 9 yrs
  • Delhi
Apache Python Hadoop SCALA
Job Description: We are looking for Data Engineers to join our team. You will use various methods to transform raw data into useful data systems. For example, you'll create algorithms and conduct statistical analysis. Overall, you'll strive for efficiency by aligning data systems with business goals. To succeed in this position, you should have strong analytical skills and the ability to combine data from different sources. Data engineer skills also include familiarity with several programming languages and knowledge of machine learning methods.

Job Requirements:
  • Participate in the customer's system design meetings and collect the functional/technical requirements.
  • Build data pipelines for consumption by the data science team.
  • Skilled in ETL processes and tools.
  • Clear understanding of, and experience with, Python and PySpark or Spark and Scala, along with Hive, Airflow, Impala, Hadoop, and RDBMS architecture.
  • Experience in writing Python programs and SQL queries.
  • Experience in SQL query tuning.
  • Experience in shell scripting (Unix/Linux).
  • Build and maintain data pipelines in Spark/PySpark with SQL and Python or Scala.
  • Knowledge of cloud technologies (Azure/AWS/GCP, etc.) is a plus.
  • Good to have: knowledge of Kubernetes, CI/CD concepts, and Apache Kafka.
  • Suggest and implement best practices in data integration.
  • Guide the QA team in defining system integration tests as needed.
  • Split the planned deliverables into tasks and assign them to the team.
  • Maintain/deploy the ETL code and follow the Agile methodology.
  • Work on optimization wherever applicable.
  • Good oral, written, and presentation skills.

Preferred Qualifications:
  • Degree in Computer Science, IT, or a similar field; a Master's is a plus.
  • Hands-on experience with Python and PySpark, or with Spark and Scala.
  • Great numerical and analytical skills.
  • Working knowledge of cloud platforms such as MS Azure, AWS, etc.
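The ETL pipeline work this role describes — extract raw records, transform them, load them into an RDBMS — can be sketched in plain Python. This is only an illustration, not the company's actual stack: the table and column names ("events", "amount") are hypothetical, and a production pipeline would more likely use PySpark and Airflow as the listing notes; stdlib sqlite3 stands in for the target database here.

```python
import csv
import io
import sqlite3

# Minimal ETL sketch: extract CSV rows, transform (type-cast and filter
# out malformed records), load into an RDBMS table.
RAW_CSV = """id,amount
1,10.5
2,bad
3,7.25
"""

def extract(text):
    """Read raw CSV text into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cast fields to typed values, discarding rows that fail to parse."""
    out = []
    for r in rows:
        try:
            out.append((int(r["id"]), float(r["amount"])))
        except ValueError:
            continue  # drop malformed records such as amount="bad"
    return out

def load(rows, conn):
    """Insert cleaned rows and return (row_count, total_amount)."""
    conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
    return conn.execute("SELECT COUNT(*), SUM(amount) FROM events").fetchone()

conn = sqlite3.connect(":memory:")
count, total = load(transform(extract(RAW_CSV)), conn)
print(count, total)  # 2 17.75 — the "bad" row is filtered out
```

The same extract/transform/load split carries over to Spark: `extract` becomes a DataFrame read, `transform` a chain of column expressions, and `load` a write to the warehouse.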

Salesforce Developer

NUBYS TECHNOLOGY

Triggers Javascript Hadoop Salesforce CRM
Job Description: Excellent opportunity for freshers and .NET/Java developers to learn and build their career in Salesforce. Selected candidates will be trained in Salesforce. Once the 3-month training is over, candidates will assist Salesforce developers on ongoing projects. The first salary revision comes as soon as candidates are allocated to a project and successfully start delivering project tasks, with further revisions on completion of 1 year and 1.5 years with the organization. Only candidates ready to sign a service agreement of 2 years will be considered.

Desired Candidate Profile:
  • 0 to 2 years' experience in Java/.NET/any other programming language.
  • Should have completed at least one project during college or training.
  • Should be able to give a demo of the project they have worked on.
  • Hands-on experience in designing and developing applications using Java/.NET platforms.
  • Graduate or Post-Graduate.
  • Strong algorithmic skills.
  • Ability to work independently with little supervision.
  • Excellent multi-tasking skills.
  • Self-motivated with strong team spirit.

Urgent Required For Data Engineer Executive

Perfect Solution Group (Spectrum Placement Services)

Data Engineer Executive Computer Operator SAS-Statistical Analysis System ETL Hadoop DATA ENGINEER Azure JSON XML Scala Spark Github DevOps Data Migration Walk in
Profile - Data Engineer Executive
Qualification - Graduate with good communication skills
Experience - Minimum 1 year required
Skills - Candidate should have knowledge of AWS, Spark, PySpark, Python, Hark
Salary - 24 LPA to 42 LPA
Gender - Male & female can apply
Location - Pan India

Duties & Responsibilities:
  • Analyze and organize raw data.
  • Build data systems and pipelines.
  • Evaluate business needs and objectives.
  • Interpret trends and patterns.
  • Conduct complex data analysis and report on results.
  • Prepare data for prescriptive and predictive modeling.
  • Build algorithms and prototypes.

Only serious candidates should apply.

Hadoop Architect / SME

Billiton Services

Hadoop Spark On-prem MapReduce ETL GCP
Need an Architect/Senior-level candidate meeting the following Hadoop SME requirements:
  1. Senior/Lead level in Hadoop.
  2. Experience with on-prem Hadoop and/or non-GCP Hadoop distributions.
  3. Must have experience with batch and streaming ETL workflows on non-GCP/on-prem Hadoop.
  4. Experience with Spark and legacy MapReduce apps (scripts and workflows).
  5. Experience with core Hadoop migrations to GCP/Dataproc in addition to data lift-and-shift.

Hadoop Developer

Telamon HR Solutions

  • 5 - 10 yrs
  • 30.0 Lac/Yr
  • Gurgaon
Hadoop Spark Hive SQL Pig Java Python Hadoop Developer Web Developer Walk in
Experience working with data warehouses, including information retrieval, data mining, and machine learning, as well as experience in building optimized data-intensive applications with modern web technologies (such as NoSQL, MongoDB, SparkML, TensorFlow).
  • Education: Professional qualification in Data Engineering
  • Major: Computer Science/Applications
  • Knowledge: Expert knowledge in the areas of Data Engineering, ETL, and Hadoop
  • Skills: Hadoop, Spark, Pig, Hive, Python, Java, SQL
  • Certificate: Diploma/certificates in the field of Data Engineering
  • Experience: 5-10 years of relevant experience

Hadoop Data Engineer

Telamon HR Solutions

  • 5 - 10 yrs
  • 30.0 Lac/Yr
  • Gurgaon
Hadoop SQL JAVA PIG SPARK Python Web Developer Walk in
We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
  • Big data tools: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift.
  • Stream-processing systems: Storm, Spark Streaming, etc.
  • Object-oriented/object-function scripting languages: Python, Java, C++, Scala, etc.
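The workflow managers named in this listing (Azkaban, Luigi, Airflow) all share one core idea: a pipeline is a directed acyclic graph of tasks, executed in an order that respects dependencies. A minimal sketch of that idea, using only the Python standard library's `graphlib` (the task names are hypothetical, not any tool's API):

```python
from graphlib import TopologicalSorter

# A pipeline expressed as a DAG: each task maps to the set of tasks
# it depends on. Workflow managers schedule tasks in an order that
# runs every dependency before its dependents.
deps = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # ['extract', 'transform', 'load', 'report']
```

Real schedulers add retries, backfills, and parallel execution of independent branches on top of this ordering, but the DAG model underneath is the same.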
