Hadoop Job Vacancies in Delhi NCR

Opening For Data Engineer

Cynosure Corporate Solutions

  • 3 - 9 yrs
  • Delhi
Apache Python Hadoop SCALA
Job Description: We are looking for Data Engineers to join our team. You will use various methods to transform raw data into useful data systems; for example, you'll create algorithms and conduct statistical analysis. Overall, you'll strive for efficiency by aligning data systems with business goals. To succeed in this position, you should have strong analytical skills and the ability to combine data from different sources. Data engineer skills also include familiarity with several programming languages and knowledge of machine learning methods.
Job Requirements:
  • Participate in the customer's system design meetings and collect the functional/technical requirements.
  • Build data pipelines for consumption by the data science team.
  • Skilled in the ETL process and tools.
  • Clear understanding of, and experience with, Python and PySpark or Spark and Scala, along with Hive, Airflow, Impala, Hadoop, and RDBMS architecture.
  • Experience writing Python programs and SQL queries; experience in SQL query tuning.
  • Experienced in shell scripting (Unix/Linux).
  • Build and maintain data pipelines in Spark/PySpark with SQL and Python or Scala.
  • Knowledge of cloud technologies (Azure/AWS/GCP, etc.) is a plus; good to have knowledge of Kubernetes, CI/CD concepts, and Apache Kafka.
  • Suggest and implement best practices in data integration.
  • Guide the QA team in defining system integration tests as needed.
  • Split the planned deliverables into tasks and assign them to the team.
  • Maintain and deploy the ETL code, following the Agile methodology; work on optimization wherever applicable.
  • Good oral, written, and presentation skills.
Preferred Qualifications:
  • Degree in Computer Science, IT, or a similar field; a Master's is a plus.
  • Hands-on experience with Python and PySpark, or hands-on experience with Spark and Scala.
  • Great numerical and analytical skills.
  • Working knowledge of cloud platforms such as MS Azure, AWS, etc.
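The core pipeline task described above — filtering raw records, joining data from different sources, and aggregating — can be sketched in plain Python. This is an illustrative toy, not code from the posting; all source and field names are made up:

```python
# Minimal ETL sketch: combine records from two hypothetical sources,
# drop bad rows, join in a lookup, and aggregate per customer.
orders = [
    {"order_id": 1, "customer_id": "c1", "amount": 120.0},
    {"order_id": 2, "customer_id": "c2", "amount": 80.0},
    {"order_id": 3, "customer_id": "c1", "amount": -5.0},  # invalid record
]
customers = {"c1": "Asha", "c2": "Ravi"}

def extract_transform(orders, customers):
    """Filter invalid rows, join in the customer name, sum per customer."""
    totals = {}
    for row in orders:
        if row["amount"] <= 0:  # basic data-quality filter
            continue
        name = customers.get(row["customer_id"], "unknown")
        totals[name] = totals.get(name, 0.0) + row["amount"]
    return totals

print(extract_transform(orders, customers))  # {'Asha': 120.0, 'Ravi': 80.0}
```

In a real pipeline the same filter-join-aggregate shape would be expressed as PySpark DataFrame operations running over distributed storage.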

Big Data Architect

NMS Consultant

  • 8 - 14 yrs
  • Gurgaon
Architect Hadoop CI CD Design Development Big Data
Required Skills (must have):
  • Strong knowledge of and hands-on experience with big data technologies (especially Hadoop, Hortonworks) and their features.
  • Strong development experience with Spark Scala, Java, JavaScript, NiFi, Kafka, Hive, HBase.
  • Strong knowledge of API development.
  • Strong knowledge of Java frameworks (Spring MVC, Spring Security).
  • Hands-on knowledge of implementing multi-staged CI/CD with tools like AWS DevOps, Jenkins, Bitbucket.
  • Experience with CI/CD integration within the Java/JavaScript ecosystem, with build tools like Maven, Grunt, and Gulp, and other DevOps tooling: Jenkins, GitLab, SonarQube, Gerrit, SBT, Nexus, Docker.
  • Experience with Agile methods (Scrum, Kanban).
  • Active contributions to forums and the developer community.
Required Skills (should have):
  • Knowledge of Elasticsearch/Kibana (ELK).
  • Knowledge of Linux, Unix, and Windows environments.
  • Strong knowledge of various application monitoring tools.
  • Strong knowledge of web services (WSDL/SOAP, RESTful).
  • Exposure to data visualization tools such as Power BI, Tableau, Pentaho, etc.
  • Experience with MySQL and NoSQL databases (MongoDB, Redis, DynamoDB).
  • Scripting skills: strong scripting (e.g. Python) and automation skills.
  • Operating systems: Windows and Linux system administration.
  • Monitoring tools: experience with system monitoring tools (e.g. Nagios).
  • Problem solving: ability to analyze and resolve complex infrastructure resource and application deployment issues.

Hadoop Architect / SME

Billiton Services

Hadoop Spark On-prem MapReduce ETL GCP
Need an architect/senior-level candidate meeting the Hadoop SME requirements below:
  1. Senior/lead level in Hadoop.
  2. Experience with on-prem Hadoop and/or non-GCP Hadoop distributions.
  3. Must have experience with batch and streaming ETL workflows on non-GCP/on-prem Hadoop.
  4. Experience with Spark and legacy MapReduce apps (scripts and workflows).
  5. Experience with core Hadoop migrations to GCP/Dataproc in addition to data lift-and-shift.
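The "legacy MapReduce" programming model named in the requirements above can be illustrated with the canonical word-count example in plain Python. This is a sketch of the map/shuffle/reduce pattern only, not Hadoop itself; all names are illustrative:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(lines):
    # Mapper: emit a (word, 1) pair for each word, as a Hadoop mapper would.
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle + reduce: sort and group pairs by key, then sum counts per word.
    counts = {}
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        counts[key] = sum(v for _, v in group)
    return counts

lines = ["big data big pipelines", "data data everywhere"]
print(reduce_phase(map_phase(lines)))
# {'big': 2, 'data': 3, 'everywhere': 1, 'pipelines': 1}
```

On a cluster, the map and reduce phases run on different nodes and the framework handles the sort/shuffle between them; migrating such jobs to Spark typically means re-expressing them as RDD or DataFrame transformations.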

Big Data

Saiva System India Pvt Ltd

  • 5 - 10 yrs
  • 27.5 Lac/Yr
  • Noida
Spark Scala Python Pyspark Azure SQL Hive Hadoop Work From Home
Hello everyone! We are hiring Big Data / Azure professionals for one of our MNC clients.
  • Job Location: PAN India
  • Experience: 5-12 years
  • Employment Type: Permanent
  • Notice Period: Immediate to 60 days
  • Mandatory Skills: SQL, Spark/Scala, and Azure Synapse

Salesforce Developer

NUBYS TECHNOLOGY

Triggers Javascript Hadoop Salesforce CRM
Job description: Excellent opportunity for freshers and .NET/Java developers to learn and build their career in Salesforce. Selected candidates will be trained in Salesforce. Once the 3-month training is over, candidates will assist Salesforce developers on ongoing projects. The first salary revision comes as soon as candidates are allocated to a project and successfully start delivering project tasks, with further revisions on completion of 1 year and 1.5 years with the organization. Only candidates ready to sign a 2-year service agreement will be considered.
Desired Candidate Profile:
  • 0 to 2 years' experience in Java/.NET/any other programming language.
  • Should have done at least one project during college/training, and be able to give a demo of the project they have worked on.
  • Hands-on experience designing and developing applications using Java/.NET platforms.
  • Graduate or postgraduate.
  • Strong algorithmic skills.
  • Ability to work independently with little supervision.
  • Excellent multi-tasking skills; self-motivated with strong team spirit.

Hadoop Developer

Telamon HR Solutions

  • 5 - 10 yrs
  • 30.0 Lac/Yr
  • Gurgaon
Hadoop Spark Hive SQL Pig Java Python Hadoop Developer Web Developer Walk in
Experience working with data warehouses, including information retrieval, data mining, and machine learning, as well as experience building optimized data-intensive applications with modern web technologies (such as NoSQL, MongoDB, SparkML, TensorFlow).
  • Education: Professional qualification in Data Engineering
  • Major: Computer Science/Applications
  • Knowledge: Expert knowledge in the areas of Data Engineering, ETL, Hadoop
  • Skills: Hadoop, Spark, Pig, Hive, Python, Java, SQL
  • Certificate: Diploma/certificates in the field of Data Engineering
  • Experience: 5-10 years of relevant experience
  • 3 - 6 yrs
  • 15.0 Lac/Yr
  • Gurgaon
SAS-Statistical Analysis System ETL Hadoop DATA ENGINEER Azure JSON XML Scala Spark Github DevOps Data Migration Hive Work From Home
Openings for 20 Data Engineer jobs in Gurgaon for American Express, requiring a minimum of 3 years' experience, an educational qualification of B.C.A, B.Tech/B.E, M.C.A, or M.Tech, and good knowledge of Hive, Spark, HBase, etc.

Urgent Required For Data Engineer Executive

Perfect Solution Group (Spectrum Placement Services)

Data Engineer Executive Computer Operator SAS-Statistical Analysis System ETL Hadoop DATA ENGINEER Azure JSON XML Scala Spark Github DevOps Data Migration Walk in
  • Profile: Data Engineer Executive
  • Qualification: Graduate with good communication skills
  • Experience: Minimum 1 year required
  • Candidate should have knowledge of AWS, Spark, PySpark, Python, Hark
  • Salary: 24 LPA to 42 LPA
  • Gender: Male and female candidates can apply
  • Location: PAN India
Duties & Responsibilities:
  • Analyze and organize raw data.
  • Build data systems and pipelines.
  • Evaluate business needs and objectives.
  • Interpret trends and patterns.
  • Conduct complex data analysis and report on results.
  • Prepare data for prescriptive and predictive modeling.
  • Build algorithms and prototypes.
Only serious candidates should apply.

Data Scientist

Actics Technologies

  • 0 - 3 yrs
  • Noida
Python Machine Learning Deep Learning Logistic Regression Big Data Hive Hadoop Power BI Data Scientist Data Analyst Data Architect Data Administrator Data Analyst Data Mining
Roles & Responsibilities:
  • Use machine learning, data mining, and statistical techniques to create new, scalable solutions for business problems.
  • Design, develop, and evaluate highly innovative models for predictive learning.
  • Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation, and model implementation.
Mandatory Experience:
  • Hands-on experience in developing machine learning, deep learning, and/or other related modelling techniques.
  • Hands-on experience in data preparation, data wrangling, model training, model testing, and deployment.
Desired Experience:
  • Experience in developing interactive chatbots.
  • Data visualization skills.
  • Statistical modeling techniques such as pattern mining, decision trees, support vector machines, random forests, clustering, and association rule mining.
  • Strong concepts in algorithms and data structures, and a strong understanding of the theories, principles, and practices of the above techniques.
  • Excellent hands-on knowledge of a modeling tool such as Python.
  • Chatbot experience is good to have.

Salesforce Developer

Nuage BizTech Private Limited

  • 2 - 5 yrs
  • 16.0 Lac/Yr
  • Noida
Triggers Hadoop Salesforce CRM Apex Lightning LWC Salesforce Developer
We are looking to fill 4 Salesforce Developer posts in Noida requiring deep knowledge of Triggers, Hadoop, Salesforce CRM, Apex, Lightning, and LWC. Required educational qualification: B.C.A, B.Sc, B.Tech/B.E, Post Graduate Diploma, M.C.A, or M.Tech.
  • 5 - 8 yrs
  • 13.0 Lac/Yr
  • Gurgaon
Hadoop Big Data Spark Application Developer Application Programmer APP Developer App Programmer APP Developing Application Programming Android Developer Python Hive Lambda Kubernetes
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have Skills: Data Engineering
Key Responsibilities:
  • Implement the data flow.
  • Extend the data platform capability for various data types.
  • Design the data model according to the various types of data sources.
  • Design the data flows from the data sources to the destinations.
  • Design abstracted solutions for data ingestion and processing.
  • Research the appropriate data technology and apply it to the organization.
Technical Experience:
  • Expert programming experience with Python or Scala, with a strong ability to refactor code in a new environment.
  • Experience with big data technologies like Hadoop, Apache Spark, Hive, etc.
  • Strong knowledge of the Spark application lifecycle.
  • Experience working with AWS infrastructure and services like EMR, S3, Redshift, Lambda, etc.
  • Knowledge of data structures, algorithms, software design principles, and test-driven development.
  • Nice to have: experience with containerization technologies (Kubernetes).
Professional Attributes:
  • Problem-solving skills and a hands-on attitude; eager to never stop learning.
  • Strong interpersonal and communication skills; fluent in English.
  • A growth mindset with high energy for change, a passion for sports, and a strong sense of humor.
Python SCALA JAVA AWS - EMR Hadoop Spark Kafka SQL NoSQL Data Architecture Data Structures Storm Flink
Responsibilities:
  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using open-source and AWS big data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications:
  • Experience building and optimizing big data pipelines, architectures, and datasets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Experience interacting with customers and various stakeholders.
  • Strong analytical skills related to working with unstructured datasets.
  • Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • Working knowledge of message queuing, stream processing, and highly scalable big data lakes.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
Candidates should also have experience using the following software/tools:
  • Big data technologies: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Data pipeline and workflow management tools: Airflow, NiFi, etc.
  • Cloud services: AWS (EMR, RDS, Redshift, Glue), Azure (Databricks, Data Factory), GCP (Dataproc, Pub/Sub).
  • Stream-processing systems: Storm, Spark Streaming, Flink, etc.
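The stream-processing requirement above (Storm, Spark Streaming, Flink) centers on windowed aggregation over event streams. A toy tumbling-window counter in plain Python sketches the idea; the event data and function name are illustrative, not from the posting:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Assign (timestamp, key) events to fixed-size time windows and count
    occurrences per key -- a toy version of the tumbling-window aggregation
    that Storm/Flink/Spark Streaming provide over unbounded streams."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        windows[ts // window_size][key] += 1  # integer division picks the window
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(1, "click"), (3, "view"), (6, "click"), (7, "click"), (12, "view")]
print(tumbling_window_counts(events, window_size=5))
# {0: {'click': 1, 'view': 1}, 1: {'click': 2}, 2: {'view': 1}}
```

Real stream processors add what this sketch omits: out-of-order events handled via watermarks, fault-tolerant state, and continuous rather than batch evaluation.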

BI Developer

Freesoul Technology Services LLP

Business Intelligence Hadoop Developer MAVEN Scrum Work From Home Walk in
A Big Data Developer is required with experience working in Hadoop technologies including Spark and Hive. The candidate also needs experience with CI/CD (Jenkins, Nexus, etc.), OpenShift PaaS implementations, and data pipelines, for instance through Control-M or any other scheduling tool. The candidate will perform the role of senior developer on an established project in the bank, supporting existing applications as well as working on new requirements from both a design and an implementation perspective.
Keywords: Coding, System Analysis, Mobile Banking, OpenShift PaaS, Relational Databases, Apache Spark, Control-M, GitLab, Hadoop, Hive, Jenkins, Maven, Nexus, Scala, Shell Scripting, YARN, Oracle, Scrum/Agile, Spring.

Hadoop Data Engineer

Telamon HR Solutions

  • 5 - 10 yrs
  • 30.0 Lac/Yr
  • Gurgaon
Hadoop SQL JAVA PIG SPARK Python Web Developer Walk in
We are looking for a candidate with 5+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
  • Big data tools: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift.
  • Stream-processing systems: Storm, Spark Streaming, etc.
  • Object-oriented/object-function scripting languages: Python, Java, C++, Scala, etc.

Data Scientist

Telamon HR Solutions

  • 4 - 8 yrs
  • 22.5 Lac/Yr
  • Gurgaon
PYTHON Hadoop SQL Data Scientist Data Analyst Data Architect Data Administrator Data Analyst Data Mining Walk in
Experience in the automobile domain, especially analysing sensor data and IoT, is a plus.
  • Education: Engineering graduate in Computers, or Masters in a Mathematics/Statistics discipline
  • Major: Computer Engineering/Applications/Mathematics/Statistics
  • Knowledge: Experience with time-series forecasting, unstructured data, deep learning, IoT sensor data, and the Hadoop platform
  • Skills: Python, Hadoop, Matplotlib, QlikView, SQL
  • Certificate: Diploma/certificates in the field of Data Engineering
  • Experience: 5-10 years of relevant experience

Data Scientist

Telamon HR Solutions

  • 5 - 10 yrs
  • 20.0 Lac/Yr
  • Gurgaon
Python Hadoop MatplotLib QlikView SQL Data Scientist Data Analyst Data Architect Data Administrator Data Analyst Data Mining Walk in
Should have good knowledge of Python, Hadoop, Matplotlib, QlikView, and SQL, with experience in time-series forecasting, unstructured data, deep learning, IoT sensor data, and the Hadoop platform.
  • Perform in-depth analyses of business metrics on behalf of internal customers (Sales, Marketing, Service, Factory, etc.).
  • Proficient with data manipulation and analysis tools like Python, Jupyter Notebook, and the Hadoop platform.
  • Execute end-to-end projects and review results with internal customers.
  • Ensure the quality, stability, and evolution of our reporting tools, such as ad-hoc dashboards.
  • Work with Data Engineers and the Korea headquarters' Data Science team to help define data standards and requirements, helping to ensure our systems run correctly and efficiently.
  • Support global training and proper documentation, providing successful onboarding for internal customers.
  • Experience in machine learning, statistics, time-series modelling, collaborative filtering, and deep learning.
  • Experience solving data-driven problems, especially involving large (structured as well as unstructured) datasets.
  • Experience in the automobile domain, especially analysing sensor data and IoT, is a plus.
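The time-series forecasting experience asked for above usually starts from simple baselines before heavier models. A minimal moving-average baseline in plain Python illustrates the idea; the data and function name are hypothetical:

```python
def moving_average_forecast(series, window):
    """Naive time-series forecast: predict the next value as the mean of
    the last `window` observations -- a common baseline to beat before
    reaching for ARIMA or deep-learning models."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window

# Toy IoT sensor readings (illustrative values).
sensor_readings = [21.0, 22.0, 23.0, 25.0, 24.0]
print(moving_average_forecast(sensor_readings, window=3))  # 24.0
```

Evaluating such a baseline against held-out data gives a floor that any more sophisticated model must outperform to justify its complexity.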

Apply to 13 Hadoop Job Vacancies in Delhi NCR
