Hadoop Jobs (75)


Looking For Big Data Engineer

Talent Zone Consultant

  • 6 - 12 yrs
  • Bangalore
Python, SQL, Spark, Hadoop, ETL Tools, Data Warehousing, Airflow, Programming, Data Visualization, Data Lakes, Data Modeling
Key Responsibilities:
  • Build and manage data pipelines and ETL processes
  • Work with large datasets using tools like Spark, Hadoop, or SQL
  • Ensure data quality and performance optimization
Requirements:
  • Experience in Python/SQL
  • Hands-on with ETL tools and big data technologies
  • Understanding of data warehousing concepts
Brief Summary: Develops scalable data systems to support analytics and business insights.
  • 0 - 1 yrs
  • 8.0 Lac/Yr
  • Female
  • Mall Road Amritsar
Data Integration, Data Warehousing, SQL, Informatica, ETL, Hadoop, Big Data, Python
We are looking for a motivated Data Engineer to join our team. This part-time position allows you to work from home and is suitable for individuals with little to no experience. The ideal candidate will help us manage and process data to ensure it meets the needs of the business.
Key Responsibilities:
  • Data Collection: Gather data from various sources to prepare for analysis. It's important to ensure the data is accurate and up to date.
  • Data Cleaning: Clean and organize raw data to make it usable. This involves removing errors and inconsistencies, which is crucial for reliable analysis.
  • Data Storage: Help store data in databases or cloud storage systems. Proper organization allows easy access and retrieval of data when needed.
  • Collaboration: Work with other team members to understand their data needs. Communication is key to delivering the right data for their projects.
  • Support: Assist in monitoring data systems and providing technical support. Being proactive in identifying issues helps keep the data flow smooth.
Required Skills and Expectations: Candidates should have a basic understanding of data management principles. Familiarity with data cleaning tools and database management systems is a plus. The ability to learn new software quickly and strong attention to detail are essential. Good communication skills are important for working with teammates and understanding project requirements. We encourage fresh graduates and those with relevant qualifications to apply.

Looking For Data Engineer

BSRI Solutions Pvt Ltd

  • 3 - 5 yrs
  • 16.0 Lac/Yr
  • Chennai
Python, PySpark Developer, Scala, SQL, Hive, Hadoop, Google Cloud Platform, Kafka Developer, Infrastructure as Code, GitHub, Agile Methodology, ETL
Required Qualifications:
  • 3+ years of demonstrated ability with Hive, Python, Spark/Scala, SQL, etc.
  • Google Cloud Platform experience: BigQuery, Cloud Storage, Dataproc, Dataflow, Cloud Composer, Cloud SQL, Pub/Sub, Terraform, etc.
  • Experience with the Hadoop ecosystem, Kafka, and PCF cloud services.
  • Familiarity with big data and machine learning tools and platforms.
  • Experience with BI tools such as Alteryx, DataStage, QlikSense, etc.
  • Design data pipelines and data robots; take a vision and bring it to life.
  • Master data engineer: mentors others and works closely with IT architects to set strategy and design projects.
  • Provide extensive technical and strategic advice and guidance to key stakeholders around the data transformation efforts.
  • Redesign data flows to prevent recurring data issues.
  • Strong analytical and problem-solving skills.
  • Excellent oral and written communication, facilitation, and presentation skills, with an engaging presentation style.
  • Ability to work as a global team member, as well as independently, in a changing environment, and to prioritize.
  • Ability to establish and maintain coordinated and effective working relationships with application implementation teams, IT project teams, business customers, and end users.
  • Ability to deliver work within deadlines.
  • Experience with agile/lean methodologies.
  • Experience working independently and with minimal supervision.
  • Experience with Test-Driven Development and software craftsmanship.
  • Experience with GitHub, AccuRev, or other version-control systems.
  • Experience with PuTTY.
  • Experience with DataStage.
  • Strong communication skills; ability to illustrate and convey ideas and prototypes effectively with team and partners.
  • Presence demonstrating confidence, the ability to learn quickly, and the ability to influence and shape ideas.
Key Skills Required:
  • Python / PySpark / Scala
  • SQL & Hive
  • Hadoop ecosystem
  • Data pipeline design & ETL development
  • Google Cloud Platform (BigQuery, Dataproc, Dataflow, Cloud Storage)
  • Kafka / streaming data processing
  • Terraform (Infrastructure as Code)
  • DataStage or similar ETL tools
  • Version control (GitHub or equivalent)
  • Agile methodologies
  • Strong analytical & problem-solving skills
  • Stakeholder collaboration & communication
Nice to Have:
  • Cloud Composer, Cloud SQL, Pub/Sub
  • BI tools (Alteryx, QlikSense)
  • Machine learning platform exposure
  • Test-Driven Development (TDD)
  • Mentoring & technical leadership
  • 4 - 10 yrs
  • 5.0 Lac/Yr
  • Bangalore
ETL, ELT, SQL, Python, dbt, Spark, Hadoop, Cloud Data, CI/CD, Data Security, Data Warehousing
Design, build, and maintain ETL/ELT data pipelines and data lake solutions to support analytics and AI/ML use cases. Ensure data quality, performance, and reliability across enterprise data platforms.
Key Responsibilities:
  • Pipeline development
  • Data lake engineering
  • Performance & optimization
  • Collaboration & support
Required Skills & Experience:
  • 4+ years of experience in data engineering or ETL development.
  • Proficiency in SQL and Python (or Scala/Java) for data transformations.
  • Hands-on with ETL tools (Informatica, Talend, dbt, SSIS, Glue, or similar).
  • Exposure to big data technologies (Hadoop, Spark, Hive, Delta Lake).
  • Familiarity with cloud data platforms (AWS Glue/Redshift, Azure Data Factory/Synapse, GCP Dataflow/BigQuery).
  • Understanding of workflow orchestration (Airflow, Oozie, Prefect, or Temporal).
Preferred Knowledge:
  • Experience with real-time data pipelines using Kafka, Kinesis, or Pub/Sub.
  • Basic understanding of data warehousing and dimensional modeling.
  • Exposure to containerization and CI/CD pipelines for data engineering.
  • Knowledge of data security practices (masking, encryption, RBAC).
Education & Certifications:
  • Bachelor's degree in Computer Science, IT, or a related field.
  • Preferred certifications: AWS Data Analytics Specialty / Azure Data Engineer Associate / GCP Data Engineer; dbt or Informatica/Talend certifications.

Data Engineer

United Technology

  • 1 - 3 yrs
  • 4.0 Lac/Yr
  • Chennai
Data Integration, Data Engineer, Hadoop, ETL, SQL, Informatica, Apache, AWS, Big Data, Python
We are looking for a Data Engineer with 1 to 3 years of experience in Chennai. Immediate joiners preferred.
  • 8 - 10 yrs
  • Pune
Kafka, Scala, Spark, Hadoop, Airflow, Data Lakes, Kappa, Kappa++ Architectures, RDBMS, NoSQL, Cassandra, Redis, Oracle
Sr. Big Data Engineer
Location: Pune | Experience: 10+ years | Mode: Hybrid
Role Overview: We are seeking a talented Sr. Big Data Engineer to design, develop, and support a highly scalable, distributed SaaS-based Security Risk Prioritization product. You will lead the design and evolution of our data platform and pipelines, providing technical leadership to a team of engineers and architects.
Key Responsibilities:
  • Provide technical leadership on data platform design, roadmaps, and architecture.
  • Design and implement scalable architecture for Big Data and microservices environments.
  • Drive technology explorations, leveraging knowledge of internal and industry prior art.
  • Ensure quality architecture and design of systems, focusing on performance, scalability, and security.
  • Mentor and provide technical guidance to other engineers.
Required Skills & Technologies:
  • Mandatory: Kafka, Scala, Spark.
  • Big Data & data streaming: Spark, Kafka, Hadoop, Presto, Airflow, data lakes; lambda, kappa, and kappa++ architectures with Flink data streaming.
  • Databases & caching: RDBMS, NoSQL, Oracle, Cassandra, Redis.
  • Search solutions: Solr, Elastic.
  • ML & automation: experience with ML model engineering and the related deployment, scripting, and automation.
  • Architecture: in-depth experience with messaging queues and caching components.
  • Other skills: strong troubleshooting and performance benchmarking skills for Big Data technologies.
Qualifications:
  • Bachelor's degree in Computer Science or equivalent.
  • 8+ years of total experience, with 6+ years relevant.
  • 2+ years designing Big Data solutions with Spark.
  • 3+ years with Kafka and performance testing for large infrastructure.
  • 4 - 10 yrs
  • 36000/Yr
  • Missouri +1 USA
Data Warehousing, Data Management, Data Integration, SQL, Data Extraction, ETL Tool, Hadoop, AWS, Big Data, Python
Role Overview: This position requires a detail-oriented data engineer who can independently architect and implement data pipelines, while also serving as a trusted technical partner in client engagements and stakeholder meetings. You'll work hands-on with PySpark, Airflow, Python, and SQL, driving end-to-end data migration and platform modernization efforts across Azure and AWS. In addition to technical execution, you'll contribute to sprint planning, backlog prioritization, and continuous integration/deployment of data infrastructure. This is a senior-level individual contributor role with direct visibility across engineering, product, and client delivery functions.
Key Responsibilities:
  • Lead design and development of enterprise-grade data pipelines and cloud data migration architectures.
  • Build scalable, maintainable ETL/ELT pipelines using Apache Airflow, PySpark, and modern data services.
  • Write efficient, modular, and well-tested Python code, grounded in clean architecture and performance principles.
  • Develop and optimize complex SQL queries across diverse relational and analytical databases.
  • Contribute to and uphold standards for data modeling, data governance, and pipeline performance.
  • Own the implementation of CI/CD pipelines to enable reliable deployment of data workflows and infrastructure (e.g., GitHub Actions, Azure DevOps, Jenkins).
  • Embed unit testing, integration testing, and monitoring in all stages of the data pipeline lifecycle.
  • Participate actively in Agile ceremonies: sprint planning, daily stand-ups, retrospectives, and backlog grooming.
  • Collaborate directly with clients, stakeholders, and cross-functional teams to translate business needs into scalable technical solutions.
  • Act as a technical authority within the team, leading architectural decisions and contributing to internal best practices and documentation.

Data Scientist

Kudos Technolabs

  • 3 - 5 yrs
  • Bangalore
Python, ML Frameworks & Libraries, NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch, Keras, Matplotlib, EDA, Data Visualization Techniques, SQL, Statistical Analysis, Hypothesis Testing, Feature Engineering, AWS Certified, AWS, Azure, GCP, Hadoop, Spark, SciPy, Machine Learning
We are looking for a highly motivated and skilled Data Scientist to join our team. The ideal candidate will have a strong background in data science, machine learning, and statistical analysis, with hands-on experience in Python and industry-standard libraries. You will be responsible for deriving actionable insights, building predictive models, and effectively communicating findings through data storytelling.

Opening For Cloud Data Engineer

Talme Technologies Pvt Ltd

Designing and implementing data architecture strategies, data integration, data management, supporting analytics, technology selection, and performance optimization. Technical skills: in-depth knowledge of AWS services (IAM, Redshift, NoSQL), data processing and analysis tools (AWS Glue, EMR), big data frameworks (Hadoop, Spark), ETL tools (IBM DataStage, ODI in
We are on the lookout for a seasoned Cloud Data Lead. We are eager to connect with you if you have extensive experience in cloud platforms, data architecture, and leadership!

IT Trainer

Vijaya Management Services

  • 2 - 8 yrs
  • 5.0 Lac/Yr
  • Pune
Java, Python, Big Data Technologies, Hadoop, Spark, PySpark, Kafka, Airflow, Machine Learning, Deep Learning, Tableau, Power BI
Training on Java, Python, and big data technologies: Hadoop, Spark, PySpark, Kafka, Airflow, Machine Learning, Deep Learning, Tableau, Power BI. Minimum experience: 2 to 3 years of training experience.
  • 2 - 6 yrs
  • 8.0 Lac/Yr
  • Bangalore
Unix Shell Scripting, Telecom OSS/BSS, RDBMS, Hadoop, HBase, PostgreSQL
Objective of the role: Responsible for deploying and supporting the product/CRs in the customer environment, and for managing tickets from the customer while adhering to the SLA.
Job Responsibilities/Tasks:
  • Installation of the product/solution and its dependencies on servers.
  • Following up with customer points of contact on various aspects.
  • Integration of the product with various external entities and readiness of the solution.
  • End-to-end system testing before handing the system over to the customer for user acceptance testing.
  • Driving the UAT with end customer(s) from a support perspective.
  • Launch and post-go-live management of the product, including monitoring and automation of various jobs.
Skills:
  • Very good knowledge of Unix and shell scripting.
  • 1 to 3 years of prior experience with Telecom BSS/OSS solutions, with at least two customers, with/without onsite presence.
  • Good knowledge of RDBMS concepts and Oracle queries.
  • Preferred: knowledge of Hadoop/HBase/PostgreSQL.
  • Extensive experience of systems development, including involvement in all major stages of software development projects.
  • Good knowledge and appreciation of systems development lifecycles and methodologies.
  • Well-established communication, presentation, motivational, and interpersonal skills, along with person-management abilities.
  • Thorough understanding, appreciation, and analysis of the issues underlying systems development.
  • Experience with the technical aspects of the relevant technologies, software, and hardware to be employed.
  • Prior experience with Telecom sources; ETL tool experience is preferred.
  • 2 - 8 yrs
  • 16.0 Lac/Yr
  • Delhi NCR
AngularJS Developer, Node.js Developer, Hadoop Developer, Python Developer
We are a staff augmentation organisation. We need candidates with expertise in AngularJS.

Big Data Analytics

Creative Consultant & Contractor

  • 3 - 7 yrs
  • 9.0 Lac/Yr
  • Bangalore
Hadoop Developer, SQL Server Developer, Python Developer, Scala, Data Warehouse Developer, Data Scientist, Data Analyst
Hi, we have a job opportunity for the post of Big Data Analytics/Developer at Bangalore, Karnataka. Candidates with a minimum of 3+ years of experience in the same field who are ready to join immediately can apply. The company will offer a good salary and other benefits.
  • 2 - 6 yrs
  • 6.5 Lac/Yr
  • Bangalore Highway Chennai
Data Management, Apache, Data Integration, SQL, ETL Tool, ETL, Hadoop, AWS, Snowflake, Azure
Invitation for B2B Partnerships: Seeking Software Development Support! We are looking to collaborate with companies that can provide 10 skilled Data Engineers to support our data engineering requirements pipeline for European clients on a contract basis.
Requirements:
  • Expertise in data engineering, including data processing and ETL.
  • Proficiency in SQL and NoSQL databases.
  • Experience with Hadoop or Spark.
  • Experience in AWS or Azure.
  • Snowflake is an added advantage.
If your company is equipped to provide top-notch Data Engineers, we invite you to submit your proposal with terms and conditions. Please contact us by email or WhatsApp: srividya032001@gmail.com (or) 8884752389
  • 2 - 6 yrs
  • 6.5 Lac/Yr
  • Bangalore National Highway Chennai
Data Warehousing, Apache, Data Integration, Data Management, SQL, ETL Tool, Hadoop, AWS, Big Data, Python, Java, JavaScript, React JS
Invitation for B2B Partnerships: Seeking Software Development Support! We are looking to collaborate with companies that can provide 10 skilled Data Engineers to support our data engineering requirements pipeline for European clients on a contract basis.
Requirements:
  • Expertise in data engineering, including data processing and ETL.
  • Proficiency in SQL and NoSQL databases.
  • Experience with Hadoop or Spark.
  • Experience in AWS or Azure.
  • Snowflake is an added advantage.
If your company is equipped to provide top-notch Data Engineers, we invite you to submit your proposal with terms and conditions. Please contact us by email or WhatsApp: rakanalytics@gmail.com (or) 9900173022
  • 5 - 11 yrs
  • Chennai
Hadoop, Cloud Computing, GCP, SQL, Spark, Hive, Dataflow, Dataproc
Job Title: Hadoop Developer
Location: Chennai, India
Duration: Permanent position
Work Type: Onsite
Industry: Financial/Technical
Experience: 05 Years
Please share your resume at victor@theqctech.com
Job Description:
  • 4 years of relevant experience is mandatory.
  • Chennai is the preferred location, but anywhere is also fine.
  • Experience extracting data from a variety of sources, and a desire to expand those skills (working knowledge of SQL and Spark is mandatory).
  • Excellent data analysis skills; must be comfortable querying and analyzing large amounts of data on Hadoop HDFS using Hive and Spark.
  • Candidates should preferably have experience building applications using Google Cloud Platform frameworks such as DataFlow/DataProc/PubSub.
  • Excellent communication skills to understand and pass on requirements.
  • Pure GCP with the above experience is also acceptable.

Hiring For Big Data Developer

krtrimaiq cognitive solution

  • 4 - 8 yrs
  • Bangalore
Python, Scala, SQL, Hadoop
We are looking for an experienced Big Data Developer, immediate joiners only, with a strong background in Kafka, PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 5 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.
Key Responsibilities:
  • Design, develop, and maintain scalable data processing pipelines using Kafka, PySpark, Python/Scala, and Spark.
  • Work extensively with the Kafka and Hadoop ecosystems, including HDFS, Hive, and other related technologies.
  • Write efficient SQL queries for data extraction, transformation, and analysis.
  • Implement and manage Kafka streams for real-time data processing.
  • Utilize scheduling tools to automate data workflows and processes.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
  • Ensure data quality and integrity by implementing robust data validation processes.
  • Optimize existing data processes for performance and scalability.

Opening For Data Engineer

Cynosure Corporate Solutions

  • 3 - 9 yrs
  • Delhi
Apache, Python, Hadoop, Scala
Job Description: We are looking for Data Engineers to join our team. You will use various methods to transform raw data into useful data systems; for example, you'll create algorithms and conduct statistical analysis. Overall, you'll strive for efficiency by aligning data systems with business goals. To succeed in this position, you should have strong analytical skills and the ability to combine data from different sources. Data engineer skills also include familiarity with several programming languages and knowledge of machine learning methods.
Job Requirements:
  • Participate in the customer's system design meetings and collect the functional/technical requirements.
  • Build data pipelines for consumption by the data science team.
  • Skillful in the ETL process and tools.
  • Clear understanding of and experience with Python and PySpark, or Spark and Scala, along with Hive, Airflow, Impala, Hadoop, and RDBMS architecture.
  • Experience writing Python programs and SQL queries.
  • Experience in SQL query tuning.
  • Experience in shell scripting (Unix/Linux).
  • Build and maintain data pipelines in Spark/PySpark with SQL and Python or Scala.
  • Knowledge of cloud technologies (Azure/AWS/GCP, etc.) is a plus.
  • Good to have: knowledge of Kubernetes, CI/CD concepts, and Apache Kafka.
  • Suggest and implement best practices in data integration.
  • Guide the QA team in defining system integration tests as needed.
  • Split the planned deliverables into tasks and assign them to the team.
  • Maintain/deploy the ETL code and follow the Agile methodology.
  • Work on optimization wherever applicable.
  • Good oral, written, and presentation skills.
Preferred Qualifications:
  • Degree in Computer Science, IT, or a similar field; a Master's is a plus.
  • Hands-on experience with Python and PySpark, or with Spark and Scala.
  • Great numerical and analytical skills.
  • Working knowledge of cloud platforms such as MS Azure, AWS, etc.
  • 5 - 7 yrs
  • 10.0 Lac/Yr
  • Bangalore
Triggers, Hadoop, Salesforce CRM, Work From Home
Salesforce Developer with 5 to 7 years of experience. Must be willing to work the US shift. This is a full-time remote job.

Opening For Salesforce Developer

Cognitud Advisory Services

Triggers, Hadoop, Salesforce CRM
Job Summary: We are looking for a Salesforce Developer who will play a key role in maximizing the efficacy of the CRM. You will be responsible for the design, development, testing, and implementation of customizations, applications, extensions, and integrations. You will work with a team of fellow engineers and collaborate with our Sales, Customer Success, and Marketing teams to translate business needs into effective and scalable products within the CRM.
Responsibilities:
  • Develop, implement, and maintain Salesforce customizations, applications, extensions, and integrations.
  • Participate in the planning/analysis of business requirements for system changes and enhancements.
  • Collaborate inter-departmentally to identify business needs and translate them into technical solutions.
  • Develop in Apex, Lightning Web Components, Lightning Design System, and other technologies to build customized solutions.
  • Provide technical leadership, setting best practices for integration and application development, deployment, testing (unit and systems), and iterative refinement.
  • Seek out ways to use Salesforce to improve processes and productivity, and make recommendations to support an organization scaling at a rapid pace.
  • Should be able to work as an individual contributor.
  • Excellent communication and collaboration skills.
Skills:
  • Must have: Apex, Visualforce, Lightning Web Components, Triggers, SOQL, Test Classes.
  • Good to have: JavaScript knowledge.
Qualifications: Bachelor's degree or higher in Information Technology, Business, Engineering, or a related field. BE/BTech/MCA full-time education.

Senior Software Java Developer

Talent Zone Consultant

  • 3 - 5 yrs
  • 20.0 Lac/Yr
  • Bangalore
Core Java, Strong Problem Solving, Multithreading, Hadoop, Spark, Java Developer, Java EE, JIRA, Hibernate, Maven, Java, JavaScript, Android, SQL, jQuery, Java Programmer, HTML5, CSS3, AJAX, JSON, Spring MVC, Walk-in
Responsibilities:
  • Designing, developing, testing, troubleshooting, debugging, deploying, maintaining, documenting, and delivering a large-scale, highly distributed, real-time data platform.
  • Using Java, object-oriented (OO) design patterns, NoSQL DBs, and data modeling techniques.
  • Recommending changes in development, maintenance, and system standards.
  • Working in an agile development environment to deliver a high-quality product.
  • Mentoring junior software development engineers.
Basic Qualifications:
  • Bachelor's/Master's degree in computer science or a related field.
  • 4+ years of experience in software development.
  • Proficient in Core Java, with good knowledge of Java's ecosystem.
  • Proficient in multithreading in Java.
  • Excellent problem-solving skills.
  • Extremely sound understanding of the basic areas of computer science, such as algorithms and data structures.
  • Strong OO programming and design skills, with an understanding of common design patterns.
  • Knowledge of professional software engineering practices and best practices for the full software development life cycle, including coding standards, code reviews, source control management, continuous deployments, testing, and operations.
  • Good written and oral communication skills; a fast learner able to adapt quickly to a fast-paced development environment.
Preferred Qualifications:
  • Experience building large-scale, fault-tolerant distributed systems.
  • Familiarity with Big Data platforms and architecture, relational and NoSQL database concepts, and solid general knowledge of Hadoop, Spark, Hive, etc.
  • Demonstrated ability to mentor junior software engineers in all aspects of their engineering skill sets.
  • 5 - 11 yrs
  • 14.0 Lac/Yr
  • Chennai
Hadoop, Cloud Computing, GCP, SQL, Spark, Hive, Dataflow, Dataproc
Job Title: GCP Developer
Location: Chennai, India
Duration: Permanent position
Work Type: Onsite
Industry: Financial/Technical
Experience: 05 Years
Please share your resume at victor@theqctech.com
Job Description:
  • 4 years of relevant experience is mandatory.
  • Chennai is the preferred location, but anywhere is also fine.
  • Experience extracting data from a variety of sources, and a desire to expand those skills (working knowledge of SQL and Spark is mandatory).
  • Excellent data analysis skills; must be comfortable querying and analyzing large amounts of data on Hadoop HDFS using Hive and Spark.
  • Candidates should preferably have experience building applications using Google Cloud Platform frameworks such as DataFlow/DataProc/PubSub.
  • Excellent communication skills to understand and pass on requirements.
  • Pure GCP with the above experience is also acceptable.