
Big Data Engineer Jobs


Looking For Big Data Engineer

Talent Zone Consultant

  • 6 - 12 yrs
  • Bangalore
Python SQL Spark Hadoop ETL Tools Data Warehousing Airflow Programming Data Visualization Data Lakes Data Modeling
Key Responsibilities:
  • Build and manage data pipelines and ETL processes
  • Work with large datasets using tools like Spark, Hadoop, or SQL
  • Ensure data quality and performance optimization
Requirements:
  • Experience in Python/SQL
  • Hands-on with ETL tools and big data technologies
  • Understanding of data warehousing concepts
Brief Summary: Develops scalable data systems to support analytics and business insights.
  • Fresher
  • Female
  • Chennai
Data Cleansing Big Data Technologies Data Transformation Programming Data Warehousing
We are seeking a motivated and enthusiastic Data Processing Engineer to join our team. This role is perfect for recent graduates or individuals looking to start their career in data processing.

As a Data Processing Engineer, you will be responsible for handling and organizing data efficiently. Your key responsibilities will include:
  • Data Entry: Accurately input data into our systems while ensuring all information is correct and up to date.
  • Data Quality Assurance: Review and validate data to identify any errors or inconsistencies, and fix them promptly.
  • Data Maintenance: Regularly update and maintain databases to keep them organized and accessible for team members.
  • Reporting: Generate basic reports from the data you process, helping the team make informed decisions.

To be successful in this role, you should possess strong attention to detail and the ability to work independently from home. You must be comfortable using computers and familiar with basic data processing tools. Strong communication skills are essential to collaborate effectively with team members. Having a proactive approach to problem-solving will also be important as you work through data challenges.

We welcome applications from females who have completed their 10th grade education and are eager to begin a full-time position in data processing. This is a fantastic opportunity to learn, grow, and launch your career in the field of data.
  • 0 - 1 yrs
  • 8.0 Lac/Yr
  • Female
  • Mall Road Amritsar
Data Integration Data Warehousing SQL Informatica ETL Hadoop Big Data Python
We are looking for a motivated Data Engineer to join our team. This part-time position allows you to work from home and is suitable for individuals with little to no experience. The ideal candidate will help us manage and process data to ensure it meets the needs of the business.

Key Responsibilities:
  • Data Collection: Gather data from various sources to prepare for analysis. It's important to ensure the data is accurate and up to date.
  • Data Cleaning: Clean and organize raw data to make it usable. This involves removing errors and inconsistencies, which is crucial for reliable analysis.
  • Data Storage: Help in storing data in databases or cloud storage systems. Proper organization helps in easy access and retrieval of data when needed.
  • Collaboration: Work with other team members to understand their data needs. Communication is key to delivering the right data for their projects.
  • Support: Assist in monitoring data systems and providing technical support. Being proactive in identifying issues helps keep the data flow smooth.

Required Skills and Expectations:
Candidates should have a basic understanding of data management principles. Familiarity with data cleaning tools and database management systems is a plus. The ability to learn new software quickly and a strong attention to detail are essential. Good communication skills are important for working with teammates and understanding project requirements. We encourage fresh graduates and those with relevant qualifications to apply.
  • 4 - 10 yrs
  • 20.0 Lac/Yr
  • Bhubaneswar
AIML Engineer Artificial Intelligence Machine Learning Engineer Frameworks Cloud Services Data Handling Tools MLOps Big Data
AIML Engineer Job Description
Exp: 4+ yrs
NP: Immediate / 15 days
Location: Bhubaneswar / Bangalore / Chennai

Position Overview
We are seeking a highly skilled Artificial Intelligence & Machine Learning Engineer to design, develop, and deploy intelligent systems that solve complex business problems. The role involves applying advanced algorithms, data science techniques, and deep learning frameworks to build scalable AI solutions.

Key Responsibilities:
  • Model Development: Design, train, and optimize ML/DL models for predictive analytics, NLP, computer vision, and recommendation systems.
  • Data Engineering: Collect, preprocess, and analyze large datasets to ensure quality and usability.
  • Deployment & Integration: Implement AI models into production environments using cloud platforms (AWS, Azure, GCP).
  • Research & Innovation: Stay updated with emerging AI/ML technologies and apply them to business challenges.
  • Collaboration: Work closely with product managers, data scientists, and software engineers to deliver end-to-end AI solutions.
  • Performance Monitoring: Continuously evaluate and improve deployed models for accuracy, scalability, and efficiency.

Required Skills & Qualifications:
  • Strong programming skills in Python, R, Java, or C++.
  • Expertise in ML/DL frameworks: TensorFlow, PyTorch, Keras, Scikit-learn.
  • Solid understanding of statistics, probability, and linear algebra.
  • Experience with cloud services: AWS SageMaker, Azure ML, Google AI Platform.
  • Knowledge of data handling tools: SQL, Spark, Hadoop.
  • Familiarity with NLP, computer vision, and reinforcement learning.
  • Strong problem-solving, communication, and teamwork skills.

Preferred Qualifications:
  • Experience with MLOps (CI/CD pipelines for ML).
  • Hands-on with big data technologies.
  • Publications or contributions to AI research.

Interested candidates can share their CV at Rekha.C@eagledrift.com


Looking For Data Architect

Toolify Private Limited

  • 9 - 15 yrs
  • 40.0 Lac/Yr
  • Jaipur
Data Architect Databricks Developer Apache Spark Delta Lake Azure Synapse Azure Data AWS Redshift AWS Glue SQL Pyspark Developer Kafka Engineer Big Data
Job Summary
We are seeking a skilled Data Architect to lead the design and implementation of high-performance, scalable data platforms. This role involves architecting modern data lakes, warehouses, and streaming systems using Databricks and cloud technologies. If you enjoy solving complex data challenges and driving data-driven decision-making, this role is for you.

Key Responsibilities:
  • Design and implement scalable data lakes, data warehouses, and real-time streaming architectures
  • Build, optimize, and manage Databricks solutions using Spark, Delta Lake, Workflows, and SQL Analytics
  • Develop cloud-native data platforms on Azure (Synapse, Data Factory, Data Lake) and AWS (Redshift, Glue, S3)
  • Create and automate ETL/ELT pipelines using Apache Spark, PySpark, and cloud tools
  • Design and maintain data models (dimensional, normalized, star schemas) to support analytics and reporting
  • Leverage big data technologies such as Hadoop, Kafka, and Scala for large-scale data processing
  • Ensure data governance, security, and compliance with standards like GDPR and HIPAA
  • Optimize Spark workloads and storage for performance and cost efficiency
  • Collaborate with engineering, analytics, and business teams to align data solutions with organizational goals

Required Skills & Qualifications:
  • 8+ years of experience in Data Architecture, Data Engineering, or Analytics
  • Strong hands-on experience with Databricks (Delta Lake, Spark, MLflow, Pipelines)
  • Expertise in Azure (Synapse, Data Factory, Data Lake) and AWS (Redshift, S3, Glue)
  • Proficient in SQL and Python or Scala
  • Experience with NoSQL databases (e.g., MongoDB) and streaming platforms (e.g., Kafka)
  • Solid understanding of data governance, security, and compliance best practices
  • Excellent problem-solving, communication, and cross-functional collaboration skills

Looking forward to receiving suitable profiles at the earliest.

Data Engineer

United Technology

  • 1 - 3 yrs
  • 4.0 Lac/Yr
  • Chennai
Data Integration Data Engineer Hadoop ETL SQL Informatica Apache AWS Big Data Python
We are looking for a Data Engineer with 1 to 3 years of experience in Chennai. Immediate joiners preferred.

Senior Database Analyst

Indievisa Immigration Services Pvt Ltd

Database Administration Data Administrator Data Analyst Database Algorithm Engineer Database Administration Database Designer Data Care Solutions Data Conversion Operator Data Analysis Data Architect Data Encoder Data Operator Backup and Recovery Data Quality Database Security ETL Processes Normalization Performance Tuning Query Optimization Big Data Technologies Data Mining Indexing Data Migration Stored Procedures Relational Databases Database Design Reporting Tools Data Modeling Data Warehous
Database analysts design, develop and administer data management solutions using database management software. Data administrators develop and implement data administration policy, standards and models. They are employed in information technology consulting firms and in information technology units throughout the private and public sectors.

This group performs some or all of the following duties:

Database analysts:
  • Collect and document user requirements
  • Design and develop database architecture for information systems projects
  • Design, construct, modify, integrate, implement and test data models and database management systems
  • Conduct research and provide advice to other informatics professionals regarding the selection, application and implementation of database management tools
  • Operate database management systems to analyze data and perform data mining analysis
  • May lead, co-ordinate or supervise other workers in this group

Data administrators:
  • Develop and implement data administration policy, standards and models
  • Research and document data requirements, data collection and administration policy, data access rules and security
  • Develop policies and procedures for network and/or Internet database access and usage and for the backup and recovery of data
  • Conduct research and provide advice to other information systems professionals regarding the collection, availability, security and suitability of data
  • Write scripts related to stored procedures and triggers
  • May lead and co-ordinate teams of data administrators in the development and implementation of data policies, standards and models
  • 8 - 12 yrs
  • Kharadi Pune
Core Java Apache Beam Google Cloud Platforms Spring Boot Big Data Microservices Data Base
Requirements:
  • 8+ years of experience in Core Java and the Spring Framework (mandatory)
  • Minimum 2 years of experience in Google Cloud Platform (GCP) (mandatory)
  • Hands-on experience with Apache Beam / Dataflow for building ETL/data pipelines (mandatory)
  • Strong expertise in big data processing on distributed systems
  • Proficiency with RDBMS, NoSQL, and cloud-native databases
  • Experience handling multiple data formats (flat file, JSON, Avro, XML, etc.) with schema/contract definitions
  • Experience in microservices architecture and API integration patterns
  • Strong understanding of data structures and data model design
  • 8 - 10 yrs
  • Pune
Kafka Scala Spark Hadoop Airflow Data Lakes Kappa Kappa ++ Architectures RDBMS NoSQL Cassandra Redis Oracle
Sr. Big Data Engineer
Location: Pune
Experience: 10+ years
Mode: Hybrid

Role Overview:
We are seeking a talented Sr. Big Data Engineer to design, develop, and support a highly scalable, distributed SaaS-based Security Risk Prioritization product. You will lead the design and evolution of our data platform and pipelines, providing technical leadership to a team of engineers and architects.

Key Responsibilities:
  • Provide technical leadership on data platform design, roadmaps, and architecture.
  • Design and implement scalable architecture for Big Data and microservices environments.
  • Drive technology explorations, leveraging knowledge of internal and industry prior art.
  • Ensure quality architecture and design of systems, focusing on performance, scalability, and security.
  • Mentor and provide technical guidance to other engineers.

Required Skills & Technologies:
  • Mandatory: Kafka, Scala, Spark.
  • Big Data & data streaming: Spark, Kafka, Hadoop, Presto, Airflow, data lakes, and lambda, kappa, and kappa++ architectures with Flink data streaming.
  • Databases & caching: RDBMS, NoSQL, Oracle, Cassandra, Redis.
  • Search solutions: Solr, Elastic.
  • ML & automation: experience with ML model engineering and related deployment, scripting, and automation.
  • Architecture: in-depth experience with messaging queues and caching components.
  • Other skills: strong troubleshooting and performance benchmarking skills for Big Data technologies.

Qualifications:
  • Bachelor's degree in Computer Science or equivalent.
  • 8+ years of total experience, with 6+ years relevant.
  • 2+ years designing Big Data solutions with Spark.
  • 3+ years with Kafka and performance testing for large infrastructure.

Big Data Engineer (Spark and Scala)

E2E Infoware Management Services

Scala Spark Pyspark
Role: Big Data Developer - Scala Spark
Exp: 5+ yrs
Mode of Work: WFO, all 5 days
Location: Chennai / Bangalore / Pune
Interview: any one level F2F

Job Description:
  • Total IT/development experience of 3+ years
  • Experience in Spark (Scala-Spark), developing Big Data applications on Hadoop, Hive and/or Kafka, HBase, MongoDB
  • Deep knowledge of Scala-Spark libraries to develop and debug complex data engineering challenges
  • Experience in developing sustainable data-driven solutions with current new-generation data technologies to drive our business and technology strategies
  • Exposure to deploying on cloud platforms
  • At least 2 years of development experience designing and developing data pipelines for data ingestion or transformation using Spark-Scala
  • At least 2 years of development experience in the following Big Data frameworks: file formats (Parquet, Avro, ORC), resource management, distributed processing, and RDBMS
  • At least 2 years of experience developing applications in Agile with monitoring, build tools, version control, unit testing, Unix shell scripting, TDD, CI/CD, and change management to support DevOps
  • 4 - 10 yrs
  • 36000/Yr
  • Missouri +1 USA
Data Warehousing Data Management Data Integration SQL Data Extraction ETL Tool Hadoop AWS Big Data Python
Role Overview
This position requires a detail-oriented data engineer who can independently architect and implement data pipelines, while also serving as a trusted technical partner in client engagements and stakeholder meetings. You'll work hands-on with PySpark, Airflow, Python, and SQL, driving end-to-end data migration and platform modernization efforts across Azure and AWS.

In addition to technical execution, you'll contribute to sprint planning, backlog prioritization, and continuous integration/deployment of data infrastructure. This is a senior-level individual contributor role with direct visibility across engineering, product, and client delivery functions.

Key Responsibilities:
  • Lead design and development of enterprise-grade data pipelines and cloud data migration architectures.
  • Build scalable, maintainable ETL/ELT pipelines using Apache Airflow, PySpark, and modern data services.
  • Write efficient, modular, and well-tested Python code, grounded in clean architecture and performance principles.
  • Develop and optimize complex SQL queries across diverse relational and analytical databases.
  • Contribute to and uphold standards for data modeling, data governance, and pipeline performance.
  • Own the implementation of CI/CD pipelines to enable reliable deployment of data workflows and infrastructure (e.g., GitHub Actions, Azure DevOps, Jenkins).
  • Embed unit testing, integration testing, and monitoring in all stages of the data pipeline lifecycle.
  • Participate actively in Agile ceremonies: sprint planning, daily stand-ups, retrospectives, and backlog grooming.
  • Collaborate directly with clients, stakeholders, and cross-functional teams to translate business needs into scalable technical solutions.
  • Act as a technical authority within the team, leading architectural decisions and contributing to internal best practices and documentation.
  • 5 - 7 yrs
  • 10.0 Lac/Yr
  • Coimbatore
Big Data Analytics RIDGE Elastic Reality Python AWS Cloud Engineer Agile
Qualifications:
  • 5-7 years of professional experience in machine learning, data science, or related roles.
  • Good exposure to and understanding of time series modelling using ARIMA and ARIMAX.
  • Exposure to handling underfitting and overfitting; capable of applying techniques that help generalize models.
  • Regularization techniques (LASSO, RIDGE & ELASTIC NET) and when to apply them.
  • Good exposure to unsupervised machine learning: clustering, dimensionality reduction, outlier detection.
  • Ability to understand how models are optimized using various techniques, including the gradient descent approach.
  • Good understanding of deep learning algorithms (CNN, RNN, LSTM) and how to control overfitting in such cases.
  • Good hands-on experience in data engineering to process data at huge scale using Big Data (Spark/Hive).
  • Good coding practices to write production-ready code for creating data pipelines for models to consume.
  • Very good hands-on experience in Python (Pandas/NumPy/Scikit-Learn/NLTK/spaCy/Matplotlib).
  • Able to apply the right level of ML techniques for the given problem statement.
  • Ability to access information contained in data and engineer appropriate features.
  • Familiar with the Python language and various platforms for hosting ML models.
  • Expert in model training, tuning and validation.
  • Expert in statistical techniques, deep learning methodologies, GenAI, and alternate techniques such as Bayesian methods.
  • Exposure to big data and related models.
  • Ability to articulate model choice and convert outcomes for business decision-making.
  • Expert in the model development lifecycle, from sourcing to model monitoring.
  • Ability to create code that is highly performant on the given platform.
  • Ability to map the model and business use case to the appropriate platform and tools needed.
  • Understanding of technical and machine learning governance.
  • Ability to validate and articulate model choices with relevant metrics (precision, recall, confusion matrix, RMSE,

Opening For Cloud Data Engineer

Talme Technologies Pvt Ltd

Designing and Implementing Data Architecture Strategies, Data Integration, Data Management, Supporting Analytics, Technology Selection and Performance Optimization. Technical skills: in-depth knowledge of AWS services (IAM, Redshift, NoSQL), data processing and analysis tools (AWS Glue, EMR), big data frameworks (Hadoop, Spark), ETL tools (IBM DataStage, ODI in
We are on the lookout for a seasoned Cloud Data Lead. We are eager to connect with you if you have extensive experience in cloud platforms, data architecture, and leadership!

Big Data Engineer

Krtrimaiq Cognitive Solutions

  • 4 - 10 yrs
  • 20.0 Lac/Yr
  • Bangalore
Big Data Big Data Architecture Big Data Engineer Big Data Architect
  • Design, develop, and maintain scalable data processing pipelines using Kafka, PySpark, Python/Scala, and Spark.
  • Work extensively with the Kafka and Hadoop ecosystem, including HDFS, Hive, and other related technologies.
  • Write efficient SQL queries for data extraction, transformation, and analysis.
  • Implement and manage Kafka streams for real-time data processing.
  • Utilize scheduling tools to automate data workflows and processes.
  • 2 - 6 yrs
  • 6.5 Lac/Yr
  • Bangalore National Highway Chennai
Data Warehousing Apache Data Integration Data Management SQL ETL Tool Hadoop AWS Big Data Python Java Java-script React Js
Invitation for B2B Partnerships: Seeking Software Development Support!

We are looking to collaborate with companies that can provide 10 skilled Data Engineers to support our data engineering requirements pipeline for European clients on a contract basis.

Requirements:
  • Expertise in data engineering, including data processing and ETL.
  • Proficiency in SQL and NoSQL databases.
  • Experience with Hadoop or Spark.
  • Experience in AWS or Azure.
  • Snowflake is an added advantage.

If your company is equipped to provide top-notch Data Engineers, we invite you to submit your proposal with terms and conditions. Please contact us by email or WhatsApp: rakanalytics@gmail.com (or) 9900173022

Data Engineer

Rakesh Retail

Data Warehousing Data Management Informatica Scala Big Data Data Integration
We are looking for 63 Data Engineer posts in Bhagalpur, Patna, Rohtas, Munger (Bihar), Gopalganj, Champaran, Raipur, Buxar, Aurangabad (Bihar), Sao Jose de Areal (Goa), and Shikhar Nagar (Bhubaneswar), with deep knowledge in Data Warehousing, Data Management, Informatica, Scala, Big Data, and Data Integration. Required educational qualification: Secondary School, Diploma, B.A, B.C.A, B.B.A, B.Com, B.Sc, B.E, B.Tech, or Post Graduate Diploma.

Machine Learning Engineer - Freshers

Thirumoolar IT Solutions

  • 0 - 1 yrs
  • 3.0 Lac/Yr
  • Chennai
Core Java Python Deep Learning Statistics Big Data
Thirumoolar IT Solutions is currently hiring for the position of Machine Learning Engineer - Freshers. This role is suitable for candidates looking to start their careers in machine learning and is based in Tamil Nadu, with a preference for work-from-home arrangements.

Job Responsibilities:
  • Collaborate with data scientists to identify and understand business problems suitable for machine learning solutions.
  • Preprocess and clean data for model training, addressing missing values and performing feature engineering.
  • Implement and train machine learning models using Python libraries such as scikit-learn, TensorFlow, or PyTorch.
  • Optimize model performance through hyperparameter tuning and feature selection.
  • Develop unit and integration tests to ensure the robustness of models.
  • Deploy trained models to production environments using tools like Docker and Kubernetes.
  • Monitor model performance in production and implement strategies for continuous improvement.
  • Document code, models, and processes for maintainability and reproducibility.

Requirements:
  • Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field.
  • Strong programming skills in Python and familiarity with machine learning libraries.
  • Understanding of machine learning algorithms and techniques, including supervised and unsupervised learning.

Preferred Location:
Candidates based in Chennai, Tamil Nadu, or those willing to work from home are encouraged to apply.

Big Data Engineer-Scala

Cogent Integrated Business Solutions Inc

  • 5 - 9 yrs
  • Hyderabad
SCALA Java MVC Architecture
Mode of working: Onsite
Work Location: Hyderabad

Roles and Responsibilities:
The Maps team is developing tools to analyze, visualize, process, manage and curate data at large scale. Our team combines disparate signals such as data analytics, community engagement, and user feedback to improve the Maps Platform.

As an Engineer, you are responsible for analyzing large data sets to identify errors in the map, designing and implementing complex algorithms to resolve those issues, reviewing solutions with a team of engineers and analysts, and integrating the resulting solutions into the data processing pipeline.

Requirements:
Some or all of the following, because we believe intelligent people can pick up whatever they need in a short period of time. You just need to prove that you can:
1. Experience in building modern and scalable REST-based microservices using Scala, preferably with Play as the MVC framework.
2. Expertise in functional programming using Scala.
3. Experience in implementing RESTful web services in Scala, Java or similar languages.
4. Experience with NoSQL and SQL databases.
5. Experience in information retrieval and machine learning.
6. Experience/knowledge in big data using Scala Spark, ML, Kafka, and Elasticsearch will be a plus.
7. 5 to 9 years of experience in the field.
8. Technology graduate from a reputed university.
  • 5 - 8 yrs
  • Hyderabad
Spark Scala NoSQL SQL Machine Learning MVC Big Data
Big Data Engineer - Scala

Required Skills:
1. Experience in building modern and scalable REST-based microservices using Scala, preferably with Play as the MVC framework.
2. Expertise in functional programming using Scala.
3. Experience in implementing RESTful web services in Scala, Java or similar languages.
4. Experience with NoSQL and SQL databases.
5. Experience in information retrieval and machine learning.
6. Experience/knowledge in big data using Scala Spark, ML, Kafka, and Elasticsearch will be a plus.
  • 4 - 10 yrs
  • Bangalore
Big Data Spark Hadoop Work From Home
Job Title: Team Lead / Sr. Developer
Location: Any Location

Skills:
  • Advanced working SQL/NoSQL knowledge and experience working with relational/non-relational databases.
  • Expert in Python.
  • Experience with big data tools: Hadoop, Spark, Kafka, NiFi, etc.
  • Experience with relational SQL (Oracle & PostgreSQL) and NoSQL (HDFS Hive, HBase, Cassandra) databases.
  • Experience with data pipeline and workflow management tools like Airflow.
  • Experience building and optimizing big data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable big data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.