
Big Data Jobs

Data Cleansing Big Data Technologies Data Transformation Programming Data Warehousing
We are seeking a motivated and enthusiastic Data Processing Engineer to join our team. This role is perfect for recent graduates or individuals looking to start their career in data processing.

As a Data Processing Engineer, you will be responsible for handling and organizing data efficiently. Your key responsibilities will include:
  • **Data Entry**: Accurately input data into our systems while ensuring all information is correct and up to date.
  • **Data Quality Assurance**: Review and validate data to identify any errors or inconsistencies, and fix them promptly.
  • **Data Maintenance**: Regularly update and maintain databases to keep them organized and accessible for team members.
  • **Reporting**: Generate basic reports from the data you process, helping the team make informed decisions.

To be successful in this role, you should possess strong attention to detail and the ability to work independently from home. You must be comfortable using computers and familiar with basic data processing tools. Strong communication skills are essential to collaborate effectively with team members. A proactive approach to problem-solving will also be important as you work through data challenges.

We welcome applications from female candidates who have completed their 10th grade education and are eager to begin a full-time position in data processing. This is a fantastic opportunity to learn, grow, and launch your career in the field of data.
View all details
  • Fresher
  • 6.5 Lac/Yr
  • Basavanagudi Bangalore
Data Verification Google Sheets Keyboard Shortcuts Numeric Keypad Spreadsheet Management Data Input Data Quality Control Data Formatting Data Accuracy Data Extraction Data Cleansing Data Entry Software Data Collection Microsoft Excel Data Visualization Data Quality Data Transformation Big Data Technologies Programming Data Warehousing
We are looking for a motivated Data Processing Engineer to join our team. This part-time role is perfect for freshers who are eager to learn and grow in the field of data management. You will work from home, contributing to our data processing needs.
View all details
  • 0 - 1 yrs
  • 8.0 Lac/Yr
  • Female
  • Mall Road Amritsar
Data Integration Data Warehousing SQL Informatica ETL Hadoop Big Data Python
We are looking for a motivated Data Engineer to join our team. This part-time position allows you to work from home and is suitable for individuals with little to no experience. The ideal candidate will help us manage and process data to ensure it meets the needs of the business.

**Key Responsibilities:**
  • **Data Collection:** Gather data from various sources to prepare for analysis. It's important to ensure the data is accurate and up to date.
  • **Data Cleaning:** Clean and organize raw data to make it usable. This involves removing errors and inconsistencies, which is crucial for reliable analysis.
  • **Data Storage:** Help in storing data in databases or cloud storage systems. Proper organization allows easy access and retrieval of data when needed.
  • **Collaboration:** Work with other team members to understand their data needs. Communication is key to delivering the right data for their projects.
  • **Support:** Assist in monitoring data systems and providing technical support. Being proactive in identifying issues helps keep the data flow smooth.

**Required Skills and Expectations:**
Candidates should have a basic understanding of data management principles. Familiarity with data cleaning tools and database management systems is a plus. The ability to learn new software quickly and strong attention to detail are essential. Good communication skills are important for working with teammates and understanding project requirements. We encourage fresh graduates and those with relevant qualifications to apply.
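As a rough illustration of the data-cleaning step this kind of role involves (a hypothetical sketch, not the employer's actual tooling), a few lines of Python can show the idea of stripping whitespace, dropping incomplete rows, and removing duplicates:

```python
def clean_records(records):
    """Normalize and de-duplicate a list of record dicts (illustrative helper)."""
    seen = set()
    cleaned = []
    for rec in records:
        # Strip stray whitespace from every string field.
        rec = {k: v.strip() for k, v in rec.items() if isinstance(v, str)}
        if not all(rec.values()):        # drop rows with empty fields
            continue
        key = tuple(sorted(rec.items()))
        if key in seen:                  # drop exact duplicates
            continue
        seen.add(key)
        cleaned.append(rec)
    return cleaned

raw = [
    {"name": " Asha ", "city": "Chennai"},
    {"name": "Asha", "city": "Chennai"},   # duplicate once stripped
    {"name": "", "city": "Pune"},          # missing value, dropped
]
print(clean_records(raw))  # → [{'name': 'Asha', 'city': 'Chennai'}]
```

In practice the same pattern is usually applied with pandas or a dedicated cleaning tool rather than hand-rolled loops.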
View all details
  • 4 - 6 yrs
  • 7.0 Lac/Yr
  • Chennai
Data Governance Data Lake Data Loading Data Pipelines Data Transformation Query Optimization Performance Tuning Data Architecture Data Warehousing SQL ETL Scripting Data Integration Data Migration Data Modeling Database Design Snowflake Python Big Data Cloud Computing
We are looking for a certified Snowflake Developer with 4 to 6 years of experience in Chennai.
  • Strong knowledge of SQL
  • Experience with Snowflake architecture
  • Understanding of data warehousing concepts
  • Experience with ETL/ELT tools
  • Knowledge of cloud platforms (AWS, Azure, GCP)
  • Programming knowledge in Python, Java, or Scala
View all details

  • 4 - 10 yrs
  • 20.0 Lac/Yr
  • Bhubaneswar
AIML Engineer Artificial Intelligence Machine Learning Engineer Frameworks Cloud Services Data Handling Tools MLOps Big Data
AIML Engineer Job Description
Exp: 4+ yrs | NP: Immediate / 15 days | Location: Bhubaneswar / Bangalore / Chennai

Position Overview
We are seeking a highly skilled Artificial Intelligence & Machine Learning Engineer to design, develop, and deploy intelligent systems that solve complex business problems. The role involves applying advanced algorithms, data science techniques, and deep learning frameworks to build scalable AI solutions.

Key Responsibilities
  • Model Development: Design, train, and optimize ML/DL models for predictive analytics, NLP, computer vision, and recommendation systems.
  • Data Engineering: Collect, preprocess, and analyze large datasets to ensure quality and usability.
  • Deployment & Integration: Implement AI models into production environments using cloud platforms (AWS, Azure, GCP).
  • Research & Innovation: Stay updated with emerging AI/ML technologies and apply them to business challenges.
  • Collaboration: Work closely with product managers, data scientists, and software engineers to deliver end-to-end AI solutions.
  • Performance Monitoring: Continuously evaluate and improve deployed models for accuracy, scalability, and efficiency.

Required Skills & Qualifications
  • Strong programming skills in Python, R, Java, or C++.
  • Expertise in ML/DL frameworks: TensorFlow, PyTorch, Keras, Scikit-learn.
  • Solid understanding of statistics, probability, and linear algebra.
  • Experience with cloud services: AWS SageMaker, Azure ML, Google AI Platform.
  • Knowledge of data handling tools: SQL, Spark, Hadoop.
  • Familiarity with NLP, computer vision, and reinforcement learning.
  • Strong problem-solving, communication, and teamwork skills.

Preferred Qualifications
  • Experience with MLOps (CI/CD pipelines for ML).
  • Hands-on experience with big data technologies.
  • Publications or contributions to AI research.

Interested candidates can share their CV to Rekha.C@eagledrift.com
View all details

Looking For Data Architect

Toolify Private Limited

  • 9 - 15 yrs
  • 40.0 Lac/Yr
  • Jaipur
Data Architect Databricks Developer Apache Spark Delta Lake Azure Synapse Azure Data AWS Redshift AWS Glue SQL Pyspark Developer Kafka Engineer Big Data
Job Summary
We are seeking a skilled Data Architect to lead the design and implementation of high-performance, scalable data platforms. This role involves architecting modern data lakes, warehouses, and streaming systems using Databricks and cloud technologies. If you enjoy solving complex data challenges and driving data-driven decision-making, this role is for you.

Key Responsibilities
  • Design and implement scalable data lakes, data warehouses, and real-time streaming architectures
  • Build, optimize, and manage Databricks solutions using Spark, Delta Lake, Workflows, and SQL Analytics
  • Develop cloud-native data platforms on Azure (Synapse, Data Factory, Data Lake) and AWS (Redshift, Glue, S3)
  • Create and automate ETL/ELT pipelines using Apache Spark, PySpark, and cloud tools
  • Design and maintain data models (dimensional, normalized, star schemas) to support analytics and reporting
  • Leverage big data technologies such as Hadoop, Kafka, and Scala for large-scale data processing
  • Ensure data governance, security, and compliance with standards like GDPR and HIPAA
  • Optimize Spark workloads and storage for performance and cost efficiency
  • Collaborate with engineering, analytics, and business teams to align data solutions with organizational goals

Required Skills & Qualifications
  • 8+ years of experience in Data Architecture, Data Engineering, or Analytics
  • Strong hands-on experience with Databricks (Delta Lake, Spark, MLflow, Pipelines)
  • Expertise in Azure (Synapse, Data Factory, Data Lake) and AWS (Redshift, S3, Glue)
  • Proficient in SQL and Python or Scala
  • Experience with NoSQL databases (e.g., MongoDB) and streaming platforms (e.g., Kafka)
  • Solid understanding of data governance, security, and compliance best practices
  • Excellent problem-solving, communication, and cross-functional collaboration skills

Looking forward to receiving suitable profiles at the earliest.
View all details
  • Fresher
  • 3.0 Lac/Yr
  • Samba Jammu
Typing Data Entry
The company provides a golden opportunity: a work-from-home job. Latest data entry project available. Spare your free time for this job; no experience is required. Housewives and students are welcome.
View all details

Senior Database Analyst

Indievisa Immigration Services Pvt Ltd

Database Administration Data Administrator Data Analyst Database Algorithm Engineer Database Administration Database Designer Data Care Solutions Data Conversion Operator Data Analysis Data Architect Data Encoder Data Operator Backup and Recovery Data Quality Database Security ETL Processes Normalization Performance Tuning Query Optimization Big Data Technologies Data Mining Indexing Data Migration Stored Procedures Relational Databases Database Design Reporting Tools Data Modeling Data Warehousing
Database analysts design, develop and administer data management solutions using database management software. Data administrators develop and implement data administration policy, standards and models. They are employed in information technology consulting firms and in information technology units throughout the private and public sectors. This group performs some or all of the following duties:

Database analysts
  • Collect and document user requirements
  • Design and develop database architecture for information systems projects
  • Design, construct, modify, integrate, implement and test data models and database management systems
  • Conduct research and provide advice to other informatics professionals regarding the selection, application and implementation of database management tools
  • Operate database management systems to analyze data and perform data mining analysis
  • May lead, co-ordinate or supervise other workers in this group

Data administrators
  • Develop and implement data administration policy, standards and models
  • Research and document data requirements, data collection and administration policy, data access rules and security
  • Develop policies and procedures for network and/or Internet database access and usage and for the backup and recovery of data
  • Conduct research and provide advice to other information systems professionals regarding the collection, availability, security and suitability of data
  • Write scripts related to stored procedures and triggers
  • May lead and co-ordinate teams of data administrators in the development and implementation of data policies, standards and models
View all details
  • 8 - 12 yrs
  • Kharadi Pune
Core Java Apache Beam Google Cloud Platforms Spring Boot Big Data Microservices Data Base
Requirements:
  • 8+ years of experience in Core Java and Spring Framework (Mandatory)
  • Minimum 2 years of experience in Google Cloud Platform (GCP) (Mandatory)
  • Hands-on experience with Apache Beam / Dataflow for building ETL/data pipelines (Mandatory)
  • Strong expertise in big data processing on distributed systems
  • Proficiency with RDBMS, NoSQL, and cloud-native databases
  • Experience in handling multiple data formats (flat file, JSON, Avro, XML, etc.) with schema/contract definitions
  • Experience in microservices architecture and API integration patterns
  • Strong understanding of data structures and data model design
View all details
  • 8 - 10 yrs
  • Pune
Kafka Scala Spark Hadoop Airflow Data Lakes Kappa Kappa ++ Architectures RDBMS NoSQL Cassandra Redis Oracle
Sr. Big Data Engineer
Location: Pune | Experience: 10+ years | Mode: Hybrid

Role Overview:
We are seeking a talented Sr. Big Data Engineer to design, develop, and support a highly scalable, distributed SaaS-based Security Risk Prioritization product. You will lead the design and evolution of our data platform and pipelines, providing technical leadership to a team of engineers and architects.

Key Responsibilities:
  • Provide technical leadership on data platform design, roadmaps, and architecture.
  • Design and implement scalable architecture for Big Data and microservices environments.
  • Drive technology explorations, leveraging knowledge of internal and industry prior art.
  • Ensure quality architecture and design of systems, focusing on performance, scalability, and security.
  • Mentor and provide technical guidance to other engineers.

Required Skills & Technologies:
  • Mandatory: Kafka, Scala, Spark.
  • Big Data & Data Streaming: Spark, Kafka, Hadoop, Presto, Airflow, data lakes, lambda, kappa, and kappa++ architectures with Flink data streaming.
  • Databases & Caching: RDBMS, NoSQL, Oracle, Cassandra, Redis.
  • Search Solutions: Solr, Elastic.
  • ML & Automation: Experience with ML model engineering and related deployment, scripting, and automation.
  • Architecture: In-depth experience with messaging queues and caching components.
  • Other Skills: Strong troubleshooting and performance benchmarking skills for Big Data technologies.

Qualifications:
  • Bachelor's degree in Computer Science or equivalent.
  • 8+ years of total experience, with 6+ years relevant.
  • 2+ years in designing Big Data solutions with Spark.
  • 3+ years with Kafka and performance testing for large infrastructure.
View all details

Big Data Engineer (Spark and Scala)

E2E Infoware Management Services

Scala Spark Pyspark
Role: Big Data Developer - Scala Spark
Exp: 5+ yrs | Mode of Work: WFO, all 5 days | Location: Chennai/Bangalore/Pune | Interview: any one level F2F

Job Description:
  • Total IT/development experience of 3+ years
  • Experience in Spark (Scala-Spark), developing Big Data applications on Hadoop, Hive and/or Kafka, HBase, MongoDB
  • Deep knowledge of Scala-Spark libraries to develop and debug complex data engineering challenges
  • Experience in developing sustainable data-driven solutions with current new-generation data technologies to drive our business and technology strategies
  • Exposure to deploying on cloud platforms
  • At least 2 years of development experience designing and developing data pipelines for data ingestion or transformation using Spark-Scala
  • At least 2 years of development experience in the following Big Data frameworks: file formats (Parquet, Avro, ORC), resource management, distributed processing and RDBMS
  • At least 2 years of experience developing applications in Agile with monitoring, build tools, version control, unit testing, Unix shell scripting, TDD, CI/CD, and change management to support DevOps
View all details
  • 4 - 10 yrs
  • 36000/Yr
  • Missouri +1 USA
Data Warehousing Data Management Data Integration SQL Data Extraction ETL Tool Hadoop AWS Big Data Python
Role Overview
This position requires a detail-oriented data engineer who can independently architect and implement data pipelines, while also serving as a trusted technical partner in client engagements and stakeholder meetings. You'll work hands-on with PySpark, Airflow, Python, and SQL, driving end-to-end data migration and platform modernization efforts across Azure and AWS.

In addition to technical execution, you'll contribute to sprint planning, backlog prioritization, and continuous integration/deployment of data infrastructure. This is a senior-level individual contributor role with direct visibility across engineering, product, and client delivery functions.

Key Responsibilities
  • Lead design and development of enterprise-grade data pipelines and cloud data migration architectures.
  • Build scalable, maintainable ETL/ELT pipelines using Apache Airflow, PySpark, and modern data services.
  • Write efficient, modular, and well-tested Python code, grounded in clean architecture and performance principles.
  • Develop and optimize complex SQL queries across diverse relational and analytical databases.
  • Contribute to and uphold standards for data modeling, data governance, and pipeline performance.
  • Own the implementation of CI/CD pipelines to enable reliable deployment of data workflows and infrastructure (e.g., GitHub Actions, Azure DevOps, Jenkins).
  • Embed unit testing, integration testing, and monitoring in all stages of the data pipeline lifecycle.
  • Participate actively in Agile ceremonies: sprint planning, daily stand-ups, retrospectives, and backlog grooming.
  • Collaborate directly with clients, stakeholders, and cross-functional teams to translate business needs into scalable technical solutions.
  • Act as a technical authority within the team, leading architectural decisions and contributing to internal best practices and documentation.
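The "modular, well-tested Python" this posting asks for usually means small pure functions that can be unit-tested outside the pipeline runner. A hedged sketch (function and field names invented, not this employer's code) of one such transform step, which could equally be called from an Airflow task or a PySpark UDF:

```python
# Hypothetical ETL transform step: apply a data-quality gate and
# normalize one field. Kept as a pure function so it is trivially testable.
def transform(rows):
    """Keep rows with a positive amount and upper-case the currency code."""
    out = []
    for row in rows:
        if row.get("amount", 0) <= 0:     # basic data-quality gate
            continue
        out.append({**row, "currency": row["currency"].upper()})
    return out

batch = [{"amount": 120, "currency": "inr"},
         {"amount": -5, "currency": "usd"}]
print(transform(batch))  # → [{'amount': 120, 'currency': 'INR'}]
```

Keeping transforms free of orchestration details is what lets unit tests and CI/CD (GitHub Actions, Jenkins, etc.) exercise them without spinning up a cluster.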
View all details

Big Data Lead

Hexaware Technologies

Snowflake Python SQL
  • Must have 4-6 years of experience in data warehouse, ETL, and BI projects
  • Must have at least 4+ years of experience in Snowflake; expertise in Snowflake architecture is a must
  • Must have at least 3+ years of experience and a strong hold in Python/PySpark
  • Must have experience implementing complex stored procedures and standard DWH and ETL concepts
  • Proficient in Oracle database, complex PL/SQL, Unix shell scripting, performance tuning and troubleshooting
  • Good to have: experience with AWS services and creating DevOps templates for various AWS services
  • Experience in using GitHub and Jenkins
  • Good communication and analytical skills
  • Snowflake certification is desirable
View all details
  • 5 - 7 yrs
  • 10.0 Lac/Yr
  • Coimbatore
Big Data Analytics RIDGE Elastic Reality Python AWS Cloud Engineer Agile
Qualifications:
  • 5-7 years of professional experience in machine learning, data science, or related roles.
  • Good exposure to and understanding of time series modelling using ARIMA and ARIMAX.
  • Exposure to handling underfitting and overfitting; capable of applying techniques that help generalize models.
  • Regularization techniques LASSO, Ridge and Elastic Net, and when to apply them.
  • Good exposure to unsupervised machine learning such as clustering, dimensionality reduction, and outlier detection.
  • Ability to understand how models are optimized using various techniques, including the gradient descent approach.
  • Good understanding of deep learning algorithms (CNN, RNN, LSTM) and how to control overfitting in such cases.
  • Good hands-on data engineering experience processing data at huge scale using Big Data (Spark/Hive).
  • Good coding practices to write production-ready code for creating data pipelines for models to consume.
  • Very good hands-on skills in Python (Pandas/NumPy/Scikit-learn/NLTK/spaCy/Matplotlib).
  • Able to apply the right level of ML techniques for the given problem statement.
  • Ability to assess information contained in data and engineer appropriate features.
  • Familiar with the Python language and various platforms for hosting ML models.
  • Expert in model training, tuning and validation.
  • Expert in statistical techniques, deep learning methodologies, GenAI, and alternate techniques such as Bayesian methods.
  • Exposure to big data and related models.
  • Ability to articulate model choice and convert outcomes for business decision-making.
  • Expert in the model development lifecycle, from sourcing to model monitoring.
  • Ability to create code that is highly performant on the given platform.
  • Ability to map the model and business use case to the appropriate platform and tools needed.
  • Understanding of technical and machine learning governance.
  • Ability to validate and articulate model choices with relevant metrics (precision, recall, confusion matrix, RMSE, …)
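The Ridge regularization the posting names has a simple closed form, w = (XᵀX + αI)⁻¹Xᵀy, where the L2 penalty α shrinks the weights toward zero to fight overfitting. A purely illustrative NumPy sketch (data and α are invented; scikit-learn's Ridge would normally be used):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form Ridge regression: solve (X^T X + alpha*I) w = X^T y.
    alpha=0 reduces to ordinary least squares; larger alpha shrinks w."""
    n_features = X.shape[1]
    A = X.T @ X + alpha * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
w_ols = ridge_fit(X, y, alpha=0.0)    # plain least squares fit
w_reg = ridge_fit(X, y, alpha=0.1)    # same fit with L2 shrinkage
print(w_ols)  # → [1. 2.]
```

LASSO (L1) and Elastic Net (mixed L1/L2) have no such closed form and are fit iteratively, which is why libraries use coordinate descent for them.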
View all details

Opening For Cloud Data Engineer

Talme Technologies Pvt Ltd

Designing and Implementing Data Architecture Strategies Data Integration Data Management Supporting Analytics Technology Selection and Performance Optimization. Technical Skills: In-depth Knowledge Of AWS Services (IAM, Redshift, NoSQL) Data Processing and Analysis Tools (AWS Glue, EMR) Big Data Frameworks (Hadoop, Spark) ETL Tools (IBM DataStage, ODI, …)
We are on the lookout for a seasoned Cloud Data Lead. We are eager to connect with you if you have extensive experience in cloud platforms, data architecture, and leadership!
View all details

Big Data Analytics

Creative Consultant & Contractor

  • 3 - 7 yrs
  • 9.0 Lac/Yr
  • Bangalore
Hadoop Developer SQL Server Developer Python Developer SCALA Data Warehouse Developer Data Scientist Data Analyst
Hi, we have a job opportunity for the post of Big Data Analyst or Developer at Bangalore, Karnataka. Candidates with a minimum of 3+ years of experience in the same field who are ready to join immediately can apply. The company will give you a good salary and other benefits as well.
View all details

IT Trainer

Vijaya Management Services

  • 2 - 8 yrs
  • 5.0 Lac/Yr
  • Pune
Java Python Big Data Technologies Hadoop Spark PySpark Kafka Airflow Machine Learning Deep Learning Tableau Power BI
Training on Java, Python, Big Data technologies, Hadoop, Spark, PySpark, Kafka, Airflow, Machine Learning, Deep Learning, Tableau, and Power BI.
Minimum experience: 2 to 3 years of training experience.
View all details

Big Data Engineer

Krtrimaiq Cognitive Solutions

  • 4 - 10 yrs
  • 20.0 Lac/Yr
  • Bangalore
Big Data Big Data Architecture Big Data Engineer Big Data Architect
  • Design, develop, and maintain scalable data processing pipelines using Kafka, PySpark, Python/Scala, and Spark.
  • Work extensively with the Kafka and Hadoop ecosystem, including HDFS, Hive, and other related technologies.
  • Write efficient SQL queries for data extraction, transformation, and analysis.
  • Implement and manage Kafka streams for real-time data processing.
  • Utilize scheduling tools to automate data workflows and processes.
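The real-time processing described here would in practice run on Kafka with Spark Structured Streaming; as a toy, pure-Python sketch of the tumbling-window aggregation idea behind such pipelines (event names and window size are invented for illustration):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Bucket (timestamp, key) events into fixed non-overlapping windows
    and count per key -- the same idea Kafka/Spark streaming applies at scale."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs  # align to window
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(5, "login"), (30, "login"), (65, "login"), (70, "click")]
print(tumbling_window_counts(events))
# → {(0, 'login'): 2, (60, 'login'): 1, (60, 'click'): 1}
```

A streaming engine adds what this sketch omits: unbounded input, late-event handling via watermarks, and fault-tolerant state.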
View all details

Hiring For Big Data Developer

krtrimaiq cognitive solution

  • 4 - 8 yrs
  • Bangalore
Python SCALA SQL Hadoop
We are looking for an experienced Big Data Developer (immediate joiners only) with a strong background in Kafka, PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 5 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.

Key Responsibilities:
  • Design, develop, and maintain scalable data processing pipelines using Kafka, PySpark, Python/Scala, and Spark.
  • Work extensively with the Kafka and Hadoop ecosystem, including HDFS, Hive, and other related technologies.
  • Write efficient SQL queries for data extraction, transformation, and analysis.
  • Implement and manage Kafka streams for real-time data processing.
  • Utilize scheduling tools to automate data workflows and processes.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
  • Ensure data quality and integrity by implementing robust data validation processes.
  • Optimize existing data processes for performance and scalability.
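The "efficient SQL queries for data extraction, transformation, and analysis" line can be illustrated with a small self-contained example; SQLite stands in for Hive or a warehouse, and the table and data are invented:

```python
import sqlite3

# Toy warehouse table; names and values are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("south", 100.0), ("south", 50.0), ("north", 75.0)])

# Extraction + transformation in one query: revenue per region, largest first.
rows = conn.execute(
    "SELECT region, SUM(amount) AS revenue "
    "FROM orders GROUP BY region ORDER BY revenue DESC"
).fetchall()
print(rows)  # → [('south', 150.0), ('north', 75.0)]
```

On Hive or Spark SQL the same GROUP BY would be distributed across partitions, but the query shape is identical.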
View all details
  • 2 - 6 yrs
  • 6.5 Lac/Yr
  • Bangalore National Highway Chennai
Data Warehousing Apache Data Integration Data Management SQL ETL Tool Hadoop AWS Big Data Python Java Java-script React Js
Invitation for B2B Partnerships: Seeking Software Development Support! We are looking to collaborate with companies that can provide 10 skilled Data Engineers to support our data engineering requirements pipeline for European clients on a contract basis.

Requirements:
  • Expertise in data engineering, including data processing and ETL.
  • Proficiency in SQL and NoSQL databases.
  • Experience with Hadoop or Spark.
  • Experience in AWS or Azure.
  • Snowflake is an added advantage.

If your company is equipped to provide top-notch Data Engineers, we invite you to submit your proposal with terms and conditions. Please contact us by email or WhatsApp: rakanalytics@gmail.com (or) 9900173022
View all details
  • Fresher
  • 3.0 Lac/Yr
  • Eluru
Typing Data Entry Operator
1. Do simple online form-filling work from home without any investment.
2. You must have a PC, laptop or Android smartphone with an internet connection.
3. Basic computer and internet knowledge is required.
4. Work 2-3 hours daily in your free time.
5. Work from home, office, net cafe, etc.
6. No experience needed.
7. Housewives, students, professionals, job seekers, etc. can work.
8. Age must be above 18 years.
9. Get guaranteed monthly income.
10. Both male and female candidates can apply.

We provide the software to you through email. We provide 1,200 online forms and pay Rs. 16 for each correct form entry; you have to complete this work in 12 days. We also provide a database file, software and a demo.
View all details

Data Engineer

United Technology

  • 1 - 3 yrs
  • 4.0 Lac/Yr
  • Chennai
Data Integration Data Engineer Hadoop ETL SQL Informatica Apache AWS Big Data Python
We are looking for a Data Engineer with 1 to 3 years of experience in Chennai. Immediate joiners preferred.
View all details
  • 0 - 1 yrs
  • 1.5 Lac/Yr
  • Chennai
Python Big Data Technologies Statistical Modeling Deep Learning Data Wrangling
We are looking for graduate freshers (any degree) for Data Science in Chennai.
Stipend: based on the student's performance during the internship; minimum 3K, and depending on performance it can go up to 5K.
Salary package: after the internship, the candidate will receive a salary of 1.5 LPA to 2 LPA.
View all details
View More Jobs