
Data Engineer Jobs

  • Fresher
  • 4.5 Lac/Yr
  • Kotha Parcha Allahabad
Database MySQL MS SQL PostgreSQL SQL MySQL Database Administration DBA Compliance Standards Backup and Recovery Monitoring Tools Network Infrastructure Power and Cooling Systems Security Protocols Server Hardware Storage Systems Virtualization Technologies Risk Assessment Vendor Management Scripting Languages Automation Tools Capacity Planning Change Management Troubleshooting Data Center Operations Problem-solving Documentation Disaster Recovery Work From Home
As a Data Center Engineer, you will play a vital role in managing and maintaining our data center operations. This is a part-time position that allows you to work from home, providing flexibility while you kickstart your career in this exciting field.

Key Responsibilities:
1. **System Monitoring**: You will monitor the performance of servers and networks to ensure they are running efficiently. This involves checking for issues and reporting any findings to the team.
2. **Data Management**: You will assist in organizing and backing up data to ensure its safety and accessibility. Proper data management is crucial for smooth operations.
3. **Troubleshooting**: When issues arise, you will help diagnose and resolve technical problems. Your ability to quickly identify and fix issues will be essential to minimize downtime.
4. **Documentation**: You will maintain proper records of data center operations, including system configurations and incidents. This helps in keeping track of changes and learning from past experiences.
5. **Collaboration**: You will work closely with other team members, ensuring that tasks are completed efficiently and that knowledge is shared across the team.

Required Skills and Expectations: You should have a basic understanding of computer systems and networks. Strong problem-solving abilities and attention to detail are essential. Being a quick learner and having good communication skills will also help you succeed in this role.
  • 4 - 10 yrs
  • Qatar
ADF Hana SAP SQL
Minimum 4+ years of hands-on experience in data engineering or data management, preferably in the energy or industrial domain.

Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to process structured and unstructured data from diverse sources including SAP ECC / S/4HANA, SQL Server, Oracle, other RDBMS, Excel/CSV files, Parquet, JSON, XML, and APIs.
- Collaborate with business and functional teams to define data integration and transformation strategies aligned with enterprise objectives.
- Optimize data pipelines, storage, and query performance for high-volume datasets across cloud or on-prem environments.
- Implement robust data quality, validation, and monitoring frameworks to ensure accuracy and reliability.
- Develop and automate data workflows, versioning, and deployment pipelines to streamline data operations.
- Support the deployment, monitoring, and governance of data infrastructure and warehouse/lakehouse environments.
- Work closely with Data Architects to establish best practices for data modeling, warehousing, and lineage tracking.
- Enable incremental data processing (CDC) and efficient handling of batch and near-real-time data pipelines.
- Collaborate with analytics teams to enable insightful visualization and reporting solutions.
- Prepare and maintain technical documentation for pipelines, transformations, and data flows.
- Collaborate with business stakeholders to understand requirements and ensure alignment with business goals.

Qualification and Experience:
- BE/B.Tech/Science graduate in Computer Science, Information Technology, or a related field.
- Strong foundation in data modeling, performance tuning, and ETL/ELT orchestration.
- Experience working with enterprise data sources such as SAP ECC / S/4HANA, SQL/Oracle databases, and file-based data (Excel, CSV, Parquet, JSON).
- Experience with one or more cloud platforms such as Azure, AWS, or GCP.
- Familiarity with tools such as Microsoft Fabric, Databricks, Power BI, or equivalent tools is an advantage.
- Experience in data migration or transformation projects is a strong advantage.
- Understanding of data governance, data quality, and metadata/lineage concepts is a plus.

Key Deliverables:
- ETL/ELT pipelines for all assigned data sources
- Data ingestion from SAP, databases, files, and APIs
- Bronze/Silver/Gold data layer implementation
- Data quality checks and monitoring setup
- Optimized and scalable data pipelines
- Develop the Power BI dashboard
- Technical documentation and runbooks
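The incremental (CDC) processing this listing calls for is often implemented with a high-watermark pattern: each run picks up only rows changed since the last recorded timestamp. The sketch below is purely illustrative, not the employer's implementation; the `updated_at` column and in-memory rows are hypothetical stand-ins for a real source table.

```python
# Minimal high-watermark incremental extraction sketch (hypothetical schema).
# Each run processes only rows changed since the previous run's watermark.

def incremental_extract(rows, last_watermark):
    """Return rows newer than last_watermark, plus the new watermark."""
    fresh = [r for r in rows if r["updated_at"] > last_watermark]
    new_watermark = max((r["updated_at"] for r in fresh), default=last_watermark)
    return fresh, new_watermark

source = [
    {"id": 1, "updated_at": "2024-01-01T10:00"},
    {"id": 2, "updated_at": "2024-01-02T09:30"},
    {"id": 3, "updated_at": "2024-01-03T08:15"},
]

# A run with watermark 2024-01-01T12:00 picks up only rows 2 and 3.
batch, wm = incremental_extract(source, "2024-01-01T12:00")
```

In a real pipeline the watermark would be persisted (e.g., in a control table) between runs so batch and near-real-time loads stay idempotent.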
  • 5 - 7 yrs
  • 3.0 Lac/Yr
  • United States
Information Technology Computer Hardware Problem Solving Hardware Troubleshooting Hardware Support Data Engineer
We are seeking a skilled Mid-Level Data Engineer to design, build, and optimize scalable data pipelines and architectures. The ideal candidate will have strong experience in data processing, ETL development, and cloud-based data platforms, supporting data-driven decision-making across the organization.
  • 5 - 7 yrs
  • 12.0 Lac/Yr
  • Chennai
Snowflake Developer dbt Dagster SQL Python Git CI/CD Pipelines Data Modeling Data Warehouse Architecture Claude Copilot Data Extraction
We are looking for a Senior Data Engineer (Snowflake / dbt / Dagster / AI-assisted development) with 5+ years' experience in Chennai. Design and optimize data pipelines from SQL Server to Snowflake. Work with healthcare data formats, including EDI 835/837 if applicable. Use AI tools (LLMs, code assistants, automation agents) to improve engineering productivity and quality.

Data Engineer Jobs For M.C.A Freshers

SECRET TECHNOLOGIES INDIA VMS GROUP

  • 0 - 4 yrs
  • 40.0 Lac/Yr
  • Pune
Data Management Data Analysis Data Mining Informatica PLSQL SQL Oracle SQL Data Collection
As a Data Engineer, your responsibilities will include collecting and analyzing data to help inform business decisions. You will be responsible for data management, ensuring that data is accurate and up-to-date. This will involve using tools such as Informatica, PL/SQL, SQL, and Oracle SQL to manipulate and query large datasets.

Your skills should include a strong understanding of data analysis techniques, such as data mining and statistical analysis.
  • 0 - 1 yrs
  • 8.0 Lac/Yr
  • Female
  • Mall Road Amritsar
Data Integration Data Warehousing SQL Informatica ETL Hadoop Big Data Python
We are looking for a motivated Data Engineer to join our team. This part-time position allows you to work from home and is suitable for individuals with little to no experience. The ideal candidate will help us manage and process data to ensure it meets the needs of the business.

**Key Responsibilities:**
- **Data Collection:** Gather data from various sources to prepare for analysis. It's important to ensure the data is accurate and up-to-date.
- **Data Cleaning:** Clean and organize raw data to make it usable. This involves removing errors and inconsistencies, which is crucial for reliable analysis.
- **Data Storage:** Help in storing data in databases or cloud storage systems. Proper organization helps in easy access and retrieval of data when needed.
- **Collaboration:** Work with other team members to understand their data needs. Communication is key to delivering the right data for their projects.
- **Support:** Assist in monitoring data systems and providing technical support. Being proactive in identifying issues helps keep the data flow smooth.

**Required Skills and Expectations:** Candidates should have a basic understanding of data management principles. Familiarity with data cleaning tools and database management systems is a plus. The ability to learn new software quickly and a strong attention to detail are essential. Good communication skills are important for working with teammates and understanding project requirements. We encourage fresh graduates and those with relevant qualifications to apply.
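The data-cleaning responsibility above (removing errors, inconsistencies, and duplicates) can be sketched in a few lines. This is a toy illustration under assumed record shapes; the `name`/`email` fields and the dedup key are hypothetical.

```python
# Toy data-cleaning pass: trim whitespace, drop rows missing required
# fields, and remove duplicates while preserving input order.

def clean(records, required=("name", "email")):
    seen, out = set(), []
    for rec in records:
        # Normalize: strip surrounding whitespace from string values.
        rec = {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()}
        if any(not rec.get(col) for col in required):
            continue  # incomplete row: skip
        key = (rec["name"], rec["email"])
        if key in seen:
            continue  # duplicate row: skip
        seen.add(key)
        out.append(rec)
    return out

raw = [
    {"name": "  Asha ", "email": "asha@example.com"},
    {"name": "Asha", "email": "asha@example.com"},   # duplicate after trimming
    {"name": "", "email": "missing@example.com"},    # incomplete
]
cleaned = clean(raw)
```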
  • 3 - 8 yrs
  • Bangalore
Web Scraping Python
A Data Extraction Engineer designs extraction systems, not just scripts. They build and maintain a next-generation data acquisition platform that treats web scraping as a declarative, specification-driven discipline. Instead of hard-coding XPaths for every site, the Web Scraping Developer defines what data is needed (using schemas, natural language descriptions, or visual blueprints) and lets intelligent pipelines figure out how to get it.

Key Responsibilities:

Specification-Driven Extraction Engineering
- Design and maintain declarative extraction specifications (using Pydantic models, JSON schemas, or domain-specific languages) that describe exactly which fields to capture, their types, and validation rules.
- Implement pipelines that translate these specifications into executable extraction plans, leveraging both classical (Scrapy, Playwright) and AI-augmented (LLM-based semantic parsing) backends.
- Build reusable specification libraries for recurring data types (product prices, tariff codes, regulatory texts) to accelerate onboarding of new sources.

Autonomous & Self-Healing Systems
- Deploy self-healing spiders that automatically detect website layout changes and repair themselves using Model Context Protocol (MCP) servers (e.g., Scrapy MCP Server, Playwright MCP).
- Integrate semantic extraction (Scrapy-LLM, custom LLM pipelines) to eliminate selector brittleness: spiders rely on field descriptions, not fragile XPaths.
- Orchestrate complex, multi-step browsing workflows with agentic frameworks (BMAD/TEA, AutoGPT-like agents) that reason about page state, adapt to anti-bot measures, and correct their own behaviour in real time.

Platform Thinking & Reusability
- Move beyond one-off scrapers: build a component-based extraction platform where selectors, login handlers, and pagination logic are shared, versioned, and tested.
- Implement monitoring, alerting, and automatic rollback for failed extraction runs.
- Champion ethical crawling by design: rate limiting, robots.txt respect, and compliance with GDPR/CCPA are built into the specification layer, not retrofitted.

Collaboration & Continuous Innovation
- Partner with data scientists and domain experts to refine extraction specifications for complex, unstructured domains (e.g., legal texts, tariff classifications).
- Evaluate and pilot emerging tools to push automation coverage beyond 90%.
- Document and evangelise specification-driven best practices across the engineering organisation.

Candidate Profile:

Education and Experience
- Bachelor's degree in Computer Science
- 3+ years of experience in web scraping or data extraction

Skills and Competences
- Specification-Driven Extraction: experience defining extraction requirements via schemas (Pydantic, JSON Schema) and executing them through both traditional crawlers and LLM-based semantic parsers.
- Self-Healing & Semantic Extraction: hands-on use of Scrapy-LLM, Scrapy MCP Server, or similar systems that decouple field definitions from page structure.
- Agentic Workflows: familiarity with frameworks that give LLMs browser control (Playwright + MCP, BMAD/TEA) to handle complex, non-deterministic crawling tasks.
- Classical Scraping Fundamentals: you still know how to write a Scrapy spider or a Playwright script when needed, but you actively seek to replace that work with reusable, specification-driven components.
- Data Validation & Storage: ability to define validation rules within specifications and land clean data into SQL/NoSQL databases or data lakes.
- Python proficiency: the focus is on an extraction engineer who happens to use Python.
- HTTP, DOM, XPath, CSS.
- Basic API integration and authentication flows.

Preferred / Nice-to-Have Skills:
- Contributions to open-source scraping or AI-automation projects.
- Experience training or fine-tuning small LLMs for domain-specific extraction.
- Familiarity with data privacy engineering (GDPR, CCPA) baked into specification design.
- DevOps-light: Docker, CI/CD for testing extraction specifications.

Mindset & Approach (Non-Negotiable):
- Strong belief that the future of scraping is declarative, not imperative. You'd rather write a schema that says "extract the price" than debug an XPath when a website redesigns.
- Looking to shift from code that scrapes to systems that understand extraction.
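A declarative extraction specification of the kind described above can be sketched briefly. The listing mentions Pydantic; this sketch uses stdlib dataclasses instead so it stays dependency-free, and the field names, source URL, and validator logic are all hypothetical.

```python
# Sketch of a specification-driven extraction contract: the spec says WHAT
# fields exist and how to validate them; a backend (Scrapy, Playwright, or
# an LLM parser) would be responsible for HOW records are obtained.
from dataclasses import dataclass, field

@dataclass
class FieldSpec:
    name: str
    type_: type
    required: bool = True

@dataclass
class ExtractionSpec:
    source: str
    fields: list = field(default_factory=list)

    def validate(self, record):
        """Check one extracted record against the specification."""
        errors = []
        for f in self.fields:
            value = record.get(f.name)
            if value is None:
                if f.required:
                    errors.append(f"missing required field: {f.name}")
            elif not isinstance(value, f.type_):
                errors.append(f"{f.name}: expected {f.type_.__name__}")
        return errors

spec = ExtractionSpec(
    source="https://example.com/products",
    fields=[FieldSpec("title", str), FieldSpec("price", float)],
)
ok = spec.validate({"title": "Widget", "price": 9.99})
bad = spec.validate({"title": "Widget"})
```

The design point is the decoupling: when a site redesigns, the spec is unchanged and only the extraction backend must re-locate the fields.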
  • 5 - 11 yrs
  • 25.0 Lac/Yr
  • Bangalore
Apache Kafka Azure Grafana Data Warehousing
Role: Data Engineer 2.0
Location: Remote
Experience: Min. 5 years
Notice Period: Immediate to 15 days, or serving notice period

Key Responsibilities:
- Design and implement manual test strategies for real-time streaming use cases using Azure Service Bus, Event Hubs, Kafka, and Azure Functions.
- Validate Spark Streaming applications, including unbounded data flows, streaming DataFrames, checkpoints, and streaming joins.
- Develop test plans for containerized microservices deployed on Kubernetes, ensuring scalability and fault tolerance.
- Test data ingestion and transformation workflows across open table formats like Delta Lake, Apache Iceberg, and Hudi.

Good to Have:
- Monitoring and troubleshooting system performance using observability stacks such as Prometheus, Grafana, and ELK.
- Functional and performance testing on analytical databases and query engines such as Trino, StarRocks, and ClickHouse.
- Testing and validation of data products designed under data mesh architecture, ensuring domain-oriented data quality and governance.

Looking For Data Engineer

BSRI Solutions Pvt Ltd

  • 3 - 5 yrs
  • 16.0 Lac/Yr
  • Chennai
Python PySpark Developer Scala SQL Hive Hadoop Google Cloud Platform Kafka Developer Infrastructure as Code GitHub Agile Methodology ETL
Required Qualifications:
- 3+ years of demonstrated ability with Hive, Python, Spark/Scala, SQL, etc.
- Google Cloud Platform experience: BigQuery, Cloud Storage, Dataproc, Dataflow, Cloud Composer, Cloud SQL, Pub/Sub, Terraform, etc.
- Experience with the Hadoop ecosystem, Kafka, and PCF cloud services
- Familiarity with big data and machine learning tools and platforms
- Experience with BI tools, such as Alteryx, DataStage, QlikSense, etc.
- Design data pipelines and data robots; take a vision and bring it to life
- Master data engineer; mentors others; works closely with IT architects to set strategy and design projects
- Provide extensive technical and strategic advice and guidance to key stakeholders around the data transformation efforts
- Redesign data flows to prevent recurring data issues
- Strong analytical and problem-solving skills
- Excellent oral and written communication skills, as well as facilitation and presentation skills, and an engaging presentation style
- Ability to work as a global team member, as well as independently, in a changing environment, and to prioritize
- Ability to establish and maintain coordinated and effective working relationships with application implementation teams, IT project teams, business customers, and end users
- Ability to deliver work within deadlines
- Experience with agile/lean methodologies
- Experience working independently and with minimal supervision
- Experience with Test-Driven Development and software craftsmanship
- Experience with GitHub, AccuRev, or other version-control systems
- Experience with PuTTY
- Experience with DataStage
- Strong communication skills
- Ability to illustrate and convey ideas and prototypes effectively with team and partners
- Presence demonstrating confidence, the ability to learn quickly, and the ability to influence and shape ideas

Key Skills Required:
- Data Engineer
- Python / PySpark / Scala
- SQL & Hive
- Hadoop Ecosystem
- Data Pipeline Design & ETL Development
- Google Cloud Platform (BigQuery, Dataproc, Dataflow, Cloud Storage)
- Kafka / Streaming Data Processing
- Terraform (Infrastructure as Code)
- DataStage or Similar ETL Tools
- Version Control (GitHub or equivalent)
- Agile Methodologies
- Strong Analytical & Problem-Solving Skills
- Stakeholder Collaboration & Communication

Nice to Have:
- Cloud Composer, Cloud SQL, Pub/Sub
- BI Tools (Alteryx, QlikSense)
- Machine Learning Platform Exposure
- Test-Driven Development (TDD)
- Mentoring & Technical Leadership

Looking For Data Engineer

InfiCare Technologies

  • 10 - 15 yrs
  • 22.5 Lac/Yr
  • Delhi
AZURE AWS ETL Data Factory Data Warehousing ETL Tool SQL
Key Responsibilities:
- Design and manage data pipelines to transform and integrate structured and unstructured data.
- Ensure high data quality and performance.
- Support analytics, reporting, and business intelligence needs by preparing reliable data sets and models for stakeholders.
- Collaborate with Analysts, Digital Project Managers, Developers, and business teams to ensure data accessibility and usefulness.
- Enforce standards for data governance, security, and cost-effective operations.

Ideal candidates will thrive in a collaborative, mission-focused environment and excel in ETL/ELT engineering. They should have experience building scalable data solutions using modern data engineering technologies that impact organizational outcomes.

Required Qualifications:
- Strong proficiency in Structured Query Language (SQL) and at least one programming language such as Python or Scala.
- Hands-on experience developing ETL or ELT pipelines.
- Experience with cloud-native data services (e.g., AWS Glue, AWS Redshift, Azure Data Factory, Azure Synapse, Databricks).
- Good understanding of data modeling and data warehousing concepts.

Desired Qualifications:
- Design, build, and optimize scalable ETL or ELT pipelines handling both structured and unstructured data.
- Ingest and integrate data from internal and external sources into data lakes or data warehouses.
- Ensure that processed data is accurate, complete, and secure.

Outcomes include well-documented, automated pipelines that support downstream analytics without bottlenecks or data errors.
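The ETL/ELT pipeline pattern that recurs throughout these listings reduces to three composable stages. This is a minimal in-memory sketch under assumed data shapes; real pipelines would swap the source list and target dict for databases, files, or cloud services.

```python
# Minimal extract-transform-load sketch: the three stages are separate
# functions so each can be tested and replaced independently.

def extract(source):
    """Pull raw rows from the source system (here, an in-memory list)."""
    return list(source)

def transform(rows):
    """Normalize country codes and keep only complete rows."""
    return [
        {**r, "country": r["country"].upper()}
        for r in rows
        if r.get("country")
    ]

def load(rows, warehouse):
    """Upsert rows into the target keyed by id (a dict stands in here)."""
    for r in rows:
        warehouse[r["id"]] = r
    return warehouse

source = [
    {"id": 1, "country": "in"},
    {"id": 2, "country": ""},   # incomplete: dropped by transform
    {"id": 3, "country": "us"},
]
warehouse = load(transform(extract(source)), {})
```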
  • 6 - 12 yrs
  • 16.0 Lac/Yr
  • Bangalore
Python GCP Developer
Job Title: Data Engineer
Location: Bangalore
Experience: 6+ years
Notice Period: Immediate to 21 days

Must-Have Technical Skills (skill and expected depth):
- Python for Data Pipelines: independently written ingestion/transformation scripts, including pagination, exception handling, logging, and dataframe-level operations using Pandas, JSON, or GCP SDKs
- DBT (Data Build Tool): authored and executed DBT models and tests using YAML files and Jinja macros; contributed to CI test configs and schedule integration
- GCP (BigQuery, GCS, Cloud SQL): hands-on experience with at least two of the above tools in pipeline execution, e.g., used BigQuery for SQL transformation and GCS for raw/processed layer segregation
- AWS Lambda: integrated serverless functions to automate trigger points such as new file uploads, API call chaining, or job completion; used boto3 or GCP Pub/Sub hooks
- Data Quality & Validation: developed or plugged in validation layers for ingestion, such as record count matching, null/duplicate flagging, and recon table population
- Cloud-Native Modeling: adapted pre-existing logical models to ingestion logic, ensuring correct joins, partitioning strategy, and target-layer conformity (star/snowflake)
- Version Control & Agile: participated in Git branching workflows and sprint-based delivery (JIRA or similar); able to push/pull/test with basic conflict resolution
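The validation layer this listing describes (record-count matching plus null and duplicate flagging) can be sketched as a small reconciliation function. The data, key column, and report fields below are hypothetical; a real layer would write this report into a recon table.

```python
# Sketch of an ingestion validation layer: reconcile counts against the
# source, and flag null or duplicate values in a key column.

def validate_batch(source_count, rows, key="id"):
    report = {
        "count_match": source_count == len(rows),          # recon check
        "null_keys": sum(1 for r in rows if r.get(key) is None),
    }
    seen, dupes = set(), 0
    for r in rows:
        k = r.get(key)
        if k is not None and k in seen:
            dupes += 1
        seen.add(k)
    report["duplicate_keys"] = dupes
    return report

rows = [{"id": 1}, {"id": 2}, {"id": 2}, {"id": None}]
report = validate_batch(source_count=5, rows=rows)
```

A pipeline would typically fail (or quarantine the batch) whenever `count_match` is false or the null/duplicate counters exceed a threshold.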

Python Architect (12-19 Years)

Cynosure Corporate Solutions

  • 12 - 19 yrs
  • Chennai
Python Architecture Advanced Python Development System Design Microservices Data Engineering AI/ML Platforms API Design Performance Optimization Code Governance
We are looking for a highly experienced Python Architect to design and lead large-scale, enterprise-grade Python solutions for AI, data, and analytics platforms. The role requires deep hands-on expertise in Python across architecture, development, and production systems, supporting end-to-end AI and data-driven solutions.

Key Responsibilities:
- Architect scalable, high-performance systems using Python as the core technology
- Define technical architecture for AI, data engineering, and analytics platforms
- Lead design and development of microservices, APIs, and backend frameworks in Python
- Ensure production-ready implementations with a strong focus on performance, security, and reliability
- Collaborate with data science, ML, DevOps, and product teams to operationalize AI solutions
- Establish coding standards, best practices, and architectural guidelines
- Review code, mentor senior engineers, and provide technical leadership
- Drive innovation and continuous improvement in Python-based system design

Required Skills & Qualifications:
- 12-18 years of experience working fully and extensively in Python
- Strong expertise in Python system architecture and large-scale backend design
- Experience supporting AI/ML, data engineering, or analytics platforms
- Solid understanding of microservices, REST APIs, and distributed systems
- Hands-on experience with cloud platforms (AWS/Azure/GCP)
- Strong problem-solving skills and the ability to lead complex technical initiatives
- Excellent communication and stakeholder collaboration skills

Data Scientist

The Best Services & Enterprise's

  • 6 - 10 yrs
  • 13.0 Lac/Yr
  • Mumbai
Data Scientist Python Media Mix Models MTA MMM Data Engineering Python Data Engineer Data Management
Job Title: Data Scientist
Experience: 7+ years
Location: USA/Canada (remote) or offshore (remote)
Working Hours: USA EST

Job Description: We are looking for experienced Data Scientists with proven expertise in building Media Mix Models (MMM) and Multi-Touch Attribution (MTA) models for a long-term engagement with Univision. The ideal candidates should have a strong background in AdTech, data science, and analytics, with the ability to derive actionable insights from large datasets in the media and OTT domain.

Key Responsibilities:
- Develop advanced MMM (Media Mix Modeling) and MTA (Multi-Touch Attribution) models to optimize marketing and advertising strategies
- Analyze large volumes of structured and unstructured data to uncover trends, correlations, and actionable insights
- Build and deploy machine learning models and predictive algorithms to solve complex business problems
- Gather data from diverse sources, then clean and transform it for analysis
- Apply statistical techniques to validate models and ensure accuracy and reliability
- Automate analytical workflows and repetitive tasks using AI tools and scripting languages
- Create compelling visualizations, dashboards, and reports to communicate insights across teams
- Collaborate with cross-functional teams including AdSales, Data Engineering, and Analytics
- Stay updated on the latest developments in AI, machine learning, and media analytics

Required Skills & Experience:
- Specific MMM experience: Bayesian methods, causal inference, incrementality testing
- MTA expertise: attribution modeling, customer journey analysis, touchpoint optimization
- Tools: Python (scikit-learn, statsmodels), R (prophet, CausalImpact), SQL, Tableau/Power BI
- 6+ years of experience in data science, analytics, or machine learning roles
- At least 3 years of experience in AdTech or AdSales systems
- Hands-on experience in developing MMM and MTA models
- Strong understanding of OTT, digital media, and advertising ecosystems
- Proficiency in programming languages like Python, R, or SQL for data manipulation and modeling
- Experience working with large datasets, data pipelines, and BI/reporting tools
- Familiarity with statistical methods, experiment design, and model evaluation metrics
- Excellent problem-solving, communication, and stakeholder management skills
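To make the attribution-modeling terminology above concrete, here is a toy comparison of two simple rules for crediting a conversion across a customer journey. This is purely illustrative (real MTA, as the listing notes, uses much richer statistical and causal models); the journey and channel names are made up.

```python
# Toy multi-touch attribution: distribute one conversion's credit across
# the touchpoints in a journey under last-touch and linear rules.

def last_touch(journey):
    """All credit goes to the final touchpoint before conversion."""
    return {journey[-1]: 1.0}

def linear(journey):
    """Credit is split equally across every touchpoint."""
    share = 1.0 / len(journey)
    credit = {}
    for touchpoint in journey:
        credit[touchpoint] = credit.get(touchpoint, 0.0) + share
    return credit

journey = ["tv", "search", "social", "search"]
lt = last_touch(journey)
lin = linear(journey)
```

The gap between the two outputs (search gets 100% under last-touch but 50% under linear) is exactly the kind of discrepancy that motivates the more principled Bayesian and incrementality-testing approaches the listing asks for.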
  • Fresher
  • 3.5 Lac/Yr
  • Coimbatore
Work From Home Home Based Work Data Engineer
We are looking to fill 999 Data Engineer posts in Coimbatore; candidates should have deep knowledge of data engineering. Required educational qualification: 10th pass or 12th pass.
  • 2 - 8 yrs
  • United States
AWS Certification Data Modeling
Job Title: Data Engineer
Location: [Remote / Onsite / Hybrid]
Employment Type: [Full-time / Contract / W2 / C2C]

Job Summary: We are looking for a skilled Data Engineer to design, develop, and maintain scalable data pipelines and infrastructure. The ideal candidate will have hands-on experience with cloud data platforms, large datasets, and data integration tools to ensure the accuracy, reliability, and accessibility of enterprise data.

Key Responsibilities:
- Design, build, and optimize ETL/ELT pipelines to process data from multiple sources.
- Develop and manage data warehouses, data lakes, and streaming systems.
- Ensure data reliability, quality, and governance across platforms.
- Collaborate with data scientists, analysts, and software engineers to deliver data solutions.
- Monitor, troubleshoot, and enhance data workflows for performance and scalability.
- Maintain detailed documentation of data architecture, flows, and processes.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
- 3+ years of experience as a Data Engineer or in a similar data-focused role.
- Strong proficiency in SQL and one programming language (Python, Scala, or Java).
- Experience with ETL tools such as Apache Airflow, AWS Glue, or Talend.
- Expertise in cloud platforms (AWS, Azure, or Google Cloud).
- Hands-on experience with data warehousing technologies (Snowflake, Redshift, BigQuery, Synapse).
- Knowledge of big data frameworks (Hadoop, Spark, Kafka).
- Solid understanding of data modeling, governance, and security best practices.

Preferred Qualifications:
- Experience implementing CI/CD and DevOps practices for data pipelines.
- Familiarity with real-time data streaming, API integration, and event-driven architectures.
- Knowledge of containerization tools such as Docker or Kubernetes.
- Strong problem-solving, analytical, and communication skills.

Technical Stack (Typical):
- Languages: Python, SQL, Scala, Java
- Frameworks: Apache Spark, Kafka, Hadoop, Airflow
- Databases: Snowflake, Redshift, BigQuery, PostgreSQL
- Cloud Platforms: AWS, Azure, GCP
- ETL/ELT Tools: dbt, Glue, Talend, Informatica
  • 5 - 10 yrs
  • Hyderabad
Python API Spark
Data Engineer, Hyderabad
Experience: 4-8 years
Location: Bangalore
Employment Type: Full-time

About the Role: We're seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and integrations across multiple systems. You'll work with Python, GCP (BigQuery), Spark, and API integrations, ensuring data quality and seamless workflows. Experience with Ascend.io is a plus.

Key Responsibilities:
- Design and develop ETL/ELT pipelines and automated workflows.
- Integrate data from APIs, Oracle EBS, and cloud platforms.
- Leverage Google Cloud Platform (GCP) and BigQuery for data analytics.
- Utilize Apache Spark or similar frameworks for large-scale data processing.
- Ensure data accuracy, consistency, and security across systems.
- Collaborate with business and data teams to deliver reliable data solutions.
- Monitor and troubleshoot pipeline performance.

Requirements:
- Strong Python and workflow orchestration experience (Airflow, Prefect, etc.).
- Hands-on with GCP/BigQuery and big data frameworks (Spark).
- Experience with API integrations (REST, SOAP, GraphQL).
- Understanding of ETL optimization, CI/CD, and Agile workflows.
- Excellent problem-solving and communication skills.

Nice to Have:
- Experience with Ascend.io.
- Knowledge of SQL/NoSQL, Docker, Kubernetes.
- Exposure to machine learning pipelines.

Why Join Us:
- Work on cutting-edge data engineering and cloud-based integration projects.
- Collaborative, innovative team environment.
- Competitive compensation and strong career growth.
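API ingestion of the kind this role describes usually has to follow pagination cursors until the source is exhausted. The sketch below is illustrative only: `fetch_page` is a stand-in for a real HTTP call, and the response shape (`items`/`next`) is a hypothetical API contract.

```python
# Sketch of paginated API ingestion: follow the "next" cursor until the
# API reports no further pages, with a hard cap against endless loops.

def fetch_page(page):
    """Fake API: returns a page of items and the next page number."""
    data = {1: ["a", "b"], 2: ["c"], 3: []}
    items = data.get(page, [])
    return {"items": items, "next": page + 1 if items else None}

def ingest_all(first_page=1, max_pages=100):
    items, page = [], first_page
    for _ in range(max_pages):  # guard against runaway pagination
        resp = fetch_page(page)
        items.extend(resp["items"])
        if resp["next"] is None:
            break
        page = resp["next"]
    return items

records = ingest_all()
```

In production the same loop would also need retry/backoff and exception handling around each request, which is exactly the depth other listings on this page ask for.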
  • 5 - 10 yrs
  • 40.0 Lac/Yr
  • Hyderabad
AWS Python AWS Data Engineer Terraform ETL Tool CI/CD
About the Role: We are looking for a highly skilled and experienced Senior Data Engineer to join our team in Hyderabad. The ideal candidate will bring strong technical expertise in building scalable data platforms and pipelines using modern technologies such as Python, Scala, AWS, Redshift, Terraform, Jenkins, and Docker. This role demands a hands-on professional who thrives in a fast-paced, collaborative environment and is eager to solve complex data problems.

Key Responsibilities:
- Design, build, and optimize robust, scalable, and secure data pipelines and platform components.
- Collaborate with data scientists, analysts, and engineering teams to ensure seamless data flow, integration, and availability across systems.
- Develop infrastructure as code using Terraform to automate provisioning and environment management.
- Manage containerized services and workflows using Docker.
- Set up, manage, and optimize CI/CD pipelines using Jenkins for continuous integration and deployment.
- Optimize performance, scalability, and reliability of large-scale data systems on AWS.
- Write clean, modular, and efficient code in Python and Scala to support ETL, data transformation, and processing tasks.
- Support data architecture planning and participate in technical reviews and design sessions.

Must-Have Skills:
- Strong hands-on experience with Python, Scala, SQL, and Amazon Redshift.
- Proven expertise in AWS cloud services and ecosystem (EC2, S3, Redshift, Glue, Lambda, etc.).
- Experience implementing Infrastructure as Code (IaC) with Terraform.
- Proficient in managing and deploying Docker containers in development and production environments.
- Hands-on experience with CI/CD pipelines using Jenkins.
- Strong understanding of data architecture, ETL pipelines, and distributed data processing systems.
- Excellent problem-solving skills and the ability to mentor junior engineers.

Nice-to-Have:
- Experience working in regulated domains like healthcare or finance.
- Exposure to Apache Airflow, Spark, or Databricks.
- Familiarity with data quality frameworks and observability tools.
  • 4 - 10 yrs
  • 50+ Lakh/Yr
  • Togo
Data Analysis
Job Description: We are seeking a highly skilled and experienced Data Engineer to help shape and scale our supply chain and operations analytics infrastructure. In this role, you will work closely with cross-functional teams (including Operations, Finance, and Analytics) to design, build, and monitor scalable, production-grade data pipelines. Your work will be critical to driving data-informed decisions across the business.

What You'll Do:
- Develop and maintain automated ETL pipelines using Python, Snowflake SQL, and related technologies.
- Ensure robust data quality through unit testing, validation, and continuous monitoring.
- Collaborate with stakeholders to ingest and transform large healthcare datasets with accuracy and efficiency.
- Leverage AWS services such as S3, DynamoDB, Batch, and Step Functions for data integration and deployment.
- Optimize performance for pipelines processing large-scale datasets (1 GB+).
- Translate business requirements into reliable, scalable data solutions.

What You Bring:
- 4+ years of hands-on experience as a Data Engineer or in a similar role.
- Proven expertise in Python, SQL, and Snowflake for data engineering tasks.
- Strong experience building and maintaining production-grade ETL pipelines.
- Solid understanding of data validation, transformation, and debugging practices.
- Prior experience with *healthcare or claims datasets* is highly preferred.
- Practical knowledge of AWS technologies: S3, DynamoDB, Batch, Step Functions.
- Experience working with large datasets and complex data environments.
- Excellent verbal and written English communication skills.

Work Schedule:
- Full-time remote position (40 hours/week).
- Working hours must align with the U.S. Central Time Zone (CT).
Glue Lambda ETL
- 3+ years of AWS data engineering: Glue, Step Functions, Lambda, S3, DynamoDB, EC2
- Strong Python (boto3) scripting for automation
- Terraform or CloudFormation expertise
- Hands-on experience integrating RAG workflows or deploying LLM applications
- Solid SQL and NoSQL data-modeling skills
- Excellent written and verbal communication in client-facing contexts
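The Lambda-driven automation this listing describes typically starts from an S3 "ObjectCreated" event. The sketch below only parses the event payload so it runs without AWS dependencies; in a real function the handler would go on to call boto3 (e.g., `s3.get_object`). The bucket and key values are hypothetical.

```python
# Sketch of an S3-triggered Lambda handler: extract bucket/key pairs from
# the event notification. Real processing (boto3 reads, Glue/Step Functions
# kick-offs) would happen where the URIs are collected.

def lambda_handler(event, context=None):
    processed = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        key = s3.get("object", {}).get("key")
        if bucket and key:
            processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}

# Simulated S3 event for a newly uploaded file (hypothetical names):
event = {
    "Records": [
        {"s3": {"bucket": {"name": "raw-zone"}, "object": {"key": "in/file.csv"}}}
    ]
}
result = lambda_handler(event)
```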
  • 8 - 10 yrs
  • Pune
Kafka Scala Spark Hadoop Airflow Data Lakes Kappa/Kappa++ Architectures RDBMS NoSQL Cassandra Redis Oracle
Sr. Big Data Engineer
Location: Pune
Experience: 10+ years
Mode: Hybrid

Role Overview:
We are seeking a talented Sr. Big Data Engineer to design, develop, and support a highly scalable, distributed SaaS-based Security Risk Prioritization product. You will lead the design and evolution of our data platform and pipelines, providing technical leadership to a team of engineers and architects.

Key Responsibilities:
- Provide technical leadership on data platform design, roadmaps, and architecture.
- Design and implement scalable architecture for Big Data and Microservices environments.
- Drive technology explorations, leveraging knowledge of internal and industry prior art.
- Ensure quality architecture and design of systems, focusing on performance, scalability, and security.
- Mentor and provide technical guidance to other engineers.

Required Skills & Technologies:
- Mandatory: Kafka, Scala, Spark.
- Big Data & Data Streaming: Spark, Kafka, Hadoop, Presto, Airflow, data lakes, lambda architecture, kappa and kappa++ architectures with Flink data streaming.
- Databases & Caching: RDBMS, NoSQL, Oracle, Cassandra, Redis.
- Search Solutions: Solr, Elastic.
- ML & Automation: Experience with ML model engineering and related deployment, scripting, and automation.
- Architecture: In-depth experience with messaging queues and caching components.
- Other Skills: Strong troubleshooting and performance benchmarking skills for Big Data technologies.

Qualifications:
- Bachelor's degree in Computer Science or equivalent.
- 8+ years of total experience, with 6+ years relevant.
- 2+ years designing Big Data solutions with Spark.
- 3+ years with Kafka and performance testing for large infrastructure.

Hiring For Senior Data Engineer

The Best Services & Enterprise's

  • 4 - 8 yrs
  • 9.0 Lac/Yr
  • Rajkot
Senior Data Engineer Senior Data Analyst Senior Data Associate
Job Summary:
We are looking for an experienced Senior Data Engineer to design, develop, and maintain robust data pipelines and ETL workflows. The role involves working on Azure cloud platforms to support analytics and reporting initiatives.

Key Responsibilities:
- Develop and maintain ETL pipelines using Azure Data Factory and Azure Databricks.
- Implement data integration and transformation workflows for analytics and reporting.
- Collaborate with data scientists, analysts, and stakeholders to deliver high-quality data solutions.
- Write efficient, reusable code using Python and SQL.
- Ensure data quality, governance, and performance optimization across systems.

Skills & Qualifications:
- 5+ years of experience in data engineering.
- Hands-on experience with Azure Data Factory, Azure Databricks, Python, and SQL.
- Strong understanding of data engineering best practices and cloud architectures.
- Experience with data modeling, data warehousing, and analytics solutions is a plus.
  • 4 - 10 yrs
  • 50+ Lakh/Yr
  • Togo
Data Integration ETL ETL Tool Data Warehousing Scala
Job Description:
We are seeking a highly skilled and experienced *Data Engineer* to help shape and scale our supply chain and operations analytics infrastructure. In this role, you will work closely with cross-functional teams (including Operations, Finance, and Analytics) to design, build, and monitor scalable, production-grade data pipelines. Your work will be critical to driving data-informed decisions across the business.

What You'll Do:
- Develop and maintain automated ETL pipelines using Python, Snowflake SQL, and related technologies.
- Ensure robust data quality through unit testing, validation, and continuous monitoring.
- Collaborate with stakeholders to ingest and transform large healthcare datasets with accuracy and efficiency.
- Leverage AWS services such as S3, DynamoDB, Batch, and Step Functions for data integration and deployment.
- Optimize performance for pipelines processing large-scale datasets (1GB+).
- Translate business requirements into reliable, scalable data solutions.

What You Bring:
- 4+ years of hands-on experience as a Data Engineer or in a similar role.
- Proven expertise in Python, SQL, and Snowflake for data engineering tasks.
- Strong experience building and maintaining production-grade ETL pipelines.
- Solid understanding of data validation, transformation, and debugging practices.
- Prior experience with *healthcare or claims datasets* is highly preferred.
- Practical knowledge of AWS technologies: S3, DynamoDB, Batch, Step Functions.
- Experience working with large datasets and complex data environments.
- Excellent verbal and written English communication skills.

Work Schedule:
- Full-time remote position (40 hours/week).
- Working hours must align with the U.S. Central Time Zone (CT).

Looking For Senior Data Engineer

BSRI Solutions Pvt Ltd

  • 5 - 9 yrs
  • 30.0 Lac/Yr
  • Chennai
Databricks Pyspark Java Web Services SQL Python
Job Title: Senior Data Engineer
Experience: 5+ Years
Location: Chennai (Hybrid)
Budget: Up to 30 LPA

Job Summary:
We are looking for a highly skilled developer with strong hands-on experience in Databricks, PySpark, Python/Java, Web Services, and SQL. The ideal candidate will work closely with architects, tech leads, and business teams to design, build, optimize, and support scalable data-driven solutions. This is a long-term role with a strong focus on performance, cost optimization, and production support.

Key Responsibilities / Essential Job Functions:
- Understand end-to-end system architecture and support operations through monitoring and dashboards.
- Collaborate with Architects and Tech Leads on solution design and implementation.
- Continuously monitor and optimize system cost and performance.
- Work closely with Business Analysts on integration requirements.
- Coordinate with TCOE teams for defining test scenarios and supporting testing activities.
- Troubleshoot performance and functional issues across environments.
- Document key technical decisions and maintain detailed design documentation.
- Handle fast-paced project deliveries and support production issues as required.
- Quickly learn and adapt to new technologies and frameworks.
- Ensure prompt and effective communication with stakeholders.

Other Responsibilities:
- Create, document, and maintain project artifacts.
- Follow industry standards, methodologies, and best practices.
- Safeguard company assets and sensitive information (PI).
- Report any suspected security or compliance issues promptly.
- Adhere to company compliance and governance policies.
- Maintain focus on customer service, efficiency, quality, and business growth.
- Collaborate effectively with cross-functional teams.
- Perform other duties as assigned.

Minimum Qualifications & Job Requirements:
- Minimum 5+ years of IT development experience.
- Strong analytical and problem-solving skills.
- Proven experience in solution design, code reviews, and mentoring junior engineers.
- Strong backend development experience on data-driven projects.
- Excellent SQL and database skills.
- Strong team player with good communication skills.

Mandatory Technical Skills:
- Python / PySpark
- SQL / PL-SQL
- Databricks
- Java or C#

Preferred / Good-to-Have Skills:
- Azure Cloud
- Kafka
- Node.js
- Azure Data Factory

Looking For Data Engineer

Cynosure Corporate Solutions

  • 8 - 14 yrs
  • Chennai
Data Engineer Python AWS
Responsibilities:
- Design and implement data pipelines to collect, clean, and transform data from various sources.
- Build and maintain data storage and processing systems, such as databases, data warehouses, and data lakes.
- Ensure data is properly secured and protected.
- Develop and implement data governance policies and procedures.
- Collaborate with business analysts, data analysts, data scientists, and other stakeholders to understand their data needs and ensure they have access to the data they need.
- Share knowledge with the wider business, working with other BAs and technology teams to make sure processes and ways of working are documented.
- Collaborate with Big Data Solution Architects to design, prototype, implement, and optimize data ingestion pipelines so that data is shared effectively across various business systems.
- Ensure the design, code, and procedural aspects of the solution are production-ready in terms of operational, security, and compliance standards.
- Participate in day-to-day project and agile meetings and provide technical support for faster resolution of issues.
- Clearly and concisely communicate the status of items and blockers to the business.
- Have end-to-end knowledge of the data landscape within the company.

Skills & Experience:
- 10+ years of design and development experience with big data technologies such as Azure, AWS, or GCP; Azure and Databricks preferred, with experience in Azure DevOps.
- Experience with data visualization technology on the data lake (DL), such as Power BI.
- Proficient in Python, PySpark, and SQL.
- Proficient in querying and manipulating data from various databases (relational and big data).
- Experience writing effective and maintainable unit and integration tests for ingestion pipelines.
- Experience using static analysis and code quality tools and building CI/CD pipelines.
- Excellent communication, problem-solving, and leadership skills; able to work well in a fast-paced, dynamic environment.

Opening For Data Engineer

Cynosure Corporate Solutions

  • 3 - 9 yrs
  • Delhi
Apache Python Hadoop SCALA
Job Description:
We are looking for Data Engineers to join our team. You will use various methods to transform raw data into useful data systems; for example, you'll create algorithms and conduct statistical analysis. Overall, you'll strive for efficiency by aligning data systems with business goals. To succeed in this position, you should have strong analytical skills and the ability to combine data from different sources. Data engineer skills also include familiarity with several programming languages and knowledge of machine learning methods.

Job Requirements:
- Participate in the customer's system design meetings and collect the functional/technical requirements.
- Build data pipelines for consumption by the data science team.
- Skilled in the ETL process and tools.
- Clear understanding of and experience with Python and PySpark, or Spark and Scala, along with Hive, Airflow, Impala, Hadoop, and RDBMS architecture.
- Experience writing Python programs and SQL queries.
- Experience in SQL query tuning.
- Experienced in shell scripting (Unix/Linux).
- Build and maintain data pipelines in Spark/PySpark with SQL and Python or Scala.
- Knowledge of cloud technologies (Azure/AWS/GCP, etc.) is a plus.
- Good to have: knowledge of Kubernetes, CI/CD concepts, and Apache Kafka.
- Suggest and implement best practices in data integration.
- Guide the QA team in defining system integration tests as needed.
- Split the planned deliverables into tasks and assign them to the team.
- Maintain and deploy the ETL code, following the Agile methodology.
- Work on optimization wherever applicable.
- Good oral, written, and presentation skills.

Preferred Qualifications:
- Degree in Computer Science, IT, or a similar field; a Master's is a plus.
- Hands-on experience with Python and PySpark, or hands-on experience with Spark and Scala.
- Great numerical and analytical skills.
- Working knowledge of cloud platforms such as MS Azure, AWS, etc.

Data Engineer

United Technology

  • 1 - 3 yrs
  • 4.0 Lac/Yr
  • Chennai
Data Integration Data Engineer Hadoop ETL SQL Informatica Apache AWS Big Data Python
We are looking for a Data Engineer with 1 to 3 years of experience in Chennai. Immediate joiners preferred.
  • Fresher
  • 6.5 Lac/Yr
  • Basavanagudi Bangalore
Data Verification Google Sheets Keyboard Shortcuts Numeric Keypad Spreadsheet Management Data Input Data Quality Control Data Formatting Data Accuracy Data Extraction Data Cleansing Data Entry Software Data Collection Microsoft Excel Data Visualization Data Quality Data Transformation Big Data Technologies Programming Data Warehousing
We are looking for a motivated Data Processing Engineer to join our team. This part-time role is perfect for freshers who are eager to learn and grow in the field of data management. You will work from home, contributing to our data processing needs.
  • Fresher
  • 5.5 Lac/Yr
  • Begumpet Secunderabad
Data Cleansing Data Entry Accuracy Data Verification Data Quality Control Google Sheets Numeric Keypad Spreadsheet Management Microsoft Excel Data Extraction Data Collection Data Formatting Data Accuracy
We are seeking a Data Processing Engineer, ideal for freshers looking to start their careers in data handling. This part-time role offers the flexibility to work from home while engaging with data processing tasks.

In this position, you will be responsible for:
- **Data Collection**: Gathering data from various sources to ensure accurate and up-to-date information is available for processing.
- **Data Cleansing**: Checking for errors or inconsistencies in the data and correcting them to improve quality.
- **Data Analysis**: Analyzing the processed data to extract meaningful insights that can help in decision-making.
- **Reporting**: Creating simple reports based on the analysis that summarize findings and present them clearly to stakeholders.

For this role, we expect candidates to possess essential skills, including:
- **Attention to Detail**: You should have a sharp eye for detail, as accuracy in data processing is crucial.
- **Basic Computer Skills**: Familiarity with computers and basic software applications like spreadsheets is necessary.
- **Problem-Solving Skills**: You should be able to identify problems in data and think critically about solutions.
- **Communication Skills**: Good written communication skills are important for creating reports and sharing insights.

Overall, this role offers an exciting opportunity for beginners passionate about data to develop their skills in a supportive work-from-home environment.