
Python Data Engineer Job Vacancies in Delhi NCR

Python SQL ML Docker AWS Cloud Engineer
Level of skills and experience:
  • 5 years of hands-on experience using Python, Spark, and SQL.
  • Experienced in AWS Cloud usage and management.
  • Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
  • Experience with various ML models and frameworks such as XGBoost, LightGBM, and Torch.
  • Experience with orchestrators such as Airflow and Kubeflow.
  • Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
  • Fundamental understanding of Parquet, Delta Lake, and other data file formats.
  • Proficiency with an IaC tool such as Terraform, CDK, or CloudFormation.
  • Strong written and verbal English communication skills; proficient in communicating with non-technical stakeholders.
  • 10 - 15 yrs
  • 22.5 Lac/Yr
  • Gurgaon
Data Engineer Erwin ER ETL Tool Azure Synapse Azure Data Factory Databricks Python Pyspark SQL
Hello everyone, greetings! We are actively hiring a Data Engineer for our India team. If you or someone in your network is interested, please review the job details below and reach out to me at gaurav.sharma@cxdatalabs.com with your CV.
Key Responsibilities:
  • Design and optimize database data modeling with expertise in Azure Synapse, Azure Data Factory, and Databricks.
  • Support data migration and modernization efforts within the Enterprise Data Lake.
  • Design and implement data pipelines, ETL mappings, sessions, and workflows.
  • Handle data transformation, data modeling/mart creation, and scheduling coordination.
  • Work closely with the Enterprise Information Management Team on data validation strategies, quality analysis, issue resolution, and unit testing.
  • Ensure smooth release management and system integration.
Qualifications & Skills Required:
  • Experience in Data Engineering, with a strong domain background in Clinical Research & Biopharmaceutical services.
  • Experience in data platforms and analytics systems, collaborating with Product Owners and stakeholders.
  • Proficiency in data modeling tools (ER Studio, Erwin) and cloud technologies.
  • Strong understanding of ETL processes, data quality management, and integration strategies.
  • Familiarity with Agile tools like Jira, Confluence, and Asana.
  • Strong problem-solving skills with the ability to identify areas for improvement and align with business requirements.
  • Good communication skills to proactively communicate integration changes in advance.
If you're interested or have any referrals, please email me or message me directly. Looking forward to connecting!
Best Regards,
Gaurav Sharma
#8802080222
HR Operations Partner, CX Data Labs

Data Engineer

StatusNeo

Python Pyspark Data Engineer AWS Glue
  • 3+ years of experience with AWS services including SQS, S3, Step Functions, EFS, Lambda, and OpenSearch.
  • Strong experience in API integrations, including working with large-scale API endpoints.
  • Proficiency in PySpark for data processing and parallelism in large-scale ingestion pipelines.
  • Experience with AWS OpenSearch APIs for managing search indices.
  • Terraform expertise for automating and managing cloud infrastructure.
  • Hands-on experience with AWS SageMaker, including working with machine learning models and endpoints.
  • Strong understanding of data flow architectures, document stores, and journal-based systems.
  • Experience in parallelizing data processing workflows to meet strict performance and SLA requirements.
  • Familiarity with AWS tools like CloudWatch for monitoring pipeline performance.
Additional Preferred Qualifications:
  • Strong problem-solving and debugging skills in distributed systems.
  • Prior experience in optimizing ingestion pipelines with a focus on cost-efficiency and scalability.
  • Solid understanding of distributed data processing and workflow orchestration in AWS environments.
Soft Skills:
  • Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams.
  • Ability to work in a fast-paced environment and deliver high-quality results under tight deadlines.
  • Analytical mindset, with a focus on performance optimization and continuous improvement.

Opening For Data Engineer

Cynosure Corporate Solutions

  • 3 - 9 yrs
  • Delhi
Apache Python Hadoop SCALA
Job Description: We are looking for Data Engineers to join our team. You will use various methods to transform raw data into useful data systems; for example, you'll create algorithms and conduct statistical analysis. Overall, you'll strive for efficiency by aligning data systems with business goals. To succeed in this position, you should have strong analytical skills and the ability to combine data from different sources. Data engineer skills also include familiarity with several programming languages and knowledge of machine learning methods.
Job Requirements:
  • Participate in the customer's system design meetings and collect the functional/technical requirements.
  • Build data pipelines for consumption by the data science team.
  • Skillful in ETL processes and tools.
  • Clear understanding of and experience with Python and PySpark, or Spark and Scala, along with Hive, Airflow, Impala, Hadoop, and RDBMS architecture.
  • Experience writing Python programs and SQL queries.
  • Experience in SQL query tuning.
  • Experienced in shell scripting (Unix/Linux).
  • Build and maintain data pipelines in Spark/PySpark with SQL and Python or Scala.
  • Knowledge of cloud technologies (Azure/AWS/GCP, etc.) is a plus.
  • Good to have: knowledge of Kubernetes, CI/CD concepts, and Apache Kafka.
  • Suggest and implement best practices in data integration.
  • Guide the QA team in defining system integration tests as needed.
  • Split the planned deliverables into tasks and assign them to the team.
  • Maintain/deploy the ETL code and follow the Agile methodology.
  • Work on optimization wherever applicable.
  • Good oral, written, and presentation skills.
Preferred Qualifications:
  • Degree in Computer Science, IT, or a similar field; a Master's is a plus.
  • Hands-on experience with Python and PySpark, or hands-on experience with Spark and Scala.
  • Great numerical and analytical skills.
  • Working knowledge of cloud platforms such as MS Azure, AWS, etc.

Get Personalized Job Matches

We use your experience, skills, interests, and career goals to help you find the most relevant opportunities faster. Register now!

AWS Data Engineer Lead / Architect

Vision Excel Career Solutions

Python Data Architect Data Engineer AWS
Are you a mid/senior-level T-shaped AWS expert with specialization in the DevOps and Data Engineering space? If yes, we have an exciting opportunity just for you. One of our reputed European clients is looking for AWS engineers to help them build secure, resilient, and cost-effective solutions on the AWS platform to reap the benefits of their investment in AWS platform and services. We are looking for self-motivated, highly experienced engineers possessing great analytical and excellent communication skills for this client-facing role.
What do we expect from you?
Role: Data Engineer (AWS)
Mandatory:
  • Experience developing data pipelines that process large volumes of data using Python, PySpark, Pandas, etc., preferably on AWS.
  • Experience ingesting batch and streaming data from various data sources.
  • Experience writing complex SQL using any RDBMS (Oracle, PostgreSQL, SQL Server, etc.).
  • Experience developing ETL, OLAP-based, and analytical applications.
  • Ability to quickly learn and develop expertise in existing highly complex applications and architectures.
  • Comfortable working in Agile projects.
Desirable:
  • Exposure to the AWS platform's data services (AWS Lambda, Glue, Athena, Redshift, Kinesis, etc.).
  • Knowledge of DevOps and CI/CD tools.
  • Experience handling unstructured data.
  • Knowledge of the Financial Markets domain.
Keywords: Data Engineer, Data Pipelines, Data Ingestion, AWS Lambda, AWS Athena
Python Python Trainer Python Developer Data Scientist Java Developer Core Java Java2D JavaCard JavaFX JavaSE Java Script Developer Java Beans Advanced Java Javascript Work From Home
We require a person to assist us with a Java Udemy course. If you already have the training videos ready, that is ideal; if not, you should be willing to create the full course yourself. To apply, please go to GerardYadGG's LinkedIn page, where you will find the website link; please apply there to be considered. 100% worldwide remote. ID PI 206

Data Engineer

Bb Works India

  • 9 - 15 yrs
  • 40.0 Lac/Yr
  • Bangalore +1 Noida
Data Warehousing ETL Python AWS SCALA Data Engineer
We have 5 vacant Data Engineer jobs in Bangalore and Noida. Experience required: 9 years. Educational qualification: Other Bachelor Degree. Skills: Data Warehousing, ETL, Python, AWS, Scala, Data Engineer.

Azure Data Engineer

Epik Solutions

Python SQL Spark SCALA Data Bricks Azure Data
Job Description: As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:
  • Designing and developing data pipelines: Design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala, covering data ingestion, transformation, and loading.
  • Data modeling and database design: Design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.
  • Data integration and orchestration: Leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various sources and targets, including scheduling and monitoring data pipelines.
  • Data quality and governance: Implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
  • Performance optimization: Optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.
  • Monitoring and troubleshooting: Monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation, working closely with cross-functional teams to resolve data-related problems.
  • 3 - 9 yrs
  • 25.0 Lac/Yr
  • Bangalore +1 Noida
Azure Databricks SQL Python Spark
Job Description: As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:
  • Designing and developing data pipelines: Design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala, covering data ingestion, transformation, and loading.
  • Data modeling and database design: Design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.
  • Data integration and orchestration: Leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various sources and targets, including scheduling and monitoring data pipelines.
  • Data quality and governance: Implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
  • Performance optimization: Optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.
  • Monitoring and troubleshooting: Monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation, working closely with cross-functional teams to resolve data-related problems.
  • Documentation and collaboration: Document data pipelines.
Python Developer Scrapy Django Flask Selenium Beautiful Soup Data Engineer
Desired candidate profile:
  • Design, develop, and maintain web scraping scripts using Python.
  • Use web scraping libraries like Beautiful Soup, Scrapy, and Selenium, plus other scraping tools, to extract data from websites.
  • Write reusable, testable, and efficient code to extract structured and unstructured data.
  • Develop and maintain software documentation for web scraping scripts.
  • Collaborate with other software developers, data scientists, and other stakeholders to plan, design, develop, and launch new web scraping projects.
  • Troubleshoot, debug, and optimize web scraping scripts.
  • Stay up to date with the latest industry trends and technologies in automated data collection and cleaning.
  • Help maintain code quality and organization of projects.
  • Participate in code reviews and ensure that all solutions are aligned with standards.
  • Create automated test cases to ensure the functionality and performance of the code.
  • Integrate data storage solutions like SQL/NoSQL databases, message brokers, and data streams for storing and analyzing the scraped data.
Experience with Python development and web scraping techniques is required. Familiarity with web frameworks such as Django and Flask, as well as other technologies like SQL, Git, and Linux, is also required. Strong analytical and problem-solving skills, as well as good communication and teamwork abilities, are also important for the role.
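The core task in this profile, pulling structured data out of HTML, can be sketched with only Python's standard library. Here `html.parser` stands in for Beautiful Soup, and the `LinkExtractor` class and sample markup are purely illustrative, not part of the role:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects (href, text) pairs from anchor tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []      # extracted (href, text) pairs
        self._href = None    # href of the anchor currently open, if any
        self._text = []      # text fragments seen inside that anchor

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

# Sample markup standing in for a fetched page.
html = ('<ul><li><a href="/jobs/1">Data Engineer</a></li>'
        '<li><a href="/jobs/2">Python Developer</a></li></ul>')
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # [('/jobs/1', 'Data Engineer'), ('/jobs/2', 'Python Developer')]
```

A real scraper would fetch pages over HTTP and handle messier markup, which is where Beautiful Soup or Scrapy earn their keep.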
  • 0 - 1 yrs
  • Gurgaon
Artificial Intelligence Machine Learning Python Data Engineer Work From Home
We have 3 vacant data engineer intern jobs (remote) for freshers. Educational qualification: B.Tech/B.E. Skills: Artificial Intelligence, Machine Learning, Python, etc.

Data Engineer

Talentrupt RPO LLP

Python Pyspark Data Modeling Data Engineer
Skills: Data Management, PySpark/Spark SQL, Data Modeling, Python/Scala. Adaptable and flexible; agility for quick learning; ability to work well in a team; commitment to quality; strong analytical skills.
Roles and Responsibilities: In this role, you need to analyze and solve increasingly complex problems. Your day-to-day interactions are with peers within Accenture. You are likely to have some interaction with clients and/or Accenture management. You will be given minimal instruction on daily work/tasks and a moderate level of instruction on new assignments. You will need to consistently seek and provide meaningful and actionable feedback in all interactions. You will be expected to be constantly on the lookout for ways to enhance value for your respective stakeholders/clients. Decisions that you make will impact your work and may impact the work of others. You will be an individual contributor and/or oversee a small work effort and/or team. Please note this role may require you to work in rotational shifts.
  • 3 - 9 yrs
  • 12.0 Lac/Yr
  • Delhi
Azure SQL Azure U-SQL Data Engineer DataBricks Spark Azure Databricks SSIS SSIS Developer Informatica Azure Data Environment Python SQL Data Base Administrator C# Walk in
Position: Data Engineer with Databricks and Spark experience. As a Databricks Engineer, you will work with multiple teams to deliver solutions on the Azure Cloud using core cloud data warehouse tools. You must be able to analyze data and develop strategies for populating data lakes if required. This is not an infrastructure position. This person may be called upon to do complex coding using U-SQL, Scala or Python, and T-SQL.
Skills/Qualifications:
  • Expertise in any ETL tool (SSIS, Informatica, DataStage).
  • Expertise in implementing data warehousing solutions.
  • Experience as a Data Engineer in an Azure Data Environment.
  • Programming experience in Scala or Python, and SQL.
  • Hands-on experience in the Azure stack (Azure Databricks) -- mandatory.
  • Good understanding of other Azure services like Azure Data Lake Analytics & U-SQL, and Azure SQL DW.
Big Data React JS Python AWS C++ Angular Spark Programming ETL SQL Work From Home
**Preference will be given to candidates who can join on or before 1st of October, 2022**
You will:
  • Write excellent production code and tests, and help others improve in code reviews.
  • Analyze high-level requirements to design, document, estimate, and build systems.
  • Coordinate across teams to identify, resolve, mitigate, and prevent technical issues.
  • Coach and mentor engineers within the team to develop their skills and abilities.
  • Continuously improve the team's practices in code quality, reliability, performance, testing, automation, logging, monitoring, alerting, and build processes.
You have:
For (Full Stack):
  • 2 - 10 years of experience.
  • Strong with DS & Algorithms.
  • Hands-on experience in the programming languages: JavaScript (React or Angular), Python, SQL.
  • Experience with AWS.
For (Backend):
  • 2 - 10 years of experience.
  • Hands-on product development experience using Java/C++/Python.
  • Experience with AWS, SQL, Git.
  • Strong with data structures and algorithms.
Additional nice-to-have skills/certifications:
  • For Java skill set: Mockito, Grizzly, Netty, Vert.x, Jersey/JAX-RS, Swagger/OpenAPI, Nginx, Protocol Buffers, Thrift, Aerospike, Redis, Kinesis, Sed, Awk, Perl.
  • For Python skill set: Data Engineering experience, Athena, Lambda, EMR, Spark, Glue, Step Functions, Hadoop, Kinesis, ORC, Parquet, Perl, Awk, Redshift.
For (Data Engineering):
  • 2 - 10 years of experience.
  • Experience with object-oriented/object function scripting languages: Python.
  • Experience with AWS cloud services: EC2, RDS, Redshift, S3, Athena, Glue.
  • Must be proficient in Git, Jenkins, CI/CD (Continuous Integration / Continuous Deployment).
  • Experience in big data technologies like Hadoop, MapReduce, Spark, etc.
  • Experience with Amazon Web Services and Docker.
For (Geo Team):
  • 4 - 10 years of experience.
  • Experience with big data technologies like Hadoop, Spark, MapReduce, Kafka, etc.
  • Experience using object-oriented languages (Java, Python).
  • Experience working with different AWS technologies.
  • Experience in software

MACHINE LEARNING/ DATA ENGINEER

Aurawoo International Pvt. Ltd.

  • 2 - 5 yrs
  • 18.0 Lac/Yr
  • Gurgaon
Machine Learning Artificial Intelligence Developer Data Scientist Python Developer Django Work From Home Walk in
  • Production machine learning models with Django API designs.
  • Machine learning model deployment using Django.
  • Strong hands-on experience in Python.
  • Experience on any one of the cloud platforms: AWS/Azure/GCP.
  • Experience building custom integrations between cloud-based systems using APIs.
  • Excellent communication (verbal and written) skills; can communicate complex ideas in simple ways.
  • Experience developing and maintaining ML systems built with open-source tools.
  • Experience developing with containers (Docker) in cloud computing environments is a plus.
  • Exposure to machine learning methodology and best practices.

Hadoop Data Engineer

Telamon HR Solutions

  • 5 - 10 yrs
  • 30.0 Lac/Yr
  • Gurgaon
Hadoop SQL JAVA PIG SPARK Python Web Developer Walk in
We are looking for a candidate with 5+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
  • Big data tools: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift.
  • Stream-processing systems: Storm, Spark Streaming, etc.
  • Object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.

Data Engineer Python SQL

AP Job Consultants

  • 3 - 6 yrs
  • 12.0 Lac/Yr
  • Gurgaon
Python SQL Data Engineer Django JavaScript Jquery Walk in
  • Good experience in computer programming languages such as Python and SQL.
  • Minimum 3 years of working experience in Python and Python frameworks (Flask/Django).
  • Familiarity with the concepts of MVC, mocking, ORM, and REST.
  • Solid understanding of object-oriented programming.
  • Experience working with design architecture.
  • Familiarity with some ORM (Object-Relational Mapper) libraries.
  • Sound knowledge of database administration, e.g. managing MySQL Server and MSSQL Server.
  • Advanced working SQL knowledge and experience working with relational databases, as well as working familiarity with a variety of databases.
  • Knowledge of NoSQL databases (MongoDB, DynamoDB) is good to have.
  • Strong knowledge of JavaScript or jQuery.
  • Knowledge of other languages or big data tools (Hive, Spark) is a plus.
  • Ability to work with AWS services like Lambda, Kinesis, SQS, SNS, etc. is a plus.
  • Certification in cloud platforms like Azure, AWS, and Google will be considered a very good asset.
  • Strong verbal and written communication skills, with the ability to communicate effectively and articulate results and issues to internal and client teams.
  • 6 - 10 yrs
  • Gurgaon
Data Engineer Warehouse Management SQL Oracle Python Work From Home
Job Responsibilities:
  • Use different data warehousing concepts to build a data warehouse for reporting purposes.
  • Design, develop, and launch efficient and reliable data pipelines to move data across application systems and to provide intuitive analytics to business teams.
  • Actively develop and test ETL components to high standards of data quality.
  • Assist in the creation of design best practices as well as coding and architectural guidelines, standards, and frameworks.
  • Provide analytical support like visualization, business insights, and reporting, as needed.
  • Can guide a team of Data Engineers for a whole project.
Must have:
  • 4+ years of experience in an ETL or data engineering role in an analytics environment.
  • Bachelor's degree in a technical field (Comp. Sci. degree preferred, not mandatory).
  • Working knowledge of Relational Database Management Systems (RDBMS) like Oracle, SQL Server, etc.
  • Expertise in building data pipelines and data warehousing concepts.
  • Good understanding of big data platforms.
  • Knowledge of SQL, Python, and some of the standard data science packages (Pandas, NumPy, etc.).
  • Exposure to visualization tools like Tableau and Power BI is a plus; not mandatory.
  • Strong verbal and business communication skills.
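The pipeline work described in this role amounts to an extract-transform-load loop. A minimal sketch using Python's built-in sqlite3 module is below; the table and column names (staging_orders, fact_orders) are hypothetical stand-ins for a real source system and warehouse:

```python
import sqlite3

# In-memory database standing in for both the source system and the warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging_orders (id INTEGER, amount_cents INTEGER, region TEXT)")
conn.executemany(
    "INSERT INTO staging_orders VALUES (?, ?, ?)",
    [(1, 1250, "north"), (2, 900, "south"), (3, 2300, "north")],
)

# Extract raw rows, transform them (cents -> currency units, uppercase region),
# then load the result into the reporting table.
conn.execute("CREATE TABLE fact_orders (id INTEGER, amount REAL, region TEXT)")
rows = conn.execute("SELECT id, amount_cents, region FROM staging_orders").fetchall()
transformed = [(oid, cents / 100.0, region.upper()) for oid, cents, region in rows]
conn.executemany("INSERT INTO fact_orders VALUES (?, ?, ?)", transformed)

# A simple aggregate of the kind a reporting team would query.
totals = dict(conn.execute(
    "SELECT region, SUM(amount) FROM fact_orders GROUP BY region"
).fetchall())
print(totals)
```

In production, each stage would be a scheduled, monitored task (e.g. in Airflow), with data quality checks between extract and load rather than a single script.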
Industrial Engineer Data Analytics Python Materials Management
Educational Qualification: B.Tech in Industrial Engineering from a recognized technical board/institute with not less than 50% marks; Diploma (preferred).
Roles and Responsibilities: Performs methods engineering analysis including data analytics, simulation, and time-and-motion studies. Uses Applied Materials software (simulation and other proprietary applications) to develop models that automate the process of data analytics. Uses advanced Excel, R, and Python scripting for preparing the scoring template and for scoring the customer's data using Applied repositories and FV KB. For more details, please visit this site.
Data Analytics Python Productivity Improvement Return On Investment Production Processes Walk in
Educational Qualification:
  • B.Tech in Industrial Engineering from a recognized technical board/institute with not less than 50% marks.
  • Diploma (preferred).
Job Description:
  • As part of the student role, you will get to independently lead various types of challenging projects in order to support inventory reduction, supply chain ROI simulations, definition and implementation of new processes and infrastructures that support efficient planning in a low-validity business environment, shortage analysis, and more.
  • The role includes high exposure to complex organizational processes and interfaces and requires strong interpersonal skills, fast thinking, high independence, and the ability to lead activities with no formal authority.
Python Python Developer Python Trainer Data Scientist Java Developer Core Java Java2D JavaCard JavaFX Java Script Developer Javascript Angularjs Javascript PHP JavaScript JavaScript MySQL JavaScript Frameworks Work From Home
We require a person to assist us with a Java Developer Udemy course. If you already have the training videos ready, that is ideal; if not, you should be willing to create the full course yourself. To apply, please go to GerardYadGG's LinkedIn page, where you will find the website link; please apply there to be considered. 100% worldwide remote. ID PI 203
Python Developer Python Trainer Python Data Scientist Java Developer Java Script Developer Java Trainer Java J2Ee Developer Java Software Engineer Java Developer Trainee Java Architect Java Application Developer Java Back End Developer Java Internshi Software Analyst Software Architect Software Executive Software Consultant Software Designer Software Developer Software Engineer Software Professional Software Quality Analyst Software Specialist IT Consultant Information Technology Analyst VP IT IT Administrator IT Business Analyst IT Manager IT Recruiter IT Software Tester IT Engineer IT Marketing Executive Work From Home
We require a person to assist us with a Python Developer Udemy course. If you already have the training videos ready, that is ideal; if not, you should be willing to create the full course yourself. To apply, please go to GerardYadGG's LinkedIn page, where you will find the website link; please apply there to be considered. 100% worldwide remote. ID: PL300. Thank you.
Python Python Developer Python Trainer Data Scientist Java Developer Core Java Java2D JavaCard JavaFX JavaSE Java Script Developer Javascript Angularjs Javascript PHP JavaScript JavaScript MySQL JavaScript Frameworks IT Consultant Information Technology Analyst IT Administrator VP IT IT Business Analyst IT Manager IT Recruiter IT Software Tester IT Engineer IT Marketing Executive Software Analyst Software Architect Software Executive Software Consultant Software Designer Software Developer Software Engineer Software Professional Software Quality Analyst Software Specialist Work From Home
We require a person to assist us with a Data Science Udemy course. If you already have the training videos ready, that is ideal; if not, you should be willing to create the full course yourself. To apply, please go to GerardYadGG's LinkedIn page, where you will find the website link; please apply there to be considered. 100% worldwide remote. ID PI 201

Data Engineer

Telamon HR Solutions

  • 5 - 10 yrs
  • 30.0 Lac/Yr
  • Gurgaon
Spark Pig Hive Python Java SQL SAS-Statistical Analysis System ETL Hadoop DATA ENGINEER Azure JSON XML Scala Github DevOps Data Migration C++ NoSQL Walk in
We are looking for a candidate with 5+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
  • Big data tools: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift.
  • Stream-processing systems: Storm, Spark Streaming, etc.
  • Object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
Advanced Excel Data Analytics Python Industrial Engineer Plant Maintenance Equipment Maintenance
Educational Qualification: B.Tech in Industrial Engineering from a recognized technical board/institute with not less than 50% marks; Diploma (preferred).
Roles and Responsibilities: Performs methods engineering analysis including data analytics, simulation, and time-and-motion studies. Uses Applied Materials software (simulation and other proprietary applications) to develop models that automate the process of data analytics. Measures plant and equipment capacity output. Supports the identification of equipment and process flow bottlenecks. For more details, please visit this site.