Python Data Engineer Graduate Experience Jobs in Delhi NCR

Python SQL ML Docker AWS Cloud Engineer
Level of skills and experience:
  • 5 years of hands-on experience using Python, Spark, and SQL.
  • Experienced in AWS cloud usage and management.
  • Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
  • Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
  • Experience with orchestrators such as Airflow and Kubeflow.
  • Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
  • Fundamental understanding of Parquet, Delta Lake, and other data file formats.
  • Proficiency with an IaC tool such as Terraform, CDK, or CloudFormation.
  • Strong written and verbal English communication skills, and proficiency in communicating with non-technical stakeholders.

AWS Data Engineer Lead / Architect

Vision Excel Career Solutions

Python Data Architect Data Engineer AWS
Are you a mid/senior-level T-shaped AWS expert specializing in DevOps and data engineering? If yes, we have an exciting opportunity just for you. One of our reputed European clients is looking for AWS engineers to help them build secure, resilient, and cost-effective solutions on the AWS platform and reap the benefits of their investment in AWS services. We are looking for self-motivated, highly experienced engineers with strong analytical and excellent communication skills for this client-facing role.

What do we expect from you?

Role: Data Engineer (AWS)

Mandatory:
  • Experience developing data pipelines that process large volumes of data using Python, PySpark, Pandas, etc., preferably on AWS.
  • Experience ingesting batch and streaming data from various data sources.
  • Experience writing complex SQL using any RDBMS (Oracle, PostgreSQL, SQL Server, etc.).
  • Experience developing ETL, OLAP-based, and analytical applications.
  • Ability to quickly learn and develop expertise in existing, highly complex applications and architectures.
  • Comfortable working in Agile projects.

Desirable:
  • Exposure to the AWS platform's data services (AWS Lambda, Glue, Athena, Redshift, Kinesis, etc.).
  • Knowledge of DevOps and CI/CD tools.
  • Experience handling unstructured data.
  • Knowledge of the financial markets domain.

Keywords: Data Engineer, Data Pipelines, Data Ingestion, AWS Lambda, AWS Athena
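The "complex SQL using any RDBMS" requirement above can be practiced without any database server. A minimal sketch using Python's built-in sqlite3 as a stand-in for Oracle or PostgreSQL (the table and column names are invented for illustration): a subquery aggregate joined back to the detail rows, so each order appears alongside its customer's total.

```python
import sqlite3

# In-memory database standing in for an RDBMS such as PostgreSQL or Oracle.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'acme', 120.0), (2, 'acme', 80.0), (3, 'globex', 200.0);
""")

# Join each order back to a per-customer aggregate computed in a subquery.
rows = conn.execute("""
    SELECT o.customer, o.amount, t.customer_total
    FROM orders o
    JOIN (SELECT customer, SUM(amount) AS customer_total
          FROM orders GROUP BY customer) t
      ON o.customer = t.customer
    ORDER BY o.customer, o.amount
""").fetchall()

for customer, amount, total in rows:
    print(customer, amount, total)
```

The same shape could be written more compactly with a window function (`SUM(amount) OVER (PARTITION BY customer)`) on databases that support one.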

Data Engineer

Bb Works India

  • 9 - 15 yrs
  • 40.0 Lac/Yr
  • Bangalore +1 Noida
Data Warehousing ETL Python AWS SCALA Data Engineer
We have 5 vacant Data Engineer jobs in Bangalore and Noida. Experience required: 9 years. Educational qualification: Other Bachelor's Degree. Skills: Data Warehousing, ETL, Python, AWS, Scala, Data Engineer.

Azure Data Engineer

Epik Solutions

Python SQL Spark SCALA Data Bricks Azure Data
Job Description: As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for the following key responsibilities:
  • Designing and developing data pipelines: design and implement scalable, efficient data pipelines using Azure Databricks, PySpark, and Scala, covering data ingestion, transformation, and loading.
  • Data modeling and database design: design and implement data models to support efficient data storage, retrieval, and analysis. This may involve relational databases, data lakes, or other storage solutions on the Azure platform.
  • Data integration and orchestration: leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various sources and targets, including scheduling and monitoring data pipelines.
  • Data quality and governance: implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
  • Performance optimization: optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing transformation logic, and leveraging caching techniques.
  • Monitoring and troubleshooting: monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation, working closely with cross-functional teams to resolve data-related problems.
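The "data quality checks, validation rules" responsibility above amounts to running a set of rules over incoming records and routing failures aside. A dependency-free sketch in plain Python (rule names, fields, and thresholds are invented; in Databricks this logic would typically run as a PySpark step):

```python
# Minimal data-quality check: validate records against named rules and
# split them into accepted rows and rejects tagged with failure reasons.
RULES = {
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
    "customer_present":    lambda r: bool(r.get("customer")),
}

def validate(records):
    accepted, rejected = [], []
    for rec in records:
        failures = [name for name, rule in RULES.items() if not rule(rec)]
        (rejected if failures else accepted).append((rec, failures))
    return accepted, rejected

batch = [
    {"customer": "acme", "amount": 10.0},
    {"customer": "",     "amount": -5.0},   # fails both rules
]
ok, bad = validate(batch)
print(len(ok), len(bad))   # one accepted record, one reject
print(bad[0][1])           # the reasons attached to the reject
```

Keeping the rejects together with their failure reasons, rather than silently dropping them, is what makes such checks auditable downstream.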

  • 3 - 9 yrs
  • 25.0 Lac/Yr
  • Bangalore +1 Noida
Azure Databricks SQL Python Spark
Job Description: As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for the following key responsibilities:
  • Designing and developing data pipelines: design and implement scalable, efficient data pipelines using Azure Databricks, PySpark, and Scala, covering data ingestion, transformation, and loading.
  • Data modeling and database design: design and implement data models to support efficient data storage, retrieval, and analysis. This may involve relational databases, data lakes, or other storage solutions on the Azure platform.
  • Data integration and orchestration: leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various sources and targets, including scheduling and monitoring data pipelines.
  • Data quality and governance: implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
  • Performance optimization: optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing transformation logic, and leveraging caching techniques.
  • Monitoring and troubleshooting: monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation, working closely with cross-functional teams to resolve data-related problems.
  • Documentation and collaboration: document data pipelines…
Python Developer Scrapy Django Flask Selenium Beautiful Soup Data Engineer
Desired candidate profile:
  • Design, develop, and maintain web scraping scripts in Python.
  • Use web scraping libraries such as Beautiful Soup, Scrapy, and Selenium, along with other scraping tools, to extract data from websites.
  • Write reusable, testable, and efficient code to extract structured and unstructured data.
  • Develop and maintain software documentation for web scraping scripts.
  • Collaborate with other software developers, data scientists, and other stakeholders to plan, design, develop, and launch new web scraping projects.
  • Troubleshoot, debug, and optimize web scraping scripts.
  • Stay up to date with the latest industry trends and technologies in automated data collection and cleaning.
  • Help maintain code quality and project organization.
  • Participate in code reviews and ensure that all solutions are aligned with standards.
  • Create automated test cases to ensure the functionality and performance of the code.
  • Integrate data storage solutions such as SQL/NoSQL databases, message brokers, and data streams for storing and analyzing the scraped data.

Requirements: experience with Python development and web scraping techniques. Familiarity with web frameworks such as Django and Flask, as well as other technologies like SQL, Git, and Linux, is also required. Strong analytical and problem-solving skills, as well as good communication and teamwork abilities, are also important for the role.
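In practice the extraction would use Beautiful Soup or Scrapy as the profile says; purely to illustrate the core idea of pulling structured data out of HTML, here is a dependency-free sketch using the standard library's HTMLParser (the sample HTML and URLs are invented):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (href, text) pairs for every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        # Pair the first non-empty text chunk with the pending href.
        if self._href is not None and data.strip():
            self.links.append((self._href, data.strip()))
            self._href = None

    def handle_endtag(self, tag):
        if tag == "a":
            self._href = None

html = ('<ul><li><a href="/jobs/1">Data Engineer</a></li>'
        '<li><a href="/jobs/2">Python Developer</a></li></ul>')
parser = LinkExtractor()
parser.feed(html)
print(parser.links)
```

Beautiful Soup expresses the same thing in two or three lines (`soup.find_all("a")`), but the event-driven version above shows what any such library is doing underneath.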

Data Engineer

Talentrupt RPO LLP

Python Pyspark Data Modeling Data Engineer
Data Management, PySpark/Spark SQL, data modeling, Python/Scala. Adaptable and flexible, agile in quick learning, able to work well in a team, committed to quality, with strong analytical skills.

Roles and responsibilities: In this role, you need to analyze and solve increasingly complex problems. Your day-to-day interactions are with peers within Accenture, and you are likely to have some interaction with clients and/or Accenture management. You will be given minimal instruction on daily work/tasks and a moderate level of instruction on new assignments. You will need to consistently seek and provide meaningful, actionable feedback in all interactions, and to be constantly on the lookout for ways to enhance value for your stakeholders/clients. Decisions you make will impact your work and may impact the work of others. You will be an individual contributor and/or oversee a small work effort and/or team. Please note this role may require you to work in rotational shifts.
Big Data React JS Python AWS C++ Angular Spark Programming ETL SQL Work From Home
**Preference will be given to candidates who can join on or before 1st October, 2022.**

You will:
  • Write excellent production code and tests, and help others improve in code reviews.
  • Analyze high-level requirements to design, document, estimate, and build systems.
  • Coordinate across teams to identify, resolve, mitigate, and prevent technical issues.
  • Coach and mentor engineers within the team to develop their skills and abilities.
  • Continuously improve the team's practices in code quality, reliability, performance, testing, automation, logging, monitoring, alerting, and build processes.

You have:

For Full Stack:
  • 2 - 10 years of experience.
  • Strong with data structures and algorithms.
  • Hands-on experience in the programming languages JavaScript (React or Angular), Python, and SQL.
  • Experience with AWS.

For Backend:
  • 2 - 10 years of experience.
  • Hands-on product development experience using Java/C++/Python.
  • Experience with AWS, SQL, and Git.
  • Strong with data structures and algorithms.

Additional nice-to-have skills/certifications:
  • For the Java skill set: Mockito, Grizzly, Netty, Vert.x, Jersey/JAX-RS, Swagger/OpenAPI, Nginx, Protocol Buffers, Thrift, Aerospike, Redis, Kinesis, sed, awk, Perl.
  • For the Python skill set: data engineering experience, Athena, Lambda, EMR, Spark, Glue, Step Functions, Hadoop, Kinesis, ORC, Parquet, Perl, awk, Redshift.

For Data Engineering:
  • 2 - 10 years of experience.
  • Experience with object-oriented/object-function scripting languages: Python.
  • Experience with AWS cloud services: EC2, RDS, Redshift, S3, Athena, Glue.
  • Proficient in Git, Jenkins, and CI/CD (Continuous Integration / Continuous Deployment).
  • Experience in big data technologies like Hadoop, MapReduce, Spark, etc.
  • Experience with Amazon Web Services and Docker.

For the Geo Team:
  • 4 - 10 years of experience.
  • Experience with big data technologies like Hadoop, Spark, MapReduce, Kafka, etc.
  • Experience using object-oriented languages (Java, Python).
  • Experience working with different AWS technologies.
  • Experience in software…
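The "excellent production code and tests" expectation can be made concrete with a small example: a pure function accompanied by plain-assertion tests, in the pytest style of bare `assert` statements (the function itself is invented for illustration):

```python
def dedupe_preserving_order(items):
    """Drop duplicates from a sequence while keeping first-seen order."""
    seen = set()
    # set.add returns None (falsy), so membership check and insertion
    # happen in a single pass through the comprehension.
    return [x for x in items if not (x in seen or seen.add(x))]

# Lightweight tests: plain assertions that double as documentation.
assert dedupe_preserving_order([3, 1, 3, 2, 1]) == [3, 1, 2]
assert dedupe_preserving_order([]) == []
assert dedupe_preserving_order("aabac") == ["a", "b", "c"]
print("all tests passed")
```

Keeping the function free of I/O and global state is what makes it trivially testable, which is the property code reviews in such teams tend to push toward.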

Hadoop Data Engineer

Telamon HR Solutions

  • 5 - 10 yrs
  • 30.0 Lac/Yr
  • Gurgaon
Hadoop SQL JAVA PIG SPARK Python Web Developer Walk in
We are looking for a candidate with 5+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software and tools:
  • Big data tools: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift.
  • Stream-processing systems: Storm, Spark Streaming, etc.
  • Object-oriented/object-function scripting languages: Python, Java, C++, Scala, etc.
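Workflow managers like Azkaban, Luigi, and Airflow all share one core idea: tasks form a dependency graph and run in topological order. A toy sketch of that idea in plain Python using the standard library's graphlib (the task names are invented; a real pipeline would declare these as Airflow operators or Luigi tasks, not run them in-process like this):

```python
from graphlib import TopologicalSorter

# Dependency graph: task -> set of tasks it depends on.
PIPELINE = {
    "extract":   set(),
    "transform": {"extract"},
    "validate":  {"transform"},
    "load":      {"transform", "validate"},
}

def run(pipeline):
    """Execute tasks in dependency order, as a scheduler would."""
    executed = []
    for task in TopologicalSorter(pipeline).static_order():
        executed.append(task)   # a real runner would invoke the task here
    return executed

order = run(PIPELINE)
print(order)
```

What the production tools add on top of this ordering is scheduling, retries, backfills, and monitoring, which is why they appear in the requirements rather than hand-rolled runners.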

Data Engineer Python SQL

AP Job Consultants

  • 3 - 6 yrs
  • 12.0 Lac/Yr
  • Gurgaon
Python SQL Data Engineer Django JavaScript Jquery Walk in
Good experience in computer programming languages such as Python and SQL. Minimum 3 years of working experience in Python and Python frameworks (Flask/Django). Familiarity with the concepts of MVC, mocking, ORM, and REST. Solid understanding of object-oriented programming. Experience working with design architecture. Familiarity with some ORM (Object-Relational Mapper) libraries. Sound knowledge of database administration, e.g. managing MySQL Server and MSSQL Server. Advanced working SQL knowledge and experience working with relational databases, as well as working familiarity with a variety of databases. Knowledge of NoSQL databases (MongoDB, DynamoDB) is good to have. Strong knowledge of JavaScript or jQuery. Knowledge of other languages or big data tools (Hive, Spark) is a plus. Ability to work with AWS services such as Lambda, Kinesis, SQS, SNS, etc. is a plus. Certification in cloud platforms like Azure, AWS, and Google will be considered a very good asset. Strong verbal and written communication skills, with the ability to communicate effectively and articulate results and issues to internal and client teams.
  • 6 - 10 yrs
  • Gurgaon
Data Engineer Warehouse Management SQL Oracle Python Work From Home
Job responsibilities:
  • Use data warehousing concepts to build a data warehouse for reporting purposes.
  • Design, develop, and launch efficient, reliable data pipelines to move data across application systems and provide intuitive analytics to business teams.
  • Actively develop and test ETL components to high standards of data quality.
  • Assist in creating design best practices as well as coding and architectural guidelines, standards, and frameworks.
  • Provide analytical support (visualization, business insights, reporting) as needed.
  • Can guide a team of Data Engineers across a whole project.

Must have:
  • 4+ years of experience in an ETL or data engineering role in an analytics environment.
  • Bachelor's degree in a technical field (a Computer Science degree is preferred, not mandatory).
  • Working knowledge of relational database management systems (RDBMS) like Oracle, SQL Server, etc.
  • Expertise in building data pipelines and data warehousing concepts.
  • Good understanding of big data platforms.
  • Knowledge of SQL, Python, and some of the standard data science packages (Pandas, NumPy, etc.).
  • Exposure to visualization tools like Tableau and Power BI is a plus, not mandatory.
  • Strong verbal and business communication skills.
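The pipeline responsibilities above follow the classic extract-transform-load shape. A self-contained sketch using only the standard library, with sqlite3 standing in for the warehouse (the schema, data, and table names are invented for illustration):

```python
import csv
import io
import sqlite3

# Extract: in a real pipeline this would read from a source system or file.
raw = io.StringIO("region,revenue\nnorth,100\nsouth,250\nnorth,50\n")
rows = list(csv.DictReader(raw))

# Transform: cast types and aggregate revenue per region.
totals = {}
for r in rows:
    totals[r["region"]] = totals.get(r["region"], 0) + int(r["revenue"])

# Load: write the aggregate into a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE region_revenue (region TEXT, total INTEGER)")
conn.executemany("INSERT INTO region_revenue VALUES (?, ?)", totals.items())

print(conn.execute(
    "SELECT region, total FROM region_revenue ORDER BY region").fetchall())
```

Production versions of each stage swap in the tools from the listing (an RDBMS source, Pandas for transforms, Oracle or SQL Server as the target), but the three-stage shape is the same.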