
AWS Data Engineer Job Vacancies in Pune


AI/ML Engineer

Kasa Talent Pvt Ltd

  • Fresher
  • 4.0 Lac/Yr
  • Pune
Data Analysis C++ Python LLM AWS Google Cloud Azure AI SQL Data Cleaning
We are seeking a talented AI/ML Engineer to design, develop, and deploy machine learning models that solve real-world business problems.
Key Responsibilities:
  • Develop, train, and optimize machine learning and deep learning models.
  • Design and implement AI solutions for automation, prediction, and data analysis.
  • Work with large datasets to clean, preprocess, and engineer features.
  • Deploy models into production environments and monitor performance.
  • Build scalable ML pipelines and integrate models with applications.
  • Conduct experiments, model evaluations, and performance tuning.
  • Collaborate with cross-functional teams, including data engineers and product managers.
  • Stay updated with the latest research and advancements in AI/ML.
Note: Only Pune-based candidates are eligible to apply.
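The clean/preprocess/engineer-features responsibility above can be sketched in plain Python. The `age` field and the min-max scaling choice are illustrative assumptions, not part of the posting; a real pipeline would typically use pandas or scikit-learn.

```python
# Minimal sketch of a cleaning + feature-scaling step (hypothetical
# field names; real pipelines would use pandas/scikit-learn).

def clean_and_scale(records, field):
    """Drop records missing `field`, then min-max scale it to [0, 1]."""
    valid = [r for r in records if r.get(field) is not None]
    values = [r[field] for r in valid]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero when all values are equal
    return [{**r, field: (r[field] - lo) / span} for r in valid]

rows = [{"age": 20}, {"age": 40}, {"age": None}, {"age": 30}]
print(clean_and_scale(rows, "age"))  # record with missing age is dropped
```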
Python Engineer

Netra Labs

  • 3 - 6 yrs
  • 10.0 Lac/Yr
  • Baner Pune
FastAPI MongoDB AWS Services GitHub Actions and CI/CD Pipelines Lambda S3 EC2 RESTful API Design Microservices Event-driven Architecture Performance Tuning Caching Security Best Practices Docker and Containerized Applications Problem-solving Skills Ability to Lead Team Good Communication Skills Data Science Knowledge
About Netra Labs
At Netra Labs, we redefine enterprise AI with our groundbreaking platform, Ground Truth. Our platform transforms expertise into powerful AI agents, enabling businesses to automate complex tasks efficiently. With a user-friendly interface and seamless integration with any language model, Ground Truth empowers system integrators, innovators, and developers to rapidly build and deploy AI solutions. Our commitment to security, scalability, and ROI ensures our clients can trust us with their AI-driven workflows.

Role Overview
We are looking for a highly skilled Python Engineer to lead our backend team and drive the development of scalable, secure, and high-performance AI-powered applications. The ideal candidate will have expertise in data science, a deep understanding of backend development, and hands-on experience with cloud services and DevOps practices. You will work closely with cross-functional teams, ensuring seamless integration between AI models, data pipelines, and enterprise applications.

Key Responsibilities
  • Work with the backend development team, ensuring best practices in coding, architecture, and performance optimization.
  • Design, develop, and maintain scalable backend services using Python and FastAPI.
  • Architect and optimize databases, ensuring efficient storage and retrieval of data using MongoDB.
  • Integrate AI models and data science workflows into enterprise applications.
  • Implement and manage AWS cloud services, including Lambda, S3, EC2, and other AWS components.
  • Automate deployment pipelines using Jenkins and CI/CD best practices.
  • Ensure security and reliability, implementing best practices for authentication, authorization, and data privacy.
  • Monitor and troubleshoot system performance, optimizing infrastructure and codebase.
  • Collaborate with data scientists, front-end engineers, and the product team to build AI-driven solutions.
  • Stay up to date with the latest technologies in AI, backend development, and cloud computing.

Required Skills & Qualifications
  • 3+ years of experience in backend development with Python.
  • Strong experience in FastAPI or other modern Python web frameworks.
  • Proficiency in MongoDB or other NoSQL databases.
  • Hands-on experience with AWS services (Lambda, S3, EC2, etc.).
  • Experience with GitHub Actions and CI/CD pipelines.
  • Data science knowledge, with experience integrating AI models and data pipelines.
  • Strong understanding of RESTful API design, microservices, and event-driven architecture.
  • Experience in performance tuning, caching, and security best practices.
  • Proficiency in working with Docker and containerized applications.
  • Excellent problem-solving skills and the ability to lead a team.
  • Strong communication skills to interact with stakeholders and cross-functional teams.

Preferred Qualifications
  • Experience with machine learning frameworks such as TensorFlow or PyTorch.
  • Knowledge of GraphQL, WebSockets, or gRPC.
  • Familiarity with Terraform or Kubernetes for infrastructure as code.
  • Experience with big data processing frameworks such as Apache Spark.
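The event-driven architecture this posting asks for can be illustrated with a tiny in-process publish/subscribe sketch. Everything here (the `EventBus` class, the event name, the payload) is hypothetical; a production system would use a broker such as SQS/SNS, Kafka, or RabbitMQ rather than an in-memory dict.

```python
from collections import defaultdict

# Minimal in-process sketch of the event-driven pattern: producers
# publish events by name, and all subscribed handlers receive them.
class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._handlers[event_type]:
            handler(payload)

bus = EventBus()
seen = []
bus.subscribe("model.trained", seen.append)  # hypothetical event name
bus.publish("model.trained", {"model_id": "m-1", "accuracy": 0.93})
print(seen)
```

Decoupling publishers from subscribers this way is what lets microservices react to each other's events without direct calls.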
AWS Data Engineer

Glue Lambda ETL
  • 3+ years of AWS data engineering: Glue, Step Functions, Lambda, S3, DynamoDB, EC2
  • Strong Python (boto3) scripting for automation
  • Terraform or CloudFormation expertise
  • Hands-on experience integrating RAG workflows or deploying LLM applications
  • Solid SQL and NoSQL data-modeling skills
  • Excellent written and verbal communication in client-facing contexts
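As a flavour of the Step Functions work listed above, the sketch below builds an Amazon States Language definition that chains two Lambda tasks. The function names and ARNs are placeholders; actually deploying it would use boto3's Step Functions client (`create_state_machine`), which is omitted here.

```python
import json

# Sketch of an AWS Step Functions state machine (Amazon States Language)
# chaining two hypothetical Lambda tasks: Extract -> Transform.
def etl_state_machine(extract_arn, transform_arn):
    return {
        "StartAt": "Extract",
        "States": {
            "Extract": {"Type": "Task", "Resource": extract_arn, "Next": "Transform"},
            "Transform": {"Type": "Task", "Resource": transform_arn, "End": True},
        },
    }

# Placeholder ARNs for illustration only.
definition = json.dumps(etl_state_machine(
    "arn:aws:lambda:us-east-1:123456789012:function:extract",
    "arn:aws:lambda:us-east-1:123456789012:function:transform",
))
print(definition)
```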

AWS Data Engineer

Hexaware Technologies

SQL AWS Python ETL Terraform Lambda
Work Mode: Hybrid
  • 6-9 years of overall IT experience, preferably in cloud environments.
  • Minimum of 5 years of hands-on experience with AWS cloud development projects.
  • Design and develop AWS data architectures and solutions.
  • Build robust data pipelines and ETL processes using big data technologies.
  • Utilize AWS data services such as Glue, Lambda, Redshift, and Athena effectively.
  • Implement infrastructure as code (IaC) using Terraform.
  • Proficiency in SQL, Python, and other relevant programming/scripting languages.
  • Experience with orchestration tools like Apache Airflow or AWS Step Functions.
  • Strong understanding of data warehousing concepts, data lakes, and data governance frameworks.
  • Expertise in data modeling for both relational and non-relational databases.
  • Excellent communication skills are essential for this role.
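The extract-transform-load pattern behind these pipeline requirements can be shown with a stdlib-only toy. The column names (`id`, `amount`) and the filter rule are assumptions for illustration; the real pipelines here would run on Glue or EMR against data in S3.

```python
import csv
import io

# Toy ETL step in stdlib Python: extract rows from CSV text, transform
# (type conversion), and load only rows passing a filter. Hypothetical
# columns; production pipelines would use Glue/Spark over S3 data.
def run_pipeline(csv_text, min_amount):
    reader = csv.DictReader(io.StringIO(csv_text))          # extract
    rows = [{"id": r["id"], "amount": float(r["amount"])}   # transform
            for r in reader]
    return [r for r in rows if r["amount"] >= min_amount]   # load (filtered)

data = "id,amount\n1,10.5\n2,3.0\n3,99.9\n"
print(run_pipeline(data, 10))  # rows 1 and 3 survive the filter
```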

Python SQL ML Docker AWS Cloud Engineer
Level of skills and experience:
  • 5 years of hands-on experience using Python, Spark, and SQL.
  • Experienced in AWS Cloud usage and management.
  • Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
  • Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
  • Experience with orchestrators such as Airflow and Kubeflow.
  • Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
  • Fundamental understanding of Parquet, Delta Lake, and other data file formats.
  • Proficiency in an IaC tool such as Terraform, CDK, or CloudFormation.
  • Strong written and verbal English communication skills, and proficiency in communicating with non-technical stakeholders.

Ataccama Admin

Learning Lane Pvt Ltd

Ataccama DevOps Engineer AWS Azure Administrator Linux Windows VMs (Virtual Machines) Data Management
Detailed Job Description:
We are looking for an Ataccama Admin to join our team and help us manage and maintain our Ataccama data quality and data governance platform. You will be responsible for installing, configuring, and maintaining Ataccama on the AWS/Azure platform, as well as developing and implementing data quality rules and policies. You should have a deep understanding of Ataccama architecture and best practices, as well as experience in data management and data governance. Experience in administering Collibra and Immuta is preferred. You should also have experience in managing VMs and Windows- and Linux-based systems. Pharma experience is preferred.

Essential Duties and Responsibilities:
  • Install, configure, and maintain Ataccama
  • Develop and implement data quality rules and policies
  • Monitor and report on data quality metrics
  • Troubleshoot and resolve Ataccama-related issues
  • Stay up to date on the latest Ataccama features and best practices
  • Work with cross-functional teams to implement data governance policies and procedures
  • Manage and maintain VMs and Windows- and Linux-based systems
  • Manage redundancy, backup, and recovery plans and processes
  • Strong AWS/Azure experience

Qualifications:
  • 4+ years of experience with Ataccama
  • Experience in data management and data governance
  • Experience in administering Collibra and Immuta is preferred
  • Experience in managing VMs and Windows- and Linux-based systems
  • Experience in performance tuning
  • Pharma experience is preferred
  • Strong analytical and problem-solving skills
  • Excellent communication and teamwork skills
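The "data quality rules and metrics" duties above can be sketched as a tiny rule engine in plain Python. The rule names and the `email` field are hypothetical; Ataccama itself defines such rules in its own configuration, not in Python.

```python
# Minimal sketch of data-quality rules and a metrics report
# (hypothetical rules/fields; Ataccama configures these in its own UI/DSL).
RULES = {
    "completeness": lambda r: r.get("email") not in (None, ""),  # value present
    "validity":     lambda r: "@" in r.get("email", ""),         # looks like an email
}

def quality_report(records):
    """Return, per rule, the fraction of records that pass."""
    total = len(records) or 1
    return {name: sum(rule(r) for r in records) / total
            for name, rule in RULES.items()}

rows = [{"email": "a@x.com"}, {"email": ""}, {"email": "bad"}, {"email": "b@y.org"}]
print(quality_report(rows))
```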

Hiring For AWS Data Engineer

Right Time Placement

Data Engineer Data Architect AWS Data Warehousing
Job Description
AWS Data Engineer with a minimum of 5 to 7 years of experience.
  • Collaborate with business analysts to understand and gather requirements for existing or new ETL pipelines.
  • Connect with stakeholders daily to discuss project progress and updates.
  • Work within an Agile process to deliver projects in a timely and efficient manner.
  • Design and develop Airflow DAGs to schedule and manage ETL workflows.
  • Transform SQL queries into Spark SQL code for ETL pipelines.
  • Develop custom Python functions to handle data quality and validation.
  • Write PySpark scripts to process data and perform transformations.
  • Perform data validation and ensure data accuracy and completeness by creating automated tests and implementing data validation processes.
  • Run Spark jobs on an AWS EMR cluster using Airflow DAGs.
  • Monitor and troubleshoot ETL pipelines to ensure smooth operation.
  • Implement best practices for data engineering, including data modeling, data warehousing, and data pipeline architecture.
  • Collaborate with other members of the data engineering team to improve processes and implement new technologies.
  • Stay up to date with emerging trends and technologies in data engineering and suggest ways to improve the team's efficiency and effectiveness.
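The DAG scheduling idea behind the Airflow work above can be illustrated with a topological sort over task dependencies, using only the standard library. The task names are hypothetical; Airflow computes this ordering itself from the dependencies you declare between operators.

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Tiny illustration of DAG-based scheduling: each task maps to the set
# of upstream tasks it depends on, and the scheduler may only run a task
# once all its dependencies have finished. Hypothetical task names.
dag = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load": {"transform"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # a valid execution order respecting every dependency
```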

Data Engineer

Juniper Consultancy Services

  • 2 - 5 yrs
  • Pune
Python SQL AWS Spark Data Engineer
Key Responsibilities:
  • Design and build scalable, reliable, and efficient data pipelines using Python, SQL, AWS, and Linux.
  • Develop and maintain data warehouses and other data-related infrastructure.
  • Collaborate with data scientists, analysts, and other stakeholders to ensure that data engineering solutions meet business requirements.
  • Mentor and train team members to ensure high-quality deliverables.
  • Continuously drive innovation and best practices in data engineering.
  • Collaborate with cross-functional teams to identify and resolve data-related issues.
  • Perform code reviews and ensure code quality.
  • Provide technical guidance and leadership to the team.

Qualifications:
  • Bachelor's or Master's degree in Computer Science or a related field.
  • 3-5 years of experience in data engineering, with a focus on designing and building scalable, reliable, and efficient data pipelines.
  • Strong expertise in Python and SQL.
  • Experience with AWS services such as S3, Redshift, Glue, and EMR.
  • Strong Linux skills.
  • Experience with distributed systems and big data technologies such as Hadoop, Spark, and Kafka.
  • Experience leading and mentoring a team of data engineers.
  • Excellent communication and interpersonal skills.
  • Ability to work in a fast-paced environment and manage multiple priorities.
  • Strong problem-solving skills and attention to detail.
  • Solid understanding of programming SQL objects (procedures, triggers, views, functions) in SQL Server; experience optimizing SQL queries a plus.
  • Working knowledge of Azure architecture and Data Lake.
  • Advanced understanding of T-SQL, indexes, stored procedures, triggers, functions, views, etc.
  • Must be detail-oriented and able to work under limited supervision.
  • Must demonstrate good analytical skills as they relate to data identification and mapping, and excellent oral communication skills.
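The SQL-object skills in this posting (views, aggregation queries) can be demonstrated with the standard library's `sqlite3`. The table, view, and data below are hypothetical, and SQLite stands in for the SQL Server environment the posting actually names.

```python
import sqlite3

# Stdlib-only sketch: create a table, populate it, define a VIEW, and
# query it. SQLite stands in for SQL Server; names/data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "west", 50.0), (2, "east", 20.0), (3, "west", 30.0)])
conn.execute("""CREATE VIEW region_totals AS
                SELECT region, SUM(amount) AS total
                FROM orders GROUP BY region""")
print(conn.execute("SELECT * FROM region_totals ORDER BY region").fetchall())
```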


Data Engineer

The Caliber Hunt

ETL Hadoop Python AWS Spark Data Engineer Walk in
Technologies / Skills: Advanced SQL, Python and associated libraries like Pandas, NumPy, etc., PySpark, shell scripting, data modelling, big data, Hadoop, Hive, ETL pipelines, and IaC tools like Terraform.

Responsibilities:
  • Efficient communication skills to coordinate with users, technical teams, and data solution architects.
  • Document technical design documents for given requirements or JIRA stories.
  • Communicate results and business impacts of insight initiatives to key stakeholders to collaboratively solve business problems.
  • Work closely with the overall Enterprise Data & Analytics Architect and Engineering practice leads to ensure adherence to best practices and design principles.
  • Assure quality, security, and compliance requirements are met for the supported area.
  • Develop fault-tolerant data pipelines running on a cluster.
  • Ability to come up with scalable and modular solutions.

Required Qualifications:
  • 1-8 years of hands-on experience developing data pipelines for data ingestion or transformation using Python (PySpark)/Spark SQL in the AWS cloud.
  • Experience in the development of data pipelines and processing of data at scale using technologies like EMR, Lambda, Glue, Athena, Redshift, and Step Functions.
  • Advanced experience in writing and optimizing efficient SQL queries with Python and Hive, handling large data sets in big-data environments.
  • Experience in debugging, tuning, and optimizing PySpark data pipelines.
  • Should have implemented concepts and have good knowledge of PySpark data frames, joins, partitioning, parallelism, etc.
  • Understanding of the Spark UI, event timelines, DAGs, and Spark config parameters, in order to tune long-running data pipelines.
  • Experience working in Agile implementations.
  • Experience with Git and CI/CD pipelines to deploy cloud applications.
  • Good knowledge of designing Hive tables with partitioning for performance.
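The partitioning concept this posting emphasizes (PySpark partitions, Hive partitioned tables) can be shown as a plain-Python hash partitioner. Integer keys keep the example deterministic; real engines hash arbitrary key types and distribute partitions across a cluster.

```python
# Plain-Python sketch of hash partitioning, the mechanism Spark and Hive
# bucketing use to spread rows across partitions by key. Hypothetical
# rows; integer keys are used so the result is deterministic.
def partition_rows(rows, key, num_partitions):
    parts = [[] for _ in range(num_partitions)]
    for row in rows:
        parts[hash(row[key]) % num_partitions].append(row)
    return parts

rows = [{"id": i} for i in range(6)]
print([[r["id"] for r in p] for p in partition_rows(rows, "id", 2)])
```

Rows with the same key always land in the same partition, which is what makes partitioned joins and partition-pruned Hive queries efficient.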
Big Data React JS Python AWS C++ Angular Spark Programming ETL SQL Work From Home
**Preference will be given to candidates who can join on or before 1st October 2022.**

You will:
  • Write excellent production code and tests, and help others improve in code reviews
  • Analyze high-level requirements to design, document, estimate, and build systems
  • Coordinate across teams to identify, resolve, mitigate, and prevent technical issues
  • Coach and mentor engineers within the team to develop their skills and abilities
  • Continuously improve the team's practices in code quality, reliability, performance, testing, automation, logging, monitoring, alerting, and build processes

You have:

For Full Stack:
  • 2 - 10 years of experience
  • Strong with DS & algorithms
  • Hands-on experience in the programming languages: JavaScript (React or Angular), Python, SQL
  • Experience with AWS

For Backend:
  • 2 - 10 years of experience
  • Hands-on product development experience using Java/C++/Python
  • Experience with AWS, SQL, Git
  • Strong with data structures and algorithms

Additional nice-to-have skills/certifications:
  • For the Java skill set: Mockito, Grizzly, Netty, Vert.x, Jersey/JAX-RS, Swagger/OpenAPI, Nginx, Protocol Buffers, Thrift, Aerospike, Redis, Kinesis, sed, awk, Perl
  • For the Python skill set: data engineering experience, Athena, Lambda, EMR, Spark, Glue, Step Functions, Hadoop, Kinesis, ORC, Parquet, Perl, awk, Redshift

For Data Engineering:
  • 2 - 10 years of experience
  • Experience with object-oriented/object-function scripting languages: Python
  • Experience with AWS cloud services: EC2, RDS, Redshift, S3, Athena, Glue
  • Must be proficient in Git, Jenkins, CI/CD (Continuous Integration/Continuous Deployment)
  • Experience in big data technologies like Hadoop, MapReduce, Spark, etc.
  • Experience with Amazon Web Services and Docker

For Geo Team:
  • 4 - 10 years of experience
  • Experience with big data technologies like Hadoop, Spark, MapReduce, Kafka, etc.
  • Experience using object-oriented languages (Java, Python)
  • Experience working with different AWS technologies
  • Experience in software