
AWS Data Engineer Jobs


AI/ML Engineer

Kasa Talent Pvt Ltd

  • Fresher
  • 4.0 Lac/Yr
  • Pune
Data Analysis C++ Python LLM AWS Google Cloud Azure AI SQL Data Cleaning
We are seeking a talented AI/ML Engineer to design, develop, and deploy machine learning models that solve real-world business problems.

Key Responsibilities:
  • Develop, train, and optimize machine learning and deep learning models.
  • Design and implement AI solutions for automation, prediction, and data analysis.
  • Work with large datasets to clean, preprocess, and engineer features.
  • Deploy models into production environments and monitor performance.
  • Build scalable ML pipelines and integrate models with applications.
  • Conduct experiments, model evaluations, and performance tuning.
  • Collaborate with cross-functional teams, including data engineers and product managers.
  • Stay updated with the latest research and advancements in AI/ML.

Note: Only Pune-based candidates are eligible to apply.
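The train/evaluate/deploy cycle described above can be illustrated with a minimal, dependency-free sketch: a toy nearest-centroid classifier (the data points and the "low"/"high" class names are invented for illustration, not part of the posting):

```python
import math

def train(samples):
    """Compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Assign the label of the nearest centroid (Euclidean distance)."""
    return min(model, key=lambda label: math.dist(model[label], features))

def accuracy(model, samples):
    """Fraction of samples whose predicted label matches the true label."""
    hits = sum(predict(model, f) == label for f, label in samples)
    return hits / len(samples)

# Toy "training set": two well-separated clusters.
data = [([0.0, 0.1], "low"), ([0.2, 0.0], "low"),
        ([1.0, 0.9], "high"), ([0.8, 1.1], "high")]
model = train(data)
print(predict(model, [0.1, 0.2]))  # a point near the "low" cluster
print(accuracy(model, data))
```

A production workflow would swap the toy model for a real framework and add monitoring, but the train/predict/evaluate interfaces keep the same shape.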

Looking For Data Engineer

InfiCare Technologies

  • 10 - 15 yrs
  • 22.5 Lac/Yr
  • Delhi
AZURE AWS ETL Data Factory Data Warehousing ETL Tool SQL
Key Responsibilities:
  • Design and manage data pipelines to transform and integrate structured and unstructured data.
  • Ensure high data quality and performance.
  • Support analytics, reporting, and business intelligence needs by preparing reliable data sets and models for stakeholders.
  • Collaborate with Analysts, Digital Project Managers, Developers, and business teams to ensure data accessibility and usefulness.
  • Enforce standards for data governance, security, and cost-effective operations.

Ideal candidates will thrive in a collaborative, mission-focused environment and excel in ETL/ELT engineering. They should have experience building scalable data solutions using modern data engineering technologies that impact organizational outcomes.

Required Qualifications:
  • Strong proficiency in Structured Query Language (SQL) and at least one programming language such as Python or Scala.
  • Hands-on experience developing ETL or ELT pipelines.
  • Experience with cloud-native data services (e.g., AWS Glue, AWS Redshift, Azure Data Factory, Azure Synapse, Databricks).
  • Good understanding of data modeling and data warehousing concepts.

Desired Qualifications:
  • Design, build, and optimize scalable ETL or ELT pipelines handling both structured and unstructured data.
  • Ingest and integrate data from internal and external sources into data lakes or data warehouses.
  • Ensure that processed data is accurate, complete, and secure.

Outcomes include well-documented, automated pipelines that support downstream analytics without bottlenecks or data errors.
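The ETL/ELT pattern this posting centers on can be sketched in a few lines: extract from a source, clean and transform, then load into a target store. Everything below is an in-memory stand-in (the records, field names, and warehouse dict are invented for illustration):

```python
def extract():
    # Stand-in for reading from a source system (API, file, database).
    return [
        {"id": 1, "city": " delhi ", "amount": "120.5"},
        {"id": 2, "city": "Pune",    "amount": "80"},
        {"id": 2, "city": "Pune",    "amount": "80"},  # duplicate row
        {"id": 3, "city": None,      "amount": "15"},  # missing value
    ]

def transform(rows):
    # Clean: drop rows with missing values, normalize text, cast types, dedupe.
    seen, out = set(), []
    for row in rows:
        if row["city"] is None or row["id"] in seen:
            continue
        seen.add(row["id"])
        out.append({"id": row["id"],
                    "city": row["city"].strip().title(),
                    "amount": float(row["amount"])})
    return out

def load(rows, warehouse):
    # Stand-in for writing to a warehouse table; idempotent upsert keyed by id.
    for row in rows:
        warehouse[row["id"]] = row
    return len(rows)

warehouse = {}
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse[1]["city"])  # 2 Delhi
```

Real pipelines replace each stage with a connector (Glue, Data Factory, etc.), but the data quality concerns the posting lists (missing values, duplicates, type consistency) live in the transform step regardless of tooling.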

Looking For Data Architect

Toolify Private Limited

  • 9 - 15 yrs
  • 40.0 Lac/Yr
  • Jaipur
Data Architect Databricks Developer Apache Spark Delta Lake Azure Synapse Azure Data AWS Redshift AWS Glue SQL Pyspark Developer Kafka Engineer Big Data
Job Summary
We are seeking a skilled Data Architect to lead the design and implementation of high-performance, scalable data platforms. This role involves architecting modern data lakes, warehouses, and streaming systems using Databricks and cloud technologies. If you enjoy solving complex data challenges and driving data-driven decision-making, this role is for you.

Key Responsibilities:
  • Design and implement scalable data lakes, data warehouses, and real-time streaming architectures.
  • Build, optimize, and manage Databricks solutions using Spark, Delta Lake, Workflows, and SQL Analytics.
  • Develop cloud-native data platforms on Azure (Synapse, Data Factory, Data Lake) and AWS (Redshift, Glue, S3).
  • Create and automate ETL/ELT pipelines using Apache Spark, PySpark, and cloud tools.
  • Design and maintain data models (dimensional, normalized, star schemas) to support analytics and reporting.
  • Leverage big data technologies such as Hadoop, Kafka, and Scala for large-scale data processing.
  • Ensure data governance, security, and compliance with standards like GDPR and HIPAA.
  • Optimize Spark workloads and storage for performance and cost efficiency.
  • Collaborate with engineering, analytics, and business teams to align data solutions with organizational goals.

Required Skills & Qualifications:
  • 8+ years of experience in Data Architecture, Data Engineering, or Analytics.
  • Strong hands-on experience with Databricks (Delta Lake, Spark, MLflow, Pipelines).
  • Expertise in Azure (Synapse, Data Factory, Data Lake) and AWS (Redshift, S3, Glue).
  • Proficient in SQL and Python or Scala.
  • Experience with NoSQL databases (e.g., MongoDB) and streaming platforms (e.g., Kafka).
  • Solid understanding of data governance, security, and compliance best practices.
  • Excellent problem-solving, communication, and cross-functional collaboration skills.

Looking forward to receiving suitable profiles at the earliest.
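The dimensional modeling (star schema) skill this posting asks for can be sketched with Python's stdlib sqlite3: one fact table joined to dimension tables by surrogate keys. The table names, columns, and sample rows below are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# One fact table keyed to two dimension tables: a classic star schema.
cur.executescript("""
CREATE TABLE dim_date   (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE dim_product(product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales (date_key INTEGER REFERENCES dim_date,
                         product_key INTEGER REFERENCES dim_product,
                         units INTEGER, revenue REAL);
""")
cur.executemany("INSERT INTO dim_date VALUES (?,?,?)",
                [(20240101, 2024, 1), (20240201, 2024, 2)])
cur.executemany("INSERT INTO dim_product VALUES (?,?,?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
cur.executemany("INSERT INTO fact_sales VALUES (?,?,?,?)",
                [(20240101, 1, 10, 100.0), (20240101, 2, 5, 75.0),
                 (20240201, 1, 4, 40.0)])

# Typical analytical query: aggregate facts, slice by a dimension attribute.
rows = cur.execute("""
SELECT d.month, SUM(f.revenue)
FROM fact_sales f
JOIN dim_date d ON d.date_key = f.date_key
GROUP BY d.month ORDER BY d.month
""").fetchall()
print(rows)  # [(1, 175.0), (2, 40.0)]
```

The same shape scales up directly to Redshift, Synapse, or Databricks SQL; only the engine and the data volumes change.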
  • 4 - 6 yrs
  • 12.0 Lac/Yr
  • Mumbai
Problem Solving Vector Database Data Analysis Machine Learning Data Science AWS Power Automate AI Predictive Analytics Dot Net
Candidates should have experience in Dot Net, Power Automate, AI, data analytics, and predictive analytics.

  • 3 - 6 yrs
  • 10.0 Lac/Yr
  • Baner Pune
FastAPI MongoDB AWS Services GitHub Actions and CI CD Pipelines Lambda S3 EC2 RESTful API Design Microservices Event-driven Architecture Performance Tuning Caching Security Best Practices Docker and Containerized Applications Problem-solving Skills Ability to Lead Team Good Communication Skills Data Science Knowledge
About Netra Labs
At Netra Labs, we redefine enterprise AI with our groundbreaking platform, Ground Truth. Our platform transforms expertise into powerful AI agents, enabling businesses to automate complex tasks efficiently. With a user-friendly interface and seamless integration with any language model, Ground Truth empowers system integrators, innovators, and developers to rapidly build and deploy AI solutions. Our commitment to security, scalability, and ROI ensures our clients can trust us with their AI-driven workflows.

Role Overview
We are looking for a highly skilled Python Engineer to lead our backend team and drive the development of scalable, secure, and high-performance AI-powered applications. The ideal candidate will have expertise in data science, a deep understanding of backend development, and hands-on experience with cloud services and DevOps practices. You will work closely with cross-functional teams, ensuring seamless integration between AI models, data pipelines, and enterprise applications.

Key Responsibilities:
  • Work with the backend development team, ensuring best practices in coding, architecture, and performance optimization.
  • Design, develop, and maintain scalable backend services using Python and FastAPI.
  • Architect and optimize databases, ensuring efficient storage and retrieval of data using MongoDB.
  • Integrate AI models and data science workflows into enterprise applications.
  • Implement and manage AWS cloud services, including Lambda, S3, EC2, and other AWS components.
  • Automate deployment pipelines using Jenkins and CI/CD best practices.
  • Ensure security and reliability, implementing best practices for authentication, authorization, and data privacy.
  • Monitor and troubleshoot system performance, optimizing infrastructure and codebase.
  • Collaborate with data scientists, front-end engineers, and the product team to build AI-driven solutions.
  • Stay up to date with the latest technologies in AI, backend development, and cloud computing.

Required Skills & Qualifications:
  • 3+ years of experience in backend development with Python.
  • Strong experience in FastAPI or other modern Python web frameworks.
  • Proficiency in MongoDB or other NoSQL databases.
  • Hands-on experience with AWS services (Lambda, S3, EC2, etc.).
  • Experience with GitHub Actions and CI/CD pipelines.
  • Data science knowledge, with experience integrating AI models and data pipelines.
  • Strong understanding of RESTful API design, microservices, and event-driven architecture.
  • Experience in performance tuning, caching, and security best practices.
  • Proficiency in working with Docker and containerized applications.
  • Excellent problem-solving skills and ability to lead a team.
  • Strong communication skills to interact with stakeholders and cross-functional teams.

Preferred Qualifications:
  • Experience with Machine Learning frameworks such as TensorFlow or PyTorch.
  • Knowledge of GraphQL, WebSockets, or gRPC.
  • Familiarity with Terraform or Kubernetes for infrastructure as code.
  • Experience with big data processing frameworks such as Apache Spark.
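The "event-driven architecture" skill this posting lists can be illustrated with a minimal in-process publish/subscribe bus. The class, topic string, and payload below are invented for illustration; a production system would put a broker such as Kafka or SQS between publishers and consumers:

```python
from collections import defaultdict

class EventBus:
    """Minimal synchronous pub/sub: handlers subscribe to topic strings."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._handlers[topic].append(handler)

    def publish(self, topic, payload):
        # Deliver the event to every handler registered for this topic.
        for handler in self._handlers[topic]:
            handler(payload)

bus = EventBus()
audit_log = []

# Two independent consumers react to the same event without knowing
# about each other -- the decoupling that event-driven design buys.
bus.subscribe("model.deployed", lambda e: audit_log.append(("audit", e["model"])))
bus.subscribe("model.deployed", lambda e: audit_log.append(("notify", e["model"])))

bus.publish("model.deployed", {"model": "ground-truth-v2"})
print(audit_log)
```

Swapping the in-memory list of handlers for a durable queue gives retries and back-pressure, but the subscribe/publish contract stays the same.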
  • 2 - 8 yrs
  • United States
AWS Certification Data Modeling
Job Title: Data Engineer
Location: [Remote / Onsite / Hybrid]
Employment Type: [Full-time / Contract / W2 / C2C]

Job Summary:
We are looking for a skilled Data Engineer to design, develop, and maintain scalable data pipelines and infrastructure. The ideal candidate will have hands-on experience with cloud data platforms, large datasets, and data integration tools to ensure the accuracy, reliability, and accessibility of enterprise data.

Key Responsibilities:
  • Design, build, and optimize ETL/ELT pipelines to process data from multiple sources.
  • Develop and manage data warehouses, data lakes, and streaming systems.
  • Ensure data reliability, quality, and governance across platforms.
  • Collaborate with data scientists, analysts, and software engineers to deliver data solutions.
  • Monitor, troubleshoot, and enhance data workflows for performance and scalability.
  • Maintain detailed documentation of data architecture, flows, and processes.

Required Qualifications:
  • Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field.
  • 3+ years of experience as a Data Engineer or in a similar data-focused role.
  • Strong proficiency in SQL and one programming language (Python, Scala, or Java).
  • Experience with ETL tools such as Apache Airflow, AWS Glue, or Talend.
  • Expertise in cloud platforms (AWS, Azure, or Google Cloud).
  • Hands-on experience with data warehousing technologies (Snowflake, Redshift, BigQuery, Synapse).
  • Knowledge of big data frameworks (Hadoop, Spark, Kafka).
  • Solid understanding of data modeling, governance, and security best practices.

Preferred Qualifications:
  • Experience implementing CI/CD and DevOps practices for data pipelines.
  • Familiarity with real-time data streaming, API integration, and event-driven architectures.
  • Knowledge of containerization tools such as Docker or Kubernetes.
  • Strong problem-solving, analytical, and communication skills.

Typical Technical Stack:
  • Languages: Python, SQL, Scala, Java
  • Frameworks: Apache Spark, Kafka, Hadoop, Airflow
  • Databases: Snowflake, Redshift, BigQuery, PostgreSQL
  • Cloud Platforms: AWS, Azure, GCP
  • ETL/ELT Tools: dbt, Glue, Talend, Informatica

Data Engineer

United Technology

  • 1 - 3 yrs
  • 4.0 Lac/Yr
  • Chennai
Data Integration Data Engineer Hadoop ETL SQL Informatica Apache AWS Big Data Python
We are looking for a Data Engineer with 1 to 3 years of experience in Chennai. Immediate joiners preferred.
Glue Lambda ETL
  • 3+ years of AWS data engineering: Glue, Step Functions, Lambda, S3, DynamoDB, EC2
  • Strong Python (boto3) scripting for automation
  • Terraform or CloudFormation expertise
  • Hands-on experience integrating RAG workflows or deploying LLM applications
  • Solid SQL and NoSQL data-modeling skills
  • Excellent written and verbal communication in client-facing contexts
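The Glue-plus-boto3 automation skill above might look like the sketch below. To keep it runnable without AWS credentials, the client is injected as a parameter, and a stub stands in for what would be `boto3.client("glue")` in real use; the job name and run id are invented:

```python
import time

def run_glue_job(glue, job_name, poll_seconds=0):
    """Start a Glue job and poll until it reaches a terminal state.

    `glue` is any object exposing boto3-style start_job_run/get_job_run
    methods, so the orchestration logic can be tested without AWS.
    """
    run_id = glue.start_job_run(JobName=job_name)["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED"):
            return state
        time.sleep(poll_seconds)

class FakeGlue:
    """Stub that reports RUNNING once, then SUCCEEDED, mimicking the
    response shapes of boto3's Glue client."""
    def __init__(self):
        self.calls = 0
    def start_job_run(self, JobName):
        return {"JobRunId": "jr_1"}
    def get_job_run(self, JobName, RunId):
        self.calls += 1
        state = "RUNNING" if self.calls < 2 else "SUCCEEDED"
        return {"JobRun": {"JobRunState": state}}

result = run_glue_job(FakeGlue(), "nightly-etl")
print(result)  # SUCCEEDED
```

Injecting the client is also what makes this pattern easy to unit-test in CI, which matters for the Terraform/CloudFormation pipelines the posting mentions.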
  • 5 - 10 yrs
  • 40.0 Lac/Yr
  • Hyderabad
AWS Python AWS Data Engineer Terraform ETL Tool CI CD
About the Role
We are looking for a highly skilled and experienced Senior Data Engineer to join our team in Hyderabad. The ideal candidate will bring strong technical expertise in building scalable data platforms and pipelines using modern technologies such as Python, Scala, AWS, Redshift, Terraform, Jenkins, and Docker. This role demands a hands-on professional who thrives in a fast-paced, collaborative environment and is eager to solve complex data problems.

Key Responsibilities:
  • Design, build, and optimize robust, scalable, and secure data pipelines and platform components.
  • Collaborate with data scientists, analysts, and engineering teams to ensure seamless data flow, integration, and availability across systems.
  • Develop infrastructure as code using Terraform to automate provisioning and environment management.
  • Manage containerized services and workflows using Docker.
  • Set up, manage, and optimize CI/CD pipelines using Jenkins for continuous integration and deployment.
  • Optimize performance, scalability, and reliability of large-scale data systems on AWS.
  • Write clean, modular, and efficient code in Python and Scala to support ETL, data transformation, and processing tasks.
  • Support data architecture planning and participate in technical reviews and design sessions.

Must-Have Skills:
  • Strong hands-on experience with Python, Scala, SQL, and Amazon Redshift.
  • Proven expertise in AWS cloud services and ecosystem (EC2, S3, Redshift, Glue, Lambda, etc.).
  • Experience implementing Infrastructure as Code (IaC) with Terraform.
  • Proficient in managing and deploying Docker containers in development and production environments.
  • Hands-on experience with CI/CD pipelines using Jenkins.
  • Strong understanding of data architecture, ETL pipelines, and distributed data processing systems.
  • Excellent problem-solving skills and ability to mentor junior engineers.

Nice-to-Have:
  • Experience working in regulated domains like healthcare or finance.
  • Exposure to Apache Airflow, Spark, or Databricks.
  • Familiarity with data quality frameworks and observability tools.

Data Engineer

Guiding Consulting

  • 10 - 12 yrs
  • Bangalore
SQL Python Spark Data Integration ETL AWS ETL Tool Data Warehousing Azure Server
Job Description:
Years of Experience: 10+ yrs
Mode: 3 days a week
Location: Bangalore
Work Type: Permanent

Key Responsibilities:
Design and Development:
  • Architect, implement, and optimize scalable data solutions.
  • Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data.
Collaboration:
  • Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights.
  • Partner with cloud architects and DevOps teams to ensure robust, secure, and cost-effective data platform deployments.
Data Management:
  • Manage and maintain data lakes, data warehouses, and real-time analytics systems.
  • Ensure high data quality, integrity, and security across the organization.
Performance Optimization:
  • Monitor and enhance system performance, troubleshoot issues, and implement optimizations as needed.
  • Leverage Microsoft Fabric's advanced analytics and AI capabilities for innovative data solutions.
Best Practices & Leadership:
  • Lead and mentor junior engineers to foster a culture of technical excellence.
  • Stay updated with industry trends and best practices, especially in the Microsoft ecosystem.

Required:
  • Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
  • 10+ years of experience in data engineering, with a proven track record of working on large-scale data platforms.
  • Expertise in Microsoft Fabric and its components (e.g., Synapse, Data Factory, Azure Data Lake, Power BI).
  • Strong proficiency in SQL, Python, and Spark.
  • Experience with cloud platforms, particularly Microsoft Azure.
  • Solid understanding of data modeling, data warehousing, and ETL/ELT best practices.
  • Excellent problem-solving, communication, team management, and project management skills.

Preferred:
  • Familiarity with other cloud platforms (e.g., AWS, GCP).
  • Experience with machine learning pipelines or integrating AI into data workflows.
  • Certifications in Microsoft Azure or related technologies.
  • Fresher
  • 7.0 Lac/Yr
  • Hyderabad
Data Analysis AI Engineer ML Engineer Dot Net Java Ui UX Designer Testing AWS Cloud Engineer
Are you fun-loving and passionate about being part of a global innovator team? Are you planning to grow a career that enhances your skills in technology? A career in IT can open many doors for you in the world of technology. If you are looking for a company that is dedicated to your ideas, recognizes you for your unique competency and contributions, and provides a fun, flexible, and delightful work atmosphere, then we are the right place to ignite your passion. We are fully committed to our employees, our clients and customers, our work culture, and especially our technology. We are a flat organization where opportunities are provided based on talent, and we always encourage new ideas from employees through collaboration and creativity.

We are seeking smart, driven Java/dotnet/Python programmers to join us. The candidate will work with the global product development team and subject matter experts. An ideal candidate must possess excellent business skills with outstanding analytical and logical skills and professionalism, and should have the zeal to learn.

Core Competency:
  • Master's or Bachelor's degree in Engineering (CSE/IT/MCA, or any fresher looking to work in software) preferred.
  • Passionate freshers who can work with Java, dotnet, Python, and SQL applications.
  • Possesses intellectual humility; smart, driven, creative, able to learn from slip-ups, and willing to raise others up.
  • Excellent logical and analytical skills with professionalism at all levels.
  • Strong knowledge of Java and frameworks like Spring and Spring Boot (mandatory); J2EE technologies like Servlets, JSP, and web application servers; or dotnet (C#, ASP.NET, Python, ReactJS, Angular, UI/UX designing).
  • Good knowledge of web technologies like HTML, JavaScript, XML, and CSS.
  • Candidates must be team players with a thirst for knowledge, the energy to work in a fast-paced environment, and a desire to grow in an entrepreneurial company.

Key Responsibilities:
  • Design, code, and deploy high-performance applications.
  • Excellent interpersonal, communication, and organizational skills along with solid technical skills.
  • Communicate effectively with both technical and non-technical personnel.
  • Excellent troubleshooting and problem-solving skills.
  • Practical knowledge of the SDLC, from requirement analysis through testing and deployment, is a plus.
  • Devise possible solutions to anticipated problems.
  • Develop and maintain strong product knowledge.
  • Guide clients through the various stages of a project and the transition to the support organization.
  • Review existing business processes and participate in the Process Improvement Program.
  • 4 - 10 yrs
  • 36000/Yr
  • Missouri +1 USA
Data Warehousing Data Management Data Integration SQL Data Extraction ETL Tool Hadoop AWS Big Data Python
Role Overview
This position requires a detail-oriented data engineer who can independently architect and implement data pipelines, while also serving as a trusted technical partner in client engagements and stakeholder meetings. You'll work hands-on with PySpark, Airflow, Python, and SQL, driving end-to-end data migration and platform modernization efforts across Azure and AWS.

In addition to technical execution, you'll contribute to sprint planning, backlog prioritization, and continuous integration/deployment of data infrastructure. This is a senior-level individual contributor role with direct visibility across engineering, product, and client delivery functions.

Key Responsibilities:
  • Lead design and development of enterprise-grade data pipelines and cloud data migration architectures.
  • Build scalable, maintainable ETL/ELT pipelines using Apache Airflow, PySpark, and modern data services.
  • Write efficient, modular, and well-tested Python code, grounded in clean architecture and performance principles.
  • Develop and optimize complex SQL queries across diverse relational and analytical databases.
  • Contribute to and uphold standards for data modeling, data governance, and pipeline performance.
  • Own the implementation of CI/CD pipelines to enable reliable deployment of data workflows and infrastructure (e.g., GitHub Actions, Azure DevOps, Jenkins).
  • Embed unit testing, integration testing, and monitoring in all stages of the data pipeline lifecycle.
  • Participate actively in Agile ceremonies: sprint planning, daily stand-ups, retrospectives, and backlog grooming.
  • Collaborate directly with clients, stakeholders, and cross-functional teams to translate business needs into scalable technical solutions.
  • Act as a technical authority within the team, leading architectural decisions and contributing to internal best practices and documentation.
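The Airflow-style orchestration mentioned above comes down to running tasks in dependency order over a directed acyclic graph. A minimal sketch with Python's stdlib graphlib (the task names and dependencies are invented, standing in for what an Airflow DAG would encode):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of upstream tasks it depends on -- the same
# structure an Airflow DAG expresses with >> operators between tasks.
dag = {
    "extract": set(),
    "clean": {"extract"},
    "features": {"clean"},
    "load_warehouse": {"clean"},
    "report": {"features", "load_warehouse"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)

# Sanity check: every task runs after all of its upstream dependencies.
position = {task: i for i, task in enumerate(order)}
assert all(position[up] < position[task]
           for task, ups in dag.items() for up in ups)
```

A real scheduler adds retries, parallelism of independent branches ("features" and "load_warehouse" here), and state persistence, but topological ordering is the core contract.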

AWS Data Engineer

Hexaware Technologies

SQL AWS Python ETL Terraform Lambda
Work Mode: Hybrid
  • 6-9 years of overall IT experience, preferably in cloud environments.
  • Minimum of 5 years of hands-on experience with AWS cloud development projects.
  • Design and develop AWS data architectures and solutions.
  • Build robust data pipelines and ETL processes using big data technologies.
  • Utilize AWS data services such as Glue, Lambda, Redshift, and Athena effectively.
  • Implement infrastructure as code (IaC) using Terraform.
  • Proficiency in SQL, Python, and other relevant programming/scripting languages.
  • Experience with orchestration tools like Apache Airflow or AWS Step Functions.
  • Strong understanding of data warehousing concepts, data lakes, and data governance frameworks.
  • Expertise in data modeling for both relational and non-relational databases.
  • Excellent communication skills are essential for this role.
  • 5 - 10 yrs
  • Noida Sector 63
Data Science Computer Vision Object Detection AWS Azure Google Cloud Mysql Biometric Engineer Machine Learning Statistical Analysis
We are actively seeking a talented and experienced Data Scientist to join our dynamic team. The ideal candidate will have a strong foundation in data science, computer vision, statistical analysis, and object detection, with a focus on developing impactful solutions for complex business problems.
Python SQL ML Docker AWS Cloud Engineer
Level of skills and experience:
  • 5 years of hands-on experience using Python, Spark, and SQL.
  • Experienced in AWS cloud usage and management.
  • Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
  • Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
  • Experience with orchestrators such as Airflow and Kubeflow.
  • Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
  • Fundamental understanding of Parquet, Delta Lake, and other data file formats.
  • Proficiency with an IaC tool such as Terraform, CDK, or CloudFormation.
  • Strong written and verbal English communication skills; proficient in communicating with non-technical stakeholders.

Opening For AWS Data Engineer

Advancesoft India Pvt. Ltd.

  • 4 - 10 yrs
  • 16.0 Lac/Yr
  • Hyderabad
Python Pyspark AWS SQL S3 Lambda
Advancesoft India Private Limited (a subsidiary of Advancesoft Inc., USA) is looking to hire AWS Data Engineers with 4+ years of hands-on experience to work closely with our agile software team. If you enjoy working in a dynamic environment and are a self-driven individual, we wish to speak to you.

Roles and Responsibilities:
  • Set up processes for data management, working on automated analytical modules.
  • Continuously focus on improvement and automation by partnering with different teams.
  • Create POCs of new ideas and scale them into solutions - e.g., integration of new channels, workflows to support new product journeys, and testing new communication frameworks.
  • Lend data engineering support for the creation and execution of omnichannel campaigns for the bank.
  • Work alongside campaign managers on rule definition, logical coding of the rules, and creation of campaign files.
  • Prepare the data layer and support the analytics and data science team.
  • Help integrate new sources of data and pipelines.
  • Lead the data development of new analytical modules and automated customer journeys.
  • Maintain and improve existing processes.
  • Provide automation support and backup to the Prod Support Team.

Qualifications:
  • 4+ years of work experience in data engineering, with experience in Big Data. Banking domain knowledge is a plus.
  • Strong hands-on knowledge of SQL, Python, and PySpark.
  • Strong analytical experience writing complex queries, query optimization, debugging, user-defined functions, etc. in PySpark.
  • Has worked in AWS, with familiarity with Airflow and DAGs.
  • Knowledge of AWS cloud integration with Apache Spark, EMR, Athena, Glue, Kafka, Lambda, S3, and Redshift is desirable.
  • Knowledge of PostgreSQL and Docker.
  • Experience processing large structured and unstructured files from multiple sources.
  • Experience working with JIRA or similar project management tools.

Profiles to be sent to contact@advancesoftinc.com
  • 1 - 7 yrs
  • 8.0 Lac/Yr
  • Philippines
Python AWS Language Machine Learning Data Analysis Data Science Problem Solving
Machine Learning Engineer - Street Simplified
Full-time, Remote: US Time, PST

At Street Simplified, we use video analytics to help public sector agencies understand why crashes happen and give them the tools to proactively prevent them. We are looking to hire a Machine Learning Engineer who is passionate about eliminating crashes and saving lives.

Responsibilities and Duties:
  • Develop new capabilities that eliminate crashes and save lives.
  • Implement/extend the core code base to provide additional functionality and support.
  • Design production tests and the production deployment system.

Experience:
  • Experience working on very large 100+ GB datasets.
  • Experience managing a large code base and following best practices: clean coding, documentation, commits, testing.
  • Experience solving difficult math/physics problems.
  • Experience programming in Python, and preferably also C/C++.
  • Experience interfacing with AWS for production processing.

Experience with (as many as possible):
  • Data science
  • Machine learning
  • Real-time edge computing (preferably on NVIDIA hardware)
  • Statistics
  • Optimization
  • Training neural networks
  • Classical computer vision
  • Signal processing
  • Algorithm development

Qualifications:
  • Master's or PhD in EE, CS, Physics, or a related field (compensation will vary based on expertise, experience, and education).
  • Proven track record of solving difficult technical problems across domains.
  • Highly efficient programmer/multitasker.
  • Must communicate effectively in spoken and written English.
  • 5 - 7 yrs
  • 10.0 Lac/Yr
  • Coimbatore
Big Data Analytics Ridge Elastic Net Python AWS Cloud Engineer Agile
Qualifications:
  • 5-7 years of professional experience in machine learning, data science, or related roles.
  • Good exposure to and understanding of time series modelling using ARIMA and ARIMAX.
  • Exposure to handling underfitting and overfitting; capable of applying techniques that help generalize models.
  • Regularization techniques (LASSO, Ridge, and Elastic Net) and when to apply them.
  • Good exposure to unsupervised machine learning: clustering, dimensionality reduction, and outlier detection.
  • Ability to understand how models are optimized using various techniques, including the gradient descent approach.
  • Good understanding of deep learning algorithms (CNN, RNN, LSTM) and how to control overfitting in such cases.
  • Good hands-on data engineering skills for processing data at huge scale using Big Data tools (Spark/Hive).
  • Good coding practices; able to write production-ready code for creating data pipelines for models to consume.
  • Very good hands-on Python skills (Pandas/NumPy/Scikit-Learn/NLTK/spaCy/Matplotlib).
  • Able to apply the right level of ML technique for the given problem statement.
  • Ability to access information contained in data and engineer appropriate features.
  • Familiar with the Python language and various platforms for hosting ML models.
  • Expert in model training, tuning, and validation.
  • Expert in statistical techniques, deep learning methodologies, GenAI, and alternate techniques such as Bayesian methods.
  • Exposure to big data and related models.
  • Ability to articulate model choice and convert outcomes into business decision-making.
  • Expert in the Model Development Lifecycle, from sourcing to model monitoring.
  • Ability to create code that is highly performant on the given platform.
  • Ability to map a model and business use case to the appropriate platform and tools needed.
  • Understanding of technical and machine learning governance.
  • Ability to validate and articulate model choices with relevant metrics (precision, recall, confusion matrix, RMSE,
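One of the regularization techniques named above, ridge (L2), has a closed-form solution: w = (X^T X + lambda*I)^-1 X^T y. A small NumPy sketch on invented toy data, showing that increasing lambda shrinks the fitted coefficients toward zero, which is exactly how ridge combats overfitting:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: solve (X^T X + lam*I) w = X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([2.0, -1.0, 0.5])          # invented "true" coefficients
y = X @ true_w + 0.01 * rng.normal(size=50)  # targets with small noise

w_small = ridge_fit(X, y, lam=1e-6)   # nearly ordinary least squares
w_large = ridge_fit(X, y, lam=100.0)  # heavy shrinkage toward zero

print(w_small)  # close to [2.0, -1.0, 0.5]
print(np.linalg.norm(w_large) < np.linalg.norm(w_small))  # shrinkage holds
```

LASSO (L1) has no closed form and needs an iterative solver, which is one practical reason to know when each penalty applies, as the posting asks.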

Opening For Cloud Data Engineer

Talme Technologies Pvt Ltd

Designing and Implementing Data Architecture Strategies, Data Integration, Data Management, Supporting Analytics, Technology Selection, and Performance Optimization. Technical Skills: in-depth knowledge of AWS services (IAM, Redshift, NoSQL), data processing and analysis tools (AWS Glue, EMR), big data frameworks (Hadoop, Spark), ETL tools (IBM DataStage, ODI in
We are on the lookout for a seasoned Cloud Data Lead. We are eager to connect with you if you have extensive experience in cloud platforms, data architecture, and leadership!

Ataccama Admin

Learning Lane Pvt Ltd

Ataccama DevOps Engineer AWS Azure Administrator Linux Windows VMs (Virtual Machines) Data Management
Detailed Job Description:
We are looking for an Ataccama Admin to join our team and help us manage and maintain our Ataccama data quality and data governance platform. You will be responsible for installing, configuring, and maintaining Ataccama on the AWS/Azure platform, as well as developing and implementing data quality rules and policies. You should have a deep understanding of Ataccama architecture and best practices, as well as experience in data management and data governance. Experience in administering Collibra and Immuta is preferred. You should also have experience managing VMs and Windows- and Linux-based systems. Pharma experience is preferred.

Essential Duties and Responsibilities:
  • Install, configure, and maintain Ataccama.
  • Develop and implement data quality rules and policies.
  • Monitor and report on data quality metrics.
  • Troubleshoot and resolve Ataccama-related issues.
  • Stay up to date on the latest Ataccama features and best practices.
  • Work with cross-functional teams to implement data governance policies and procedures.
  • Manage and maintain VMs and Windows- and Linux-based systems.
  • Manage redundancy, backup, and recovery plans and processes.
  • Strong AWS/Azure experience.

Qualifications:
  • 4+ years of experience with Ataccama.
  • Experience in data management and data governance.
  • Experience administering Collibra and Immuta is preferred.
  • Experience managing VMs and Windows- and Linux-based systems.
  • Experience in performance tuning.
  • Pharma experience is preferred.
  • Strong analytical and problem-solving skills.
  • Excellent communication and teamwork skills.