Data Engineer Female 10th Pass Jobs in Noida

There are currently no vacancies available for "Data Engineer" in Noida.

If you are interested in future opportunities, please Post Your Resume.



Browse "Data Engineer" Jobs in Other Locations of Uttar Pradesh

  • Fresher
  • 5.5 Lac/Yr
  • Garhi Chaukhandi Noida
Work From Home, Back Office Processing, English Typing, Copy Editing, Hindi Typing, Non Voice Process, Data Management, Data Processing, Online Data Entry, Computer Operations, Copy Paste Jobs, Offline Data Entry, MS Office Package, Typing Skills, Data Entry, MS Office, Communication Skills, Basic Computers, Mails, Data Entry Operator, Data Entry Specialist, SAP Data Entry Operator, Phone Banking Officer, Phone Banking Executive, Data Migration, Data Warehousing, Data Encoder, Data An
We are looking for a motivated and detail-oriented Data Engineer to join our team. This part-time position is ideal for freshers who have completed at least their 10th grade and are eager to learn and grow in the field of data engineering. The role can be performed from home, providing flexibility in your work environment.

Key Responsibilities:
1. **Data Collection**: Assist in gathering data from various sources and help ensure the data is accurate and complete, which is essential for analysis.
2. **Data Cleaning**: Support the team in cleaning and organizing data, removing any errors or inconsistencies to make the data reliable for decision-making.
3. **Data Storage**: Help in storing data in databases and data warehouses. You will learn how these systems work and how to manage data effectively.
4. **Collaboration**: Work closely with other team members, including data analysts and scientists, to understand their data needs and provide the required support.

Required Skills and Expectations:
Candidates should have a foundational understanding of databases and a willingness to learn new technologies. Strong analytical thinking and attention to detail are essential for success in this role. Good communication skills are important as you will be cooperating with teammates and sharing insights. Familiarity with tools like Excel or database management systems will be helpful but not mandatory, as training will be provided.
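The data-cleaning duty described above can be sketched in a few lines of standard-library Python. The field names (`name`, `email`) and the validation rules are illustrative assumptions, not part of the posting:

```python
import re

def clean_rows(rows):
    """Deduplicate rows and drop entries with a missing name or invalid email."""
    seen = set()
    cleaned = []
    for row in rows:
        email = row.get("email", "").strip().lower()
        name = row.get("name", "").strip()
        # Skip rows missing a name or with a malformed email address.
        if not name or not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email):
            continue
        key = (name.lower(), email)
        if key in seen:  # drop exact duplicates after normalisation
            continue
        seen.add(key)
        cleaned.append({"name": name, "email": email})
    return cleaned

raw = [
    {"name": "Asha", "email": "asha@example.com"},
    {"name": "Asha", "email": "ASHA@example.com "},  # duplicate after normalisation
    {"name": "", "email": "x@example.com"},          # missing name
    {"name": "Ravi", "email": "not-an-email"},       # invalid email
]
print(clean_rows(raw))  # [{'name': 'Asha', 'email': 'asha@example.com'}]
```

The same pattern (normalise, validate, deduplicate) scales up directly to tools like pandas or PySpark once the basics are in place.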

Looking For Data Engineer

InfiCare Technologies

  • 10 - 15 yrs
  • 22.5 Lac/Yr
  • Delhi
AZURE AWS ETL Data Factory Data Warehousing ETL Tool SQL
Key Responsibilities:
  • Design and manage data pipelines to transform and integrate structured and unstructured data.
  • Ensure high data quality and performance.
  • Support analytics, reporting, and business intelligence needs by preparing reliable data sets and models for stakeholders.
  • Collaborate with Analysts, Digital Project Managers, Developers, and business teams to ensure data accessibility and usefulness.
  • Enforce standards for data governance, security, and cost-effective operations.

Ideal candidates will thrive in a collaborative, mission-focused environment and excel in ETL/ELT engineering. They should have experience building scalable data solutions using modern data engineering technologies that impact organizational outcomes.

Required Qualifications:
  • Strong proficiency in Structured Query Language (SQL) and at least one programming language such as Python or Scala.
  • Hands-on experience developing ETL or ELT pipelines.
  • Experience with cloud-native data services (e.g., AWS Glue, AWS Redshift, Azure Data Factory, Azure Synapse, Databricks).
  • Good understanding of data modeling and data warehousing concepts.

Desired Qualifications:
  • Design, build, and optimize scalable ETL or ELT pipelines handling both structured and unstructured data.
  • Ingest and integrate data from internal and external sources into data lakes or data warehouses.
  • Ensure that processed data is accurate, complete, and secure.

Outcomes include well-documented, automated pipelines that support downstream analytics without bottlenecks or data errors.
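The extract-transform-load pipeline work described above can be illustrated with a minimal in-memory pass using only the standard library. The source rows, the `sales` schema, and the transforms are invented for the sketch and stand in for a real warehouse connection:

```python
import sqlite3

def run_pipeline(source_rows, conn):
    """Extract raw rows, transform them, and load them into a warehouse table."""
    # Transform: normalise country codes, cast amounts, drop rows with no amount.
    transformed = [
        (r["id"], r["country"].upper(), float(r["amount"]))
        for r in source_rows
        if r.get("amount") is not None
    ]
    # Load: idempotent create-and-insert into the target table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales (id INTEGER PRIMARY KEY, country TEXT, amount REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO sales VALUES (?, ?, ?)", transformed)
    return conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()

source = [
    {"id": 1, "country": "in", "amount": "10.5"},
    {"id": 2, "country": "us", "amount": None},  # dropped in transform
    {"id": 3, "country": "de", "amount": "4.5"},
]
conn = sqlite3.connect(":memory:")
print(run_pipeline(source, conn))  # (2, 15.0)
```

In a production pipeline the same three stages would run against cloud services such as Glue or Data Factory, but the shape of the work is the same.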
Glue Lambda ETL
  • 3+ years of AWS data engineering: Glue, Step Functions, Lambda, S3, DynamoDB, EC2
  • Strong Python (boto3) scripting for automation
  • Terraform or CloudFormation expertise
  • Hands-on experience integrating RAG workflows or deploying LLM applications
  • Solid SQL and NoSQL data-modeling skills
  • Excellent written and verbal communication in client-facing contexts

Data Engineer

StatusNeo

Python Pyspark Data Engineer AWS Glue
  • 3+ years of experience with AWS services including SQS, S3, Step Functions, EFS, Lambda, and OpenSearch.
  • Strong experience in API integrations, including experience working with large-scale API endpoints.
  • Proficiency in PySpark for data processing and parallelism in large-scale ingestion pipelines.
  • Experience with AWS OpenSearch APIs for managing search indices.
  • Terraform expertise for automating and managing cloud infrastructure.
  • Hands-on experience with AWS SageMaker, including working with machine learning models and endpoints.
  • Strong understanding of data flow architectures, document stores, and journal-based systems.
  • Experience in parallelizing data processing workflows to meet strict performance and SLA requirements.
  • Familiarity with AWS tools like CloudWatch for monitoring pipeline performance.

Additional Preferred Qualifications:
  • Strong problem-solving and debugging skills in distributed systems.
  • Prior experience in optimizing ingestion pipelines with a focus on cost-efficiency and scalability.
  • Solid understanding of distributed data processing and workflow orchestration in AWS environments.

Soft Skills:
  • Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams.
  • Ability to work in a fast-paced environment and deliver high-quality results under tight deadlines.
  • Analytical mindset, with a focus on performance optimization and continuous improvement.

Python SQL ML Docker AWS Cloud Engineer
Level of skills and experience:
  • 5 years of hands-on experience using Python, Spark, and SQL.
  • Experienced in AWS Cloud usage and management.
  • Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
  • Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
  • Experience with orchestrators such as Airflow and Kubeflow.
  • Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
  • Fundamental understanding of Parquet, Delta Lake, and other data file formats.
  • Proficiency in an IaC tool such as Terraform, CDK, or CloudFormation.
  • Strong written and verbal English communication skills and proficiency in communicating with non-technical stakeholders.
  • 10 - 15 yrs
  • 22.5 Lac/Yr
  • Gurgaon
Data Engineer Erwin ER ETL Tool Azure Synapse Azure Data Factory Databricks Python Pyspark SQL
Hello Everyone, Greetings!

We are actively hiring a Data Engineer for our India team! If you or someone in your network is interested, please review the job details below and reach out to me at gaurav.sharma@cxdatalabs.com with your CV.

Key Responsibilities:
  • Design and optimize database data modeling with expertise in Azure Synapse, Azure Data Factory, and Databricks.
  • Support data migration and modernization efforts within the Enterprise Data Lake.
  • Design and implement data pipelines, ETL mappings, sessions, and workflows.
  • Handle data transformation, data modeling/mart creation, and scheduling coordination.
  • Work closely with the Enterprise Information Management Team on data validation strategies, quality analysis, issue resolution, and unit testing.
  • Ensure smooth release management and system integration.

Qualifications & Skills Required:
  • Years of experience in Data Engineering, with a strong domain background in Clinical Research & Biopharmaceutical services.
  • Experience in data platforms and analytics systems, collaborating with Product Owners and stakeholders.
  • Proficiency in data modeling tools (ER Studio, Erwin) and cloud technologies.
  • Strong understanding of ETL processes, data quality management, and integration strategies.
  • Familiarity with Agile tools like Jira, Confluence, and Asana.
  • Strong problem-solving skills with the ability to identify areas for improvement and align with business requirements.
  • Good communication skills to proactively communicate integration changes in advance.

If you're interested or have any referrals, please email me or message me directly. Looking forward to connecting!

Best Regards,
Gaurav Sharma
#8802080222
HR Operations Partner
CX Data Labs

AWS Data Engineer Lead / Architect

Vision Excel Career Solutions

Python Data Architect Data Engineer AWS
Are you a mid/senior-level T-shaped AWS expert specializing in DevOps and Data Engineering? If yes, we have an exciting opportunity just for you. One of our reputed European clients is looking for AWS engineers to help them build secure, resilient, and cost-effective solutions on the AWS platform and reap the benefits of their investment in AWS services. We are looking for self-motivated, highly experienced engineers with great analytical and excellent communication skills for this client-facing role.

What do we expect from you?
Role: Data Engineer (AWS)

Mandatory:
  • Experience developing data pipelines that process large volumes of data using Python, PySpark, Pandas, etc., preferably on AWS.
  • Experience ingesting batch and streaming data from various data sources.
  • Experience writing complex SQL using any RDBMS (Oracle, PostgreSQL, SQL Server, etc.).
  • Experience developing ETL, OLAP-based, and analytical applications.
  • Ability to quickly learn and develop expertise in existing highly complex applications and architectures.
  • Comfortable working in Agile projects.

Desirable:
  • Exposure to the AWS platform's data services (AWS Lambda, Glue, Athena, Redshift, Kinesis, etc.).
  • Knowledge of DevOps and CI/CD tools.
  • Experience handling unstructured data.
  • Knowledge of the Financial Markets domain.

Keywords: Data Engineer, Data Pipelines, Data Ingestion, AWS Lambda, AWS Athena
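"Complex SQL" in listings like the one above usually means analytic queries with window functions. The sketch below runs one against SQLite purely for illustration; the `trades` table, its columns, and the data are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (trader TEXT, symbol TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [("A", "INFY", 100), ("A", "TCS", 300), ("B", "INFY", 200)],
)

# Rank each trader's positions by quantity, largest first,
# using a window function partitioned per trader.
rows = conn.execute(
    """
    SELECT trader, symbol, qty,
           RANK() OVER (PARTITION BY trader ORDER BY qty DESC) AS rnk
    FROM trades
    ORDER BY trader, rnk
    """
).fetchall()
print(rows)  # [('A', 'TCS', 300, 1), ('A', 'INFY', 100, 2), ('B', 'INFY', 200, 1)]
```

The same `RANK() OVER (PARTITION BY ... ORDER BY ...)` form carries over to Oracle, PostgreSQL, and SQL Server with minimal changes.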

Hiring For AWS Data Engineer

Right Time Placement

Data Engineer Data Architect AWS Data Warehousing
Job Description: AWS Data Engineer with a minimum of 5 to 7 years of experience.
  • Collaborate with business analysts to understand and gather requirements for existing or new ETL pipelines.
  • Connect with stakeholders daily to discuss project progress and updates.
  • Work within an Agile process to deliver projects in a timely and efficient manner.
  • Design and develop Airflow DAGs to schedule and manage ETL workflows.
  • Transform SQL queries into Spark SQL code for ETL pipelines.
  • Develop custom Python functions to handle data quality and validation.
  • Write PySpark scripts to process data and perform transformations.
  • Perform data validation and ensure data accuracy and completeness by creating automated tests and implementing data validation processes.
  • Run Spark jobs on an AWS EMR cluster using Airflow DAGs.
  • Monitor and troubleshoot ETL pipelines to ensure smooth operation.
  • Implement best practices for data engineering, including data modeling, data warehousing, and data pipeline architecture.
  • Collaborate with other members of the data engineering team to improve processes and implement new technologies.
  • Stay up to date with emerging trends and technologies in data engineering and suggest ways to improve the team's efficiency and effectiveness.
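The "custom Python functions to handle data quality and validation" duty above can be sketched without Spark or Airflow; the idea is a function that returns a list of rule violations for a batch, which an automated test then asserts against. The column names and rules here are hypothetical:

```python
def validate_batch(rows, required=("order_id", "amount")):
    """Return a list of validation errors for a batch of records."""
    errors = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Rule 1: required columns must be present and non-null.
        for col in required:
            if row.get(col) is None:
                errors.append(f"row {i}: missing {col}")
        # Rule 2: amounts must not be negative.
        if row.get("amount") is not None and row["amount"] < 0:
            errors.append(f"row {i}: negative amount")
        # Rule 3: order_id must be unique within the batch.
        oid = row.get("order_id")
        if oid is not None:
            if oid in seen_ids:
                errors.append(f"row {i}: duplicate order_id {oid}")
            seen_ids.add(oid)
    return errors

batch = [
    {"order_id": 1, "amount": 50.0},
    {"order_id": 1, "amount": -5.0},  # duplicate id and negative amount
    {"order_id": 2},                  # missing amount
]
print(validate_batch(batch))
```

In practice the same checks would run as a PySpark step inside an Airflow DAG, failing the task when the error list is non-empty.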

Data Engineer

Bb Works India

Data Warehousing ETL Python AWS SCALA Data Engineer
We have 5 vacant Data Engineer jobs in Bangalore and Noida. Experience Required: 9 Years. Educational Qualification: Other Bachelor Degree. Skills: Data Warehousing, ETL, Python, AWS, Scala, Data Engineer.
GCP Data Engineer Data Flow Data Fusion
Experience: 5-8 years in the IT industry.
Skill Set: Experience with GCP data analytics services such as Dataflow, Data Fusion, BigQuery, and data storage.
Responsibility: Responsible for developing an Enterprise Knowledge Graph (EKG) that connects all relevant data from the data sources. Unit test and validate the graph lineage.
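The graph-lineage validation duty above can be illustrated with a tiny in-memory graph; the node names and edges are invented. The check walks upstream edges so a unit test can assert that every node's lineage terminates at a known source:

```python
# Edges map each node to the upstream nodes it is derived from.
lineage = {
    "report": ["mart"],
    "mart": ["staging"],
    "staging": ["raw"],
    "raw": [],  # source node: no upstream dependencies
}

def trace(node, graph):
    """Return all upstream ancestors of a node, sources included."""
    ancestors = []
    stack = list(graph.get(node, []))
    while stack:
        cur = stack.pop()
        if cur not in ancestors:
            ancestors.append(cur)
            stack.extend(graph.get(cur, []))
    return ancestors

print(trace("report", lineage))  # ['mart', 'staging', 'raw']
```

A real EKG would live in a graph store and be queried the same way: assert that `trace` from every derived node reaches the expected raw sources.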

Apply to 0 Data Engineer Female 10th Pass Jobs in Noida

  • Noida Jobs
  • Hyderabad Jobs
  • Ahmedabad Jobs
  • Bangalore Jobs
  • Mumbai Jobs
  • Pune Jobs
  • Chennai Jobs
  • Kolkata Jobs
  • Delhi Jobs