Data Engineer Female 10th Pass Jobs in Delhi

There are currently no vacancies available for "Data Engineer" in Delhi

If you are interested in future opportunities, please Post Your Resume



Browse "Data Engineer" jobs in other locations

  • Fresher
  • 9.5 Lac/Yr
  • Geetanjali Enclave Delhi
Data Entry Validation, Data Entry Speed, Data Formatting, Data Entry Software, Data Quality Control, Data Entry Automation, Data Entry Forms, Data Entry Audit, Data Verification, Google Sheets, Data Entry Accuracy, Keyboard Shortcuts, Data Cleansing, Spreadsheet Management, Data Input, Microsoft Excel, Numeric Keypad, Data Collection, Typing Speed, Data Extraction, Copy-Paste, Data Accuracy
We are looking for a Part-Time Data Entry Specialist to join our team. This role is ideal for freshers who have completed at least their 10th grade. You will work from home, helping to maintain and update our important data.

Key Responsibilities:
1. **Data Entry**: Accurately input data from various sources into our databases. Attention to detail is crucial to ensure the information is correct.
2. **Data Management**: Assist in organizing and maintaining the database by updating records, checking for errors, and ensuring timely entries.
3. **Report Generation**: Prepare simple reports as needed to help the team analyze data. This may involve summarizing information and creating spreadsheets.
4. **Collaboration**: Communicate with team members to understand data needs and provide updates on data entry tasks. Good communication helps ensure that everyone stays on track.

Required Skills and Expectations:
Candidates should have basic computer skills, including knowledge of word processing and spreadsheet applications. The ability to type quickly and accurately is important for this role. Strong attention to detail will help you minimize errors. Good organizational skills will aid in keeping data systematic and accessible. While prior experience in data entry is not necessary, a willingness to learn and adapt is important. Candidates should be self-motivated and manage their time effectively while working from home to meet deadlines.
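The error-checking and data-quality tasks described above can be sketched in a few lines of plain Python. The field names and validation rules here are hypothetical examples, not taken from the posting:

```python
# Minimal sketch of the kind of record validation a data-entry role involves.
# Field names ("name", "phone", "email") and rules are invented for illustration.

def validate_record(record):
    """Return a list of error messages for one data-entry record."""
    errors = []
    if not record.get("name", "").strip():
        errors.append("name is missing")
    phone = record.get("phone", "")
    if phone and not phone.isdigit():
        errors.append("phone must contain digits only")
    if "@" not in record.get("email", ""):
        errors.append("email looks invalid")
    return errors

rows = [
    {"name": "Asha", "phone": "9876543210", "email": "asha@example.com"},
    {"name": "", "phone": "98-76", "email": "no-at-sign"},
]
for i, row in enumerate(rows):
    problems = validate_record(row)
    if problems:
        print(f"row {i}: {', '.join(problems)}")
```

In practice the same checks would usually live in spreadsheet validation rules (Excel or Google Sheets), but the logic is the same.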

Looking For Data Engineer

InfiCare Technologies

  • 10 - 15 yrs
  • 22.5 Lac/Yr
  • Delhi
AZURE AWS ETL Data Factory Data Warehousing ETL Tool SQL
Key Responsibilities:
- Design and manage data pipelines to transform and integrate structured and unstructured data.
- Ensure high data quality and performance.
- Support analytics, reporting, and business intelligence needs by preparing reliable data sets and models for stakeholders.
- Collaborate with Analysts, Digital Project Managers, Developers, and business teams to ensure data accessibility and usefulness.
- Enforce standards for data governance, security, and cost-effective operations.

Ideal candidates will thrive in a collaborative, mission-focused environment and excel in ETL/ELT engineering. They should have experience building scalable data solutions using modern data engineering technologies that impact organizational outcomes.

Required Qualifications:
- Strong proficiency in Structured Query Language (SQL) and at least one programming language such as Python or Scala.
- Hands-on experience developing ETL or ELT pipelines.
- Experience with cloud-native data services (e.g., AWS Glue, AWS Redshift, Azure Data Factory, Azure Synapse, Databricks).
- Good understanding of data modeling and data warehousing concepts.

Desired Qualifications:
- Design, build, and optimize scalable ETL or ELT pipelines handling both structured and unstructured data.
- Ingest and integrate data from internal and external sources into data lakes or data warehouses.
- Ensure that processed data is accurate, complete, and secure.

Outcomes include well-documented, automated pipelines that support downstream analytics without bottlenecks or data errors.
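The extract-transform-load pattern these responsibilities describe can be shown in miniature with plain Python. Real pipelines would run on Glue, Data Factory, or Spark; the source rows and quality rules below are invented for illustration:

```python
# Toy ETL pass: extract rows, transform them (type checks + completeness),
# load the clean subset. The data and rules are hypothetical examples.

def extract():
    return [
        {"id": 1, "amount": "100.5", "region": "north"},
        {"id": 2, "amount": "bad", "region": "south"},
        {"id": 3, "amount": "42", "region": None},
    ]

def transform(rows):
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except (TypeError, ValueError):
            continue  # drop rows whose amount fails the type check
        if row["region"] is None:
            continue  # enforce completeness: region is required
        clean.append({"id": row["id"], "amount": amount, "region": row["region"]})
    return clean

def load(rows, target):
    target.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse)  # only row 1 passes both quality checks
```

The "data quality" bullet above is exactly the `transform` step: rows that fail type or completeness checks never reach the warehouse.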
Glue Lambda ETL
- 3+ years of AWS data engineering: Glue, Step Functions, Lambda, S3, DynamoDB, EC2
- Strong Python (boto3) scripting for automation
- Terraform or CloudFormation expertise
- Hands-on experience integrating RAG workflows or deploying LLM applications
- Solid SQL and NoSQL data-modeling skills
- Excellent written and verbal communication in client-facing contexts
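A Lambda function of the sort this stack implies is just a Python handler. The event shape below mirrors an S3 notification event; the bucket contents and the processing step are hypothetical, and no AWS calls are actually made:

```python
# Sketch of an AWS Lambda handler: receive an S3 event, pull out the
# object keys, and report what was processed. The event payload is a
# made-up example in the shape of S3 notification events.

def lambda_handler(event, context):
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records]
    # In a real function the boto3 client would fetch each object here,
    # e.g. s3 = boto3.client("s3"); s3.get_object(Bucket=..., Key=key)
    return {"statusCode": 200, "processed": keys}

event = {"Records": [{"s3": {"object": {"key": "incoming/data.csv"}}}]}
print(lambda_handler(event, None))
```

Step Functions would then chain handlers like this one into a workflow, with Terraform or CloudFormation declaring the wiring.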
Python SQL ML Docker AWS Cloud Engineer
Level of skills and experience:
- 5 years of hands-on experience using Python, Spark, and SQL.
- Experienced in AWS cloud usage and management.
- Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
- Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
- Experience with orchestrators such as Airflow and Kubeflow.
- Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
- Fundamental understanding of Parquet, Delta Lake, and other data file formats.
- Proficiency in an IaC tool such as Terraform, CDK, or CloudFormation.
- Strong written and verbal English communication skills, and proficiency in communicating with non-technical stakeholders.


Data Engineer

StatusNeo

Python Pyspark Data Engineer AWS Glue
- 3+ years of experience with AWS services including SQS, S3, Step Functions, EFS, Lambda, and OpenSearch.
- Strong experience in API integrations, including experience working with large-scale API endpoints.
- Proficiency in PySpark for data processing and parallelism in large-scale ingestion pipelines.
- Experience with AWS OpenSearch APIs for managing search indices.
- Terraform expertise for automating and managing cloud infrastructure.
- Hands-on experience with AWS SageMaker, including working with machine learning models and endpoints.
- Strong understanding of data flow architectures, document stores, and journal-based systems.
- Experience in parallelizing data processing workflows to meet strict performance and SLA requirements.
- Familiarity with AWS tools like CloudWatch for monitoring pipeline performance.

Additional Preferred Qualifications:
- Strong problem-solving and debugging skills in distributed systems.
- Prior experience in optimizing ingestion pipelines with a focus on cost-efficiency and scalability.
- Solid understanding of distributed data processing and workflow orchestration in AWS environments.

Soft Skills:
- Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams.
- Ability to work in a fast-paced environment and deliver high-quality results under tight deadlines.
- Analytical mindset, with a focus on performance optimization and continuous improvement.
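The "parallelizing data processing workflows" requirement is, in PySpark, handled by the cluster; the same idea in miniature, using only the standard library, looks like this (the per-document fetch-and-index step is faked):

```python
# Parallel ingestion sketch: fan a batch of document IDs out over a
# thread pool. The ingest step is a stand-in; a real pipeline would
# fetch from an API or queue and write to a search index.

from concurrent.futures import ThreadPoolExecutor

def ingest(doc_id):
    # stand-in for fetching and indexing one document
    return {"id": doc_id, "status": "indexed"}

doc_ids = range(8)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(ingest, doc_ids))

print(sum(1 for r in results if r["status"] == "indexed"))  # 8
```

Meeting an SLA is then a sizing question: throughput scales with workers (or Spark partitions) until the downstream store becomes the bottleneck, which is what CloudWatch-style monitoring is for.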
  • 10 - 15 yrs
  • 22.5 Lac/Yr
  • Gurgaon
Data Engineer Erwin ER ETL Tool Azure Synapse Azure Data Factory Databricks Python Pyspark SQL
Hello Everyone, Greetings!

We are actively hiring a Data Engineer for our India team! If you or someone in your network is interested, please review the job details below and reach out to me at gaurav.sharma@cxdatalabs.com with your CV.

Key Responsibilities:
- Design and optimize database data modeling, with expertise in Azure Synapse, Azure Data Factory, and Databricks.
- Support data migration and modernization efforts within the Enterprise Data Lake.
- Design and implement data pipelines, ETL mappings, sessions, and workflows.
- Handle data transformation, data modeling/mart creation, and scheduling coordination.
- Work closely with the Enterprise Information Management Team on data validation strategies, quality analysis, issue resolution, and unit testing.
- Ensure smooth release management and system integration.

Qualifications & Skills Required:
- Years of experience in Data Engineering, with a strong domain background in Clinical Research & Biopharmaceutical services.
- Experience in data platforms and analytics systems, collaborating with Product Owners and stakeholders.
- Proficiency in data modeling tools (ER Studio, Erwin) and cloud technologies.
- Solid understanding of ETL processes, data quality management, and integration strategies.
- Familiarity with Agile tools like Jira, Confluence, and Asana.
- Strong problem-solving skills, with the ability to identify areas for improvement and align with business requirements.
- Good communication skills to proactively communicate integration changes in advance.

If you're interested or have any referrals, please email me or message me directly. Looking forward to connecting!

Best Regards,
Gaurav Sharma
8802080222
HR Operations Partner
CX Data Labs

Opening For Data Engineer

Cynosure Corporate Solutions

  • 3 - 9 yrs
  • Delhi
Apache Python Hadoop SCALA
Job Description:
We are looking for Data Engineers to join our team. You will use various methods to transform raw data into useful data systems; for example, you'll create algorithms and conduct statistical analysis. Overall, you'll strive for efficiency by aligning data systems with business goals. To succeed in this position, you should have strong analytical skills and the ability to combine data from different sources. Data engineer skills also include familiarity with several programming languages and knowledge of machine learning methods.

Job Requirements:
- Participate in the customer's system design meetings and collect the functional/technical requirements.
- Build data pipelines for consumption by the data science team.
- Skilled in the ETL process and tools.
- Clear understanding of, and experience with, Python and PySpark, or Spark and Scala, along with Hive, Airflow, Impala, Hadoop, and RDBMS architecture.
- Experience in writing Python programs and SQL queries.
- Experience in SQL query tuning.
- Experienced in shell scripting (Unix/Linux).
- Build and maintain data pipelines in Spark/PySpark with SQL and Python or Scala.
- Knowledge of cloud technologies (Azure/AWS/GCP, etc.) is a plus.
- Good to have: knowledge of Kubernetes, CI/CD concepts, and Apache Kafka.
- Suggest and implement best practices in data integration.
- Guide the QA team in defining system integration tests as needed.
- Split the planned deliverables into tasks and assign them to the team.
- Maintain and deploy the ETL code, following the Agile methodology.
- Work on optimization wherever applicable.
- Good oral, written, and presentation skills.

Preferred Qualifications:
- Degree in Computer Science, IT, or a similar field; a Master's is a plus.
- Hands-on experience with Python and PySpark, or hands-on experience with Spark and Scala.
- Great numerical and analytical skills.
- Working knowledge of cloud platforms such as MS Azure, AWS, etc.
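"SQL query tuning" usually starts with reading the query plan. A tiny, self-contained illustration using sqlite3 from the Python standard library: the same query before and after adding an index (the table and column names are invented):

```python
# SQL tuning in miniature: compare the query plan for a filtered count
# before and after creating an index on the filter column.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, action TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(i % 100, "click") for i in range(1000)],
)

query = "SELECT COUNT(*) FROM events WHERE user_id = 7"

# Without an index, the planner falls back to a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][-1]

print(plan_before)  # a SCAN over the whole table
print(plan_after)   # now a SEARCH using idx_events_user
```

The same scan-vs-index-seek reasoning carries over to Hive, Impala, and Spark SQL, where the equivalent tool is `EXPLAIN` plus partition pruning.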

Data Engineer

Bb Works India

  • 9 - 15 yrs
  • 40.0 Lac/Yr
  • Bangalore +1 Noida
Data Warehousing ETL Python AWS SCALA Data Engineer
We have 5 vacant Data Engineer jobs in Bangalore and Noida. Experience required: 9 years. Educational qualification: Other Bachelor Degree. Skills: Data Warehousing, ETL, Python, AWS, Scala, Data Engineer.


AWS Data Engineer Lead / Architect

Vision Excel Career Solutions

Python Data Architect Data Engineer AWS
Are you a mid/senior-level, T-shaped AWS expert specializing in the DevOps and Data Engineering space? If yes, we have an exciting opportunity just for you. One of our reputed European clients is looking for AWS engineers to help them build secure, resilient, and cost-effective solutions on the AWS platform, so they can reap the benefits of their investment in AWS platform and services. We are looking for self-motivated, highly experienced engineers possessing great analytical and excellent communication skills for this client-facing role.

What do we expect from you?

Role: Data Engineer (AWS)

Mandatory:
- Experience in developing data pipelines that process large volumes of data using Python, PySpark, Pandas, etc., preferably on AWS.
- Experience in ingesting batch and streaming data from various data sources.
- Experience in writing complex SQL using any RDBMS (Oracle, PostgreSQL, SQL Server, etc.).
- Experience in developing ETL, OLAP-based, and analytical applications.
- Ability to quickly learn and develop expertise in existing highly complex applications and architectures.
- Comfortable working in Agile projects.

Desirable:
- Exposure to the AWS platform's data services (AWS Lambda, Glue, Athena, Redshift, Kinesis, etc.).
- Knowledge of DevOps and CI/CD tools.
- Experience in handling unstructured data.
- Knowledge of the Financial Markets domain.

Keywords: Data Engineer, Data Pipelines, Data Ingestion, AWS Lambda, AWS Athena

Hiring For AWS Data Engineer

Right Time Placement

Data Engineer Data Architect AWS Data Warehousing
Job Description: AWS Data Engineer with a minimum of 5 to 7 years of experience.
- Collaborate with business analysts to understand and gather requirements for existing or new ETL pipelines.
- Connect with stakeholders daily to discuss project progress and updates.
- Work within an Agile process to deliver projects in a timely and efficient manner.
- Design and develop Airflow DAGs to schedule and manage ETL workflows.
- Transform SQL queries into Spark SQL code for ETL pipelines.
- Develop custom Python functions to handle data quality and validation.
- Write PySpark scripts to process data and perform transformations.
- Perform data validation and ensure data accuracy and completeness by creating automated tests and implementing data validation processes.
- Run Spark jobs on an AWS EMR cluster using Airflow DAGs.
- Monitor and troubleshoot ETL pipelines to ensure smooth operation.
- Implement best practices for data engineering, including data modeling, data warehousing, and data pipeline architecture.
- Collaborate with other members of the data engineering team to improve processes and implement new technologies.
- Stay up to date with emerging trends and technologies in data engineering and suggest ways to improve the team's efficiency and effectiveness.
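What an Airflow DAG encodes — a set of tasks plus "runs after" edges — can be sketched with a topological sort over plain functions. The task names below are invented, and a real deployment would define these as Airflow operators rather than bare callables:

```python
# DAG scheduling in miniature: declare task dependencies, then execute
# tasks in an order that respects them (what an Airflow scheduler does
# per DAG run). graphlib is in the standard library (Python 3.9+).

from graphlib import TopologicalSorter

executed = []
tasks = {
    "extract": lambda: executed.append("extract"),
    "transform": lambda: executed.append("transform"),
    "validate": lambda: executed.append("validate"),
    "load": lambda: executed.append("load"),
}
# edges: each task mapped to the set of tasks it depends on
deps = {
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"validate"},
}
for name in TopologicalSorter(deps).static_order():
    tasks[name]()

print(executed)  # ['extract', 'transform', 'validate', 'load']
```

Airflow adds what this sketch omits: a schedule, retries, backfills, and operators that submit the Spark jobs to EMR instead of calling local functions.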

Jobs by Popular Location

  • Delhi Jobs
  • Hyderabad Jobs
  • Ahmedabad Jobs
  • Bangalore Jobs
  • Mumbai Jobs
  • Pune Jobs
  • Chennai Jobs
  • Kolkata Jobs