
Data Engineer Jobs in India

  • Fresher
  • 6.5 Lac/Yr
  • Alwarpet Chennai
Work From Home Data Entry Audit Data Entry Automation Data Cleansing Data Entry Software Data Entry Accuracy Data Entry Forms Copy-Paste Data Formatting Data Input Google Sheets Data Quality Control Data Verification Data Entry Speed Keyboard Shortcuts Data Entry Validation Spreadsheet Management Numeric Keypad Typing Speed Microsoft Excel Data Accuracy Data Extraction Data Collection Offline Data Entry Data Entry Operator Data Entry Executive Phone Banking
We are looking for a passionate Data Engineer to join our team in a part-time role. This is an excellent opportunity for freshers who are eager to kickstart their career in the data field. You will help manage and organize data to assist in various projects.

Key Responsibilities:
1. Data Collection: You will gather data from various sources to ensure that we have the right information needed for analysis.
2. Data Cleaning: You will clean and prepare the data, removing any inconsistencies or errors, to provide reliable datasets for our projects.
3. Data Storage: You will be responsible for organizing data in appropriate storage systems, ensuring that it is easily accessible for analysis.
4. Data Processing: You will assist in processing large volumes of data, transforming it into useful formats for further analysis.
5. Collaboration: You will work with team members to understand data requirements and provide the necessary support for their projects.

Required Skills and Expectations:
Candidates should have a basic understanding of data management and relevant tools. Strong attention to detail is essential, as accuracy is crucial in this role. Good communication skills will help you collaborate effectively with team members. You should be comfortable working independently from home and have a willingness to learn and adapt to new technologies. A proactive approach to problem-solving and a keen interest in working with data will set you apart as a valuable team member.
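The collection-cleaning-storage flow described above can be sketched in plain Python. This is a minimal illustration only; field names such as `id` and `name` are hypothetical, and real pipelines would typically use pandas or a similar library:

```python
def clean_records(records):
    """Normalize raw records: strip whitespace from strings, drop rows
    missing the (assumed) required 'id' field, remove exact duplicates."""
    seen = set()
    cleaned = []
    for rec in records:
        # Normalize string values.
        rec = {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()}
        # Drop rows missing the required field.
        if not rec.get("id"):
            continue
        # Skip exact duplicates (after normalization).
        key = tuple(sorted(rec.items()))
        if key in seen:
            continue
        seen.add(key)
        cleaned.append(rec)
    return cleaned

raw = [
    {"id": "1", "name": " Asha "},
    {"id": "1", "name": " Asha "},    # duplicate after stripping
    {"id": "", "name": "missing id"},  # dropped: no id
    {"id": "2", "name": "Ravi"},
]
print(clean_records(raw))
```

The same shape (normalize, validate, deduplicate) carries over whatever storage system the cleaned records land in.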
  • 5 - 7 yrs
  • 12.0 Lac/Yr
  • Chennai
Snow Flake Developer Dbt Dagster SQL Python Git Cicd Pipelines Data Modeling Datawarehouse Architecture Claude Copilot Data Extraction
We are looking for a Senior Data Engineer (Snowflake / dbt / Dagster / AI-assisted development) with 5+ years of experience in Chennai.
- Design and optimize data pipelines from SQL Server to Snowflake
- Work with healthcare data formats, including EDI 835/837, where applicable
- Use AI tools (LLMs, code assistants, automation agents) to improve engineering productivity and quality

Data Engineer Jobs For M.C.A Freshers

SECRET TECHNOLOGIES INDIA VMS GROUP

  • 0 - 4 yrs
  • 40.0 Lac/Yr
  • Pune
Data Management Data Analysis Data Mining Informatica PLSQL SQL Oracle SQL Data Collection
As a Data Engineer, your responsibilities will include collecting and analyzing data to help inform business decisions. You will be responsible for data management, ensuring that data is accurate and up-to-date. This will involve using tools such as Informatica, PL/SQL, SQL, and Oracle SQL to manipulate and query large datasets. Your skills should include a strong understanding of data analysis techniques, such as data mining and statistical analysis.
  • 0 - 1 yrs
  • 8.0 Lac/Yr
  • Female
  • Mall Road Amritsar
Data Integration Data Warehousing SQL Informatica ETL Hadoop Big Data Python
We are looking for a motivated Data Engineer to join our team. This part-time position allows you to work from home and is suitable for individuals with little to no experience. The ideal candidate will help us manage and process data to ensure it meets the needs of the business.

Key Responsibilities:
- Data Collection: Gather data from various sources to prepare for analysis. It's important to ensure the data is accurate and up-to-date.
- Data Cleaning: Clean and organize raw data to make it usable. This involves removing errors and inconsistencies, which is crucial for reliable analysis.
- Data Storage: Help store data in databases or cloud storage systems. Proper organization allows easy access and retrieval of data when needed.
- Collaboration: Work with other team members to understand their data needs. Communication is key to delivering the right data for their projects.
- Support: Assist in monitoring data systems and providing technical support. Being proactive in identifying issues helps keep the data flow smooth.

Required Skills and Expectations:
Candidates should have a basic understanding of data management principles. Familiarity with data cleaning tools and database management systems is a plus. The ability to learn new software quickly and strong attention to detail are essential. Good communication skills are important for working with teammates and understanding project requirements. We encourage fresh graduates and those with relevant qualifications to apply.

  • 3 - 8 yrs
  • Bangalore
Web Scraping Python
A Data Extraction Engineer designs extraction systems, not just scripts. You will build and maintain a next-generation data acquisition platform that treats web scraping as a declarative, specification-driven discipline. Instead of hard-coding XPaths for every site, the Web Scraping Developer defines what data is needed (using schemas, natural-language descriptions, or visual blueprints) and lets intelligent pipelines figure out how to get it.

Key Responsibilities:

Specification-Driven Extraction Engineering
- Design and maintain declarative extraction specifications (using Pydantic models, JSON schemas, or domain-specific languages) that describe exactly which fields to capture, their types, and validation rules.
- Implement pipelines that translate these specifications into executable extraction plans, leveraging both classical (Scrapy, Playwright) and AI-augmented (LLM-based semantic parsing) backends.
- Build reusable specification libraries for recurring data types (product prices, tariff codes, regulatory texts) to accelerate onboarding of new sources.

Autonomous & Self-Healing Systems
- Deploy self-healing spiders that automatically detect website layout changes and repair themselves using Model Context Protocol (MCP) servers (e.g., Scrapy MCP Server, Playwright MCP).
- Integrate semantic extraction (Scrapy-LLM, custom LLM pipelines) to eliminate selector brittleness: spiders rely on field descriptions, not fragile XPaths.
- Orchestrate complex, multi-step browsing workflows with agentic frameworks (BMAD/TEA, AutoGPT-like agents) that reason about page state, adapt to anti-bot measures, and correct their own behaviour in real time.

Platform Thinking & Reusability
- Move beyond one-off scrapers: build a component-based extraction platform where selectors, login handlers, and pagination logic are shared, versioned, and tested.
- Implement monitoring, alerting, and automatic rollback for failed extraction runs.
- Champion ethical crawling by design: rate limiting, robots.txt respect, and GDPR/CCPA compliance are built into the specification layer, not retrofitted.

Collaboration & Continuous Innovation
- Partner with data scientists and domain experts to refine extraction specifications for complex, unstructured domains (e.g., legal texts, tariff classifications).
- Evaluate and pilot emerging tools to push automation coverage beyond 90%.
- Document and evangelise specification-driven best practices across the engineering organisation.

Candidate Profile:

Education and Experience
- Bachelor's degree in Computer Science
- 3+ years of experience in web scraping or data extraction

Skills and Competencies
- Specification-Driven Extraction: experience defining extraction requirements via schemas (Pydantic, JSON Schema) and executing them through both traditional crawlers and LLM-based semantic parsers.
- Self-Healing & Semantic Extraction: hands-on use of Scrapy-LLM, Scrapy MCP Server, or similar systems that decouple field definitions from page structure.
- Agentic Workflows: familiarity with frameworks that give LLMs browser control (Playwright + MCP, BMAD/TEA) to handle complex, non-deterministic crawling tasks.
- Classical Scraping Fundamentals: you still know how to write a Scrapy spider or a Playwright script when needed, but you actively seek to replace that work with reusable, specification-driven components.
- Data Validation & Storage: the ability to define validation rules within specifications and land clean data into SQL/NoSQL databases or data lakes.
- Python proficiency: the focus is on an extraction engineer who happens to use Python.
- HTTP, DOM, XPath, CSS.
- Basic API integration and authentication flows.

Preferred / Nice-to-Have Skills:
- Contributions to open-source scraping or AI-automation projects.
- Experience training or fine-tuning small LLMs for domain-specific extraction.
- Familiarity with data privacy engineering (GDPR, CCPA) baked into specification design.
- Light DevOps: Docker, CI/CD for testing extraction specifications.

Mindset & Approach (Non-Negotiable):
A strong belief that the future of scraping is declarative, not imperative. You'd rather write a schema that says "extract the price" than debug an XPath when a website redesigns. You are looking to shift from code that scrapes to systems that understand extraction.
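The specification-driven idea described above can be sketched with a few lines of plain Python. This is a hypothetical illustration (the listing's real stack uses Pydantic, Scrapy, and LLM backends; the field names here are invented): the spec declares *what* to capture and its types, and a generic validator enforces it regardless of how the raw record was extracted.

```python
from dataclasses import dataclass

@dataclass
class FieldSpec:
    """Declarative description of one field to extract."""
    name: str
    dtype: type
    required: bool = True

# The spec says *what* to capture, not *how* (no XPaths here).
PRODUCT_SPEC = [
    FieldSpec("title", str),
    FieldSpec("price", float),
    FieldSpec("sku", str, required=False),
]

def validate(record, spec):
    """Check an extracted record against the spec, coercing types."""
    out = {}
    for f in spec:
        value = record.get(f.name)
        if value is None:
            if f.required:
                raise ValueError(f"missing required field: {f.name}")
            continue
        out[f.name] = f.dtype(value)  # coerce, e.g. "12.5" -> 12.5
    return out

print(validate({"title": "Widget", "price": "12.5"}, PRODUCT_SPEC))
```

Because the backend (Scrapy spider, Playwright script, or LLM parser) only has to satisfy the spec, it can be swapped or self-healed without touching downstream consumers.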
  • 5 - 11 yrs
  • 25.0 Lac/Yr
  • Bangalore
Apache Kafka Azure Grafana Data Warehousing
Role: Data Engineer 2.0
Location: Remote
Experience: Min. 5 years
Notice Period: Immediate to 15 days, or serving notice period

Key Responsibilities:
- Design and implement manual test strategies for real-time streaming use cases using Azure Service Bus, Event Hubs, Kafka, and Azure Functions.
- Validate Spark Streaming applications, including unbounded data flows, streaming DataFrames, checkpoints, and streaming joins.
- Develop test plans for containerized microservices deployed on Kubernetes, ensuring scalability and fault tolerance.
- Test data ingestion and transformation workflows across open table formats like Delta Lake, Apache Iceberg, and Hudi.

Good to Have:
- Monitoring and troubleshooting system performance using observability stacks such as Prometheus, Grafana, and ELK.
- Functional and performance testing on analytical databases and query engines such as Trino, StarRocks, and ClickHouse.
- Testing and validation of data products designed under data mesh architecture, ensuring domain-oriented data quality and governance.
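One way to validate the streaming aggregations mentioned above is to compute the expected result with a simple reference implementation and compare it against the streaming job's output. A pure-Python tumbling-window count can serve as that test oracle (a rough sketch with invented event data, not Spark code):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Reference oracle: group (timestamp, key) events into fixed
    tumbling windows and count occurrences per (window, key)."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs
        counts[(window_start, key)] += 1
    return dict(counts)

# Invented event stream: (event-time seconds, key).
events = [(0, "a"), (3, "a"), (5, "b"), (12, "a")]
print(tumbling_window_counts(events, 10))
```

A manual test plan would feed the same events through the Spark Streaming job and assert its windowed counts match this oracle, including after a checkpoint restart.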

Looking For Data Engineer

BSRI Solutions Pvt Ltd

  • 3 - 5 yrs
  • 16.0 Lac/Yr
  • Chennai
Python Pyspark Developer Scala SQL Hive Hadoop Google Cloud Platform Kafka Developer Infrastructure AS Code GitHub Agile Methodology ETL
Required Qualifications:
- 3+ years of demonstrated ability with Hive, Python, Spark/Scala, SQL, etc.
- Google Cloud Platform experience: BigQuery, Cloud Storage, Dataproc, Dataflow, Cloud Composer, Cloud SQL, Pub/Sub, Terraform, etc.
- Experience with the Hadoop ecosystem, Kafka, and PCF cloud services
- Familiarity with big data and machine learning tools and platforms
- Experience with BI tools such as Alteryx, DataStage, QlikSense, etc.
- Design data pipelines and data robots; take a vision and bring it to life
- Master data engineer; mentors others; works closely with IT architects to set strategy and design projects
- Provide extensive technical and strategic advice and guidance to key stakeholders around data transformation efforts
- Redesign data flows to prevent recurring data issues
- Strong analytical and problem-solving skills
- Excellent oral and written communication skills, as well as facilitation and presentation skills, with an engaging presentation style
- Ability to work as a global team member, as well as independently, in a changing environment, and to prioritize
- Ability to establish and maintain coordinated and effective working relationships with application implementation teams, IT project teams, business customers, and end users
- Ability to deliver work within deadlines
- Experience with agile/lean methodologies
- Experience working independently and with minimal supervision
- Experience with Test-Driven Development and software craftsmanship
- Experience with GitHub, AccuRev, or other version-control systems
- Experience with PuTTY
- Experience with DataStage
- Strong communication skills
- Ability to illustrate and convey ideas and prototypes effectively with team and partners
- Presence demonstrating confidence, the ability to learn quickly, and the ability to influence and shape ideas

Key Skills Required:
- Python / PySpark / Scala
- SQL & Hive
- Hadoop Ecosystem
- Data Pipeline Design & ETL Development
- Google Cloud Platform (BigQuery, Dataproc, Dataflow, Cloud Storage)
- Kafka / Streaming Data Processing
- Terraform (Infrastructure as Code)
- DataStage or Similar ETL Tools
- Version Control (GitHub or equivalent)
- Agile Methodologies
- Strong Analytical & Problem-Solving Skills
- Stakeholder Collaboration & Communication

Nice to Have:
- Cloud Composer, Cloud SQL, Pub/Sub
- BI Tools (Alteryx, QlikSense)
- Machine Learning Platform Exposure
- Test-Driven Development (TDD)
- Mentoring & Technical Leadership

Looking For Data Engineer

InfiCare Technologies

  • 10 - 15 yrs
  • 22.5 Lac/Yr
  • Delhi
AZURE AWS ETL Data Factory Data Warehousing ETL Tool SQL
Key Responsibilities:
- Design and manage data pipelines to transform and integrate structured and unstructured data.
- Ensure high data quality and performance.
- Support analytics, reporting, and business intelligence needs by preparing reliable data sets and models for stakeholders.
- Collaborate with Analysts, Digital Project Managers, Developers, and business teams to ensure data accessibility and usefulness.
- Enforce standards for data governance, security, and cost-effective operations.

Ideal candidates will thrive in a collaborative, mission-focused environment and excel in ETL/ELT engineering. They should have experience building scalable data solutions using modern data engineering technologies that impact organizational outcomes.

Required Qualifications:
- Strong proficiency in Structured Query Language (SQL) and at least one programming language such as Python or Scala.
- Hands-on experience developing ETL or ELT pipelines.
- Experience with cloud-native data services (e.g., AWS Glue, AWS Redshift, Azure Data Factory, Azure Synapse, Databricks).
- Good understanding of data modeling and data warehousing concepts.

Desired Qualifications:
- Design, build, and optimize scalable ETL or ELT pipelines handling both structured and unstructured data.
- Ingest and integrate data from internal and external sources into data lakes or data warehouses.
- Ensure that processed data is accurate, complete, and secure.

Outcomes include well-documented, automated pipelines that support downstream analytics without bottlenecks or data errors.
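The extract-transform-load cycle described above can be sketched end to end with the standard library's sqlite3 module standing in for a warehouse. Table and column names here are illustrative, not from the listing:

```python
import sqlite3

def run_etl(rows, conn):
    """Extract raw rows, transform (filter + normalize), load into a table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (region TEXT, amount REAL)")
    # Transform: drop invalid rows, normalize region names, coerce amounts.
    transformed = [
        (r["region"].strip().lower(), float(r["amount"]))
        for r in rows
        if r.get("region") and r.get("amount") is not None
    ]
    conn.executemany("INSERT INTO sales VALUES (?, ?)", transformed)
    conn.commit()
    return len(transformed)

conn = sqlite3.connect(":memory:")
raw = [
    {"region": " South ", "amount": "100"},
    {"region": None, "amount": "5"},   # dropped: no region
    {"region": "North", "amount": 40},
]
print(run_etl(raw, conn))
```

Real pipelines add the concerns the listing names (scheduling, documentation, monitoring), but the shape of the work is the same.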
  • 6 - 12 yrs
  • 16.0 Lac/Yr
  • Bangalore
Python GCP Developer
Job Title: Data Engineer
Location: Bangalore
Experience: 6+ years
Notice Period: Immediate to 21 days

MUST-HAVE TECHNICAL SKILLS (skill and expected depth):
- Python for Data Pipelines: independently written ingestion/transformation scripts, including pagination, exception handling, logging, and dataframe-level operations using Pandas, JSON, or GCP SDKs
- DBT (Data Build Tool): authored and executed DBT models and tests using YAML files and Jinja macros; contributed to CI test configs and schedule integration
- GCP (BigQuery, GCS, CloudSQL): hands-on experience with at least two of these tools in pipeline execution, e.g., BigQuery for SQL transformation and GCS for raw/processed layer segregation
- AWS Lambda: integrated serverless functions to automate trigger points like new file uploads, API call chaining, or job completion; used boto3 or GCP Pub/Sub hooks
- Data Quality & Validation: developed or plugged in validation layers for ingestion, such as record-count matching, null/duplicate flagging, and recon table population
- Cloud-Native Modeling: adapted pre-existing logical models to ingestion logic, ensuring correct joins, partitioning strategy, and target-layer conformity (star/snowflake)
- Version Control & Agile: participated in Git branching workflows and sprint-based delivery (JIRA or similar); able to push/pull/test with basic conflict resolution
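The pagination, exception handling, and logging depth expected for the Python skill can be sketched generically. `fetch_page` here is a hypothetical API client passed in by the caller, not a real library function:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingest")

def ingest_all(fetch_page, max_retries=3):
    """Pull every page from a paginated source, retrying transient errors."""
    records, page = [], 1
    while True:
        for attempt in range(max_retries):
            try:
                batch = fetch_page(page)
                break
            except ConnectionError as exc:
                log.warning("page %d attempt %d failed: %s", page, attempt + 1, exc)
        else:  # all retries exhausted
            raise RuntimeError(f"page {page} failed after {max_retries} retries")
        if not batch:  # empty page => no more data
            return records
        records.extend(batch)
        page += 1

# Fake paginated source: two pages of data, then empty.
pages = {1: [{"id": 1}, {"id": 2}], 2: [{"id": 3}]}
print(ingest_all(lambda p: pages.get(p, [])))
```

The retry loop catches only transient errors (here `ConnectionError`) so that genuine bugs still fail fast.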

Python Architect (12-19 Years)

Cynosure Corporate Solutions

  • 12 - 19 yrs
  • Chennai
Python Architecture Advanced Python Development System Design Microservices Data Engineering AIML Platforms API Design Performance Optimization Code Governance
We are looking for a highly experienced Python Architect to design and lead large-scale, enterprise-grade Python solutions for AI, data, and analytics platforms. The role requires deep hands-on expertise in Python across architecture, development, and production systems, supporting end-to-end AI and data-driven solutions.

Key Responsibilities:
- Architect scalable, high-performance systems using Python as the core technology
- Define technical architecture for AI, data engineering, and analytics platforms
- Lead the design and development of microservices, APIs, and backend frameworks in Python
- Ensure production-ready implementations with a strong focus on performance, security, and reliability
- Collaborate with data science, ML, DevOps, and product teams to operationalize AI solutions
- Establish coding standards, best practices, and architectural guidelines
- Review code, mentor senior engineers, and provide technical leadership
- Drive innovation and continuous improvement in Python-based system design

Required Skills & Qualifications:
- 12-18 years of experience working fully and extensively in Python
- Strong expertise in Python system architecture and large-scale backend design
- Experience supporting AI/ML, data engineering, or analytics platforms
- Solid understanding of microservices, REST APIs, and distributed systems
- Hands-on experience with cloud platforms (AWS/Azure/GCP)
- Strong problem-solving skills and the ability to lead complex technical initiatives
- Excellent communication and stakeholder collaboration skills

Data Scientist

The Best Services & Enterprise's

  • 6 - 10 yrs
  • 13.0 Lac/Yr
  • Mumbai
Data Scientist Python Media Mix Models MTA MMM Data Engineering Python Data Engineer Data Management
Job Title: Data Scientist
Experience: 7+ years
Location: USA/Canada (remote) or offshore (remote)
Working Hours: USA EST

Job Description:
We are looking for experienced Data Scientists with proven expertise in building Media Mix Models (MMM) and Multi-Touch Attribution (MTA) models for a long-term engagement with Univision. The ideal candidates should have a strong background in AdTech, data science, and analytics, with the ability to derive actionable insights from large datasets in the media and OTT domain.

Key Responsibilities:
- Develop advanced MMM and MTA models to optimize marketing and advertising strategies
- Analyze large volumes of structured and unstructured data to uncover trends, correlations, and actionable insights
- Build and deploy machine learning models and predictive algorithms to solve complex business problems
- Gather data from diverse sources, and clean and transform it for analysis
- Apply statistical techniques to validate models and ensure accuracy and reliability
- Automate analytical workflows and repetitive tasks using AI tools and scripting languages
- Create compelling visualizations, dashboards, and reports to communicate insights across teams
- Collaborate with cross-functional teams including AdSales, Data Engineering, and Analytics
- Stay updated on the latest developments in AI, machine learning, and media analytics

Required Skills & Experience:
- Specific MMM experience: Bayesian methods, causal inference, incrementality testing
- MTA expertise: attribution modeling, customer journey analysis, touchpoint optimization
- Tools: Python (scikit-learn, statsmodels), R (prophet, CausalImpact), SQL, Tableau/Power BI
- 6+ years of experience in data science, analytics, or machine learning roles
- At least 3 years of experience in AdTech or AdSales systems
- Hands-on experience developing MMM and MTA models
- Strong understanding of OTT, digital media, and advertising ecosystems
- Proficiency in programming languages like Python, R, or SQL for data manipulation and modeling
- Experience working with large datasets, data pipelines, and BI/reporting tools
- Familiarity with statistical methods, experiment design, and model evaluation metrics
- Excellent problem-solving, communication, and stakeholder management skills
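At its simplest, the media mix idea above regresses an outcome (sales) on channel spend. As a toy illustration, here is a one-channel ordinary least squares fit in pure Python; the numbers are invented, and production MMMs use Bayesian methods with adstock and saturation terms, as the listing notes:

```python
def ols_fit(x, y):
    """Fit y = a + b*x by ordinary least squares (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Slope: covariance(x, y) / variance(x).
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx  # intercept passes through the means
    return a, b

# Invented weekly TV spend vs. sales, constructed so sales = 50 + 2*spend.
spend = [10, 20, 30, 40]
sales = [70, 90, 110, 130]
a, b = ols_fit(spend, sales)
print(a, b)  # -> 50.0 2.0
```

The fitted slope is the channel's (naive) incremental return per unit of spend; the whole difficulty of real MMM lies in making that estimate causal rather than merely correlational.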
  • Fresher
  • 3.5 Lac/Yr
  • Coimbatore
Work From Home Home Based Work Data Engineer
We are looking to fill 999 Data Engineer posts in Coimbatore, with deep knowledge of data engineering. Required educational qualification: 10th or 12th pass.
  • 5 - 10 yrs
  • Hyderabad
Python API Spark
Data Engineer - Hyderabad
Experience: 4-8 years
Location: Bangalore
Employment Type: Full-time

About the Role:
We're seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and integrations across multiple systems. You'll work with Python, GCP (BigQuery), Spark, and API integrations, ensuring data quality and seamless workflows. Experience with Ascend.io is a plus.

Key Responsibilities:
- Design and develop ETL/ELT pipelines and automated workflows.
- Integrate data from APIs, Oracle EBS, and cloud platforms.
- Leverage Google Cloud Platform (GCP) and BigQuery for data analytics.
- Utilize Apache Spark or similar frameworks for large-scale data processing.
- Ensure data accuracy, consistency, and security across systems.
- Collaborate with business and data teams to deliver reliable data solutions.
- Monitor and troubleshoot pipeline performance.

Requirements:
- Strong Python and workflow orchestration experience (Airflow, Prefect, etc.).
- Hands-on experience with GCP / BigQuery and big data frameworks (Spark).
- Experience with API integrations (REST, SOAP, GraphQL).
- Understanding of ETL optimization, CI/CD, and Agile workflows.
- Excellent problem-solving and communication skills.

Nice to Have:
- Experience with Ascend.io.
- Knowledge of SQL/NoSQL, Docker, Kubernetes.
- Exposure to machine learning pipelines.

Why Join Us:
- Work on cutting-edge data engineering and cloud-based integration projects.
- Collaborative, innovative team environment.
- Competitive compensation and strong career growth.
  • 5 - 10 yrs
  • 40.0 Lac/Yr
  • Hyderabad
AWS Python AWS Data Engineer Terraform ETL Tool CI CD
About the Role:
We are looking for a highly skilled and experienced Senior Data Engineer to join our team in Hyderabad. The ideal candidate will bring strong technical expertise in building scalable data platforms and pipelines using modern technologies such as Python, Scala, AWS, Redshift, Terraform, Jenkins, and Docker. This role demands a hands-on professional who thrives in a fast-paced, collaborative environment and is eager to solve complex data problems.

Key Responsibilities:
- Design, build, and optimize robust, scalable, and secure data pipelines and platform components.
- Collaborate with data scientists, analysts, and engineering teams to ensure seamless data flow, integration, and availability across systems.
- Develop infrastructure as code using Terraform to automate provisioning and environment management.
- Manage containerized services and workflows using Docker.
- Set up, manage, and optimize CI/CD pipelines using Jenkins for continuous integration and deployment.
- Optimize performance, scalability, and reliability of large-scale data systems on AWS.
- Write clean, modular, and efficient code in Python and Scala to support ETL, data transformation, and processing tasks.
- Support data architecture planning and participate in technical reviews and design sessions.

Must-Have Skills:
- Strong hands-on experience with Python, Scala, SQL, and Amazon Redshift.
- Proven expertise in the AWS cloud services ecosystem (EC2, S3, Redshift, Glue, Lambda, etc.).
- Experience implementing Infrastructure as Code (IaC) with Terraform.
- Proficiency in managing and deploying Docker containers in development and production environments.
- Hands-on experience with CI/CD pipelines using Jenkins.
- Strong understanding of data architecture, ETL pipelines, and distributed data processing systems.
- Excellent problem-solving skills and the ability to mentor junior engineers.

Nice-to-Have:
- Experience working in regulated domains like healthcare or finance.
- Exposure to Apache Airflow, Spark, or Databricks.
- Familiarity with data quality frameworks and observability tools.
Glue Lambda ETL
- 3+ years of AWS data engineering: Glue, Step Functions, Lambda, S3, DynamoDB, EC2
- Strong Python (boto3) scripting for automation
- Terraform or CloudFormation expertise
- Hands-on experience integrating RAG workflows or deploying LLM applications
- Solid SQL and NoSQL data-modeling skills
- Excellent written and verbal communication in client-facing contexts
  • 8 - 10 yrs
  • Pune
Kafka Scala Spark Hadoop Airflow Data Lakes Kappa Kappa ++ Architectures RDBMS NoSQL Cassandra Redis Oracle
Sr. Big Data Engineer
Location: Pune
Experience: 10+ years
Mode: Hybrid

Role Overview:
We are seeking a talented Sr. Big Data Engineer to design, develop, and support a highly scalable, distributed SaaS-based security risk prioritization product. You will lead the design and evolution of our data platform and pipelines, providing technical leadership to a team of engineers and architects.

Key Responsibilities:
- Provide technical leadership on data platform design, roadmaps, and architecture.
- Design and implement scalable architecture for Big Data and microservices environments.
- Drive technology explorations, leveraging knowledge of internal and industry prior art.
- Ensure quality architecture and design of systems, focusing on performance, scalability, and security.
- Mentor and provide technical guidance to other engineers.

Required Skills & Technologies:
- Mandatory: Kafka, Scala, Spark.
- Big Data & data streaming: Spark, Kafka, Hadoop, Presto, Airflow, data lakes, and lambda, kappa, and kappa++ architectures with Flink data streaming.
- Databases & caching: RDBMS, NoSQL, Oracle, Cassandra, Redis.
- Search solutions: Solr, Elastic.
- ML & automation: experience with ML model engineering and related deployment, scripting, and automation.
- Architecture: in-depth experience with messaging queues and caching components.
- Other skills: strong troubleshooting and performance benchmarking skills for Big Data technologies.

Qualifications:
- Bachelor's degree in Computer Science or equivalent.
- 8+ years of total experience, with 6+ years relevant.
- 2+ years designing Big Data solutions with Spark.
- 3+ years with Kafka and performance testing for large infrastructure.

Data Engineer

Guiding Consulting

  • 10 - 12 yrs
  • Bangalore
SQL Python Spark Data Integration ETL AWS ETL Tool Data Warehousing Azure Server
Job Description:
Years of Experience: 10+ years
Mode: 3 days a week on-site
Location: Bangalore
Work Type: Permanent

Key Responsibilities:
Design and Development:
- Architect, implement, and optimize scalable data solutions.
- Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data.
Collaboration:
- Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights.
- Partner with cloud architects and DevOps teams to ensure robust, secure, and cost-effective data platform deployments.
Data Management:
- Manage and maintain data lakes, data warehouses, and real-time analytics systems.
- Ensure high data quality, integrity, and security across the organization.
Performance Optimization:
- Monitor and enhance system performance, troubleshoot issues, and implement optimizations as needed.
- Leverage Microsoft Fabric's advanced analytics and AI capabilities for innovative data solutions.
Best Practices & Leadership:
- Lead and mentor junior engineers to foster a culture of technical excellence.
- Stay updated on industry trends and best practices, especially in the Microsoft ecosystem.

Required:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 10+ years of experience in data engineering, with a proven track record of working on large-scale data platforms.
- Expertise in Microsoft Fabric and its components (e.g., Synapse, Data Factory, Azure Data Lake, Power BI).
- Strong proficiency in SQL, Python, and Spark.
- Experience with cloud platforms, particularly Microsoft Azure.
- Solid understanding of data modeling, data warehousing, and ETL/ELT best practices.
- Excellent problem-solving, communication, team management, and project management skills.

Preferred:
- Familiarity with other cloud platforms (e.g., AWS, GCP).
- Experience with machine learning pipelines or integrating AI into data workflows.
- Certifications in Microsoft Azure or related technologies.

GEN AI - AIML

Welkin Soft Tech Pvt. Ltd.

  • 8 - 12 yrs
  • 18.0 Lac/Yr
  • Bangalore
GEN AI AI ML Python LLM Optimization AI Engineer Integration Data Engineer Analysis SQL Cloud Computing Data Base Natural Language Processing
Job Opening: Generative AI & LLMs / Distinguished Gen AI Engineer
Location: Remote
Experience: 7+ years
To apply, send your profile to sandhya@welkinsofttech.com or hr@welkinsofttech.com, or connect with us here.

Key Responsibilities:
- LLM Development: Design, fine-tune, and implement large language models (e.g., GPT, BERT, T5) for applications like personalized learning, content generation, and semantic search.
- Generative AI Solutions: Drive innovation with Gen AI, developing tools like adaptive learning paths, resume builders, and AI-written job descriptions.
- Machine Learning: Create predictive models and recommendation engines that align user profiles to skills and job opportunities.
- Token Optimization: Work with OpenAI and other services to manage token efficiency and usage costs.
- AI Integration: Collaborate with product and engineering teams to integrate AI features seamlessly into the Elefy platform.
- Data Engineering: Build and maintain robust data pipelines using Python, Node.js, and MongoDB.
- Data Analysis: Analyze large datasets to surface actionable insights for user engagement and platform growth.
- Visualization & Reporting: Build dashboards using Tableau, Power BI, or Matplotlib to communicate insights to stakeholders.
- Documentation: Ensure clear and comprehensive documentation for models, pipelines, and workflows.

Who We're Looking For:
Experience:
- 8+ years in data science or AI, with 3+ years hands-on with LLMs or Gen AI in production settings.
- Proven track record of delivering ML models in scalable, real-world applications.
Skills:
- Languages: Python (must), R, SQL
- Frameworks: PyTorch, TensorFlow, Hugging Face, Scikit-learn
- Prompt engineering: few-shot learning, dynamic prompting, role play, chain-of-thought (nice to have)
- Cloud: Azure (preferred), AWS, or GCP
- Database: MongoDB or similar NoSQL/SQL systems
Knowledge & Tools:
- Deep NLP & LLM expertise (e.g., GPT, BERT, T5)
- Containerization, APIs, CI/CD, and Azure-native cloud tools
- Strong visual storytelling via Tableau, Power BI, or Python-based plots
- Agile and cross-functional collaboration mindset
Bonus Points:
- Experience with ethical AI, bias mitigation, and explainability
- Familiarity with skill-based learning platforms or EdTech ecosystems

Join us at Elefy and be part of a team that's reshaping the future of learning with AI.

HP and Dell Server and Storage Engineer

Creative Infotech Solution Pvt. Ltd.

  • 1 - 5 yrs
  • Valsad
Data Center Engineer HP DELL HP Storage EMC Storage
Job Summary:
We are seeking a skilled and experienced HP and Dell Server and Storage Engineer to manage, support, and maintain enterprise-grade servers and storage infrastructure. The ideal candidate will have hands-on experience with HPE ProLiant / DL / blade servers, Dell PowerEdge, Dell EMC, and MSA or other enterprise storage solutions.

Key Responsibilities:
- Install, configure, and maintain HP and Dell servers (rack and blade systems).
- Deploy and manage enterprise storage systems (Dell EMC, HPE MSA).
- Perform RAID configuration, firmware upgrades, and hardware diagnostics.
- Manage storage provisioning, zoning, and LUN mapping.
- Perform server hardware troubleshooting and replacement (HDDs, memory, RAID cards, etc.).
- Monitor hardware health using tools like HPE iLO, Dell iDRAC, and OpenManage.
- Collaborate with network and application teams on infrastructure deployments.
- Maintain documentation for inventory, configurations, and standard operating procedures.
- Ensure high availability, performance, and data security for all server/storage components.
- Provide support for server and storage migrations and disaster recovery planning.

Key Skills & Qualifications:
- 3+ years of hands-on experience with HP and Dell enterprise server platforms.
- Expertise in enterprise storage systems: Dell EMC, HPE MSA, SAN/NAS technologies.
- Strong knowledge of RAID levels and disk configurations.
- Familiarity with data center best practices: racking, cabling, and airflow management.
- Proficiency with iLO, iDRAC, HPE OneView, and Dell OpenManage tools.
- Understanding of VMware, Hyper-V, or other virtualization platforms is a plus.

Preferred Qualifications:
- Experience working in enterprise or data center environments.
- Familiarity with Linux and Windows Server environments.
- Basic networking knowledge (switching, VLANs, IP addressing).

Education:
Bachelor's degree in Computer Science, IT, Electronics, or a related field, or equivalent technical certifications and relevant experience.
  • 4 - 6 yrs
  • 18.0 Lac/Yr
  • Pune
SQL ETL Azure Pyspark Databricks Python
Responsibilities:
- Design, develop, and deploy data solutions on Azure, leveraging SQL Azure, Azure Data Factory, and Databricks.
- Build and maintain scalable data pipelines to ingest, transform, and load data from various sources into Azure data repositories.
- Implement data security and compliance measures to safeguard sensitive information.
- Collaborate with data scientists and analysts to support their data requirements and enable advanced analytics and machine learning initiatives.
- Optimize and tune data workflows for performance and efficiency.
- Troubleshoot data-related issues and provide timely resolution.
- Stay updated with the latest Azure data services and technologies, and recommend best practices for data engineering.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a data engineer, preferably in a cloud environment.
- Strong proficiency in SQL Azure for database design, querying, and optimization.
- Hands-on experience with Azure Data Factory for ETL/ELT workflows.
- Familiarity with Azure Databricks for big data processing and analytics.
- Experience with other Azure data services such as Azure Synapse Analytics, Azure Cosmos DB, and Azure Data Lake Storage is a plus.
- Solid understanding of data warehousing concepts, data modeling, and dimensional modeling.
- Excellent problem-solving and communication skills.

Data Engineer

United Technology

  • 1 - 3 yrs
  • 4.0 Lac/Yr
  • Chennai
Data Integration Data Engineer Hadoop ETL SQL Informatica Apache AWS Big Data Python
We are looking for a Data Engineer with 1 to 3 years of experience in Chennai. Immediate joiners preferred.

Looking For Data Engineer

Cynosure Corporate Solutions

  • 8 - 14 yrs
  • Chennai
Data Engineer Python AWS
Responsibilities:
- Design and implement data pipelines to collect, clean, and transform data from various sources.
- Build and maintain data storage and processing systems, such as databases, data warehouses, and data lakes.
- Ensure data is properly secured and protected.
- Develop and implement data governance policies and procedures.
- Collaborate with business analysts, data analysts, data scientists, and other stakeholders to understand their data needs and ensure they have access to the data they need.
- Share knowledge with the wider business, working with other BAs and technology teams to make sure processes and ways of working are documented.
- Collaborate with Big Data Solution Architects to design, prototype, implement, and optimize data ingestion pipelines so that data is shared effectively across various business systems.
- Ensure the design, code, and procedural aspects of the solution are production ready in terms of operational, security, and compliance standards.
- Participate in day-to-day project and agile meetings and provide technical support for faster resolution of issues.
- Communicate clearly and concisely to the business on the status of items and blockers.
- Have end-to-end knowledge of the data landscape within the company.

Skills & Experience:
- 10+ years of design and development experience with big data technologies such as Azure, AWS, or GCP; Azure and Databricks preferred, with experience in Azure DevOps.
- Experience with data visualization technology over the data lake (DL), such as Power BI.
- Proficient in Python, PySpark, and SQL.
- Proficient in querying and manipulating data from various databases (relational and big data).
- Experience writing effective and maintainable unit and integration tests for ingestion pipelines.
- Experience using static analysis and code quality tools and building CI/CD pipelines.
- Excellent communication, problem-solving, and leadership skills, and the ability to work well in a fast-paced, dynamic environment.

Opening For Data Engineer

Cynosure Corporate Solutions

  • 3 - 9 yrs
  • Delhi
Apache Python Hadoop SCALA
Job Description:
We are looking for Data Engineers to join our team. You will use various methods to transform raw data into useful data systems; for example, you'll create algorithms and conduct statistical analysis. Overall, you'll strive for efficiency by aligning data systems with business goals. To succeed in this position, you should have strong analytical skills and the ability to combine data from different sources. Data engineer skills also include familiarity with several programming languages and knowledge of machine learning methods.

Job Requirements:
- Participate in the customer's system design meetings and collect the functional/technical requirements.
- Build data pipelines for consumption by the data science team.
- Skilled in the ETL process and tools.
- Clear understanding of, and experience with, Python and PySpark (or Spark and Scala), along with Hive, Airflow, Impala, Hadoop, and RDBMS architecture.
- Experience writing Python programs and SQL queries.
- Experience in SQL query tuning.
- Experienced in shell scripting (Unix/Linux).
- Build and maintain data pipelines in Spark/PySpark with SQL and Python, or Scala.
- Knowledge of cloud technologies (Azure/AWS/GCP, etc.) is an advantage.
- Good to have: knowledge of Kubernetes, CI/CD concepts, and Apache Kafka.
- Suggest and implement best practices in data integration.
- Guide the QA team in defining system integration tests as needed.
- Split the planned deliverables into tasks and assign them to the team.
- Maintain and deploy the ETL code, following the Agile methodology.
- Work on optimization wherever applicable.
- Good oral, written, and presentation skills.

Preferred Qualifications:
- Degree in Computer Science, IT, or a similar field; a Master's is a plus.
- Hands-on experience with Python and PySpark, or hands-on experience with Spark and Scala.
- Great numerical and analytical skills.
- Working knowledge of cloud platforms such as MS Azure, AWS, etc.
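The core ETL skill this listing asks for (extract, transform, load, then query with SQL) can be sketched end to end with nothing but the standard library; SQLite stands in for the warehouse, and all table and column names are illustrative:

```python
import sqlite3

# Extract: rows as they might arrive from a source system (illustrative data).
raw_rows = [
    ("2024-01-01", "chennai", "120.50"),
    ("2024-01-01", "pune", "80.00"),
    ("2024-01-02", "chennai", "95.25"),
]

# Transform: cast amounts to float and normalize the city code.
rows = [(day, city.upper(), float(amount)) for day, city, amount in raw_rows]

# Load: write into a SQLite table (a stand-in for the real warehouse).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (day TEXT, city TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)

# A downstream SQL query over the loaded data.
total_by_city = conn.execute(
    "SELECT city, SUM(amount) FROM sales GROUP BY city ORDER BY city"
).fetchall()
print(total_by_city)   # → [('CHENNAI', 215.75), ('PUNE', 80.0)]
```

In the Spark/Hive stack named above, the same three stages map to reading a source, DataFrame transformations, and writing to a managed table, with Airflow scheduling the runs.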

Hiring For Senior Data Engineer

The Best Services & Enterprise's

  • 4 - 8 yrs
  • 9.0 Lac/Yr
  • Rajkot
Senior Data Engineer Senior Data Analyst Senior Data Associate
Job Summary:
We are looking for an experienced Senior Data Engineer to design, develop, and maintain robust data pipelines and ETL workflows. The role involves working on Azure cloud platforms to support analytics and reporting initiatives.

Key Responsibilities:
- Develop and maintain ETL pipelines using Azure Data Factory and Azure Databricks.
- Implement data integration and transformation workflows for analytics and reporting.
- Collaborate with data scientists, analysts, and stakeholders to deliver high-quality data solutions.
- Write efficient, reusable code using Python and SQL.
- Ensure data quality, governance, and performance optimization across systems.

Skills & Qualifications:
- 5+ years of experience in data engineering.
- Hands-on experience with Azure Data Factory, Azure Databricks, Python, and SQL.
- Strong understanding of data engineering best practices and cloud architectures.
- Experience with data modeling, data warehousing, and analytics solutions is a plus.
  • Fresher
  • 6.5 Lac/Yr
  • Basavanagudi Bangalore
Data Verification Google Sheets Keyboard Shortcuts Numeric Keypad Spreadsheet Management Data Input Data Quality Control Data Formatting Data Accuracy Data Extraction Data Cleansing Data Entry Software Data Collection Microsoft Excel Data Visualization Data Quality Data Transformation Big Data Technologies Programming Data Warehousing
We are looking for a motivated Data Processing Engineer to join our team. This part-time role is perfect for freshers who are eager to learn and grow in the field of data management. You will work from home, contributing to our data processing needs.
  • Fresher
  • 5.5 Lac/Yr
  • Begumpet Secunderabad
Data Cleansing Data Entry Accuracy Data Verification Data Quality Control Google Sheets Numeric Keypad Spreadsheet Management Microsoft Excel Data Extraction Data Collection Data Formatting Data Accuracy
We are seeking a Data Processing Engineer, ideal for freshers looking to start their careers in data handling. This part-time role offers the flexibility to work from home while engaging with data processing tasks.

In this position, you will be responsible for:
- **Data Collection**: Gathering data from various sources to ensure accurate and up-to-date information is available for processing.
- **Data Cleansing**: Checking for errors or inconsistencies in the data and correcting them to improve quality.
- **Data Analysis**: Analyzing the processed data to extract meaningful insights that can help in decision-making.
- **Reporting**: Creating simple reports based on the analysis that summarize findings and present them clearly to stakeholders.

For this role, we expect candidates to possess essential skills, including:
- **Attention to Detail**: You should have a sharp eye for detail, as accuracy in data processing is crucial.
- **Basic Computer Skills**: Familiarity with computers and basic software applications like spreadsheets is necessary.
- **Problem-Solving Skills**: You should be able to identify problems in data and think critically about solutions.
- **Communication Skills**: Good written communication skills are important for creating reports and sharing insights.

Overall, this role offers an exciting opportunity for beginners passionate about data to develop their skills in a supportive work-from-home environment.
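The data cleansing step described above (spotting inconsistencies and correcting them) can be sketched in a few lines of Python; the input data, field names, and quality rule here are invented for illustration only:

```python
import csv
import io

# Illustrative input showing typical inconsistencies: stray whitespace,
# mixed case, and a row with a missing required value.
raw = """name,city
 alice ,Chennai
BOB,chennai
,pune
"""

cleaned = []
for row in csv.DictReader(io.StringIO(raw)):
    name = row["name"].strip().title()   # correct case and whitespace
    city = row["city"].strip().title()
    if not name:                         # drop rows failing the quality check
        continue
    cleaned.append({"name": name, "city": city})

print(cleaned)
# → [{'name': 'Alice', 'city': 'Chennai'}, {'name': 'Bob', 'city': 'Chennai'}]
```

The same trim / normalize / validate loop scales from a spreadsheet export to much larger files; only the source of the rows changes.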
  • Fresher
  • 7.0 Lac/Yr
  • Chilkana Road Saharanpur
Data Visualization Data Quality Data Transformation ETL Processes ETL Tools Programming Data Warehousing Database Management Data Integration Data Analysis Data Modeling Big Data Technologies Statistical Analysis Data Mining Hadoop Machine Learning Data Cleansing Scripting SQL Python Data Sheets Data Migration Data Management
We are looking for a detail-oriented Data Processing Engineer to join our team. This part-time position is ideal for freshers who have passed their 10th grade. The role offers the opportunity to work from home, making it convenient and flexible.

Key Responsibilities:
- **Data Entry:** Accurately enter data into databases and spreadsheets, ensuring all information is correct and up-to-date.
- **Data Cleaning:** Identify and correct errors in datasets, organizing the information to improve clarity and accessibility.
- **Data Analysis:** Assist in analyzing data trends and patterns, providing insights that may help in decision-making processes.
- **Reporting:** Create simple reports that summarize findings and present the data in a clear and understandable format for the team.

Required Skills and Expectations:
Candidates should have strong attention to detail and be able to work independently. Basic computer skills, including familiarity with data entry software and spreadsheets, are essential. Strong time management abilities will help you complete tasks efficiently. Good communication skills are important, as you will need to report your findings to the team regularly. A willingness to learn and adapt is key, as you may encounter new tools and methods in data processing. Being proactive and responsive will help you succeed in this dynamic part-time role.

Looking For Senior Data Engineer

BSRI Solutions Pvt Ltd

  • 5 - 9 yrs
  • 30.0 Lac/Yr
  • Chennai
Databricks Pyspark Java Web Services SQL Python
Job Title: Senior Data Engineer
Experience: 5+ years
Location: Chennai (Hybrid)
Budget: Up to 30 LPA

Job Summary:
We are looking for a highly skilled developer with strong hands-on experience in Databricks, PySpark, Python/Java, Web Services, and SQL. The ideal candidate will work closely with architects, tech leads, and business teams to design, build, optimize, and support scalable data-driven solutions. This is a long-term role with a strong focus on performance, cost optimization, and production support.

Key Responsibilities / Essential Job Functions:
- Understand the end-to-end system architecture and support operations through monitoring and dashboards.
- Collaborate with Architects and Tech Leads on solution design and implementation.
- Continuously monitor and optimize system cost and performance.
- Work closely with Business Analysts on integration requirements.
- Coordinate with TCOE teams to define test scenarios and support testing activities.
- Troubleshoot performance and functional issues across environments.
- Document key technical decisions and maintain detailed design documentation.
- Handle fast-paced project deliveries and support production issues as required.
- Quickly learn and adapt to new technologies and frameworks.
- Ensure prompt and effective communication with stakeholders.

Other Responsibilities:
- Create, document, and maintain project artifacts.
- Follow industry standards, methodologies, and best practices.
- Safeguard company assets and sensitive information (PI).
- Report any suspected security or compliance issues promptly.
- Adhere to company compliance and governance policies.
- Maintain focus on customer service, efficiency, quality, and business growth.
- Collaborate effectively with cross-functional teams.
- Perform other duties as assigned.

Minimum Qualifications & Job Requirements:
- Minimum 5+ years of IT development experience.
- Strong analytical and problem-solving skills.
- Proven experience in solution design, code reviews, and mentoring junior engineers.
- Strong backend development experience on data-driven projects.
- Excellent SQL and database skills.
- Strong team player with good communication skills.

Mandatory Technical Skills:
- Python / PySpark
- SQL / PL-SQL
- Databricks
- Java or C#

Preferred / Good-to-Have Skills:
- Azure Cloud
- Kafka
- Node.js
- Azure Data Factory