
Data Engineer Job Vacancies in Bangalore

  • Fresher
  • 6.5 Lac/Yr
  • Basavanagudi Bangalore
Data Verification Google Sheets Keyboard Shortcuts Numeric Keypad Spreadsheet Management Data Input Data Quality Control Data Formatting Data Accuracy Data Extraction Data Cleansing Data Entry Software Data Collection Microsoft Excel Data Visualization Data Quality Data Transformation Big Data Technologies Programming Data Warehousing
We are looking for a motivated Data Processing Engineer to join our team. This part-time role is perfect for freshers who are eager to learn and grow in the field of data management. You will work from home, contributing to our data processing needs.
  • 3 - 8 yrs
  • Bangalore
Web Scraping Python
A Data Extraction Engineer designs extraction systems, not just scripts. They build and maintain a next-generation data acquisition platform that treats web scraping as a declarative, specification-driven discipline. Instead of hard-coding XPaths for every site, the Web Scraping Developer defines what data is needed (using schemas, natural-language descriptions, or visual blueprints) and lets intelligent pipelines figure out how to get it.

Key Responsibilities:

Specification-Driven Extraction Engineering
- Design and maintain declarative extraction specifications using Pydantic models, JSON schemas, or domain-specific languages that describe exactly which fields to capture, their types, and validation rules.
- Implement pipelines that translate these specifications into executable extraction plans, leveraging both classical (Scrapy, Playwright) and AI-augmented (LLM-based semantic parsing) backends.
- Build reusable specification libraries for recurring data types (product prices, tariff codes, regulatory texts) to accelerate onboarding of new sources.

Autonomous & Self-Healing Systems
- Deploy self-healing spiders that automatically detect website layout changes and repair themselves using Model Context Protocol (MCP) servers (e.g., Scrapy MCP Server, Playwright MCP).
- Integrate semantic extraction (Scrapy-LLM, custom LLM pipelines) to eliminate selector brittleness: spiders rely on field descriptions, not fragile XPaths.
- Orchestrate complex, multi-step browsing workflows with agentic frameworks (BMAD/TEA, AutoGPT-like agents) that reason about page state, adapt to anti-bot measures, and correct their own behaviour in real time.

Platform Thinking & Reusability
- Move beyond one-off scrapers: build a component-based extraction platform where selectors, login handlers, and pagination logic are shared, versioned, and tested.
- Implement monitoring, alerting, and automatic rollback for failed extraction runs.
- Champion ethical crawling by design: rate limiting, robots.txt respect, and compliance with GDPR/CCPA are built into the specification layer, not retrofitted.

Collaboration & Continuous Innovation
- Partner with data scientists and domain experts to refine extraction specifications for complex, unstructured domains (e.g., legal texts, tariff classifications).
- Evaluate and pilot emerging tools to push automation coverage beyond 90%.
- Document and evangelise specification-driven best practices across the engineering organisation.

Candidate Profile:

Education and Experience
- Bachelor's degree in Computer Science
- 3+ years of experience in web scraping or data extraction

Skills and Competencies
- Specification-Driven Extraction: experience defining extraction requirements via schemas (Pydantic, JSON Schema) and executing them through both traditional crawlers and LLM-based semantic parsers.
- Self-Healing & Semantic Extraction: hands-on use of Scrapy-LLM, Scrapy MCP Server, or similar systems that decouple field definitions from page structure.
- Agentic Workflows: familiarity with frameworks that give LLMs browser control (Playwright + MCP, BMAD/TEA) to handle complex, non-deterministic crawling tasks.
- Classical Scraping Fundamentals: you still know how to write a Scrapy spider or a Playwright script when needed, but you actively seek to replace that work with reusable, specification-driven components.
- Data Validation & Storage: ability to define validation rules within specifications and land clean data into SQL/NoSQL databases or data lakes.
- Python proficiency: the focus is on an extraction engineer who happens to use Python.
- HTTP, DOM, XPath, CSS.
- Basic API integration and authentication flows.

Preferred / Nice-to-Have Skills:
- Contributions to open-source scraping or AI-automation projects.
- Experience training or fine-tuning small LLMs for domain-specific extraction.
- Familiarity with data privacy engineering (GDPR, CCPA) baked into specification design.
- DevOps-light: Docker, CI/CD for testing extraction specifications.

Mindset & Approach (Non-Negotiable):
- Strong belief that the future of scraping is declarative, not imperative. You'd rather write a schema that says "extract the price" than debug an XPath when a website redesigns.
- Looking to shift from code that scrapes to systems that understand extraction.
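As a concrete illustration of the specification-driven idea above, here is a minimal, stdlib-only sketch. It uses a Python dataclass in place of the Pydantic models the listing mentions, and a plain dict stands in for whatever a Scrapy, Playwright, or LLM backend actually scraped; the names (`ProductSpec`, `run_spec`) are hypothetical.

```python
from dataclasses import dataclass, fields

# Hypothetical declarative spec: which fields to capture and their types.
@dataclass
class ProductSpec:
    name: str
    price: float
    currency: str

def run_spec(spec_cls, raw: dict):
    """Translate a spec into an extraction plan: pick each declared field
    out of the raw record, coerce it to the declared type, and fail loudly
    if a required field is missing."""
    kwargs = {}
    for f in fields(spec_cls):
        if f.name not in raw:
            raise ValueError(f"missing required field: {f.name}")
        value = raw[f.name]
        kwargs[f.name] = value if isinstance(value, f.type) else f.type(value)
    return spec_cls(**kwargs)

# The raw dict stands in for a scraped page; note the spec, not an XPath,
# decides what "clean" looks like.
item = run_spec(ProductSpec, {"name": "Widget", "price": "19.99", "currency": "USD"})
print(item)
```

The point of the sketch is the division of labour: the spec declares *what* to extract and validate, while the backend that produced `raw` is free to change without touching the spec.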
  • 5 - 11 yrs
  • 25.0 Lac/Yr
  • Bangalore
Apache Kafka Azure Grafana Data Warehousing
Role: Data Engineer 2.0
Location: Remote
Experience: Min. 5 years
Notice Period: Immediate to 15 days, or serving notice period

Key Responsibilities:
- Design and implement manual test strategies for real-time streaming use cases using Azure Service Bus, Event Hubs, Kafka, and Azure Functions.
- Validate Spark Streaming applications, including unbounded data flows, streaming DataFrames, checkpoints, and streaming joins.
- Develop test plans for containerized microservices deployed on Kubernetes, ensuring scalability and fault tolerance.
- Test data ingestion and transformation workflows across open table formats like Delta Lake, Apache Iceberg, and Hudi.

Good to Have:
- Monitoring and troubleshooting system performance using observability stacks such as Prometheus, Grafana, and ELK.
- Functional and performance testing on analytical databases and query engines such as Trino, StarRocks, and ClickHouse.
- Testing and validation of data products designed under data mesh architecture, ensuring domain-oriented data quality and governance.
  • 4 - 10 yrs
  • 5.0 Lac/Yr
  • Bangalore
ETL ELT SQL Python Dbt Spark Hadoop Cloud Data CICD Data Security Data Warehousing
Design, build, and maintain ETL/ELT data pipelines and data lake solutions to support analytics and AI/ML use cases. Ensure data quality, performance, and reliability across enterprise data platforms.

Key Responsibilities:
- Pipeline Development
- Data Lake Engineering
- Performance & Optimization
- Collaboration & Support

Required Skills & Experience:
- 4+ years of experience in data engineering or ETL development.
- Proficiency in SQL and Python (or Scala/Java) for data transformations.
- Hands-on with ETL tools (Informatica, Talend, dbt, SSIS, Glue, or similar).
- Exposure to big data technologies (Hadoop, Spark, Hive, Delta Lake).
- Familiarity with cloud data platforms (AWS Glue/Redshift, Azure Data Factory/Synapse, GCP Dataflow/BigQuery).
- Understanding of workflow orchestration (Airflow, Oozie, Prefect, or Temporal).

Preferred Knowledge:
- Experience with real-time data pipelines using Kafka, Kinesis, or Pub/Sub.
- Basic understanding of data warehousing and dimensional modeling.
- Exposure to containerization and CI/CD pipelines for data engineering.
- Knowledge of data security practices (masking, encryption, RBAC).

Education & Certifications:
- Bachelor's degree in Computer Science, IT, or related field.
- Preferred certifications: AWS Data Analytics Specialty / Azure Data Engineer Associate / GCP Data Engineer; dbt or Informatica/Talend certifications.
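The extract-transform-load pattern this listing centres on can be sketched in a few lines of stdlib Python. This is a toy illustration under stated assumptions: the hard-coded rows stand in for a real source system, and an in-memory SQLite table stands in for the warehouse target; the function and column names are invented for the example.

```python
import sqlite3

def extract():
    # Stand-in for pulling rows from a source system or API.
    return [
        {"order_id": 1, "amount": "120.50", "country": "in"},
        {"order_id": 2, "amount": "80.00", "country": "us"},
    ]

def transform(rows):
    # Cast amounts to float and normalise country codes to upper case.
    return [(r["order_id"], float(r["amount"]), r["country"].upper()) for r in rows]

def load(rows, conn):
    # Stand-in for the warehouse/lake target.
    conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL, country TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 200.5
```

Real pipelines add the pieces the listing names on top of this skeleton: an orchestrator (Airflow, Prefect) to schedule the steps, and data-quality checks between transform and load.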

  • 6 - 12 yrs
  • 16.0 Lac/Yr
  • Bangalore
Python GCP Developer
Job Title: Data Engineer
Location: Bangalore
Experience: 6+ years
Notice Period: Immediate to 21 days

MUST-HAVE TECHNICAL SKILLS (skill: expected depth):
- Python for Data Pipelines: independently written ingestion/transformation scripts, including pagination, exception handling, logging, and dataframe-level operations using Pandas, JSON, or GCP SDKs.
- DBT (Data Build Tool): authored and executed DBT models and tests using YAML files and Jinja macros; contributed to CI test configs and schedule integration.
- GCP (BigQuery, GCS, CloudSQL): hands-on experience with at least two of the above tools in pipeline execution, e.g., used BigQuery for SQL transformation and GCS for raw/processed layer segregation.
- AWS Lambda: integrated serverless functions to automate trigger points like new file upload, API call chaining, or job completion; used boto3 or GCP Pub/Sub hooks.
- Data Quality & Validation: developed or plugged in validation layers for ingestion, such as record count matching, null/duplicate flagging, and recon table population.
- Cloud-Native Modeling: adapted pre-existing logical models to ingestion logic, ensuring correct joins, partitioning strategy, and target-layer conformity (Star/Snowflake).
- Version Control & Agile: participated in Git branching workflows and sprint-based delivery (JIRA or similar); able to push/pull/test with basic conflict resolution.
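The "pagination, exception handling, logging" skill this listing leads with can be shown in a short, self-contained sketch. `fake_api` below is a hypothetical stand-in for a real paginated HTTP endpoint, and the termination rule (an empty page ends the loop) is one common convention among several; all names are illustrative.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingest")

PAGES = {0: [1, 2, 3], 1: [4, 5], 2: []}  # empty page signals the end

def fake_api(page: int):
    # Stand-in for a paginated HTTP GET; raises on unknown pages.
    if page not in PAGES:
        raise KeyError(page)
    return PAGES[page]

def ingest_all(fetch):
    """Walk pages until an empty batch; log progress and failures."""
    records, page = [], 0
    while True:
        try:
            batch = fetch(page)
        except Exception:
            log.exception("page %d failed, stopping", page)
            break
        if not batch:  # pagination terminates on an empty page
            break
        log.info("page %d: %d records", page, len(batch))
        records.extend(batch)
        page += 1
    return records

print(ingest_all(fake_api))  # [1, 2, 3, 4, 5]
```

In a real pipeline the `except` branch would typically retry with backoff rather than stop, but the shape of the loop (fetch, check, log, advance) is the same.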
Software Engineer Information Technology Engineer Java Html Autocad Data Analyst Web Tools System Support Engineer Banking Back Office Back End Processing Admin Freshers Airport Operation Airport Executive Airport Representative Airline Ground Staff
The Information Technology Engineer is responsible for designing, developing, and implementing software applications and systems to support the organization's operations. They will also provide technical support and assistance to end-users.

Key responsibilities include:
- Developing and maintaining software applications using various programming languages and tools such as Java, HTML, and AutoCAD
- Conducting data analysis to identify trends and patterns
- Providing system support and troubleshooting technical issues
- Working closely with the team to implement and maintain web tools
Glue Lambda ETL
- 3+ years of AWS data engineering: Glue, Step Functions, Lambda, S3, DynamoDB, EC2
- Strong Python (boto3) scripting for automation
- Terraform or CloudFormation expertise
- Hands-on experience integrating RAG workflows or deploying LLM applications
- Solid SQL and NoSQL data-modeling skills
- Excellent written and verbal communication in client-facing contexts

Data Engineer

Guiding Consulting

  • 10 - 12 yrs
  • Bangalore
SQL Python Spark Data Integration ETL AWS ETL Tool Data Warehousing Azure Server
Job Description:
Yrs of Exp: 10+ yrs
Mode: 3 days a week
Location: Bangalore
Work Type: Permanent

Key Responsibilities:
Design and Development:
- Architect, implement, and optimize scalable data solutions.
- Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data.
Collaboration:
- Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights.
- Partner with cloud architects and DevOps teams to ensure robust, secure, and cost-effective data platform deployments.
Data Management:
- Manage and maintain data lakes, data warehouses, and real-time analytics systems.
- Ensure high data quality, integrity, and security across the organization.
Performance Optimization:
- Monitor and enhance system performance, troubleshoot issues, and implement optimizations as needed.
- Leverage Microsoft Fabric's advanced analytics and AI capabilities for innovative data solutions.
Best Practices & Leadership:
- Lead and mentor junior engineers to foster a culture of technical excellence.
- Stay updated with industry trends and best practices, especially in the Microsoft ecosystem.

Required:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 10+ years of experience in data engineering, with a proven track record of working on large-scale data platforms.
- Expertise in Microsoft Fabric and its components (e.g., Synapse, Data Factory, Azure Data Lake, Power BI).
- Strong proficiency in SQL, Python, and Spark.
- Experience with cloud platforms, particularly Microsoft Azure.
- Solid understanding of data modeling, data warehousing, and ETL/ELT best practices.
- Excellent problem-solving, communication, team management, and project management skills.

Preferred:
- Familiarity with other cloud platforms (e.g., AWS, GCP).
- Experience with machine learning pipelines or integrating AI into data workflows.
- Certifications in Microsoft Azure or related technologies.

GEN AI - AIML

Welkin Soft Tech Pvt. Ltd.

  • 8 - 12 yrs
  • 18.0 Lac/Yr
  • Bangalore
GEN AI AI ML Python LLM Optimization AI Engineer Integration Data Engineer Analysis SQL Cloud Computing Data Base Natural Language Processing
Job Opening: Generative AI & LLMs / Distinguished Gen AI Engineer
Location: Remote
Experience: 7+ years
To apply, send your profile to sandhya@welkinsofttech.com / hr@welkinsofttech.com, or connect with us here.

Key Responsibilities:
- LLM Development: Design, fine-tune, and implement large language models (e.g., GPT, BERT, T5) for applications like personalized learning, content generation, and semantic search.
- Generative AI Solutions: Drive innovation with Gen AI, developing tools like adaptive learning paths, resume builders, and AI-written job descriptions.
- Machine Learning: Create predictive models and recommendation engines that align user profiles to skills and job opportunities.
- Token Optimization: Work with OpenAI and other services to manage token efficiency and usage costs.
- AI Integration: Collaborate with product and engineering teams to integrate AI features seamlessly into the Elefy platform.
- Data Engineering: Build and maintain robust data pipelines using Python, Node.js, and MongoDB.
- Data Analysis: Analyze large datasets to surface actionable insights for user engagement and platform growth.
- Visualization & Reporting: Build dashboards using Tableau, Power BI, or Matplotlib to communicate insights to stakeholders.
- Documentation: Ensure clear and comprehensive documentation for models, pipelines, and workflows.

Who We're Looking For:
Experience:
- 8+ years in data science or AI, with 3+ years hands-on with LLMs or Gen AI in production settings.
- Proven track record of delivering ML models in scalable, real-world applications.
Skills:
- Languages: Python (must), R, SQL
- Frameworks: PyTorch, TensorFlow, Hugging Face, Scikit-learn
- Prompt Engineering: Few-Shot Learning, Dynamic Prompting, Role Play, Chain-of-Thought (nice to have)
- Cloud: Azure (preferred), AWS, or GCP
- Database: MongoDB or similar NoSQL/SQL systems
Knowledge & Tools:
- Deep NLP & LLM expertise (e.g., GPT, BERT, T5)
- Containerization, APIs, CI/CD, and Azure-native cloud tools
- Strong visual storytelling via Tableau, Power BI, or Python-based plots
- Agile and cross-functional collaboration mindset
Bonus Points:
- Experience with ethical AI, bias mitigation, and explainability
- Familiarity with skill-based learning platforms or EdTech ecosystems

Join us at Elefy and be part of a team that's reshaping the future of learning with AI.

Big Data Engineer (Spark and Scala)

E2E Infoware Management Services

Scala Spark Pyspark
Role: Bigdata Developer - Scala Spark
Exp: 5+ yrs
Mode of Work: WFO, all 5 days
Location: Chennai/Bangalore/Pune
Interview: Any one level F2F

Job Description:
- Total IT/development experience of 3+ years
- Experience in Spark (Scala-Spark), developing Big Data applications on Hadoop, Hive and/or Kafka, HBase, MongoDB
- Deep knowledge of Scala-Spark libraries to develop and debug complex data engineering challenges
- Experience in developing sustainable data-driven solutions with current new-generation data technologies to drive our business and technology strategies
- Exposure to deploying on cloud platforms
- At least 2 years of development experience designing and developing data pipelines for data ingestion or transformation using Spark-Scala
- At least 2 years of development experience in the following Big Data frameworks: file formats (Parquet, AVRO, ORC), resource management, distributed processing, and RDBMS
- At least 2 years of developing applications in Agile with monitoring, build tools, version control, unit tests, Unix shell scripting, TDD, CI/CD, and change management to support DevOps

Looking For ML Engineer

The Supreme Consultancy

Machine Learning Data Analysis Python ML Engineer Data Science Data Analyst Problem Solving Deep Learning Deep Learning Engineer
Mandatory Criteria (can't be neglected during screening):
- Looking for only BTech and BE candidates.
- Candidate should have hands-on development experience as a Data Analyst and/or ML Engineer.
- Candidate must have coding experience in Python.
- Need candidates with at least 1-2 years of ML experience.
- Candidate should have good experience with ML models and ML algorithms.
- Need experience with statistical modelling of large data sets.
- Looking for immediate joiners or candidates with max. 30 days of notice period.
- Candidates based in Bangalore, Pune, Hyderabad, or Mumbai will be preferred.

Kindly note the salary bracket varies according to the candidate's experience:
- 4 to 5 yrs: 15-21 LPA
- 6 to 7 yrs: 21-25 LPA
- 8 to 9 yrs: 30-32 LPA
- 10 to 12 yrs: up to 40 LPA

What you will do:
- Play the role of Data Analyst / ML Engineer
- Collection, cleanup, exploration, and visualization of data
- Perform statistical analysis on data and build ML models
- Implement ML models using some of the popular ML algorithms
- Use Excel to perform analytics on large amounts of data
- Understand, model, and build to bring actionable business intelligence out of data that is available in different formats
- Work with data engineers to design, build, test, and monitor data pipelines for ongoing business operations

Basic Qualifications:
- Only BTech and BE candidates.
- Experience: 4+ years.
- Hands-on development experience playing the role of Data Analyst and/or ML Engineer.
- Experience working with Excel for data analytics
- Experience with statistical modelling of large data sets
- Experience with ML models and ML algorithms
- Coding experience in Python

Nice-to-have Qualifications:
- Experience with a wide variety of tools used in ML
- Experience with deep learning

Benefits:
- Competitive salary.
- Hybrid work model.
- Learning and gaining experience rapidly.
- Reimbursement for a basic working setup at home.
- Insurance (including top-up insurance for COVID).
  • Fresher
  • Bangalore
SQL Tableau Power BI ML Engineer
Job Overview: This role supports students working on course assignments and projects as part of ExcelR training.

Responsibilities and Duties:
- Evaluate assignments of the students
- Help the participants solve their queries
- Mentor/co-mentor the projects

Qualifications:
- Any graduate
- Completed a course on Data Analytics or Data Science
- Should have good communication skills

Requirements:
- For DA: any 2 of the following modules (Excel, MySQL, Tableau, Power BI)
- For DS: knowledge of ML concepts

Data Scientist

Kudos Technolabs

  • 3 - 5 yrs
  • Bangalore
Python ML Frameworks & Libraries NumPy Pandas Scikit-Learn TensorFlow PyTorch Keras Matplotlib EDA Data Visualization Techniques SQL Statistical Analysis Hypothesis Testing Feature Engineering AWS Certified AWS Azure GCP Hadoop Spark SciPy Machine Learning
We are looking for a highly motivated and skilled Data Scientist to join our team. The ideal candidate will have a strong background in data science, machine learning, and statistical analysis, with hands-on experience in Python and industry-standard libraries. You will be responsible for deriving actionable insights, building predictive models, and effectively communicating findings through data storytelling.
  • 3 - 8 yrs
  • 5.5 Lac/Yr
  • Bangalore
Data Analysis Configuration Engineer Integration Engineer Data Management System Test Engineer Dashboard Manager
Job Title: Senior Analyst - People Systems (Cornerstone OnDemand)
Location: Bengaluru, India
Employment Type: C2H (Contract-to-Hire) for an esteemed US-based client

About the Role:
We are seeking a Senior Product Analyst with expertise in Cornerstone OnDemand to join our team in Bengaluru. This role is part of a C2H engagement with a US-based client, offering you the opportunity to work on cutting-edge projects and contribute to a global team. As a Senior Analyst, you will play a pivotal role in enhancing the end-user experience by working closely with cross-functional teams, including People Technology, IT, Legal, and various Enablement teams. If you are passionate about problem-solving, driving transformation through automation, and elevating employee experiences, this role is for you!

Key Responsibilities:
- Ensure data accuracy and seamless ongoing enhancements of new features/functionality.
- Act as the primary point of contact for data gathering, testing, and communication with key stakeholders and internal Business Systems teams.
- Understand business requirements, configure solutions, and demonstrate configurations through the development of testing systems.
- Lead feature assessment, requirements gathering, user story creation, and acceptance criteria definition.
- Provide hands-on training to cross-functional teams to broaden internal knowledge of system configurations.
- Proactively interface between business partner groups and the Business Technology team to ensure effective coordination and delivery of implementations and enhancements.
- Coordinate production support and ensure timely resolution of issues for end-users.
- Communicate effectively within and outside the immediate team, providing regular updates to stakeholders and identifying potential issues early.
- Propose process and technology improvements, ensuring compliance and security in designs.

Interested candidates can share their updated resumes at chinmaypatil15@gmail.com

AWS Data Engineer

Hexaware Technologies

SQL AWS Python ETL Terraform Lambda
Work Mode: Hybrid

- 6-9 years of overall IT experience, preferably in cloud environments.
- Minimum of 5 years of hands-on experience with AWS cloud development projects.
- Design and develop AWS data architectures and solutions.
- Build robust data pipelines and ETL processes using big data technologies.
- Utilize AWS data services such as Glue, Lambda, Redshift, and Athena effectively.
- Implement infrastructure as code (IaC) using Terraform.
- Proficiency in SQL, Python, and other relevant programming/scripting languages.
- Experience with orchestration tools like Apache Airflow or AWS Step Functions.
- Strong understanding of data warehousing concepts, data lakes, and data governance frameworks.
- Expertise in data modeling for both relational and non-relational databases.
- Excellent communication skills are essential for this role.
Python SQL ML Docker AWS Cloud Engineer
Level of skills and experience:
- 5 years of hands-on experience using Python, Spark, SQL.
- Experienced in AWS cloud usage and management.
- Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
- Experience using various ML models and frameworks such as XGBoost, LightGBM, Torch.
- Experience with orchestrators such as Airflow and Kubeflow.
- Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
- Fundamental understanding of Parquet, Delta Lake, and other data file formats.
- Proficiency in an IaC tool such as Terraform, CDK, or CloudFormation.
- Strong written and verbal English communication skills, and proficiency in communicating with non-technical stakeholders.

Looking For Hiring For BMS Commissioning Engineer

JOB24by7 Recruitment Consultancy Services

Electrical Electronics Instrumentation BMS Engineering Testing Commissioning SCADA PLC OPC Designing Building Management System Automation HMI Programming Graphics SQL Computer Knowledge Data Base Database Management Programming Problem Solving Troubleshooting Skills Technical Skills Client Coordination Client Management Implementation Communication Organizational
- Bachelor's degree in Electrical/Electronics/Instrumentation Engineering from an accredited college or university.
- Minimum 3 years of experience in BMS engineering, testing & commissioning.
- SCADA, PLC, BMS engineering/programming experience preferred.
- Strong understanding of OPC, BACnet, and Modbus protocols, including IP, ETH, and MS/TP.
- Design, development, commissioning, and testing of building automation systems.
- BMS/SCADA/HMI graphic screen development.
- IO loop tests and functional tests.
- Basic knowledge of SQL database programming.
- Excellent problem-solving and troubleshooting skills.
- Providing technical support to clients.
- Experience with current trends in automation and instrumentation, to be able to select and implement modern controls architectures.
- Strong communication and organizational skills.

Looking For Cloud Data Engineer

Talme Technologies Pvt Ltd

Designing and Implementing Data Architecture Strategies Data Integration Data Management Supporting Analytics Technology Selection and Performance Optimization.
Technical Skills:
- In-depth knowledge of AWS services (IAM, Redshift, Step Functions)
- Infrastructure-as-code tools (CloudFormation Stack)
- Database technologies (SQL, NoSQL)
- Data processing and analysis tools (AWS Glue, EMR)
- Big data frameworks (Hadoop, Spark)
- ETL tools (IBM DataStage, ODI, Informatica, Talend, Microsoft SQL Server Integration Services, Apache NiFi)
- Data warehousing architecture, architecture design
- Collaboration tools (JIRA, Confluence)

If you are looking for a new challenge and have the relevant experience, apply now or tag someone who'd be perfect for this role. Send your resume to: hr@talme.in
  • 7 - 10 yrs
  • 35.0 Lac/Yr
  • Bangalore
Solution Architecting Data Engineering Design Architect Pipeline Management Data Pipeline Python AWS AWS Cloud Cloud Architect
Key Responsibilities:
- Requirement Analysis: Collaborate with stakeholders to understand business requirements and data sources, and define the architecture and design of data engineering models to meet these requirements.
- Architecture Design: Design scalable, reliable, and efficient data engineering models, including algorithms, data pipelines, and data processing systems, to support business requirements and quantitative analysis.
- Technology Selection: Evaluate (using POCs) and recommend appropriate technologies, frameworks, and tools for building and managing data engineering models, considering factors like performance, scalability, and cost-effectiveness.
- Data Processing: Develop and implement data processing logic, including data cleansing, transformation, and aggregation, using technologies such as AWS Glue, Batch, and Lambda.
- Quantitative Analysis: Collaborate with data scientists and analysts to develop algorithms and models for quantitative analysis, using techniques such as regression analysis, clustering, and predictive modeling.
- Model Evaluation: Evaluate the performance of data engineering models using metrics and validation techniques, and iterate on models to improve their accuracy and effectiveness.
- Data Visualization: Create visualizations of data and model outputs to communicate insights and findings to stakeholders.

Required Skills:
- Data Engineering: Understanding of data engineering principles and practices, including data ingestion, processing, transformation, and storage, using tools and technologies such as AWS Glue, Batch, and Lambda.
- Quantitative Analysis: Proficiency in quantitative analysis techniques, including statistical modeling, machine learning, and data mining, with experience in implementing algorithms for regression analysis, clustering, classification, and predictive modeling.
- Programming Languages: Proficiency in programming languages commonly used in data engineering and quantitative analysis, such as Python, R, Java, or Scala.
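Of the quantitative techniques the listing names, regression analysis is the simplest to make concrete. Here is a toy ordinary-least-squares fit for a single predictor in pure Python (no NumPy assumed); the data and the function name `fit_line` are invented for the example.

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: returns (slope, intercept).
    slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]  # lies exactly on y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(slope, intercept)  # 2.0 1.0
```

In practice one would reach for scikit-learn or statsmodels, which the broader listings here also mention, but the closed-form formulas above are what those libraries compute in the one-variable case.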
  • 7 - 10 yrs
  • 20.0 Lac/Yr
  • Bangalore
SQL Data Warehousing ETL Tool Data Integration
- Knowledge of cloud data warehousing in Azure and AWS
- Knowledge of RDBMS and data modelling
- Working with different storage and file types
- Proficiency in SQL for writing complex queries
- Managing relational databases and performing data operations
- Experience with ETL (extract, transform, load) tools and data modelling
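"Complex queries" in listings like this usually means multi-table joins with grouped aggregation. A hedged, self-contained sketch using Python's built-in sqlite3 module (standing in for a real warehouse engine; table and column names are invented):

```python
import sqlite3

# Throwaway in-memory tables standing in for warehouse fact/dimension tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE orders (customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'south'), (2, 'south'), (3, 'north');
INSERT INTO orders VALUES (1, 100.0), (1, 50.0), (2, 25.0), (3, 10.0);
""")

# Join, aggregate per region, and rank regions by revenue.
rows = conn.execute("""
    SELECT c.region,
           COUNT(DISTINCT c.id) AS buyers,
           SUM(o.amount)        AS revenue
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.region
    ORDER BY revenue DESC
""").fetchall()
print(rows)  # [('south', 2, 175.0), ('north', 1, 10.0)]
```

The same query shape carries over to the Azure/AWS warehouses the listing mentions; only the connection layer and SQL dialect details change.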

Looking For Azure Data Engineer

Hexaware Technologies

Azure Data Engineer Pyspark SQL Data Warehouse Data Lakes
Work mode: Hybrid

- Overall 4-9 yrs of IT experience, preferably in cloud
- Min 3 years in Azure Databricks on development projects
- Should be 100% hands-on in PySpark coding
- Should have strong SQL expertise in writing advanced/complex SQL queries
- DWH experience is a must for this role
- Experience in programming using Python is an advantage
- Experience in data ingestion, preparation, integration, and operationalization techniques, optimally addressing the data requirements
- Should be able to understand system architecture involving data lakes, data warehouses, and data marts
- Experience owning end-to-end development, including coding, testing, debugging, and deployment
- Excellent communication is required for this role

Opening For Cloud Data Engineer

Talme Technologies Pvt Ltd

Designing and Implementing Data Architecture Strategies Data Integration Data Management Supporting Analytics Technology Selection and Performance Optimization. Technical Skills: In-depth Knowledge Of AWS Services (IAM Redshift NoSQL) Data Processing and Analysis Tools (AWS Glue EMR) Big Data Frameworks (Hadoop Spark) ETL Tools (IBM DataStage ODI in
We are on the lookout for a seasoned Cloud Data Lead. We are eager to connect with you if you have extensive experience in cloud platforms, data architecture, and leadership!

Apply to 92 Data Engineer Job Vacancies in Bangalore
