Data Engineer Graduate Jobs in Bangalore


Looking For Big Data Engineer

Talent Zone Consultant

  • 6 - 12 yrs
  • Bangalore
Python SQL Spark Hadoop ETL Tools Data Warehousing Airflow Programming Data Visualization Data Lakes Data Modeling
Key Responsibilities:
  • Build and manage data pipelines and ETL processes
  • Work with large datasets using tools like Spark, Hadoop, or SQL
  • Ensure data quality and performance optimization
Requirements:
  • Experience in Python/SQL
  • Hands-on with ETL tools and big data technologies
  • Understanding of data warehousing concepts
Brief Summary: Develops scalable data systems to support analytics and business insights.
  • 3 - 8 yrs
  • Bangalore
Web Scraping Python
A Data Extraction Engineer designs extraction systems, not just scripts. They build and maintain a next-generation data acquisition platform that treats web scraping as a declarative, specification-driven discipline. Instead of hard-coding XPaths for every site, the Web Scraping Developer defines what data is needed (using schemas, natural-language descriptions, or visual blueprints) and lets intelligent pipelines figure out how to get it.
Key Responsibilities:
Specification-Driven Extraction Engineering
  • Design and maintain declarative extraction specifications (using Pydantic models, JSON schemas, or domain-specific languages) that describe exactly which fields to capture, their types, and validation rules.
  • Implement pipelines that translate these specifications into executable extraction plans, leveraging both classical (Scrapy, Playwright) and AI-augmented (LLM-based semantic parsing) backends.
  • Build reusable specification libraries for recurring data types (product prices, tariff codes, regulatory texts) to accelerate onboarding of new sources.
Autonomous & Self-Healing Systems
  • Deploy self-healing spiders that automatically detect website layout changes and repair themselves using Model Context Protocol (MCP) servers (e.g., Scrapy MCP Server, Playwright MCP).
  • Integrate semantic extraction (Scrapy-LLM, custom LLM pipelines) to eliminate selector brittleness: spiders rely on field descriptions, not fragile XPaths.
  • Orchestrate complex, multi-step browsing workflows with agentic frameworks (BMAD/TEA, AutoGPT-like agents) that reason about page state, adapt to anti-bot measures, and correct their own behaviour in real time.
Platform Thinking & Reusability
  • Move beyond one-off scrapers: build a component-based extraction platform where selectors, login handlers, and pagination logic are shared, versioned, and tested.
  • Implement monitoring, alerting, and automatic rollback for failed extraction runs.
  • Champion ethical crawling by design: rate limiting, robots.txt respect, and compliance with GDPR/CCPA are built into the specification layer, not retrofitted.
Collaboration & Continuous Innovation
  • Partner with data scientists and domain experts to refine extraction specifications for complex, unstructured domains (e.g., legal texts, tariff classifications).
  • Evaluate and pilot emerging tools to push automation coverage beyond 90%.
  • Document and evangelise specification-driven best practices across the engineering organisation.
Candidate Profile:
Education and Experience
  • Bachelor's degree in Computer Science
  • 3+ years of experience in web scraping or data extraction
Skills and Competencies
  • Specification-Driven Extraction: experience defining extraction requirements via schemas (Pydantic, JSON Schema) and executing them through both traditional crawlers and LLM-based semantic parsers.
  • Self-Healing & Semantic Extraction: hands-on use of Scrapy-LLM, Scrapy MCP Server, or similar systems that decouple field definitions from page structure.
  • Agentic Workflows: familiarity with frameworks that give LLMs browser control (Playwright + MCP, BMAD/TEA) to handle complex, non-deterministic crawling tasks.
  • Classical Scraping Fundamentals: you still know how to write a Scrapy spider or a Playwright script when needed, but you actively seek to replace that work with reusable, specification-driven components.
  • Data Validation & Storage: ability to define validation rules within specifications and land clean data into SQL/NoSQL databases or data lakes.
  • Python proficiency: the focus is on an extraction engineer who happens to use Python.
  • HTTP, DOM, XPath, CSS.
  • Basic API integration and authentication flows.
Preferred / Nice-to-Have Skills:
  • Contributions to open-source scraping or AI-automation projects.
  • Experience training or fine-tuning small LLMs for domain-specific extraction.
  • Familiarity with data privacy engineering (GDPR, CCPA) baked into specification design.
  • DevOps light: Docker, CI/CD for testing extraction specifications.
Mindset & Approach (Non-Negotiable): a strong belief that the future of scraping is declarative, not imperative. You'd rather write a schema that says "extract the price" than debug an XPath when a website redesigns. Looking to shift from code that scrapes to systems that understand extraction.
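The declarative style this role describes can be sketched in a few lines. As an illustrative assumption, stdlib dataclasses stand in for the Pydantic models named above, and the product fields, descriptions, and sample record are invented for the example:

```python
from dataclasses import dataclass

# A declarative extraction specification: what to capture, not how.
# Each field carries a semantic description that a selector- or LLM-based
# backend could use to locate the value; no XPaths are hard-coded here.
@dataclass
class FieldSpec:
    name: str
    description: str      # semantic hint for the extraction backend
    dtype: type = str
    required: bool = True

PRODUCT_SPEC = [
    FieldSpec("title", "the product's display name"),
    FieldSpec("price", "the current sale price as a number", dtype=float),
    FieldSpec("sku", "the vendor's stock-keeping unit", required=False),
]

def validate(record: dict, spec: list[FieldSpec]) -> list[str]:
    """Apply the spec's validation rules to one extracted record."""
    errors = []
    for f in spec:
        if f.name not in record or record[f.name] is None:
            if f.required:
                errors.append(f"missing required field: {f.name}")
            continue
        try:
            record[f.name] = f.dtype(record[f.name])  # coerce to declared type
        except (TypeError, ValueError):
            errors.append(f"{f.name}: cannot coerce to {f.dtype.__name__}")
    return errors

scraped = {"title": "Widget", "price": "19.99"}  # raw backend output
print(validate(scraped, PRODUCT_SPEC))           # → [] (clean record)
print(scraped["price"])                          # → 19.99 (coerced to float)
```

The point of the design is that `PRODUCT_SPEC` survives a site redesign unchanged; only the backend that resolves descriptions to page content needs to adapt.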
  • 5 - 11 yrs
  • 25.0 Lac/Yr
  • Bangalore
Apache Kafka Azure Grafana Data Warehousing
Role: Data Engineer 2.0
Location: Remote
Experience: min. 5 years
Notice Period: immediate to 15 days, or serving notice period
Key Responsibilities:
  • Design and implement manual test strategies for real-time streaming use cases using Azure Service Bus, Event Hubs, Kafka, and Azure Functions.
  • Validate Spark Streaming applications, including unbounded data flows, streaming DataFrames, checkpoints, and streaming joins.
  • Develop test plans for containerized microservices deployed on Kubernetes, ensuring scalability and fault tolerance.
  • Test data ingestion and transformation workflows across open table formats like Delta Lake, Apache Iceberg, and Hudi.
Good to Have:
  • Monitoring and troubleshooting system performance using observability stacks such as Prometheus, Grafana, and ELK.
  • Functional and performance testing on analytical databases and query engines such as Trino, StarRocks, and ClickHouse.
  • Testing and validation of data products designed under data mesh architecture, ensuring domain-oriented data quality and governance.
  • 4 - 10 yrs
  • 5.0 Lac/Yr
  • Bangalore
ETL ELT SQL Python Dbt Spark Hadoop Cloud Data CICD Data Security Data Warehousing
Design, build, and maintain ETL/ELT data pipelines and data lake solutions to support analytics and AI/ML use cases. Ensure data quality, performance, and reliability across enterprise data platforms.
Key Responsibilities:
  • Pipeline Development
  • Data Lake Engineering
  • Performance & Optimization
  • Collaboration & Support
Required Skills & Experience:
  • 4+ years of experience in data engineering or ETL development.
  • Proficiency in SQL and Python (or Scala/Java) for data transformations.
  • Hands-on with ETL tools (Informatica, Talend, dbt, SSIS, Glue, or similar).
  • Exposure to big data technologies (Hadoop, Spark, Hive, Delta Lake).
  • Familiarity with cloud data platforms (AWS Glue/Redshift, Azure Data Factory/Synapse, GCP Dataflow/BigQuery).
  • Understanding of workflow orchestration (Airflow, Oozie, Prefect, or Temporal).
Preferred Knowledge:
  • Experience with real-time data pipelines using Kafka, Kinesis, or Pub/Sub.
  • Basic understanding of data warehousing and dimensional modeling.
  • Exposure to containerization and CI/CD pipelines for data engineering.
  • Knowledge of data security practices (masking, encryption, RBAC).
Education & Certifications:
  • Bachelor's degree in Computer Science, IT, or related field.
  • Preferred certifications: AWS Data Analytics Specialty / Azure Data Engineer Associate / GCP Data Engineer; dbt or Informatica/Talend certifications.
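The extract-transform-load pattern this role centres on reduces to a small sketch. The raw feed, table name, and use of an in-memory SQLite database as a stand-in "warehouse" are all assumptions made for illustration:

```python
import sqlite3

# Hypothetical raw feed: amounts arrive as strings, one row is malformed.
raw_rows = [
    {"order_id": "1001", "amount": "250.00"},
    {"order_id": "1002", "amount": "n/a"},   # fails transformation
    {"order_id": "1003", "amount": "99.50"},
]

def transform(row):
    """Cast types; return None for rows that fail validation."""
    try:
        return (int(row["order_id"]), float(row["amount"]))
    except (ValueError, KeyError):
        return None

# Load: only clean rows reach the target table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
clean = [t for t in (transform(r) for r in raw_rows) if t is not None]
conn.executemany("INSERT INTO orders VALUES (?, ?)", clean)

total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(total)  # → (2, 349.5): the malformed row was quarantined, not loaded
```

Production tools (Glue, dbt, Airflow) add orchestration, retries, and scale around this same extract → validate/transform → load skeleton.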

  • 6 - 12 yrs
  • 16.0 Lac/Yr
  • Bangalore
Python GCP Developer
Job Title: Data Engineer
Location: Bangalore
Experience: 6+ years
Notice Period: immediate to 21 days
Must-Have Technical Skills (skill / expected depth):
  • Python for Data Pipelines: independently written ingestion/transformation scripts, including pagination, exception handling, logging, and dataframe-level operations using Pandas, JSON, or GCP SDKs.
  • DBT (Data Build Tool): authored and executed DBT models and tests using YAML files and Jinja macros; contributed to CI test configs and schedule integration.
  • GCP (BigQuery, GCS, CloudSQL): hands-on experience with at least two of these tools in pipeline execution, e.g., BigQuery for SQL transformation and GCS for raw/processed layer segregation.
  • AWS Lambda: integrated serverless functions to automate trigger points like new-file upload, API call chaining, or job completion; used boto3 or GCP Pub/Sub hooks.
  • Data Quality & Validation: developed or plugged in validation layers for ingestion, such as record-count matching, null/duplicate flagging, and recon table population.
  • Cloud-Native Modeling: adapted pre-existing logical models to ingestion logic, ensuring correct joins, partitioning strategy, and target-layer conformity (Star/Snowflake).
  • Version Control & Agile: participated in Git branching workflows and sprint-based delivery (JIRA or similar); able to push/pull/test with basic conflict resolution.
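A minimal sketch of the "pagination, exception handling, logging" depth expected under Python for Data Pipelines: `fetch_page` and the in-memory `DATA` list are stand-ins for a real paginated API client, and the page size is invented:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingest")

# Stand-in for a paginated API: returns one page and the next cursor.
DATA = [{"id": i} for i in range(7)]

def fetch_page(cursor: int, size: int = 3):
    page = DATA[cursor:cursor + size]
    next_cursor = cursor + size if cursor + size < len(DATA) else None
    return page, next_cursor

def ingest():
    """Walk all pages, logging progress and surfacing failures."""
    records, cursor = [], 0
    while cursor is not None:
        try:
            page, cursor = fetch_page(cursor)
        except Exception:
            log.exception("page fetch failed; aborting run")
            raise
        records.extend(page)
        log.info("fetched %d records (total %d)", len(page), len(records))
    return records

rows = ingest()
assert len(rows) == len(DATA)  # record-count matching, as the ad describes
```

The closing assertion is the simplest form of the record-count validation the same listing asks for under Data Quality & Validation.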
Glue Lambda ETL
  • 3+ years of AWS data engineering: Glue, Step Functions, Lambda, S3, DynamoDB, EC2
  • Strong Python (boto3) scripting for automation
  • Terraform or CloudFormation expertise
  • Hands-on experience integrating RAG workflows or deploying LLM applications
  • Solid SQL and NoSQL data-modeling skills
  • Excellent written and verbal communication in client-facing contexts

Big Data Engineer (Spark and Scala)

E2E Infoware Management Services

Scala Spark Pyspark
Role: Big Data Developer - Scala Spark
Experience: 5+ years
Mode of Work: WFO, all 5 days
Location: Chennai/Bangalore/Pune
Interview: any one level F2F
Job Description:
  • Total IT/development experience of 3+ years
  • Experience in Spark (Scala-Spark), developing Big Data applications on Hadoop, Hive and/or Kafka, HBase, MongoDB
  • Deep knowledge of Scala-Spark libraries to develop and debug complex data engineering challenges
  • Experience developing sustainable data-driven solutions with current new-generation data technologies to drive our business and technology strategies
  • Exposure to deploying on cloud platforms
  • At least 2 years of development experience designing and developing data pipelines for data ingestion or transformation using Spark-Scala
  • At least 2 years of development experience in the following Big Data frameworks: file formats (Parquet, AVRO, ORC), resource management, distributed processing, and RDBMS
  • At least 2 years of experience developing applications in Agile with monitoring, build tools, version control, unit testing, Unix shell scripting, TDD, CI/CD, and change management to support DevOps

Looking For ML Engineer

The Supreme Consultancy

Machine Learning Data Analysis Python ML Engineer Data Science Data Analyst Problem Solving Deep Learning Deep Learning Engineer
Mandatory Criteria (cannot be neglected during screening):
  • Looking for only B.Tech and BE candidates.
  • Candidate should have hands-on development experience as a Data Analyst and/or ML Engineer.
  • Candidate must have coding experience in Python.
  • Need candidates with at least 1-2 years of ML experience.
  • Candidate should have good experience with ML models and ML algorithms.
  • Need experience with statistical modelling of large data sets.
  • Looking for immediate joiners, or a max. 30-day notice period.
  • Candidates based in Bangalore, Pune, Hyderabad, or Mumbai will be preferred.
Salary bracket varies with the candidate's experience:
  • 4 to 5 yrs: 15 - 21 LPA
  • 6 to 7 yrs: 21 - 25 LPA
  • 8 to 9 yrs: 30 - 32 LPA
  • 10 to 12 yrs: up to 40 LPA
What You Will Do:
  • Play the role of Data Analyst / ML Engineer
  • Collection, cleanup, exploration, and visualization of data
  • Perform statistical analysis on data and build ML models
  • Implement ML models using some of the popular ML algorithms
  • Use Excel to perform analytics on large amounts of data
  • Understand, model, and build to bring actionable business intelligence out of data available in different formats
  • Work with data engineers to design, build, test, and monitor data pipelines for ongoing business operations
Basic Qualifications:
  • Only B.Tech and BE candidates
  • Experience: 4+ years
  • Hands-on development experience in the role of Data Analyst and/or ML Engineer
  • Experience working with Excel for data analytics
  • Experience with statistical modelling of large data sets
  • Experience with ML models and ML algorithms
  • Coding experience in Python
Nice-to-Have Qualifications:
  • Experience with a wide variety of ML tools
  • Experience with deep learning
Benefits:
  • Competitive salary
  • Hybrid work model
  • Learning and gaining experience rapidly
  • Reimbursement for a basic home working setup
  • Insurance (including top-up insurance for COVID)
  • Fresher
  • Bangalore
SQL Tableau Power BI ML Engineer
Job Overview: This role supports students working on course assignments and projects as part of ExcelR training.
Responsibilities and Duties:
  • Evaluate students' assignments
  • Help participants solve their queries
  • Mentor / co-mentor projects
Qualifications:
  • Any graduate
  • Completed a course in Data Analytics or Data Science
  • Good communication skills
Requirements:
  • For DA: any 2 of the following modules (Excel, MySQL, Tableau, Power BI)
  • For DS: knowledge of ML concepts

Data Scientist

Kudos Technolabs

  • 3 - 5 yrs
  • Bangalore
Python ML Frameworks & Libraries NumPy Pandas Scikit-Learn TensorFlow PyTorch Keras Matplotlib EDA Data Visualization Techniques SQL Statistical Analysis Hypothesis Testing Feature Engineering AWS Certified AWS Azure GCP Hadoop Spark SciPy Machine Learning
We are looking for a highly motivated and skilled Data Scientist to join our team. The ideal candidate will have a strong background in data science, machine learning, and statistical analysis, with hands-on experience in Python and industry-standard libraries. You will be responsible for deriving actionable insights, building predictive models, and effectively communicating findings through data storytelling.
  • 3 - 8 yrs
  • 5.5 Lac/Yr
  • Bangalore
Data Analysis Configuration Engineer Integration Engineer Data Management System Test Engineer Dashboard Manager
Job Title: Senior Analyst - People Systems (Cornerstone OnDemand)
Location: Bengaluru, India
Employment Type: C2H (Contract-to-Hire) for an esteemed US-based client
About the Role: We are seeking a Senior Product Analyst with expertise in Cornerstone OnDemand to join our team in Bengaluru. This role is part of a C2H engagement with a US-based client, offering you the opportunity to work on cutting-edge projects and contribute to a global team. As a Senior Analyst, you will play a pivotal role in enhancing the end-user experience by working closely with cross-functional teams, including People Technology, IT, Legal, and various Enablement teams. If you are passionate about problem-solving, driving transformation through automation, and elevating employee experiences, this role is for you!
Key Responsibilities:
  • Ensure data accuracy and seamless ongoing enhancements of new features/functionality.
  • Act as the primary point of contact for data gathering, testing, and communication with key stakeholders and internal Business Systems teams.
  • Understand business requirements, configure solutions, and demonstrate configurations through the development of testing systems.
  • Lead feature assessment, requirements gathering, user story creation, and acceptance criteria definition.
  • Provide hands-on training to cross-functional teams to broaden internal knowledge of system configurations.
  • Proactively interface between business partner groups and the Business Technology team to ensure effective coordination and delivery of implementations and enhancements.
  • Coordinate production support and ensure timely resolution of issues for end users.
  • Communicate effectively within and outside the immediate team, providing regular updates to stakeholders and identifying potential issues early.
  • Propose process and technology improvements, ensuring compliance and security in designs.
Interested candidates can share their updated resumes at chinmaypatil15@gmail.com
Python SQL ML Docker AWS Cloud Engineer
Level of skills and experience:
  • 5 years of hands-on experience using Python, Spark, and SQL.
  • Experienced in AWS cloud usage and management.
  • Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
  • Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
  • Experience with orchestrators such as Airflow and Kubeflow.
  • Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
  • Fundamental understanding of Parquet, Delta Lake, and other data file formats.
  • Proficiency in an IaC tool such as Terraform, CDK, or CloudFormation.
  • Strong written and verbal English communication skills; proficient in communicating with non-technical stakeholders.

Looking For BMS Commissioning Engineer

JOB24by7 Recruitment Consultancy Services

Electrical Electronics Instrumentation BMS Engineering Testing Commissioning SCADA PLC OPC Designing Building Management System Automation HMI Programming Graphics SQL Computer Knowledge Data Base Database Management Programming Problem Solving Troubleshooting Skills Technical Skills Client Coordination Client Management Implementation Communication Organizational
  • Bachelor's degree in Electrical/Electronics/Instrumentation Engineering from an accredited college or university
  • Minimum 3 years of experience in BMS engineering, testing & commissioning
  • SCADA, PLC, and BMS engineering/programming experience preferred
  • Strong understanding of OPC, BACnet, and Modbus protocols, including IP, ETH, and MS/TP
  • Design, development, commissioning, and testing of building automation systems
  • BMS/SCADA/HMI graphic screen development
  • IO loop tests and functional tests
  • Basic knowledge of SQL database programming
  • Excellent problem-solving and troubleshooting skills
  • Providing technical support to clients
  • Experience with current trends in automation and instrumentation, to select and implement modern controls architectures
  • Strong communication and organizational skills

Opening For Cloud Data Engineer

Talme Technologies Pvt Ltd

Designing and Implementing Data Architecture Strategies Data Integration Data Management Supporting Analytics Technology Selection and Performance Optimization. Technical Skills: In-depth Knowledge Of AWS Services (IAM Redshift NoSQL) Data Processing and Analysis Tools (AWS Glue EMR) Big Data Frameworks (Hadoop Spark) ETL Tools (IBM DataStage ODI in
We are on the lookout for a seasoned Cloud Data Lead. We are eager to connect with you if you have extensive experience in cloud platforms, data architecture, and leadership!
Cloud Engineer
  • Strong experience with Microsoft Azure data services, including Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and Azure SQL Database.
  • Proficiency in ETL processes, data pipeline design, and automation within the Azure ecosystem.
  • Experience with big data technologies such as Apache Spark, Hadoop, Azure HDInsight, and Databricks.
  • Solid understanding of data modeling, data warehousing concepts, and relational and non-relational databases.
  • Strong knowledge of SQL and experience with scripting languages like Python, R, or Scala.
  • Experience with Azure Storage services, such as Blob Storage, Data Lake Storage, and Azure Cosmos DB.
  • Familiarity with data governance, data lineage, and data security practices in the cloud.
  • Hands-on experience with Azure DevOps for CI/CD pipeline management, version control, and deployment automation.
  • Familiarity with Azure Active Directory (AAD) and role-based access control (RBAC) for managing permissions and security.
  • Experience with RESTful APIs and integrating with third-party systems for data ingestion and export.
  • Strong problem-solving and analytical skills, with the ability to work with large datasets and derive actionable insights.
  • Ability to collaborate effectively with stakeholders and communicate complex data engineering concepts in simple terms.
Preferred Qualifications:
  • Azure certifications such as Microsoft Certified: Azure Data Engineer Associate or Azure Solutions Architect Expert.
  • Experience with real-time data processing using Azure Stream Analytics or Apache Kafka.
  • Knowledge of machine learning and AI concepts for data-driven decision-making.
  • Familiarity with Power BI or other reporting tools for data visualization.
  • Experience with Agile methodologies in a data engineering environment.

Ataccama Admin

Learning Lane Pvt Ltd

Ataccama DevOps Engineer AWS Azure Administrator Linux Windows VMs (Virtual Machines) Data Management
Detailed Job Description: We are looking for an Ataccama Admin to join our team and help us manage and maintain our Ataccama data quality and data governance platform. You will be responsible for installing, configuring, and maintaining Ataccama on AWS/Azure, as well as developing and implementing data quality rules and policies. You should have a deep understanding of Ataccama architecture and best practices, as well as experience in data management and data governance. Experience administering Collibra and Immuta is preferred. You should also have experience managing VMs and Windows- and Linux-based systems. Pharma experience is preferred.
Essential Duties and Responsibilities:
  • Install, configure, and maintain Ataccama
  • Develop and implement data quality rules and policies
  • Monitor and report on data quality metrics
  • Troubleshoot and resolve Ataccama-related issues
  • Stay up to date on the latest Ataccama features and best practices
  • Work with cross-functional teams to implement data governance policies and procedures
  • Manage and maintain VMs and Windows- and Linux-based systems
  • Manage redundancy, backup, and recovery plans and processes
  • Strong AWS/Azure experience
Qualifications:
  • 4+ years of experience with Ataccama
  • Experience in data management and data governance
  • Experience administering Collibra and Immuta preferred
  • Experience managing VMs and Windows- and Linux-based systems
  • Experience in performance tuning
  • Pharma experience preferred
  • Strong analytical and problem-solving skills
  • Excellent communication and teamwork skills

Opening For Data Engineer

Rak Analytics Solutions

  • 0 - 3 yrs
  • 5.0 Lac/Yr
  • Bangalore
Java Python SQL HBase Hadoop
Invitation for B2B Partnerships: Seeking Software Development Support! We are looking to collaborate with companies that can provide 10 skilled Data Engineers to support our data engineering requirements pipeline for European clients on a contract basis.
Requirements:
  • Expertise in data engineering, including data processing and ETL
  • Proficiency in SQL and NoSQL databases
  • Experience with Hadoop or Spark
  • Experience in AWS or Azure
  • Snowflake is an added advantage
If your company is equipped to provide top-notch Data Engineers, we invite you to submit your proposal with terms and conditions. Please contact us by email or WhatsApp: rakanalytics@gmail.com or 9900173022

Data Engineer

krtrimaiq cognitive solution

  • 4 - 9 yrs
  • Bangalore
SCALA Python SQL
Job Summary: We are seeking an experienced Scala Developer with a strong background in Kafka and Big Data technologies. The ideal candidate will have extensive experience designing and implementing scalable data solutions, with a focus on performance and reliability. You will work closely with our data engineering and AI teams to build and maintain high-performance data pipelines and applications.
Key Responsibilities:
  • Develop and maintain scalable applications using Scala and related technologies.
  • Design, implement, and manage data pipelines utilizing Kafka and other Big Data tools.
  • Collaborate with cross-functional teams to define, design, and ship new features.
  • Optimize application performance, ensuring high throughput and low latency.
  • Monitor and troubleshoot data processing workflows, ensuring data integrity and reliability.
  • Participate in code reviews, provide feedback, and improve code quality.
  • Stay up to date with the latest trends and best practices in Big Data and functional programming.
Required Skills:
  • Strong proficiency in Scala, with 4+ years of hands-on experience.
  • Extensive experience with Kafka: stream processing, Kafka Connect, Kafka Streams, etc.
  • Solid understanding of Big Data technologies including Hadoop, Spark, HDFS, and Hive.
  • Proficiency in SQL and NoSQL databases.
  • Experience with ETL pipelines and data integration workflows.
  • Familiarity with data warehousing concepts and cloud platforms (AWS, GCP, Azure).
  • Knowledge of containerization (Docker) and orchestration tools (Kubernetes) is a plus.
  • Excellent problem-solving skills and a proactive attitude.
Qualifications:
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 4+ years of experience in Scala development, with a focus on Big Data and Kafka.
  • Proven track record of building and managing scalable, reliable data systems.
  • Strong communication skills and the ability to work in a collaborative environment.
Data Engineer Azure Data Engineer
Job Title: Azure Data Engineer
Location: Remote
Job Type: Contract
Requirements:
  • Proven experience as a Data Engineer with a focus on Azure services.
  • Strong expertise in Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and other Azure data services.
  • Proficiency in programming languages such as Python, SQL, and Spark.
  • Experience with data modeling, data warehousing, and ETL processes.
  • Knowledge of data security and privacy best practices.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.
  • Ability to work independently in a remote setup.
Preferred Qualifications:
  • Azure Data Engineer certification.
  • Experience with other cloud platforms an
Networking System Testing Java Python Programmer BPO Core Engineer Mechanical Engineer Data Validation Data Entry Operator Work From Home
We have an immediate requirement for freshers.
Roles: Software Engineer, Network Engineer, Desktop Support Engineer
Salary: 3 Lac - 4 Lac
Location: Chennai, Coimbatore, Bangalore
Training: 2 weeks of online training
Send resumes to hr.alphamanpower@gmail.com
Only Tamil Nadu candidates can apply.

Cloud Engineer - Full Time

Talent Zone Consultant

  • 9 - 15 yrs
  • Bangalore
AWS Azure Docker Kubernetes Terraform Ansible CICD Tools Linux DevOps Integration Network Security Data Management IT Security Windows Server Administration Statistical Programming Troubleshooting Skills
Key Responsibilities (Cloud Engineer):
  • Design, implement, and manage CI/CD pipelines
  • Automate deployments and infrastructure using tools like Terraform/Ansible
  • Monitor system performance and ensure high availability
Requirements:
  • Experience with AWS/Azure
  • Knowledge of Docker, Kubernetes, and scripting
  • Strong problem-solving skills
Brief Summary: Responsible for automating and optimizing cloud infrastructure and deployment processes.
  • 7 - 10 yrs
  • 35.0 Lac/Yr
  • Bangalore
Solution Architecting Data Engineering Design Architect Pipeline Management Data Pipeline Python AWS AWS Cloud Cloud Architect
Key Responsibilities:
  • Requirement Analysis: collaborate with stakeholders to understand business requirements and data sources, and define the architecture and design of data engineering models to meet these requirements.
  • Architecture Design: design scalable, reliable, and efficient data engineering models, including algorithms, data pipelines, and data processing systems, to support business requirements and quantitative analysis.
  • Technology Selection: evaluate (using POCs) and recommend appropriate technologies, frameworks, and tools for building and managing data engineering models, considering factors like performance, scalability, and cost-effectiveness.
  • Data Processing: develop and implement data processing logic, including data cleansing, transformation, and aggregation, using technologies such as AWS Glue, Batch, and Lambda.
  • Quantitative Analysis: collaborate with data scientists and analysts to develop algorithms and models for quantitative analysis, using techniques such as regression analysis, clustering, and predictive modeling.
  • Model Evaluation: evaluate the performance of data engineering models using metrics and validation techniques, and iterate on models to improve their accuracy and effectiveness.
  • Data Visualization: create visualizations of data and model outputs to communicate insights and findings to stakeholders.
Required Skills:
  • Data Engineering: understanding of data engineering principles and practices, including data ingestion, processing, transformation, and storage, using tools and technologies such as AWS Glue, Batch, and Lambda.
  • Quantitative Analysis: proficiency in quantitative analysis techniques, including statistical modeling, machine learning, and data mining, with experience implementing algorithms for regression analysis, clustering, classification, and predictive modeling.
  • Programming Languages: proficiency in programming languages commonly used in data engineering and quantitative analysis, such as Python, R, Java, or Scala.
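The cleansing-transformation-aggregation step described under Data Processing can be illustrated in plain Python; the event records, the duplicate/null rows, and the grouping key are all hypothetical:

```python
from collections import defaultdict

# Hypothetical raw events: one exact duplicate and one null measure to cleanse.
events = [
    {"region": "south", "amount": 120.0},
    {"region": "south", "amount": 120.0},  # exact duplicate
    {"region": "north", "amount": None},   # null measure
    {"region": "north", "amount": 80.0},
]

# Cleanse: drop null measures and exact duplicates (order-preserving).
seen, clean = set(), []
for e in events:
    key = (e["region"], e["amount"])
    if e["amount"] is None or key in seen:
        continue
    seen.add(key)
    clean.append(e)

# Aggregate: total amount per region.
totals = defaultdict(float)
for e in clean:
    totals[e["region"]] += e["amount"]

print(dict(totals))  # → {'south': 120.0, 'north': 80.0}
```

At scale the same cleanse/aggregate logic would run as a Glue or Spark job rather than an in-process loop, but the shape of the transformation is identical.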

Apply to 75 Data Engineer Graduate Jobs in Bangalore
