
Big Data Engineer Job Vacancies in Bangalore


Looking For Big Data Engineer

Talent Zone Consultant

  • 6 - 12 yrs
  • Bangalore
Python SQL Spark Hadoop ETL Tools Data Warehousing Airflow Programming Data Visualization Data Lakes Data Modeling
Key Responsibilities:
  • Build and manage data pipelines and ETL processes
  • Work with large datasets using tools like Spark, Hadoop, or SQL
  • Ensure data quality and performance optimization
Requirements:
  • Experience in Python/SQL
  • Hands-on with ETL tools and big data technologies
  • Understanding of data warehousing concepts
Brief Summary: Develops scalable data systems to support analytics and business insights.

Big Data Engineer (Spark and Scala)

E2E Infoware Management Services

Scala Spark Pyspark
Role: Bigdata Developer - Scala Spark
Exp: 5+ yrs
Mode of Work: WFO, all 5 days
Location: Chennai/Bangalore/Pune
Interview: any one level, F2F
Job Description:
  • Total IT/development experience of 3+ years
  • Experience in Spark (Scala-Spark), developing Big Data applications on Hadoop, Hive and/or Kafka, HBase, MongoDB
  • Deep knowledge of Scala-Spark libraries to develop and debug complex data engineering challenges
  • Experience developing sustainable, data-driven solutions with new-generation data technologies to drive business and technology strategies
  • Exposure to deploying on cloud platforms
  • At least 2 years of development experience designing and developing data pipelines for data ingestion or transformation using Spark-Scala
  • At least 2 years of development experience with the following Big Data frameworks: file formats (Parquet, Avro, ORC), resource management, distributed processing and RDBMS
  • At least 2 years of experience developing applications in Agile with monitoring, build tools, version control, unit testing, Unix shell scripting, TDD, CI/CD and change management to support DevOps

Opening For Cloud Data Engineer

Talme Technologies Pvt Ltd

Designing and implementing data architecture strategies; data integration; data management; supporting analytics; technology selection and performance optimization. Technical skills: in-depth knowledge of AWS services (IAM, Redshift, NoSQL), data processing and analysis tools (AWS Glue, EMR), big data frameworks (Hadoop, Spark) and ETL tools (IBM DataStage, ODI).
We are on the lookout for a seasoned Cloud Data Lead. We are eager to connect with you if you have extensive experience in cloud platforms, data architecture, and leadership!

Big Data Engineer

Krtrimaiq Cognitive Solutions

  • 4 - 10 yrs
  • 20.0 Lac/Yr
  • Bangalore
Big Data Big Data Architecture Big Data Engineer Big Data Architect
  • Design, develop, and maintain scalable data processing pipelines using Kafka, PySpark, Python/Scala, and Spark.
  • Work extensively with the Kafka and Hadoop ecosystem, including HDFS, Hive, and other related technologies.
  • Write efficient SQL queries for data extraction, transformation, and analysis.
  • Implement and manage Kafka streams for real-time data processing.
  • Utilize scheduling tools to automate data workflows and processes.

  • 4 - 10 yrs
  • Bangalore
Big Data Spark Hadoop Work From Home
Job Title: Team Lead / Sr. Developer
Location: Any location
Skills:
  • Advanced working SQL/NoSQL knowledge and experience working with relational/non-relational databases
  • Expert in Python
  • Experience with big data tools: Hadoop, Spark, Kafka, NiFi, etc.
  • Experience with relational SQL (Oracle & PostgreSQL) and NoSQL (HDFS, Hive, HBase, Cassandra) databases
  • Experience with data pipeline and workflow management tools like Airflow
  • Experience building and optimizing big data pipelines, architectures and data sets
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
  • Strong analytic skills related to working with unstructured datasets
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management
  • A successful history of manipulating, processing and extracting value from large disconnected datasets
  • Working knowledge of message queuing, stream processing, and highly scalable big data stores
  • Strong project management and organizational skills
  • Experience supporting and working with cross-functional teams in a dynamic environment
Data Warehousing Python S3 Glue Snowflake Athena Redshift Data Modeling SQL Big Data Data Engineer
Job Description: The Lead Data Engineer is responsible for finding trends in data sets and developing algorithms to help make raw data more useful to the organization. This role requires a significant set of technical skills, including deep knowledge of SQL database design and multiple programming languages, along with team management skills. iSOCRATES is looking to add a Lead Data Engineer to its growing Analytics team; the Lead Data Engineer will lead that team and report directly to the Senior Director.
Qualifications:
  • Overall 8-12 years of experience in data & analytics
  • Strong experience with SQL/NoSQL databases
  • Design and build production data pipelines from ingestion to consumption within a big data architecture, using Python/Scala/Java
  • Design, build and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services (mandatory: Redshift, S3, Athena; good to have: Glue, Snowflake)
  • Design and implement data engineering, ingestion and curation functions on AWS cloud using AWS-native or custom programming
  • Perform detailed assessments of current-state data platforms and create an appropriate transition path to AWS cloud
  • Able to understand the data well and identify patterns in structured/non-structured data
  • Strong knowledge of data warehousing and data modelling (preferably Kimball dimensional modelling)
  • Proficiency in both ETL and ELT
  • AWS Certified Data Analytics Specialist / AWS Certified ML Specialist will be an added advantage
  • Exposure to media & marketing is a plus
  • Knowledge of GCP is a plus
  • Knowledge of one or more visualization tools is a plus

Big Data & Cloud Senior Architect

Brid Tech Solutions Private Limited

Big Data Cloud Architect Senior Cloud Cloud Engineer Work From Home Walk in
Job Description: The Big Data/Analytics Solution Architect is responsible for understanding emerging and evolving end-user usage models and requirements in Big Data and Analytics, documenting those usage models and the business, technical and user requirements, designing a solution architecture to meet those requirements, and specifying an implementation hardware and software solution stack. Solution architects document the solution architectures and requirements and, when needed, define end-user proofs of concept to test the architectures, usage models and corresponding Intel technologies in testbed or real end-user environments. They also work with end users and ecosystem partners to deploy those solutions in early-adopter production environments.
A strong candidate will have:
  • Experience in requirements engineering, solution architecture, design, development and deployment
  • A broad set of technical skills and knowledge across hardware, software, systems and solutions development, spanning more than one technical domain
  • Demonstrated experience in real-world IT or other solution environments, including creating (on your own or with a team) a product or IT solution in the area of Big Data/Analytics
  • Strong communication skills, including representing your company in industry standards organizations or industry technical forums or events in cloud security
  • Strong technical team leadership, mentorship and collaboration
  • Ability to develop technical relationships with end users, ISVs, OEMs, Intel platform architects and proof-of-concept engineers

Analytics Engineer

Jaden Executive Search

  • 2 - 4 yrs
  • 25.0 Lac/Yr
  • Bangalore
Pyspark SQL CI CD Big Data Visualization Tools Analytics Engineer
The ideal candidate will use their passion for big data and analytics to provide insights into the business across a range of topics. They will be responsible for developing an in-house ETL-driven toolchain and conducting both recurring and on-demand analyses for business users, researchers, and customers.
Responsibilities:
  • Build optimized Spark-based data processing jobs to generate analytical models, and deploy them into Airflow
  • Build analytical models and data models aligned with product strategy
  • Understand the day-to-day issues the business faces by communicating closely with stakeholders across the board
  • Build data pipelines to facilitate quality checks on datasets across the board, ensuring a seamless flow of high-quality data into the platform
  • Develop a diverse range of visualizations to convey complicated data in a straightforward fashion, for both internal and external audiences
Qualifications:
  • Bachelor's or Master's degree
  • 2-4 years of experience in the analytics/engineering domain
  • Proficient in PySpark, SQL (Google BigQuery, etc.), and CI/CD-driven deployment
  • Proficient in big-data-related toolkits, Kubernetes and Docker
  • Experience working with the Airflow orchestration engine
  • Redash, Tableau, Power BI or equivalent visualization tools
  • Problem-solving skills
  • Strong communication/interpersonal skills
  • 2 - 7 yrs
  • Bangalore
Big Data AWS Data Engineer Python
Data Engineer - Python
Location: Bangalore
Experience: 2 to 7 years
Job Description:
  • Understand client requirements and architect a robust data platform on AWS cloud systems
  • Create reusable and scalable data pipelines for development and deployment of new data platforms
  • Own the use of AWS cloud data services to develop Big Data platforms with streaming & batch data ingestion services
  • Create reusable components for rapid development of the data platform, and deploy AI algorithms into the data platform to run predictive analytics at scale
  • Provide essential support to the application team responsible for the product user journey
  • Create and own the technical product backlogs for products; help the team close the backlogs on time, with high quality
  • Extensive hands-on experience in Python is mandatory
  • Good command of AWS/NLP is good to have
  • Ability to produce high-quality code that allows us to put solutions into production with minimum defects
  • Working knowledge of GitHub, CI/CD implementations and running robust unit tests is mandatory for this role
  • Understanding of Elasticsearch & Python will be an added advantage
Big Data React JS Python AWS C++ Angular Spark Programming ETL SQL Work From Home
**Preference will be given to candidates who can join on or before 1st October 2022**
You will:
  • Write excellent production code and tests, and help others improve in code reviews
  • Analyze high-level requirements to design, document, estimate, and build systems
  • Coordinate across teams to identify, resolve, mitigate and prevent technical issues
  • Coach and mentor engineers within the team to develop their skills and abilities
  • Continuously improve the team's practices in code quality, reliability, performance, testing, automation, logging, monitoring, alerting, and build processes
You have:
For (Full Stack):
  • 2-10 years of experience
  • Strong in DS & algorithms
  • Hands-on experience in the programming languages: JavaScript (React or Angular), Python, SQL
  • Experience with AWS
For (Backend):
  • 2-10 years of experience
  • Hands-on product development experience using Java/C++/Python
  • Experience with AWS, SQL, Git
  • Strong in data structures and algorithms
Additional nice-to-have skills/certifications:
  • For Java skill set: Mockito, Grizzly, Netty, Vert.x, Jersey/JAX-RS, Swagger/OpenAPI, Nginx, Protocol Buffers, Thrift, Aerospike, Redis, Kinesis, sed, awk, Perl
  • For Python skill set: data engineering experience, Athena, Lambda, EMR, Spark, Glue, Step Functions, Hadoop, Kinesis, ORC, Parquet, Perl, awk, Redshift
For (Data Engineering):
  • 2-10 years of experience
  • Experience with object-oriented/object-function scripting languages: Python
  • Experience with AWS cloud services: EC2, RDS, Redshift, S3, Athena, Glue
  • Proficient in Git, Jenkins, CI/CD (Continuous Integration/Continuous Deployment)
  • Experience in big data technologies like Hadoop, MapReduce, Spark, etc.
  • Experience with Amazon Web Services and Docker
For (Geo Team):
  • 4-10 years of experience
  • Experience with big data technologies like Hadoop, Spark, MapReduce, Kafka, etc.
  • Experience using object-oriented languages (Java, Python)
  • Experience working with different AWS technologies
  • Experience in software

Data Engineer

Maxdata Solutions

Big Data Spark SCALA Impala HBase Kafka MongoDB PostgreSQL Rabbitmq Sqoop
Currently we are hiring a Data Engineer.
Job Location: Mumbai / Bangalore / Noida
  • Hands-on experience with programming languages: Python, Java, Scala
  • Passionate and knowledgeable about big data stacks
  • Distributed systems: Spark (PySpark), Hadoop, Presto, Hive, etc.
  • Message queueing systems (Kafka, RabbitMQ, NSQ, etc.) are good to have
  • Databases (relational & NoSQL): PostgreSQL, MySQL, MongoDB, etc.
  • Experience gathering and analyzing system requirements
  • In-depth understanding of database structure principles, data warehousing, data mining concepts, and segmentation techniques
  • Experience with cloud computing platforms (AWS, GCP, etc.) and UNIX environments
  • Experience with AWS services, e.g. EMR, Lambda, Step Functions, S3, Redshift, etc., is a plus
  • Experience designing, implementing, and monitoring big data analytics solutions
  • Fast learning capability and natural curiosity about big data
  • DevOps/DataOps skills are plus points
  • Background: field of study is Computer Science (preferred) or any other graduation degree
If you are interested, please share your updated resume with Prakash Rathod.
C++ Developer Windows Server Multithreading Boost STL Data Structures Socket Programming TCP Big Data Debugging Compiler Templates CPP Agile Software Engineer
Designation: Senior Software Engineer (C++)
Experience: 3-8 years
Location: Bangalore, Trivandrum, Cochin
Notice Period: Immediate to 30 days
Required Skills (Technical Competency):
  • Creates high-quality working software using C++ in the Windows environment
  • Has experience in Agile software development methodologies; actively participates in sprint planning, daily stand-up meetings, sprint reviews, sprint retrospectives and backlog refinement
  • Strong logical and analytical skills
  • Contributes to continuous improvement of the team, software and processes
  • Realization of high-level architecture and design with strong development skills
  • Designs, codes, documents, tests (automated), maintains and deploys software
  • Sets, monitors and meets own performance metrics; defines, monitors and meets performance and quality metrics
  • Provides technical solutions that conform to requirements with a strong focus on end users, high quality (QMS/regulatory standards), performance, safety and security
  • Keeps abreast of technical knowledge by studying and implementing state-of-the-art programming techniques and development tools, participating in educational opportunities and communities of practice, reading professional publications and maintaining personal networks
  • Participates in the full process, working in pairing mode with peers
  • Challenges requirements, design and quality, focused on technical leadership
  • Leads the creation of the software design

DATA ENGINEER (Informatica BDM)

KGP Manpower Consulting Pvt Ltd

Informatica Big Data Management SAS-Statistical Analysis System ETL Hadoop DATA ENGINEER Azure JSON XML Scala Spark Github DevOps Data Migration
DATA ENGINEER (Informatica BDM)
Job Location: Dubai & offshore (Chennai, Hyderabad, Bangalore)
Experience: 5+ years
Notice: 30 days or less
Max CTC: 15K AED per month / 18 LPA for offshore
Job Description:
  • Strong expertise in Extraction, Transformation and Loading (ETL) using Informatica Big Data Management 10.2.x, and in the various pushdown modes using the Spark, Blaze and Hive execution engines
  • Strong expertise in dynamic mapping use cases, development and deployment mechanisms using Informatica Big Data Management 10.2.x
  • Experience transforming and loading various complex data source types, such as unstructured and NoSQL data sources
  • Strong expertise in the Hive database, including Hive DDL, partitions and Hive Query Language
  • Good understanding of the Hadoop ecosystem (HDFS, Spark, Hive)
  • Strong expertise in SQL/PLSQL
  • Good knowledge of working with Oracle/Sybase/SQL databases
  • Good knowledge of data lake and dimensional data modelling implementation
  • Able to understand requirements and write the Functional Specification Document, Design Document and mapping specifications

Python Developer

Kalki Placements & Industrial Technology Solutions

  • 3 - 9 yrs
  • 12.0 Lac/Yr
  • Bangalore
Python Machine Learning Big Data DevOps Engineer MySQL Django REST Framework MongoDB PostgreSQL Web Developer AWS JAVA
Mandatory Skills:
  • Overall 5 years of IT experience with an engineering degree
  • Excellent in data science and machine-learning concepts, dealing with decoding data and video frames and performing data analysis (3+ years)
  • Programming experience in languages like Python and Java
  • Demonstrated success learning new technologies quickly
Desired Skills:
  • Knowledge of various AWS services like S3, CloudWatch, CloudTrail, Lambda, etc.
  • Debugging skills in Java; experience with DevOps tools and CI/CD pipelines
  • Japanese language JLPT N2/N3 certification