
SCALA Job Vacancies in Bangalore

  • 6 - 8 yrs
  • 22.5 Lac/Yr
  • Bangalore
Spark + Scala Scala SQL Snowflake Developer Pyspark Developer
We're Hiring: Spark + Scala Data Engineers. Are you a skilled data engineer with a strong foundation in Scala and Apache Spark? We're on the lookout for passionate professionals ready to make an impact in large-scale data projects.
Role Overview:
  • Experience: 6+ years total, 3+ years relevant in Spark
  • Tech stack: Scala, Spark/PySpark, Hadoop ecosystem, SQL
  • Hands-on experience in at least 2 Spark-based projects
  • Ability to write clean, testable Scala code
  • Exposure to Agile, CI/CD, Git, and orchestration tools
Preferred Background: Candidates from companies like TCS, Wipro, IBM, Barclays, Thoughtworks, Xebia, Publicis Sapient are highly encouraged to apply!
What We're Looking For:
  • Strong data processing skills using Scala + Spark
  • Advanced SQL and performance tuning knowledge
  • Understanding of data warehouse concepts and big data architecture
  • Great problem-solving mindset and communication skills
Location & Compensation: Will be discussed during the screening process.
If you're interested or know someone who fits the bill, feel free to DM me or drop your CV in the comments or inbox.
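As a rough illustration of the "clean, testable Scala code" such listings ask for (the `Event` record and aggregation below are invented for the example), core transformation logic can be kept as pure functions over case classes, so it is unit-testable without a Spark cluster:

```scala
// Hypothetical record type; in a real pipeline this would mirror the input schema.
case class Event(userId: String, amount: Double)

object EventAggregator {
  // Total spend per user. The same grouping logic ports to a Spark
  // Dataset via groupByKey/mapGroups, but here it runs on plain collections.
  def totalsByUser(events: Seq[Event]): Map[String, Double] =
    events.groupBy(_.userId).view.mapValues(_.map(_.amount).sum).toMap
}
```

Keeping business logic out of Spark-specific classes like this is one common way teams make Spark jobs testable in ordinary unit tests.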
Azure Databricks Azure Datafactory SQL Scala
Role & responsibilities:
  • Provide business support for production systems, including analysis and troubleshooting of problems; interact with internal/external team members and business users throughout the application life cycle.
  • Contribute and collaborate on the design, implementation, and maintenance of reliable, scalable applications/systems.
  • Work with geographically diverse delivery groups (BAs, QAs, and developers) and DevOps teams to deliver quality solutions.
  • Research and develop with innovative technologies; strive for excellence and continuous improvement in software architecture, Agile methods, and system building.
  • Adhere to Agile methodology and build DevOps maturity, ensuring delivery of incremental business change safely and reliably.
  • Actively contribute to building DevOps maturity, incrementally and measurably improving delivery velocity.
Qualifications (and preferred candidate profile):
  • 7+ years of overall software engineering experience in solution design and implementation.
  • 5+ years of software engineering experience in solution design and implementation using the Scala software development language.
  • Proven expertise and hands-on experience with Apache Spark/Databricks and Azure Data Factory.
  • Expert in data analysis with SQL Server/PostgreSQL and strong database query skills.

Big Data Engineer (Spark and Scala)

E2E Infoware Management Services

Scala Spark Pyspark
Role: Big Data Developer - Scala Spark
Experience: 5+ yrs
Mode of work: WFO, all 5 days
Location: Chennai/Bangalore/Pune
Interview: any one level F2F
Job Description:
  • Total IT/development experience of 3+ years
  • Experience in Spark (Scala-Spark), developing big data applications on Hadoop, Hive and/or Kafka, HBase, MongoDB
  • Deep knowledge of Scala-Spark libraries to develop and debug complex data engineering challenges
  • Experience in developing sustainable data-driven solutions with current new-generation data technologies to drive our business and technology strategies
  • Exposure to deploying on cloud platforms
  • At least 2 years of development experience designing and developing data pipelines for data ingestion or transformation using Spark-Scala
  • At least 2 years of development experience with the following big data frameworks: file formats (Parquet, Avro, ORC), resource management, distributed processing, and RDBMS
  • At least 2 years of experience developing applications in Agile with monitoring, build tools, version control, unit testing, Unix shell scripting, TDD, CI/CD, and change management to support DevOps
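Spark's Dataset/RDD API deliberately mirrors the standard Scala collections, so the ingest-filter-aggregate shape these pipeline roles describe can be sketched (and tested) with plain collections; porting to a Spark `Dataset` is largely mechanical. The word-count example below is illustrative only:

```scala
object PipelineSketch {
  // A word-count style transform using plain Scala collections;
  // each step's Spark counterpart is noted in the comment.
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split("\\s+"))      // tokenize  (RDD.flatMap)
      .filter(_.nonEmpty)            // cleanse   (RDD.filter)
      .groupBy(identity)             // shuffle   (cf. reduceByKey)
      .view.mapValues(_.size).toMap  // aggregate
}
```

On a cluster the same chain would run over partitioned data, with the `groupBy` step becoming a shuffle; the logic itself is unchanged.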

Big Data Analytics

Creative Consultant & Contractor

  • 3 - 7 yrs
  • 9.0 Lac/Yr
  • Bangalore
Hadoop Developer SQL Server Developer Python Developer SCALA Data Warehouse Developer Data Scientist Data Analyst
Hi, we have a job opening for the post of Big Data Analytics/Developer in Bangalore, Karnataka. Candidates with a minimum of 3+ years of experience in the same field who are ready to join immediately can apply. The company will offer a good salary and other benefits.



  • 10 - 18 yrs
  • 30.0 Lac/Yr
  • Bangalore
Azure Administrator SCALA Data Engineer
Looking for a Senior Data Engineer in Bangalore to join a team of rockstar developers. The candidate should have a minimum of 10+ yrs of experience. There are multiple openings. If you're looking for career growth and a chance to work with the top 0.1% of developers in the industry, this one is for you! You will report to IITians/BITS grads with 10+ years of development experience and work with F500 companies (our customers).
Technical Skills:
  • Language: Scala (5+ yrs)
  • Frameworks: Spark
  • Autonomy, self-motivation, and pro-activity
  • Accountability, commitment to deliver durable work of quality, ready to embrace challenges
  • Team player with a multicultural approach, eager to learn from others and share knowledge
  • Knowledge of Azure services (Databricks, Azure Functions) or other SaaS solutions (Snowflake)
  • Familiar with Git, Airflow
  • Scaled Agile Framework (SAFe)
  • Analytical and conceptual thinking, information gathering, and a DevOps mindset are necessary
Key skills: Scala, Spark, Azure
Main accountabilities:
  • You will analyze user needs and design solutions thoroughly.
  • You will code, test, debug, document, and maintain software solutions.
  • You will propose optimizations to improve performance or resource efficiency.
  • You will work on data acquisition, data modeling, cleansing, aggregation, and treatment.
  • You will define/enhance data models and implement the right data marts and partition keys so that we can answer analytics requirements efficiently.
  • You may also benchmark technologies or cloud services and recommend technologies and design best practices, so that the best technology or design is eventually implemented.
  • You are curious, like discovering new technologies and new functional areas, and are eager to understand the functional aspects of your applications.
Location: Bangalore (Work from Office).
Here is what we have on offer for you:
  • Free healthcare
  • Laser-sharp focus on upskilling our employees
  • Diverse & inclusive teams
  • Industry-par compensation & benefits
  • Great wo

Data Engineer

krtrimaiq cognitive solution

  • 4 - 9 yrs
  • Bangalore
SCALA Python SQL
Job Summary: We are seeking an experienced Scala Developer with a strong background in Kafka and big data technologies. The ideal candidate will have extensive experience in designing and implementing scalable data solutions, with a focus on performance and reliability. You will work closely with our data engineering and AI teams to build and maintain high-performance data pipelines and applications.
Key Responsibilities:
  • Develop and maintain scalable applications using Scala and related technologies.
  • Design, implement, and manage data pipelines utilizing Kafka and other big data tools.
  • Collaborate with cross-functional teams to define, design, and ship new features.
  • Optimize application performance, ensuring high throughput and low latency.
  • Monitor and troubleshoot data processing workflows, ensuring data integrity and reliability.
  • Participate in code reviews, provide feedback, and improve code quality.
  • Stay up to date with the latest trends and best practices in big data and functional programming.
Required Skills:
  • Strong proficiency in Scala, with at least 4+ years of hands-on experience.
  • Extensive experience with Kafka: stream processing, Kafka Connect, Kafka Streams, etc.
  • Solid understanding of big data technologies, including Hadoop, Spark, HDFS, and Hive.
  • Proficiency in SQL and NoSQL databases.
  • Experience with ETL pipelines and data integration workflows.
  • Familiarity with data warehousing concepts and cloud platforms (AWS, GCP, Azure).
  • Knowledge of containerization (Docker) and orchestration tools (Kubernetes) is a plus.
  • Excellent problem-solving skills and a proactive attitude.
Qualifications:
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 4+ years of experience in Scala development, with a focus on big data and Kafka.
  • Proven track record of building and managing scalable, reliable data systems.
  • Strong communication skills and ability to work in a collaborative environment.
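The stream-processing work these Kafka roles describe boils down to folding state over an ordered sequence of keyed records; Kafka Streams' `aggregate()` does the same thing over a changelog-backed store. A dependency-free sketch of that core idea (record types invented for illustration):

```scala
object StreamFold {
  // Running count per key over a stream of (key, value) records,
  // modeled as an Iterator to mimic one-pass consumption.
  def countByKey(records: Iterator[(String, String)]): Map[String, Long] =
    records.foldLeft(Map.empty[String, Long].withDefaultValue(0L)) {
      case (state, (key, _)) => state.updated(key, state(key) + 1)
    }
}
```

In a real Kafka Streams topology the state map would live in a fault-tolerant state store and updates would be emitted downstream as a changelog, but the per-record update logic has this same fold shape.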
Rest API MySQL SQL Database Administrator AWS Cloud Engineer Agile Methodology Testing & Commissioning Engineer
We are urgently hiring a Scala Developer for a company located in Mohali, with Work from Home / Hybrid (for Mohali) / Work from Office options.
Company Overview: We are a highly talented and experienced team of IT solution providers based in the UK, with offshore offices in India. The company exists to meet the growing demand for IT services that are bespoke to individual client challenges. Over the past decade, we have built a strong track record of success by delivering more than 500 projects to clients around the world.
Job Overview: Scala Developer role with 4-6 years of experience. Employment type: Full-Time, Remote, Hybrid, or Work from Office.
Qualifications and Skills:
  • 4-6 years of experience as a Scala Developer
  • Proficiency in Scala programming and related frameworks like Akka
  • Experience with REST APIs and Agile methodologies
  • Strong knowledge of testing/unit testing tools like JUnit/Mockito
  • Familiarity with cloud-based environments (AWS, Azure)
  • Hands-on experience with Kafka and Elasticsearch
Roles and Responsibilities:
  • Strong Scala programming skills and experience with Scala frameworks such as Akka is a must.
  • Efficient in designing and developing REST APIs.
  • Experience with SQL and NoSQL databases.
  • Proficiency in software design patterns and principles.
  • Experience with version control tools such as Git.
  • Good understanding of Agile methodologies and a collaborative mindset.
  • Excellent problem-solving and analytical skills.
  • Experience in unit testing (e.g. JUnit/Mockito).
  • Experience in cloud-based environments such as AWS or Azure.
  • Experience with Kafka and Elasticsearch.

SR Back End Engineer (5-7 Years)

Brid Tech Solutions Private Limited

Java Spring Spring Boot Developer Python SCALA NodeJS Golang Developer
About the role: You will spend time ensuring the products have the best technical design and architecture; you will be supported by peers and team members in creating best-in-class technical solutions. Identify technical challenges proactively and provide effective solutions to overcome them, ensuring the successful implementation of features and functionality. Quickly respond to business needs and client-facing teams' demands for features, enhancements, and bug fixes. Work with AI tech and AI leaders in shaping and scaling software products and hosting manufacturing-focused AI and ML software products.
Required Skills & Experience: You should have 5-8 years of experience, with deep expertise in backend technologies.
  • Must have: Expert in coding for business logic, server scripts, and application programming interfaces (APIs).
  • Must have: Expertise in multiple programming languages (Java Spring, Spring Boot, Python, Scala, NodeJS, Golang, etc.); HTML, CSS, JavaScript (required to build APIs).
  • Excellent at writing optimal SQL queries for backend databases; CRUD operations on databases from applications.
  • Exposure to relational databases (MySQL, Postgres) and non-relational databases (MongoDB, graph-based databases, HBase, cloud-native big data stores); willing to learn and ramp up on multiple database technologies.
  • Must have experience with at least one public cloud platform (GCP/Azure/AWS; GCP preferred).
  • Good to have: Basic knowledge of advanced analytics / machine learning / artificial intelligence (will collaborate with ML engineers to build the backend of AI-enabled apps).
  • Good to have: Product development experience.

Looking For Scala Developer - Work From Home

JOB24by7 Recruitment Consultancy Services

SCALA Agile Development Azure Administrator NoSQL Programming Lecturer MySQL Apache AWS Developer Rest API
Profile: Scala Developer
Required Experience: 2+ years
Required Skills:
  • Strong Scala programming skills and experience with Scala frameworks such as Akka.
  • Efficient in designing and developing REST APIs.
  • Experience with SQL and NoSQL databases.
  • Proficiency in software design patterns and principles.
  • Experience with version control tools such as Git.
  • Good understanding of Agile methodologies and a collaborative mindset.
  • Excellent problem-solving and analytical skills.
  • Experience in unit testing (e.g. JUnit/Mockito).
  • Experience in cloud-based environments such as AWS or Azure.
  • Experience with Kafka and Elasticsearch.

Data Engineer

Beyond Human Resource

  • 4 - 7 yrs
  • 25.0 Lac/Yr
  • Bangalore
Data Warehousing SCALA Python SQL
Mandatory Skills:
  • Apache Spark and either PySpark or Scala: extensive hands-on experience with Spark for large-scale data processing and analysis; proficiency in either PySpark or Scala for developing Spark applications.
  • Databricks: strong expertise in using Databricks for big data analytics, data engineering, and collaborative work on Apache Spark.
  • GitHub: proficient in version control using Git and GitHub for managing and tracking changes in the codebase.
  • Data Warehousing (DWH): experience with one or more of the following DWH technologies: Snowflake, Presto, Hive, or Hadoop; ability to design, implement, and optimize data warehouses.
  • Python: advanced programming skills in Python for data manipulation, analysis, and scripting tasks.
  • SQL: strong proficiency in SQL for querying, analyzing, and manipulating large datasets in relational databases.
  • Data streaming or data batch: in-depth knowledge and hands-on experience in both data streaming and batch processing methodologies.
Good to Have:
  • Kafka: familiarity with Apache Kafka for building real-time data pipelines and streaming applications.
  • Jenkins: experience with Jenkins for continuous integration and continuous delivery (CI/CD) in the data engineering workflow.
Responsibilities:
  • Design, develop, and maintain scalable and efficient data engineering solutions using Apache Spark and related technologies.
  • Collaborate with cross-functional teams to understand data requirements, design data models, and implement data processing pipelines.
  • Utilize Databricks for collaborative development, debugging, and optimization of Spark applications.
  • Work with various data warehousing technologies such as Snowflake, Presto, Hive, or Hadoop to build robust and high-performance data storage solutions.
  • Develop and optimize SQL queries for efficient data retrieval and transformation.
  • Implement both batch and streaming data processing solutions to meet business requirements.
  • Collaborate with other teams to integrate data engineering
  • 5 - 11 yrs
  • Bangalore
AWS MySQL Java Developer SCALA Hadoop
Experience: 6+ years
Who should apply for this role?
  • Good experience with Aurora MySQL on AWS (Oracle not preferred).
  • Proficient in managing databases.
  • Good hands-on experience with version upgrades and handling their impact on applications; experience with data migration.
  • Minimal experience with Java/Scala.
  • Strong hands-on experience writing complex SQL queries and SQL procedures, and performance tuning.
  • Constructing infrastructure for efficient ETL processes from various sources and storage systems.
  • Continuously exploring opportunities to enhance data quality and reliability.
  • Applying strong programming and problem-solving skills.
  • Experience with RDBMS and OLAP databases like MySQL and Redshift.
  • Experience in big data technologies and AWS.
  • Immediate joiner.
Good to Have:
  • Willingness to acquire new skills and knowledge.
  • A product/engineering mindset to drive impactful data solutions.
  • Experience working in distributed environments with global teams.
CTC: As per industry standards, negotiable.
Appreciate your time invested in us. Thank you!

Data Engineer

Bb Works India

  • 9 - 15 yrs
  • 40.0 Lac/Yr
  • Bangalore +1 Noida
Data Warehousing ETL Python AWS SCALA Data Engineer
We have 5 vacant Data Engineer jobs in Bangalore and Noida. Experience required: 9 years. Educational qualification: other bachelor's degree. Skills: Data Warehousing, ETL, Python, AWS, Scala, Data Engineer.

Azure Data Engineer

Epik Solutions

Python SQL Spark SCALA Data Bricks Azure Data
Job Description: As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for performing the following key responsibilities:
  • Designing and developing data pipelines: You will design and implement scalable and efficient data pipelines using Azure Databricks, PySpark, and Scala. This includes data ingestion, data transformation, and data loading processes.
  • Data modeling and database design: You will design and implement data models to support efficient data storage, retrieval, and analysis. This may involve working with relational databases, data lakes, or other storage solutions on the Azure platform.
  • Data integration and orchestration: You will leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various data sources and targets. This includes scheduling and monitoring data pipelines.
  • Data quality and governance: You will implement data quality checks, validation rules, and data governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
  • Performance optimization: You will optimize data pipelines and queries to improve overall system performance and reduce processing time. This may involve tuning SQL queries, optimizing data transformation logic, and leveraging caching techniques.
  • Monitoring and troubleshooting: You will monitor data pipelines, identify performance bottlenecks, and troubleshoot issues related to data ingestion, processing, and transformation. You will work closely with cross-functional teams to resolve data-related
  • 6 - 12 yrs
  • 30.0 Lac/Yr
  • Bangalore
PySpark Scala-Spark Hive Hadoop CLI MapReduce Storm Kafka Lambda Javascript HTML CSS CI CD Docker Kuberne Data Engineer
  • Full-stack development background with Java and JavaScript/CSS/HTML; knowledge of ReactJS/Angular is a plus
  • Big data engineer with a solid background in the larger Hadoop ecosystem and real-time analytics tools, including PySpark/Scala-Spark/Hive/Hadoop CLI/MapReduce/Storm/Kafka/Lambda architecture
  • Comfortable using the larger Hadoop ecosystem
  • Familiar with job scheduling challenges in Hadoop
  • Experienced in creating and submitting Spark jobs
  • Experienced with Kafka/Storm and real-time analytics
  • Core Java and Python/Scala background, with their related libraries and frameworks
  • Experienced with Spring Framework and Spring Boot
  • Unix/Linux expertise; comfortable with the Linux operating system and shell scripting
  • PL/SQL and RDBMS background with Oracle/MySQL; familiarity with ORMs a plus
  • Design, development, configuration, unit and integration testing of web applications to meet business process and application requirements
  • Familiar with config management/automation tools such as Ansible/Chef/Puppet
  • Comfortable with microservices, CI/CD, Docker, and Kubernetes
  • Familiarity with AT&T's ECO platform is a plus
  • Comfortable tweaking/using Jenkins and deployment orchestration
  • Creating/modifying Docker images and deploying them via Kubernetes

Spark Scala Developer

Hirehut Technologies

Spring SCALA Spark Data Processing Fault Tolerance Scalability Array String Tuple Set List Map Walk in
Must-Have:
1. Must have 5+ years of IT experience
2. Must have good experience in Spark and Scala
3. Good to have experience in streaming systems like Spark Streaming and Storm
4. Experience with Spark data processing, performance tuning, memory management, fault tolerance, and scalability
5. Good knowledge of Hive, Sqoop, Spark, data warehousing, and information management best practices
6. Expertise in big data infrastructure, distributed systems, data modelling, query processing and relational
7. Experience with Scala: object-oriented programming concepts (singleton and companion objects, classes, case classes, file handling, and multithreading), collections (Array, String, Tuple, Set, List, Map), pattern matching
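For reference, the Scala-language items this kind of listing probes for — case classes, companion objects, pattern matching, and the standard collections — fit in a few lines (the `Order` type below is invented for the example):

```scala
// Case class: immutable data with structural equality and pattern support.
case class Order(id: Int, items: List[String])

// Companion object: factory methods and constants living beside the class.
object Order {
  val Empty: Order = Order(0, Nil)
  def single(id: Int, item: String): Order = Order(id, List(item))
}

// Pattern matching over both the case class and the List inside it.
def describe(o: Order): String = o match {
  case Order(_, Nil)         => "empty"
  case Order(_, head :: Nil) => s"one item: $head"
  case Order(_, items)       => s"${items.size} items"
}
```

Interviewers for such roles often ask candidates to walk through exactly this kind of match expression, including the `::` (cons) extractor on lists.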
  • 3 - 8 yrs
  • Bangalore
Writing Code in Scala Data Messaging Stream Processing Tools Big Data Frameworks Big Data Developer
Roles & Responsibilities:
  • Contribute to the development of data infrastructure(s) capable of ingesting and storing petabytes of data and serving thousands of queries a day within seconds on that data.
  • You will build fault-tolerant, self-healing, adaptive, and highly accurate data and event computational pipelines.
  • You will be responsible for the continued development and enhancement of our proprietary AI reference architecture.
  • Prepare and capture data for machine learning and automation.
  • Lead a team of software engineers and collaborate with data scientists to innovate and deliver automated AI systems.
  • You will work with the Product Management team to understand the requirements for next-generation modules and applications.
  • Collaborate with the Product Marketing team to deliver client dashboards and user interfaces.
  • Collaborate with the VP of Engineering to grow and mentor a team of software developers, data scientists, and full-stack engineers.
Strong Knowledge of Skills:
  • Hold a degree in Computer Science from an accredited university.
  • Prepare and capture data for machine learning and automation.
  • Strong experience with tools commonly used in data messaging / stream processing for real-time analytics.
  • Superior skills in writing code in Scala for 3+ years in a product development environment.
  • Proven leadership in the development of innovative software for multithreaded applications, with specific expertise in concurrency, parallelism, and locking strategies.
  • A proven history of building big data solutions handling TBs or PBs of data.
  • A passion for working with huge data sets.
  • Solid experience with big data frameworks.
  • Knowledge of OOP design and patterns.
Key Attributes:
  • Strong data analysis and problem-solving skills, including the ability to learn and discuss domain-specific knowledge to understand a project and deliver results.
  • Exceptional communication and presentation skills to lead project teams and communicate technical content and analytical insights/complex find
Spark Py-spark AWS S3 EMR Redshift Scala Work From Home
Job Description:
  • 3+ years of Spark experience
  • 3+ years of Scala or PySpark hands-on experience (must have)
  • 3+ years of AWS; familiar with S3, EMR, Redshift

Scala Developer

Sight Spectrum

Scala Algorithms Data Structures Scala Developer Big Data Hive Java Software Development Work From Home
Our Requirements:
  • Professional experience as a Scala developer
  • Knowledge of Akka, event-driven systems, and functional programming
  • Experience working with large codebases
  • Good spoken and written English communication skills; ability to express ideas clearly
  • Experience in, or strong interest in, the financial industry
  • Experience building scalable, distributed applications in Scala and Java
  • Strong understanding of algorithms and data structures
  • Experience developing software in an agile environment
  • Interest in the latest programming trends, such as functional and reactive programming
  • Knowledge of relational and non-relational database systems
  • Experience implementing APIs for integration with internal and external systems
  • Strong problem-solving skills and the ability to learn in a fast-paced environment
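The "algorithms and data structures" fluency such listings mention is usually tested in idiomatic, functional Scala; a tail-recursive binary search (a sketch, not tied to any particular interview) shows the typical style:

```scala
import scala.annotation.tailrec

object Search {
  // Tail-recursive binary search over a sorted Vector; returns the
  // index of target if present, None otherwise. @tailrec makes the
  // compiler verify the recursion compiles to a loop.
  def binarySearch(xs: Vector[Int], target: Int): Option[Int] = {
    @tailrec
    def loop(lo: Int, hi: Int): Option[Int] =
      if (lo > hi) None
      else {
        val mid = lo + (hi - lo) / 2  // avoids overflow of (lo + hi)
        if (xs(mid) == target) Some(mid)
        else if (xs(mid) < target) loop(mid + 1, hi)
        else loop(lo, mid - 1)
      }
    loop(0, xs.length - 1)
  }
}
```

`Vector` is chosen over `List` because binary search needs effectively constant-time indexed access.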

BigData Engineer

Maxdata Solutions

Bigdata Spark SCALA Pyspark Hive HBase
Currently we are hiring a Data Engineer. Job location: Mumbai / Bangalore / Noida.
  • Hands-on experience with programming languages: Python, Java, Scala
  • Passionate and knowledgeable about big data stacks:
  • Distributed systems: Spark (PySpark), Hadoop, Presto, Hive, etc.
  • Message queueing systems: Kafka, RabbitMQ, NSQ, etc. are good to have
  • Databases (relational & NoSQL): PostgreSQL, MySQL, MongoDB, etc.
  • Experience gathering and analyzing system requirements
  • In-depth understanding of database structure principles, data warehousing, data mining concepts, and segmentation techniques
  • Experience with cloud computing platforms (AWS, GCP, etc.) and UNIX environments
  • Experience with AWS services (e.g. EMR, Lambda, Step Functions, S3, Redshift) is a plus
  • Experience in designing, implementing, and monitoring big data analytics solutions
  • Fast learning capability and natural curiosity about big data
  • DevOps/DataOps skills are plus points
  • Background: field of study is Computer Science (preferred) or any other graduation degree
If you are interested, please share your updated resume with Prakash Rathod.
  • 1 - 3 yrs
  • Bangalore
Core Java Hibernate J2EE JSF-Java Server Faces Spring Struts Android Advanced Java Servlets SCALA Spark Python
Roles and Responsibilities:
  • Write Java/Scala + Spark code that accurately reflects the requirements and design documents.
  • Write unit tests that exercise all major logic components of the code.
  • Deploy, maintain, and performance-tune all models.
  • He/she should be capable of understanding and solving complex problems and have solid communication skills.
Desired Candidate Profile:
  • 0-3 years of appropriate technical experience
  • Strong proficiency with Core Java and Scala on Spark, or Python
  • Experienced engineer with hands-on and good coding skills, preferably with Java, Scala, or Python
  • The candidate should have at least 2 to 3 years of experience in coding.
  • Excellent interpersonal skills and a professional approach
  • Ability to work in a dynamic, fast-moving, and growing environment
  • Determined and willing to learn new technologies
  • Immediate joiners are preferred

Data Engineer

Maxdata Solutions

Big Data Spark SCALA Impala HBase Kafka MongoDB PostgreSQL Rabbitmq Sqoop
Currently we are hiring a Data Engineer. Job location: Mumbai / Bangalore / Noida.
  • Hands-on experience with programming languages: Python, Java, Scala
  • Passionate and knowledgeable about big data stacks:
  • Distributed systems: Spark (PySpark), Hadoop, Presto, Hive, etc.
  • Message queueing systems: Kafka, RabbitMQ, NSQ, etc. are good to have
  • Databases (relational & NoSQL): PostgreSQL, MySQL, MongoDB, etc.
  • Experience gathering and analyzing system requirements
  • In-depth understanding of database structure principles, data warehousing, data mining concepts, and segmentation techniques
  • Experience with cloud computing platforms (AWS, GCP, etc.) and UNIX environments
  • Experience with AWS services (e.g. EMR, Lambda, Step Functions, S3, Redshift) is a plus
  • Experience in designing, implementing, and monitoring big data analytics solutions
  • Fast learning capability and natural curiosity about big data
  • DevOps/DataOps skills are plus points
  • Background: field of study is Computer Science (preferred) or any other graduation degree
If you are interested, please share your updated resume with Prakash Rathod.

Snowflake Developer

Hirehut Technologies

Snowflake DW Architecture and Design Python Scala Walk in
Must-Have:
1. Must have 5+ years of IT experience, with relevant experience of at least 2 years in Snowflake
2. In-depth understanding of data warehousing, ETL concepts, and modeling structure principles
3. Experience working with Snowflake functions; hands-on experience with Snowflake utilities, stage and file upload features, time travel, fail-safe, procedure writing, tasks, Snowpipe, SnowSQL
4. Knowledge of Snowflake architecture
5. Good knowledge of RDBMS topics; ability to write complex SQL and PL/SQL
6. Expertise in engineering platform components such as data pipelines, data orchestration, data quality, data governance, and analytics
7. Hands-on experience implementing large-scale data intelligence solutions around a Snowflake DW
8. Experience in a scripting language such as Python or Scala is a must
9. Good experience with streaming services such as Kafka
10. Experience working with semi-structured data
AWS Databricks Python Spark SCALA Azure Developer
Desired Role:
  • Solid knowledge of data warehouse and big data frameworks, preferably Databricks.
  • Experience in large-scale cloud DWH migration; define and administer standard architecture methodologies, processes, tools, and best practices across the engagement.
  • Experience working in a cloud-based AWS tech stack that includes Databricks, Spark, Airflow, Python, and Scala; designing big data infrastructure to run large-scale, complex data pipelines that collect, organize, and standardize data.
  • Lead a team of data engineers and design engineers.
Mandatory skills: AWS, Databricks, Python, Spark, Scala
Desired skills: Exasol
Domain: Retail
New onsite position on a 6-month contract, for 10-13 yrs of experience in big data tech / Databricks. Open to Bangalore / Pune / Chennai / Hyderabad / Gurgaon once back from Germany.

Hiring For Big Data Developer

krtrimaiq cognitive solution

  • 4 - 8 yrs
  • Bangalore
Python SCALA SQL Hadoop
We are looking for an immediate joiner only: an experienced Big Data Developer with a strong background in Kafka, PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 5 years of experience and be ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.
Key Responsibilities:
  • Design, develop, and maintain scalable data processing pipelines using Kafka, PySpark, Python/Scala, and Spark.
  • Work extensively with the Kafka and Hadoop ecosystem, including HDFS, Hive, and other related technologies.
  • Write efficient SQL queries for data extraction, transformation, and analysis.
  • Implement and manage Kafka streams for real-time data processing.
  • Utilize scheduling tools to automate data workflows and processes.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
  • Ensure data quality and integrity by implementing robust data validation processes.
  • Optimize existing data processes for performance and scalability.

Apply to 28 SCALA Job Vacancies in Bangalore
