
SCALA Job Vacancies in Delhi NCR

Rest API MySQL SQL Database Administrator AWS Cloud Engineer Agile Methodology Testing & Commissioning Engineer
We are urgently hiring a Scala Developer for a company located in Mohali. Work from Home / Hybrid (for Mohali) / Work from Office.

Company Overview
We are a highly talented and experienced team of IT solution providers based in the UK, with offshore offices in India. The company exists to meet the growing demand for IT services that are bespoke to individual client challenges. Over the past decade, we have built a strong track record of success by delivering more than 500 projects to clients around the world.

Job Overview
Scala Developer role with 4-6 years of experience.
Employment type: Full-Time; Remote, Hybrid, or Work from Office.

Qualifications and Skills
  • 4-6 years of experience as a Scala Developer
  • Proficiency in Scala programming and related frameworks such as Akka
  • Experience with REST APIs and Agile methodologies
  • Strong knowledge of testing/unit-testing tools such as JUnit and Mockito
  • Familiarity with cloud-based environments (AWS, Azure)
  • Hands-on experience with Kafka and Elasticsearch

Roles and Responsibilities
  • Strong Scala programming skills; experience with Scala frameworks such as Akka is a must.
  • Efficient in designing and developing REST APIs.
  • Experience with SQL and NoSQL databases.
  • Proficiency in software design patterns and principles.
  • Experience with version control tools such as Git.
  • Good understanding of Agile methodologies and a collaborative mindset.
  • Excellent problem-solving and analytical skills.
  • Experience in unit testing (e.g. JUnit/Mockito).
  • Experience in cloud-based environments such as AWS or Azure.
  • Experience with Kafka and Elasticsearch.
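The posting pairs Scala with Akka and REST API design. As a minimal, dependency-free sketch of that style (all types and names here are invented for illustration; a real service would express this with Akka HTTP's routing DSL), REST-style dispatch is often modeled with an algebraic data type and pattern matching:

```scala
// Illustrative only: a tiny model of REST-style routing in plain Scala.
// A production Akka service would use Akka HTTP routes instead.
sealed trait Method
case object GET extends Method
case object POST extends Method

final case class Request(method: Method, path: List[String])
final case class Response(status: Int, body: String)

// Dispatch on the HTTP method and path segments via pattern matching.
def route(req: Request): Response = req match {
  case Request(GET, List("users"))     => Response(200, "all users")
  case Request(GET, List("users", id)) => Response(200, s"user $id")
  case Request(POST, List("users"))    => Response(201, "created")
  case _                               => Response(404, "not found")
}
```

For example, `route(Request(GET, List("users", "42")))` yields `Response(200, "user 42")`. Because `route` is a pure function, the unit testing the posting asks about (JUnit/Mockito) becomes straightforward.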

Opening For Data Engineer

Cynosure Corporate Solutions

  • 3 - 9 yrs
  • Delhi
Apache Python Hadoop SCALA
Job Description: We are looking for Data Engineers to join our team. You will use various methods to transform raw data into useful data systems; for example, you'll create algorithms and conduct statistical analysis. Overall, you'll strive for efficiency by aligning data systems with business goals. To succeed in this position, you should have strong analytical skills and the ability to combine data from different sources. Data engineering skills also include familiarity with several programming languages and knowledge of machine learning methods.

Job Requirements:
  • Participate in the customer's system design meetings and collect the functional/technical requirements.
  • Build data pipelines for consumption by the data science team.
  • Skilled in ETL processes and tools.
  • Clear understanding and experience with Python and PySpark, or Spark and Scala, along with Hive, Airflow, Impala, Hadoop, and RDBMS architecture.
  • Experience writing Python programs and SQL queries.
  • Experience in SQL query tuning.
  • Experienced in shell scripting (Unix/Linux).
  • Build and maintain data pipelines in Spark/PySpark with SQL and Python or Scala.
  • Knowledge of cloud technologies (Azure/AWS/GCP, etc.) is a plus.
  • Good to have: knowledge of Kubernetes, CI/CD concepts, and Apache Kafka.
  • Suggest and implement best practices in data integration.
  • Guide the QA team in defining system integration tests as needed.
  • Split the planned deliverables into tasks and assign them to the team.
  • Maintain and deploy the ETL code, following the Agile methodology.
  • Work on optimization wherever applicable.
  • Good oral, written, and presentation skills.

Preferred Qualifications:
  • Degree in Computer Science, IT, or a similar field; a Master's is a plus.
  • Hands-on experience with Python and PySpark, or with Spark and Scala.
  • Strong numerical and analytical skills.
  • Working knowledge of cloud platforms such as MS Azure, AWS, etc.
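The transform step this role describes (combining and cleaning raw data into useful systems) has a common shape regardless of engine. A hedged sketch in plain Scala collections; the record types and the data-quality rule are invented for illustration, and in Spark the same filter/map/groupBy chain would target the Dataset API:

```scala
// Illustrative ETL transform using plain Scala collections (no Spark needed).
final case class RawEvent(userId: String, amount: String)   // as extracted
final case class CleanEvent(userId: String, amount: Double) // after cleaning

def transform(raw: Seq[RawEvent]): Map[String, Double] =
  raw
    .flatMap(e => e.amount.toDoubleOption.map(a => CleanEvent(e.userId, a))) // drop unparsable rows
    .filter(_.amount > 0)                        // simple data-quality rule
    .groupBy(_.userId)                           // aggregate per user
    .view.mapValues(_.map(_.amount).sum).toMap   // load-ready summary
```

The same three stages (parse, validate, aggregate) are what an interviewer usually means by "ETL process" in listings like this one.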

Looking For Scala Developer - Work From Home

JOB24by7 Recruitment Consultancy Services

SCALA Agile Development Azure Administrator NoSQL Programming Lecturer MySQL Apache AWS Developer Rest API
Profile: Scala Developer
Required Experience: 2+ years

Requirements:
  • Strong Scala programming skills and experience with Scala frameworks such as Akka.
  • Efficient in designing and developing REST APIs.
  • Experience with SQL and NoSQL databases.
  • Proficiency in software design patterns and principles.
  • Experience with version control tools such as Git.
  • Good understanding of Agile methodologies and a collaborative mindset.
  • Excellent problem-solving and analytical skills.
  • Experience in unit testing (e.g. JUnit/Mockito).
  • Experience in cloud-based environments such as AWS or Azure.
  • Experience with Kafka and Elasticsearch.

Data Engineer

Bb Works India

  • 9 - 15 yrs
  • 40.0 Lac/Yr
  • Bangalore +1 Noida
Data Warehousing ETL Python AWS SCALA Data Engineer
We have 5 vacant Data Engineer positions in Bangalore and Noida. Experience required: 9 years. Educational qualification: Other Bachelor Degree. Skills: Data Warehousing, ETL, Python, AWS, Scala, Data Engineer.


Jobs by Related Category

Azure Data Engineer

Epik Solutions

Python SQL Spark SCALA Data Bricks Azure Data
Job Description: As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for the following key responsibilities:
  • Designing and developing data pipelines: design and implement scalable, efficient pipelines using Azure Databricks, PySpark, and Scala, covering data ingestion, transformation, and loading.
  • Data modeling and database design: design and implement data models to support efficient data storage, retrieval, and analysis. This may involve relational databases, data lakes, or other storage solutions on the Azure platform.
  • Data integration and orchestration: leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various sources and targets, including scheduling and monitoring pipelines.
  • Data quality and governance: implement data quality checks, validation rules, and governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
  • Performance optimization: optimize pipelines and queries to improve overall system performance and reduce processing time, e.g. by tuning SQL queries, optimizing transformation logic, and leveraging caching techniques.
  • Monitoring and troubleshooting: monitor pipelines, identify performance bottlenecks, and troubleshoot issues in data ingestion, processing, and transformation, working closely with cross-functional teams to resolve data-related issues.

Spark Scala Developer

Hirehut Technologies

Spring SCALA Spark Data Processing Fault Tolerance Scalability Array String Tuple Set List Map Walk in
Must-Have:
  1. 5+ years of IT experience.
  2. Good experience in Spark and Scala.
  3. Good to have: experience in streaming systems such as Spark Streaming and Storm.
  4. Experience with Spark data processing, performance tuning, memory management, fault tolerance, and scalability.
  5. Good knowledge of Hive, Sqoop, Spark, data warehousing, and information-management best practices.
  6. Expertise in big data infrastructure, distributed systems, data modelling, query processing, and relational databases.
  7. Experience with Scala: object-oriented programming concepts (singleton and companion objects, classes, case classes, file handling, and multithreading), collections (Array, String, Tuple, Set, List, Map), and pattern matching.
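The Scala language topics this posting lists under item 7 — case classes, a companion object, the core collections, and pattern matching — fit in a few lines. The `Employee` domain below is invented purely for illustration:

```scala
// Case class: immutable data with structural equality for free.
final case class Employee(name: String, skills: Set[String])

// Companion object used as a factory, with pattern matching on a List.
object Employee {
  def fromCsv(line: String): Option[Employee] = line.split(",").toList match {
    case name :: skills if name.nonEmpty => Some(Employee(name, skills.toSet))
    case _                               => None
  }
}

// Tuple, List, Set, and Map from the collections the posting names:
// build an index from each skill to the employees who have it.
def skillIndex(es: List[Employee]): Map[String, List[String]] =
  es.flatMap(e => e.skills.map(s => (s, e.name)))   // List of (skill, name) tuples
    .groupBy(_._1)                                  // Map[skill, List[(skill, name)]]
    .view.mapValues(_.map(_._2)).toMap              // keep just the names
```

For example, `Employee.fromCsv("ann,scala,spark")` produces `Some(Employee("ann", Set("scala", "spark")))`, while a blank line falls through the guard to `None`.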

Big Data

Saiva System India Pvt Ltd

  • 5 - 10 yrs
  • 27.5 Lac/Yr
  • Noida
Spark Scala Python Pyspark Azure SQL Hive Hadoop Work From Home
Hello everyone! We are hiring Big Data / Azure professionals for one of our MNC clients.
  • Job Location: PAN India
  • Experience: 5-12 years
  • Employment Type: Permanent
  • Notice Period: Immediate to 60 days
  • Mandatory Skills: SQL, Spark/Scala, and Azure Synapse
Spark Py-spark AWS S3 EMR Redshift Scala Work From Home
Job Description:
  • 3+ years of Spark experience.
  • 3+ years of hands-on Scala or PySpark experience (must have).
  • 3+ years of AWS experience; familiar with S3, EMR, and Redshift.

Scala Developer

Sight Spectrum

Scala Algorithms Data Structures Scala Developer Big Data Hive Java Software Development Work From Home
OUR REQUIREMENTS
  • Professional experience as a Scala developer.
  • Knowledge of Akka, event-driven systems, and functional programming.
  • Experience working with large codebases.
  • Good spoken and written English communication skills; ability to express ideas clearly.
  • Experience in, or strong interest in, the financial industry.
  • Experience building scalable, distributed applications in Scala and Java.
  • Strong understanding of algorithms and data structures.
  • Experience developing software in an agile environment.
  • Interest in the latest programming trends, such as functional and reactive programming.
  • Knowledge of relational and non-relational database systems.
  • Experience implementing APIs for integration with internal and external systems.
  • Strong problem-solving skills and the ability to learn in a fast-paced environment.
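For the functional-programming interest this role highlights, here is a small hedged sketch of the style: immutable data, a higher-order fold, and tail recursion in place of mutation. The transaction domain is invented only to echo the financial-industry context mentioned above:

```scala
import scala.annotation.tailrec

// Immutable value: an account transaction.
final case class Txn(amount: BigDecimal)

// foldLeft: a running balance with no mutable state.
def balance(txns: List[Txn]): BigDecimal =
  txns.foldLeft(BigDecimal(0))((acc, t) => acc + t.amount)

// The same computation as an explicit tail-recursive loop;
// @tailrec makes the compiler verify constant stack usage.
@tailrec
def balanceRec(txns: List[Txn], acc: BigDecimal = BigDecimal(0)): BigDecimal =
  txns match {
    case Nil     => acc
    case t :: ts => balanceRec(ts, acc + t.amount)
  }
```

Both functions are pure, so they compose and test easily, which is the practical point of the "functional and reactive programming" trend the posting refers to.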

BigData Engineer

Maxdata Solutions

Bigdata Spark SCALA Pyspark Hive HBase
Currently we are hiring a Data Engineer.
Job Location: Mumbai / Bangalore / Noida
  • Hands-on experience with a programming language: Python, Java, or Scala.
  • Passionate and knowledgeable about big data stacks:
  • Distributed systems: Spark (PySpark), Hadoop, Presto, Hive, etc.
  • Message-queueing systems (Kafka, RabbitMQ, NSQ, etc.) are good to have.
  • Databases (relational and NoSQL): PostgreSQL, MySQL, MongoDB, etc.
  • Experience gathering and analyzing system requirements.
  • In-depth understanding of database structure principles, data warehousing, data mining concepts, and segmentation techniques.
  • Experience with cloud computing platforms (AWS, GCP, etc.) and UNIX environments.
  • Experience with AWS services (e.g. EMR, Lambda, Step Functions, S3, Redshift) is a plus.
  • Experience in designing, implementing, and monitoring big data analytics solutions.
  • Fast learning capability and natural curiosity about big data.
  • DevOps/DataOps skills are plus points.
  • Background: field of study is Computer Science (preferred) or any other graduation degree.
If you are interested, please share your updated resume with Prakash Rathod.
  • 5 - 11 yrs
  • 30.0 Lac/Yr
  • Noida
Spark Developer Python Web Developer Cluster Lambda
Location: Noida / Remote
Experience: 5-10 years
Salary: up to 30 LPA

Job Description:
  • 5-10 years of recent experience in data engineering.
  • Must have expertise in Spark with Scala.
  • Excellent understanding of data engineering concepts (ETL, near-/real-time streaming, data structures, metadata, and workflow management).
  • Good experience with AWS technologies such as EC2, CloudFormation, EMR clusters, S3, Lambda, and AWS Analytics.
  • Big-data-related AWS technologies such as Hive, Spark, AWS Glue, Presto, Hadoop, Athena, Redshift, S3 Select, and Notebooks.
  • Proficient in SQL.
  • Experience in Python/PySpark/Scala.
  • Experience with code management tools (Git/GitHub).
  • Experience with ETL tools (Glue, Data Pipeline, Talend) would be an added advantage.

Interested candidates, please mail your resume with the following details:
  • Total experience:
  • Current CTC:
  • Expected CTC:
  • Notice period:
  • LWD:
  • Updated CV
  • 3 - 6 yrs
  • 15.0 Lac/Yr
  • Gurgaon
SAS-Statistical Analysis System ETL Hadoop DATA ENGINEER Azure JSON XML Scala Spark Github DevOps Data Migration Hive Work From Home
Job openings for 20 Data Engineer positions in Gurgaon for American Express, requiring a minimum of 3 years' experience, an educational qualification of B.C.A, B.Tech/B.E, M.C.A, or M.Tech, and good knowledge of Hive, Spark, HBase, etc.

Data Engineer

Telamon HR Solutions

  • 5 - 10 yrs
  • 30.0 Lac/Yr
  • Gurgaon
Spark Pig Hive Python Java SQL SAS-Statistical Analysis System ETL Hadoop DATA ENGINEER Azure JSON XML Scala Github DevOps Data Migration C++ NoSQL Walk in
We are looking for a candidate with 5+ years of experience in a Data Engineer role who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
  • Big data tools: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift.
  • Stream-processing systems: Storm, Spark Streaming, etc.
  • Object-oriented/object-function scripting languages: Python, Java, C++, Scala, etc.

Urgent Required For Data Engineer Executive

Perfect Solution Group (Spectrum Placement Services)

Data Engineer Executive Computer Operator SAS-Statistical Analysis System ETL Hadoop DATA ENGINEER Azure JSON XML Scala Spark Github DevOps Data Migration Walk in
Profile: Data Engineer Executive
Qualification: Graduate with good communication skills
Experience: Minimum 1 year required
Candidates should have knowledge of AWS, Spark, PySpark, Python, Hark
Salary: 24 LPA to 42 LPA
Gender: Male and female candidates can apply
Location: PAN India

Duties & Responsibilities:
  • Analyze and organize raw data.
  • Build data systems and pipelines.
  • Evaluate business needs and objectives.
  • Interpret trends and patterns.
  • Conduct complex data analysis and report on results.
  • Prepare data for prescriptive and predictive modeling.
  • Build algorithms and prototypes.

Only serious candidates should apply.

Python Developer

Infotech Edge

MySQL Python LISP Ruby Rails SCALA Bash Python Developer Walk in
Skillset:
  • Python frameworks like Django, Flask, etc.
  • Web frameworks and RESTful APIs.
  • Core Python fundamentals and programming.
  • Code packaging, release, and deployment.
  • Database knowledge.
  • Loops, conditional and control statements.
  • Object-relational mapping.
  • Server-side templating languages like Mako, etc.
  • Code versioning tools like Git, SVN, etc.
  • Fundamental understanding of:
  • Front-end technologies like JS, CSS3, and HTML5.
  • AI, ML, deep learning, version control, neural networks.
  • Data visualization, statistics, data analytics.
  • Design principles that are executable for a scalable app.
  • Creating predictive models.
  • Libraries like TensorFlow, Scikit-learn, etc.
  • Multi-process architecture.
  • Basic knowledge of Object-Relational Mapper libraries.
  • Ability to integrate databases and various data sources into a unified system.
  • Robust testing and debugging capabilities with tools like Selenium, etc.
Python SCALA JAVA AWS - EMR Hadoop Spark Kafka SQL NoSQL Data Architecture Data Structures Storm Flink
Responsibilities:
  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, using open-source and AWS big data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

Qualifications:
  • Experience building and optimizing big data pipelines, architectures, and datasets.
  • Experience performing root-cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Experience interacting with customers and various stakeholders.
  • Strong analytical skills related to working with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • Working knowledge of message queuing, stream processing, and highly scalable big data lakes.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.

They should also have experience using the following software/tools:
  • Big data technologies: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Data pipeline and workflow management tools: Airflow, NiFi, etc.
  • Cloud services: AWS (EMR, RDS, Redshift, Glue), Azure (Databricks, Data Factory), GCP (Dataproc, Pub/Sub).
  • Stream-processing systems: Storm, Spark Streaming, Flink, etc.

Java Developer

Consultomia Business Solutions Private Limited

Core Java Hibernate J2EE Spring AngularJs Docker Microservices RDBMS Python Kubernetes SCALA NodeJS SOAP UI Rest API React JS Work From Home Walk in
  • Sound knowledge of Java, J2EE technologies, and web technologies.
  • Create and debug functions in Python.
  • Working experience with Node.js, Java, MySQL, Oracle, RDBMS, and SQL/PLSQL programming.
  • Good to have experience in Java and Spring technologies (Spring, Spring MVC, Spring Boot, Spring IOC, JPA, JDBC, REST, Security, AOP, Boot), Hibernate, web services (RESTful/SOAP), Jersey, and Spring MVC.
  • Sound web development skills in JavaScript, ReactJS, jQuery, CSS, HTML, and Bootstrap.
  • Experience working with Java IDEs, SVN, Maven, CI tools, and Git.
  • Experience working with ORM tools like JPA/Hibernate/Spring Data/MyBatis/Redis.
  • Understanding of software design patterns and Java best practices.
  • Experience working with cross-cultural teams across multiple locations.
  • Experience with front-end technologies like HTML, CSS, jQuery, ReactJS, and Angular.
  • Exposure to scalable distributed-systems architectures, microservices, Docker, Kubernetes, and cloud platforms (AWS, Azure, GCP).
  • Familiar with Agile implementation.
  • Deep expertise with any combination of the programming languages Java, C++, C#, Ruby, Scala, and Golang.
  • Ability to understand and critique core library/language constructs.

Data Engineer

Maxdata Solutions

Big Data Spark SCALA Impala HBase Kafka MongoDB PostgreSQL Rabbitmq Sqoop
Currently we are hiring a Data Engineer.
Job Location: Mumbai / Bangalore / Noida
  • Hands-on experience with a programming language: Python, Java, or Scala.
  • Passionate and knowledgeable about big data stacks:
  • Distributed systems: Spark (PySpark), Hadoop, Presto, Hive, etc.
  • Message-queueing systems (Kafka, RabbitMQ, NSQ, etc.) are good to have.
  • Databases (relational and NoSQL): PostgreSQL, MySQL, MongoDB, etc.
  • Experience gathering and analyzing system requirements.
  • In-depth understanding of database structure principles, data warehousing, data mining concepts, and segmentation techniques.
  • Experience with cloud computing platforms (AWS, GCP, etc.) and UNIX environments.
  • Experience with AWS services (e.g. EMR, Lambda, Step Functions, S3, Redshift) is a plus.
  • Experience in designing, implementing, and monitoring big data analytics solutions.
  • Fast learning capability and natural curiosity about big data.
  • DevOps/DataOps skills are plus points.
  • Background: field of study is Computer Science (preferred) or any other graduation degree.
If you are interested, please share your updated resume with Prakash Rathod.

Snowflake Developer

Hirehut Technologies

Snowflake DW Architecture and Design Python Scala Walk in
Must-Have:
  1. 5+ years of IT experience, with at least 2 years of relevant experience in Snowflake.
  2. In-depth understanding of data warehousing, ETL concepts, and modeling structure principles.
  3. Experience working with Snowflake functions; hands-on experience with Snowflake utilities, stage and file-upload features, Time Travel, Fail-safe, procedure writing, tasks, Snowpipe, and SnowSQL.
  4. Knowledge of Snowflake architecture.
  5. Good knowledge of RDBMS topics; ability to write complex SQL and PL/SQL.
  6. Expertise in engineering platform components such as data pipelines, data orchestration, data quality, data governance, and analytics.
  7. Hands-on experience implementing large-scale data intelligence solutions around Snowflake DW.
  8. Experience in a scripting language such as Python or Scala is a must.
  9. Good experience with streaming services such as Kafka.
  10. Experience working with semi-structured data.

Apply to 17 SCALA Job Vacancies in Delhi NCR
