11 SCALA Job Vacancies in Noida

Looking For Scala Developer - Work From Home

JOB24by7 Recruitment Consultancy Services

SCALA Agile Development Azure Administrator NoSQL Programming Lecturer MySQL Apache AWS Developer Rest API
Profile: Scala Developer
Required Experience: 2+ years
Required Skills:
  • Strong Scala programming skills and experience with Scala frameworks such as Akka.
  • Efficient in designing and developing REST APIs.
  • Experience with SQL and NoSQL databases.
  • Proficiency in software design patterns and principles.
  • Experience with version control tools such as Git.
  • Good understanding of Agile methodologies and a collaborative mindset.
  • Excellent problem-solving and analytical skills.
  • Experience in unit testing (e.g. JUnit/Mockito).
  • Experience in cloud-based environments such as AWS or Azure.
  • Experience with Kafka and Elasticsearch.
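The Scala and REST skills this listing asks for can be illustrated with a minimal, framework-free sketch: a hypothetical dispatcher that routes REST-style requests by pattern matching on the HTTP method and path segments. All route names and handler strings here are illustrative assumptions, not any employer's API.

```scala
// Minimal REST-style dispatcher sketch in plain Scala (no Akka dependency).
// Routes and responses are made up for illustration only.
object RestRouter {
  // Split a path like "/users/42" into segments: List("users", "42")
  private def segments(path: String): List[String] =
    path.split("/").toList.filter(_.nonEmpty)

  // Dispatch on (HTTP method, path segments) via pattern matching.
  def route(method: String, path: String): String =
    (method, segments(path)) match {
      case ("GET",  List("users"))     => "list all users"
      case ("GET",  List("users", id)) => s"fetch user $id"
      case ("POST", List("users"))     => "create user"
      case _                           => "404 not found"
    }
}
```

A real service would bind such a function behind an HTTP layer (e.g. Akka HTTP's routing DSL), but the dispatch logic itself is ordinary pattern matching.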

Data Engineer

Bb Works India

Data Warehousing ETL Python AWS SCALA Data Engineer
We have 5 vacant Data Engineer positions in Bangalore and Noida. Experience required: 9 years. Educational qualification: other bachelor's degree. Skills: Data Warehousing, ETL, Python, AWS, Scala, Data Engineering.

Azure Data Engineer

Epik Solutions

Python SQL Spark SCALA Data Bricks Azure Data
Job Description: As an Azure Data Engineer, your role will involve designing, developing, and maintaining data solutions on the Azure platform. You will be responsible for building and optimizing data pipelines, ensuring data quality and reliability, and implementing data processing and transformation logic. Your expertise in Azure Databricks, Python, SQL, Azure Data Factory (ADF), PySpark, and Scala will be essential for the following key responsibilities:
  • Designing and developing data pipelines: design and implement scalable, efficient data pipelines using Azure Databricks, PySpark, and Scala, covering data ingestion, transformation, and loading.
  • Data modeling and database design: design and implement data models to support efficient data storage, retrieval, and analysis; this may involve relational databases, data lakes, or other storage solutions on the Azure platform.
  • Data integration and orchestration: leverage Azure Data Factory (ADF) to orchestrate data integration workflows and manage data movement across various sources and targets, including scheduling and monitoring pipelines.
  • Data quality and governance: implement data quality checks, validation rules, and governance processes to ensure data accuracy, consistency, and compliance with relevant regulations and standards.
  • Performance optimization: optimize data pipelines and queries to improve overall system performance and reduce processing time, e.g. tuning SQL queries, optimizing transformation logic, and leveraging caching techniques.
  • Monitoring and troubleshooting: monitor data pipelines, identify performance bottlenecks, and troubleshoot issues in data ingestion, processing, and transformation, working closely with cross-functional teams to resolve data-related issues.

Spark Scala Developer

Hirehut Technologies

Spring SCALA Spark Data Processing Fault Tolerance Scalability Array String Tuple Set List Map Walk in
Must-Have:
1. 5+ years of IT experience.
2. Good experience in Spark and Scala.
3. Good to have: experience in streaming systems like Spark Streaming and Storm.
4. Experience with Spark data processing, performance tuning, memory management, fault tolerance, and scalability.
5. Good knowledge of Hive, Sqoop, Spark, data warehousing, and information management best practices.
6. Expertise in big data infrastructure, distributed systems, data modelling, query processing, and relational databases.
7. Experience with Scala: object-oriented programming concepts (singleton and companion objects, classes, case classes, file handling, and multithreading), collections (Array, String, Tuple, Set, List, Map), and pattern matching.
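The Scala language concepts named in point 7 (case classes, companion objects, collections, pattern matching) can be sketched generically in a few lines. The names and data below are illustrative only, not from the employer:

```scala
// Case class: immutable data with structural equality, usable in pattern matching.
case class Record(id: Int, tags: Set[String])

// Companion object: a singleton holding factory methods beside the class.
object Record {
  def fromTuple(t: (Int, List[String])): Record = Record(t._1, t._2.toSet)
}

object CollectionsDemo {
  // Pattern matching with literals, binders, and guards.
  def describe(r: Record): String = r match {
    case Record(0, _)                     => "sentinel record"
    case Record(id, tags) if tags.isEmpty => s"record $id with no tags"
    case Record(id, tags)                 => s"record $id: ${tags.toList.sorted.mkString(",")}"
  }

  // List, Set, Map, and Tuple in one small pipeline.
  def tagCounts(rs: List[Record]): Map[String, Int] =
    rs.flatMap(_.tags).groupBy(identity).map { case (k, v) => (k, v.size) }
}
```

For example, `CollectionsDemo.describe(Record.fromTuple((7, List("spark", "scala"))))` yields `"record 7: scala,spark"`.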


Big Data

Saiva System India Pvt Ltd

  • 5 - 10 yrs
  • 27.5 Lac/Yr
  • Noida
Spark Scala Python Pyspark Azure SQL Hive Hadoop Work From Home
We are hiring Big Data / Azure professionals for one of our MNC clients.
Job Location: PAN India
Experience: 5-12 years
Employment Type: Permanent
Notice Period: immediate to 60 days
Mandatory Skills: SQL, Spark/Scala, and Azure Synapse

BigData Engineer

Maxdata Solutions

Bigdata Spark SCALA Pyspark Hive HBase
Currently we are hiring a Data Engineer.
Job Location: Mumbai / Bangalore / Noida
  • Hands-on experience with a programming language: Python, Java, Scala
  • Passionate and knowledgeable about big data stacks
  • Distributed systems: Spark (PySpark), Hadoop, Presto, Hive, etc.
  • Message queueing systems (Kafka, RabbitMQ, NSQ, etc.) are good to have
  • Databases (relational & NoSQL): PostgreSQL, MySQL, MongoDB, etc.
  • Experience gathering and analyzing system requirements
  • In-depth understanding of database structure principles, data warehousing, data mining concepts, and segmentation techniques
  • Experience with cloud computing platforms (AWS, GCP, etc.) and UNIX environments
  • Experience with AWS services, e.g. EMR, Lambda, Step Functions, S3, Redshift, is a plus
  • Experience in designing, implementing, and monitoring big data analytics solutions
  • Fast learning capability and natural curiosity about big data
  • DevOps/DataOps skills are plus points
  • Background: Computer Science (preferred) or any other graduation degree
If you are interested, please share your updated resume with Prakash Rathod.

  • 5 - 11 yrs
  • 30.0 Lac/Yr
  • Noida
Spark Developer Python Web Developer Cluster Lambda
Location: Noida / Remote
Experience: 5-10 years
Salary: up to 30 LPA
Job Description:
  • 5-10 years of recent experience in data engineering
  • Must have expertise in Spark with Scala
  • Excellent understanding of data engineering concepts (ETL, near-/real-time streaming, data structures, metadata, and workflow management)
  • Good experience in AWS technologies such as EC2, CloudFormation, EMR clusters, AWS S3, Lambda, and AWS analytics services
  • Big-data-related AWS technologies like Hive, Spark, AWS Glue, Presto, Hadoop, Athena, Redshift, S3 Select, Notebook
  • Proficient in SQL
  • Experience in Python/PySpark/Scala
  • Experience with code management tools (Git/GitHub)
  • Experience with ETL tools (Glue, Data Pipeline, Talend) would be an added advantage
Interested candidates: mail your resume with the following details: total experience, current CTC, expected CTC, notice period, LWD, and an updated CV.

Urgently Required: Data Engineer Executive

Perfect Solution Group (Spectrum Placement Services)

Data Engineer Executive Computer Operator SAS-Statistical Analysis System ETL Hadoop DATA ENGINEER Azure JSON XML Scala Spark Github DevOps Data Migration Walk in
Profile: Data Engineer Executive
Qualification: Graduate with good communication skills
Experience: minimum 1 year required
Candidates should have knowledge of AWS, Spark, PySpark, Python, Hark
Salary: 24 LPA to 42 LPA
Gender: male & female can apply
Location: PAN India
Duties & Responsibilities:
  • Analyze and organize raw data
  • Build data systems and pipelines
  • Evaluate business needs and objectives
  • Interpret trends and patterns
  • Conduct complex data analysis and report on results
  • Prepare data for prescriptive and predictive modeling
  • Build algorithms and prototypes
Only serious candidates should apply.

Python Developer

Infotech Edge

MySQL Python LISP Ruby Rails SCALA Bash Python Developer Walk in
Skillset:
  • Python frameworks like Django, Flask, etc.
  • Web frameworks and RESTful APIs
  • Core Python fundamentals and programming
  • Code packaging, release, and deployment
  • Database knowledge
  • Loops, conditional, and control statements
  • Object-relational mapping
  • Server-side templating languages like Mako, etc.
  • Code versioning tools like Git, SVN, etc.
  • Fundamental understanding of front-end technologies like JS, CSS3, and HTML5
  • AI, ML, deep learning, version control, neural networks
  • Data visualization, statistics, data analytics
  • Design principles for a scalable app
  • Creating predictive models
  • Libraries like TensorFlow, scikit-learn, etc.
  • Multi-process architecture
  • Basic knowledge of object-relational mapper (ORM) libraries
  • Ability to integrate databases and various data sources into a unified system
  • Robust testing and debugging capabilities with tools like Selenium, etc.
Python SCALA JAVA AWS - EMR Hadoop Spark Kafka SQL NoSQL Data Architecture Data Structures Storm Flink
Responsibilities:
  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional and non-functional business requirements
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources using open-source and AWS big data technologies
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
  • Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs
  • Work with data and analytics experts to strive for greater functionality in our data systems
Qualifications:
  • Experience building and optimizing big data pipelines, architectures, and datasets
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
  • Experience interacting with customers and various stakeholders
  • Strong analytical skills related to working with unstructured datasets
  • Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management
  • Working knowledge of message queuing, stream processing, and highly scalable big data lakes
  • Strong project management and organizational skills
  • Experience supporting and working with cross-functional teams in a dynamic environment
Candidates should also have experience with the following software/tools:
  • Big data technologies: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including Postgres and Cassandra
  • Data pipeline and workflow management tools: Airflow, NiFi, etc.
  • Cloud services: AWS (EMR, RDS, Redshift, Glue), Azure (Databricks, Data Factory), GCP (Dataproc, Pub/Sub)
  • Stream-processing systems: Storm, Spark Streaming, Flink, etc.

Java Developer

Consultomia Business Solutions Private Limited

Core Java Hibernate J2EE Spring AngularJs Docker Microservices RDBMS Python Kubernetes SCALA NodeJS SOAP UI Rest API React JS Work From Home Walk in
  • Sound knowledge of Java and J2EE technologies and web technologies
  • Ability to create and debug functions in Python
  • Working experience in Node.js, Java, and MySQL, Oracle, RDBMS, and SQL/PLSQL programming
  • Good to have experience in Java, Spring technologies (Spring, Spring MVC, Spring Boot, Spring IOC, JPA, JDBC, REST, Security, AOP), Hibernate, web services (RESTful/SOAP), Jersey
  • Sound web development skills in JavaScript, ReactJS, jQuery, CSS, HTML, and Bootstrap
  • Experience working with a Java IDE, SVN, Maven, CI tools, and Git
  • Experience working with ORM tools like JPA/Hibernate/Spring Data/MyBatis/Redis
  • Understanding of software design patterns and Java best practices
  • Experience working with cross-cultural teams across multiple locations
  • Experience with front-end technologies like HTML, CSS, jQuery, ReactJS, and Angular
  • Exposure to scalable distributed systems architectures, microservices, Docker, Kubernetes, and cloud platforms (AWS, Azure, GCP)
  • Familiarity with Agile implementation
  • Deep expertise in any combination of programming languages: Java, C++, C#, Ruby, Scala, Golang
  • Ability to understand and critique core library/language constructs


Snowflake Developer

Hirehut Technologies

Snowflake DW Architecture and Design Python Scala Walk in
Must-Have:
1. 5+ years of IT experience, with at least 2 years of relevant experience in Snowflake.
2. In-depth understanding of data warehousing, ETL concepts, and modeling structure principles.
3. Experience working with Snowflake functions; hands-on experience with Snowflake utilities, stage and file upload features, Time Travel, Fail-safe, procedure writing, tasks, Snowpipe, and SnowSQL.
4. Knowledge of Snowflake architecture.
5. Good knowledge of RDBMS topics; ability to write complex SQL and PL/SQL.
6. Expertise in engineering platform components such as data pipelines, data orchestration, data quality, data governance, and analytics.
7. Hands-on experience implementing large-scale data intelligence solutions around Snowflake DW.
8. Experience in a scripting language such as Python or Scala is a must.
9. Good experience with streaming services such as Kafka.
10. Experience working with semi-structured data.
