HBase Jobs

  • 2 - 6 yrs
  • 8.0 Lac/Yr
  • Bangalore
Unix Shell Scripting Telecom OSS BSS RDBMS Hadoop HBase PostgreSQL
Objective of the role
Responsible for deploying and supporting the product/CRs in the customer environment, and managing customer tickets while adhering to SLAs.

Job Responsibilities/Tasks
- Installation of the product/solution and its dependencies on servers.
- Following up with customer points of contact on various aspects.
- Integration of the product with various external entities and readiness of the solution.
- End-to-end system testing before handing the system over to the customer for user acceptance testing.
- Driving UAT with end customer(s) from a support perspective.
- Launch and post-go-live management of the product, including monitoring and automation of various jobs.

Skills
- Very good knowledge of Unix and shell scripting.
- 1 to 3 years of prior experience with Telecom BSS/OSS solutions, with at least two customers, with or without onsite presence.
- Good knowledge of RDBMS concepts and Oracle queries.
- Knowledge of Hadoop/HBase/PostgreSQL preferred.
- Extensive experience of systems development, including involvement in all major stages of software development projects.
- Good knowledge and appreciation of systems development lifecycles and methodologies.
- Well-established communication, presentation, motivational and interpersonal skills, along with people-management abilities.
- Thorough understanding, appreciation and analysis of the issues underlying systems development.
- Experience of the technical aspects of the relevant technologies, software and hardware to be employed.
- Prior experience with Telecom sources; ETL tool experience is preferred.
  • 8 - 11 yrs
  • Mumbai
Hadoop Cloudera Hive Impala HBase Sqoop Flume Kafka NiFi Hadoop Architecture Hadoop Developer
Hadoop Lead Role
Experience: 9+ years
- Strong architectural experience with Hortonworks and Cloudera Hadoop distributions on appliance-based and on-premise clusters.
- Expertise in providing technical solutions for data lake design and data ingestion in Hadoop.
- Expertise in data modelling, data governance and architecture of large databases.
- Expertise in understanding complex data models, large-scale data migrations and application development.
- Expertise in designing data pipeline solutions for structured, semi-structured and unstructured data in Hadoop.
- Expertise in developing solutions for batch and real-time data processing in Hadoop.
- Hands-on experience in query writing using HiveQL, Impala QL and HBase commands.
- Able to provide technical guidance and assistance to the development team when they face environment-related problems.
- Able to coordinate with the Cloudera team on troubleshooting, and with the respective teams on OS/network-related problems.
- Proficiency in Cloudera Manager architecture and the Cloudera cluster environment.
- Proficiency in data ingestion tools such as Sqoop, Flume, Kafka and NiFi (HDF), plus UNIX shell scripting and Python.
- Proficiency in building a data warehouse on top of Hadoop in Hive, defining data models and data mapping from the source to the enterprise model.
- Proficiency in creating solutions in Spark (Scala/PySpark) for batch and real-time processing.
- Proficiency in developing solutions for NoSQL databases such as HBase, Cassandra and MongoDB.
- Knowledge of data security/governance and data lineage handling on Hadoop clusters.
- Strong experience with SQL Server, Oracle and MySQL.
- Expertise in understanding Java and REST API concepts, and in troubleshooting Java-based services.
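The "HBase commands" and NoSQL-design skills this role names usually hinge on row-key design. As an illustrative sketch (not part of the posting), row-key salting is a common technique for spreading monotonically increasing keys such as timestamps across regions; the bucket count and key format below are assumptions.

```python
import hashlib

NUM_BUCKETS = 8  # illustrative choice; real clusters tune this to region count

def salted_row_key(natural_key: str) -> str:
    """Prefix a key with a stable, hash-derived bucket number so that
    sequential keys spread across HBase regions instead of all landing
    in one "hot" region."""
    digest = hashlib.md5(natural_key.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % NUM_BUCKETS
    return f"{bucket}-{natural_key}"

# The same natural key always maps to the same bucket, so point reads can
# recompute the prefix instead of scanning every bucket.
key = salted_row_key("2024-01-01T00:00:01")
```

The trade-off is that full range scans must now issue NUM_BUCKETS parallel scans, one per salt prefix.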

Azure Data Engineer

Prevaj Consultants Pvt Ltd

  • 5 - 10 yrs
  • 10.0 Lac/Yr
  • Chennai
Big Data HDFS Hadoop Hive Yarn Pig HBase Sqoop Flume Azure Work From Home
Role: Azure Data Engineer
Experience Required: Minimum 1+ years of experience as an Azure Data Engineer.
Skills Required:
- As a Data Engineer, you will collaborate with a team of business domain experts, data scientists and application developers to identify relevant data for analysis and develop the Big Data solution.
- Analyze business problems and help develop solutions for near real-time stream processing as well as batch processing on the Big Data platform.
- Set up and run Hadoop development frameworks.
- Explore and learn new technologies for creative business problem solving.
- Ability to develop and manage scalable Hadoop cluster environments.
- Experience with Big Data technologies such as HDFS, Hadoop, Hive, YARN, Pig, HBase, Sqoop and Flume.
- Working experience with Big Data services in any cloud-based environment.
- Experience with Spark, Scala, Kafka, ADF, Akka, core or advanced Java, and Databricks.
- Experience with NoSQL technologies such as HBase, Cassandra and MongoDB; Cloudera or Hortonworks Hadoop distribution experience is good to have.
- Familiarity with data warehousing concepts, distributed systems, data pipelines and ETL.
- Good communication and interpersonal skills.
- Minimum 5 years of professional experience, with 3 years of Big Data project experience.
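The near real-time stream processing this posting asks about usually comes down to windowed aggregation. As a toy, engine-agnostic sketch (not from the posting; a real solution would use Spark Structured Streaming or a similar engine), here is a tumbling-window count with an assumed 60-second window:

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # assumed window size, purely illustrative

def tumbling_counts(events):
    """events: iterable of (epoch_seconds, key) pairs.
    Returns counts keyed by (window_start, key); each event falls into
    exactly one non-overlapping 60-second window."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % WINDOW_SECONDS)  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "click"), (10, "click"), (65, "click"), (70, "view")]
tumbling_counts(events)
# {(0, 'click'): 2, (60, 'click'): 1, (60, 'view'): 1}
```

Batch processing over the same data is the same computation run once over a bounded input, which is why the two modes are often listed together.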

Business Analyst

Billiton Services

  • 5 - 8 yrs
  • Pune
Business Analysis Hadoop HBase Hive Pig AI ML CNN RNN ARIMA SARIMA Automotive Functional Domain Flume Sqoop Oozie SQL Logistic Regression Linear Regression Time Series
- Serve as advisor to senior business management on business data modelling strategies.
- Understand technology products, vendor strategies and customer preferences to deliver prognostics-based solutions.
- Business data analysis and service delivery, particularly with respect to the use of data, trends and directions.
- Research and develop statistical learning models for data collected from vehicles during prognostics trials.
- Collaborate with product management and engineering departments to understand company needs and devise possible solutions.
- Establish and maintain contacts within business units to understand business activities and business drivers, business requirements, solution strategies and alternatives being considered and/or implemented.
- Keep up to date with the latest technology trends.
- Communicate results and ideas to key decision makers.
- Ongoing research and assessment of new analysis approaches for potential use within the enterprise.
- Implement new statistical or other mathematical methodologies as needed for specific models or analyses.
- Optimize joint development efforts through appropriate database use and project design.
- Data model development/programming within data science, forming the basis of exploring, analysing and visualising data.
- Translate complex functional and technical requirements into detailed designs.
- Demonstrate the ability to derive, define and explicitly represent various artefacts within the enterprise framework.
- Understand the meanings of, and relationships between, various models.
- Develop and maintain project-level and enterprise-level model consistency and integration.
- Maintain security and data privacy; improve data model performance/accuracy.
- Identify valuable data sources and automate collection processes.
- Undertake pre-processing of structured and unstructured data.
- Analyse large amounts of information to discover trends and patterns.
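Of the models named in this role's tag list (logistic regression, ARIMA, CNN/RNN), linear regression is the simplest baseline. As an illustrative refresher (not part of the posting), simple linear regression y = a + b*x has a closed-form least-squares fit:

```python
def fit_line(xs, ys):
    """Closed-form ordinary least squares for y = a + b*x."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x  # intercept passes through the mean point
    return a, b

a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
# These points lie exactly on y = 1 + 2x, so a == 1.0 and b == 2.0
```

In practice a library such as scikit-learn or statsmodels would be used; the closed form above is the idea those tools implement for the one-variable case.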

BigData Engineer

Maxdata Solutions

Big Data Spark Scala PySpark Hive HBase
Currently we are hiring a Data Engineer.
Job Location: Mumbai / Bangalore / Noida
- Hands-on experience with programming languages: Python, Java, Scala.
- Passionate and knowledgeable about big data stacks.
- Distributed systems: Spark (PySpark), Hadoop, Presto, Hive, etc.
- Message queueing systems: Kafka, RabbitMQ, NSQ, etc. are good to have.
- Databases (relational and NoSQL): PostgreSQL, MySQL, MongoDB, etc.
- Experience gathering and analyzing system requirements.
- In-depth understanding of database structure principles, data warehousing, data mining concepts and segmentation techniques.
- Experience with cloud computing platforms (AWS, GCP, etc.) and UNIX environments.
- Experience with AWS services such as EMR, Lambda, Step Functions, S3 and Redshift is a plus.
- Experience in designing, implementing and monitoring big data analytics solutions.
- Fast learning capability and natural curiosity about big data.
- DevOps/DataOps skills are a plus.
- Background: field of study is Computer Science (preferred) or any other graduation degree.
If you are interested, please share your updated resume with Prakash Rathod.

GCP Data Architect

Veiksme Tech Limited

  • 3 - 9 yrs
  • 50+ Lakh/Yr
  • United Kingdom
Bigtable Cloud Composer Hive HBase Oozie TWS Agile
- Be part of a multi-disciplined team building a cloud data platform, spanning better data and better tooling.
- Bring the right mix of deep technical expertise in on-premise and cloud data management platforms, agile development and soft skills.
- Take responsibility for owning and providing the necessary strategic thought leadership, technical assurance, data integration and data governance related to the development, evolution and delivery of the enterprise information architecture domain, as aligned to the business.
- Own data architecture designs for data movement and apply best practices to scale the migration.
Skills Required:
- Good understanding of GCP services such as Dataproc, Dataflow, Bigtable and Cloud Composer.
- Good understanding of Hadoop applications such as Spark, Hive, HBase, Oozie and TWS.
- Must have delivered multiple large projects with GCP BigQuery and ETL processes.
- Should have worked on Agile projects.
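The "data movement" responsibility this role owns typically means migrating tables in bounded batches so a job can load, commit and resume incrementally rather than moving everything in one transaction. A minimal, tool-agnostic sketch (the chunk size and plain-list rows are assumptions, not from the posting):

```python
def chunked(rows, chunk_size):
    """Yield successive fixed-size batches from an iterable of rows,
    so a migration job can load and commit one batch at a time."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == chunk_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch

list(chunked(range(7), 3))  # [[0, 1, 2], [3, 4, 5], [6]]
```

Real migrations to BigQuery would layer this under a managed loader (e.g. load jobs driven by Cloud Composer), but the batching logic is the same idea.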

Data Engineer

Maxdata Solutions

Big Data Spark Scala Impala HBase Kafka MongoDB PostgreSQL RabbitMQ Sqoop
Currently we are hiring a Data Engineer.
Job Location: Mumbai / Bangalore / Noida
- Hands-on experience with programming languages: Python, Java, Scala.
- Passionate and knowledgeable about big data stacks.
- Distributed systems: Spark (PySpark), Hadoop, Presto, Hive, etc.
- Message queueing systems: Kafka, RabbitMQ, NSQ, etc. are good to have.
- Databases (relational and NoSQL): PostgreSQL, MySQL, MongoDB, etc.
- Experience gathering and analyzing system requirements.
- In-depth understanding of database structure principles, data warehousing, data mining concepts and segmentation techniques.
- Experience with cloud computing platforms (AWS, GCP, etc.) and UNIX environments.
- Experience with AWS services such as EMR, Lambda, Step Functions, S3 and Redshift is a plus.
- Experience in designing, implementing and monitoring big data analytics solutions.
- Fast learning capability and natural curiosity about big data.
- DevOps/DataOps skills are a plus.
- Background: field of study is Computer Science (preferred) or any other graduation degree.
If you are interested, please share your updated resume with Prakash Rathod.