
Hadoop Job Vacancies in Pune

  • 8 - 10 yrs
  • Pune
Kafka Scala Spark Hadoop Airflow Data Lakes Kappa Kappa ++ Architectures RDBMS NoSQL Cassandra Redis Oracle
Sr. Big Data Engineer
Location: Pune | Experience: 10+ years | Mode: Hybrid

Role Overview: We are seeking a talented Sr. Big Data Engineer to design, develop, and support a highly scalable, distributed SaaS-based Security Risk Prioritization product. You will lead the design and evolution of our data platform and pipelines, providing technical leadership to a team of engineers and architects.

Key Responsibilities:
  • Provide technical leadership on data platform design, roadmaps, and architecture.
  • Design and implement scalable architecture for Big Data and microservices environments.
  • Drive technology explorations, leveraging knowledge of internal and industry prior art.
  • Ensure quality architecture and design of systems, focusing on performance, scalability, and security.
  • Mentor and provide technical guidance to other engineers.

Required Skills & Technologies:
  • Mandatory: Kafka, Scala, Spark.
  • Big Data & data streaming: Spark, Kafka, Hadoop, Presto, Airflow, data lakes, and lambda, kappa, and kappa++ architectures with Flink data streaming.
  • Databases & caching: RDBMS, NoSQL, Oracle, Cassandra, Redis.
  • Search solutions: Solr, Elastic.
  • ML & automation: experience with ML model engineering and related deployment, scripting, and automation.
  • Architecture: in-depth experience with messaging queues and caching components.
  • Other skills: strong troubleshooting and performance benchmarking skills for Big Data technologies.

Qualifications:
  • Bachelor's degree in Computer Science or equivalent.
  • 8+ years of total experience, with 6+ years relevant.
  • 2+ years designing Big Data solutions with Spark.
  • 3+ years with Kafka and performance testing for large infrastructure.
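The kappa architecture this listing asks for treats a single append-only event log (typically Kafka) as the source of truth, with every derived view rebuilt by replaying that log. A minimal pure-Python sketch of the idea, with no Flink or Kafka dependency (all names here are illustrative, not from the posting):

```python
# Kappa sketch: one append-only log is the source of truth; any
# materialized view can be rebuilt at any time by replaying the log.
from collections import defaultdict

def replay(log):
    """Rebuild a per-key sum view by replaying the whole event log."""
    view = defaultdict(int)
    for event in log:
        view[event["key"]] += event["value"]
    return dict(view)

log = [
    {"key": "scan", "value": 1},
    {"key": "alert", "value": 2},
    {"key": "scan", "value": 3},
]
print(replay(log))  # {'scan': 4, 'alert': 2}
```

In a real deployment the replay would be a Flink or Spark Streaming job reading the Kafka topic from offset zero; the point is that reprocessing needs no separate batch layer, unlike lambda architecture.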
View all details

IT Trainer

Vijaya Management Services

  • 2 - 8 yrs
  • 5.0 Lac/Yr
  • Pune
Java Python Big Data Technologies Hadoop Spark PySpark Kafka Airflow Machine Learning Deep Learning Tableau Power BI
Training on Java, Python, Big Data technologies, Hadoop, Spark, PySpark, Kafka, Airflow, Machine Learning, Deep Learning, Tableau, and Power BI. Minimum experience: 2 to 3 years of training experience.
View all details

Salesforce Developer

Midbrains Technologies

  • 0 - 1 yrs
  • 2.0 Lac/Yr
  • Pune
Triggers Javascript Hadoop Salesforce CRM MySQL Visual Force Work From Home
Roles and Responsibilities:
  • Support business requirements and handle all the CRM needs of the client.
  • Provide customized solutions using the Salesforce platform.
  • Take care of requirement gathering, produce functional analysis, and facilitate customer workshops.
  • Communicate with project managers, clients, and technicians, and ensure efficient participation in all phases of development, from testing to maintenance.
  • Troubleshoot any bugs or attacks in the system.
  • Create timelines and development goals.

Interview Location: Pune (face-to-face interviews only)
Interview Address: 2nd floor, above HDFC Bank, Dange Chowk-Hinjewadi Road, near Pandit Petrol Pump, Wakad, Pune, Maharashtra 411033
View all details

Business Analyst

Billiton Services

  • 5 - 8 yrs
  • Pune
Business Analysis Hadoop HBase Hive Pig AI ML CNN RNN ARIMA SARIMA Automotive Functional Domain Flume Sqoop Oozie SQL Logistic Regression Linear Regression Time Series
  • Serve as advisor to senior business management on business data modelling strategies.
  • Understand technology products, vendor strategies, and customer preferences to deliver prognostics-based solutions.
  • Perform business data analysis and service delivery, particularly with respect to the use of data, trends, and directions.
  • Research and develop statistical learning models for data collected from vehicles during prognostics trials.
  • Collaborate with product management and engineering departments to understand company needs and devise possible solutions.
  • Establish and maintain contacts within business units to understand business activities, business drivers, requirements, and solution strategies and alternatives being considered and/or implemented.
  • Keep up to date with the latest technology trends and communicate results and ideas to key decision makers.
  • Carry out ongoing research and assessment of new analysis approaches for potential use within the enterprise, and implement new statistical or other mathematical methodologies as needed for specific models or analyses.
  • Optimize joint development efforts through appropriate database use and project design.
  • Develop data models and programs within data science; this forms the basis of exploring, analysing, and visualising data.
  • Translate complex functional and technical requirements into detailed designs.
  • Derive, define, and explicitly represent various artefacts within the enterprise framework, understanding the meanings and relationships between the various models.
  • Develop and maintain project-level and enterprise-level model consistency and integration, and maintain security and data privacy.
  • Improve data model performance and accuracy; identify valuable data sources and automate collection processes.
  • Undertake pre-processing of structured and unstructured data, and analyse large amounts of information to discover trends and patterns.
View all details


Data Engineer

The Caliber Hunt

ETL Hadoop Python AWS Spark Data Engineer Walk in
Technologies / Skills: Advanced SQL; Python and associated libraries like Pandas and NumPy; PySpark; shell scripting; data modelling; Big Data; Hadoop; Hive; ETL pipelines; and IaC tools like Terraform.

Responsibilities:
  • Communicate efficiently to coordinate with users, technical teams, and data solution architects.
  • Document technical design documents for given requirements or JIRA stories.
  • Communicate results and business impacts of insight initiatives to key stakeholders to collaboratively solve business problems.
  • Work closely with the overall Enterprise Data & Analytics architect and engineering practice leads to ensure adherence to best practices and design principles.
  • Assure quality, security, and compliance requirements are met for the supported area.
  • Develop fault-tolerant data pipelines running on a cluster.
  • Come up with scalable and modular solutions.

Required Qualifications:
  • 1-8 years of hands-on experience developing data pipelines for data ingestion or transformation using Python (PySpark) / Spark SQL in the AWS cloud.
  • Experience developing data pipelines and processing data at scale using technologies like EMR, Lambda, Glue, Athena, Redshift, and Step Functions.
  • Advanced experience writing and optimizing efficient SQL queries with Python and Hive, handling large data sets in Big Data environments.
  • Experience debugging, tuning, and optimizing PySpark data pipelines.
  • Good knowledge of PySpark data frames, joins, partitioning, parallelism, etc.
  • Understanding of the Spark UI, event timelines, DAGs, and Spark config parameters in order to tune long-running data pipelines.
  • Experience working in Agile implementations.
  • Experience with Git and CI/CD pipelines to deploy cloud applications.
  • Good knowledge of designing Hive tables with partitioning for performance.

Thanks and Regards,
HR Team
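The "partitioning, parallelism" requirement above refers to how Spark hash-partitions rows by key so that joins and aggregations can run independently per partition. A minimal pure-Python sketch of the mechanism (function and field names are illustrative; Spark does this per-key across executors):

```python
# Hash partitioning sketch: rows with the same key always land in the
# same partition, which is what makes per-key work embarrassingly parallel.
def hash_partition(records, num_partitions, key):
    """Assign each record to a partition by hashing its key column."""
    partitions = [[] for _ in range(num_partitions)]
    for rec in records:
        partitions[hash(rec[key]) % num_partitions].append(rec)
    return partitions

rows = [
    {"user": "a", "n": 1},
    {"user": "b", "n": 2},
    {"user": "a", "n": 3},
]
parts = hash_partition(rows, 4, "user")
# Both "user": "a" rows are guaranteed to share one partition, so an
# aggregation over that key never needs data from another partition.
```

Skewed keys are the classic failure mode: if one key dominates, its partition becomes a straggler, which is exactly what the Spark UI's event timeline mentioned above helps diagnose.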
View all details
Hadoop Hive Kafka Python Big Data Engineer JSON Work From Home
Brief about the Company: AdZapier Corporation is a global technology and enablement services company with a vision to transform data into value for everyone. Through a simple, open approach to connecting systems and data, we provide the data foundation for the world's best marketers by making it safe and easy to activate, validate, enhance, and unify data. We give marketers the ability to deliver relevant messages at scale and tie those messages back to actual results. Our products and services enable individual-based marketing, allowing our clients to generate a higher ROI and drive better omni-channel customer experiences.

Position Description: Join our Information Technology team, where you will work on new technologies and find ways to meet our customers' needs and make it easy for them to do business with us. You will use functional expertise to act as an advisor to management and make recommendations on more complex projects, applying professional concepts and company policies and procedures to solve a wide range of difficult problems creatively and practically.

Responsibilities:
  • Operate and administer the Cloudera Hadoop platform.
  • Work independently on day-to-day monitoring and operations of the Data Analytics platform.
  • Develop automation using scripting languages.
  • After initial training, handle critical operational tasks as well as on-demand requests.

Minimum Requirements:
  • 5+ years of experience in software development, including the Big Data analytics area.
  • Experience in Hadoop Big Data platform operations and administration.
  • High proficiency working with the Hadoop platform, including Hadoop, Hive, Spark/Scala, Java, Kafka, Flume, etc.
  • Experience with a scripting language such as Bash, Scala, or Python.
  • Good understanding of file formats including JSON, Parquet, Avro, and others.

Work Hours: 2:30 pm to 11:30 pm (Mon-Fri) (US shift)
View all details

Devops Engineer

Telasar Solutions Pvt. Ltd.

  • 1 - 6 yrs
  • Pune
SQL Database Administration Docker Kubernetes Prometheus Hadoop OpenStack Ansible DevOps Engineer Work From Home
Job openings for 20 DevOps Engineer positions in Pune with a minimum of 1 year of experience. Educational qualification: B.C.A or M.C.A, with good knowledge of SQL database administration, Docker, Kubernetes, Prometheus, Hadoop, OpenStack, Ansible, etc.
View all details

Salesforce Developer

epergne solutions

  • 5 - 9 yrs
  • Pune
Triggers Hadoop Salesforce CRM Salesforce Developer CRM SOAP
SFDC tools, configuration, CRM, SOAP, REST, APIs, SFDC Loader, Apex callouts (inbound/outbound).
View all details

Data Specialist

Perex Engineering Private Limited

Snowflake Python ETL Hadoop Big Data Data Specialist Work From Home
We have 20 vacant Data Engineer positions in Hyderabad, Bangalore, Chennai, and Pune. Experience required: 3 to 10 years. Educational qualification: any Bachelor's degree. Skills: Snowflake, Python, ETL, Hadoop, Big Data, etc.
View all details

Data Scientist

GSN Solutions LLC

  • 7 - 13 yrs
  • Pune
Statistics Python Machine Learning Business Intelligence Tableau SQL Programming Big Data Hadoop Private API JSON YAML XML Hypotheses Validation Tests Work From Home
What we expect:
  • Excellent knowledge of statistics, especially distributions, likelihood estimators, etc.
  • Hands-on experience with one of these programming languages: Python / R.
  • Extensive knowledge of machine learning concepts and techniques, along with knowledge of various algorithms and their use cases.
  • Experience using Business Intelligence (BI) tools like Tableau (or equivalent).
  • Hands-on experience with SQL programming on Microsoft SQL Server / MySQL.
  • Familiarity with Big Data / Hadoop (or equivalent).
  • Familiarity with a wide variety of data sources, including public or private APIs and standard data formats like JSON, YAML, and XML.
  • Ability to design and implement validation tests and hypotheses with supporting documentation.
  • Excellent documentation skills, including advanced use of Microsoft Excel, Word, and PowerPoint.
  • Ability to work independently with minimal input or assistance.
  • Ability to communicate effectively with all levels of audience, business and technical.
  • Good spoken and written English.

Responsibilities:
  • Gather and analyze data, using various types of analytics and reporting tools to detect patterns, trends, and relationships in data sets.
  • Establish best practices for collecting data using analysis tools and interpreting data.
  • Process, cleanse, and verify the integrity of data used for analysis.
  • Perform ad-hoc analysis and present results clearly using an appropriate medium.
  • Process and mine huge volumes of structured, semi-structured, and unstructured data using state-of-the-art methods to derive meaningful insights in an appropriate format.
  • Recommend tools, techniques, and practices across the organization to enhance knowledge.
  • Create complex predictive models using ML techniques and relevant tools.
View all details
Python SCALA JAVA AWS - EMR Hadoop Spark Kafka SQL NoSQL Data Architecture Data Structures Storm Flink
Responsibilities:
  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using open-source and AWS Big Data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders, including the executive, product, data, and design teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Work with data and analytics experts to strive for greater functionality in our data systems.

Qualifications:
  • Experience building and optimizing Big Data pipelines, architectures, and datasets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Experience interacting with customers and various stakeholders.
  • Strong analytical skills for working with unstructured datasets.
  • Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • Working knowledge of message queuing, stream processing, and highly scalable Big Data lakes.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.

Candidates should also have experience with the following software/tools:
  • Big Data technologies: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Data pipeline and workflow management tools: Airflow, NiFi, etc.
  • Cloud services: AWS (EMR, RDS, Redshift, Glue), Azure (Databricks, Data Factory), GCP (Dataproc, Pub/Sub).
  • Stream-processing systems: Storm, Spark Streaming, Flink, etc.
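The extract-transform-load pipeline shape this posting describes can be sketched in a few lines of pure Python. A real pipeline would read from Kafka or S3 and write to a warehouse; the stage names and sample data here are illustrative only:

```python
# Minimal ETL sketch: three stages wired extract -> transform -> load.
def extract():
    # Stand-in for reading raw records from a source system.
    return [{"amount": "10"}, {"amount": "oops"}, {"amount": "5"}]

def transform(rows):
    # Cast, validate, and drop bad records (basic data hygiene).
    clean = []
    for row in rows:
        try:
            clean.append({"amount": int(row["amount"])})
        except ValueError:
            continue  # in production, route to a dead-letter store instead
    return clean

def load(rows):
    # Stand-in for a warehouse write; here, just an aggregate.
    return sum(r["amount"] for r in rows)

total = load(transform(extract()))
print(total)  # 15
```

Keeping each stage a separate function with no shared state is what lets orchestration tools like Airflow retry or backfill one stage without rerunning the whole pipeline.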
View all details

Apply to 11 Hadoop Job Vacancies in Pune
