Data Architect Graduate Fresher Jobs in Gurgaon

There are currently no vacancies available for "Data Architect" in Gurgaon

If you are interested in future opportunities, please Post Your Resume



Browse "Data Architect" jobs in other locations of Haryana

MDM Azure Server Data Warehousing
Job Description
As a Data & Analytics Architect, you will lead key data initiatives, including cloud transformation, data governance, and AI projects. You'll define cloud architectures, guide data science teams in model development, and ensure alignment with data architecture principles across complex solutions. Additionally, you will create and govern architectural blueprints, ensuring standards are met and promoting best practices for data integration and consumption.
  • Strong cloud data architecture knowledge (preference for Microsoft Azure)
  • 8-10+ years of experience in data architecture, with proven experience in cloud data transformation, MDM, data governance, and data science capabilities
  • Design reusable data architecture and best practices to support batch/streaming ingestion; efficient batch, real-time, and near real-time integration/ETL; integrating quality rules; and structuring data for analytic consumption by end users
  • Ability to lead software evaluations including RFP development, capabilities assessment, formal scoring models, and delivery of executive presentations supporting a final recommendation
  • Well versed in the data domains (Data Warehousing, Data Governance, MDM, Data Quality, Data Standards, Data Catalog, Analytics, BI, Operational Data Store, Metadata, Unstructured Data, non-traditional data and multimedia, ETL, ESB)
  • Experience with cloud data technologies such as Azure Data Factory, Azure Data Fabric, Azure Storage, Azure Data Lake Storage, Azure Databricks, Azure AD, Azure ML, etc.
  • Experience with big data technologies such as Cloudera, Spark, Sqoop, Hive, HDFS, Flume, Storm, and Kafka
View all details
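The "integrating quality rules" requirement in the listing above can be made concrete with a small sketch. This is a minimal, library-agnostic illustration in plain Python, not any vendor's API; the rule names and record fields are hypothetical, and a platform such as Azure Data Factory would express the same idea declaratively.

```python
# Minimal sketch of applying data quality rules during batch ingestion.
# All rule names and record fields are hypothetical illustrations.

def not_null(field):
    """Rule: the given field must be present and non-null."""
    return lambda rec: rec.get(field) is not None

def in_range(field, lo, hi):
    """Rule: the given numeric field must fall within [lo, hi]."""
    return lambda rec: rec.get(field) is not None and lo <= rec[field] <= hi

QUALITY_RULES = [not_null("customer_id"), in_range("amount", 0, 1_000_000)]

def ingest(records, rules=QUALITY_RULES):
    """Split a batch into records that pass all rules and quarantined ones."""
    clean, quarantined = [], []
    for rec in records:
        (clean if all(rule(rec) for rule in rules) else quarantined).append(rec)
    return clean, quarantined

batch = [
    {"customer_id": 1, "amount": 250.0},
    {"customer_id": None, "amount": 10.0},   # fails not_null
    {"customer_id": 2, "amount": -5.0},      # fails in_range
]
clean, quarantined = ingest(batch)
```

Quarantining failed records rather than dropping them silently is what lets downstream teams audit and repair bad data, which is the point of governing quality rules at ingestion time.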

Big Data Architect

NMS Consultant

  • 8 - 14 yrs
  • Gurgaon
Architect Hadoop CI CD Design Development Big Data
Required Skills (must have):
  • Strong knowledge/hands-on experience of the offerings and features of big data technologies (especially Hadoop, Hortonworks)
  • Strong experience of development using Spark Scala, Java, JavaScript, NiFi, Kafka, Hive, HBase
  • Strong knowledge of API development
  • Strong knowledge of Java frameworks (Spring MVC, Spring Security)
  • Hands-on knowledge of implementing multi-staged CI/CD with tools like AWS DevOps, Jenkins, Bitbucket
  • Experience in CI/CD integration within the Java/JavaScript ecosystem with build tools like Maven, Grunt, Gulp and other DevOps tooling: Jenkins, GitLab, SonarQube, Gerrit, SBT, Nexus, Docker
  • Experience of Agile methods (Scrum, Kanban)
  • Active contributions to forums and the dev community
Required Skills (should have):
  • Knowledge of Elasticsearch/Kibana (ELK)
  • Knowledge of Linux, Unix, Windows environments
  • Strong knowledge of various app monitoring tools
  • Strong knowledge of web services (WSDL, SOAP, RESTful)
  • Exposure to various data visualization tools such as Power BI, Tableau, Pentaho, etc.
  • Experience with MySQL, NoSQL (MongoDB, Redis, DynamoDB)
  • Scripting skills: strong scripting (e.g. Python) and automation skills
  • Operating systems: Windows and Linux system administration
  • Monitoring tools: experience with system monitoring tools (e.g. Nagios)
  • Problem solving: ability to analyze and resolve complex infrastructure resource and application deployment issues
View all details

AWS Data Engineer Lead / Architect

Vision Excel Career Solutions

Python Data Architect Data Engineer AWS
Are you a mid/senior-level T-shaped AWS expert specializing in DevOps and data engineering? If yes, we have an exciting opportunity just for you. One of our reputed European clients is looking for AWS engineers to help them build secure, resilient, and cost-effective solutions on the AWS platform to reap the benefits of their investment in AWS platform and services. We are looking for self-motivated, highly experienced engineers possessing great analytical and excellent communication skills for this client-facing role.
What do we expect from you?
Role: Data Engineer (AWS)
Mandatory:
  • Experience in developing data pipelines that process large volumes of data using Python, PySpark, Pandas, etc., preferably on AWS
  • Experience in ingesting batch and streaming data from various data sources
  • Experience in writing complex SQL using any RDBMS (Oracle, PostgreSQL, SQL Server, etc.)
  • Experience in developing ETL, OLAP-based, and analytical applications
  • Ability to quickly learn and develop expertise in existing highly complex applications and architectures
  • Comfortable working in Agile projects
Desirable:
  • Exposure to the AWS platform's data services (AWS Lambda, Glue, Athena, Redshift, Kinesis, etc.)
  • Knowledge of DevOps and CI/CD tools
  • Experience in handling unstructured data
  • Knowledge of the financial markets domain
Keywords: Data Engineer, Data Pipelines, Data Ingestion, AWS Lambda, AWS Athena
View all details
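The "data pipelines plus complex SQL" combination in the listing above can be sketched in a few lines. This toy ETL uses the stdlib sqlite3 module in place of a production RDBMS such as PostgreSQL; the table and column names are hypothetical, and the window-function query stands in for the kind of "complex SQL" the role describes.

```python
# Toy ETL sketch: load raw rows, then transform with a window-function
# query. sqlite3 stands in for a production RDBMS; names are hypothetical.
import sqlite3

# "Extract": raw rows as they might arrive from an upstream source.
raw_rows = [
    ("2024-01-01", "EU", 120.0),
    ("2024-01-01", "US", 250.0),
    ("2024-01-02", "EU", 80.0),
]

# "Load" into a staging table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (day TEXT, region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", raw_rows)

# "Transform" expressed in SQL: total revenue per region, ranked with a
# window function (requires SQLite 3.25+, bundled with Python 3.7+).
query = """
SELECT region,
       SUM(amount) AS total,
       RANK() OVER (ORDER BY SUM(amount) DESC) AS rnk
FROM sales
GROUP BY region
"""
totals = {region: (total, rnk) for region, total, rnk in conn.execute(query)}
```

Pushing the aggregation into SQL rather than Python keeps the heavy lifting inside the database engine, which is the usual reason these roles pair pipeline skills with RDBMS skills.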
Data Bricks Data Governance Data Warehouse Developer
  • 10+ years of experience in a technical role with expertise in data governance and data warehousing, such as setting up a DataBricks-as-a-Service model
  • Production deployment experience with data governance solutions and hands-on experience with cloud data lakes
  • Experience with design and implementation of data warehousing technologies
  • Deep specialty expertise in scaling big data workloads that are performant and cost-effective, including technologies such as Delta Lake
  • Support customers by authoring reference architectures, how-tos, and demo applications
  • Experience working with enterprise accounts
  • Integrate Databricks with 3rd-party applications to support customer architectures
  • Experience designing and implementing architectures within public clouds (AWS, Azure, or GCP)
  • Good communication skills
If you are interested, kindly drop us a mail to reach us.
View all details

Python SCALA JAVA AWS - EMR Hadoop Spark Kafka SQL NoSQL Data Architecture Data Structures Storm Flink
Responsibilities
  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional/non-functional business requirements
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using open source and AWS big data technologies
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
  • Work with stakeholders including the executive, product, data, and design teams to assist with data-related technical issues and support their data infrastructure needs
  • Work with data and analytics experts to strive for greater functionality in our data systems
Qualifications
  • Experience building and optimizing big data pipelines, architectures, and datasets
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
  • Experience interacting with customers and various stakeholders
  • Strong analytical skills related to working with unstructured datasets
  • Build processes supporting data transformation, data structures, metadata, dependency, and workload management
  • Working knowledge of message queuing, stream processing, and highly scalable big data lakes
  • Strong project management and organizational skills
  • Experience supporting and working with cross-functional teams in a dynamic environment
They should also have experience using the following software/tools:
  • Big data technologies: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including Postgres and Cassandra
  • Data pipeline and workflow management tools: Airflow, NiFi, etc.
  • Cloud services: AWS (EMR, RDS, Redshift, Glue), Azure (Databricks, Data Factory), GCP (Dataproc, Pub/Sub)
  • Stream-processing systems: Storm, Spark Streaming, Flink, etc.
View all details
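The "dependency and workload management" phrase in the listing above is the core idea behind workflow tools like Airflow: tasks declare their upstream dependencies and a scheduler runs them in a valid order. This is a toy sketch using the stdlib graphlib module (Python 3.9+); the task names are hypothetical and a real runner would invoke actual work at each step.

```python
# Toy DAG runner sketching what Airflow-style dependency management does:
# each task lists its upstream tasks, and execution follows a
# topological order. Task names are hypothetical.
from graphlib import TopologicalSorter

# task -> set of upstream tasks that must finish first
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
    "report": {"load"},
}

def run(dag):
    """Execute tasks in dependency order, returning the execution log."""
    log = []
    for task in TopologicalSorter(dag).static_order():
        log.append(task)  # a real runner would invoke the task's work here
    return log

execution_log = run(dag)
```

Declaring dependencies as data, rather than hard-coding call order, is what lets workflow tools retry a failed task and resume mid-pipeline without rerunning everything upstream.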