• Should have strong expertise in Extraction, Transformation and Loading (ETL) using Informatica Big Data Management, including the various pushdown modes with the Spark, Blaze and Hive execution engines.
• Should have strong expertise in dynamic mapping use cases, development, and deployment using Informatica Big Data Management.
• Should have experience transforming and loading complex data source types, such as unstructured and NoSQL data sources.
• Should have strong expertise in the Hive database, including Hive DDL, partitioning, and Hive Query Language (HQL).
• Should have a good understanding of the Hadoop ecosystem (HDFS, Spark, Hive).
• Should have strong expertise in SQL and PL/SQL.
• Should have good working knowledge of Oracle/Sybase/SQL databases.
• Should have good knowledge of data lake and dimensional data modeling implementations.
• Should be able to understand requirements and write Functional Specification Documents, Design Documents, and Mapping Specifications.