- Full-stack development background with Java and JavaScript/CSS/HTML; knowledge of ReactJS/Angular is a plus
- Big data engineering experience with a solid background in the larger Hadoop ecosystem and real-time analytics tools, including PySpark, Spark with Scala, Hive, the Hadoop CLI, MapReduce, Storm, Kafka, and the Lambda architecture
- Comfortable working within the larger Hadoop ecosystem; familiar with job-scheduling challenges in Hadoop
- Experienced in creating and submitting Spark jobs
- Experienced with Kafka, Storm, and real-time analytics
- Core Java and Python/Scala background, including their related libraries and frameworks
- Experienced with the Spring Framework and Spring Boot
- Unix/Linux expertise; comfortable with the Linux operating system and shell scripting
- PL/SQL and RDBMS background with Oracle/MySQL; familiarity with ORMs is a plus
- Design, development, configuration, and unit/integration testing of web applications to meet business-process and application requirements
- Familiar with configuration-management/automation tools such as Ansible, Chef, or Puppet
- Comfortable with microservices, CI/CD, Docker, and Kubernetes; familiarity with AT&T's ECO platform is a plus
- Comfortable tweaking/using Jenkins and deployment orchestration; creating/modifying Docker images and deploying them via Kubernetes