Position ID 7102017
Date Posted 2017-06-22

Title: Sr. Hadoop Developer (LOCALS PREFERRED-F2F MUST)                 

Duration: Long term (6 months)

Location: Washington, DC

Term: Contract

Experience required: 10 years overall, including 4-5 years working in Hadoop ecosystem environments (Java/Python, Spark, Hive, Oozie, Sqoop, Flume)

Visa: US Citizens & Green Card Holders only. (No H1B)

 Must be able to obtain Public Trust Clearance


Responsibilities:

  • Accept tasking and update JIRA with the progress of assigned tasks
  • Write scripts to move data from various systems and various file types into relational database systems (e.g., Oracle, Netezza, Hadoop)
  • Advise on best-of-breed approaches for database instances in Netezza, Oracle, and Hadoop, drawing on in-depth knowledge of database strategies across platforms
  • Collaborate with the team to design and propose architectural designs and models (logical and physical)
  • Write scripts to parse data from source systems into Hadoop (see the sketch after this list)
  • Expected to be tool-agnostic and able to customize tools as needed to support customer requirements
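
Illustrative only: a minimal PySpark sketch of the kind of source-to-Hadoop load script described above, assuming a pipe-delimited extract landed on HDFS and Hive support on the cluster. All paths and table names are placeholders, not customer specifics.

    # Minimal PySpark sketch: load a delimited source extract from HDFS and
    # persist it as a Parquet-backed Hive table. Paths, schema handling, and
    # table names below are illustrative placeholders.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("source-to-hadoop-ingest")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Read a pipe-delimited extract dropped by an upstream source system.
    source_df = (
        spark.read
        .option("header", "true")
        .option("sep", "|")
        .csv("hdfs:///landing/source_system/extract/")   # placeholder path
    )

    # Light parsing/cleanup before loading: standardize column names.
    cleaned_df = source_df.toDF(
        *[c.strip().lower().replace(" ", "_") for c in source_df.columns]
    )

    # Write into the warehouse as Parquet so Hive/Impala can query it directly.
    (
        cleaned_df.write
        .mode("overwrite")
        .format("parquet")
        .saveAsTable("staging.source_extract")            # placeholder table
    )

    spark.stop()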


Required Experience & Skills:

  • Must have 10 years of experience functioning as a Developer
  • Must have 4-5 years of experience working in Hadoop ecosystem environments (Java/Python, Spark, Hive, Oozie, Sqoop, Flume)
  • Senior-level developer experience with Java on Hadoop
  • MPP database experience (e.g., Redshift, Netezza)
  • Demonstrated senior-level development skills in Hadoop 2
  • Demonstrated knowledge of using various file types in Hadoop (Avro, ORC, Parquet); see the sketch after this list
  • Possess an in-depth knowledge of data retrieval and loading processes
  • Excellent communication skills with both team members and customers
  • Verifiable experience/familiarity with one or more widely used Hadoop distributions (IBM InfoSphere BigInsights is a plus)
  • Understanding of and familiarity with how HDFS and MapReduce work
  • Extensive experience with widely used Hadoop tools such as Apache Hive, Pig, and Impala
  • Verifiable experience with Apache Spark is desired
  • Demonstrated experience with cloud-based Hadoop and data warehouse technologies, such as Amazon EMR and Redshift
  • Experience with Hadoop data ingestion, data processing, and data modeling processes
  • Real-life/hands-on experience designing and delivering Hadoop-based big data platforms/solutions is highly desirable
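
Illustrative only: a minimal PySpark sketch of reading the Hadoop file formats named above. Paths are placeholders, and the Avro reader assumes the spark-avro package is available on the cluster (it ships separately from core Spark); ORC and Parquet support are built in.

    # Minimal PySpark sketch: read Parquet, ORC, and Avro datasets from HDFS.
    # All paths below are illustrative placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("file-format-demo").getOrCreate()

    parquet_df = spark.read.parquet("hdfs:///data/events_parquet/")        # built in
    orc_df = spark.read.orc("hdfs:///data/events_orc/")                    # built in
    avro_df = spark.read.format("avro").load("hdfs:///data/events_avro/")  # needs spark-avro

    # Quick sanity check that the three copies line up on row count.
    for name, df in [("parquet", parquet_df), ("orc", orc_df), ("avro", avro_df)]:
        print(name, df.count())

    spark.stop()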


** Interested candidates please call Susan @ 703-834-5565 or email: jobs@masterinformatix.com  **

Education Bachelor's or Master's degree from an accredited institution.
Compensation MARKET
Position Type CONTRACT