- 5+ years of experience in Software Engineering, with good communication skills.
- At least 4 years of hands-on experience in Hadoop, Hive and Oozie.
- At least 2 years of hands-on experience in Apache Spark programming.
- At least 2 years of experience working with AWS technologies.
- Good knowledge and understanding of current big data technologies.
- At least 5 years of experience working on RDBMS platforms such as Oracle and SQL Server, with expert-level SQL/PL-SQL programming skills.
- 3 years of experience working with Java/Python and experience in writing shell scripts.
Good knowledge and hands-on experience with Big Data platforms covering:
- Hadoop data platform/tools experience on any one of these distributions: Cloudera / Hortonworks
- Must have hands-on development experience with the Apache Spark framework/APIs (1.6 / 2.x): Streaming, Core, SQL
- Must have a strong programming background, preferably in Scala or Python, along with SQL in a Big Data context
- Must have hands-on experience with Big Data ingestion tools such as Kafka, Sqoop, and Flume, with integration against databases, message queues, and flat files
- Basic knowledge of NoSQL/Hadoop storage engines (Hive / HBase / Impala / Kudu) and Hadoop file formats (Avro / Parquet)
- Basic working knowledge of Hadoop connectors across components, e.g., Kafka > Spark, Sqoop > HDFS, Flume > Kafka, Spark > HBase, Spark > Impala
- Good analytical and problem-solving skills, with proven communication skills and experience working in customer-facing roles.
- Cloudera Certified Developer certification is preferred.