We are looking for a Big Data platform engineer to design and develop our Big Data application infrastructure. This role involves developing and supporting the various Big Data applications and frameworks on the BlueData EPIC software platform — including installation, configuration, and management of distributed file systems, job execution frameworks, NoSQL databases, key-value stores, and SQL-on-Hadoop systems.
Desired Skills & Experience/Qualifications
- BS or MS in Computer Science or equivalent
- Experience with large data sets and distributed computing
- Hands-on experience with the Hadoop stack (e.g. MapReduce, Sqoop, Pig, Hive, HBase, Flume), Spark, Kafka, and other Big Data frameworks
- Experience working with large Linux clusters
- Hands-on experience with production Hadoop systems (e.g. administration, configuration management, monitoring, debugging, and performance tuning)
- Knowledge of NoSQL platforms (e.g. key-value stores)
- Knowledge of data warehousing and Business Intelligence systems
- Hands-on experience with open source software platforms and languages (e.g. Java/Scala, Python)
- Knowledge of cloud computing infrastructure (e.g. Amazon Web Services EC2, Amazon Elastic MapReduce)
- Self-starter and fast learner with the ability to work in a fast-paced environment
To apply, send your resume to firstname.lastname@example.org.