What you’re going to do

  • Design and build infrastructure for large-scale extraction, preparation, and loading of data from a variety of sources, turning information into insights across multiple platforms.
  • Work closely with other data and analytics team members to design, develop, maintain and evaluate big data solutions.
  • Develop prototypes and proofs of concept for the selected solutions.
  • Extract data from a variety of sources such as relational databases, NoSQL databases, and distributed file systems.
  • Write clean, well-engineered, maintainable code that conforms to accepted standards.
  • Develop quality code through unit and functional testing
  • Participate in the iteration planning and team standup meetings
  • Work with talented and determined engineers and designers to shape the future of big data solutions
  • Discover continuous learning opportunities in your everyday activity
  • Add your own mix of flavors to our dynamic and innovative team
  • Connect with passionate people in our open and friendly environment.

What we’re looking for

Qualities

We think a continuous drive for self-improvement and self-motivation is essential. Rather than resisting change, we count on you to reshape your mindset and embrace the new in your daily craft. Your initiative and accountability will open doors much faster, and we trust you’ll do your best to be productive and efficient.

Your positive, team-oriented attitude will support you in working well with your colleagues. Good communication skills will help you create stronger connections. The secret ingredient to succeeding in a rapidly expanding environment is being highly organized and able to balance multiple simultaneous projects. Whatever the (technical) problem, use your skills to be part of the solution.

The difference between something good and something great will be your extreme attention to detail and the consistency of your work. Working independently, with little supervision, will unlock more of your creativity and encourage you to reach your potential. Your passion for big data will fuel your inspiration to come up with original ideas on how to get things done. All of this will make a major impact on your results.

Qualifications

To complete the ideal candidate profile, you need to have:

  • BS or higher in Computer Science or related discipline
  • 1.5+ years of experience with software development and database concepts in general;
  • Experience with object-oriented and/or functional scripting languages such as Python,
    Java, or Scala;
  • Experience with relational database internals, including both query processing and
    query planning, or other data processing infrastructure;
  • Basic knowledge of key data structures and algorithms;
  • Knowledge of data modeling and understanding of different data structures and their
    benefits and limitations under particular use cases;
  • Proficient understanding of distributed computing principles;
  • Mandatory experience with Apache Spark and HDFS; Apache Kafka, Kerberos, and
    Elasticsearch are a plus;
  • Experience with NoSQL databases, such as HBase, Cassandra or MongoDB, preferably
    HBase;
  • Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala, preferably Hive;
  • Experience managing a Hadoop cluster, with all its included services, is a plus;
  • Good knowledge of data warehousing solutions;
  • Ability to quickly familiarise yourself with unfamiliar code in order to analyse and improve it;
  • Experience with version control software (preferably Git);
  • Experience with Agile methodologies;
  • Good English skills (written and spoken).