Imagine a workplace that encourages you to take on responsibility and where your ideas will be heard and implemented. Imagine a fast-paced environment where your performance makes the difference.
You will be part of an international, talented, and motivated team that combines its knowledge of machine learning, natural language processing, and big data technologies.
You are the perfect candidate if you enjoy researching, prototyping, and fine-tuning algorithms to solve novel problems. Our engineers are willing to go the extra mile to make sure our solution is not only good, but great and beneficial for millions of travelers. You will systematically look for weaknesses and areas for improvement in our approaches, and will be motivated by them to create even better solutions.
What challenges await you?
As a member of the Metareview team, you will work in a cross-discipline delivery team focused on one of many core data products.
Gather and process raw data at scale using frameworks such as Hadoop MapReduce and Spark.
Maintain and write new data processing pipelines handling hundreds of GB of data.
Optimize and improve existing features or data processes for performance and stability.
Apply machine learning algorithms to improve our product and drive decisions.
What do we expect from you?
3+ years of experience building data-intensive applications.
Very strong programming and architectural experience, ideally in Python, Java, or Scala, but we are open to other backgrounds if you would like to become a Python hacker.
You find creative solutions to tough problems. You are not only a great developer but also an architect who is not afraid to pave the way for bigger and better things.
Experience cleaning and scrubbing noisy datasets.
Experience building data pipelines and ETL processes using MapReduce, Spark, or Flink.
Good to have:
Expert-level knowledge of Python. Experience with frameworks such as Pandas, scikit-learn, SciPy, and Luigi/Airflow is a plus.
Love for the command line, with an optional affinity for Linux scripting.
Experience with big data technologies (Hadoop, Spark, Flink, Hive, Impala, HBase, Pig, Redshift, Kafka).
Experience building scalable REST APIs using Python or similar technologies.
Experience with data mining, machine learning, natural language processing, or information retrieval is a plus.
Experience with AWS or other IaaS/PaaS.
Experience with Agile methodologies such as Scrum or Kanban.
What do we offer?
One free adventure day per year
Flat hierarchies & lots of freedom
Hadoop cluster with 100 nodes at your disposal
Free fruit, snacks & drinks
Legendary beer o'clock Fridays to celebrate the week's successes
Ship your code to production & see it influence millions of travelers
Organized sport groups
Free German classes
Company pension scheme
Regular team events