Big Data Engineer


Athens, Attica, Greece

We are Kaizen Gaming

Kaizen Gaming is the leading GameTech company in Greece and one of the fastest-growing in Europe, with the Stoiximan brand in Greece and Cyprus and Betano in Germany, Romania, Bulgaria, the Czech Republic, Portugal, Brazil, Chile, Peru, Ecuador and Canada. Our aim is to leverage cutting-edge technology to provide the optimal experience to those who trust us for their entertainment.

The Team

At Kaizen, our aim is to make data-driven decisions in order to automate our services while also offering tailored customer experiences. Our machine learning team is dedicated to this mission, building a variety of models ranging from binary classification to recommendation systems. We focus on transforming business needs into production applications, covering a wide range of business sectors, data types and project varieties.

Our teams comprise three roles: data scientists, machine learning engineers and data engineers, so that together they have the full skill set needed to deliver projects to production.

The Role

We’re looking for passionate and ambitious Data Engineers who can find creative solutions to challenging problems. The role will be part of a machine learning team, delivering optimized data and feature pipelines both for individual projects and for our core feature store infrastructure.


  • Design, implement and operate large-scale, high-volume, high-performance data structures;
  • Work closely with data scientists to optimize data queries for both real-time and batch sources;
  • Design big data architectures with a focus on scalability and performance for data science problems;
  • Support development of our real-time feature-generation infrastructure.


Must have:

  • 3+ years of experience with Spark Core;
  • Hands-on experience with Spark Structured Streaming;
  • 3+ years of hands-on experience in writing complex, highly-optimized SQL queries across large data sets;
  • 3-5 years of experience in Python programming;
  • Experience with workflow engines, e.g. Airflow;
  • Strong skills in teamwork, communication, and analytical thinking;
  • Knowledge of version control tools;
  • Fluency in English, both oral and written.

Nice to have:

  • Experience with Azure / Databricks;
  • Experience with feature engineering;
  • Experience with feature store design and implementation;
  • Experience with Delta Lake;
  • Experience with Redis;
  • Experience with Kafka;
  • Experience with SQL and NoSQL databases.