Senior Data Engineer

Our customer is looking for a fully remote Senior Data Engineer (full-time) to join their Data Engineering team in Berlin. In this role, you will play a pivotal part in empowering millions of individuals worldwide to explore new languages and cultures. As part of our customer's diverse and supportive team, you'll enjoy flexible work hours and remote options while taking the lead in shaping the data architecture and designing innovative data models.

The Data Engineering team is focused on creating simple yet powerful systems that process streaming and batch data to power the users' app experience and drive insightful analytics. You will specialize in handling streaming data from the apps, as well as ETL pipelines and data models that integrate with core product features. These services are crucial for understanding how users interact with the app, tracking time spent, and counting activities as they happen.
You will:
  • Spearhead the development and optimization of a data collector, focusing on event and real-time streaming use cases
  • Lead the design and implementation of data models to support business needs at scale, using dbt, Snowflake, and the Databricks Lakehouse Platform
  • Collaborate with data analysts and data scientists to deliver high-quality data solutions
  • Build a data observability and monitoring solution for data streaming services
  • Design, build, and maintain APIs and SDKs that give internal teams self-service access to data products and services
  • Participate in knowledge sharing sessions
  • Mentor and coach other data engineers, fostering a culture of learning and growth

You have:
  • A computer science or related engineering degree and 5+ years of experience with Python, with a focus on building data pipelines
  • In-depth SQL knowledge and extensive experience working with dbt
  • Attention to detail, a strong sense for data, and a deep commitment to ensuring data quality
  • Solid experience in dimensional modeling, data warehousing, and Spark Streaming
  • Hands-on experience with cloud data warehouses, ideally Snowflake or Databricks Lakehouse Platform
  • Experience with AWS services (ECS, Lambda, S3, DynamoDB, Kinesis, etc.), operations, and architecture
  • Experience with Infrastructure as Code (preferably Terraform)
  • Deep understanding of API development and experience building SDKs for internal tooling or third-party use
  • Strong communication skills and eagerness to participate in cross-functional projects to support the development of Data Products
  • Ability to write clear documentation and debug data issues effectively

Nice to have:
  • Experience in backend development and deployment
  • Experience organizing knowledge sharing sessions and mentoring other engineers
  • Experience building customer-facing data observability and reporting systems with tools such as Amazon QuickSight or Datadog
  • Solid understanding of data governance principles and best practices
  • Designing data architecture for a domain or a whole company

Here is the current process and timeline:
  • Send over profiles by Thursday next week (May 2)
  • Interview slots will be reserved for the following week (note: there are some holidays in Berlin, so the interviews may extend into the week after)
  • There will be two interviews, conducted by the customer
  • Work is targeted to start in the third week of May


Work can be done fully remotely and in English!
  • Locations: Remote
  • Technologies: Snowflake, Amazon Web Services (AWS), Terraform, Data Pipelines, Databricks, Spark, ETL, Python, Data Warehouse, SQL