Muse Group
Muse Group empowers music makers. We create the world’s most popular apps for playing, recording, and composing music.
Through our innovative learning tools, expansive music catalogs, and free open-source software, we make it easier for millions — from beginners to experienced musicians — to be creative every day.
Our talented team of music lovers collaborates all over the globe, from Limassol to Seoul and Boston to Berlin. We’re an ambitious company with the drive and culture of a startup, with many more exciting Muse Group developments to come.
As we continue to grow, we are looking for a dedicated Data Engineer (DE) to focus on managing and optimizing our data pipelines, enabling more efficient and effective analytics.

Key Responsibilities:
- Construct and manage the architecture for data integration solutions, ensuring that it aligns with and supports business requirements.
- Design scalable, reliable, and efficient data structures and integration processes.
- Develop and implement new data solutions, including ETL.
- Manage a smooth transition from the current architecture to the target architecture, ensuring continuity and minimizing disruption to data services.
- Build and maintain the infrastructure required for effective data storage, processing, and provisioning to internal and external customers.
- Develop built-in solutions for data reconciliation to ensure data integrity and accuracy across systems and processes.
- Assemble large and complex data sets that adhere to both functional and non-functional business requirements.
Required Experience:
- Minimum of 4–5 years of hands-on experience deploying production-quality code.
- Proficiency in designing databases with massively parallel processing (MPP) appliances such as Druid, BigQuery, ClickHouse, PostgreSQL, and similar technologies.
- Professional experience in using Python, Java, or Scala for data processing, with a preference for Python.
- Hands-on experience in implementing ETL (or ELT) best practices at a large scale.
- Experience with streaming technologies such as Kafka, and with data platform technologies such as ClickHouse and MongoDB.
- Hands-on experience with data pipeline and workflow management tools such as Airflow, Luigi, Azkaban, and dbt.
- Expertise in Structured Query Language (SQL), including knowledge of query optimization techniques.
- Proficiency with the Git distributed version control system (DVCS).
- English language skills at B2 level or above.
- Skilled at dealing with constructive criticism and proficient in building relationships within the team to achieve common goals.