Job
- Level: Senior
- Job Field: IT, Data, DevOps
- Employment: Full-time
- Contract Type: Permanent employment contract
- Salary: €70,000 to €75,000 gross/year
- Location: Vienna
- Work Model: Hybrid, Onsite
Job Summary
In this role, you will design and implement scalable real-time and batch data pipelines using Databricks on Azure, collaborating with various teams and driving continuous improvement.
Your Role in the Team
- Our Real-Time Data Engineering team builds low-latency, high-throughput pipelines that power operational and analytical use cases as well as user-facing features.
- As we expand our ecosystem by integrating Databricks alongside our existing Cloudera infrastructure, this role plays a key part in shaping a modern, scalable data foundation.
- Joining a small, collaborative, and international team, you'll contribute to solutions that enable innovation across the business - all in an environment that values humor, curiosity, continuous learning, and accountability without blame.
- Primarily design and implement robust, scalable, low-latency streaming and batch data pipelines using Databricks on Azure.
- Occasionally develop real-time data pipelines using Cloudera tools such as NiFi, Kafka, Spark, Kudu, and Impala.
- Collaborate with data scientists, analysts, engineers, product managers, and third-party partners to deliver reliable, high-quality data products.
- Contribute to continuous improvements in infrastructure, automation, observability, and deployment practices.
- Document as needed and share knowledge with other team members.
- Be part of a monthly on-call rotation (1 week per month) to support the stability of our real-time data infrastructure.
What We Expect from You
Qualifications
- Strong knowledge of the Azure cloud ecosystem, including Azure Data Lake and Azure DevOps.
- Proven expertise in real-time data technologies such as Kafka and Spark Structured Streaming, plus solid SQL skills (including query tuning and optimization).
- Proficiency in Python or Scala for large-scale data engineering tasks.
- Familiarity with the medallion architecture and Delta Lake best practices.
- Familiarity with CI/CD workflows for data engineering (Azure DevOps, GitHub Actions, or similar) is an advantage.
- Knowledge of data governance, schema evolution, and GDPR/data privacy in pipelines is beneficial.
- Understanding of observability tools for data pipelines (logging, alerting, metrics) adds value.
- Fluent English (spoken and written) with excellent communication skills (clear, concise, and audience-aware).
Experience
- Several years of data engineering experience building batch and real-time data pipelines.
- 3+ years of hands-on experience with Databricks is essential.
- Experience developing data pipelines with the Cloudera stack is a plus.
- Experience in the online gaming, entertainment, or e-commerce industries is a bonus.
Benefits
Health, Fitness & Fun
Food & Drink
Work-Life Integration
About Your Employer
Greentube Internet Entertainment Solutions GmbH
Raaba-Grambach, Vienna
Greentube, NOVOMATIC's global interactive business unit, is the world's leading full-service provider in the online and mobile gaming sector and a pioneer in the development and delivery of state-of-the-art gaming solutions.
Description
- Languages: English
- Company Type: Established company
- Work Model: Hybrid, Onsite
- Industry: Internet, IT, Telecom
Dev Reviews
by devworkplaces.com
- Overall (1 review): 3.5
- Working Conditions: 4.3
- Engineering: 3.0
- Culture: 4.0
- Career Growth: 3.0