Job
- Level
- Experienced
- Job Field
- Data, Database
- Employment
- Full-time
- Contract Type
- Permanent employment
- Salary
- from €3,843 gross/month
- Location
- Linz
- Work Model
- Hybrid, Onsite
Job Summary
In this role, you will develop data models and robust ETL pipelines, optimize data structures in cloud environments, and ensure data integrity in line with governance requirements.
Your Role in the Team
- Design and implement effective data models and table structures across various storage systems, including relational databases, NoSQL stores, data warehouses, and data lakes.
- Build, maintain, and optimize robust data pipelines (ETL/ELT) to ingest, transform, and load data from production systems and external sources.
- Use workflow orchestration tools to schedule, automate, and monitor data pipelines, ensuring their reliability and performance.
- Define and implement data quality standards and processes (e.g., bronze, silver, gold tiering), including handling missing values and ensuring data integrity, accuracy, and completeness.
- Establish and enforce data governance policies and procedures, manage data lineage and metadata, implement access controls and encryption, and support compliance with data privacy regulations (e.g., GDPR, CCPA).
- Implement and manage scalable data platforms (data warehouses, data lakes) to support efficient analytics, feature engineering, and model training for AI applications.
- Conduct statistical analyses and evaluations of datasets, and develop dashboards or monitoring systems to track pipeline health and data quality metrics.
- Collaborate closely with AI Engineers, AI Software Engineers, QA Engineers, and Data Analysts to understand data requirements and deliver reliable, high-quality data solutions.
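The bronze/silver/gold tiering mentioned in the responsibilities above can be sketched as a minimal Python example. All record fields, cleaning rules, and the aggregation are hypothetical, chosen purely for illustration; a real pipeline would run these steps inside an orchestrated ETL/ELT framework.

```python
# Minimal sketch of bronze -> silver -> gold data-quality tiering.
# Field names ("user_id", "amount") and rules are hypothetical.

def to_silver(bronze_rows):
    """Bronze -> silver: drop records with missing required fields
    and normalize value types."""
    silver = []
    for row in bronze_rows:
        if row.get("user_id") is None or row.get("amount") is None:
            continue  # handle missing values by filtering the record out
        silver.append({"user_id": str(row["user_id"]),
                       "amount": float(row["amount"])})
    return silver

def to_gold(silver_rows):
    """Silver -> gold: aggregate cleaned records into an
    analytics-ready summary table."""
    totals = {}
    for row in silver_rows:
        totals[row["user_id"]] = totals.get(row["user_id"], 0.0) + row["amount"]
    return totals

bronze = [
    {"user_id": 1, "amount": "10.5"},
    {"user_id": None, "amount": "3.0"},  # incomplete record, dropped in silver
    {"user_id": 1, "amount": 2.5},
]
gold = to_gold(to_silver(bronze))
print(gold)  # {'1': 13.0}
```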
Our Expectations of You
Qualifications
- Strong proficiency in SQL and Python for data manipulation, automation, and pipeline development.
- Familiarity with big data tools and frameworks such as Apache Spark, Kafka, or Hadoop.
- Solid understanding of data modeling, ETL/ELT development, and data warehousing concepts.
- Proficiency with version control systems (e.g., Git).
- Excellent problem-solving skills and high attention to detail.
Experience
- Proven experience as a Data Engineer, including designing and building data pipelines and infrastructure.
- Hands-on experience with cloud platforms (e.g., GCP, Azure) and their respective data services (e.g., BigQuery, Azure Data Factory, Databricks).
- Experience with data quality management and data governance principles.
Benefits
- Health, Fitness & Fun
- Work-Life Integration
About Your Employer
TeamViewer GmbH
TeamViewer is the world's leading provider of remote connectivity solutions, enabling users to connect to anything, anytime, anywhere.
Description
- Company Size
- 50-249 Employees
- Languages
- English
- Company Type
- Established Company
- Work Model
- Hybrid, Onsite
- Industry
- Internet, IT, Telecom
