Job
- Level
- Senior
- Job Field
- Data, Back End
- Employment
- Full-time
- Contract Type
- Permanent employment
- Location
- Vienna
- Work Model
- Onsite
Job Summary
In this role, you will develop cloud-native backend services that automate recruiting workflows, implement AI systems, and operate and scale them in the cloud.
Your Role in the Team
- Architect, build, and operate cloud-native backend services that power AI-driven recruiter workflows.
- Design and implement agentic AI systems using frameworks such as Google ADK, LangGraph, or similar, building multi-step reasoning loops, tool-use pipelines, and agent-to-agent (A2A) communication patterns for production recruiter automation.
- Build, deploy, and maintain MCP servers to expose backend capabilities as structured tool endpoints consumable by AI agents, ensuring schema correctness, session management, and tenant-safe execution.
- Design and deploy scalable AI/LLM services using containerization and orchestration technologies in cloud environments.
- Integrate LLM APIs, embedding services, and ML inference endpoints into distributed systems with strong API design, versioning, and fault tolerance.
- Implement asynchronous processing, event-driven architectures and durable state management for AI workflow orchestration.
- Build and maintain CI/CD pipelines to automate testing, deployment, and monitoring of AI-enabled services.
- Establish observability practices (metrics, tracing, logging, alerting) to monitor model performance, latency, cost, and reliability in production.
- Optimize inference workloads for performance, scalability, and cost efficiency, including autoscaling and concurrency management.
- Partner with Data Science to bring models to production, implement evaluation pipelines, and support model lifecycle management.
What We Expect from You
Qualifications
- Strong proficiency in a statically typed language such as Scala or Java is highly preferred.
- Familiarity with Model Context Protocol (MCP) or similar tool-serving standards.
- Practical understanding of NLP, LLM behavior, prompt design, retrieval-augmented generation (RAG), and structured output patterns.
- Familiarity with CI/CD pipelines, infrastructure-as-code (Terraform or similar), and automated deployment workflows.
- Understanding of model evaluation, monitoring, drift detection, and AI system observability in production environments.
- Awareness of responsible AI practices, data security, and compliance considerations when deploying AI systems at an enterprise scale.
- Ability to think and act quickly with minimal or no supervision.
- Self-driven, independent, creative, and eager to learn new skills.
- Ability to work effectively with incomplete information.
- Great communication skills.
Experience
- 5+ years of experience building and operating production backend systems, with hands-on exposure to AI/ML-powered applications preferred.
- Experience integrating LLM APIs, embedding models, or ML inference services into distributed systems.
- Experience building or consuming agent frameworks (e.g., Google ADK, LangGraph, or AutoGen) to orchestrate multi-step, tool-using AI agents in production environments.
- Experience deploying and scaling AI-enabled services in AWS/GCP cloud environments.
- Hands-on experience with containerization and orchestration (Docker, Kubernetes).
- Experience designing highly concurrent, fault-tolerant systems using async processing, queues, pub/sub, or event-driven architectures.
- Experience working in, and a desire to work with, an asynchronous, remote, and global team.
Your Employer
Radancy
We are the world's leading provider of recruiting technology that solves employers' most important challenges and delivers results that strengthen their businesses. We complement our recruiting platform with enriched data and deep industry expertise, revolutionizing the recruiting world and our customers' ability to engage and win top candidates.