The Role
As a Back-End Engineer at Alinia, you will build and maintain the backend systems for AI-powered products, ensuring scalability and security.
About the Role
As Alinia’s Back-End Engineer, you will play a foundational role in shaping the core infrastructure and backend systems powering our platform. We’re building critical safety and governance infrastructure for generative AI, and as one of the first engineers, you’ll be instrumental in ensuring that our backend is fast, scalable, secure, and enterprise-ready. This is a hands-on, high-impact role for someone who thrives in a fast-paced startup, can wear multiple hats, and wants to help define how the backend of AI-powered products should be built.
Responsibilities
- Maintain, scale, and secure our Python FastAPI backend and microservices
- Serve and optimize large language models (LLMs) via performant, cost-aware inference endpoints
- Design and build robust, scalable APIs to support product and customer use cases
- Set up and manage CI/CD pipelines, observability, and monitoring to ensure system reliability
- Manage cloud infrastructure and optimize spend, balancing performance and cost
- Implement security best practices and support compliance with frameworks like SOC 2 and ISO 27001
- Work closely with product and front-end engineers, translating user and product needs into scalable backend architecture
- Collaborate with technical vendors, AI providers, and cloud platforms on integrations and deployments
- Triage and resolve backend bugs or incidents, especially those affecting customer experience
- Support post-sales technical implementation and onboarding for enterprise customers
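To make the “performant, cost-aware inference endpoints” bullet above concrete, here is a minimal, purely illustrative Python sketch of routing a request to the cheapest model endpoint that can handle it. The endpoint names, prices, and the 4-characters-per-token heuristic are hypothetical, not Alinia’s actual stack.

```python
# Illustrative sketch only: a cost-aware router that picks an LLM inference
# endpoint by estimated price per request. All names and prices are hypothetical.

from dataclasses import dataclass


@dataclass
class Endpoint:
    name: str
    usd_per_1k_tokens: float  # hypothetical pricing
    max_tokens: int           # context-window limit


def estimate_tokens(prompt: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(prompt) // 4)


def cheapest_endpoint(prompt: str, endpoints: list[Endpoint]) -> Endpoint:
    """Return the cheapest endpoint whose context window fits the prompt."""
    tokens = estimate_tokens(prompt)
    candidates = [e for e in endpoints if e.max_tokens >= tokens]
    if not candidates:
        raise ValueError("prompt exceeds every endpoint's context window")
    return min(candidates, key=lambda e: e.usd_per_1k_tokens)
```

In practice this kind of routing sits behind the public API, so the choice of model stays invisible to customers while spend is kept in check.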
Requirements
- 3+ years as a backend engineer working on production systems
- Strong proficiency in Python and experience building with FastAPI or similar frameworks
- Deep experience with RESTful APIs, PostgreSQL, and Docker
- Solid understanding of backend infrastructure, including deployment, CI/CD, observability, and debugging
- Experience working with or integrating LLM inference pipelines
- Familiarity with cloud platforms (e.g. AWS, GCP, or Azure)
- Strong understanding of security, data privacy, and compliance in an enterprise SaaS context
- Clear communicator and cross-functional collaborator
- Comfortable working in a startup environment with shifting priorities and open-ended problems
Nice to have:
- Experience building and scaling systems to support LLM-serving infrastructure
- Exposure to infrastructure-as-code (e.g. Terraform, Pulumi) and Kubernetes
- Prior experience with SOC 2, ISO 27001, or similar audits
- 0→1 startup experience and ability to navigate ambiguity with ownership
- DevOps mindset: build, monitor, fix, and improve continuously
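For a sense of how the stack named in the requirements (Python, FastAPI, Docker) typically fits together, here is a rough, hypothetical Dockerfile sketch; the file names and module path are placeholders, not Alinia’s actual setup.

```dockerfile
# Hypothetical Dockerfile for a FastAPI service; names are illustrative only.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# Run the ASGI server; "app.main:app" is a placeholder module path.
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```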
Why Join Alinia?
- Cutting-edge tech: Work on one of the most important challenges in AI: alignment, safety, and trust
- High ownership & growth: Be a core part of the early engineering team with massive upside
- Flexible work: Hybrid or remote work, with preference for CET time zone
- Collaborative culture: Small, experienced, mission-driven team
- Impact: Directly shape the technical foundation of an AI governance platform adopted by enterprises
Top Skills
AWS
Azure
Docker
FastAPI
GCP
PostgreSQL
Python
RESTful APIs
The Company
What We Do
Alinia enables companies to control & audit their AI Agents in a simple and scalable way. We give human experts superpowers to build their own AI QA validators, auditors, and guards through our no-code control platform and our family of proprietary compliance models.