Role Overview
We are looking for an AI Engineer to maintain and enhance the AI-driven backbone of the Sootra platform. This role involves ensuring production stability of LLM/VLM pipelines, optimizing model interactions, maintaining APIs and queues, and building feedback loops that continuously improve AI outputs.
Responsibilities
- Maintain and optimize LLM- and VLM-powered services for content generation, compliance scoring, and campaign testing.
- Manage and scale Flask/FastAPI microservices, ensuring high uptime and low latency.
- Maintain Dramatiq queues for async AI workflows, campaign generation, and pipeline orchestration.
- Deploy, monitor, and debug Uvicorn/Gunicorn-based hosting in production environments.
- Integrate with OpenRouter and equivalent LLM routing tools to balance cost, latency, and quality.
- Design and refine prompt engineering strategies for reliability, context-awareness, and compliance.
- Build and maintain feedback pipelines for AI model evaluation (human-in-the-loop scoring, automated quality checks, reinforcement).
- Expose and maintain REST APIs for AI services, ensuring secure, versioned endpoints.
- Collaborate with backend/frontend teams to keep microservice architecture aligned and maintainable.
- Track token consumption, latency, and error rates to ensure production-grade performance.
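To illustrate the routing responsibility above, here is a minimal sketch of cost-aware provider selection with fallback, in the spirit of OpenRouter-style routing tools. The provider names, prices, and the `call_fn` interface are illustrative assumptions, not a real API:

```python
import time

# Hypothetical provider table; names and per-token costs are illustrative.
PROVIDERS = [
    {"name": "cheap-model",   "cost_per_1k_tokens": 0.0005},
    {"name": "mid-model",     "cost_per_1k_tokens": 0.003},
    {"name": "premium-model", "cost_per_1k_tokens": 0.015},
]

def route_completion(prompt, call_fn, max_attempts=3):
    """Try providers from cheapest to most expensive, falling back on failure.

    call_fn(provider_name, prompt) -> completion text; it is an assumed
    interface standing in for a real client SDK.
    """
    errors = []
    for provider in sorted(PROVIDERS, key=lambda p: p["cost_per_1k_tokens"])[:max_attempts]:
        try:
            started = time.monotonic()
            text = call_fn(provider["name"], prompt)
            latency = time.monotonic() - started
            return {"provider": provider["name"], "text": text, "latency_s": latency}
        except Exception as exc:  # a real system would distinguish retryable errors
            errors.append((provider["name"], str(exc)))
    raise RuntimeError(f"all providers failed: {errors}")
```

A production router would also weigh observed latency and output quality per provider, not just price; this sketch shows only the fallback skeleton.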
Required Skills
- Programming: Strong in Python, with experience in production-grade codebases.
- Frameworks: Flask (for APIs), FastAPI (optional); Uvicorn (ASGI) and Gunicorn (WSGI / worker management) for production hosting.
- Queues/Workers: Dramatiq (or Celery/RQ equivalent) for background jobs.
- AI/ML: Hands-on with LLMs and VLMs, including prompt engineering, fine-tuning, and evaluation.
- AI Infrastructure: Familiar with OpenRouter or equivalent LLM/VLM routing & fallback tools.
- Architecture: Experience designing and maintaining microservice architectures.
- APIs: Strong experience with REST API design (auth, rate limiting, documentation).
- Production: Dockerized deployments, CI/CD pipelines, logging/monitoring, error handling.
- Feedback Loops: Building structured evaluation/feedback systems for AI model performance.
- Cloud: AWS/GCP experience preferred (deployment, monitoring, scaling).
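The "track token consumption, latency, and error rates" duty can be sketched as a small in-memory aggregator. Class and field names here are illustrative; a production service would export these counters to a monitoring backend (e.g. Prometheus or CloudWatch) rather than hold them in process memory:

```python
from collections import defaultdict

class UsageTracker:
    """Per-endpoint accumulator for calls, tokens, latency, and errors."""

    def __init__(self):
        self.stats = defaultdict(lambda: {"calls": 0, "errors": 0,
                                          "tokens": 0, "latency_s": 0.0})

    def record(self, endpoint, tokens=0, latency_s=0.0, error=False):
        s = self.stats[endpoint]
        s["calls"] += 1
        s["tokens"] += tokens
        s["latency_s"] += latency_s
        if error:
            s["errors"] += 1

    def summary(self, endpoint):
        s = self.stats[endpoint]
        calls = max(s["calls"], 1)  # avoid division by zero for unseen endpoints
        return {"error_rate": s["errors"] / calls,
                "avg_latency_s": s["latency_s"] / calls,
                "total_tokens": s["tokens"]}
```

Wiring `record()` into the request path of each AI endpoint gives the error-rate and latency numbers the role calls for.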
Experience
- 3–5 years as an AI Engineer or Python Backend Engineer working with production systems.
- Prior work with SaaS platforms, LLM/VLM integrations, or AI-first products is highly valued.
- Demonstrated ability to maintain AI pipelines in production, not just prototypes.
What We Do
WE ARE MARRINA
One of the most time-consuming yet highly valuable aspects of the marketer's role is email production. Our dedicated team of experienced email and landing-page professionals, along with our well-tested development and QA processes, creates high-impact responsive email campaigns. As a leading email marketing agency, let us help you achieve excellent results with quicker execution for those much-needed flawless emails, empowering you to achieve your goals.

OUR PROVEN PROCESS
Over 54% of companies have six or more emails in production at one time, and 31% put less than half a week of work into each one. With so many balls in the air at once, it's crucial to have a tried-and-true process for getting emails planned, created, and launched on time.

STRATEGY: FULLY EQUIPPING YOU FOR THE VOYAGE
Thoroughly defining your email marketing campaign strategy and goals guides the direction of your campaign and makes it easier to measure the success of your efforts. We bring expertise to the table: asking the right questions, collecting all the requirements in an organized way, and proceeding with skilled, efficient execution and campaign management.

DEVELOPMENT: ASSURING QUALITY & SMOOTH SAILING
Our experts create custom, scalable, and responsive email and landing-page templates for optimal performance. But all this work is wasted if there are mistakes, so our thorough QA process is vital for surfacing any issues. Configuration details, campaign members, email tests, and schedule information are presented to you for final pre-launch approval.

LAUNCH: SETTING SAIL AND STAYING ON COURSE
Once approved, launch monitoring ensures the send went out as scheduled, on time, and flawlessly. Post-launch performance reporting and A/B-test analysis, delivered within a day of sending and updated after several days, highlight testing results, tweaks, successes, and ROI.