Position Purpose:
A proficient AI Engineer will join our IT team as an AI Systems Engineer, playing a pivotal role in our transition to an AI-driven company. Your work will encompass designing, developing, and shipping production AI solutions across ML models and LLM systems, including AI agents, RAG, Agentic AI, and Agentic RAT (Agentic RAG / Retrieval-Augmented Tooling), using Azure, OpenAI/Azure OpenAI, and Google Gemini.
Roles and responsibilities:
AI Agents + Agentic AI (Hands-on)
- Build tool-using agents that execute multi-step tasks: planning, tool calling, verification, retries/fallbacks, and audit logs.
- Implement agent orchestration (graph/state machine patterns), deterministic controls, and human-in-the-loop escalation.
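As a minimal sketch of the loop these bullets describe (plan, tool-call, verify, retry/fallback, audit log), the tool names, verification rule, and `Agent` class below are illustrative placeholders, not part of the actual stack:

```python
# Hypothetical tool-using agent loop: call a tool, verify the result,
# retry on failure, and escalate to a human as the fallback.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    tools: dict[str, Callable[[str], str]]
    audit_log: list[str] = field(default_factory=list)  # audit trail of every attempt

    def run(self, tool_name: str, arg: str, verify: Callable[[str], bool],
            max_retries: int = 2, fallback: str = "escalate-to-human") -> str:
        for attempt in range(1, max_retries + 1):
            result = self.tools[tool_name](arg)         # tool call
            self.audit_log.append(f"attempt={attempt} tool={tool_name} result={result}")
            if verify(result):                          # verification step
                return result
        self.audit_log.append(f"fallback={fallback}")   # human-in-the-loop escalation
        return fallback

agent = Agent(tools={"lookup": lambda q: q.upper()})    # stand-in tool
out = agent.run("lookup", "order-42", verify=lambda r: r.startswith("ORDER"))
```

In production the deterministic controls live in the orchestration layer (graph/state machine), with this loop as one node.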
RAG + Agentic RAT (Agentic RAG / Retrieval-Augmented Tooling)
- Build RAG pipelines end-to-end: ingestion, chunking, embeddings, vector/hybrid retrieval, reranking, citations, grounded responses.
- Implement Agentic RAG: retrieval and tool-use loops (“retrieve, reason, tool-call, verify, respond”) with confidence scoring.
- Tune retrieval quality: metadata filters, hybrid search, prompt grounding, evaluation datasets, and regression tests.
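A hedged sketch of the retrieve-and-respond-with-confidence pattern above; naive keyword-overlap scoring stands in for vector/hybrid search, and the corpus, threshold, and function names are illustrative only:

```python
# Toy retrieval with a confidence score and citation, refusing to answer
# (so a fallback loop or escalation can kick in) when confidence is low.
def retrieve(query: str, corpus: dict[str, str], top_k: int = 1):
    q = set(query.lower().split())
    scored = sorted(
        ((len(q & set(text.lower().split())) / len(q), doc_id)
         for doc_id, text in corpus.items()),
        reverse=True,
    )
    return scored[:top_k]

def answer(query: str, corpus: dict[str, str], min_confidence: float = 0.5):
    (confidence, doc_id), = retrieve(query, corpus)
    if confidence < min_confidence:     # low confidence: re-retrieve or escalate
        return {"answer": None, "citation": None, "confidence": confidence}
    return {"answer": corpus[doc_id], "citation": doc_id, "confidence": confidence}

corpus = {"kb-1": "refunds take five business days",
          "kb-2": "shipping is free over fifty dollars"}
result = answer("refunds take", corpus)
```

Grounded responses come from returning the citation (`doc_id`) alongside the answer, so every claim can be traced back to a source chunk.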
Data Science + Machine Learning (Hands-on)
- Own end-to-end ML: problem framing, EDA, feature engineering, training, validation, deployment, and monitoring.
- Build ML models (classification/regression/ranking/forecasting) using scikit-learn and/or PyTorch/TensorFlow.
- Apply rigorous evaluation: cross-validation, leakage prevention, bias checks, calibration, thresholding, lift/uplift analysis.
- Create production-grade feature pipelines (batch + real-time where needed) and ensure reproducibility.
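As an illustration of the evaluation discipline above, here is a stdlib-only sketch of k-fold cross-validation; in practice this would use scikit-learn's `KFold`/`cross_val_score`, and the mean-predictor "model" is purely for shape:

```python
# Fit only on the training folds (leakage prevention), score on the
# held-out fold, and report a score per fold.
from statistics import mean

def k_fold_mae(y, k=3):
    folds = [list(range(i, len(y), k)) for i in range(k)]
    scores = []
    for held_out in folds:
        train = [y[i] for i in range(len(y)) if i not in held_out]
        prediction = mean(train)    # "model" fit on training data only
        scores.append(mean(abs(y[i] - prediction) for i in held_out))
    return scores

y = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
scores = k_fold_mae(y, k=3)
```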
ML Deployment + MLOps (Hands-on)
- Deploy ML models as APIs/batch jobs (FastAPI/Azure Functions/containers) that meet performance and reliability targets.
- Implement MLOps: CI/CD for training + deployment, experiment tracking (MLflow or equivalent), model registry/versioning, rollback.
- Production monitoring: model drift, data quality checks, performance degradation alerts, latency/cost monitoring.
- Write runbooks, on-call-friendly dashboards, and incident playbooks for model failures.
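To make the monitoring bullet concrete, here is a hedged sketch of a drift check that compares live feature means against a training baseline; real systems would use PSI or KS tests wired into alerting, and the feature names and threshold here are illustrative:

```python
# Flag features whose live mean has shifted past a relative threshold
# versus the training-time baseline (a simple drift alert).
from statistics import mean

def drift_alerts(baseline: dict[str, float], live: dict[str, list[float]],
                 threshold: float = 0.2) -> list[str]:
    alerts = []
    for feature, base_mean in baseline.items():
        shift = abs(mean(live[feature]) - base_mean) / (abs(base_mean) or 1.0)
        if shift > threshold:
            alerts.append(feature)
    return alerts

baseline = {"basket_value": 50.0, "session_length": 10.0}
live = {"basket_value": [48.0, 52.0], "session_length": [15.0, 17.0]}
alerts = drift_alerts(baseline, live)
```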
Cloud + Model Providers
- Deploy on Azure: Blob/ADLS, Key Vault, Azure AI Search (vector/hybrid), App Service/AKS/Functions, App Insights.
- Use OpenAI/Azure OpenAI and Google Gemini with provider abstraction, prompt/version governance, and rate-limit handling.
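A minimal sketch of provider abstraction with rate-limit handling: one retry wrapper over swappable backends. The `FlakyProvider` class and `RateLimited` error are illustrative stand-ins, not real SDK calls:

```python
# Retry with exponential backoff when a provider rate-limits; any backend
# exposing .complete(prompt) can be swapped in behind the same interface.
import time

class RateLimited(Exception):
    pass

class FlakyProvider:
    """Hypothetical backend that rate-limits the first call."""
    def __init__(self):
        self.calls = 0
    def complete(self, prompt: str) -> str:
        self.calls += 1
        if self.calls == 1:
            raise RateLimited("429")
        return f"echo: {prompt}"

def complete_with_retry(provider, prompt: str, max_retries: int = 3,
                        base_delay: float = 0.01) -> str:
    for attempt in range(max_retries):
        try:
            return provider.complete(prompt)
        except RateLimited:
            time.sleep(base_delay * 2 ** attempt)   # exponential backoff
    raise RuntimeError("rate limit retries exhausted")

reply = complete_with_retry(FlakyProvider(), "hello")
```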
Physical Demands:
Not Applicable.
Preferred Qualifications and Minimum Job Requirements (Education, Experience, Skills):
- Experience with large language models such as GPT-4/5 and Gemini
- Experience with Azure AI and Google Gemini
- Proficiency in Python and modern development environments including Git, Anaconda, pip, Docker, and cloud services
- Ability to develop production-ready standalone libraries beyond "notebook code"
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience
- Individual contributor mindset, with strong problem-solving and communication skills
- Demonstrable previous work with LLM interfaces, sharing code repositories if applicable during the interview process
- Hands-on AI agents + RAG + Agentic RAG/RAT in production (not just prototypes)
- Strong Python engineering + proven delivery of production systems.
- Hands-on DS/ML: built models, validated them rigorously, and deployed them.
- Hands-on MLOps: pipelines, versioning, monitoring, drift detection, rollback.
Afni, Inc. Compensation & Benefits Highlights
The following summarizes recurring compensation and benefits themes identified from responses generated by popular LLMs to common candidate questions about Afni, Inc.; it has not been reviewed or approved by Afni, Inc.
- Strong & Reliable Incentives: Incentive programs (performance-based and referral) are available in many roles and can add to earnings when targets are met. The pay-and-bonus structure is described as a positive aspect in certain positions.
- Leave & Time Off Breadth: Paid time off for holidays and vacations is part of the package. Training periods also include additional holiday time, indicating a defined time-off offering.
- Wellbeing & Lifestyle Benefits: Tuition reimbursement up to $5,250, a casual “Dress for Your Day” policy, and on-site events provide added non-cash value. Full-time schedules and referral rewards are also positioned as perks.
Afni, Inc. Insights
What We Do
When people reach for their phones, use laptops, or grab their tablets, Afni's contact center teams are there to provide prompt and friendly help. That's Afni. We're a global team of people who love helping companies develop meaningful and profitable relationships with customers. In 1936, we got our start in Bloomington, Illinois as a consumer collections agency. Today, we're so much more. Our channel strategies and customer lifecycle solutions give our clients ways to connect with their customers for many reasons, using their customers' channels of choice.
