The AI Data Platform Lead is the foundational technical role within AI Operations responsible for designing, building, and governing the cross-departmental data infrastructure that powers Agiloft's AI transformation. This role owns the full data engineering scope required to make the Data Warehouse Foundation serve not only business intelligence and reporting but also the complete spectrum of AI use cases — GPT assistants, AI agents, predictive analytics, real-time operational intelligence, and the contextual intelligence layer that underpins the organization's intelligent operating model.
This role is the prerequisite for all downstream data consumers — including BI and reporting functions — to operate effectively. The AI Data Platform Lead reports to the VP of AI Operations and is a core member of the AI Operations team. This role is allocated fully within AI Operations and is managed, roadmapped, and prioritized by the VP of AI Operations. Any allocation outside of the AI Operations-designated resource percentage requires explicit agreement with AI Operations leadership.
This role is distinct from and complementary to the Principal Data and Integrations Architect, who owns the infrastructure layer — DW architecture design, pipeline build and maintenance, source system integrations, and platform reliability. The AI Data Platform Lead operates at the layer above infrastructure: owning what the data means, how it is modeled for AI and analytics consumption, whether it is trustworthy and fit for purpose, and how it connects to the intelligence layer that GPT assistants, agents, and predictive models depend on. The analogy is direct: the Principal Data and Integrations Architect builds and maintains the roads. The AI Data Platform Lead owns where the roads go, what travels on them, and whether what arrives at the destination is clean, modeled correctly, and ready for AI consumption.
This is not a traditional data engineering or BI role. It sits at the intersection of data science, AI infrastructure, and data governance — requiring someone who understands that in an AI-first organization, data quality and data modeling are not merely reporting concerns. They are the foundation of every intelligent system the organization depends on.
Job Responsibilities
- Own the end-to-end data architecture for the Data Warehouse Foundation, designing for AI-first consumption across GPT assistants, AI agents, predictive models, and operational intelligence — in addition to BI and reporting.
- Lead data modeling across all 11 departments, designing canonical enterprise data models that serve cross-functional AI and analytics use cases without duplication or fragmentation.
- Design and implement the contextual intelligence layer — including RAG architecture, vector store strategy, knowledge base ingestion pipelines, and document and unstructured data processing — that powers Agiloft's enterprise knowledge system.
- Build and maintain the agentic data integration layer: real-time and near-real-time data access patterns, agent memory and state persistence design, orchestration data requirements, and agent output integration back into the warehouse.
- Own the AI/ML feature layer — feature engineering strategy and standards, training data pipeline design, feature store architecture, and model output integration — enabling predictive analytics across churn, pipeline health, and operational forecasting.
- Design and govern the operational data and GPT context layer, including structured context feed design for GPT assistants, data freshness and access SLAs for AI use cases, and cross-departmental data reuse standards.
- Lead the Data Warehouse Foundation build in partnership with the external consulting team — setting architecture standards, reviewing implementation against AI-first principles, and ensuring the five-wave build plan delivers a foundation that serves the full intelligence architecture.
- Design and manage data ingestion, ELT/ETL, and orchestration pipelines across all source systems, ensuring reliability, performance, and cost efficiency.
- Establish and enforce AI data engineering standards across the organization — prompt-adjacent data design, agent data access patterns, reusable pipeline components, and quality assurance processes.
- Own data access policy design and least-privilege access controls in partnership with Security, ensuring data made available to AI systems is governed, auditable, and compliant.
- Define data quality standards and monitoring processes for AI-consumed data, where quality failures have a direct impact on model and agent performance.
- Partner with the Principal Data and Integrations Architect on infrastructure design, ensuring data modeling and AI consumption requirements are incorporated into pipeline and architecture decisions from the start — not retrofitted after build.
- Partner with the VP of FP&A and the Manager of BI & Data to ensure the semantic and metrics layers are technically sound and serve both AI use cases and reporting requirements.
- Manage the AI Ops data architecture roadmap, translating business and AI use case requirements from all 11 departments into sequenced, prioritized technical work.
- Maintain documentation and knowledge transfer standards for all data architecture, pipelines, and integration patterns — ensuring AI Ops-built infrastructure is reusable, auditable, and not dependent on any single individual.
- Collaborate with the AI Agent Engineer and GPT & AI Systems Lead to ensure data infrastructure supports agent orchestration, retrieval-augmented generation, and multi-step reasoning workflows.
- Define the roadmap for data science and AI data work in partnership with the VP of AI Operations — this role does not take direction from IT on resource allocation or prioritization. All roadmapping is managed within AI Operations.
- Evaluate and recommend data tooling, frameworks, and platform components in alignment with AI Ops' technology-agnostic, build-for-leverage approach.
- Other duties as assigned.
Required Qualifications
- Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related technical field.
- 7–10 years of experience in data engineering, data architecture, or a related technical function, with at least 3 years focused on AI or ML data infrastructure.
- Deep expertise in modern data stack technologies — Snowflake required; experience with dbt, Airflow or an equivalent orchestration tool, and ELT/ETL pipeline design.
- Demonstrated experience designing data architecture for AI consumption — including vector databases, embedding pipelines, RAG systems, or feature stores — not only for BI and reporting.
- Strong data modeling skills across multiple paradigms: dimensional modeling, normalized models, and AI-optimized schemas for agent and model consumption.
- Experience building and operating real-time or near-real-time data pipelines for operational AI use cases.
- Proficiency in Python and SQL; experience with cloud data infrastructure on AWS required.
- Experience designing data access patterns and governance controls for AI systems, including least-privilege access, audit logging, and AI-specific data security considerations.
- Demonstrated ability to own cross-functional technical programs — translating requirements from multiple business domains into coherent, prioritized data architecture decisions.
- Strong communication skills with the ability to make complex data architecture decisions legible to non-technical executives and cross-functional stakeholders.
- SaaS industry experience.
Preferred Qualifications
- Experience in private equity-backed SaaS organizations.
- Experience with agentic AI frameworks — LangGraph, Mastra, or equivalent — and the data infrastructure requirements they create.
- Experience building or operating RAG architectures at production scale, including vector store selection, chunking strategy, retrieval optimization, and evaluation.
- Experience with agent memory architectures and state persistence design for multi-step AI workflows.
- Familiarity with AI governance and compliance requirements for data used in automated decision-making.
- Experience supporting investment board or executive-level AI progress reporting from a technical infrastructure perspective.
- Experience with Tines or equivalent no-code/low-code orchestration platforms for simple agent pipelines.
- Exposure to contract lifecycle management, legal tech, or professional services data domains.
Agiloft offers a comprehensive benefits package for US employees, including but not limited to the following:
- Medical, dental, and vision insurance
- Short-term and long-term disability
- Life insurance and AD&D
- Supplemental life insurance (Employee/Spouse/Child)
- Health care and dependent care Flexible Spending Accounts
- 401(k) with company match
- Paid time off: Flexible Vacation is provided to all eligible employees assigned to a salaried (non-overtime-eligible) position.
- Paid parental leave
- Voluntary benefits including pet insurance
What We Do
Agiloft is the global value leader in data-first contract lifecycle management (CLM), offering the industry’s only no-code platform with AI on the Inside™ to enhance efficiency, cut review times by up to 80%, and accelerate business. Its Data-first Agreement Platform (DAP) transforms contracts into strategic, data-rich assets, integrating with 1,000+ systems to drive decisions and efficiency. Trusted by brands like Alkermes, Balluff, and TaylorMade, Agiloft boasts a 96% renewal rate and 100% satisfaction for implementations. Backed by KKR, JMI Equity, and FTV Capital, Agiloft empowers businesses to drive smarter strategies, faster decision-making, and game-changing competitive advantage. Learn more at www.agiloft.com. We're hiring! To view our current job openings, please visit https://www.agiloft.com/jobs.htm.
Why Work With Us
We are a passionate group of humans dedicated to helping other humans thrive. We may work with contracts, but with careers at Agiloft, the most important contract we keep is the human contract, the commitment we have to each other.