Research Product Manager – AI Systems & Structured Data

Reposted 2 Days Ago
Mountain View, CA, USA
In-Office
160K-250K Annually
Mid-level
Artificial Intelligence • Big Data • Cloud • Machine Learning • Software • Business Intelligence • Data Privacy
Better Data for Better AI
The Role
As a Research Product Manager, you will oversee complex research programs, turning technical ideas into execution plans and aligning research with production systems to enhance AI capabilities.
Location: Downtown Mountain View, CA
Employment Type: Full-time
Work Model: On-site (5 days per week)
Department: Research

The Mission

Granica is building the next generation of efficient AI infrastructure.

Today’s AI systems are limited not only by model design but by the inefficiency of the data that feeds them. At enterprise scale, redundant data, inefficient representations, and poorly optimized learning pipelines create enormous cost, latency, and energy waste.

Granica’s mission is to eliminate that inefficiency.

We combine advances in information theory, machine learning, and distributed systems to design infrastructure that continuously improves how information is represented, compressed, and used by AI.

Granica’s research effort is led by Prof. Andrea Montanari (Stanford) and focuses on advancing learning systems that operate efficiently on large-scale structured and tabular data.

While much of the AI industry focuses on text and media models, Granica is building systems that learn directly from structured enterprise data—the operational data that powers the global economy.

Granica’s work sits at the intersection of learning theory, AI infrastructure, and large-scale data systems — an area far less explored than modern LLM development.

The Role

The Research Product Manager sits at the intersection of research, systems engineering, and product strategy.

Your role is to help transform foundational advances in structured AI into production infrastructure and durable platform capabilities.

You will coordinate the path from research insight → experimental system → production deployment, ensuring that modeling advances translate into scalable systems and measurable economic value.

This is not a traditional product management role. It is designed for someone who can:

  • Understand how large AI systems are trained, deployed, and maintained

  • Work closely with researchers and engineers on deep technical problems

  • Translate modeling advances into production systems and platform strategy

Technical Judgment & Decision Ownership

You will participate in decisions about:

  • Which research directions should be productionized

  • How models are trained, evaluated, and deployed in real systems

  • Where infrastructure investment produces the highest leverage

  • How modeling advances translate into platform capabilities

You do not need to be the primary implementer, but you must be able to:

  • Reason about machine learning systems and large-scale data infrastructure

  • Challenge assumptions and propose technical alternatives

  • Make prioritization decisions alongside researchers and engineers

This role works best for candidates comfortable operating close to the technical core of AI systems.

What You’ll Do

Translate research into production systems
  • Work with Research Scientists and Applied AI Engineers to transform modeling advances into scalable systems

  • Define how structured AI models are trained, evaluated, and deployed

  • Design training and evaluation workflows operating over large structured datasets

  • Define model lifecycle processes including retraining cadence, monitoring, and schema evolution

Define infrastructure and system architecture
  • Help design training infrastructure for large tabular and relational models

  • Define evaluation harnesses and benchmarks for structured AI systems

  • Work with engineering teams to optimize data pipelines, training loops, and inference systems

  • Identify system bottlenecks across compute, storage, and data movement

Connect research to economic value
  • Identify where modeling improvements create economic advantage

  • Help define how research capabilities translate into platform features

  • Model infrastructure trade-offs across compute cost, training efficiency, and performance

  • Work with leadership to prioritize research directions with the highest long-term impact

Shape the research execution engine
  • Coordinate research priorities with engineering and product strategy

  • Identify which modeling advances should be productionized and scaled

  • Ensure the path from prototype → system → platform capability is clear and efficient

What You’ll Bring

Technical Depth
  • Strong technical background in machine learning, distributed systems, or data infrastructure

  • Ability to engage deeply with researchers and engineers on complex technical topics

  • Understanding of how modern ML systems are trained, evaluated, and deployed

Systems Thinking
  • Familiarity with ML infrastructure, distributed training systems, or data platforms

  • Ability to reason about data layout, compute scheduling, model lifecycle, and system bottlenecks

  • Experience working with systems operating on large structured datasets

Product and Strategy Mindset
  • Ability to translate technical capabilities into platform features and economic value

  • Comfort operating in research-driven environments with ambiguous problem definitions

  • Strong communication skills and ability to align research, engineering, and product teams

Bonus
  • Experience working in AI infrastructure, ML platforms, or large-scale data systems

  • Background in computer science, machine learning, mathematics, physics, or engineering

  • Familiarity with structured data systems such as Parquet, Iceberg, or Delta Lake

  • Experience supporting research environments such as AI labs or ML infrastructure teams

  • Experience helping move research prototypes into production systems

This Role Is Not

This role is not a traditional product management position.

It is not primarily focused on:

  • Consumer AI products

  • Prompt engineering or LLM application features

  • Roadmap coordination or delivery management

  • Marketing or go-to-market ownership

Instead, this role focuses on translating frontier research into production AI infrastructure and system capabilities.

Successful candidates typically have experience working close to machine learning systems, research teams, or AI infrastructure platforms.

Who Thrives In This Role

People who succeed in this role often come from backgrounds such as:

  • ML infrastructure engineers who transitioned into product leadership

  • AI platform or ML systems product managers

  • Research engineers working closely with ML research teams

  • Early engineers or technical founders in AI infrastructure startups

  • Technical operators from research labs translating experiments into production systems

The common thread is the ability to connect research ideas, system architecture, and economic impact.

Why This Role Matters

The world’s most valuable data is structured.

Most AI systems today are not designed to learn from it efficiently.

Granica is building the systems that close this gap.

As a Research Product Manager, you will help define how frontier research becomes durable infrastructure—shaping the systems that enable AI to learn efficiently from the data that runs the global economy.

This role offers:

  • Direct collaboration with frontier research teams

  • Ownership of how research becomes production capability

  • Influence over both technical direction and platform strategy

Compensation & Benefits
  • Competitive salary, meaningful equity, and substantial bonus for top performers

  • Flexible time off plus comprehensive health coverage for you and your family

  • Support for research, publication, and deep technical exploration

At Granica, you will shape the fundamental infrastructure that makes intelligence itself efficient, structured, and enduring. Join us to build the foundational data systems that power the future of enterprise AI!

Top Skills

AI
Large Tabular Models
Machine Learning
Structured Data
The Company
HQ: Mountain View, California
37 Employees
Year Founded: 2023

What We Do

Massive-scale data should be an asset — not a liability. At Granica, we’re building a state-of-the-art AI efficiency, data optimization and compression platform designed to make cloud-scale data cheaper, faster, safer and more intelligent. As enterprises generate and store unprecedented volumes of data, traditional infrastructure can’t keep up — costs explode, performance lags, and privacy becomes harder to enforce. Granica changes that. We sit beneath the lakehouse — optimizing the data itself through advanced lossless compression, intelligent data selection, and built-in privacy preservation. Our platform enables enterprises to cut cloud data costs by up to 80% while improving performance and reducing operational complexity. This is deep infrastructure — already deployed in live production environments, operating across hundreds of petabytes of enterprise data.

Why Work With Us

We’re a tight-knit team combining:

  • Fundamental research in compression, data systems, and information theory

  • World-class systems engineering across storage, cloud infrastructure, and security

  • A shared obsession with performance, scale, and clean design

Granica Offices

Hybrid Workspace: Employees engage in a combination of remote and on-site work. Typical time on-site: not specified.

HQ: Mountain View, California
India
