Granica is an AI research and systems company building the infrastructure for a new kind of intelligence: one that is structured, efficient, and deeply integrated with data.
Our systems operate at exabyte scale, processing petabytes of data each day for some of the world’s most prominent enterprises in finance, technology, and industry. These systems are already making a measurable difference in how global organizations use data to deploy AI safely and efficiently.
We believe that the next generation of enterprise AI will not come from larger models but from more efficient data systems. By advancing the frontier of how data is represented, stored, and transformed, we aim to make large-scale intelligence creation sustainable and adaptive.
Our long-term vision is Efficient Intelligence: AI that learns using fewer resources, generalizes from less data, and reasons through structure rather than scale. To get there, we are first building the Foundational Data Systems that make structured AI possible.
The Mission
AI today is limited not only by model design but by the inefficiency of the data that feeds it. At scale, each redundant byte, each poorly organized dataset, and each inefficient data path slows progress and compounds into enormous cost, latency, and energy waste.
Granica’s mission is to remove that inefficiency. We combine new research in information theory, probabilistic modeling, and distributed systems to design self-optimizing data infrastructure: systems that continuously improve how information is represented and used by AI.
This engineering team partners closely with the Granica Research group led by Prof. Andrea Montanari (Stanford), bridging advances in information theory and learning efficiency with large-scale distributed systems. Together, we share a conviction that the next leap in AI will come from breakthroughs in efficient systems, not just larger models.
What You’ll Build and Lead
Lead a team of exceptional engineers building core systems across storage, compute, and metadata at planetary scale.
Stay close to the code (roughly 30–40% hands-on) while guiding hiring, architecture, velocity, and execution.
Translate research prototypes into hardened, scalable infrastructure used across petabyte-scale enterprise environments.
Set technical direction through design reviews, RFCs, and principled trade-offs that balance performance, reliability, and maintainability.
Establish engineering rigor: precise code quality standards, latency and reliability SLAs, and learning-focused postmortems.
Scale the team through hiring, mentorship, and a culture of high trust, ownership, and continuous learning.
Drive outcomes: measurable latency reduction, throughput improvements, and reliability at four nines (99.99%) and above.
Create autonomy through simple workflows that let small, high-agency teams ship fast.
7+ years in backend or infrastructure engineering, including 2+ years leading teams or large projects.
Proven ability to build, scale, and mentor high-performing engineering teams in complex systems domains.
Deep expertise in distributed compute frameworks (Spark, Trino, Presto) and columnar formats (Parquet, ORC).
Familiarity with Iceberg, Delta Lake, or Hudi internals.
Strong system design instincts and the ability to make and teach high-quality trade-offs.
A bias for action, and the ability to balance ambiguity with clarity and architectural depth with rapid iteration.
Commitment to clarity, technical excellence, and team growth.
Passion for building foundational technology that reshapes how data powers AI at scale.
Fundamental Research Meets Enterprise Impact. Work at the intersection of science and engineering, turning foundational research into deployed systems serving enterprise workloads at exabyte scale.
AI by Design. Build the infrastructure that defines how efficiently the world can create and apply intelligence.
Real Ownership. Design primitives that will underpin the next decade of AI infrastructure.
High-Trust Environment. Deep technical work, minimal bureaucracy, shared mission.
Enduring Horizon. Backed by NEA, Bain Capital, and luminaries from tech and business, we are building a generational company for decades, not quarters or a single product cycle.
Competitive salary, meaningful equity, and substantial bonus for top performers
Flexible time off plus comprehensive health coverage for you and your family
Support for research, publication, and deep technical exploration
Join us to build the foundational data systems that power the future of enterprise AI.
At Granica, you will shape the fundamental infrastructure that makes intelligence itself efficient, structured, and enduring.
What We Do
Massive-scale data should be an asset — not a liability.
At Granica, we’re building a state-of-the-art platform for AI efficiency, data optimization, and compression, designed to make cloud-scale data cheaper, faster, safer, and more intelligent.
As enterprises generate and store unprecedented volumes of data, traditional infrastructure can’t keep up — costs explode, performance lags, and privacy becomes harder to enforce. Granica changes that. We sit beneath the lakehouse — optimizing the data itself through advanced lossless compression, intelligent data selection, and built-in privacy preservation. Our platform enables enterprises to cut cloud data costs by up to 80% while improving performance and reducing operational complexity.
This is deep infrastructure — already deployed in live production environments, operating across hundreds of petabytes of enterprise data.
Why Work With Us
We’re a tight-knit team combining:
* Fundamental research in compression, data systems, and information theory
* World-class systems engineering across storage, cloud infrastructure, and security
* A shared obsession with performance, scale, and clean design
Hybrid Workspace
Employees engage in a combination of remote and on-site work.