We need you to push the boundaries of what our models can understand. We're not prompt-engineering chatbots. We're building evaluation frameworks and research systems that measure, improve, and validate enterprise intelligence at a scale nobody has attempted.
Fluency is looking for a Research Engineer to design experiments, build evaluation infrastructure, and drive model quality across our process conformance, productivity measurement, and AI impact analysis work for Fortune 500 organizations.
The Problem Space
You'll be developing the methodology and systems that determine whether our models actually work. Screenshots, OCR text, application metadata, behavioral signals: the inputs are messy and the ground truth is ambiguous. The challenge is building rigorous evaluation frameworks that quantify model performance and identify improvement opportunities.
This means:
Designing evaluation pipelines that measure accuracy, precision, and recall across classification tasks
Building ground truth datasets from ambiguous, real-world enterprise data
Running systematic prompt engineering experiments to optimize LLM performance
Developing A/B testing infrastructure for model comparison
Researching novel approaches to process understanding, activity classification, and intent extraction
Quantifying cost-accuracy tradeoffs across different model architectures and prompting strategies
Building automated world-model training infrastructure from our ontology
The playbook doesn't exist. You'll write it.
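To give a concrete flavor of the evaluation side of the work, here is a minimal illustrative sketch in Python. It is not our production code; the labels, data, and helper function are hypothetical, and a real pipeline would add error analysis, regression detection, and cost tracking on top of this.

def evaluate(predictions, ground_truth, labels):
    """Overall accuracy plus per-label precision and recall for a classification eval."""
    assert len(predictions) == len(ground_truth)
    pairs = list(zip(predictions, ground_truth))
    report = {"accuracy": sum(p == g for p, g in pairs) / len(pairs)}
    for label in labels:
        tp = sum(p == label and g == label for p, g in pairs)  # true positives
        fp = sum(p == label and g != label for p, g in pairs)  # false positives
        fn = sum(p != label and g == label for p, g in pairs)  # false negatives
        report[label] = {
            "precision": tp / (tp + fp) if tp + fp else 0.0,
            "recall": tp / (tp + fn) if tp + fn else 0.0,
        }
    return report

# Hand-labelled ground truth vs. model output for a hypothetical activity-classification task.
truth = ["data_entry", "email", "data_entry", "approval"]
preds = ["data_entry", "email", "approval", "approval"]
print(evaluate(preds, truth, labels=set(truth)))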
We're backed by Tier 1 VCs like Accel and research labs from Princeton, and we're hitting an inflection point with enterprises around the globe.
You'll work directly with founders and our engineering team on technical challenges that span LLM evaluation, experimental design, and applied research.
About the Role
We're looking for someone with:
Strong Python fundamentals and software engineering discipline
LLM prompt engineering and optimization (token efficiency, few-shot design, chain-of-thought)
Experience evaluating model performance: accuracy measurement, error analysis, regression detection
Ability to read, synthesize, and apply ML research papers
Statistical literacy: understanding when results are meaningful vs noise
Comfort with ambiguity and novel problem domains
A computer science background, with a caveat: if you don't have one, you're challenged to beat one of the founders in a 1:1 whiteboard duel on DS&A, judged by Hung. Neither founder has a formal CS background, but come prepped.
You'll also be expected to stay up to speed on business context, which could involve:
Watching key customer calls
Interacting with customers
Helping with product thinking
Nice to have:
Experience building evaluation frameworks and benchmarking systems
Ground truth dataset creation and annotation pipeline experience
Experience with hybrid LLM/rule-based systems
OCR, document understanding, or computer vision background
Cost optimization for LLM-heavy systems
Classification and NLP systems experience
Published research or formal research methodology training
Familiarity with process mining or workflow analysis
Interesting personal projects that demonstrate depth
We work with some of the world's largest:
Financial services enterprises (Aon)
Manufacturing enterprises (Misumi)
And many more across the enterprise spectrum (PVH)
You're expected to be in love with the craft. You're expected to like laughing. You're expected to want to work on novel problems and find satisfaction in novelty. You're expected to solve under ambiguity.
Our Values
In hesitation lies destruction; in action, glory.
Those who merely meet expectations abandon the pursuit of greatness.
One who dwells within the forum must regard it as hallowed ground.
One who has not tasted the grapes declares them sour.
One who sits alone at the feast misses the richness of the table.
The details:
Full-time, in-person role based in San Francisco, CA.
We offer E-3 visa sponsorship and a relocation stipend for Australians
US$150K - $320K salary, depending on candidate and experience
Substantial equity, every offer includes ownership
Mac, Linux, or Windows, your call
High-impact work with global enterprises
Technical, product-led founders
This role isn't for you if:
You want hybrid or remote work
You don't like working hard and moving with insane velocity
You want to work a 9 to 5
You're not comfortable with rapid iteration
You think evaluation is grunt work
You've never shipped a model or evaluation system to production
You don't have personal projects
You dislike constraints (we have them: cost, latency, accuracy tradeoffs are real)
You aren't ambitious
You don't have a good reason for wanting to work at an early-stage company
Interview process:
Resume screen
1:1 with founder
Technical deep-dive on past research and evaluation work
Work through a real problem with the team - usually as a live coding exercise
Offer
We strongly encourage candidates from underrepresented backgrounds to apply. Diverse teams build better products; see value #5.
What We Do
Fluency gives leaders real-time visibility into how work happens across processes, projects, and ad-hoc tasks. By connecting execution to outcomes, we turn complexity into clarity and hidden effort into measurable impact. Fluency works across all tools and teams from day one, creating a unified view of enterprise execution without the need for integrations.
With Fluency, companies measure ROI on GenAI with precision, proving which initiatives move KPIs and which don't. Leaders can automatically discover and optimize workflows, identify automation opportunities, and track whether the business is on course to hit its goals.
Our platform builds a live map of how teams and functions operate, tying activity directly to revenue, cost, speed, quality, and risk. Fluency is the control layer that makes improvement continuous, automatic, and AI-first.