Companies want to train their own large models on their own data. The current industry standard is to train on a random sample of your data, which is inefficient at best and actively harmful to model quality at worst. There is compelling research showing that smarter data selection can train better models faster—we know because we did much of this research. Given the high costs of training, this presents a huge market opportunity. We founded DatologyAI to translate this research into tools that enable enterprise customers to identify the right data on which to train, resulting in better models for cheaper. Our team has pioneered deep learning data research, built startups, and created tools for enterprise ML. For more details, check out our recent blog posts sharing our high-level results for text models and image-text models.
We've raised over $57M in funding from top investors like Radical Ventures, Amplify Partners, Felicis, Microsoft, Amazon, and notable angels like Jeff Dean, Geoff Hinton, Yann LeCun and Elad Gil. We're rapidly scaling our team and computing resources to revolutionize data curation across modalities.
This role is based in Redwood City, CA. We are in office 4 days a week.
About the Role
We’re looking for an engineer with deep experience building and operating large-scale training and inference systems. You will design, implement, and maintain the infrastructure that powers both our internal ML research workflows and the high-performance inference pipelines that deliver curated data to our customers.
As one of our early hires, you will influence technical direction, partner directly with researchers and product engineers, and take ownership of systems that are central to our company’s success.
What You'll Work On
Architect and maintain training infrastructure that is reliable, scalable, and cost-efficient.
Build robust model serving infrastructure for low-latency, high-throughput inference across heterogeneous hardware.
Automate resource orchestration and fault recovery across GPUs, networking, OS, drivers, and cloud environments.
Partner with researchers to productionize new models and features quickly and safely.
Optimize training and inference pipelines for performance, reliability, and cost.
Ensure all infrastructure meets the highest bar for reliability, security, and observability.
What We're Looking For
At least 5 years of professional software engineering experience.
Expertise in Python and experience with deep learning frameworks (PyTorch preferred).
An understanding of modern ML architectures and an intuition for how to optimize their performance, particularly for training and/or inference.
Familiarity with inference tooling such as vLLM, SGLang, or custom model-parallel systems.
Proven experience designing and running large-scale training or inference systems in production.
Familiarity (or the ability to quickly gain it) with PyTorch, NVIDIA GPUs, and the software stacks that optimize them (e.g., NCCL, CUDA), as well as HPC technologies such as InfiniBand, NVLink, and AWS EFA.
Commitment to engineering excellence: strong design, testing, and operational discipline.
Collaborative, humble, and motivated to help the team succeed.
Ownership mindset: you’re comfortable learning fast and tackling problems end-to-end.
Don’t meet every single requirement? We still encourage you to apply. If you’re excited about our mission and eager to learn, we want to hear from you!
Compensation
At DatologyAI, we are dedicated to rewarding talent with a highly competitive salary and significant equity. The base salary for this position ranges from $180,000 to $250,000.
The candidate's starting pay will be determined based on job-related skills, experience, qualifications, and interview performance.
We offer a comprehensive benefits package to support our employees' well-being and professional growth:
100% covered health benefits (medical, vision, and dental).
401(k) plan with a generous 4% company match.
Unlimited PTO policy.
Annual $2,000 wellness stipend.
Annual $1,000 learning and development stipend.
Daily lunches and snacks are provided in our office!
Relocation assistance for employees moving to the Bay Area.
What We Do
DatologyAI builds tools to automatically select the best data on which to train deep learning models. Our tools leverage cutting-edge research—much of which we perform ourselves—to identify redundant, noisy, or otherwise harmful data points. The algorithms that power our tools are modality-agnostic—they’re not limited to text or images—and don’t require labels, making them ideal for realizing the next generation of large deep learning models. Our products allow customers in nearly any vertical to train better models for cheaper.