Deeter Analytics
At Deeter Analytics, we’re building something that gets built once in a generation.
Our goal is to create a fundamental trading model as capable as today’s most advanced AI systems — but applied to global markets. Not incremental signals or isolated strategies, but a system that can continuously interpret, learn from, and act on the evolving state of the world.
We train on large-scale, real-time social data — capturing how narratives form, how sentiment propagates, and how collective behavior drives markets. This requires operating at the frontier of data infrastructure, model design, and compute, all tightly integrated into a single system.
You’ll work alongside a small group of elite engineers, AI researchers, and traders in an environment defined by speed and ownership. We run experiments continuously. Ideas move from concept to production in hours. And the feedback loop is immediate — measured directly in live performance.
About the role
You will build and optimize the systems that turn data and compute into model capability.
This role sits at the intersection of distributed systems, GPU infrastructure, and model training — ensuring that both our in-house models and state-of-the-art external models can be trained efficiently at scale.
We build systems that maximize learning per unit of compute, not just systems that run.
What you’ll work on
● Designing and operating distributed training systems on GPU infrastructure
● Optimizing GPU utilization, throughput, and training efficiency
● Translating model requirements into efficient system configurations
● Improving training speed, cost efficiency, and reliability
● Debugging failures in high-cost, high-pressure training environments
What we’re looking for
We’re looking for people who understand how systems behave under real constraints — and know how to push them to perform.
Strong signals:
● You have run or significantly contributed to large-scale training workloads or compute-intensive systems
● You have a strong understanding of distributed systems in practice, including public cloud environments like AWS
● You understand how infrastructure behaves beneath the abstraction:
○ networking constraints
○ GPU/CPU utilization
○ memory and I/O bottlenecks
○ hardware limits at scale
● You can reason about how systems should be tuned for more efficient training and resource usage
● You have debugged systems where failures were non-trivial and costly
● You move quickly, identify bottlenecks, and eliminate them without being asked
Bonus signals:
● Experience optimizing systems where small efficiency gains had large downstream impact
● Experience working under strict compute or cost constraints
● Experience debugging distributed or asynchronous systems with non-obvious failure modes
● You use AI tools to accelerate debugging, development, and iteration
● You care about building systems that are measurably efficient, not just functional
What We Do
Turning market data into decisive action