The Role
Join our data center team to profile performance of open source models using CI/CD, Linux, and dashboard technologies. Build performance dashboards and work with novel hardware platforms.
About Positron AI
Positron delivers vendor freedom and faster inference for both enterprises and research teams by giving them hardware and software designed from the ground up for generative and large language models (LLMs).
Through lower power usage and a drastically lower total cost of ownership (TCO), Positron enables you to run popular open-source LLMs that serve multiple users at high token rates and long context lengths. Positron is also designing its own ASIC to expand beyond inference and fine-tuning to training and other parallel compute workloads.
About the role
- You'll be joining our data center team, profiling the performance of various open-source models across hardware and software builds
What you'll do
- Work with CI/CD pipelines, Linux administration, Ansible configurations, and Plotly/TypeScript dashboard front ends (see the sketch below for a feel of the dashboard work)
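As a rough illustration of the dashboard side of the role, here is a minimal Plotly/TypeScript sketch that charts inference throughput across hardware builds. It is an assumption-laden example: the package, element id, build labels, and tokens-per-second figures are made up for illustration and are not Positron's actual stack or data.

```typescript
// Hypothetical sketch: render a bar chart of LLM inference throughput
// (tokens/sec) per hardware/software build using plotly.js.
// Assumes a page containing <div id="throughput"></div>.
import Plotly from "plotly.js-dist-min";

interface ThroughputSample {
  build: string;            // hardware/software build label (made up)
  tokensPerSecond: number;  // measured throughput (illustrative numbers)
}

const samples: ThroughputSample[] = [
  { build: "build-a", tokensPerSecond: 1200 },
  { build: "build-b", tokensPerSecond: 1450 },
  { build: "build-c", tokensPerSecond: 1630 },
];

// One bar per build makes regressions between builds easy to spot.
Plotly.newPlot(
  "throughput",
  [
    {
      type: "bar",
      x: samples.map((s) => s.build),
      y: samples.map((s) => s.tokensPerSecond),
    },
  ],
  {
    title: { text: "LLM inference throughput by build" },
    yaxis: { title: { text: "tokens/sec" } },
  }
);
```

In practice the samples would be pulled from profiling runs produced by the CI/CD pipeline rather than hard-coded.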
Qualifications
- Experience building performance dashboards
- Experience with novel hardware platforms
- BA/BS required, preferably in a technical subject such as Computer Science, Physics, Applied Math, or Electrical Engineering
Top Skills
Ansible
CI/CD
Linux
Plotly
TypeScript