Senior Research Engineer, Model Inference

Posted Yesterday
Mountain View, CA
200K-250K Annually
Senior level
Artificial Intelligence • Software
The Role
Research Engineer specializing in model optimization and integration into production environments. Responsible for optimizing and deploying machine learning models for real-time applications, ensuring low latency and high throughput. Collaborate with cross-functional teams to meet requirements and constraints. Requires 5+ years of experience and a Master's or Ph.D. in computer science or related field.

The Opportunity

If you are a Research Engineer with expertise in model optimization and integration into production environments, and a passion for applying state-of-the-art AI innovations to make conversations more valuable, look no further!
As a Research Engineer on our Platform team, you'll advance the frontier of AI by maximizing efficiency and performance to achieve feats previously thought impossible. You’ll do so by optimizing and deploying machine learning models for real-time applications. Join us alongside a cohort of industry-leading scientists, ML engineers, and production engineers as we grow Otter’s AI-powered collaboration platform that’s transcribed over 1B meetings. Together, we strive to elevate the value of conversations through innovative solutions.

Your Impact

  • Model optimization: Collaborate with machine learning researchers to understand model architectures and algorithms.
  • Implement optimization techniques to enhance machine learning models' efficiency and inference speed in production (a brief sketch follows this list).
  • Deployment and integration: Work closely with product engineers to integrate machine learning models into production systems in a scalable way.
  • Optimize models for real-time inference, ensuring low latency and high throughput.
  • Set up monitoring systems to track model performance in real time.
  • Ensure models can scale horizontally to handle increased load.
  • Implement strategies for resource-efficient inference, considering factors such as memory usage and CPU/GPU utilization.
  • Collaborate with cross-functional teams to understand requirements and constraints.
  • Provide technical expertise on inference-related matters during the model development lifecycle.
  • Document the deployment and optimization processes for machine learning models.
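
To give a flavor of the optimization work described above, here is a minimal sketch of one common technique, post-training dynamic quantization with a simple latency check, written in PyTorch (listed under Top Skills below). The toy model, input sizes, and timing harness are illustrative assumptions only, not a description of Otter's actual inference stack.

    import time

    import torch
    import torch.nn as nn

    # Stand-in model; a production speech/transcription model would take its place.
    model = nn.Sequential(
        nn.Linear(512, 1024),
        nn.ReLU(),
        nn.Linear(1024, 512),
    ).eval()

    # Post-training dynamic quantization: Linear weights are stored as int8,
    # which typically shrinks the model and speeds up CPU inference.
    quantized = torch.ao.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    def p50_latency_ms(m, runs=200):
        """Median single-request latency in milliseconds."""
        x = torch.randn(1, 512)
        timings = []
        with torch.inference_mode():
            for _ in range(runs):
                start = time.perf_counter()
                m(x)
                timings.append((time.perf_counter() - start) * 1000)
        return sorted(timings)[len(timings) // 2]

    print(f"fp32 p50: {p50_latency_ms(model):.3f} ms")
    print(f"int8 p50: {p50_latency_ms(quantized):.3f} ms")

In practice this kind of measurement would be paired with throughput and memory profiling, and the quantized model would be validated for accuracy before deployment.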

We're looking for someone with

  • 5+ years of professional industry experience
  • A Master's or Ph.D. in computer science, machine learning, speech/language processing, or a related field
  • Experience in PyTorch
  • Proficiency in Python
  • Experience in C++
  • Basic knowledge of CUDA
  • A strong understanding of machine learning models, algorithms, and deployment strategies
  • Experience with model optimization techniques and performance profiling
  • Familiarity with Docker and Kubernetes
  • Knowledge of AWS
  • Experience with monitoring tools

About Otter.ai 

We are in the business of shaping the future of work. Our mission is to make conversations more valuable.

With over 1B meetings transcribed, Otter.ai is the world’s leading tool for meeting transcription, summarization, and collaboration. Using artificial intelligence, Otter generates real-time automated meeting notes, summaries, and other insights from in-person and virtual meetings, turning them into accessible, collaborative, and actionable data that can be shared across teams and organizations. The company is backed by early investors in Google, DeepMind, Zoom, and Tesla.

Otter.ai is an equal opportunity employer. We proudly celebrate diversity and are dedicated to inclusivity.

*Otter.ai does not accept unsolicited resumes from 3rd party recruitment agencies without a written agreement in place for permanent placements. Any resume or other candidate information submitted outside of established candidate submission guidelines (including through our website or via email to any Otter.ai employee) and without a written agreement otherwise will be deemed to be our sole property, and no fee will be paid should we hire the candidate.

Salary range

Salary Range: $200,000 to $250,000 USD per year.

This salary range represents the low and high end of the estimated salary range for this position. The actual base salary offered for the role depends on several factors. Our base salary is just one component of our comprehensive total rewards package.

Top Skills

C++
CUDA
Python
PyTorch
The Company
HQ: Mountain View, CA
106 Employees
On-site Workplace

What We Do

An AI-powered note-taking and collaboration app that lets you remember, search, and share your voice conversations.
