Senior Software Engineer II, Cloud Data Pipeline

Reposted 4 Days Ago
Seattle, WA, USA
Hybrid
$164K-$221K Annually
Senior level
Biotech
The Role
Design and implement data pipelines for protein measurement, improve cloud architecture, maintain APIs, contribute to DevOps, and work cross-functionally with teams.

At Nautilus, we have a big and important mission: improve the health of millions by unleashing the potential of the proteome to accelerate drug development and enable a new world of precision and personalized medicine. We are developing a single-molecule protein analysis platform of unprecedented sensitivity, scale, and ease of use that we believe will democratize access to the proteome -- one of the most dynamic and valuable sources of biological insight. To accomplish this, we are pursuing hard scientific problems with an entrepreneurial mindset and creating a world-class team of builders, innovators, and dreamers across a wide range of disciplines.

We are hiring a Senior Software Engineer II to join our Cloud Data Pipeline team. This team is responsible for the data infrastructure that transforms raw protein measurement data into the scientific outputs that our customers and internal researchers depend on. You'll own ETL pipelines, cloud infrastructure, and the APIs and databases that connect our data platform to the rest of the company. As Nautilus moves into commercial deployment, this role sits at the intersection of data engineering rigor and the practical demands of a production scientific platform. Your work will directly shape what our customers can learn from their experiments and how reliably they can trust the results.

This position will report to the Manager, Data Engineering and Cloud Pipelines and is located in Seattle, WA. The position is hybrid and requires a minimum of three days onsite.

Responsibilities

  • Design and implement data pipelines and ETLs that process protein measurement data at scale, turning instrument outputs into reliable, queryable scientific results.

  • Improve the architecture of existing cloud systems: identify structural weaknesses, propose better approaches, and drive implementation alongside the technical lead.

  • Maintain and evolve the APIs and database schemas that serve internal teams including bioinformatics, science, and product development, adapting as their needs grow.

  • Contribute to the team's DevOps practice: optimize AWS costs, manage cloud deployments, improve system security, and drive performance improvements through infrastructure changes.

  • Work cross-functionally with scientific and software teams to define data quality metrics, understand how downstream consumers use pipeline outputs, and ensure the platform meets their needs.

  • Surface and advocate for changes to project priorities and architecture across the cloud pipeline and adjacent projects.

Requirements

  • 7+ years of relevant experience in a software engineering organization, with a strong track record of delivering production-quality systems.

  • Bachelor's degree in Computer Science or a related field, or equivalent practical experience.

  • Fluency in a variety of programming languages. We are currently invested in Python for our data pipelines.

  • Solid experience with cloud infrastructure on AWS including cost management and deployment practices.

  • Experience with CI/CD pipelines and infrastructure-as-code (e.g., Terraform, CDK).

  • Experience with relational and non-relational database design.

  • Demonstrated experience building and maintaining data pipelines or ETL systems at production scale.

  • Skilled in multiple technology domains with the ability to independently pick up new ones as needed.

  • Strong communication skills and comfort working across engineering, science, and product stakeholders.

  • Ability to recognize when a change in direction is necessary and to adapt effectively when it happens.

  • Familiarity with AI-driven development tools and methodologies.

Nice to Haves

  • Experience with Docker and container orchestration tools (Kubernetes, ECS).

  • Experience with workflow orchestration tools (e.g., Nextflow, Step Functions, Airflow, Prefect).

  • Experience with data observability, pipeline monitoring, or data quality frameworks.

  • Background in biotech, life sciences, or scientific data processing.

  • Familiarity with NoSQL data stores and when to use them alongside relational databases.

The Company
HQ: San Carlos, CA
137 Employees
Year Founded: 2016

What We Do

Nautilus (Nasdaq: NAUT) was born from its founders' recognition that their diverse but complementary skills and experiences would enable them to address challenges that others had not. The company set about solving a vexing problem: how to bring true proteomics to the world in a way that accelerates therapeutic development, dramatically improves medical diagnostics, and makes personalized and predictive medicine a reality. The extraordinary team at Nautilus spans a wide range of disciplines and expertise, including protein chemists, chip designers, molecular biologists, data scientists, material scientists, biophysicists, optical engineers, microfluidics engineers, bioinformaticians, software engineers, and more. Nautilus is positioned to revolutionize proteomics, transform the way drugs are developed, and significantly improve the way human health is managed.

Similar Jobs

HERE Technologies Logo HERE Technologies

Data Science Intern

Artificial Intelligence • Automotive • Computer Vision • Information Technology • Internet of Things • Logistics • Software
Hybrid
Home, WA, USA
6000 Employees
25-35 Hourly

Applied Systems Logo Applied Systems

Manager, Software Engineering

Cloud • Insurance • Payments • Software • Business Intelligence • App development • Big Data Analytics
Remote or Hybrid
United States
3040 Employees
115K-175K Annually

Enverus Logo Enverus

Consultant

Big Data • Information Technology • Software • Analytics • Energy
In-Office or Remote
3 Locations
1800 Employees
65K-70K Annually

MetLife Logo MetLife

Senior Account Executive

Fintech • Information Technology • Insurance • Financial Services • Big Data Analytics
Remote or Hybrid
United States
43000 Employees
100K-100K Annually

Similar Companies Hiring

Formation Bio Thumbnail
Artificial Intelligence • Big Data • Healthtech • Biotech • Pharmaceutical
New York, NY
140 Employees
SOPHiA GENETICS Thumbnail
Software • Healthtech • Biotech • Big Data • Artificial Intelligence
Boston, MA
450 Employees
Pfizer Thumbnail
Artificial Intelligence • Healthtech • Machine Learning • Natural Language Processing • Biotech • Pharmaceutical
New York, NY
121990 Employees

Sign up now Access later

Create Free Account

Please log in or sign up to report this job.

Create Free Account