We are developing a SaaS product called Skyway that simplifies financial planning and analysis of cloud billing data for large enterprises with complex cloud spending requirements.
Enterprise cloud cost management is a deceptively deep problem, and we believe the entire cost management space is solving the wrong part of it.
Our product, Skyway, is built on data. Our customers are enterprise finance, engineering, procurement, and FinOps teams using cloud billing data to make million-dollar decisions, and the quality, reliability, and performance of our data pipelines directly determine whether they trust our product. We're looking for a Platform Engineer to own the data infrastructure that powers everything.
Historically, products in this space have made three fundamental choices:
They provided a cost-first data model to customers
They assumed the provider’s billing data is correct
They chose to solve a specific piece of the cost management problem domain
We’re taking the opposite approach on all three: our world is consumption-first, we recalculate the billing for each provider, and we’re working to solve the entire domain.
We’ve started with an AWS contract management product and are expanding over time to handle forecasting, scenario modeling, cost allocation, chargeback/reinvoicing, practice management, and much more. Our goal is to be the system every enterprise runs its cost management practice on.
To accomplish this, we're processing hundreds of millions of rows of billing data per customer, per month. We're turning unstructured commercial contracts into structured data and dynamically recalculating bills using rates from both the contract and public pricing data. We're building a semantic layer that translates technical cloud usage data into business concepts anyone can understand, and we're creating the world's first standardized structure for describing a commercial contract and all of its facets.
There are three core issues we have to solve:
Near-unbounded data cardinality coupled with very large and complex datasets
Consumption-first means we need to model the pricing schemes and commercial contract structures for every vendor we support—and we intend to support hundreds-to-thousands
Hiding all of this complexity from the customer with excellent product design
Here's a sample of the kind of problems you'll help us solve:
A customer's organizational restructure moves $40 million in infrastructure spend from one VP to another overnight—both the historical view and the future view need to be correct, but they need to be correct in different ways.
A customer has 6,000 people in engineering, 2,000+ application IDs, and a highly matrixed reporting structure. "Who owns this?" is almost impossible to answer, but we need to make it simple. To make matters worse, engineering and finance have wildly different perspectives on this.
A SaaS provider changes their pricing model, breaking previous known structures. We need to detect it, adapt to it, and recalculate downstream.
Each provider has dozens of columns in its billing data. When combined with customer-provided data, the dimensions explode into hundreds or thousands. Given the dimensionality and customer size, the dataset easily reaches into the terabytes. Customers need not just to query the data but to modify it—and to do so faster than a traditional data processing pipeline allows.
Own the data infrastructure that powers everything. Build and maintain the pipelines that process hundreds of millions of rows of billing data per customer. But this isn't just writing DAGs all day: you'll also own how that data is modeled, stored, validated, and served to the rest of the product.
Build the contract application engine. Skyway doesn't just ingest billing data—we also enrich and transform it substantially. You'll work on the logic that matches usage lines to contract terms, applies negotiated discount rates, and produces the financial outputs customers rely on. Getting this right is the difference between a data pipeline and a financial product.
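For a flavor of what "matching usage lines to contract terms" means in practice, here's a deliberately simplified sketch. Every name here is hypothetical—real contract terms carry far more structure than a SKU prefix and a percentage—but it shows the core shape of the problem: pick the most specific applicable term, then recalculate cost from the public rate.

```python
from dataclasses import dataclass

@dataclass
class ContractTerm:
    # Hypothetical: a negotiated discount keyed by SKU prefix.
    sku_prefix: str
    discount_pct: float  # e.g. 10.0 means 10% off the public rate

def apply_contract(usage_lines, terms):
    """Recalculate each usage line's cost from public rate + contract terms.

    usage_lines: iterable of dicts with 'sku', 'quantity', 'public_rate'.
    Returns new dicts with 'billed_cost' computed from the best-matching term.
    """
    out = []
    for line in usage_lines:
        # Longest-prefix match picks the most specific negotiated term.
        matches = [t for t in terms if line["sku"].startswith(t.sku_prefix)]
        best = max(matches, key=lambda t: len(t.sku_prefix), default=None)
        discount = best.discount_pct if best else 0.0
        rate = line["public_rate"] * (1 - discount / 100)
        out.append({**line, "billed_cost": round(line["quantity"] * rate, 6)})
    return out
```

The real engine has to handle tiered rates, commitments, credits, and effective-date ranges; this sketch only illustrates the match-then-recalculate pattern.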
Become an expert on cloud billing data. You'll master provider billing schemas, quirks, and evolution, starting with AWS CUR and eventually expanding to GCP, Azure, and SaaS providers with radically different billing models. You'll be diagnosing data anomalies regularly and documenting the tribal knowledge that makes this data so difficult to work with.
Design the data model for multi-provider support. Each provider structures their billing data differently—different columns, different granularity, different pricing concepts. We ingest from a variety of source formats, such as CSV, PDF, APIs, Parquet, even text emails. You'll help make foundational decisions about how we normalize across providers, how we handle cross-provider queries, and how we keep provider-native detail accessible for deep dives.
Build bulletproof data validation. Design quality control systems that catch issues before they reach customers. Financial data demands precision, and a data quality bug can represent a breach of customer trust, not just an annoyance.
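As a minimal sketch of what a quality gate might look like (the checks and field names are hypothetical), the idea is to fail a batch loudly before it's served: required fields present, no impossible values, and the batch total reconciling against an independently computed figure.

```python
def validate_batch(rows, expected_total, tolerance=0.01):
    """Run basic quality gates on a billing batch before it reaches customers.

    Illustrative checks only: required fields present, no negative costs,
    and the batch total reconciling against expected_total.
    Returns a list of human-readable failures (empty means the batch passes).
    """
    failures = []
    required = {"sku", "quantity", "billed_cost"}
    for i, row in enumerate(rows):
        missing = required - row.keys()
        if missing:
            failures.append(f"row {i}: missing fields {sorted(missing)}")
            continue
        if row["billed_cost"] < 0:
            failures.append(f"row {i}: negative cost {row['billed_cost']}")
    total = sum(r.get("billed_cost", 0) for r in rows)
    if abs(total - expected_total) > tolerance:
        failures.append(f"total {total} deviates from expected {expected_total}")
    return failures
```

Production systems layer many more checks on top (schema drift, row-count deltas, distribution shifts), but the reconcile-against-an-independent-total pattern is the backbone.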
Optimize for performance at scale. Design storage patterns, partitioning strategies, and query approaches that serve sub-second performance alongside heavy analytical workloads.
Work across the stack. You'll collaborate with backend engineers on data contracts and APIs, help the product team understand what's possible with the data, and be the person who can answer "why does this number look like that?" when something seems off. This isn't a throw-it-over-the-wall role—our platform engineers are core to everything we do.
Our stack: Python, ClickHouse, Airflow, Parquet on S3, with a Flask backend and React frontend. What matters more than any specific tool is that you have strong opinions about data systems and the experience to back them up.
You’ll be given more autonomy than you’re comfortable with. We're a small team, around a dozen people today. People joining now are making foundational decisions that need to hold up to future growth in data and customers. We’ve got a lot to do, and we hire for expertise and great judgment. We value quick decision-making and a “let’s find out!” attitude.
We ship daily. New engineers ship in their first week. We value making progress toward customer desires every day.
AI makes us all more capable. We use AI assistance throughout the company heavily and expect you to as well—not just tab-completion, but agents, code review, and whatever else helps you move faster. AI skeptics won’t do well here.
You're someone who builds data systems, not just queries data. You care about what happens to your pipelines at 3am. You care about the people downstream of your work, whether they’re backend engineers, product, or customers. When data looks weird, you dig until you understand why, not just patch it.
You have meaningful experience with data products, including warehouses, lakehouses, OLAP engines, ETL pipelines, and semantic systems. You're strong in Python and SQL. You're comfortable with columnar databases and cloud storage. You're fastidious about data quality and comfortable when there's no answer key.
You don't need to have domain expertise in cloud billing. Upstream provider data sources have inconsistent documentation and are constantly changing. Instead, what matters is that you can figure things out without a spec and have a process for when schemas evolve underneath you.
You're a backend engineer who "can do data" but doesn't love DAGs and large-scale data processing
You're a data analyst who wants to stay in dashboards and reports
You need perfect requirements before you start building
You're used to months-long data warehouse projects at enterprise pace
"Close enough" is in your vocabulary for financial data
We've been doing customer research for seven years. Duckbill started life as a services firm in 2019 and has worked primarily with large enterprises. Through that work, we've had a front-row seat to the hardest problems organizations face. As a result, our vision for Skyway isn't built on hypotheses: it's built on having directly worked these problems alongside the people who live with them every day.
Services are a superpower. Every services engagement teaches us something new about customer pains and desires, creating an intelligence loop between services and product. At our scale, we're exposed to the hardest cloud cost management problems in the world and can turn those insights directly into product capabilities. (And don’t worry: our engineering team is separate from our services team, so no FDE here.)
Everyone knows us. We operate the largest AWS community outside of AWS itself—35,000 newsletter subscribers, 5 million podcast downloads, 100,000+ social media followers. Nearly every Fortune 100 company on AWS knows who we are and will take our call. We don’t have to build a sales pipeline from scratch.
Compensation & Benefits
Compensation for this role is a salary range of $170,000-$220,000 plus early-stage equity options. We provide a 401(k) and healthcare, vision, and dental with premiums fully covered by the company. Dependents are covered at 50%. We offer 4 weeks of PTO, plus unlimited sick leave.
About Us
We are a small and growing team of 12 people, which means you get the opportunity to be on the ground floor of building the product and company. Our founders are domain experts in the market and problem space, bringing deep industry and customer connections in cloud cost management to the product.
We're backed by Heavybit and Uncork Capital; we've raised $7.75 million and have substantial revenue on top of it.
This is an in-office role
We work together in our San Francisco office three days per week, so you must be located in the SF Bay Area and willing to work in the office on a regular basis.
What We Do
Our cloud cost management experts help companies fix their AWS bill by making it smaller and less horrifying. You may know us from our publications: Last Week in AWS, AWS Morning Brief, and Screaming in the Cloud.