Forward-Deployed Engineer

Reposted Yesterday
2 Locations
In-Office or Remote
$200K–$325K Annually
Senior level
Productivity • Business Intelligence • Consulting
make time matter
The Role
The Forward-Deployed Engineer will manage enterprise integrations, guide clients through setup and automation, build connectors, and improve infrastructure for data ingestion.
Summary Generated by Built In

We are hiring for our very first Forward-Deployed Engineer at Parable.

As the technical face of Parable for enterprise integrations, you will have a major influence on the development of our product. You will guide client admins through secure setup—API key/secret generation, permissions scoping, automated audit-log exports, and connector configuration across Workday, Oracle Fusion, Microsoft 365, Okta, Salesforce, NetSuite, and more—so that clean data lands reliably in their Parable tenant. When not with clients, you will build and harden connectors/taps, improve scaffolding/CLI and docs, and help make integrations self-serve. You will partner closely with the Raw Data TPM and contribute to both internal and external UX where our CX team and our clients manage tokens and view connector health.

About Parable

Parable’s mission is to make time matter.

We give CEOs of companies with 1,000+ employees deep observability into how their teams spend time across strategies, projects, and processes. Our insights help teams focus on the work that matters most and make data-driven resource-allocation decisions.

The company was founded by seasoned founders with multiple nine-figure exits, and reached $2M in ARR within six months of going to market. Parable has raised $17 million from investors including HOF Capital and Story Ventures, as well as 50+ founders and executives.

On the technical side, we are building an operating system for large enterprises – ingesting siloed data from across the workplace stack, shaping it into a strongly opinionated enterprise ontology, and contextualizing it to extract insights for clients. Each customer runs in a fully isolated, single-tenant GCP environment (own VPC, compute, storage, and KMS), with shared, parameterized pipelines instantiated per tenant—no bespoke schema per client.

Our platform stack includes Cloud Run Jobs/Compute Engine, Pub/Sub, Cloud Storage, Memorystore, BigQuery/Cloud SQL, and an Iceberg-based lake; we build primarily in Python/TypeScript (plus Rust).

The Raw Data team

This team’s mission is to productize landing client data—from SaaS APIs and custom/on-prem systems—into each client’s private data lake. They are building a Connector Factory (to reduce time to new connectors and taps), ingestion observability, and first-class API/documentation so clients and internal teams can self-serve.

You’ll be responsible for:
  • Owning client integrations end-to-end – Work directly with IT/SecOps to generate credentials (OAuth2, PATs, service principals), define least-privilege scopes, turn on audit exports/webhooks or scheduled jobs, and validate data flow in their isolated VPC.

  • Owning automations end-to-end – Turn client-setup learnings into repeatable, automated processes backed by internal systems that keep clients and our CX teams up to date on the health and state of client integrations without manual intervention.

  • Standing up production-grade ingestion – Plan backfills vs. incremental sync; handle rate limits/pagination; design retries/idempotency using our Pub/Sub-orchestrated jobs.

  • Building connectors & taps – Implement new sources in Python/TypeScript (plus Rust where useful); contribute scaffolds/CLI to shrink “time-to-first-tap.”

  • Instrumenting ingestion health – Expose coverage windows, lag, error budgets, and volumes—visible to clients and internal teams.

  • Writing crisp setup docs – Produce step-by-step guides for token admin flows and source-specific quirks; align with the client-facing App’s self-service token UX.

  • Partnering across streams – Ensure Raw Data unblocks Ontology & Context Mining by delivering the right datasets and documenting semantics for mapping into our opinionated schema.
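To make the ingestion responsibilities above concrete, here is a minimal sketch of an incremental sync loop with pagination and exponential-backoff retries. Everything here is illustrative—`FakeAPI`, the cursor shape, and the backoff constants are made up for the example and do not reflect Parable's actual connector interface:

```python
import time


class FakeAPI:
    """Stand-in for a paginated SaaS API; a real connector would make HTTP calls."""

    def __init__(self, records):
        self.records = records
        self.calls_before_success = 0  # set > 0 to simulate transient 429s

    def list_events(self, since, cursor=None, page_size=2):
        if self.calls_before_success > 0:
            self.calls_before_success -= 1
            raise RuntimeError("429 Too Many Requests")  # simulated rate limit
        matching = [r for r in self.records if r["updated_at"] > since]
        start = int(cursor or 0)
        page = matching[start:start + page_size]
        nxt = str(start + page_size) if start + page_size < len(matching) else None
        return {"data": page, "next_cursor": nxt}


def sync_incremental(api, since, max_retries=3):
    """Pull all records updated after `since`, following pagination and
    retrying each page with exponential backoff on transient errors."""
    out, cursor = [], None
    while True:
        for attempt in range(max_retries):
            try:
                resp = api.list_events(since=since, cursor=cursor)
                break
            except RuntimeError:
                time.sleep(2 ** attempt * 0.01)  # backoff; kept tiny for the sketch
        else:
            raise RuntimeError("retries exhausted")
        out.extend(resp["data"])
        cursor = resp["next_cursor"]
        if cursor is None:
            return out
```

Because a retried page can be re-delivered, a production pipeline would also deduplicate by primary key downstream—that is what makes the sync idempotent end-to-end.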

This role is for someone who:
  • Is passionate about data infrastructure productization – you believe that removing friction from data ingestion directly accelerates client value.

  • Balances technical depth and product leadership – you can review APIs, write requirements, and contribute to technical docs, while also driving stakeholder alignment across Product, Engineering, Sales, and Customer Success.

  • Appreciates the details in complex systems – you enjoy figuring out why specific API scopes and permissions don’t work, and resolving those issues quickly.

  • Enjoys crafting documentation – you love writing explainer content for both developers and clients.

  • Sees GenAI as an enabler – you’re excited to use GenAI to scaffold taps from an API spec.
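As a rough illustration of "scaffolding taps from an API spec," here is a deterministic sketch that turns a minimal OpenAPI-style fragment into tap stubs; a GenAI-assisted version would generate richer bodies, and none of these names come from Parable's actual tooling:

```python
# Minimal OpenAPI-style fragment (illustrative, not a real vendor spec).
SPEC = {
    "paths": {
        "/users": {"get": {"operationId": "listUsers"}},
        "/audit-logs": {"get": {"operationId": "listAuditLogs"}},
    }
}


def scaffold_taps(spec):
    """Emit a Python stub per GET endpoint; a real scaffolder (or an LLM)
    would fill in auth, pagination, and schema handling."""
    lines = []
    for path, methods in spec["paths"].items():
        op = methods.get("get")
        if not op:
            continue
        name = op["operationId"]
        lines.append(f"def tap_{name}(client):")
        lines.append(f'    """Stream records from GET {path}."""')
        lines.append(f'    raise NotImplementedError("generated stub for {path}")')
        lines.append("")
    return "\n".join(lines)
```

Even this toy version captures the point: the spec, not a human, decides how many taps exist and what they are called.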

Requirements
  • Significant experience (generally 7–10+ years) shipping enterprise data integrations as a solutions/forward-deployed/implementation or software engineer.

  • Mastery of enterprise admin/security models: OAuth2/SAML/SCIM, service principals, RBAC, audit logs, IP allow-listing/egress controls.

  • Hands-on with enterprise systems like Workday, Oracle Fusion, Microsoft 365, Okta, Salesforce, and NetSuite, including their APIs, exports, and permission models.

  • Proven ability to debug auth/permissions, rate limits, schema mismatches, and data quality in production.

  • Strong proficiency in Python or TypeScript (bonus: Golang) and familiarity with GCP (Cloud Run Jobs, Pub/Sub, Cloud Storage, IAM, VPCs).

  • Security-first mindset aligned with per-tenant isolation and KMS boundaries; comfortable engaging client security teams.

  • Communication and leadership consistent with senior/staff-level expectations in our framework (strong execution, cross-functional influence, and client-facing clarity).
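The least-privilege emphasis in the requirements above can be illustrated with a tiny scope audit; the scope names and approved baseline are made up for the example:

```python
# Illustrative OAuth2 scope audit: flag requested scopes that exceed an
# approved least-privilege baseline before credentials are issued.
APPROVED = {"audit_logs:read", "users:read"}


def audit_scopes(requested):
    """Split a requested scope set into granted vs. excess
    relative to the approved baseline."""
    return {
        "granted": sorted(requested & APPROVED),
        "excess": sorted(requested - APPROVED),
    }
```

In practice this kind of check runs during connector setup, so a client security team sees exactly which scopes a token carries and why.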

Other nice-to-haves include:

  • Building Iceberg/Delta-style lakes, streaming/batch ingestion patterns, and data-pipeline observability.

  • Authoring customer-ready docs and shaping self-serve token/admin experiences with the App + TPM.

Top Skills

BigQuery
Cloud Run
Cloud SQL
Cloud Storage
GCP
OAuth2
Pub/Sub
Python
Rust
SAML
SCIM
TypeScript

The Company
HQ: Brooklyn, NY
19 Employees
Year Founded: 2024

What We Do

Pioneering the P&L for company time.
