DataOps Engineer

Posted 4 Days Ago
Ramat Gan
In-Office
Senior level
Artificial Intelligence • Big Data • Software • Analytics • Business Intelligence • Big Data Analytics
Placer.ai is the leader in location analytics.
The Role
Own and operate the Kubernetes-based data platform that runs Spark and other large-scale workloads: ensure its reliability, performance, and cost-efficiency; maintain CI/CD, observability, cloud storage (GCS/S3), and Delta Lake/Unity Catalog infrastructure; and collaborate with data engineers to optimize workloads and platform tooling.
Summary Generated by Built In

ABOUT PLACER.AI: 

Placer.ai is transforming how organizations understand the physical world. Our location analytics platform provides unprecedented visibility into locations, markets, and consumer behavior. Placer empowers thousands of customers, from Fortune 500 companies to local governments and nonprofits, to make smarter, data-driven decisions.


What sets us apart? We've built the most advanced location intelligence platform in the market while maintaining an uncompromising commitment to privacy, proving that powerful analytics and responsible data practices can coexist.


Our growth reflects the market's demand: we reached $100M in annual recurring revenue within just 6 years of launching, achieved unicorn status with a $1B+ valuation in 2022, and continue to expand rapidly as one of North America's fastest-growing tech companies. We're creating a $100B+ market opportunity, and we're just getting started.


Named one of Forbes America's Best Startup Employers and a Deloitte Technology Fast 500 company, we're building a culture where innovation thrives, collaboration is the norm, and every team member contributes to reshaping how the world understands location.


SUMMARY:

 

We are looking for a DataOps Engineer to own the infrastructure that powers Placer's large-scale data processing platform. This is a platform-facing role sitting at the intersection of data engineering and infrastructure — you'll be the person who makes Spark run reliably and efficiently on Kubernetes, so that data engineers can build with confidence.

You understand data workloads deeply enough to make smart infrastructure decisions, and you have the production instincts to keep complex systems healthy at scale. If you get excited about shaving minutes off Spark job runtimes, right-sizing cluster autoscalers, and building the internal tooling that makes a data platform feel effortless, this role is for you.


RESPONSIBILITIES:

 

  • Design, deploy, and operate the Kubernetes-based infrastructure that runs Apache Spark and large-scale data processing workloads
  • Own the reliability, performance, and cost-efficiency of the data platform — including SLAs, autoscaling, resource quotas, and workload isolation
  • Manage Spark-on-K8s configurations, Airflow infrastructure, and Databricks integration; tune for throughput, latency, and cost
  • Build and maintain CI/CD pipelines and infrastructure-as-code for data platform components
  • Develop observability tooling — metrics, logging, alerting, and data quality dashboards — to proactively surface issues across the pipeline stack
  • Collaborate closely with Data Engineers to understand workload patterns and translate them into infrastructure decisions
  • Manage cloud storage (GCS/S3), Delta Lake, and Unity Catalog infrastructure
  • Drive platform improvements end-to-end: from design through deployment and ongoing ownership
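To give a flavor of the Spark-on-K8s work described above: clusters run via the Spark Operator are typically driven by SparkApplication manifests. Below is a hedged, minimal sketch in Python that assembles such a manifest as a plain dict; the namespace, image, and resource values are hypothetical and purely illustrative.

```python
# Hypothetical sketch: building a Spark Operator "SparkApplication" manifest.
# Namespace, image name, and resource sizes are illustrative, not Placer's.

def spark_application(name: str, main_file: str, executors: int = 4) -> dict:
    """Return a minimal SparkApplication manifest as a plain dict."""
    return {
        "apiVersion": "sparkoperator.k8s.io/v1beta2",
        "kind": "SparkApplication",
        "metadata": {"name": name, "namespace": "data-platform"},
        "spec": {
            "type": "Python",
            "mode": "cluster",
            "image": "example.registry/spark:3.5",  # hypothetical image
            "mainApplicationFile": main_file,
            "driver": {"cores": 1, "memory": "2g"},
            "executor": {
                "instances": executors,  # a fixed count; autoscaling is layered on top
                "cores": 2,
                "memory": "4g",
            },
        },
    }

manifest = spark_application("daily-aggregation", "local:///opt/jobs/agg.py")
print(manifest["spec"]["executor"]["instances"])  # 4
```

In practice a dict like this would be serialized to YAML and applied to the cluster, with quotas and isolation enforced per namespace.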

 

REQUIREMENTS:

 

  • 5+ years of experience in a production infrastructure, SRE, or DevOps role
  • 2+ years of hands-on experience running data processing workloads (Apache Spark, Flink, or similar) in production
  • Strong Kubernetes experience, including Spark-on-K8s, autoscaling, resource management, and the broader K8s ecosystem
  • 2+ years with infrastructure-as-code tools (Terraform, Pulumi, or similar)
  • Proficiency in at least one general-purpose language — Python or Go preferred
  • Experience with workflow orchestration tools, particularly Apache Airflow
  • Solid understanding of cloud infrastructure — GCP preferred (GCS, GKE, IAM)
  • Strong observability skills: metrics pipelines, structured logging, alerting frameworks

 

OTHER REQUIREMENTS: 


  • Familiarity with Delta Lake, Parquet, and columnar storage formats
  • Experience with data quality frameworks and pipeline lineage tooling
  • Knowledge of query optimization, partition strategies, and Spark performance tuning
  • Experience managing queues and databases (Kafka, PostgreSQL, Redis, or similar)
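As one small example of the partition-strategy and Spark-tuning work mentioned above, a common heuristic is to size shuffle partitions toward a target number of bytes per partition. A hedged pure-Python sketch (the 128 MiB target is a conventional default, used here for illustration only):

```python
import math

def target_partitions(input_bytes: int,
                      bytes_per_partition: int = 128 * 1024 * 1024) -> int:
    """Heuristic partition count: aim for ~128 MiB per partition
    (illustrative default, not a universal rule)."""
    return max(1, math.ceil(input_bytes / bytes_per_partition))

# e.g. a 10 GiB input would be split into 80 partitions
print(target_partitions(10 * 1024**3))  # 80
```

The resulting count would then feed a setting such as Spark's `spark.sql.shuffle.partitions`, balancing per-task overhead against memory pressure.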

WHY JOIN PLACER.AI? 


  • Join a rocketship! We are pioneers of a new market that we are creating
  • Take a central and critical role at Placer.ai
  • Work with, and learn from, top-notch talent
  • Competitive salary
  • Excellent benefits

NOTEWORTHY LINKS TO LEARN MORE ABOUT PLACER 

  • Placer.ai's $100M Series C funding (unicorn valuation!)
  • See our data in action at The Anchor
  • Placer.ai in the news
  • Video: About Placer for Commercial Real Estate
  • Video Playlist: Placer Brand & Explainer Videos

Placer.ai is committed to maintaining a drug-free workplace and promoting a safe, healthy working environment for all employees.

Placer.ai is an equal opportunity employer and has a global remote workforce. Placer.ai’s applicants are considered solely based on their qualifications, without regard to an applicant’s disability or need for accommodation. Any Placer.ai applicant who requires reasonable accommodations during the application process should contact Placer.ai’s Human Resources Department to make the need for an accommodation known.

Top Skills

Apache Spark, Apache Flink, Kubernetes, Spark-on-K8s, Apache Airflow, Databricks, Terraform, Pulumi, Python, Go, GCS, Amazon S3, Delta Lake, Unity Catalog, Parquet, Kafka, PostgreSQL, Redis, GKE, IAM, CI/CD

The Company
HQ: Santa Cruz, CA
800 Employees
Year Founded: 2017

What We Do

Placer.ai is the most advanced foot traffic analytics platform allowing anyone with a stake in the physical world to instantly generate insights into any property for a deeper understanding of the factors that drive success.

Placer.ai is the first platform that fully empowers professionals in retail, commercial real estate, hospitality, finance, economic development, and more to truly understand and maximize their offline activities.

Why Work With Us

Placer is remote-first with a highly collaborative culture. Our product serves many use cases: from mitigating food deserts to battling blood shortages during rapid-response crises, Placer data plays a critical role in supporting local communities and driving social progress, in addition to providing invaluable market intelligence to our clients.
