ABOUT PLACER.AI:
Placer.ai is transforming how organizations understand the physical world. Our location analytics platform provides unprecedented visibility into locations, markets, and consumer behavior. Placer empowers thousands of customers, from Fortune 500 companies to local governments and nonprofits, to make smarter, data-driven decisions.
What sets us apart? We've built the most advanced location intelligence platform in the market while maintaining an uncompromising commitment to privacy, proving that powerful analytics and responsible data practices can coexist.
Our growth reflects the market's demand: we reached $100M in annual recurring revenue within just 6 years of launching, achieved unicorn status with a $1B+ valuation in 2022, and continue to expand rapidly as one of North America's fastest-growing tech companies. We're creating a $100B+ market opportunity, and we're just getting started.
Named one of Forbes America's Best Startup Employers and a Deloitte Technology Fast 500 company, we're building a culture where innovation thrives, collaboration is the norm, and every team member contributes to reshaping how the world understands location.
We're looking for a Data Platform Engineer to own and scale the Kubernetes infrastructure powering our large-scale data processing platform.
RESPONSIBILITIES:
- Operate and scale Kubernetes clusters with thousands of nodes supporting large-scale Spark and data processing workloads
- Manage and optimize Apache Spark on Kubernetes — executor autoscaling, driver scheduling, resource tuning, spot instance strategies
- Deploy and tune remote shuffle services (e.g., Apache Celeborn) to handle shuffle data at scale across multiple availability zones
- Operate and improve self-hosted Apache Airflow infrastructure on Kubernetes
- Configure and optimize batch schedulers (e.g., YuniKorn, Volcano) for gang scheduling, fair-share queuing, and resource prioritization
- Drive cost optimization across large compute fleets — spot vs. on-demand strategies, node right-sizing, autoscaling policies, local SSD utilization
- Support and collaborate with Data Engineering teams on workload performance, resource allocation, and infrastructure requirements
- Manage infrastructure-as-code (Terraform) and GitOps deployments (ArgoCD, Helm) for data platform services
- Integrate with managed data platforms (e.g., Databricks) and cloud storage for hybrid processing architectures
REQUIREMENTS:
- 3+ years of experience operating Kubernetes in production at significant scale (hundreds to thousands of nodes)
- Hands-on experience with Apache Spark on Kubernetes — you understand executors, drivers, dynamic allocation, shuffle behavior, and how they map to K8s primitives
- Strong understanding of Kubernetes internals — scheduling, resource management, node autoscaling, pod lifecycle, taints/tolerations, local storage
- Experience with cloud infrastructure (GCP preferred) — managed Kubernetes, spot/preemptible instances, local SSDs, networking at scale
- Comfortable with infrastructure-as-code (Terraform) and GitOps workflows
- Proficiency in Python or Go
NICE TO HAVE:
- Experience operating Apache Airflow at scale on Kubernetes
- Experience with Apache Celeborn or similar remote shuffle services
- Familiarity with YuniKorn or Volcano batch schedulers
- Experience with Databricks administration and integration
- Knowledge of data formats and storage systems (Parquet, Delta Lake, cloud object storage)
- Experience with streaming or messaging systems (Kafka)
- Experience with Prometheus/Grafana observability stacks for data platform monitoring
- Contributions to open-source data infrastructure projects
WHY JOIN PLACER.AI?
- Join a rocketship! We are pioneering a brand-new market
- Take a central and critical role at Placer.ai
- Work with, and learn from, top-notch talent
- Competitive salary
- Excellent benefits
NOTEWORTHY LINKS TO LEARN MORE ABOUT PLACER
- Placer.ai's $100M Series C funding (unicorn valuation!)
- See our data in action at The Anchor
- Placer.ai in the news
- Video: About Placer for Commercial Real Estate
- Video Playlist: Placer Brand & Explainer Videos
Placer.ai is committed to maintaining a drug-free workplace and promoting a safe, healthy working environment for all employees.
Placer.ai is an equal opportunity employer and has a global remote workforce. Placer.ai’s applicants are considered solely based on their qualifications, without regard to an applicant’s disability or need for accommodation. Any Placer.ai applicant who requires reasonable accommodations during the application process should contact Placer.ai’s Human Resources Department to make the need for an accommodation known.
Skills Required
- 5+ years production infrastructure, SRE, or DevOps experience
- 2+ years running data processing workloads (Apache Spark, Flink, or similar) in production
- Strong Kubernetes experience including Spark-on-K8s, autoscaling, and resource management
- 2+ years with infrastructure-as-code tools (Terraform, Pulumi, or similar)
- Proficiency in at least one general-purpose language (Python or Go preferred)
- Experience with workflow orchestration tools, particularly Apache Airflow
- Solid understanding of cloud infrastructure (GCP preferred: GCS, GKE, IAM)
- Strong observability skills (metrics pipelines, structured logging, alerting frameworks)
- Familiarity with Delta Lake, Parquet, and columnar storage formats
- Experience with data quality frameworks and pipeline lineage tooling
- Knowledge of query optimization, partition strategies, and Spark performance tuning
- Experience managing queues and databases (Kafka, PostgreSQL, Redis, or similar)
What We Do
Placer.ai is the most advanced foot traffic analytics platform, allowing anyone with a stake in the physical world to instantly generate insights into any property and gain a deeper understanding of the factors that drive success. Placer.ai is the first platform that fully empowers professionals in retail, commercial real estate, hospitality, finance, economic development, and more to truly understand and maximize their offline activities.
Why Work With Us
Placer is a remote-first company with a highly collaborative culture. Our product serves many use cases, from mitigating food deserts to battling blood shortages during rapid-response crises. Placer data plays a critical role in supporting local communities and driving social progress, in addition to providing invaluable market intelligence to our clients.