What will I be doing?
- Architect and evolve scalable infrastructure to ingest, process, and serve large volumes of data efficiently, using Kubernetes and Databricks as core building blocks.
- Design, build, and maintain Kubernetes-based infrastructure, owning deployment, scaling, and reliability of data workloads running on our clusters.
- Operate Databricks as our primary data platform, including workspace and cluster configuration, job orchestration, and integration with the broader data ecosystem.
- Drive improvements to existing frameworks and pipelines to ensure performance, reliability, and cost-efficiency across batch and streaming workloads.
- Build and maintain CI/CD pipelines for data applications (DAGs, jobs, libraries, containers), automating testing, deployment, and rollback.
- Implement release strategies (e.g., blue/green, canary, feature flags) where relevant for data services and platform changes.
- Establish and maintain robust data governance practices (e.g., contracts, catalogs, access controls, quality checks) that empower cross-functional teams to access and trust data.
- Build a framework to move raw datasets into clean, reliable, and well-modeled assets for analytics, modeling, and reporting, in partnership with Data Engineering and BI.
- Define and track SLIs/SLOs for critical data services (freshness, latency, availability, data quality signals); a minimal freshness sketch follows this list.
- Implement and own monitoring, logging, tracing, and alerting for data workloads and platform components, improving observability over time.
- Lead and participate in the on-call rotation for data platforms, manage incidents, and run structured postmortems to drive continuous improvement.
- Investigate and resolve complex data and platform issues, ensuring data accuracy, system resilience, and clear root-cause analysis.
- Maintain high standards for code quality, testing, and documentation, with a strong focus on reproducibility and observability.
- Work closely with the Data Enablement team, BI, and ML stakeholders to continuously evolve the data platform based on their needs and feedback.
- Stay current with industry trends and emerging technologies in DataOps, DevOps, and data platforms to continuously raise the bar on our engineering practices.
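
As a minimal sketch of what a freshness SLI from the list above might look like in Python, assuming freshness is defined as minutes since a dataset's last successful load (the 60-minute threshold and all names here are hypothetical, not dLocal's actual targets):

    from datetime import datetime, timezone

    FRESHNESS_SLO_MINUTES = 60  # hypothetical target, not an actual SLO


    def freshness_sli(last_load_at: datetime) -> float:
        """Return minutes elapsed since a dataset's last successful load."""
        return (datetime.now(timezone.utc) - last_load_at).total_seconds() / 60


    def within_slo(last_load_at: datetime) -> bool:
        """True if the dataset still meets the hypothetical freshness SLO."""
        return freshness_sli(last_load_at) <= FRESHNESS_SLO_MINUTES

In practice such a check would feed a metrics backend (e.g., Prometheus or Datadog, both named in the skills below) rather than run standalone.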
What skills do I need?
- Bachelor’s degree in Computer Engineering, Data Engineering, Computer Science, or a related technical field (or equivalent practical experience).
- Proven experience in data engineering, platform engineering, or backend software development, ideally in cloud-native environments.
- Deep expertise in Python and/or SQL, with strong skills in building data and platform tooling.
- Strong experience with distributed data processing frameworks such as Apache Spark (Databricks experience strongly preferred).
- Solid understanding of cloud platforms, especially AWS and/or GCP.
- Hands-on experience with containerization and orchestration: Docker and Kubernetes (EKS, GKE, AKS, or equivalent).
- Proficiency with Infrastructure-as-Code (e.g., Terraform, Pulumi, CloudFormation) for managing data and platform components.
- Experience implementing CI/CD pipelines (e.g., GitHub Actions, GitLab CI, Jenkins, CircleCI, ArgoCD, Flux) for data workloads and services.
- Experience in monitoring & observability (metrics, logging, tracing) using tools like Prometheus, Grafana, Datadog, CloudWatch, or similar.
- Experience with incident management: participating in or leading on-call rotations, handling incidents and running postmortems, and building automation and guardrails to prevent regressions.
- Strong analytical thinking and problem-solving skills, comfortable debugging across infrastructure, network, and application layers.
- Able to work autonomously and collaboratively.
- Experience designing and maintaining DAGs with Apache Airflow or similar orchestration tools (Dagster, Prefect, Argo Workflows); a minimal DAG sketch follows this list.
- Familiarity with modern data formats and table formats (e.g., Parquet, Delta Lake, Iceberg).
- Experience acting as a Databricks admin/developer, managing workspaces, clusters, compute policies, and jobs for multiple teams.
- Exposure to data quality, data contracts, or data observability tools and practices.
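
As a minimal sketch of the kind of DAG work referenced above, assuming Airflow 2.4+ with the TaskFlow API (the daily_ingest DAG and its tasks are hypothetical placeholders):

    from datetime import datetime

    from airflow.decorators import dag, task


    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def daily_ingest():
        @task
        def extract() -> list[dict]:
            # Stand-in for pulling raw records from a source system.
            return [{"id": 1, "amount": 42.0}]

        @task
        def load(records: list[dict]) -> None:
            # Stand-in for writing curated records to a target table.
            print(f"loaded {len(records)} records")

        load(extract())


    daily_ingest()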
What We Do
dLocal started with one goal: to close the payments innovation gap between global enterprise companies and customers in emerging economies. We support over 900 payment methods in more than 40 countries.
With the ability to accept local payment methods and facilitate cross-border fund settlement worldwide, our merchants reach billions of underserved consumers in the high-growth markets of Africa, Asia, and Latin America. dLocal offers the ideal payment solutions for global commerce:
Payins: Accept local payment methods
Payouts: Compliantly send funds cross-border
Defense Suite: Manage fraud effectively
dLocal for Platforms: Unify your platform’s payment solution
Local Issuing: Localize payments for your gig-economy workers, suppliers, and partners