What will I be doing?
- Own, operate, and optimize DynamoDB workloads in production, including data modeling, capacity strategy (provisioned/on‑demand), GSIs/LSIs, TTL, Streams, backup/restore, and multi‑region patterns.
- Design, deploy, and maintain DocumentDB and MongoDB clusters, including sharding, indexing strategies, backups, and performance tuning.
- Leverage AWS services across the data stack (e.g., S3, KMS, IAM, CloudWatch, DMS, Lambda) to build reliable, observable, and secure data solutions.
- Implement automation-as-code (CLI/SDK) for schema and capacity changes, operational tasks, and guardrails to reduce toil and error.
- Establish and improve observability for databases (metrics, logs, traces, dashboards, SLOs/alerts) and lead performance/root‑cause investigations.
- Collaborate with engineers to design efficient data access patterns and guide migrations and schema evolution with minimal downtime.
- Contribute to reliability engineering practices (backups, DR, chaos/restore testing, incident reviews) across NoSQL platforms.
- Participate in an on‑call rotation and respond to emergency escalations, including during off‑hours when required.
What skills do I need?
- At dLocal, we embrace an AI‑first culture—using AI tools is a standard part of how we design, build, and operate.
- Hands‑on, senior‑level expertise operating DynamoDB in production (table design and single‑table modeling, indexes, Streams, backup/restore, performance/cost optimization) — this is a must‑have.
- Experience with AWS MSK or Kafka for event‑driven and streaming use cases — highly valued.
- Experience with DocumentDB and/or MongoDB administration (cluster sizing, replication/sharding, indexing, backups, upgrades, performance tuning).
- Solid command of AWS cloud fundamentals for data platforms (IAM, KMS encryption, networking basics, S3 patterns, monitoring/alerting with CloudWatch; experience with DMS and Lambda is a plus).
- Automation proficiency in Python, JavaScript/TypeScript, or Go, using the AWS CLI and SDKs to build repeatable operations and safety checks.
- Working knowledge of relational databases (e.g., MySQL/Aurora, PostgreSQL) is a plus, to partner with peers and support hybrid data designs.
- Clear communication in English and a collaborative, service‑oriented mindset working with distributed teams.
- Certifications in AWS (e.g., Solutions Architect, Database – Specialty) or equivalent demonstrable expertise.
- Experience with infrastructure as code and Git‑based workflows (e.g., Terraform/CloudFormation, GitOps).
- Exposure to containers/orchestration (ECS/EKS) and securing data workloads in cloud‑native environments.
- Prior work with DocumentDB/MongoDB performance tuning at scale and multi‑region topologies.
- Ownership and bias for action, with strong troubleshooting instincts and attention to detail.
- Proactive approach to automation, continuous improvement, and reliability engineering.
- Collaboration and communication that builds trust across engineering, platform, and operations teams.
What We Do
dLocal started with one goal: to close the payments innovation gap between global enterprise companies and customers in emerging economies. We support over 900 payment methods in more than 40 countries.
With the ability to accept local payment methods and facilitate cross-border fund settlement worldwide, our merchants reach billions of underserved consumers in the high-growth markets of Africa, Asia, and Latin America. dLocal offers the ideal payment solutions for global commerce:
Payins: Accept local payment methods
Payouts: Compliantly send funds cross-border
Defense Suite: Manage fraud effectively
dLocal for Platforms: Unify your platform’s payment solution
Local Issuing: Localize payments for your gig-economy workers, suppliers, and partners