Senior Machine Learning Engineer

Posted 3 Days Ago
5 Locations
Remote or Hybrid
Senior level
Fintech • Payments • Financial Services
The Role
As a Senior MLOps & AI Infrastructure Engineer, you'll build and maintain ML infrastructure, focusing on the Feature Store, automation, and data pipelines, while ensuring compliance and collaborating with data science teams.
Summary Generated by Built In
Why should you join dLocal?

dLocal enables the biggest companies in the world to collect payments in 40 countries across emerging markets. Global brands rely on us to increase conversion rates and simplify payment expansion. As both a payments processor and a merchant of record where we operate, we make it possible for our merchants to make inroads into the world’s fastest-growing emerging markets.

By joining us you will be a part of an amazing global team that makes it all happen. Being a part of dLocal means working with 1000+ teammates from 30+ different nationalities and developing an international career that impacts millions of people’s daily lives. We are builders, we never run from a challenge, and we are customer-centric. If this sounds like you, we know you will thrive on our team.

What’s the opportunity?

As a Senior MLOps Engineer at dLocal, you will be a key individual contributor in the team that builds and operates our ML and AI platform, with a strong focus on Feature Store and MLOps workflows.

You will implement and evolve the components that Data Science and AI teams use every day to take models and AI‑powered services from idea to production: feature pipelines, training and deployment workflows, observability and automation.

A core part of this role is to use agents and AI services to automate as much as possible of what we do in MLOps — from feature store and platform operations to fraud/anomaly workflows and ML cost optimization — working side by side with the AI Team and the MLOps Technical Referent.

What will I be doing?

    1. Building and evolving the Feature Store
    - Implement and maintain online and offline feature pipelines that feed our enterprise Feature Store, combining:
          - Flink‑based streaming jobs ingesting large volumes of events from multiple sources (payments, fraud, anomaly, etc.) into online stores.
          - Databricks / Spark pipelines for offline feature computation, backfills and training datasets.
    - Ensure:
          - Point‑in‑time correctness for offline training and backtesting.
          - Low‑latency, high‑throughput online feature serving with clear SLAs, TTL semantics and multi‑tenant safety.
    - Contribute to the feature catalog and specs:
          - Define entities, feature views, schemas, SLAs, PII classification and owners.
          - Help data scientists and domain teams onboard new features safely and consistently across Flink and Databricks.
    - Develop tooling for:
          - Backfills and materialization coordination between Flink and Databricks (Lakehouse / Delta).
          - Offline–online parity checks, data quality, drift and freshness monitoring for critical feature groups.
          - Unified feature retrieval APIs (online/offline/batch) and SDK/CLI usage from models and services.
    2. MLOps platform implementation (training, serving, observability)
    - Implement and improve training and evaluation pipelines:
          - Reproducible workflows, experiment tracking and model registry integration.
          - Promotion flows from dev → staging → production, following platform standards.
    - Work on online and batch inference paths:
          - Model packaging and deployment.
          - Rollout strategies (canary, shadow, rollback) aligned with SRE/Infra.
    - Instrument pipelines and services with metrics, logs and traces:
          - Integrate with our observability stack (e.g. OTel, Coralogix).
          - Expose dashboards and alerts for ML components (latency, errors, drift, freshness).
    3. AI‑assisted automation for MLOps and Feature Store
    - Integrate and extend agents and AI services (built by the AI Team and MLOps) to automate key parts of the Feature Store and MLOps workflows (health checks, drift and quality analysis, documentation/specs, incident triage, FinOps suggestions, etc.).
    - Design these automations with clear guardrails: observable, auditable and easy to roll back, always keeping humans in control of production decisions.
    4. Reliability, security and compliance in practice
    - Implement changes that respect platform standards around:
          - Access control, secrets management and PII handling in features and models.
          - Environment separation and change management for ML/AI components.
    - Participate in on‑call rotations or escalation paths for ML pipelines and feature infrastructure:
          - Diagnose and fix incidents.
          - Contribute improvements to playbooks, dashboards and tests.
    5. Collaboration and technical contribution
    - Work closely with:
          - MLOps Technical Referent to align on architecture and technical direction.
          - Data Science squads and the AI Team to understand requirements and unblock use cases.
          - Fraud, Anomaly and other product squads as consumers of features and models.
    - Contribute to internal documentation, RFCs, examples and onboarding guides so other engineers and data scientists can adopt the platform more easily.
    - Mentor mid‑level engineers on good practices in pipelines, testing, observability and automation.
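
The point‑in‑time correctness requirement in the Feature Store responsibilities above has a well‑known pitfall: when assembling offline training sets, each label row must only see feature values computed at or before its own event time, otherwise future information leaks into training. A minimal sketch of such a join in pandas (entity, column names and values are made up for illustration, not dLocal's actual schema):

```python
import pandas as pd

# Hypothetical label events: each row is a training example at an event timestamp.
labels = pd.DataFrame({
    "entity_id": ["u1", "u2", "u1"],
    "event_ts": pd.to_datetime(["2024-01-10", "2024-01-15", "2024-01-20"]),
    "label": [0, 0, 1],
}).sort_values("event_ts")

# Hypothetical feature snapshots: the value of a feature as of each computation time.
features = pd.DataFrame({
    "entity_id": ["u1", "u2", "u1"],
    "feature_ts": pd.to_datetime(["2024-01-05", "2024-01-12", "2024-01-15"]),
    "txn_count_7d": [3, 2, 7],
}).sort_values("feature_ts")

# Point-in-time join: for each label row, take the most recent feature value
# computed at or before that row's event time (direction="backward"),
# matched per entity, so no future feature values leak into the training set.
training_set = pd.merge_asof(
    labels,
    features,
    left_on="event_ts",
    right_on="feature_ts",
    by="entity_id",
    direction="backward",
)
print(training_set[["entity_id", "event_ts", "txn_count_7d"]])
```

In production this is typically done by the Feature Store's retrieval layer over Delta tables rather than in pandas, but the backward, per-entity "as-of" semantics are the same property the offline pipelines must guarantee.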

What skills do I need?

    Must‑haves
    - Solid experience as a Senior Engineer working on MLOps, data platforms, or large‑scale backend / distributed systems.
    - Hands‑on experience with big data / streaming technologies (e.g. Spark, Flink, Kafka, Kinesis, or similar).
    - Proven track record building production‑grade ML pipelines:
          - Experiment tracking and reproducible training flows.
          - CI/CD for models and data pipelines.
          - Online and batch inference at scale.
    - Familiarity with cloud‑based ML platforms and containerized deployments (e.g. Databricks, SageMaker, Vertex AI, or equivalent).
    - Strong understanding of observability:
          - Metrics, logs and traces.
          - Data and model drift, freshness and quality checks.
    - Ability to write clean, maintainable code and collaborate through reviews, design docs and pairing sessions.
    - Comfortable communicating with Data Scientists, ML Engineers and Infra/SRE, translating requirements into concrete technical solutions.
    Nice‑to‑haves
    - Experience working with or around Feature Stores (Feast, Databricks Feature Store, custom implementations, etc.).
    - Exposure to LLMs, agents and AI assistants, especially applied to:
          - Developer productivity (code/infra copilots).
          - Log/metric/incident analysis or documentation generation.
    - Experience in Fintech, risk, fraud or anomaly detection environments.
    - Contributions to internal standards, RFCs, runbooks or technical talks.
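
The data and model drift checks mentioned in the must‑haves can be implemented many ways; one common, simple signal is the population stability index (PSI) between a baseline sample (e.g. training data) and a recent serving sample. A minimal sketch (the bin count and thresholds are illustrative conventions, not dLocal's actual setup):

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a current sample of one feature."""
    # Bin edges from the baseline distribution's quantiles.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range current values
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Small smoothing term avoids division by zero / log(0) for empty bins.
    e_pct = (e_counts + 1e-6) / (e_counts.sum() + 1e-6 * bins)
    a_pct = (a_counts + 1e-6) / (a_counts.sum() + 1e-6 * bins)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # e.g. a feature at training time
no_drift = rng.normal(0.0, 1.0, 10_000)   # same distribution in serving
drifted = rng.normal(0.5, 1.0, 10_000)    # mean has shifted in serving
print(population_stability_index(baseline, no_drift))  # near zero
print(population_stability_index(baseline, drifted))   # clearly larger
```

A common rule of thumb treats PSI above roughly 0.1 as worth investigating and above 0.25 as significant drift; in practice a check like this would run per feature group and feed the alerting described in the observability responsibilities.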

What do we offer?

Besides the tailored benefits we have for each country, dLocal will help you thrive and go that extra mile by offering you:
- Flexibility: we have flexible schedules and we are driven by performance.
- Fintech industry: work in a dynamic and ever-evolving environment, with plenty to build and boost your creativity.
- Referral bonus program: our internal talents are the best recruiters - refer someone ideal for a role and get rewarded.
- Learning & development: get access to a Premium Coursera subscription.
- Language classes: we provide free English, Spanish, or Portuguese classes.
- Social budget: you'll get a monthly budget to chill out with your team (in person or remotely) and deepen your connections!
- dLocal Houses: want to rent a house to spend one week anywhere in the world coworking with your team? We’ve got your back!

Flexibility in how you work: We focus on impact and productivity over fixed hours. This means our teams have flexible schedules and, depending on your role and location, you will combine self‑managed focus time with moments of in‑person connection in our collaboration hubs.

What happens after you apply?
Our Talent Acquisition team is invested in creating the best candidate experience possible, so don’t worry, you will definitely hear from us. We will review your CV and keep you posted by email at every step of the process!

Also, you can check out our webpage, LinkedIn, and YouTube for more about dLocal!

Top Skills

Databricks
Feature Store
Flink
Kafka
Kinesis
MLOps
SageMaker
Spark
Vertex AI
The Company
932 Employees
Year Founded: 2016

What We Do

dLocal started with one goal: to close the payments innovation gap between global enterprise companies and customers in emerging economies. We support over 900 payment methods in more than 40 countries.

With the ability to accept local payment methods and facilitate cross-border fund settlement worldwide, our merchants reach billions of underserved consumers in the high-growth markets of Africa, Asia, and Latin America. dLocal offers the ideal payment solutions for global commerce:

Payins: Accept local payment methods
Payouts: Compliantly send funds cross-border
Defense Suite: Manage fraud effectively
dLocal for Platforms: Unify your platform’s payment solution
Local Issuing: Localize payments for your gig-economy workers, suppliers, and partners

Similar Jobs

SunnyData Logo SunnyData

Senior Machine Learning Engineer

Information Technology • Software • Analytics
Remote or Hybrid
Uruguay
103 Employees

Circle (Community) Logo Circle (Community)

Head of Media

Artificial Intelligence • Consumer Web • Digital Media • Information Technology • Social Impact • Software
Easy Apply
Remote
31 Locations
250 Employees
150K-220K Annually

Circle (Community) Logo Circle (Community)

Lead Product Designer

Artificial Intelligence • Consumer Web • Digital Media • Information Technology • Social Impact • Software
Easy Apply
Remote
31 Locations
250 Employees
140K-170K Annually

Circle (Community) Logo Circle (Community)

Software Engineer

Artificial Intelligence • Consumer Web • Digital Media • Information Technology • Social Impact • Software
Easy Apply
Remote
31 Locations
250 Employees
130K-140K Annually

Similar Companies Hiring

Granted Thumbnail
Mobile • Insurance • Healthtech • Financial Services • Artificial Intelligence
New York, New York
23 Employees
Scotch Thumbnail
Software • Retail • Payments • Fintech • eCommerce • Artificial Intelligence • Analytics
US
25 Employees
Kepler  Thumbnail
Fintech • Software
New York, New York
6 Employees

Sign up now Access later

Create Free Account

Please log in or sign up to report this job.

Create Free Account