Senior Data Engineer

Hiring Remotely in Venezuela
Remote
Senior level
Fintech • Financial Services
The Role
The Senior Data Engineer will design and build event-driven, near real-time data ingestion and Lakehouse architecture using tools like AWS, Snowflake, and Kafka, ensuring data reliability and governance.

What is Cobre, and what do we do?

Cobre is Latin America’s leading instant B2B payments platform. We solve the region’s most complex money movement challenges by building advanced financial infrastructure that enables companies to move money faster, safer, and more efficiently.

We enable instant business payments—local or international, direct or via API—all from a single platform.

Built for fintechs, PSPs, banks, and finance teams that demand speed, control, and efficiency. From real-time payments to automated treasury, we turn complex financial processes into simple experiences.

Cobre is the first platform in Colombia to enable companies to pay both banked and unbanked beneficiaries within the same payment cycle and through a single interface.

We are building the enterprise payments infrastructure of Latin America!


What we are looking for:

We are looking for a Senior Data Engineer to design, build, and evolve our modern, event-driven, near real-time Lakehouse platform on AWS + Snowflake.

This is a data platform role, focused on high-volume transactional data, near real-time processing, and strong architectural foundations.

Our data ecosystem is built around:

  • Event-driven ingestion using Confluent Cloud (Kafka)
  • CDC ingestion from databases via AWS DMS
  • Custom connectors for specific internal and third-party sources
  • Near real-time and batch processing
  • AWS S3 + S3 Tables (Apache Iceberg) as the core Lakehouse storage
  • Medallion Architecture (Bronze / Silver / Gold)
  • Glue & dbt as the transformation layer
  • Snowflake as the access, analytics, and data governance layer

You will play a key role in defining how data flows from source to Lakehouse to analytics, ensuring reliability, scalability, observability, and governance.

What you will be doing:

Event-Driven & Near Real-Time Data Ingestion

  • Design and maintain event-driven ingestion pipelines using Confluent + AWS.
  • Ingest CDC streams from transactional databases using AWS DMS.
  • Build and maintain custom ingestion connectors for internal and external sources.
  • Ensure data consistency, ordering, idempotency, and replayability in near real-time pipelines.
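The guarantees listed above (ordering, idempotency, replayability) can be illustrated with a minimal, dependency-free sketch. The event shape and the `process_events` helper are hypothetical, for illustration only, not part of Cobre's actual stack:

```python
# Sketch of idempotent, replayable event processing. Each event carries a
# unique key (here: partition + offset, as in Kafka); replaying the same
# stream must not double-apply effects.

def process_events(events, state, seen_offsets):
    """Apply each event at most once, in offset order."""
    for event in sorted(events, key=lambda e: e["offset"]):  # enforce ordering
        key = (event["partition"], event["offset"])
        if key in seen_offsets:  # idempotency: skip already-applied events
            continue
        seen_offsets.add(key)
        state[event["account"]] = state.get(event["account"], 0) + event["amount"]
    return state

events = [
    {"partition": 0, "offset": 2, "account": "a", "amount": 50},
    {"partition": 0, "offset": 1, "account": "a", "amount": 100},
]
state = process_events(events, {}, seen := set())
state = process_events(events, state, seen)  # replay: no double-counting
```

In production these guarantees come from the broker and consumer configuration (offset commits, keyed partitions) rather than an in-memory set, but the contract is the same.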

Lakehouse & Storage Layer

  • Own the Lakehouse architecture based on Apache Iceberg on S3 Tables.
  • Design Iceberg tables optimized for streaming and batch workloads.
  • Manage schema evolution, partitioning strategies, compaction, and time travel.
  • Implement and evolve Bronze, Silver, and Gold layers aligned with event-driven ingestion patterns.
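As a toy illustration of the Bronze / Silver / Gold refinement described above (in practice each layer is an Iceberg table on S3; the record fields here are hypothetical):

```python
# Toy Medallion flow: Bronze = raw events as delivered, Silver = deduplicated
# and typed, Gold = business-level aggregate ready for analytics.

bronze = [  # raw CDC records, duplicates possible on redelivery
    {"id": 1, "amount": "100.0", "status": "posted"},
    {"id": 1, "amount": "100.0", "status": "posted"},  # duplicate delivery
    {"id": 2, "amount": "40.5", "status": "posted"},
]

# Silver: deduplicate on the primary key and cast string amounts to numbers
silver = {r["id"]: {**r, "amount": float(r["amount"])} for r in bronze}

# Gold: curated aggregate exposed to analytics consumers
gold = {"posted_total": sum(r["amount"] for r in silver.values())}
```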

Transformations & Processing

  • Build near real-time and batch ELT pipelines using AWS Glue, Python, and dbt.
  • Implement incremental models optimized for streaming-derived datasets.
  • Ensure transformations are modular, testable, and production-ready.
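The incremental-model idea can be sketched in plain Python: only rows newer than the last processed watermark are transformed, which is the same pattern a dbt incremental model expresses in SQL. The row shape and `run_incremental` helper are hypothetical:

```python
# Sketch of an incremental transformation: transform only rows newer than
# the stored watermark, then advance the watermark.

def run_incremental(source_rows, target_rows, watermark):
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    target_rows.extend(
        {"id": r["id"], "amount_cents": r["amount"] * 100}  # example transform
        for r in new_rows
    )
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return target_rows, new_watermark

source = [
    {"id": 1, "amount": 2, "updated_at": 10},
    {"id": 2, "amount": 3, "updated_at": 20},
]
target, wm = run_incremental(source, [], watermark=10)  # only id 2 is new
```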

Observability, Quality & Reliability

  • Implement data observability and monitoring across ingestion and transformation layers.
  • Track freshness, volume, schema changes, and data quality metrics.
  • Define alerting and monitoring strategies to proactively detect pipeline and data issues.
  • Implement data quality checks, contracts, and validation rules.
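A minimal sketch of the kinds of checks listed above (freshness, volume, schema); function names and thresholds are illustrative, not a specific observability tool's API:

```python
# Lightweight data quality checks: freshness, volume, and required schema.

def check_freshness(latest_event_ts, now, max_lag_seconds):
    """Is the newest event recent enough?"""
    return (now - latest_event_ts) <= max_lag_seconds

def check_volume(row_count, expected_min):
    """Did we receive at least the expected number of rows?"""
    return row_count >= expected_min

def check_schema(row, required_fields):
    """Does the row carry every contractually required field?"""
    return required_fields <= set(row)

row = {"id": 1, "amount": 100, "created_at": 1_700_000_000}
results = {
    "fresh": check_freshness(1_700_000_000, 1_700_000_060, max_lag_seconds=300),
    "volume_ok": check_volume(10_000, expected_min=1_000),
    "schema_ok": check_schema(row, {"id", "amount", "created_at"}),
}
```

In practice each failing check would feed the alerting strategy rather than a local dict, but the contract-style shape is the same.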

Analytics Enablement & Governance

  • Expose curated datasets through Snowflake as a governed access layer.
  • Collaborate on access control, data retention, and lineage strategies.
  • Ensure consistency between Lakehouse storage and analytics consumption.

Platform Ownership

  • Contribute to architectural decisions across ingestion, storage, and consumption.
  • Balance latency, cost, scalability, and reliability trade-offs.
  • Define standards, templates, and best practices for the data platform.

What you need:

Must have

  • 3–6+ years of experience as a Data Engineer (flexible depending on depth of expertise).
  • Strong experience with event-driven data architectures.
  • Hands-on experience with Kafka / Confluent in production environments.
  • Experience ingesting data via CDC (AWS DMS or similar tools).
  • Solid experience designing near real-time data pipelines.
  • Strong knowledge of Apache Iceberg.
  • Experience with AWS (S3, Glue, Lambda, EventBridge, Firehose, etc.).
  • Advanced SQL and Python.
  • Production experience using dbt.
  • Experience using Snowflake as an analytics and governance layer.

Nice to have

  • Experience with S3 Tables specifically.
  • Infrastructure as Code (Terraform / Terragrunt).
  • Experience with high-volume transactional or fintech systems.
  • Familiarity with data contracts, schema registries, and data observability tools.

Who will thrive in this role

  • Engineers who enjoy event-driven systems and near real-time processing.
  • People who think in platforms, not one-off pipelines.
  • Engineers comfortable owning architectural decisions.
  • People who enjoy working close to core business events and transactional data.

What this role makes explicit

  • Kafka / Confluent is the primary ingestion layer.
  • CDC + events are first-class citizens.
  • Near real-time is core, not an afterthought.
  • Iceberg-first Lakehouse, Snowflake as access & governance, not as the lake.
  • Clear Mid / Senior expectations.
  • Observability and reliability are part of the job, not “nice to have”.

Not a fit if

  • You focus on BI/dashboards or SQL-only analytics.
  • You’ve only worked with batch ETL.
  • You’re looking for a junior or execution-only role.
  • You’re not interested in event-driven, near real-time systems.

Top Skills

Apache Iceberg
AWS
AWS DMS
AWS Glue
Confluent Cloud
dbt
Kafka
Python
Snowflake
SQL

The Company
HQ: Bogota
170 Employees
Year Founded: 2019

What We Do

Optimize online treasury and ensure secure online payments with Cobre. We are your trusted partner in treasury control and more. At Cobre, we build technology to solve the most complex problems of money movement, so that companies can keep their attention on what truly matters and do more. With Cobre, companies build intelligent, interoperable financial processes and products. Your business doesn't stop. With Cobre, your money doesn't either.

