Senior Data Engineer

Posted 5 Days Ago
Hiring Remotely in Montevideo
In-Office or Remote
Senior level
Database • Analytics
The Role
The Senior Data Engineer will build enterprise-grade data pipelines, manage data mart creation, and implement CI/CD processes while collaborating with clients on scalable solutions.
Company Description

Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by aligning human expertise with artificial intelligence, unlocking value and fostering innovation through world-class people and data-driven strategy. We believe the combination of people and AI can create more fulfilling work and projects for our people and clients alike. For more information, visit www.blend360.com.

We are seeking a Senior Data Engineer to contribute to our next level of growth and expansion.

Job Description

What is this position about?

We are looking for a Senior Data Engineer to join a high-impact Customer Insights product engagement for a global QSR client. This role is hands-on-keyboard and focused on building enterprise-grade data pipelines that power a unified Analytics ID, modular data marts, and a scalable feature store. The ideal candidate brings deep expertise in Databricks, Spark (PySpark), SQL, and large-scale identity resolution pipelines, and thrives in complex, production-ready data environments.

Responsibilities

  • Design and build production-grade data pipelines in Databricks using Spark/PySpark and SQL.

  • Develop and maintain an Analytics ID stitching pipeline using deterministic and probabilistic matching techniques across multiple customer data sources.

  • Build and manage modular data marts (Identity, Behavior, Demographics) with independent refresh cadences.

  • Implement and maintain a scalable feature store supporting downstream analytics and data science use cases.

  • Own the end-to-end data lifecycle: ingestion, transformation, validation, deployment, monitoring, and optimization.

  • Develop data quality frameworks including schema drift detection, anomaly monitoring, match-rate validation, and automated deduplication audits.

  • Implement CI/CD processes for multi-environment promotion (dev/staging/prod) in Databricks environments.

  • Coordinate orchestration workflows and manage dependencies using Databricks Workflows or similar tools.

  • Collaborate closely with Data Architects and Client stakeholders to translate business rules into scalable technical solutions.

  • Produce comprehensive technical documentation including data contracts, lineage maps, architecture diagrams, and operational runbooks.
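To give candidates a feel for the Analytics ID stitching work described above, here is a minimal, framework-free sketch of deterministic and probabilistic matching with transitive stitching. In production this logic would run as PySpark joins over Delta tables; the records, thresholds, and ID formats below are purely hypothetical illustrations.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical customer records from loyalty, POS, and web sources.
# In a real pipeline these would be Spark DataFrames read from Delta tables.
records = [
    {"id": "loy-1", "email": "ana@example.com", "name": "Ana Pérez"},
    {"id": "pos-7", "email": "ana@example.com", "name": "A. Perez"},    # deterministic: same email
    {"id": "web-3", "email": None, "name": "Anna Perez"},               # probabilistic: similar name
    {"id": "pos-9", "email": "luis@example.com", "name": "Luis Gómez"},
]

# Union-find makes stitching transitive: if A~B and B~C, all three
# records end up sharing one Analytics ID.
parent = {r["id"]: r["id"] for r in records}

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

def name_similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for r1, r2 in combinations(records, 2):
    deterministic = r1["email"] and r1["email"] == r2["email"]
    probabilistic = name_similarity(r1["name"], r2["name"]) > 0.8  # threshold is an assumption
    if deterministic or probabilistic:
        union(r1["id"], r2["id"])

# One Analytics ID per connected cluster of matched records.
analytics_id = {r["id"]: "AID-" + find(r["id"]) for r in records}
```

Here `loy-1` and `pos-7` match deterministically on email, `web-3` joins the same cluster probabilistically via name similarity, and `pos-9` keeps its own Analytics ID. At QSR scale the pairwise comparison would be replaced by blocking keys and Spark joins, but the stitching semantics are the same.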

Qualifications

  • 5+ years of experience in Data Engineering building production-grade data pipelines at scale.

  • Strong hands-on experience with Databricks and Apache Spark (PySpark preferred).

  • Advanced SQL skills (complex joins, CTEs, window functions, performance tuning).

  • Experience developing identity resolution or entity matching pipelines (deterministic and/or probabilistic).

  • Experience designing and implementing data marts or dimensional models (Kimball or similar).

  • Familiarity with data quality frameworks (schema drift detection, validation, anomaly monitoring).

  • Experience implementing CI/CD for data pipelines and managing multi-environment deployments.

  • Strong communication skills and ability to present technical concepts to non-technical stakeholders.

  • Experience using Jira for ticket tracking and Confluence for documentation.
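As a concrete example of the data quality work called out above, a schema drift check compares an observed schema against a data contract and reports added, removed, and type-changed columns. This is a simplified pure-Python sketch; in Databricks the observed schema would come from a DataFrame and the contract from pipeline metadata, and the column names below are hypothetical.

```python
# Hypothetical data contract vs. the schema actually observed on ingestion.
contract = {"customer_id": "string", "order_total": "double", "order_ts": "timestamp"}
observed = {"customer_id": "string", "order_total": "decimal(10,2)", "channel": "string"}

def detect_schema_drift(contract, observed):
    """Return columns that were added, removed, or changed type."""
    added = sorted(set(observed) - set(contract))
    removed = sorted(set(contract) - set(observed))
    changed = sorted(
        col for col in set(contract) & set(observed)
        if contract[col] != observed[col]
    )
    return {"added": added, "removed": removed, "changed": changed}

drift = detect_schema_drift(contract, observed)
# A pipeline would fail fast or raise an alert when any drift list is non-empty.
```

A production framework would layer alerting, match-rate validation, and anomaly monitoring on top of checks like this one.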

Nice to Have:

  • Experience with third-party data providers (Epsilon, LiveRamp, Neustar).

  • Experience with feature stores (Databricks Feature Store, Feast, or similar).

  • Knowledge of Databricks Unity Catalog.

  • Experience managing large-scale customer data (transactions, loyalty, retail/QSR data).

  • Experience with Delta Lake / Lakehouse architecture.

  • Familiarity with orchestration tools such as Airflow.

  • Experience working in consulting or embedded enterprise client environments.

 

What about languages?

  • Advanced English level (written and spoken) required for client-facing collaboration and technical presentations.

How much experience must I have?

  • Minimum of 5 years of professional experience in Data Engineering roles working with large-scale distributed data systems.

Additional Information

Our Perks and Benefits:
📚Learning Opportunities:

  • Certifications in AWS (we are AWS Partners), Databricks, and Snowflake.
  • Access to AI learning paths to stay up to date with the latest technologies.
  • Study plans, courses, and additional certifications tailored to your role.
  • Access to Udemy Business, offering thousands of courses to boost your technical and soft skills.
  • English lessons to support your professional communication.

👨🏽‍💻Travel opportunities to attend industry conferences and meet clients.

👩‍🏫 Mentoring and Development:

  • Career development plans and mentorship programs to help shape your path.

🎁 Celebrations & Support:

  • Special day rewards to celebrate birthdays, work anniversaries, and other personal milestones.
  • Company-provided equipment. 

⚖️ Flexible working options to help you strike the right balance.

Other benefits may vary according to your location in LATAM. For detailed information regarding the benefits applicable to your specific location, please consult with one of our recruiters.

 

So what are the next steps?

Our team is eager to learn about you! Send us your resume or LinkedIn profile below and we’ll explore working together!

Top Skills

Airflow
Databricks
Delta Lake
Lakehouse Architecture
PySpark
Spark
SQL

The Company
HQ: Columbia, MD
390 Employees
Year Founded: 2016

What We Do

Our Vision is to build a company of world-class people that helps our clients optimize business performance through data, technology and analytics.

Blend360 has two divisions:
Data Science Solutions: We work at the intersection of data, technology and analytics.
Talent Solutions: We live and breathe the digital and talent marketplace.
