Senior Data Engineer (Cloud / Snowflake / PySpark)

Posted 24 Days Ago
Hiring Remotely in Dinastía, Salinas Victoria, Nuevo León, MEX
Remote
Senior level
Information Technology • Software • Consulting
The Role
Design, build, and maintain scalable and reliable data pipelines, implementing ETL workflows and optimizing Snowflake solutions, while collaborating with data scientists and engineers.

We love technology, and we enjoy what we do. We are always looking for innovation. We are socially aware and strive to do better every day. We make things happen, and you can trust us. Our Enrouters are always up for a challenge; we ask questions, and we love to learn. We pride ourselves on great benefits and compensation, a fantastic work environment, flexible schedules, and policies that support a healthy balance between work and life outside of it.

At Enroute, we are looking for a Senior Data Engineer to join a growing Data team responsible for designing, building, and evolving scalable data platforms and cloud-native pipelines that support business intelligence, analytics, and operational workloads.

The ideal candidate is highly hands-on with Python, Spark/PySpark, Snowflake, and cloud-based data architectures, and has strong experience building reliable, production-grade ETL/ELT pipelines and modern data warehousing solutions.

This role is ideal for someone who enjoys solving complex data challenges, optimizing performance at scale, and collaborating closely with data scientists, analysts, and engineering teams.


Requirements

✅ Must-Have Requirements

Core Data Engineering

  • 5+ years of professional experience in Data Engineering or related fields
  • Strong experience designing and maintaining scalable data pipelines
  • Deep understanding of ETL / ELT best practices
  • Strong experience with large-scale data processing architectures
  • Proven experience with batch data processing
  • Strong experience with data warehousing concepts
Programming & Data Processing
  • Advanced Python
  • Strong hands-on experience with Apache Spark / PySpark
  • Advanced SQL (complex queries, optimization, transformations)
  • Strong experience processing large structured and unstructured datasets
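For illustration only, here is a minimal sketch of the batch ETL pattern these requirements describe, in plain Python with the stdlib `sqlite3` standing in for a warehouse. All table, column, and function names are hypothetical; a production pipeline would use Spark/PySpark and Snowflake as listed above.

```python
import sqlite3

def extract(rows):
    """Extract step: in practice this would read from files, APIs, or a data lake.
    Here we just drop records with missing amounts."""
    return [r for r in rows if r.get("amount") is not None]

def transform(rows):
    """Transform step: normalize types and derive fields (dollars -> cents)."""
    return [
        {"customer_id": r["customer_id"],
         "amount_cents": int(round(r["amount"] * 100))}
        for r in rows
    ]

def load(conn, rows):
    """Load step: idempotent upsert so re-running the batch is safe."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(customer_id TEXT PRIMARY KEY, amount_cents INTEGER)"
    )
    conn.executemany(
        "INSERT INTO orders VALUES (:customer_id, :amount_cents) "
        "ON CONFLICT(customer_id) DO UPDATE SET amount_cents = excluded.amount_cents",
        rows,
    )
    conn.commit()

# Hypothetical input batch.
raw = [
    {"customer_id": "a1", "amount": 12.5},
    {"customer_id": "b2", "amount": None},   # filtered out by extract
    {"customer_id": "c3", "amount": 3.99},
]
conn = sqlite3.connect(":memory:")
load(conn, transform(extract(raw)))
total = conn.execute("SELECT SUM(amount_cents) FROM orders").fetchone()[0]
print(total)  # 1250 + 399 = 1649
```

The idempotent-load detail is the part interviewers for roles like this tend to probe: a batch job that can be re-run after a failure without duplicating rows.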
Cloud & Infrastructure
  • Hands-on experience with AWS or Azure
  • Experience building cloud-native data solutions
  • Experience with Docker
  • Experience with CI/CD pipelines
  • Strong knowledge of Git / version control
Data Orchestration
  • Strong hands-on experience with Apache Airflow
  • Experience designing workflow orchestration pipelines
  • Scheduling, monitoring, and failure recovery strategies
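As a toy illustration of the retry and failure-recovery ideas above: this is not Airflow's API (Airflow configures retries per task), just a plain-Python sketch of the control flow, with hypothetical task and parameter names.

```python
import time

def run_with_retries(task, name, max_retries=3, backoff_s=0.0, log=None):
    """Run a task callable, retrying on failure with linear backoff.

    Each attempt's outcome is recorded in `log`, mimicking the monitoring
    an orchestrator would provide. Raises once retries are exhausted."""
    log = log if log is not None else []
    for attempt in range(1, max_retries + 1):
        try:
            result = task()
            log.append((name, attempt, "success"))
            return result
        except Exception as exc:
            log.append((name, attempt, f"failed: {exc}"))
            time.sleep(backoff_s * attempt)  # back off before the next attempt
    raise RuntimeError(f"task {name!r} exhausted {max_retries} retries")

# A flaky task that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}
def flaky_extract():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("upstream unavailable")
    return "rows"

log = []
result = run_with_retries(flaky_extract, "extract", max_retries=3, log=log)
print(result)    # "rows"
print(len(log))  # 3 entries: two failures, then one success
```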
Critical Must-Have
  • Strong expertise in Snowflake (MUST HAVE)
  • Snowflake data warehouse design
  • Snowflake development
  • Query and warehouse optimization
  • Performance tuning and cost efficiency
  • Cloud data warehouse architecture best practices
🎯 Responsibilities
  • Design, build, and maintain scalable, reliable, and high-performance data pipelines
  • Develop end-to-end ETL / ELT workflows
  • Process large-scale datasets using Spark / PySpark
  • Build and orchestrate cloud-native pipelines in AWS and/or Azure
  • Design and optimize Snowflake data warehouse solutions
  • Ensure performance, scalability, governance, and cost optimization
  • Write and optimize advanced SQL queries
  • Collaborate with Data Scientists, Analysts, and Software Engineers
  • Translate business requirements into production-ready data solutions
  • Ensure data consistency, availability, and quality
  • Implement CI/CD, Git workflows, and Dockerized deployments
  • Improve reliability and observability of data platforms

Benefits
  • Monetary compensation
  • Year-end Bonus
  • IMSS, AFORE, INFONAVIT
  • Major Medical Expenses Insurance
  • Minor Medical Expenses Insurance
  • Life Insurance
  • Funeral Expenses Insurance
  • Preferential rates for car insurance
  • TDU Membership
  • Holidays and Vacations
  • Sick days
  • Bereavement days
  • Civil Marriage days
  • Maternity & Paternity leave
  • English and Spanish classes
  • Performance Management Framework
  • Certifications
  • TALISIS Agreement: Discounts at ADVENIO, Harmon Hall, U-ERRE, UNID
  • Taquitos Rewards
  • Amazon Gift Card on your Birthday
  • Work-from-home Bonus
  • Laptop Policy

Equal employment

Enroute is committed to providing equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.


The Company
HQ: Houston, TX
187 Employees
Year Founded: 2005

What We Do

We are a group of top-notch gurus who bring together years of experience, cutting-edge technology skills, and cost-effective frameworks in a single place, fusing them to deliver a sexy, perfectly fitted solution for every one of our partners. We guide our clients through the preparation and execution of critical projects and sourcing plans to: minimize risks, drive innovation, ensure the highest ROI, and create value.


