Senior Data Engineer

12 Locations
Remote
Senior level
Big Data • Blockchain • Software • Business Intelligence • App development • Big Data Analytics • Automation
Your partner in digital innovation and growth
The Role
As a Senior Data Engineer, you will design and maintain data workflows using AWS and modern data technologies while ensuring data quality and scalability.

Explore the Nearsure experience! 

🌐 Join our close-knit LATAM remote team: Connect with your teammates and management through fun activities like coffee breaks, tech talks, and games. 

🍃 Say goodbye to micromanagement! We champion autonomy, open communication, and respect for diversity as our core values. 

Your well-being matters: Our People Care team is here from day one to support you with everything from time-off requests to wellness check-ins. 

Plus, our Accounts Management team ensures smooth, effective client relationships, so you can focus on what you do best. 

Ready to grow with us? 🚀 
 
Here’s what we offer you by joining us! 
 
Competitive USD salary 💲 – We value your skills and contributions! 

🌐 100% remote work 🏢 – While you can work from anywhere, you’re always welcome to connect with teammates and grow your network at our coworking spaces across LATAM!  

💼 Paid time off – Take the time you need according to your country’s regulations, all while receiving your full salary. Rest, recharge, and come back stronger!  

🎉 National Holidays celebrated 🌴 – Take time off to celebrate important events and traditions with loved ones, fully embracing your culture. 

😷 Sick leave – Focus on your health without the stress. Take the necessary time to recover and feel better. 

💸 Refundable Annual Credit – Spend it on the perks you love to enhance your work-life balance! 

🤝 Team-building activities – Join us for coffee breaks, tech talks, and after-work gatherings to bond with your Nearsure family and feel part of our vibrant community. 

🥳 Birthday day off 🎂 – Enjoy an extra day off during your birthday week to celebrate in style with friends and family! 
 
 
About the project 
 
As a Senior Data Engineer, you will work with modern data technologies such as Apache Spark (PySpark), Apache Iceberg, and AWS services (S3, EMR, Athena, Glue), ensuring data quality, scalability, and operational excellence. Familiarity with orchestration tools (Airflow) and CI/CD practices, along with the ability to leverage AI-powered development assistants (e.g., GitHub Copilot), is essential. 
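As an illustrative aside (not part of the role description), the sketch below shows the kind of stack the project describes: a PySpark session configured with an Iceberg catalog backed by AWS Glue and S3, writing a cleaned batch into a day-partitioned Iceberg table. The bucket, catalog, and table names are hypothetical placeholders.

from pyspark.sql import SparkSession, functions as F

# Spark session with Iceberg extensions and a Glue-backed catalog on S3.
# Catalog name, warehouse bucket, and table identifiers are illustrative only.
spark = (
    SparkSession.builder
    .appName("batch-events-etl")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lakehouse", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lakehouse.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.lakehouse.io-impl",
            "org.apache.iceberg.aws.s3.S3FileIO")
    .config("spark.sql.catalog.lakehouse.warehouse", "s3://example-bucket/warehouse/")
    .getOrCreate()
)

# Read a raw batch, apply basic cleansing/validation, and write it to an
# Iceberg table partitioned by day so downstream Athena queries can prune files.
raw = spark.read.parquet("s3://example-bucket/raw/events/")
cleaned = (
    raw.dropDuplicates(["event_id"])
       .filter(F.col("event_ts").isNotNull())
)
(
    cleaned.writeTo("lakehouse.analytics.events")
           .partitionedBy(F.days(F.col("event_ts")))
           .createOrReplace()
)

On a stack like this, such a job would typically run as an EMR step, with Athena querying the resulting Iceberg tables through the Glue catalog.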
 
What your day-to-day work will look like 
 
Design, develop, and maintain batch ETL/ELT pipelines and data workflows for large-scale datasets in AWS. 
Implement and optimize data lakehouse architectures using Apache Iceberg on S3, ensuring schema evolution, partitioning strategies, and table maintenance. 
Build and tune distributed data processing jobs with Apache Spark (PySpark) for performance and cost efficiency. 
Orchestrate workflows using Apache Airflow, including DAG design, scheduling, and SLA monitoring (a minimal sketch follows this list). 
Apply best practices in code quality, version control (Git/GitHub), and CI/CD for data engineering projects. 
Ensure data quality, security, and compliance through validation, monitoring, and governance frameworks (Glue Catalog, IAM, encryption). 
Collaborate with cross-functional teams (data scientists, analysts, architects) to deliver scalable and reliable data solutions. 
Contribute to the development and optimization of analytics applications, ensuring they are powered by well-structured, high-quality data pipelines. 
Continuously evaluate and adopt emerging technologies and AI-powered tools to improve productivity and maintain technical excellence. 
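To make the Airflow orchestration above concrete, here is a minimal sketch of a daily batch DAG with retries, a task-level SLA, and catchup enabled for backfills. The DAG id, owner, and the stubbed EMR submission step are hypothetical placeholders, not details of the actual project.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",           # hypothetical team name
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
    "sla": timedelta(hours=2),             # missed SLAs surface in Airflow's SLA monitoring
}

with DAG(
    dag_id="daily_events_batch",           # illustrative pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=True,                          # enables backfilling historical runs
    default_args=default_args,
    tags=["batch", "iceberg", "emr"],
) as dag:
    # Stub for submitting the PySpark job to EMR (in practice an EMR operator
    # or a step-submission script would go here).
    submit_spark_job = BashOperator(
        task_id="submit_emr_step",
        bash_command="echo 'submit PySpark step to EMR here'",
    )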
 
This would make you the ideal candidate 
 
Bachelor's Degree in Computer Science, Engineering, or a related field. 
5+ Years of experience working in data engineering, designing and developing large-scale data solutions. 
5+ Years of experience working with Python for data manipulation, scripting, and integration tasks. 
5+ Years of experience working with SQL and DBMSs (e.g., PostgreSQL, MySQL, SQL Server), data modeling (Star Schema, Snowflake Schema), and query tuning and optimization. 
3+ Years of experience working with Apache Spark (PySpark preferred). 
3+ Years of experience building batch ETL/ELT pipelines for large-scale datasets. 
3+ Years of experience working with AWS data services (S3, Athena/Presto, Glue, Lambda, CloudWatch) in production. 
2+ Years of experience working with Apache Iceberg (table design, partition strategies, schema evolution, maintenance, ingestion pipelines into Apache Iceberg on S3). 
2+ Years of experience working with AWS EMR as the execution platform for big data workloads. 
2+ Years of experience orchestrating data workflows with Apache Airflow (DAG design, scheduling, backfills, SLAs). 
2+ Years of experience using Git/GitHub (branching strategies, pull request reviews, CI/CD for data pipelines). 
Experience designing efficient ingestion pipelines into analytical systems. 
Proficiency in logging, auditing, and monitoring for data pipelines. 
Experience with data cleansing, validation, and transformation for analytical/reporting systems. 
Familiarity with data security and privacy practices. 
Solid understanding of cloud-native analytics architectures (data lake/lakehouse, ELT patterns). 
Proven ability to leverage AI-powered assistants (e.g., GitHub Copilot, AI coding agents) as part of the engineering workflow, demonstrating comfort and efficiency in using these tools to accelerate delivery and maintain code quality. 
Advanced English Level is required for this role, as you will work with US clients. Effective communication in English is essential to deliver the best solutions to our clients and expand your horizons. 
 
What to expect from our hiring process 
 
1. Let’s chat about your experience!  
2. Impress our recruiters, and you’ll move on to a technical interview with our top developers.  
3. Nail that, and you’ll meet our client - your final step to joining our amazing team! 
 
🎯 At Nearsure, we’re dedicated to solving complex business challenges through cutting-edge technology, and we believe in the power of tailored solutions. Whether you are passionate about transforming businesses with Generative AI, building innovative software products, or implementing comprehensive enterprise platform solutions, we invite you to be part of our dynamic team! 

We would love to hear from you if you are eager to make an impact and join a collaborative team that values creativity and expertise.  

Let’s work together to shape the future of technology!  

🧑‍💻 Apply now! 

By applying to this position, you authorize Nearsure to collect, store, transfer, and process your personal data in accordance with our Privacy Policy. For more information, please review our Privacy Policy.

Top Skills

Apache Airflow
Apache Iceberg
Spark
AWS Athena
AWS EMR
AWS Glue
AWS S3
Git
MySQL
Postgres
PySpark
Python
SQL
SQL Server

The Company
HQ: Los Angeles, California
650 Employees

What We Do

We solve complex business problems through technology. Whether you’re transforming your business with Gen AI, building a software product, or implementing an enterprise platform solution, we deliver customized solutions according to your precise needs and leverage our experiences across industries and technical challenges to deliver results. In short, we're your partner in digital innovation and growth.
