We are seeking an experienced and skilled Data Engineer to join our team. As a Data Engineer, your primary responsibility will be to design, develop, and maintain data pipelines and solutions using modern technologies. You should have expertise in SQL with a focus on data warehousing, as well as experience with Azure Databricks, PySpark, Azure Data Factory, and Azure Data Lake. Strong knowledge of data engineering fundamentals, including working with Parquet/Delta tables in Azure Data Lake, is expected, and proficiency in Python and the ability to create data pipelines are necessary.
Job Responsibilities
- Designing and developing data pipelines to extract, transform, and load data from various sources into the data warehouse leveraging Python & notebooks.
- Writing complex SQL queries for data extraction and manipulation from the data warehouse.
- Building and maintaining ETL processes using Azure Databricks with PySpark.
- Implementing data integration workflows using Azure Data Factory.
- Collaborating with cross-functional teams including developers, data analysts, and business stakeholders to understand requirements and deliver high-quality solutions.
- Optimizing performance of the data pipelines and ensuring scalability and reliability of the systems.
- Monitoring data quality and troubleshooting issues in collaboration with the operations team.
- Maintaining documentation of the design and implementation of the data pipelines.
- Collaborating on coding best practices and keeping the team informed of new business-logic transformations.
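The responsibilities above center on the extract-transform-load (ETL) pattern. A minimal, stdlib-only Python sketch of that flow is below; in this role the real implementation would run as PySpark on Azure Databricks against Azure Data Lake, and the field names and sample data here are purely hypothetical.

```python
import csv
import io

# Hypothetical raw export from a source system
# (in practice, files landed in Azure Data Lake).
RAW_CSV = """order_id,amount,currency
1001,250.00,usd
1002,99.50,usd
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse the raw CSV into records."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: cast types and normalize values."""
    return [
        {
            "order_id": int(r["order_id"]),
            "amount": float(r["amount"]),
            "currency": r["currency"].upper(),
        }
        for r in rows
    ]

def load(rows: list[dict], target: list) -> None:
    """Load: append into the warehouse table (a plain list here)."""
    target.extend(rows)

warehouse: list[dict] = []
load(transform(extract(RAW_CSV)), warehouse)
```

In a PySpark pipeline the same three stages map onto reading source files into a DataFrame, applying transformations, and writing the result out as a Delta table.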
Requirements
- Expertise in SQL, ideally with experience working with data warehousing concepts.
- Strong hands-on experience with Azure Databricks and Spark.
- Proficiency in designing and implementing data integration workflows using Azure Data Factory.
- Proficiency in Python programming and the ability to develop scalable data engineering pipelines in Python.
- Solid understanding of data engineering fundamentals, including data modeling, data transformation, change data capture, and performance optimization techniques.
- Experience working with Azure Data Lake for storing large data sets, maintaining Parquet/Delta tables, and performing efficient querying.
- Experience with version control systems and familiarity with CI/CD practices.
- Strong interpersonal skills and the ability to communicate clearly and voice concerns in a group setting.
- A self-starting, self-reliant approach with a willingness to learn business logic and troubleshoot critical faults. Candidates should be able to understand business requirements independently, without relying on subject-matter experts for ongoing explanations.
- Ability to collaborate effectively in planning and refinement sessions.
What We Do
Experts in crafting digital products ⚡️
At Thaloz, our mission is to support clients at every stage of the digital product journey. With a team of over 100 experts and a global presence in 30 countries, we leverage top-tier Latin American talent to deliver exceptional software development solutions that drive success.
Our Services:
→ Product Lab: Comprehensive product development services to build and scale software solutions. From strategy and design to development, testing, and launch, every aspect is handled with expertise.
→ Talent Hub: Accelerate the team-building process by 50% with carefully vetted LATAM talent. Select the team members, and they will be seamlessly integrated into projects under the client's leadership.
→ Enterprise Pod: Optimize operations with streamlined complex integrations and flawless implementations of digital products for B2B companies, ensuring rapid and smooth deployments.
Ready to turn ideas into reality? Get in touch at www.thaloz.com/contact-us
Join our community! 👨‍💻
Instagram: @thalozteam
YouTube: @thalozteam
Clutch: @thaloz