The Role
Design and maintain scalable data pipelines, implement Lakehouse architecture, develop ETL processes, and ensure data integrity and quality.
Summary Generated by Built In
Company Description
TLVTech is a dynamic technology firm dedicated to building exceptional products using modern technologies for the world's most admired companies. We pride ourselves on innovation, collaboration, and delivering excellence in every project.
Job Description
Responsibilities:
- Design and maintain scalable data pipelines with Spark Structured Streaming.
- Implement Lakehouse architecture with Apache Iceberg.
- Develop ETL processes for data transformation.
- Ensure data integrity, quality, and governance.
- Collaborate with stakeholders and IT teams for seamless solution integration.
- Optimize data processing workflows and performance.
Requirements:
- 5+ years in data engineering.
- Expertise in Apache Spark and Spark Structured Streaming.
- Hands-on experience with Apache Iceberg or similar open table formats.
- Proficiency in Scala, Java, or Python.
- Knowledge of big data technologies (Hadoop, Hive, Presto).
- Experience with cloud platforms (AWS, Azure, GCP) and SQL.
- Strong problem-solving, communication, and collaboration skills.
Location: Ra'anana
Start Date: Immediate
Top Skills
Apache Iceberg
AWS
Azure
ETL
GCP
Hadoop
Hive
Java
Presto
Python
Scala
Spark
SQL
The Company
What We Do
TLVTech provides professional, customer-centric software development solutions for startups, large companies, and organizations worldwide.