The Role
This is a remote position.
We are looking for a Data Engineer with strong experience in Databricks, PySpark, and modern Data Warehouse systems. The ideal candidate can design, build, and optimize scalable data pipelines and work closely with analytics, product, and engineering teams.
Requirements
- Strong hands-on skills in Databricks, PySpark, and SQL
- Experience with data warehouse concepts, ETL frameworks, batch/streaming pipelines
- Solid understanding of Delta Lake and Lakehouse architecture
- Experience with at least one cloud platform (Azure preferred)
- Experience with workflow orchestration tools (Airflow, ADF, Prefect, etc.)
The Company
What We Do
Vyusoft is a future-focused software solutions company that empowers businesses to navigate digital transformation with confidence, specializing in customized, intelligent solutions built on technologies such as AI, cloud computing, data analytics, and automation.