The Role
The Senior Data Engineer designs and maintains data ingestion frameworks, optimizes ETL pipelines, and ensures data integrity across multiple sources.
Role Overview
We are looking for a Senior Data Engineer to play a key role in designing, building, and maintaining data ingestion frameworks and scalable data pipelines. The ideal candidate has strong expertise in platform architecture, data modeling, and cloud-based data solutions to support both real-time and batch processing needs.
What you'll be doing:
- Design, develop, and optimize DBT models to support scalable data transformations.
- Architect and implement modern ELT pipelines using DBT and orchestration tools such as Apache Airflow and Prefect (see the sketch after this list).
- Lead performance tuning and query optimization for DBT models running on Snowflake, Redshift, or Databricks.
- Integrate DBT workflows and pipelines with AWS services (S3, Lambda, Step Functions, RDS, Glue) and event-driven architectures.
- Implement robust data ingestion processes from multiple sources, including manufacturing execution systems (MES), manufacturing stations, and web applications.
- Manage and monitor orchestration tools (Airflow, Prefect) for automated DBT model execution.
- Implement CI/CD best practices for DBT, ensuring version control, automated testing, and deployment workflows.
- Troubleshoot data pipeline issues and provide solutions for optimizing cost and performance.
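A common way to wire these responsibilities together is an Airflow DAG that shells out to the DBT CLI to build models and then run their tests. The sketch below is a minimal illustration under that assumption, not this team's actual pipeline; the DAG id, schedule, and project path are hypothetical placeholders.

```python
# Minimal sketch: an Airflow DAG that runs DBT models, then DBT tests.
# The project directory, DAG id, and schedule are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_PROJECT_DIR = "/opt/dbt/analytics"  # hypothetical DBT project location

with DAG(
    dag_id="dbt_daily_run",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Build the DBT models defined in the project.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_PROJECT_DIR} && dbt run",
    )
    # Run the tests declared alongside those models.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_PROJECT_DIR} && dbt test",
    )
    # Only test after the build succeeds.
    dbt_run >> dbt_test
```

In practice the same pattern extends to Prefect flows or to CI/CD jobs that run `dbt build` against a staging target before deployment.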
What you'll have:
- 5+ years of hands-on experience with DBT, including model design, testing, and performance tuning.
- 5+ years of strong SQL expertise, including analytical query optimization and database performance tuning.
- 5+ years of programming experience, especially building custom DBT macros, scripts, and APIs, and working with AWS services using boto3 (see the sketch after this list).
- 3+ years of experience with orchestration tools such as Apache Airflow or Prefect for scheduling DBT jobs.
- Hands-on experience with modern cloud data platforms such as Snowflake, Redshift, Databricks, or BigQuery.
- Experience with AWS data services (S3, Lambda, Step Functions, RDS, SQS, CloudWatch).
- Familiarity with serverless architectures and infrastructure as code (CloudFormation/Terraform).
- Ability to communicate timelines effectively and deliver the MVPs committed to for each sprint.
- Strong analytical and problem-solving skills, with the ability to work across cross-functional teams.
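For the boto3 requirement, a typical task is enumerating files that have landed in S3 before an ingestion or DBT step picks them up. The snippet below is a minimal sketch under that assumption; the bucket name and prefix are hypothetical, and a real MES or station-data ingestion job would add schema handling and error checks.

```python
# Minimal sketch: use boto3 to list newly landed files under an S3 prefix.
# Bucket and prefix are hypothetical placeholders, not values from this posting.
import boto3


def list_landed_files(bucket: str, prefix: str) -> list[str]:
    """Return the S3 keys under a prefix, e.g. files dropped by an MES export."""
    s3 = boto3.client("s3")
    keys = []
    # Paginate so buckets with more than 1,000 objects are handled correctly.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            keys.append(obj["Key"])
    return keys


if __name__ == "__main__":
    for key in list_landed_files("example-manufacturing-data", "mes/raw/"):
        print(key)
```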
Nice to haves:
- Experience in hardware manufacturing data processing.
- Contributions to open-source data engineering tools.
- Knowledge of Tableau or other BI tools for data visualization.
- Understanding of front-end development (React, JavaScript, or similar) to collaborate effectively with UI teams or build internal tools for data visualization.
Top Skills
Apache Airflow
AWS
Boto3
CloudFormation
Databricks
DBT
Glue
Lambda
Prefect
RDS
Redshift
S3
Snowflake
SQL
Step Functions
Terraform
The Company
What We Do
Aeva develops a new sensing and perception paradigm for autonomous machines. Aeva's technology brings together the best of vision, depth, and motion sensors into a single product with superior performance. Compared to today's best in class, it provides greater range and resolution across weather conditions and a new dimension that precisely measures the velocity of every pixel in the scene.