About ShyftLabs
ShyftLabs is a fast-growing data product company founded in early 2020, working primarily with Fortune 500 clients. We design and deliver cutting-edge digital and data-driven solutions that help businesses accelerate growth, improve decision-making, and create measurable value through innovation.
Position Overview
We’re looking for an experienced Data Engineer who’s passionate about building scalable, high-performance data solutions. In this role, you’ll collaborate with cross-functional teams, including Data Engineers, Analysts, and Product Managers, to design, implement, and maintain robust data pipelines and systems that power our clients’ most critical business decisions.
Key Responsibilities
- Design, develop, and maintain data pipelines and ETL/ELT processes using Python.
- Build and optimize scalable, high-performance data applications.
- Collaborate with cross-functional teams to define requirements and deliver reliable solutions.
- Develop and manage real-time streaming pipelines using Pub/Sub and Apache Beam (see the sketch after this list).
- Participate in code reviews, architecture discussions, and continuous improvement initiatives.
- Monitor, troubleshoot, and optimize production data systems for reliability and performance.
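To make the streaming responsibility above more concrete, here is a minimal sketch of a Pub/Sub-to-Pub/Sub pipeline using Apache Beam’s Python SDK. The project, topic, and field names are illustrative placeholders, not details of any actual client stack.

```python
import json

import apache_beam as beam
from apache_beam import window
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    # Unbounded Pub/Sub sources require streaming mode.
    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            # "my-project" and the topic names are placeholders.
            | "Read" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/events")
            | "Decode" >> beam.Map(lambda raw: json.loads(raw.decode("utf-8")))
            | "KeyByUser" >> beam.Map(lambda event: (event["user_id"], 1))
            # Fixed 60-second windows bound the per-key aggregation on an unbounded stream.
            | "Window" >> beam.WindowInto(window.FixedWindows(60))
            | "SumPerUser" >> beam.CombinePerKey(sum)
            | "Encode" >> beam.Map(
                lambda kv: json.dumps({"user_id": kv[0], "events": kv[1]}).encode("utf-8")
            )
            | "Write" >> beam.io.WriteToPubSub(topic="projects/my-project/topics/event-counts")
        )


if __name__ == "__main__":
    run()
```

Windowing precedes the per-key combine because an unbounded source cannot be aggregated without first bounding it in time.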
Key Qualifications
- 5+ years of professional experience in software or data engineering using Python.
- Strong understanding of software engineering best practices (testing, version control, CI/CD).
- Proven experience building and optimizing ETL/ELT pipelines and data workflows (a minimal sketch follows this list).
- Proficiency in SQL and database concepts.
- Experience with data processing frameworks (e.g., Pandas).
- Understanding of software design patterns and scalable architecture principles.
- Experience with cloud platforms (GCP preferred).
- Knowledge of CI/CD pipelines and Infrastructure as Code tools.
- Familiarity with containerization (Docker, Kubernetes).
- Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience).
- Excellent problem-solving, analytical, and communication skills.
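To illustrate the ETL, SQL, and testing expectations above, here is a minimal sketch of a testable pandas transform loaded into SQLite; the table, column, and file names are hypothetical.

```python
import sqlite3

import pandas as pd


def transform(orders: pd.DataFrame) -> pd.DataFrame:
    """Drop rows missing an order ID and compute per-line totals."""
    cleaned = orders.dropna(subset=["order_id"]).copy()
    cleaned["total"] = cleaned["quantity"] * cleaned["unit_price"]
    return cleaned


def run(source_csv: str, db_path: str) -> None:
    raw = pd.read_csv(source_csv)   # extract
    cleaned = transform(raw)        # transform
    with sqlite3.connect(db_path) as conn:
        cleaned.to_sql("orders", conn, if_exists="replace", index=False)  # load


def test_transform_computes_totals():
    df = pd.DataFrame({"order_id": [1], "quantity": [2], "unit_price": [3.5]})
    assert transform(df)["total"].iloc[0] == 7.0
```

Keeping the transform a pure function of a DataFrame makes it straightforward to unit-test in CI, in line with the best-practices expectation above.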
Preferred Qualifications
- Experience with GCP services such as Cloud Run and Dataflow.
- Experience with stream processing technologies (e.g., Pub/Sub).
- Familiarity with workflow orchestration tools such as Airflow (see the sketch after this list).
- Exposure to data visualization tools or libraries.
- Knowledge of GitLab CI/CD and Terraform.
- Experience with Snowflake, BigQuery, or Databricks.
- GCP Data Engineer Certification is a plus.
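For orientation, here is a minimal sketch of a daily Airflow DAG using the TaskFlow API (Airflow 2.4 or later); the DAG id and task bodies are placeholders, not a real pipeline.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def daily_orders_pipeline():
    @task
    def extract() -> list:
        # Stand-in for reading from a real source system.
        return [{"order_id": 1, "total": 7.0}]

    @task
    def load(rows: list) -> None:
        # Stand-in for a warehouse write (e.g., BigQuery or Snowflake).
        print(f"loaded {len(rows)} rows")

    load(extract())


daily_orders_pipeline()
```

Calling `load(extract())` wires the dependency so Airflow schedules the extract before the load.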
We offer a competitive salary alongside a strong insurance package, and we invest in the growth of our employees through extensive learning and development resources.
Top Skills
Airflow
BigQuery
Databricks
Docker
GCP
GitLab
Kubernetes
Pandas
Python
Snowflake
SQL
Terraform
The Company
What We Do
We provide customized data and analytics consulting services, including automation and software development, to support sustainable and intuitive digital transformation.