Data Engineer

Posted 6 Days Ago
Reno, NV
In-Office
Mid level
3PL: Third Party Logistics
The Role
The Data Engineer will perform ETL/ELT operations, develop data solutions, maintain data structures in a data warehouse, and resolve data issues while collaborating with teams on projects.

About ITS Logistics

Are you ready to build a successful career in a trillion-dollar global industry? ITS Logistics is one of the fastest-growing, most exciting logistics companies in the United States, providing creative supply chain solutions to Fortune 500 companies. With unmatched agility, sense of urgency, and work ethic, ITS commits to a relentless pursuit of excellence and leading from the front to keep the world moving, every day.

What makes us different is our focus on people and culture. We are offering more than just a job; we're offering career opportunities that you can build your life around. We need the best talent to continue to invest, grow, and win every day, and we are looking for people with the right mindset: honesty, adaptability, and commitment.

When you join our herd, we invest in your personal and professional growth with paid training, providing resources and support to learn everything you need to know about the supply chain industry. We empower you to seize every opportunity and thrive in a fast-paced, ever-changing environment.

Do you have what it takes? Find out today at www.its4logistics.com.


The Position:

Design and build scalable data pipelines and data products within a modern cloud-based data and AI platform built on Azure and Databricks.

The Data Engineer will focus on building and operating reliable data pipelines, implementing Lakehouse architecture and Medallion data modeling patterns (Bronze / Silver / Gold), and delivering trusted data for analytics, operational intelligence, and AI/ML initiatives across the organization.

This role is responsible for ingesting and transforming large-scale datasets using Spark, Python, and SQL, integrating enterprise systems and APIs, and ensuring high data quality, reliability, and performance across the platform.

The position works closely with analytics engineers, data scientists, and application teams to deliver high-quality data products that power analytics and AI-driven decision-making. This role focuses on data platform engineering and pipeline development, while downstream semantic models and reporting are implemented by the analytics engineering team.
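The Medallion pattern mentioned above can be illustrated with a minimal sketch. This uses plain Python dicts so it stays self-contained; a production pipeline on Databricks would use PySpark DataFrames and Delta Lake tables, and all field and function names here are illustrative, not from this posting.

```python
# Conceptual Bronze -> Silver -> Gold flow: raw records are cleaned and
# typed into the Silver layer, then aggregated into an analytics-ready
# Gold summary. Field names (shipment_id, weight_lbs, lane) are invented.

def to_silver(bronze_rows):
    """Clean raw Bronze records: drop rows missing the key, normalize types."""
    silver = []
    for row in bronze_rows:
        if row.get("shipment_id") is None:
            continue  # a real pipeline would quarantine, not silently drop
        silver.append({
            "shipment_id": str(row["shipment_id"]),
            "weight_lbs": float(row.get("weight_lbs") or 0.0),
            "lane": (row.get("lane") or "UNKNOWN").upper(),
        })
    return silver

def to_gold(silver_rows):
    """Aggregate Silver records into a Gold summary: total weight per lane."""
    totals = {}
    for row in silver_rows:
        totals[row["lane"]] = totals.get(row["lane"], 0.0) + row["weight_lbs"]
    return totals

bronze = [
    {"shipment_id": 101, "weight_lbs": "250", "lane": "reno-la"},
    {"shipment_id": None, "weight_lbs": "90", "lane": "reno-la"},  # bad record
    {"shipment_id": 102, "weight_lbs": "400", "lane": "reno-slc"},
]
gold = to_gold(to_silver(bronze))
```

The point of the layering is that each stage has one job: Bronze preserves the raw feed, Silver enforces types and keys, and Gold serves curated aggregates to analytics consumers.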

Principal Accountabilities:

  • Translate business and technical requirements into scalable data pipeline and data platform solutions.
  • Design, build, and maintain data ingestion and transformation pipelines using Azure and Databricks.
  • Implement Lakehouse architecture using Medallion data modeling patterns (Bronze, Silver, Gold).
  • Develop scalable data transformations using Python, PySpark, Spark SQL, and Delta Lake.
  • Build reliable ingestion pipelines integrating APIs, SaaS platforms, operational systems, and external data sources.
  • Implement data quality validation, monitoring, and automated testing within data pipelines.
  • Develop and maintain curated data products and datasets used by analytics, operational applications, and AI/ML workloads.
  • Optimize and maintain distributed data processing workloads to ensure high performance and scalability.
  • Implement data engineering best practices, including version control, CI/CD pipelines, and automated deployment processes.
  • Leverage AI-assisted development tools and automation to accelerate pipeline development and improve engineering productivity.
  • Collaborate with analytics engineers, data scientists, and software engineers to support advanced analytics and machine learning initiatives.
  • Diagnose and resolve complex data pipeline and data platform issues.
  • Develop proof-of-concept projects to evaluate and introduce new technologies and improvements to the data platform.
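The data quality validation accountability above is often implemented with Delta Live Tables expectations or a framework such as Great Expectations; this stdlib-only sketch just shows the shape of in-pipeline checks. The rule names and record fields are hypothetical.

```python
# Minimal in-pipeline data quality gate: each record is tested against
# named rules; passing rows continue downstream, failures are counted
# per rule so the pipeline can alert or quarantine.

CHECKS = {
    "shipment_id_present": lambda r: r.get("shipment_id") is not None,
    "weight_positive": lambda r: isinstance(r.get("weight_lbs"), (int, float))
                                 and r["weight_lbs"] > 0,
}

def validate(records):
    """Split records into passing rows and per-rule failure counts."""
    passed, failures = [], {name: 0 for name in CHECKS}
    for record in records:
        failed_rules = [name for name, check in CHECKS.items()
                        if not check(record)]
        if failed_rules:
            for name in failed_rules:
                failures[name] += 1
        else:
            passed.append(record)
    return passed, failures

good, fail_counts = validate([
    {"shipment_id": 1, "weight_lbs": 120.0},
    {"shipment_id": None, "weight_lbs": 50.0},  # fails presence check
    {"shipment_id": 2, "weight_lbs": -5},       # fails positivity check
])
```

Keeping the rules declarative, as a named mapping, is what makes monitoring possible: failure counts per rule become metrics a pipeline can emit on every run.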

Position Requirements:

  • Strong understanding of modern data engineering architecture and cloud data platforms
  • Experience designing and implementing data pipelines using Databricks and Apache Spark
  • Strong experience with Python and SQL for data engineering
  • Experience working with distributed data processing frameworks (Spark / PySpark)
  • Understanding of Lakehouse architecture and Medallion data modeling patterns
  • Experience implementing data quality validation, testing, and monitoring
  • Experience with data pipeline orchestration tools such as Azure Data Factory, Airflow, or similar
  • Experience integrating data via REST APIs, SaaS platforms, and enterprise systems
  • Experience working with Delta Lake or similar transactional data lake technologies
  • Experience using Git-based source control and CI/CD pipelines
  • Understanding of dimensional modeling and analytical data structures
  • Familiarity with Power BI or other BI platforms (supporting downstream analytics teams)
  • Strong analytical and problem-solving skills
  • BS/BA in Computer Science, Engineering, Information Systems, or 3–5 years of relevant data engineering experience
  • Excellent communication and collaboration skills

Preferred:

Experience with Azure data platform technologies, including:

  • Azure Databricks
  • Azure Data Factory
  • Azure Data Lake Storage

Experience working with open-source data engineering tools, such as:

  • Apache Spark
  • Delta Lake
  • dbt
  • Apache Airflow

Experience implementing data observability and data quality frameworks

Experience building pipelines supporting machine learning or AI workflows

Experience working with large-scale data lake or lakehouse architectures

Experience with event-driven or streaming data architectures

Experience using AI-assisted development tools or engineering copilots

JOIN OUR TEAM TODAY 



Top Skills

Azure Data Factory
Azure Logic Apps
Azure Synapse
Bash
ELT
ETL
Microsoft BI Stack
Power BI
PowerShell
Python
SQL
SSAS
SSIS
Tableau
Talend

The Company
HQ: Reno, NV
590 Employees
Year Founded: 1999

What We Do

ITS Logistics is a premier third-party logistics company, ranked #60 in North America, that provides creative supply chain solutions with an asset-lite transportation division ranked #24 in North America, a top-tier asset-based dedicated fleet, a top-20 intermodal and drayage division, and innovative omnichannel distribution and fulfillment services. With the highest level of service, unmatched industry experience and work ethic, and a laser focus on innovation and technology, our purpose is to improve the quality of life by delivering excellence in everything we do.

