Senior Snowflake Data Engineer

The Role

Job Title: Senior Data Engineer (Snowflake)
Location: Bridgewater, NJ (Hybrid)
Type: Long-Term Contract

Job Description:
We are seeking a highly skilled Senior Data Engineer with expertise in Snowflake and data pipeline development for a long-term contract opportunity based in Bridgewater, NJ. The ideal candidate will have a proven track record of building and maintaining scalable data pipelines, leveraging Snowflake as the primary data warehouse, and delivering high-quality data solutions that enable business intelligence, analytics, and data-driven decision-making.

Key Responsibilities:

  • Design, develop, and maintain robust, scalable, and high-performance data pipelines using Snowflake.
  • Collaborate with cross-functional teams to gather data requirements and deliver data solutions that meet business needs.
  • Implement best practices for data modeling and data pipeline optimization in Snowflake.
  • Build and manage data integration processes to ingest data from multiple sources into Snowflake (a minimal ingestion sketch follows this list).
  • Perform data quality checks and ensure that data pipelines are reliable, accurate, and consistent.
  • Develop automated processes for monitoring and troubleshooting data pipeline issues.
  • Optimize and tune the performance of Snowflake queries and data pipelines.
  • Work closely with data analysts, data scientists, and business stakeholders to support their data requirements.
  • Participate in code reviews, maintain high standards of code quality, and provide mentorship to junior data engineers.
  • Stay up-to-date with the latest trends and best practices in cloud data engineering and Snowflake development.
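
Purely as an illustration of the kind of pipeline work described above (this sketch is not part of the posting), the Python below loads staged files into Snowflake with the snowflake-connector-python library and runs a basic quality check. Every identifier in it (the ANALYTICS database, RAW schema, EVENTS table, S3_EVENTS_STAGE stage, LOAD_WH warehouse) is a placeholder invented for this example.

import os

import snowflake.connector

# Connect using credentials from the environment; all object names below are
# placeholders chosen for this sketch, not details taken from the posting.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()

    # Ingest any new files from an external stage into a raw table.
    # ON_ERROR = 'ABORT_STATEMENT' rejects the whole load on a bad file,
    # which keeps partial loads out of downstream models.
    cur.execute("""
        COPY INTO EVENTS
        FROM @S3_EVENTS_STAGE
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    for row in cur.fetchall():
        print(row)  # per-file load status reported by COPY INTO

    # Minimal data quality check: fail loudly if the table ends up empty.
    cur.execute("SELECT COUNT(*) FROM EVENTS")
    (row_count,) = cur.fetchone()
    assert row_count > 0, "EVENTS is empty after load"
finally:
    conn.close()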


Required Skills & Qualifications:

  • Strong hands-on experience with Snowflake, including data loading, schema design, performance tuning, and query optimization.
  • Proven experience with designing, developing, and optimizing ETL processes and data pipelines.
  • In-depth knowledge of SQL and experience with scripting languages such as Python, Java, or Scala.
  • Familiarity with cloud platforms such as AWS, Azure, or GCP, particularly in relation to data storage and processing services.
  • Experience working with data orchestration tools (e.g., Apache Airflow, dbt) for building, scheduling, and monitoring data pipelines (an orchestration sketch follows this list).
  • Solid understanding of data warehousing concepts, data modeling, and data transformation.
  • Ability to work with large-scale data sets and perform data analysis for optimization.
  • Strong problem-solving skills and the ability to troubleshoot complex data issues.
  • Excellent communication skills and the ability to collaborate effectively with both technical and non-technical stakeholders.
  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
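
As a rough illustration of the orchestration experience listed above (a sketch only, not a prescribed implementation), the Python below defines a three-step Airflow DAG: extract, load, validate. It assumes Airflow 2.4+ for the schedule argument; the DAG id, task ids, and empty task bodies are all hypothetical.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull data from a source system into a staging area.
    pass


def load_to_snowflake():
    # Placeholder: COPY the staged files into Snowflake, as in the earlier sketch.
    pass


def run_quality_checks():
    # Placeholder: row counts, null checks, freshness checks.
    pass


with DAG(
    dag_id="daily_events_pipeline",  # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",               # Airflow 2.4+ 'schedule' argument
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load_to_snowflake)
    check_task = PythonOperator(task_id="quality_checks", python_callable=run_quality_checks)

    # Linear dependency chain: extract, then load, then validate.
    extract_task >> load_task >> check_task

In practice the load step would reuse a COPY INTO pattern like the earlier sketch, and a tool such as dbt would typically own the transformation layer on top of the raw tables.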
