Data Engineer - Snowflake

Posted 10 Days Ago
Chicago, IL
Senior level
Big Data • Analytics • Business Intelligence • Big Data Analytics
The Role
The Data Engineer will architect, design, and implement advanced analytics capabilities, focusing on data integration, pipeline development, and reporting support. Key tasks include maintaining data synchronization between Oracle and Snowflake, optimizing data models, and using Airflow and Python for data processing.

Description

Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning, and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by market research firms including Forrester and Gartner. We are looking for top-notch talent as we continue to build the best analytics consulting team in the world.

The Data Engineer will be responsible for architecting, designing, and implementing advanced analytics capabilities. The right candidate will have broad skills in database design, be comfortable dealing with large and complex data sets, have experience building self-service dashboards, be comfortable using visualization tools, and be able to apply their skills to generate insights that help solve business challenges. We are looking for someone who can bring their vision to the table and implement positive change, taking the company's data analytics to the next level.

Key Responsibilities:

Data Integration:

Implement and maintain data synchronization between on-premises Oracle databases and Snowflake using Kafka and CDC tools.
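
A minimal sketch of what this synchronization loop might look like, assuming Debezium-style CDC change events flowing through Kafka; the topic, table, columns, and credentials below are placeholders rather than details from this role:

```python
# Hedged sketch: consume CDC events from Kafka and merge them into Snowflake.
# All names and credentials are hypothetical.
import json

from kafka import KafkaConsumer          # pip install kafka-python
import snowflake.connector               # pip install snowflake-connector-python

consumer = KafkaConsumer(
    "oracle.sales.orders",                # hypothetical CDC topic
    bootstrap_servers="localhost:9092",
    group_id="snowflake-sync",
    enable_auto_commit=False,
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)

MERGE_SQL = """
    MERGE INTO orders t
    USING (SELECT %(id)s AS id, %(status)s AS status) s
    ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET t.status = s.status
    WHEN NOT MATCHED THEN INSERT (id, status) VALUES (s.id, s.status)
"""

with conn.cursor() as cur:
    for msg in consumer:
        row = msg.value.get("after")      # Debezium carries the new row state here
        if row is None:                   # delete events are skipped in this sketch
            continue
        cur.execute(MERGE_SQL, {"id": row["id"], "status": row["status"]})
        consumer.commit()                 # commit the offset only after the merge lands
```

A production design would batch events and stage them before merging; this loop only illustrates the moving parts.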

Support Data Modeling:

Assist in developing and optimizing the data model for Snowflake, ensuring it supports our analytics and reporting requirements.
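
The posting does not name a specific target model, but as one illustration of this kind of optimization, the sketch below creates a simple fact table and adds a clustering key so Snowflake can prune micro-partitions on date-filtered reporting queries; every object name here is hypothetical:

```python
# Hedged sketch: DDL for a reporting-friendly Snowflake fact table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="MARTS",
)

with conn.cursor() as cur:
    # A narrow fact table keyed to conformed dimensions keeps BI queries simple.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS fact_orders (
            order_id    NUMBER,
            customer_id NUMBER,
            order_date  DATE,
            amount      NUMBER(12, 2)
        )
    """)
    # Clustering on the usual filter column improves partition pruning
    # for date-bounded dashboards.
    cur.execute("ALTER TABLE fact_orders CLUSTER BY (order_date)")
```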

Data Pipeline Development:

Design, build, and manage data pipelines for the ETL process, using Airflow for orchestration and Python for scripting, to transform raw data into a format suitable for our new Snowflake data model.
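
A minimal sketch of such a pipeline, assuming Airflow 2.4+; the DAG id, schedule, and transform callable are illustrative only:

```python
# Hedged sketch: a daily Airflow DAG with a single Python transform step.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def transform_raw_orders(**context):
    # Placeholder: reshape the day's raw rows for the Snowflake data model.
    print("transforming partition", context["ds"])


with DAG(
    dag_id="raw_to_snowflake_model",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="transform_raw_orders",
        python_callable=transform_raw_orders,
    )
```

Real pipelines would chain extract, load, and quality-check tasks, but the orchestration skeleton looks the same.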

Reporting Support:

Collaborate with the data architect to ensure the data within Snowflake is structured in a way that supports efficient and insightful reporting.

Technical Documentation:

Create and maintain comprehensive documentation of data pipelines, ETL processes, and data models to ensure best practices are followed and knowledge is shared within the team.

Tools and Skillsets:

Data Engineering: Proven track record of developing and maintaining data pipelines and data integration projects.

Databases: Strong experience with Oracle, Snowflake, and Databricks.

Data Integration Tools: Proficiency in using Kafka and CDC tools for data ingestion and synchronization.

Orchestration Tools: Expertise in Airflow for managing data pipeline workflows.

Programming: Advanced proficiency in Python and SQL for data processing tasks.

Data Modeling: Understanding of data modeling principles and experience with data warehousing solutions.

Cloud Platforms: Knowledge of cloud infrastructure and services, preferably Azure, as it relates to Snowflake and Databricks integration.

Collaboration Tools: Experience with version control systems (like Git) and collaboration platforms.

CI/CD Implementation: Experience using CI/CD tools to automate the deployment of data pipelines and infrastructure changes, ensuring high-quality data processing with minimal manual intervention (a sketch of a CI-style pipeline check appears after this list).

Communication: Excellent communication and teamwork skills, with a detail-oriented mindset. Strong analytical skills, with the ability to work independently and solve complex problems.
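
As referenced above, here is a sketch of the kind of automated check a CI job might run on every commit before deploying pipeline changes; it assumes an Airflow project with DAG files under a dags/ folder, and the ownership rule is only an example policy:

```python
# Hedged sketch: pytest checks a CI pipeline could run to gate deployments.
from airflow.models import DagBag


def test_dags_import_cleanly():
    bag = DagBag(dag_folder="dags/", include_examples=False)
    # Syntax errors and missing imports surface in import_errors.
    assert bag.import_errors == {}, f"DAG import failures: {bag.import_errors}"


def test_every_dag_has_an_owner():
    bag = DagBag(dag_folder="dags/", include_examples=False)
    for dag_id, dag in bag.dags.items():
        assert dag.default_args.get("owner"), f"{dag_id} is missing an owner"
```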

Requirements

  • 8+ years of overall industry experience specifically in data engineering
  • 5+ years of experience building and deploying large-scale data processing pipelines in a production environment.
  • Strong experience in Python, SQL, and PySpark (see the PySpark sketch after this list)
  • Experience creating and optimizing complex data processing and transformation pipelines using Python
  • Experience with the Snowflake cloud data warehouse and the dbt tool
  • Advanced SQL knowledge and experience with relational databases, including query authoring and working familiarity with a variety of database systems
  • Understanding of data warehouse (DWH) systems and of migration from DWH to data lakes/Snowflake
  • Understanding of ELT and ETL patterns and when to use each, and of data models and how to transform data into them
  • Strong analytical skills related to working with unstructured datasets
  • Experience building processes that support data transformation, data structures, metadata, dependency management, and workload management
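
As referenced in the list above, a minimal PySpark sketch of a transformation step; the input path, columns, and aggregation are illustrative assumptions, not requirements of the role:

```python
# Hedged sketch: clean raw order events and aggregate daily revenue.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_transform").getOrCreate()

raw = spark.read.json("s3://raw-bucket/orders/")          # placeholder path

daily_revenue = (
    raw.filter(F.col("status") == "COMPLETE")             # keep finished orders only
       .withColumn("order_date", F.to_date("created_at"))
       .groupBy("order_date")
       .agg(
           F.sum("amount").alias("revenue"),
           F.countDistinct("customer_id").alias("customers"),
       )
)

# Parquet is a common hand-off format for a Snowflake COPY INTO load.
daily_revenue.write.mode("overwrite").parquet("s3://curated-bucket/daily_revenue/")
```
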
Benefits

This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.

Top Skills

Python
SQL
The Company
Bengaluru, Karnataka
5,000 Employees
On-site Workplace
Year Founded: 2011

What We Do

Tiger Analytics is a global leader in AI and Analytics, helping Fortune 1000 companies solve their toughest challenges. We offer fullstack AI and analytics services & solutions to empower businesses to achieve real outcomes and value at scale. We are on a mission to push the boundaries of what AI and analytics can do to help enterprises navigate uncertainty and move forward decisively. Our purpose is to provide certainty to shape a better tomorrow.

Our team of 4,000+ technologists and consultants is based in the US, Canada, the UK, India, Singapore, and Australia, working closely with clients across CPG, Retail, Insurance, BFS, Manufacturing, Life Sciences, and Healthcare.

We are Great Place to Work-Certified™ and have been recognized by analyst firms such as Forrester, Gartner, Everest, ISG, HFS, and others. We are ranked among the ‘Best’ and ‘Fastest Growing’ analytics firms by Inc., Financial Times, Economic Times, and Analytics India Magazine.

In India, our offices are located in Chennai, Hyderabad and Bangalore.
