Our Data Engineering team, part of our Data Services Organization, builds and maintains the infrastructure essential to delivering high-volume, business-critical data that enables data-driven decisions across the organization.
We are focused on expanding our curated and modeled datasets, which unify sources of truth across our multiple products and domains. You'll be empowered to guide the data engineering team on best practices using modern data tools like Snowflake, dbt, and Kafka.
This is an ideal opportunity for someone who wants to contribute to our growing data platform and warehouse by writing and implementing production data pipelines. You will get your hands dirty with lots of data and make a significant impact by delivering datasets to stakeholders.
Who you are:
- 2+ years of experience designing and delivering data warehouses and marts to support business analytics
- Experience with dimensional data modeling; we use dbt for these pipelines
- Experience with data workflow diagrams (conceptual, logical, and physical)
- 2+ years of experience in SQL development on RDBMS (Snowflake and Postgres preferred) and performance tuning
- 2+ years of experience designing and developing data curation and integration processes
- Experience with source control and deployment workflows for ETL (Fivetran, GitHub Actions, GitLab, Airflow, etc.)
- Experience working with AWS services such as DynamoDB, Glue, Lambda, Step Functions, S3, CloudFormation
- Hands-on experience with scripting languages (Python, Bash, etc.)
What you'll own:
- Contribution to Data Warehouse models and curation
- Support and evolution of the data environment to deliver high-quality data, speed, and availability
- Curation of source-system data to deliver trusted data sets
- Involvement in data cataloging and data management efforts
- Production ETL performance tuning and environment-level resource consumption and management
- Migration of POC pipelines to production data processes
Experience you'll need:
- Capability to manipulate and analyze complex, high-volume data from a variety of sources
- Experience designing and building end-to-end data models and pipelines as well as alerting
- Experience in data modeling for batch processing and streaming data feeds; structured and unstructured data
- Experience with streaming and real-time data processing using a technology such as Spark, Kafka, ksqlDB, or Databricks is a plus
Working at Pluralsight
Founded in 2004 and trusted by Fortune 500 companies, Pluralsight is the technology skills platform organizations and individuals in 150+ countries count on to create progress for the world.
Our platform helps technologists master their craft and take control of their careers. We empower businesses everywhere to build adaptable teams, speed up release cycles and become scalable, reliable and secure. We come to work every day knowing we're helping our customers build the skills that power innovation.
And we don't let fear, egos or drama distract us from our mission. Our mission to democratize technology skills is what drives us and our values are at the helm of how we work together. It's our commitment to practicing them day in, day out that enables our performance. We're adults, and we treat each other that way. We have the autonomy to do our jobs, transparency to eliminate office politics and trust each other to do the right thing. We thrive in an environment with creativity around every corner, challenges that keep us on our toes, and peers who inspire us to be the best we can be. We bring different viewpoints, backgrounds and experiences, and united by our mission, we are one.
Bring yourself. Pluralsight is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age or veteran status.