Principal Data Engineer

Sorry, this job was removed at 9:13 p.m. (CST) on Tuesday, March 1, 2022

Note to applicants: Remote work within the US is OK, except in Colorado.


Who we are

Albert is a new type of financial service that uses powerful technology to automate your finances, with a team of human experts to guide you. Albert saves and invests automatically for you, helps you avoid overdrafts, finds savings you’re missing, identifies bills you’re overpaying, and much more. Text Albert a financial question, and our geniuses won’t just offer guidance — they’ll help you take action.


We're an LA-based startup with a proven business model, backed by top-tier institutional investors and with over 8 million users who have trusted Albert to help them achieve their financial goals. We're on a mission to democratize money management through our simple, beautifully designed product, and we're looking for thoughtful, talented people to join us on our journey.


About the role

Access to trustworthy data, metrics, and analytics is critical to every business process at Albert, from backend and mobile development to growth and business strategy. We are looking for a Principal Data Engineer to build and scale low-latency data pipelines, big data storage solutions, and analytics processing tools so that key insights can be quickly extracted from massive datasets.

Things you're good at

  • Ownership: Diving in and taking ownership of projects, then driving them to completion in a methodical, organized, independent manner, all while communicating plans and progress effectively.
  • Shipping: Delivering great products that you're proud of on a regular basis.
  • Architecture: Getting it done is important. Getting it done in a way that will scale is equally important.
  • Collaboration: We bring the best out of each other. We're looking for people who will bring the best out of all of us.
  • Communication: Communicating technical topics concisely and practically, both verbally and in writing, in order to get buy-in from team members and keep projects moving.
  • Organization: We value work that is well-organized and well-documented, whether it’s code or supporting documentation.

Responsibilities

  • Build scalable ETL data pipelines, ingesting terabytes of data from internal and external sources. 
  • Design and maintain data storage solutions such as data lakes and data warehouses that allow for large-scale analytics processing.
  • Build self-service analytics solutions for non-technical consumers such as: charting dashboards, scheduled transformations, and data scientist notebooks.
  • Work closely with product engineering teams to ensure consistent data modeling across services.

Requirements

  • Bachelor's degree
  • 5+ years of experience building scalable data pipelines.
  • Highly proficient in Python or Java.
  • Experience with data warehouses such as Redshift, Snowflake, or BigQuery.
  • Experience with data streaming solutions such as Kafka or Kinesis.
  • Familiar with distributed data processing technologies such as Presto, Spark, and Hadoop.
  • Familiar with cloud-hosted services such as AWS and GCP.

Benefits

  • Competitive salary and meaningful equity
  • Health, vision and dental insurance
  • Meals provided
  • Monthly wellness stipend
  • 401k match



Job Applicants California Privacy Notice


This California Privacy Notice applies to personal information of California job applicants that Albert collects and processes as it relates to the submission of a job application.
