Principal Data Engineer

Westlake, TX
In-Office
Senior level
Fintech
The Role
The Principal Data Engineer creates data visualizations, performs data analysis using big data tools, manages data governance projects, develops data solutions, and collaborates with business stakeholders to address data challenges and ensure high-quality data pipelines, using AWS and other big data technologies.
Job Description:

Position Description:

Creates data visualizations using object-oriented and object function scripting languages (Python, Java, C++, and Scala). Performs data analysis using big data tools (Hadoop, Spark, Kafka, and Kubernetes). Tracks the data lifecycle with data management tools (Collibra, Alation, and BigID). Works in various data councils with data owners, data stewards, and data custodians. Manages projects with new and emerging technologies and products in the early strategy and design stage. Performs big data analytics leveraging Amazon Web Services (AWS) Cloud services (EC2, EMR, Snowflake, and Elasticsearch).

Primary Responsibilities:

  • Applies data management practices including data governance, data catalog, data privacy, data quality, and data lineage to ensure data is secure, private, accurate, available, and usable.

  • Works with data governance groups across the enterprise to align and scale effective practices.

  • Partners with key stakeholders to understand key business questions and delivers analytic self-service solutions.

  • Simplifies and effectively communicates data governance challenges, solution options, and recommendations to business partners and technology leadership.

  • Collaborates with business stakeholders, chapter leads, squad leads, tech leads, and architects to drive Fidelity’s data strategy forward.

  • Applies process and technology to deliver innovative solutions to meet business challenges.

  • Understands detailed requirements and delivers solutions that meet or exceed customer expectations.

  • Confers with data processing or project managers to obtain information on limitations or capabilities for data processing projects.

  • Develops or directs software system testing, validation procedures, programming, or documentation.

  • Maintains databases within an application area, working individually or coordinating database development as part of a team.

  • Analyzes information to determine, recommend, and plan computer software specifications on major projects and proposes modifications and improvements based on user need.

Education and Experience:

Bachelor's degree in Computer Science, Engineering, Information Technology, Information Systems, or a closely related field (or foreign education equivalent) and five (5) years of experience as a Principal Data Engineer (or closely related occupation) designing, developing, and maintaining large-scale data infrastructure, emerging technologies, data pipelines, and platforms, using AWS, Snowflake, Jenkins, Control-M, and related technologies (SQL and Python).

Or, alternatively, Master's degree in Computer Science, Engineering, Information Technology, Information Systems, or a closely related field (or foreign education equivalent) and three (3) years of experience as a Principal Data Engineer (or closely related occupation) designing, developing, and maintaining large-scale data infrastructure, emerging technologies, data pipelines, and platforms, using AWS, Snowflake, Jenkins, Control-M, and related technologies (SQL and Python).

Skills and Knowledge:

Candidate must also possess:

  • Demonstrated Expertise (“DE”) performing enterprise-scale data modeling, ingestion, cleansing, transformation, and integration, using Extract, Transform, Load / Extract, Load, Transform (ETL/ELT) frameworks (PySpark, SnapLogic, dbt, Kafka Streaming, and Snowflake); and ensuring high performance and governance through Collibra and Alation.

  • DE deploying, orchestrating, and optimizing solutions for analytics and financial operations projects, using big data technologies, distributed systems, delta lake, and data warehousing solutions (Databricks, Snowflake, or Apache Spark), using containerized environments with Docker for scalable, cost-efficient data solutions.

  • DE architecting, automating, and monitoring data workflows (while optimizing Cloud platforms (AWS or Azure) for efficiency, scalability, and cost savings), using scalable, reusable data engineering solutions (Python and SQL) to enable performance-optimized queries and CI/CD pipelines (GitHub Actions and Jenkins) for deployment automation and scheduling through Control-M.

  • DE designing, developing, and testing high-volume, fault-tolerant, real-time data processing, analytics, and reporting pipelines to ensure high availability for enterprise applications; and incorporating DevOps practices, Agile methodology, security, and observability frameworks (Prometheus, Datadog, Grafana, and OTEL) to ensure security, reliability, and proactive monitoring.


Category: Information Technology

Most roles at Fidelity are Hybrid, requiring associates to work onsite every other week (all business days, M-F) in a Fidelity office. This does not apply to Remote or fully Onsite roles.

Please be advised that Fidelity’s business is governed by the provisions of the Securities Exchange Act of 1934, the Investment Advisers Act of 1940, the Investment Company Act of 1940, ERISA, numerous state laws governing securities, investment and retirement-related financial activities and the rules and regulations of numerous self-regulatory organizations, including FINRA, among others. Those laws and regulations may restrict Fidelity from hiring and/or associating with individuals with certain Criminal Histories.

Top Skills

Alation
AWS
BigID
C++
Collibra
Control-M
Databricks
Datadog
dbt
Docker
EC2
Elasticsearch
EMR
GitHub Actions
Grafana
Hadoop
Java
Jenkins
Kafka
Kubernetes
OTEL
Prometheus
PySpark
Python
Scala
SnapLogic
Snowflake
Spark
SQL

The Company
HQ: Boston, MA
58,848 Employees
Year Founded: 1946

What We Do

At Fidelity, our goal is to make financial expertise broadly accessible and effective in helping people live the lives they want. We do this by focusing on a diverse set of customers: from 23 million people investing their life savings, to 20,000 businesses managing their employee benefits, to 10,000 advisors needing innovative technology to invest their clients’ money. We offer investment management, retirement planning, portfolio guidance, brokerage, and many other financial products.

Privately held for nearly 70 years, we’ve always believed that by providing investors with access to information and expertise, we can help them achieve better results. That’s been our approach: innovative yet personal, compassionate yet responsible, grounded by a tireless work ethic. It is the heart of the Fidelity way.
