Discover. A brighter future.
With us, you’ll do meaningful work from Day 1. Our collaborative culture is built on three core behaviors: We Play to Win, We Get Better Every Day & We Succeed Together. And we mean it — we want you to grow and make a difference at one of the world's leading digital banking and payments companies. We value what makes you unique so that you have an opportunity to shine.
Come build your future, while being the reason millions of people find a brighter financial future with Discover.

Job Description
As a Senior Data Engineer, you will provide technical (design & development) leadership to the Payments Data Ops Team in development of Extract/Transform/Load (ETL) applications that will interface with all key Discover applications.
The position requires excellent communication skills to understand the business vision and translate it into technical artifacts. A strong technical analysis and design background is also a must, to ensure technical deliverables provide a flexible, architecturally sound infrastructure that can be reused in future analytical and operational data stores in AWS.
- Develop data driven solutions utilizing current and next generation technologies to meet evolving business needs
- Build and maintain highly available data pipelines that facilitate deeper analysis and reporting by the Data and Analytics department
- Design and develop scalable data warehouse solutions which turn data into actionable analytics for end users
- Implement continuous improvement frameworks to reduce overall time to market and improve customer satisfaction
- Build ETL applications using automated, metadata-driven data pipelines
- Keep up to date with the industry best practices along with continuous skill development
- Develop application systems that comply with the standard system development methodology and concepts for design, programming, backup, and recovery to deliver solutions with superior performance and integrity
- Utilize multiple development languages and tools, such as Python and Spark, to build prototypes and evaluate results for effectiveness and feasibility
- Develop real-time data ingestion and stream-analytics solutions leveraging technologies such as Kafka, Apache Spark, Python, and AWS-based services
- Work heavily within the cloud ecosystem and migrate data from Teradata to an AWS-based platform
- Support deployed data applications and analytical models; act as a trusted advisor to Data Scientists and other data consumers by identifying data problems and guiding issue resolution with partner Data Engineers and source data providers
- Provide subject matter expertise in the analysis and preparation of specifications and plans for the development of data processes
- Optimize the performance of ETL processes and scripts by working with other technical staff as needed
- Ensure proper data governance policies are followed by implementing or validating Data Lineage, Quality checks, classification, etc.
- Design and develop data ingestion frameworks, real-time processing solutions, and data processing/transformation frameworks leveraging open-source tools
- Deploy application code and analytical models using CI/CD tools and techniques, and support those deployed applications and models
- Provide senior-level technical consulting to peer data engineers during design and development of highly complex and critical data projects
At a minimum, here’s what we need from you:
- Bachelor's degree in Information Technology or a related field
- 2+ years of work experience in Data Platform Administration/Engineering or a related field
If we had our say, we’d also look for:
- 4+ years of work experience in Data Platform Administration/Engineering or a related field
- Hands-on experience with Amazon Web Services (AWS) based solutions such as Lambda, DynamoDB, Snowflake, and S3
- Knowledge of data warehouse technology (Unix/Teradata/Ab Initio/Python/Spark/Snowflake/NoSQL)
- Experience migrating ETL processes (not just data) from relational data warehouse databases to AWS-based solutions
- Experience building and utilizing tools and frameworks within the Big Data ecosystem, including Kafka, Spark, and NoSQL
- Deep knowledge of and strong proficiency in SQL and relational databases
- Willingness to continuously learn & share learnings with others
- Ability to work in a fast-paced, rapidly changing environment
- Very strong verbal & written communication skills
- Experience within the Financial industry
What are you waiting for? Apply today!
The same way we treat our employees is how we treat all applicants – with respect. Discover Financial Services is an equal opportunity employer (EEO is the law). We thrive on diversity & inclusion. You will be treated fairly throughout our recruiting process and without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status in consideration for a career at Discover.