Discover. A brighter future.
With us, you’ll do meaningful work from Day 1. Our collaborative culture is built on three core behaviors: We Play to Win, We Get Better Every Day & We Succeed Together. And we mean it — we want you to grow and make a difference at one of the world's leading digital banking and payments companies. We value what makes you unique so that you have an opportunity to shine.
Come build your future, while being the reason millions of people find a brighter financial future with Discover.
Job Description
At Discover, be part of a culture where diversity, teamwork and collaboration reign. Join a company that is as focused on its employees as it is on its customers, and is consistently recognized for both. We're all about people, and our employees are why Discover is a great place to work. Be the reason we help millions of consumers build a brighter financial future, and achieve yours along the way with a rewarding career.
This is a hands-on engineering position on a team enabling ModelOps capabilities for the Machine Learning Platform. The team is taking an innovative approach to defining fully automated, advanced ModelOps capabilities as a service for the company. This role will be instrumental in developing software delivery pipeline capabilities and APIs, as well as the tools and infrastructure that enable self-service adoption. Candidates will be expected to bring their expertise and creativity to help solve key technical and non-technical challenges in driving our vision, engineering our solutions, and unlocking value across the organization.
- Build, evolve and scale the data science technology capabilities to enable our machine learning platform
- Lead ModelOps strategy implementation, including batch and real-time model delivery, model management, DevOps practices and platform engineering
- Partner with management, architects and product owners to understand requirements, refine features and deliver technical capabilities
- Create automated pipelines and workflows to implement algorithms, data features and models into production, at scale
- Engage with Data Science, Technology and Product Owners to understand challenges around deploying, maintaining and monitoring data science models in production
- Ensure designs and solutions are highly available, secure, and continue to drive automation of data science capabilities
- Engage internal development teams on tools, techniques, capabilities as well as gather feedback on data science capability evolution from our internal communities
- Build and enable capabilities in production, bringing the latest ML and AI modeling techniques and technology frameworks such as Spark, TensorFlow and Keras, and graph technologies like Neo4j and Neptune, to optimize performance and provide cost efficiencies in the platform
- Deliver software capabilities and products from initial concept through continuous improvement
- Develop and implement automated testing frameworks for ML and AI delivery, such as UAT, integration, A/B and champion/challenger methodologies
- Build functions for statistical tests
- Develop and maintain complex front-ends with a focus on user experience
- Develop and maintain backend systems
- Work with key stakeholders to design complex solutions and lead them from inception to production
- Create and maintain DevOps processes and application infrastructure, and utilize cloud services (including database systems and models)
- Innovate on and advocate for best practices and improved team processes; mentor junior team members
- Support live systems to ensure business continuity
At a minimum, here’s what we need from you:
- Bachelor's Degree in Information Technology or a related field
- 6+ years of work experience in Computer Science, Information Technology, or a related field
- In lieu of a degree, 8+ years of work experience in Computer Science, Information Technology, or a related field
If we had our say, we’d also look for:
- 2+ years of experience in automated software build and deployment in distributed cloud environments such as AWS, GCP, or Azure
- Experience developing and implementing API service capabilities and re-usable components
- Knowledge of container orchestration platforms such as Kubernetes, including concepts like pods, ConfigMaps, and Secrets
- CI/CD pipeline automation using tools such as Jenkins, Bamboo, or similar
- Understanding of Groovy scripting or similar to provide templated CI/CD artifacts
- Experience working with code repositories such as GitHub, and competence in versioning, branching, etc.
- Experience working with a variety of data platforms such as S3, Snowflake, Redis, Cassandra
- Understanding of observability and how to achieve reliability in a service
- Understanding of software testing principles and methodologies
- Skilled in high availability & scalability design, as well as performance monitoring
- Knowledge of machine learning, deep learning and other AI use cases is a plus
- Experience as part of an agile engineering or development team
- Exposure to statistical tests
What are you waiting for? Apply today!
We treat all applicants the same way we treat our employees: with respect. Discover Financial Services is an equal opportunity employer (EEO is the law). We thrive on diversity and inclusion. You will be treated fairly throughout our recruiting process, and without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status, in consideration for a career at Discover.