Join us as a Data Engineer
- You’ll be the voice of our customers, using data to tell their stories and put them at the heart of all decision-making
- We’ll look to you to drive the build of effortless, digital-first customer experiences
- If you’re ready for a new challenge and want to make a far-reaching impact through your work, this could be the opportunity you’re looking for
- We're offering this role at vice president level
As a Data Engineer, you’ll be looking to simplify our organisation by developing innovative data-driven solutions through data pipelines, modelling and ETL design, aspiring to be commercially successful while keeping our customers, and the bank’s data, safe and secure.
You’ll drive customer value by understanding complex business problems and requirements to correctly apply the most appropriate and reusable tool to gather and build data solutions. You’ll support our strategic direction by engaging with the data engineering community to deliver opportunities, along with carrying out complex data engineering tasks to build a scalable data architecture.
Your responsibilities will also include:
- Building advanced automation of data engineering pipelines through removal of manual stages
- Embedding new data techniques into our business through role modelling, training, and experiment design oversight
- Delivering a clear understanding of data platform costs to meet your department’s cost saving and income targets
- Sourcing new data using the most appropriate tooling for the situation
- Developing solutions for streaming data ingestion and transformations in line with our streaming strategy
To thrive in this role, you’ll need a strong understanding of data usage and dependencies and experience of extracting value and features from large scale data. You’ll also bring practical experience of programming languages alongside knowledge of data and software engineering fundamentals.
Additionally, you’ll need:
- Experience in leading the design and delivery of real-time data pipelines using Apache Kafka, ensuring low-latency, high-throughput event streaming across distributed systems
- Experience in architecting scalable, distributed data processing solutions using Apache Spark, optimised for performance, cost, and resilience
- Experience in driving adoption of AWS-native data services such as Kinesis, Glue, EMR, Lambda, DynamoDB, Aurora, S3, and Athena to build cloud-first data platforms
- Experience in designing and implementing NoSQL data models in databases such as MongoDB, DynamoDB, and Cassandra for high-availability, high-volume operational and analytical workloads
- Data warehousing and data modelling capabilities
Hours: 45
Job Posting Closing Date: 13/04/2026
What We Do
We’re a business that understands when our customers and people succeed, our communities succeed, and our economy thrives. As part of our purpose, we’re looking at how we can drive change for our communities in enterprise, learning and climate.

As one of the leading supporters of UK business, we’re prioritising enterprise as a force of change. We’re focusing on the people and communities who have traditionally faced the highest barriers to entry and figuring out ways to remove these.

Learning is also key to our continued growth as a company in an ever-changing and increasingly digital world. By setting a dynamic and leading learning culture, our people prosper, and our customers are given the tools to continue to improve their financial capability and confidence.

One of the biggest challenges we all face in our future is climate change. That’s why we’ve put it right at the core of our purpose. We want to champion climate solutions with financing and entrepreneurial support, fully embed climate into our culture and decision-making, and be climate positive by 2025. We’re committed to using our purpose to break down barriers, drive change and ultimately create a great place to work.