Kape is a cybersecurity company focused on helping everyone have a better digital experience with greater privacy and protection. With over 1,000 experienced individuals across ten global locations (the UK, Israel, Germany, Romania, France, the Philippines, the USA, Singapore, Hong Kong, and Cyprus), the Kape team is guided by a single mission: to create a safer, more secure world.
Job Summary
We are seeking a highly skilled, hands-on Senior Data Engineer to architect and maintain modern data infrastructure and pipelines. This is a technical role focused on building a scalable data platform that powers analytics, insights, and agentic workflows, enabling data-driven decision making across the company. You will work closely with analysts, product, and engineering teams to design end-to-end data solutions using tools such as AWS Redshift, Athena, Snowflake, and other leading cloud platforms.
Core Responsibilities
- Design and build scalable data pipelines to ingest, process, and store large volumes of structured and unstructured data from diverse sources.
- Develop and maintain robust data warehouse architectures leveraging tools such as AWS Redshift, Athena, and Snowflake.
- Optimize data models, queries, and storage strategies for performance, scalability, and cost-effectiveness.
- Collaborate with cross-functional stakeholders (analytics, product, payments, and finance) to gather requirements and deliver data solutions that support business goals.
- Ensure data quality, security, and privacy through best practices in governance, testing, and monitoring.
- Own and operate production data workflows, resolving incidents and ensuring reliability.
- Develop and maintain infrastructure optimized for both analytics and AI workloads, including transformation and semantic layers.
- Design and monitor agentic flows for common analytical use cases, such as ‘ask your data’ interfaces.
Tech Stack
- Languages & Tools: Python, SQL
- Data Warehousing & Query Engines: Snowflake, AWS Redshift, Athena
- Pipeline & Orchestration: Apache Airflow, AWS Glue, dbt
- Cloud & DevOps: AWS (Lambda, S3, IAM), Docker, Terraform
- Streaming: Kafka, Kinesis
- BI & Analytics: Tableau, Power BI
Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 5+ years of experience as a Data Engineer, with a strong focus on data pipeline development and data warehousing
- Deep proficiency in Snowflake or AWS Redshift
- Strong programming skills in Python and SQL
- Experience with pipeline orchestration tools (Airflow, Glue, dbt)
- Solid understanding of data modeling, schema design, and ETL/ELT processes
- Strong analytical, problem-solving, and communication skills
- Ability to work cross-functionally and mentor others
Nice to Have
- Experience in high-growth, product-led, or SaaS environments
- Hands-on experience with LLMs (Claude, OpenAI, etc.) in a production environment
- Familiarity with AWS infrastructure and DevOps practices
- Knowledge of data privacy and security standards (GDPR, SOC2)
- Understanding of agent architectures (tool use, planning, memory management), prompt engineering and evaluation
Additional Notes
- At the moment, we do not sponsor visas in the EU. For Hong Kong, we require at least two years of working experience and a university degree in a related field. For Singapore and the UK, we can only sponsor visas for mid-career positions or above.
- Please upload your resume as a PDF and do not include any salary or compensation information in it.
About ExpressVPN
ExpressVPN is one of the world's leading providers of online privacy and security services for consumers. Started in 2009, we've grown to have millions of active paying customers, a team of more than 700 people worldwide, and a brand recognized by hundreds of millions of people in 18 languages and more than a hundred countries. We see huge growth in our industry and are gaining market share through strong execution.