What You'll Do:
- Create and maintain optimal data pipeline architecture
- Optimize Spark clusters for efficiency and performance by implementing robust monitoring to identify bottlenecks using data and metrics, and provide actionable recommendations for continuous improvement
- Assemble large, complex data sets that meet functional / non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader
- Work with data and analytics experts to strive for greater functionality in our data systems
What You Will Bring to Coupa:
- Strong programming skills in Python.
- Strong SQL skills, including query authoring, experience with relational databases, and working familiarity with a variety of database systems.
- Experience with processing workloads and code on Spark clusters.
- Experience with Data Warehouse solutions to support analytical and reporting needs.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Experience supporting and working with cross-functional teams.
- We are looking for a candidate with 4+ years of experience in a Software Engineer – Data role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
- Experience with object-oriented and functional scripting languages: Python, Java, C++, .NET, etc. Expertise in Python is a must.
- Experience with big data tools: Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
What We Do
Coupa is a global technology company that helps businesses run smarter by connecting all the ways they spend money — from procurement and expenses to payments and supply chain decisions — in one intelligent platform. In simple terms, Coupa gives organizations the visibility and control they need to make better financial choices, reduce waste, and drive real impact. It’s where technology meets purpose: helping companies manage their resources more responsibly while creating a positive ripple across their people, partners, and the planet.
Why Work With Us
At Coupa, we prioritize an inclusive and empathetic workplace where every voice is valued. Our teams are proactive and accountable, ensuring we collaborate effectively to achieve our goals. The foundation of our culture rests on our people; we believe in fostering an environment that encourages innovation and curiosity.
Remote Workspace
Employees work remotely.
Our virtual-first approach is intentional. It gives you the freedom to do your best work in a space that supports focus, balance, and creativity, while staying connected to a global team of changemakers who are redefining the future of business spend.