We’re not just building better tech. We’re rewriting how data moves and what the world can do with it. With Confluent, data doesn’t sit still. Our platform puts information in motion, streaming in near real-time so companies can react faster, build smarter, and deliver experiences as dynamic as the world around them.
It takes a certain kind of person to join this team. Those who ask hard questions, give honest feedback, and show up for each other. No egos, no solo acts. Just smart, curious humans pushing toward something bigger, together.
One Confluent. One Team. One Data Streaming Platform.
About the Role:
The Stream Processing & Analytics (SPA) team is building an elastic, reliable, durable, cost-effective, and performant stream processing engine based on Apache Flink for Confluent Cloud.
This role is critical for enabling our customers to build custom functions and apps in Confluent Cloud, extending the stream processing engine to meet the specific needs of their use cases, no matter how complex. You will champion a best-in-class SDLC for users who prefer Java or Python over SQL and want to leverage the power of managed stream processing on Confluent Cloud without sacrificing best practices from their typical workflows.
Work on Flink user-defined functions and the Table API to provide a great experience for customers with sophisticated use cases
Play a crucial role in designing, developing and operationalizing critical user-facing interfaces as well as the backing cloud infrastructure
Collaborate with the Apache Flink community to establish standards and improve open source foundations
Produce clean, well-documented, and maintainable code that adheres to established team standards and security best practices
Deliver value for customers by taking on their most challenging problems
As a vital member of our team, take responsibility for developing, managing, and maintaining a mission-critical service with a 99.99% SLA, running in 88+ AWS, GCP, and Azure regions
Enhance the stability, performance, scalability, and operational excellence across multiple critical systems
BS, MS, or PhD in computer science or a related field, or equivalent work experience
2-4 years of relevant stream processing experience
Strong fundamentals in distributed systems design and development
Experience running production services in the cloud and participating in an on-call rotation
A self-starter with the ability to work effectively in teams
Proficiency in Java, and comfort working with Go and Python
A strong background in distributed storage systems or databases
Experience with public clouds (AWS, Azure, or GCP) and Kubernetes operators
Contributions to open-source projects, especially in Flink or the stream processing area
Belonging isn’t a perk here. It’s the baseline. We work across time zones and backgrounds, knowing the best ideas come from different perspectives. And we make space for everyone to lead, grow, and challenge what’s possible.
We’re proud to be an equal opportunity workplace. Employment decisions are based on job-related criteria, without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by law.
What We Do
Your data shouldn’t be a problem to manage. It should be your superpower. The Confluent data streaming platform transforms organizations with trustworthy, real-time data that seamlessly spans your entire environment and powers innovation across every use case. Create smarter, deploy faster, and maximize efficiency with a true data streaming platform from the pioneers in data streaming. Learn more at confluent.io.
Why Work With Us
At Confluent, we’re not just building better tech, we’re rewriting how data moves. No egos, no solo acts - just smart, curious people pushing toward something bigger, together. Belonging isn’t a perk here. It’s the baseline. Work from anywhere. Build with everyone. One Confluent. One Team. A whole new way of making data flow.