Principal Data Processing Engineer - OSS
Mountain View, CA
About DataPelago:
DataPelago is at the forefront of revolutionizing data processing for traditional analytics and cutting-edge GenAI preprocessing. We are building an innovative data processing engine that is transforming how Apache Spark, Apache Flink, Ray, and others operate on diverse, large-scale data. Our engineers drive and adopt advances in hardware-accelerated computing, parallel processing of large-scale data, query optimization, distributed systems, compilers, machine learning, and cloud-native computing. We are looking for world-class engineers to join our team and shape the future of accelerated data processing.
The Role:
As a Principal Data Processing Engineer (OSS), you will be a key individual contributor in adopting and advancing the capabilities of open-source software (OSS) platforms such as Apache Gluten, Velox, Apache Spark, and Apache Flink in the context of DataPelago’s data processing engine. You will enhance the functional breadth, performance, scale, and reliability of the DataPelago engine through downstream and upstream contributions. You will have the opportunity to engage with the communities behind these platforms. This is a unique opportunity to make a significant impact on a category-defining product and work with a talented team of engineers.
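For a concrete (and purely illustrative) flavor of this integration work, the sketch below enables Apache Gluten’s Velox backend in a Spark session. It assumes the Gluten-Velox bundle jar is already on the classpath; the config keys follow Gluten’s public documentation, but the plugin class name and values vary by release and are assumptions here, not DataPelago’s actual setup.

import org.apache.spark.sql.SparkSession

// Illustrative sketch only; requires the Gluten-Velox bundle jar on the classpath.
// Config keys follow Apache Gluten's public docs; exact class names vary by release (assumption).
val spark = SparkSession.builder()
  .appName("gluten-velox-sketch")
  .config("spark.plugins", "org.apache.gluten.GlutenPlugin")        // Gluten query-plan plugin (recent releases)
  .config("spark.memory.offHeap.enabled", "true")                   // Velox operators allocate off-heap memory
  .config("spark.memory.offHeap.size", "4g")
  .config("spark.shuffle.manager",
    "org.apache.spark.shuffle.sort.ColumnarShuffleManager")         // Gluten's columnar shuffle manager
  .getOrCreate()

// Offloadable operators are rewritten into native Velox pipelines;
// anything unsupported falls back to vanilla Spark execution.
spark.range(1000000L).selectExpr("sum(id)").show()

The general pattern (a Spark plugin plus a handful of configs, with no application code changes) is how columnar offload engines of this kind typically hook into Spark’s query plan.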
What You'll Do:
- Influence the architecture of how our data processing engine interfaces with open-source platforms and engines.
- Lead the design of functional and performance enhancements to open source platforms such as Apache Gluten and Velox, and their integration with our data processing engine.
- Individually design, implement, test, optimize, and maintain components of the data processing engine.
- Analyze the technology roadmaps of Apache Gluten, Velox, and equivalent platforms, and identify opportunities for our engine to extend its technology and product leadership.
- Partner with engineering, product management, and customer success teams, as well as the open-source community.
- Foster best practices in design and code reviews, testing, CI/CD, and issue resolution to maintain the highest product quality, security, efficiency, and productivity.
What You'll Bring:
- BS/MS in Computer Science (or a related field) with 6+ years of relevant experience.
- 3+ years of deep technical experience in instrumenting, analyzing, and optimizing the performance of data processing engine components on benchmark and customer workloads.
- Sound knowledge of the architecture and internal operation of one or more of Apache Spark, Apache Flink, and Presto/Trino.
- Demonstrated experience in the design, development, and successful release of high-performance data processing engines for large production deployments.
- Exceptional programming skills in C, C++, and Java.
- Extensive development experience in Linux environments.
- Excellent communication and collaboration skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.
- Strong analytical and problem-solving skills with a passion for performance optimization.
Location Considerations:
We value face-to-face collaboration but recognize that talent can be found anywhere. Our engineering team works at our headquarters in Mountain View, CA, at our India office in Hyderabad, and at remote locations.
Why Join DataPelago?
- Technical Leadership: Shape the architecture and development of how our core engine integrates with open-source data processing platforms.
- Cutting-Edge Innovation: Work on challenging problems at the forefront of accelerated computing and data processing.
- Significant Impact: Your contributions will directly impact the performance and scalability of our mission-critical platform.
- Mentorship and Growth: Mentor and guide other talented engineers while expanding your own technical expertise.
- Competitive compensation, stock options, a comprehensive benefits package, and leadership development opportunities.
What We Do
DataPelago is redefining how enterprises process data for AI and analytics at scale. As organizations race to operationalize artificial intelligence, they are discovering that the greatest barrier to progress isn’t a lack of models or talent – it’s the infrastructure beneath them. Data pipelines remain fragmented across specialized systems for analytics, AI, and data engineering, each optimized for specific workloads but incapable of operating as a cohesive whole. The result is inefficiency: duplicated data, stranded compute resources, and escalating costs that slow innovation.
DataPelago was founded to solve this challenge. Its flagship product, Nucleus, is the world’s first Universal Data Processing Engine (UDPE) – a new layer that sits between data lakes and query engines to unify data processing within a single, hardware-aware stack. Built from first principles for accelerated computing, Nucleus allows companies to process, move, and activate their data orders of magnitude more efficiently than existing systems.
At its core, Nucleus dynamically orchestrates workloads across heterogeneous compute environments – CPUs, GPUs, TPUs, and FPGAs – ensuring every job runs on the optimal hardware for maximum performance and efficiency. This unified approach eliminates the need to maintain separate infrastructure for different data workloads, dramatically reducing complexity and cutting total cost of ownership by up to 40%.
Nucleus supports structured, unstructured, and semi-structured data in a single environment, enabling AI and analytics workloads to coexist seamlessly. It integrates easily with existing data ecosystems and open-source frameworks, providing enterprises with flexibility and performance without requiring code changes or proprietary lock-in.
With Nucleus, data teams can accelerate queries, streamline pipelines, and scale AI initiatives faster, all while controlling infrastructure spend. Early adopters across industries are leveraging the platform to speed up data preparation, model training, and real-time analytics by up to 10x, turning data from a bottleneck into a competitive advantage.
DataPelago’s mission is to make high-performance, cost-efficient data processing achievable for every enterprise. By bridging the gap between data infrastructure and AI innovation, the company is helping organizations unlock the full potential of their data, laying the foundation for a new era of intelligence at scale.
Why Work With Us
DataPelago is pioneering the world’s first Universal Data Processing Engine, unifying AI and analytics in a single, hardware-aware platform. We’re solving one of the biggest challenges in enterprise AI – making data infrastructure faster, simpler, and more efficient. Join us to build the foundation for the next era of intelligent computing.