Parallel Software Engineer

Posted 3 Days Ago
Hiring Remotely in Mountain View, CA, USA
In-Office or Remote
Senior level
Software

Mountain View, CA / Hyderabad, IN / Remote

About DataPelago:

DataPelago is at the forefront of revolutionizing data processing for traditional analytics and cutting-edge GenAI preprocessing. We are building an innovative data processing engine that is transforming how Apache Spark, Apache Flink, Ray, and others operate on diverse, large-scale data. Our team of engineers drives and adopts advances in hardware-accelerated computing, parallel processing of large-scale data, query optimization, distributed systems, compilers, machine learning, and cloud-native computing. We are looking for specialists to join our engineering team and shape the future of accelerated data processing.

The Opportunity:
As a Parallel Software Engineer, you will be a key individual contributor developing advanced parallel software that unlocks the full potential of diverse hardware accelerators, including GPUs and SIMD-capable CPUs. You will enhance the functional breadth, performance, scale, and reliability of the data processing operators that are integral to our data processing engine. This is a unique opportunity to make a significant impact on a category-defining product and to work with a talented team of engineers.

What You'll Do:

• Architect: Influence the architecture of how our data processing engine efficiently harnesses the parallelism in GPUs and SIMD-capable CPUs to process diverse, large-scale data.

• Design: Lead the design of functional and performance enhancements to the operators and functions accelerated by our engine.

• Core Development: Individually design, implement, test, optimize, and maintain parallel implementations of operators and functions on diverse acceleration hardware.

• Innovation and Differentiation: Analyze advances in accelerated computing hardware, programming models, and related tools, and ensure our engine extends our technology and product leadership.
• Collaboration: Partner effectively with the execution engine engineering team to integrate parallel software components with the overall engine.

• Continuous Improvement: Foster best practices in design and code reviews, testing, CI/CD, and issue resolution to maintain the highest product quality, security, efficiency, and productivity.

What You'll Bring:

• Bachelor's degree in Computer Science or a related field with 5+ years of relevant experience, OR a Master's degree in Computer Science or a related field with 3+ years of relevant experience.
• 3+ years of deep technical experience developing production applications that process large-scale data using SIMD extensions of CPUs (e.g., AVX), GPU programming models (e.g., CUDA, ROCm), or an equivalent accelerated computing framework.

• Demonstrated experience working with software libraries, development tools, and profiling tools specific to parallel and accelerated computing.

• Demonstrated experience troubleshooting and resolving functional and performance anomalies in both pre- and post-production scenarios.
• Strong knowledge of computer architecture.
• Exceptional programming skills in C, C++.
• Extensive development experience in Linux environments.
• Strong analytical and problem-solving skills with a passion for performance optimization.

Location Considerations:

We value face-to-face collaboration but recognize that talent can be found anywhere. Our engineering team works at our headquarters in Mountain View, CA, at our India office in Hyderabad, and at remote locations.

Why Join DataPelago?

• Technology Leadership: Shape the architecture and development of core parallel data processing functions of our engine.

• Cutting-Edge Innovation: Work on challenging problems at the forefront of accelerated computing and data processing.

• Significant Impact: Your contributions will directly impact the performance and scalability of our mission-critical platform.

• Growth: Expand your technical expertise and scope of responsibilities working with other talented engineers and a growing product.

• Competitive compensation, stock options, a comprehensive benefits package, and leadership development opportunities.

The Company
HQ: Mountain View, CA
60 Employees
Year Founded: 2025

What We Do

DataPelago is redefining how enterprises process data for AI and analytics at scale. As organizations race to operationalize artificial intelligence, they are discovering that the greatest barrier to progress isn't a lack of models or talent – it's the infrastructure beneath them. Data pipelines remain fragmented across specialized systems for analytics, AI, and data engineering, each optimized for specific workloads but incapable of operating as a cohesive whole. The result is inefficiency: duplicated data, stranded compute resources, and escalating costs that slow innovation.

DataPelago was founded to solve this challenge. Its flagship product, Nucleus, is the world's first Universal Data Processing Engine (UDPE) – a new layer that sits between data lakes and query engines to unify data processing within a single, hardware-aware stack. Built from first principles for accelerated computing, Nucleus allows companies to process, move, and activate their data orders of magnitude more efficiently than existing systems. At its core, Nucleus dynamically orchestrates workloads across heterogeneous compute environments – CPUs, GPUs, TPUs, and FPGAs – ensuring every job runs on the optimal hardware for maximum performance and efficiency. This unified approach eliminates the need to maintain separate infrastructure for different data workloads, dramatically reducing complexity and total cost of ownership by up to 40%.

Nucleus supports structured, unstructured, and semi-structured data in a single environment, enabling AI and analytics workloads to coexist seamlessly. It integrates easily with existing data ecosystems and open-source frameworks, providing enterprises with flexibility and performance without requiring code changes or proprietary lock-in. With Nucleus, data teams can accelerate queries, streamline pipelines, and scale AI initiatives faster, all while controlling infrastructure spend.
Early adopters across industries are leveraging the platform to speed up data preparation, model training, and real-time analytics by up to 10x, turning data from a bottleneck into a competitive advantage. DataPelago’s mission is to make high-performance, cost-efficient data processing achievable for every enterprise. By bridging the gap between data infrastructure and AI innovation, the company is helping organizations unlock the full potential of their data, laying the foundation for a new era of intelligence at scale.

Why Work With Us

DataPelago is pioneering the world’s first Universal Data Processing Engine, unifying AI and analytics in a single, hardware-aware platform. We’re solving one of the biggest challenges in enterprise AI – making data infrastructure faster, simpler, and more efficient. Join us to build the foundation for the next era of intelligent computing.

