The Role
The Staff Product Support Engineer will design, optimize, and manage Hadoop and Spark applications, solve data challenges, and ensure operational stability in a 24/7 environment.
Summary Generated by Built In
We are seeking a Staff Product Support Engineer / Product Specialist - Hadoop Operations SME who will be responsible for designing, optimizing, migrating, and scaling Hadoop and Spark-based data processing systems. This role involves hands-on experience with Hadoop and other core data operations, focusing on building resilient, high-performance distributed data systems.
You will collaborate with customer engineering teams to deliver high-throughput Hadoop, NiFi, and Spark applications; solve complex data challenges in migration, upgrades, and reliability; and optimize post-migration system performance.
This role requires flexibility to work in rotational shifts, based on team coverage needs and customer demand. Candidates should be comfortable supporting operations in a 24/7 environment and be willing to adjust their working hours accordingly.
We’re looking for someone who will:
- Design & Optimization: Design and optimize distributed Hadoop-based applications, ensuring low-latency, high-throughput performance for big data workloads.
- Troubleshooting: Provide expert-level support for data and performance issues in Hadoop, NiFi, and Spark jobs and clusters.
- Data Processing Expertise: Work extensively with large-scale data pipelines using Hadoop, NiFi, and Spark's core components.
- Performance Tuning: Conduct deep-dive performance analysis, debugging, and optimization of NiFi, Impala, and Spark jobs to reduce processing time and resource consumption.
- Cluster Management: Collaborate with DevOps and infrastructure teams to manage NiFi, Impala, and Spark clusters on platforms like Hadoop/YARN, Kubernetes, or cloud platforms (AWS EMR, GCP Dataproc, etc.).
- Migration: Provide dedicated support to ensure the stability and reliability of the new ODP Hadoop environment during and after migration. Promptly address evolving technical challenges and optimize system performance after migration to ODP.
Good to have:
- Master’s degree and experience with scripting languages (Scala, Python, Bash, PowerShell).
- Familiarity with virtual machine technologies and multi-node environments (50+ nodes).
- Proficient with Linux, NFS, and Windows, including application installation, scripting, and working with the command line.
- Working knowledge of application, server, and network security management concepts; certification in any of the leading cloud providers (AWS, Azure, GCP) and/or Kubernetes.
- Knowledge of databases like MySQL and PostgreSQL.
- Involvement in other support-related activities, such as performing POCs and assisting with onboarding deployments of Acceldata and Hadoop distribution products.
Top Skills
AWS EMR
Bash
GCP Dataproc
Hadoop
Impala
Kubernetes
Linux
MySQL
NiFi
PostgreSQL
PowerShell
Python
Scala
Spark
The Company
What We Do
Founded in 2018, Campbell, CA-based Acceldata has developed the world's first enterprise data observability platform to help enterprises build and operate great data products.
Acceldata's solutions have been embraced by global customers, such as Dun & Bradstreet, Verisk, Oracle, PubMatic, PhonePe (Walmart), and many more.
Acceldata investors include Insight Partners, March Capital, Industry Ventures, Lightspeed, Sorenson Ventures, Sanabil, and Emergent Ventures. Contact us to learn about the benefits of data observability.