ABOUT THE ROLE
We’re seeking someone who can flex across both EST and PST time zones. This role also includes weekend coverage, so a willingness to adjust working hours as needed is important.
RESPONSIBILITIES
- Troubleshooting: Provide tier-2/3 support for data or performance issues in Hadoop clusters across the entire technical stack.
- Debugging: Conduct deep-dive debugging and optimization of Hadoop clusters, including NiFi, Impala, and Spark jobs.
- Migration: Lead product support during ODP Hadoop migrations and upgrades, ensuring post-migration stability and addressing evolving technical hurdles.
- Optimization: Design and optimize distributed Hadoop-based applications to ensure low-latency, high-throughput performance for big data workloads.
REQUIREMENTS
- Experience: 5+ years of hands-on experience working with Hadoop environments.
- Technical proficiency in core Hadoop services (HDFS, YARN, and Hive/Impala) and good working knowledge of Kafka, NiFi, Ambari, and Cloudera Manager internals.
- Extensive experience in troubleshooting and debugging Hadoop components.
- Linux: Advanced skills in configuring, tuning, and troubleshooting Red Hat and Debian-based distributions.
- Strong desire to tackle complex technical problems in Hadoop and proven ability to do so with little or no direct daily supervision.
- Nice to have: proficiency in Python, Bash, or Scala for system automation and performance monitoring.
WHAT WE DO
Founded in 2018, Campbell, CA-based Acceldata has developed the world's first enterprise data observability platform to help enterprises build and operate great data products. Acceldata's solutions have been embraced by global customers, such as Dun & Bradstreet, Verisk, Oracle, PubMatic, PhonePe (Walmart), and many more. Acceldata investors include Insight Partners, March Capital, Industry Ventures, Lightspeed, Sorenson Ventures, Sanabil, and Emergent Ventures. Contact us to learn about the benefits of data observability.
