Dremio is the unified lakehouse platform for self-service analytics and AI, serving hundreds of global enterprises, including Maersk, Amazon, Regeneron, NetApp, and S&P Global. Customers rely on Dremio for cloud, hybrid, and on-prem lakehouses to power their data mesh, data warehouse migration, data virtualization, and unified data access use cases. Based on open source technologies, including Apache Iceberg and Apache Arrow, Dremio provides an open lakehouse architecture enabling the fastest time to insight and platform flexibility at a fraction of the cost. Learn more at www.dremio.com.
About the role
As a member of the Observability and Core Services team, you will play a key role in delivering critical platform capabilities for both Dremio Software and Dremio Cloud. You will develop the new observability platform that streams telemetry data from customers' private enterprise environments into Dremio Cloud, enabling real-time insights so we can better understand and support their needs at scale. In addition, you will help modernize our software stack by delivering shared services and core components that drive efficiency, consistency, standardization, and performance. You'll be responsible for designing and implementing solutions to complex challenges, including distributed and consistent persistence, distributed caching, networking, ease of configuration, containerized web services, proxies, private links, Terraform, roles and permissions, and much more.
You will grow through collaboration with other developers, taking ownership of complex problems, and delivering high-quality distributed systems at massive scale.
What you'll be doing
- Deliver an observability platform to ingest customer telemetry data using OpenTelemetry and industry best practices.
- Develop and maintain scalable core data store services to support a wide range of persistence needs, including remote KV stores and distributed caching solutions.
- Design and maintain shared coordination components such as distributed semaphores, process scheduling, leader election, and pub/sub messaging queues.
- Create and maintain application frameworks to standardize development, reduce time-to-delivery, and solve complex problems like dependency injection management and roles/permissions at scale.
- Collaborate with cloud SREs and DevOps teams to define best practices and deliver standardized solutions for common engineering processes.
- Engage with the developer community to gain deep insights into their problem domains and challenges.
- Prioritize the platform's engineering and technical roadmaps based on direct feedback from engineering teams.
- Experiment and innovate to tackle complex engineering challenges at scale, with a platform-first mindset.
What we're looking for
- Bachelor's, Master's, or higher in Computer Science or a related technical field
- 5+ years of relevant work experience
- Proficient in Java, Python, Bash, and Node.js
- Familiar with cloud platforms such as AWS, Azure, or GCP
- Hands-on experience with OpenTelemetry or related observability standards and platforms
- Strong understanding of networking and common protocols like TCP/IP, DNS, HTTP, etc.
- Proficient with NoSQL databases such as MongoDB or RocksDB
- Knowledge of SQL and RDBMS systems
- Experience with Redis caching and queuing systems
- Familiar with CI/CD tools like Jenkins and ArgoCD
- Experience with containers and orchestration tools, such as Docker and Kubernetes
Bonus Points
- In-depth knowledge of SaaS, microservices, and distributed systems development
- Hands-on experience with multi-threaded and asynchronous programming models
- Extensive experience in query processing and optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Practical experience with Java GC/heap management, Apache Arrow, SQL operators, caching techniques, and disk spilling
Return to Office Philosophy
Workplace Wednesdays - designed to break down silos, build relationships, and improve cross-team communication. Lunch catering / meal credits are provided in the office, and local socials align with Workplace Wednesdays. In general, Dremio will remain a hybrid work environment; we will not be implementing a 100% (5 days a week) return-to-office policy for all roles.
At Dremio, we hold ourselves to high standards when it comes to People, Thinking, and Action. Our Gnarlies (that's what we call our employees) communicate with clarity, drive accountability, and are respectful towards each other. We confront brutal facts and focus on results while operating with a sense of urgency and building a "flywheel". People who like to jump in and drive momentum will thrive in our #GnarlyLife.
Dremio is an equal opportunity employer supporting workforce diversity. We do not discriminate on the basis of race, religion, color, national origin, gender identity, sexual orientation, age, marital status, protected veteran status, disability status, or any other unlawful factor.
Dremio is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request accommodation due to a disability, please inform your recruiter.
Dremio has policies in place to protect the personal information that employees and applicants disclose to us. Please click here to review the privacy notice.
Important Security Notice for Candidates
At Dremio, we uphold trust and transparency as paramount values in all our interactions with customers, partners, employees, and the general public. We have been targeted by individuals creating fake domains similar to ours to scam prospects and candidates. Please note that all official communications from us will be from an @dremio.com domain. If you suspect you've been targeted by a scam, it's imperative to report the incident to your local law enforcement agencies. For more information about this type of scam, please refer to Dremio's official statement here.
What We Do
Dremio is the Data Lake Engine. Created by veterans of open source and big data technologies, and the creators of Apache Arrow, Dremio is a fundamentally new approach to data analytics that helps companies get more value from their data, faster. Dremio makes data engineering teams more productive, and data consumers more self-sufficient. For more information, visit www.dremio.com.
Founded in 2015, Dremio is headquartered in Mountain View, CA. Investors include Lightspeed Venture Partners, Redpoint, and Norwest Venture Partners. Connect with Dremio on GitHub, LinkedIn, Twitter, and Facebook.