Posting Type
Remote
Job Overview
We are building a specialized team focused on enabling advanced analytics and reporting capabilities across our internal data ecosystem. As an Advanced Data Platform Engineer, you will design and implement scalable, cloud-native data platforms that integrate modern lakehouse technologies, distributed compute frameworks, and cloud services to support diverse analytical use cases and enterprise-scale insights.
You will work on systems leveraging Apache Spark, Delta Lake, and Iceberg to process large-scale datasets efficiently, while enabling internal users to build reporting and analytics through curated data models, optimized query performance, and reliable data pipelines. This role emphasizes technical depth, performance optimization, and governance best practices to deliver secure and reliable solutions.
Relativity’s scale and breadth provide significant opportunities for rich data exploration and insights. Our data infrastructure ensures that vast datasets remain accessible, secure, and compliant, while enabling innovation across the organization. We are making substantial investments in data lake technology and distributed systems to support future growth and advanced analytics.
Job Description and Requirements
Your Role in Action
Design and implement complex data pipelines and distributed systems using Spark and Python.
Apply software engineering best practices: clean code, modular design, CI/CD, automated testing, and code reviews.
Develop and maintain lakehouse capabilities with Delta Lake and Iceberg, ensuring reliability and performance.
Enable analytics workflows by integrating dbt for SQL transformations running on Spark.
Collaborate with internal teams to deliver curated datasets and self-service analytics capabilities.
Optimize data warehousing solutions such as Databricks and Snowflake for performance and scalability.
Implement observability and governance frameworks, including data lineage and compliance controls.
Drive performance tuning, scalability strategies, and cost optimization across Spark jobs and cloud-native environments.
Core Requirements:
Strong programming skills in Python and SQL; experience with Apache Spark for distributed data processing.
Expertise in Delta Lake and/or Apache Iceberg for lakehouse architecture.
Familiarity with dbt, Databricks, and Snowflake for analytics workflows.
Solid understanding of software engineering principles, CI/CD, and automated testing.
Familiarity with Kubernetes, Docker, and infrastructure-as-code tools.
Understanding of performance tuning, scalability strategies, and cost optimization for large-scale systems.
Nice to Have:
Exposure to event-driven architectures and advanced analytics platforms.
Experience enabling self-service analytics for internal stakeholders.
Experience in any of the following languages: Java, Scala, Rust.
Relativity is a diverse workplace, and we love and celebrate the different skills and life experiences our people bring. We believe that employees are happiest when they're empowered to be their full, authentic selves, regardless of how they identify.
Benefit Highlights:
Comprehensive health, dental, and vision plans
Parental leave for primary and secondary caregivers
Flexible work arrangements
Two week-long company breaks per year
Unlimited time off
Long-term incentive program
Training investment program
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other legally protected basis, in accordance with applicable law.
Relativity is committed to competitive, fair, and equitable compensation practices.
This position is eligible for total compensation which includes a competitive base salary, an annual performance bonus, and long-term incentives.
The expected salary range for this role is between 146 000 and 218 000 PLN.
The final offered salary will be based on several factors, including but not limited to the candidate's depth of experience, skill set, qualifications, and internal pay equity. Hiring at the top end of the range is not typical, to allow for meaningful salary growth in this position.
Suggested Skills:
Engineering Principles, Hardware Integration, Innovation, Problem Solving, Process Improvements, Quality Assurance (QA), Research and Development, System Design, Technical Documentation, Troubleshooting
What We Do
At Relativity, we build innovative and comprehensive tools for making sense of unstructured data. When more people can find the facts in mountains of documents, emails, and texts, more legal and data-centric matters can be resolved equitably. Join us in our mission to help our customers organize data, discover the truth, and act on it.
Relativity makes software to help users organize data, discover the truth, and act on it. Its SaaS product, RelativityOne, manages large volumes of data and quickly identifies key issues during litigation and internal investigations. Relativity has more than 300,000 users in approximately 40 countries, serving thousands of organizations globally, primarily in the legal, financial services, and government sectors, including the U.S. Department of Justice and 198 of the Am Law 200.
Relativity does not tolerate racism or discrimination of any kind. We do not accept unfair treatment of any person or group of people. We’re committed to advocating for change to make our world a more inclusive, just place.
Why Work With Us
We believe in our team members and we want to help you own your career as part of a community of values-driven people who help customers around the world solve complex data challenges. At Relativity, you’ll take on challenging work, but you’ll also partner with talented colleagues and pursue plenty of learning and development opportunities.