Rhino Federated Computing
Rhino solves one of the biggest challenges in AI: seamlessly connecting siloed data through federated computing. The Rhino Federated Computing Platform (Rhino FCP) serves as the 'data collaboration tech stack', extending from computing resources to data preparation & discoverability to model development & monitoring, all in a secure, privacy-preserving environment. To do this, Rhino FCP offers flexible architecture (multi-cloud and on-prem hardware), end-to-end data management workflows (multimodal data, schema definition, harmonization, and visualization), and privacy-enhancing technologies (e.g., differential privacy), and supports the secure deployment of custom code & 3rd-party applications via persistent data pipelines. Rhino is trusted by >60 leading organizations worldwide, including 14 of Newsweek's 20 'Best Smart Hospitals' and top-20 global biopharma companies, and is extending this foundation to financial services, ecommerce, and beyond.
The company is headquartered in Boston, with an R&D center in Tel Aviv.
About the Role
We are seeking a Senior Analytics Engineer to architect, build, and evolve Rhino's core analytics data infrastructure. This role focuses heavily on data modeling, metrics standardization, data quality and governance, semantic layer design, and building reusable analytics foundations that power decision-making across the company.
You will own the creation of Rhino’s analytics data models, governed metrics, documentation, and insights generation layer, ensuring that teams across Sales, Customer Solutions, Product, and Business Operations can rely on consistent, trusted, and high-quality data.
This is a high-impact role combining strong technical rigor with a product-oriented mindset for enabling data consumers.
Key Responsibilities
Data Modeling & Metrics Layer (Primary Focus)
- Architect and maintain Rhino’s central analytics data warehouse data models, ensuring they are scalable, performant, well-documented, and aligned with business logic.
- Build and own a consistent metrics layer (semantic layer / metrics catalog) to standardize KPIs across the company and reduce ambiguity in reporting.
- Develop governed, reusable fact and dimension models supporting product usage analytics, platform health analytics, and operational analytics (see the modeling sketch after this list).
- Design and maintain robust Python/SQL pipelines to ingest, clean, transform, and integrate data from sources such as HubSpot, Jira Service Desk, platform logs/metrics, internal databases, and third-party systems.
- Build CI/CD-ready analytics pipelines with monitoring, tests, and data quality alerts (a quality-check sketch follows this list).
- Partner closely with Engineering to align data definitions, schemas, and event instrumentation.
- Build and maintain the insights/semantic layer that powers dashboards, self-service exploration, and automated insights.
- Create intuitive self-service tools (Looker dashboards, data marts) that help teams easily navigate trusted data.
- Document models and metrics clearly to support cross-team accessibility and adoption.
- Work with Commercial, Customer Solutions and Delivery, Product, and Operations leadership to translate business questions into data models and metric definitions, not just dashboards.
- Serve as the "product manager" for the analytics stack: gather requirements, prioritize improvements, and ensure alignment with strategic needs.
- Close the loop with engineering and product teams to ensure event data, platform telemetry, and operational data are modeled accurately and consistently.
- Leverage LLMs and generative AI to build question-answering capabilities on top of structured data models (illustrated in the final sketch after this list).
- Build structured, embedding-friendly data layers that support semantic search and automated insights.
- Explore opportunities to accelerate analysis through AI-assisted workflows and toolchains.
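For illustration, a minimal sketch of the kind of governed modeling work described above: a conformed dimension, a fact table, and one standardized KPI, built in Python against BigQuery. All dataset, table, and column names here are hypothetical placeholders, not Rhino's actual schema.

# Sketch: governed fact/dimension models plus one standardized KPI in BigQuery.
# Assumes default GCP credentials; the "raw" and "analytics" datasets are invented.
from google.cloud import bigquery

client = bigquery.Client()

# Conformed dimension: one row per customer, shared by every downstream mart.
DIM_CUSTOMER = """
CREATE OR REPLACE TABLE analytics.dim_customer AS
SELECT customer_id, customer_name, segment, signed_at
FROM raw.hubspot_companies
"""

# Fact: one row per platform session, keyed to the customer dimension.
FCT_PLATFORM_USAGE = """
CREATE OR REPLACE TABLE analytics.fct_platform_usage AS
SELECT session_id, customer_id, DATE(started_at) AS usage_date, duration_seconds
FROM raw.platform_logs
"""

# Governed KPI: weekly active customers, defined once and reused everywhere.
WEEKLY_ACTIVE_CUSTOMERS = """
SELECT DATE_TRUNC(usage_date, WEEK) AS week,
       COUNT(DISTINCT customer_id) AS weekly_active_customers
FROM analytics.fct_platform_usage
GROUP BY week
ORDER BY week
"""

for sql in (DIM_CUSTOMER, FCT_PLATFORM_USAGE, WEEKLY_ACTIVE_CUSTOMERS):
    client.query(sql).result()  # .result() blocks until each job completes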
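In the same spirit, a hedged sketch of the CI/CD-ready quality gate mentioned above: plain Python checks that an orchestrator such as Airflow could run as a task, failing loudly when a test breaks. Check names and tables are illustrative assumptions.

# Sketch: data quality checks that raise (and so fail an orchestrated task)
# when governed tables violate their contracts. Table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

CHECKS = {
    # The fact table's primary key must be unique.
    "fct_platform_usage_pk_unique": """
        SELECT COUNT(*) - COUNT(DISTINCT session_id) AS failures
        FROM analytics.fct_platform_usage
    """,
    # Every fact row must join to a known customer (referential integrity).
    "fct_platform_usage_fk_customer": """
        SELECT COUNT(*) AS failures
        FROM analytics.fct_platform_usage f
        LEFT JOIN analytics.dim_customer c USING (customer_id)
        WHERE c.customer_id IS NULL
    """,
}

def run_checks() -> None:
    failed = []
    for name, sql in CHECKS.items():
        failures = list(client.query(sql).result())[0]["failures"]
        if failures:
            failed.append(f"{name}: {failures} bad rows")
    if failed:
        # In production this might page on-call or post to Slack; raising is
        # enough for an orchestrator to mark the task (and its alert) as failed.
        raise RuntimeError("data quality checks failed: " + "; ".join(failed))

if __name__ == "__main__":
    run_checks()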
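Finally, a deliberately simplified sketch of LLM-backed question answering over governed models: the semantic layer's definitions are handed to a model as context, the model drafts SQL, and the SQL runs against the warehouse. The model name, prompt, and schema snippet are assumptions for illustration, not a Rhino implementation; a production system would validate and sandbox generated SQL before executing it.

# Sketch: natural-language Q&A over the governed schema via an LLM.
# Assumes OPENAI_API_KEY and GCP credentials are configured; schema snippet
# and model choice are illustrative only.
from google.cloud import bigquery
from openai import OpenAI

bq = bigquery.Client()
llm = OpenAI()

SEMANTIC_CONTEXT = """
Table analytics.fct_platform_usage(session_id, customer_id, usage_date, duration_seconds)
Table analytics.dim_customer(customer_id, customer_name, segment, signed_at)
Metric weekly_active_customers = COUNT(DISTINCT customer_id) per DATE_TRUNC(usage_date, WEEK)
"""

def answer(question: str):
    # Ask the model to translate the question into SQL over the governed schema.
    resp = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Return only BigQuery SQL using these definitions:\n" + SEMANTIC_CONTEXT},
            {"role": "user", "content": question},
        ],
    )
    sql = resp.choices[0].message.content.strip()
    if sql.startswith("```"):  # strip a markdown fence if the model adds one
        sql = sql.strip("`").removeprefix("sql").strip()
    return list(bq.query(sql).result())  # NOTE: validate generated SQL in production

print(answer("How many weekly active customers did we have last month?"))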
Required Skills
- 5+ years in Analytics Engineering, Data Engineering, Business Intelligence, or related roles, with a strong focus on building data models and governed metrics.
- Expert SQL and Python skills, including experience designing production-grade data transformations and pipelines.
- Strong experience with enterprise data warehouses (BigQuery preferred), analytics modeling tools and BI platforms (Looker preferred), and workflow orchestration tools (Airflow or similar).
- Demonstrated ability to design clean, scalable, well-documented data models and metrics.
- Deep attention to detail regarding data accuracy, definitions, lineage, and quality.
- Strong cross-functional collaboration and ability to translate business logic into technical data structures.
- Experience building or scaling analytics engineering foundations in a startup environment.
- Familiarity with GTM analytics, product usage analytics, and operational KPIs.
- Experience with BigQuery, Looker, Vertex AI, Grafana, Streamlit, or similar modern analytics infrastructure.
- Prior experience applying LLMs to analytics workflows (semantic layers, insights automation, data Q&A systems).
Hybrid - 3 days onsite in Boston