Key Responsibilities
- Data Pipeline Development: Design, build, and maintain ingestion/processing pipelines at scale using Python/SQL and Spark; operate within a lakehouse stack (Apache Iceberg or Delta Lake).
- Databricks & Snowflake Engineering: Implement and optimize workflows on Databricks (Jobs, Workflows, Delta), Snowflake (Warehouses, Tasks, Streams), and/or native cloud provider solutions (AWS / Azure).
- Platform Optimization: Improve performance, reliability, and cost on AWS (S3, Redshift, Athena/Glue, Lambda), with strong observability and IaC practices.
- Secure Data Management: Apply security-by-design, data governance, and compliance best practices across storage, compute, and sharing layers.
- AI Use Case Enablement: Partner with the Product and R&D teams to prepare data for initial AI/ML use cases (feature pipelines, data quality, lineage).
- Data Sharing & Integration: Enable secure, efficient data access for customers via connectors, APIs, and lakehouse sharing patterns (e.g., Delta Sharing, Snowflake data sharing).
Experience and Background
- Engineering school degree or equivalent, or a Master's degree in Computer Science, Data Science, or Applied Mathematics.
- 7–12 years of experience in data engineering or backend data platforms.
- Strong Python and SQL skills; experience with Spark and modern ELT/orchestration tools (e.g., dbt, Airflow).
- Hands-on with Databricks and/or Snowflake in production.
- Experience with AWS (S3, Glue/Athena, Redshift, Lambda) and lakehouse table formats (Iceberg or Delta Lake).
- Familiarity with data security, governance, and compliance.
- Proven experience with data modeling.
- Knowledge of cost management principles.
- Salesforce data knowledge is a plus, but not mandatory.
- Foundational AI/ML understanding and motivation to contribute to early use cases.
- Fluent in English and French, with clear communication, an ownership mindset, and a collaborative approach.
- Excellent interpersonal skills and the ability to interact with diverse business stakeholders.
Where you'll be
- Based in Paris (75002), France.
- Hybrid: 3 days in the office / 2 days remote.
- Full-time, permanent contract.
What We Do
At enterprise scale, Salesforce data is different. Data volumes are larger. Data models are more sophisticated. Integrations, regulations, and business processes are much more intricate. All this complexity dramatically increases the risks to your data, threatening to grind business to a halt. Odaseva is the only data platform built specifically to help the world's largest, most ambitious Salesforce customers keep their data protected, compliant, and agile.
With Odaseva, Salesforce architects and platform owners get a powerful set of tools to help solve the problems at the foundation of the Salesforce data value chain: keep customer data intact and available with comprehensive backup and archiving; apply analytics to prevent disruptions before they happen; use automation to take control of the entire data lifecycle and solve privacy and compliance issues at the root; and easily move data between production and non-production environments, to sandboxes, and to systems outside Salesforce.
White Paper: Odaseva Complete Guide to Salesforce Backup and Restore
https://www.odaseva.com/complete-guide-to-salesforce-backup-and-restore