Duties and Responsibilities - Architecture Design
- Plan, design, and evolve data platform solutions within a Data Mesh architecture, ensuring decentralized data ownership and scalable, domain-oriented data pipelines.
- Apply Domain-Driven Design (DDD) principles to model data, services, and pipelines around business domains, promoting clear boundaries and alignment with domain-specific requirements.
- Collaborate with stakeholders to translate business needs into robust, sustainable data architecture patterns.
Duties and Responsibilities - Software Development & DevOps
- Develop and maintain production-level applications primarily using Python (Pandas, PySpark, Snowpark), with the option to leverage other languages (e.g., C#) as needed.
- Implement and optimize DevOps workflows, including Git/GitHub, CI/CD pipelines, and infrastructure-as-code (Terraform), to streamline development and delivery processes.
- Containerize and deploy data and application workloads on Kubernetes, leveraging KEDA for event-driven autoscaling and ensuring reliability, efficiency, and high availability.
Duties and Responsibilities - Big Data Processing
- Handle enterprise-scale data pipelines and transformations, with a strong focus on Snowflake or comparable technologies such as Databricks or BigQuery.
- Optimize data ingestion, storage, and processing performance to ensure high-throughput and fault-tolerant systems.
Duties and Responsibilities - Data Stores
- Manage and optimize SQL/NoSQL databases, Blob storage, Delta Lake, and other large-scale data store solutions.
- Evaluate, recommend, and implement the most appropriate storage technologies based on performance, cost, and scalability requirements.
Duties and Responsibilities - Data Orchestration & Event-Driven Architecture
- Build and orchestrate data pipelines across multiple technologies (e.g., dbt, Spark), employing tools like Airflow, Prefect, or Azure Data Factory for macro-level scheduling and dependency management.
- Design and integrate event-driven architectures (e.g., Kafka, RabbitMQ) to enable real-time and asynchronous data processing across the enterprise.
- Leverage Kubernetes & KEDA to orchestrate containerized jobs in response to events, ensuring scalable, automated operations for data processing tasks.
Duties and Responsibilities - Scrum Methodologies
- Participate fully in Scrum ceremonies, leveraging tools like JIRA and Confluence to track progress and collaborate with the team.
- Provide input on sprint planning, refinement, and retrospectives to continuously improve team efficiency and product quality.
Duties and Responsibilities - Cloud
- Deploy and monitor data solutions in Azure, leveraging its native services for data and analytics.
Duties and Responsibilities - Collaboration & Communication
- Foster a team-oriented environment by mentoring peers, offering constructive code reviews, and sharing knowledge across the organization.
- Communicate proactively with technical and non-technical stakeholders, ensuring transparency around progress, risks, and opportunities.
- Take ownership of deliverables, driving tasks to completion and proactively suggesting improvements to existing processes.
Duties and Responsibilities - Problem Solving
- Analyze complex data challenges, propose innovative solutions, and drive them through implementation.
- Maintain high-quality standards in coding, documentation, and testing to minimize defects and maintain reliability.
- Exhibit resilience under pressure by troubleshooting critical issues and delivering results within tight deadlines.
Required Education and Experience
- Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field (or equivalent professional experience).
- Proven experience with Snowflake (native Snowflake application development is essential).
- Proficiency in Python for data engineering tasks and application development.
- Experience deploying and managing containerized applications using Kubernetes (preferably on Azure Kubernetes Service).
- Understanding of event-driven architectures and hands-on experience with event buses (e.g., Kafka, RabbitMQ).
- Familiarity with data orchestration and choreography concepts, including scheduling/orchestration tools (e.g., Airflow, Prefect) and eventual-consistency/distributed-systems patterns that avoid centralized orchestration at the platform level.
- Hands-on experience with cloud platforms (Azure preferred) for building and operating data pipelines.
- Solid knowledge of SQL and database fundamentals.
- Strong ability to work in a collaborative environment, including cross-functional teams in DevOps, software engineering, and analytics.
Preferred Education and Experience
- Master’s degree in a relevant technical field.
- Certifications in Azure, Snowflake, or Databricks (e.g., Microsoft Certified: Azure Data Engineer, SnowPro, Databricks Certified Data Engineer).
- Experience implementing CI/CD pipelines for data-related projects.
- Working knowledge of infrastructure-as-code tools (e.g., Terraform, ARM templates).
- Exposure to real-time data processing frameworks (e.g., Spark Streaming, Flink).
- Familiarity with data governance and security best practices (e.g., RBAC, data masking, encryption).
- Demonstrated leadership in data engineering best practices or architecture-level design.
Supervisory Responsibilities
- This position may lead project-based teams or mentor junior data engineers, but typically does not include direct, ongoing management of staff.
- The role involves close collaboration with stakeholders (Data Architects, DevOps Engineers, Data Product Managers) to set technical direction and ensure high-quality deliverables.
Job Title
- Once hired, this person will have the job title Senior Engineer II.
What We Do
Enable helps manufacturers, distributors, and retailers take control of their rebate programs and turn them into an engine for growth. Starting in finance and commercial teams, Enable helps better manage rebate complexity with automated real-time data and insights, accurate forecasting, and stronger cross-functional alignment. This lets you, and everyone else you authorize in your business, know exactly where you are with rebates. Then you can extend Enable externally to your suppliers and/or customers, setting you and your partners up to use rebates as a strategy with one collaborative place to author, agree, execute, and track the progress of your deals.