What you'll do:
- Undertake technical discovery with stakeholders and understand the business context when designing and implementing solutions for data pipelines, models, and applications.
- Develop robust and scalable data pipelines and models hands-on using Python, SQL, and our evolving data stack (e.g., dbt, Snowflake, and MLOps tooling).
- Model data for optimal consumption by different workloads, e.g., analytics, BI, and ML modeling.
- Contribute to our library of analytical assets, including our feature store, enabling self-service analytics and reducing time spent on data wrangling.
- Collaborate with Data Scientists and Engineers to build and maintain end-to-end machine learning pipelines for training, inference, and monitoring at scale.
- Develop and maintain infrastructure, tooling, and monitoring for data applications and reproducible data science workflows.
- Test and deploy data assets and ML pipeline components to production environments, adhering to software engineering best practices (e.g., version control, CI/CD, testing).
- Identify and surface technical debt and related issues for resolution, contributing to the overall health and maintainability of our data ecosystem.
- Stay current with emerging practices, techniques, and frameworks in applied machine learning, data engineering, and analytics engineering.
What you'll bring:
- A minimum of 2 years' experience in data engineering, analytics engineering, ML engineering, or a software engineering role with a data focus.
- Strong proficiency in Python for data processing, pipeline development, and scripting. Experience with data and ML libraries (e.g., pandas, NumPy, scikit-learn) is highly desirable.
- Proven experience in developing, deploying, and maintaining data pipelines and data models in production environments.
- Familiarity with software engineering best practices (e.g., Git, CI/CD, testing, code reviews).
- Solid experience with SQL for data querying, transformation, and optimization.
- Experience with, or a strong interest in, machine learning concepts and MLOps (e.g., model deployment, monitoring, versioning) is a significant plus.
- Experience with cloud data warehouses like Snowflake is highly desirable.
- Experience with data pipeline orchestration tools (e.g., Airflow, Dagster, Prefect) and data modeling tools like dbt is desirable.
What We Do
Xero is a global small business platform with 3.95 million subscribers. The platform includes a core accounting solution, payroll, workforce management, expenses, and projects. Xero also has an extensive ecosystem of connected apps and connections to banks and other financial institutions, helping small businesses access a range of solutions from within Xero’s open platform to run their businesses and manage their finances.
Why Work With Us
Xero is not like most companies. When you join Xero, you become part of something beautiful: a global community of people who are passionate about making an impact on the world. It’s a place where you can truly be yourself and find success in a way that’s meaningful to you.
Hybrid Workspace
Employees engage in a combination of remote and on-site work.
Join us from home or at one of our beautiful workspaces. Xero has offices in Australia, New Zealand, the United Kingdom, the United States, Canada, Singapore, and South Africa.