Senior Data Engineer (Remote) at Inspire
Inspire is a clean energy technology company on a mission to transform the way consumers access clean energy and to accelerate the world’s transition to a net-zero carbon future.
We provide our customers with access to renewable energy from wind, solar, and hydro-powered sources without service interruptions or costly installations, at a flat, predictable monthly rate. Each year a customer spends with Inspire Clean Energy has a greater positive impact on climate change than 10 years of strict recycling.
Our rapidly growing team of mission-driven climate enthusiasts is passionate, innovative, and committed to a better future for the planet.
As a Sr. Data Engineer on our Operating Model Team, your work will help drive down costs, manage risk, accelerate growth, and improve our member experience -- broadening access to clean energy as we grow into new markets, and enabling new products that accelerate a net-zero carbon future.
Inspire’s Operating Model is the heart and the brain of our clean energy platform -- tracking and forecasting costs and revenues as the source of truth for key stakeholders across the business to make decisions about growth, optimization and strategy.
You will work closely with executive stakeholders alongside machine learning engineers and data scientists to apply sophisticated modeling techniques on a modern data stack to predict and build the future of energy usage and customer engagement.
THE SENIOR DATA ENGINEER HAS 5 MAIN RESPONSIBILITIES
- Building SQL models that transform raw data into actionable insights
- Managing graph-oriented workflows in Python to automate repetitive tasks
- Communicating with technical and non-technical audiences, and translating between the domains of business problems and technical implementations
- Team-oriented development: building modular & reusable tools, writing maintainable code, owning technical and business documentation
- Working with our Data Science and ML Engineering Team to provide high-fidelity datasets for our machine learning algorithms, and to assess and understand the performance of our ML models
SOME YEAR 1 DELIVERABLES
- Extend the Operating Model to accommodate new product offerings and support our expansion to new markets
- Identify bottlenecks and improve scalability to support our growing customer base
- Improve data governance and quality control to pre-empt data quality issues in critical systems
- Work with our Applied Modelling team to ensure quality and reliability as we introduce increasingly sophisticated machine learning algorithms into our Operating Model
SUCCESS CRITERIA
- Cultivated familiarity with Inspire’s frameworks & operating model
- Delivery of high-quality pull requests, evidencing strong code standards & testing practices
- High quality documentation for both technical and non-technical audiences
- Comfort with self-directed project management: requiring minimal oversight to assess a problem, formulate a solution, deliver code, and document changes
- Positive interactions with department stakeholders: guidance and input that create business value for non-technical personnel; feedback on priorities, status, and estimates that creates transparency and builds trust
ATTRIBUTES
- Analytical: Able to develop a keen understanding of the problem before deciding on a solution
- Curious: Desire to understand the underpinnings of complex business processes in order to design the correct technical solution
- Determined: Able to focus on the problem at hand and deliver a complete solution quickly
- Open-minded: Incorporates new information quickly in a fast-changing environment; willing to take input from others
- Growth Mindset: Looking for challenges and opportunities to develop new skills and acquire knowledge
REQUIREMENTS
- 4+ years of experience working with Python in data-intensive applications
- 1-2 years of experience working with Apache Spark
- 4+ years of experience with the software development life cycle (Git, pull requests, code reviews, testing, etc.)
- Strong SQL experience working with large, complex datasets
- Proficiency working with the command line
NICE TO HAVE
- Experience working in the energy industry
- Experience working with financial data and complex financial models
- Experience with Machine Learning tools and techniques, understanding how models make decisions from data
- Experience with key technologies: Snowflake, dbt, Airflow
- Experience at a similar scale of data processing (Multi-TB/billions of rows)
- Experience with containerized development using Docker, Kubernetes
- Experience with technical communication to audiences of diverse backgrounds
- Experience with problem solving and exploratory data analysis