Senior Data Engineer (Hyper Ingestion)
Austin, TX (Onsite 4 days per week)
Note: This is a full-time, in-house position. We do not offer C2C or C2H employment and are not able to sponsor visas for this position.
Acrisure Innovation is where human expertise meets advanced technology.
Acrisure Innovation is a fast-paced, AI-driven team building innovative software to disrupt the $6T+ insurance industry. Our mission is to help the world share its risk more intelligently to power a more vibrant economy. To do this, we are transforming insurance distribution and underwriting into a science.
At the core of our operating model is our technology: we’re building a digital marketplace for risk and applying it at the center of Acrisure, a privately held company recognized as one of the world's top 10 insurance brokerages and the fastest growing insurance brokerage globally. By leveraging technology to push the boundaries of understanding and transferring risk, we are systematically converting data into predictions, insights, and choices, and we believe we can remove the constraints associated with scale, scope, and learning that have existed in the insurance industry for centuries.
Our culture is strong. We are a collaborative company of entrepreneurial, innovative, and talented people who believe in our future. We outthink and outwork the competition. We look outside our walls and are energized by our fast-paced trajectory.
Our vision for the future is clear. We have limitless potential to achieve unprecedented success in the insurance industry. To seize that opportunity, we need a best-in-class team.
Learn more about Acrisure Innovation: https://builtin.com/company/acrisure-innovation
The Role
The Innovation team’s mission is to unify data across the enterprise to optimize business decisions made at the strategic, tactical, and operational levels of the organization. We accomplish this by building a data lakehouse that powers analytics platforms, reporting platforms, and business processes: it delivers quality data, in a timely fashion, from every channel of the company, and presents that data in a way that maximizes its value for both internal and external customers.
The Senior Data Engineer is responsible for building technology and pipelines to rapidly onboard raw datasets into our multicloud data lakehouse platform from a wide variety of data sources and associated technologies. The role involves hands-on contributions in a team environment. Ensuring high quality and best practices are maintained throughout the development cycle is key to this position.
Our Data Lakehouse is built upon a Google and Microsoft hybrid cloud tech stack. Our data storage layer includes BigQuery, Databricks, Azure, and Postgres. Across our broader tech stack, we code primarily in Python, Scala, Kotlin, Java, and JavaScript and make use of many frameworks including Dataflow, Apache Airflow, Apache Spark, dbt, Delta Lake, Apache Iceberg, Cloud AI Platform, Spring, and React.
Here are some of the ways in which you’ll achieve impact
- Leverage established patterns, and create new ones as required, to rapidly onboard new data sets into the data lakehouse platform
- Establish repeatable, automated data management processes for managing sensitive information
- Own deliverables while also collaborating with and supporting others in a team setting, including conducting and participating in design and code reviews
- Collaborate with product on priorities and approach, and with other engineering and AI teams that consume the raw data
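To make the sensitive-information responsibility above concrete, here is a minimal sketch of a repeatable masking step a pipeline might apply before raw records land in the lakehouse. It uses only the Python standard library; the field names, salt handling, and rules are hypothetical illustrations, not Acrisure's actual process.

```python
import hashlib
import hmac

# Hypothetical set of sensitive fields; a real pipeline would drive this
# from a data catalog or tagging policy rather than a hard-coded set.
SENSITIVE_FIELDS = {"ssn", "email"}

def mask_record(record: dict, salt: bytes) -> dict:
    """Deterministically pseudonymize sensitive fields with a keyed hash.

    The same input value always maps to the same token, so joins across
    datasets still work, but the raw value never reaches the lakehouse.
    """
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS and value is not None:
            digest = hmac.new(salt, str(value).encode(), hashlib.sha256)
            masked[key] = digest.hexdigest()
        else:
            masked[key] = value
    return masked

row = {"id": 1, "email": "a@example.com", "state": "TX"}
out = mask_record(row, salt=b"example-salt-rotate-in-practice")
```

Because the hash is keyed and deterministic, rotating the salt re-keys every token at once, which is what makes the process repeatable and automatable rather than ad hoc.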
You may be fit for this role if you have
- 8-10 years of engineering experience with data warehouses, data lakes, or software engineering with a data emphasis
- Strong SQL knowledge
- Proficient in Scala, Python, or Java
- Well versed in Data Lake & Delta Lake Concepts
- Understanding of data security best practices
- Well versed in Databricks usage in dealing with Delta tables (external / managed)
- Familiarity with data orchestration technologies like Airflow or Dagster, and with ingestion tools like Fivetran
- Experience with cloud environments (GCP, Azure) a definite plus
- Familiarity with DevOps processes
- Experience creating and sharing standards, best practices, documentation, and reference examples for data warehouses and integration/ETL systems
- A disciplined approach to testing software and data
- Experience working with APIs
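On the testing point above, a disciplined approach to data usually means codifying expectations as executable checks rather than eyeballing batches. A standard-library-only sketch follows; the rules and field names are hypothetical, and a production pipeline would more likely express them in a framework such as dbt tests or Great Expectations.

```python
def check_batch(rows: list[dict]) -> list[str]:
    """Run simple data-quality assertions over a batch of records.

    Returns human-readable failure messages; an empty list means the
    batch passed. The three rules below are illustrative only.
    """
    failures = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Completeness: the primary key must be present.
        if row.get("id") is None:
            failures.append(f"row {i}: missing id")
            continue
        # Uniqueness: no duplicate primary keys within the batch.
        if row["id"] in seen_ids:
            failures.append(f"row {i}: duplicate id {row['id']}")
        seen_ids.add(row["id"])
        # Validity: premium must be a non-negative number.
        if not isinstance(row.get("premium"), (int, float)) or row["premium"] < 0:
            failures.append(f"row {i}: bad premium {row.get('premium')!r}")
    return failures

good = [{"id": 1, "premium": 100.0}, {"id": 2, "premium": 0}]
bad = [{"id": 1, "premium": 100.0}, {"id": 1, "premium": -5}]
```

Wiring checks like these into the pipeline (failing fast before a bad batch lands) is what turns "testing data" from a one-off activity into a repeatable engineering practice.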
Academics: Undergraduate degree in computer science or a related discipline preferred, or equivalent experience, along with a demonstrated desire for continuing education and improvement
Location: Austin, TX (hybrid)
We are interested in every qualified candidate who is eligible to work in the United States. We are not able to sponsor visas for this position.
#LI-Hybrid
What We Do
Acrisure Innovation is part of a fast-paced, AI-driven fintech company and builds cutting-edge solutions to disrupt the insurance and financial services industry. We are a team of high-caliber engineers, technologists, marketers, and successful startup founders, with a diverse background across industries and technologies.
Our mission is to help the world share its risk more intelligently to power a more vibrant economy. We're pushing the boundaries of understanding risk and systematically converting data into predictions, insights, and choices. We're removing the constraints associated with scale, scope, and learning that have existed in the insurance industry for centuries.
We put our advanced technology and hundreds of billions of data points to work to elevate every aspect of our business, ensuring everyone–from our clients to our team members–has what they need to succeed.
Why Work With Us
We are a small team of humble, extremely high-caliber employees with diverse backgrounds across industries and technologies. Our values embody how we operate: Default to Open, People 1st & Always, Collaboration Drives Success, Never Stop Growing, Data-Driven.