Passionate about precision medicine and advancing the healthcare industry?
Recent advancements in underlying technology have finally made it possible for AI to impact clinical care in a meaningful way. Tempus' proprietary platform connects an entire ecosystem of real-world evidence to deliver real-time, actionable insights to physicians, providing critical information about the right treatments for the right patients, at the right time.
Passionate about building great software products?
At Tempus, software products are owned and developed by small, autonomous teams composed of developers, designers, scientists, and product managers. You and your team set the goals, build the software, deploy the code, and contribute to a growing software platform that will make a lasting impact in the field of cancer research and treatment.
Tempus builds software as nimble as our teams. Our modern tech stack - containerized applications running on GCP managed services - allows our teams to iterate rapidly and lead our industry in innovation. Our emphasis on automation, coupled with a decentralized microservice architecture, lets us deliver advanced solutions with confidence at scale.
Why we’re looking for you:
- You know what it takes to build and run resilient data pipelines in production and have experience implementing ETL/ELT to load a multi-terabyte enterprise data warehouse.
- You have implemented analytics applications using database technologies such as relational or multidimensional (OLAP).
- You value the importance of defining and enforcing data contracts with experience in writing specifications.
- You write code to transform data between data models and formats, preferably in Python.
- You've worked in agile environments and are comfortable iterating quickly.
Bonus points for:
- Experience with infrastructure-as-code and related automation tools such as Terraform (our favorite), CloudFormation, Kubernetes, Docker, Ansible, Salt, Packer, Puppet, Chef, or similar.
- Expert knowledge of relational database modeling concepts, SQL skills, proficiency in query performance tuning, and desire to share knowledge with others.
- Experience building cloud-native applications and with the supporting technologies, patterns, and practices, including Google Dataflow, Google BigQuery, Google Cloud Composer, Google Pub/Sub, Google Cloud Storage, Docker, and CI/CD.
Responsibilities for the Position:
- Work as an embedded member of an engineering team
- Contribute hands-on to the implementation of our data lake and data pipelines
- Work with data producers and data scientists to understand our data processing needs
- Use your experience, technical knowledge, and creativity to simplify development and infrastructure provisioning workflows
- Configure and deploy cloud infrastructure using Terraform and CI tools like Concourse
- Proactively and continuously learn about new and relevant technologies
- Use your knowledge to influence other developers and advocate for best practices
- Implement dashboards, monitoring, and alerting for team services
- Support your users either in person or via Slack
- Document usage and query patterns for our data platform
#LI-BL1