Staff Data Engineer
At Flexport, we believe global trade can move the human race forward. That’s why it’s our mission to make it easy and accessible for everyone. We’re shaping the future of an $8.6T industry with solutions powered by innovative technology and exceptional people. Today, companies of all sizes—from emerging brands to Fortune 500s—use Flexport technology to move more than $19B of merchandise across 112 countries a year.
The recent global supply chain crisis has put Flexport center stage as we continue to play a pivotal role in how goods move around the world. At a valuation of $8 billion, we’re experiencing record growth and are proud to have the support of the best investors in the game who believe in our mission, solutions and people. Ready to tackle global challenges that impact business, society, and the environment? Come join us.
Create economic insights to help make global trade easy!
The opportunity:
Flexport generates a vast amount of information on global trade. Combining that data with public sources of information, you will work with the economics team to provide unique insight through the publication of indicators, research, and analysis.
You will work alongside self-starters interested in solving real-world problems and creating novel insights on global trade.
You will:
- Architect, build, publish, and maintain performant and reliable data models and pipelines to power:
- Self-service data consumption throughout the enterprise
- Flexible querying and data visualization
- Advanced analytical and scientific use cases
- Serve as a data steward and subject-matter expert for a dedicated set of business and technology domains.
- Build, maintain, and improve tooling and systems consumed by the Flexport Research team to collect and analyze data.
- Drive data quality by transforming dynamic data and business logic into consistent and trustworthy datasets.
- Develop and evangelize development standards and best practices for data modeling and working with our data and tools.
You Should Have:
- 8+ years of work experience querying, visualizing, and presenting data; prior experience at a modern technology company with a well-developed data organization
- Advanced skills with writing clean, performant, scalable SQL
- Advanced familiarity with modern BI tools (e.g. Looker, Superset, Metabase)
- 6+ years of advanced experience with Snowflake or BigQuery
- 2+ years with dbt or Spark-based data pipelines
- 6+ years of extensive experience in schema design and data modeling strategies (e.g. dimensional modeling, data vault, etc.)
- Significant experience with general-purpose programming (e.g. Python, Java, Go), dealing with a variety of data structures, algorithms, and serialization formats
Other Requirements:
- Ability to solve ambiguous problems independently
- Detail oriented and excited to learn new skills and tools
- Ability to write clear, concise documentation and to communicate with a high degree of precision
- Passion for high data quality and building systems and processes that scale
Our Stack
- Ingestion: Kafka, Snowpipe, Fivetran, Data Coral, Mulesoft
- Warehouse: Snowflake
- Orchestration: Astronomer
- Transformation: dbt
- BI: Looker, Snowsight
- Reverse ETL: Mulesoft
- Cloud: AWS
- Other: Terraform, Sumologic