Data Integration Engineer
Here at Syndigo, we're enabling our clients to deliver better eCommerce experiences. We've mastered the right data, right now. From creation to sale, that's the value our partners get from us - a holistic, truly differentiated end-to-end solution that closes the loop while increasing sales. Basically, we're the accurate data behind the confidence people feel when they shop online!
We cannot do all of this without our amazing people! Our employees make the magic happen here at Syndigo, and we're growing rapidly, looking to welcome a Data Integration Engineer to the team! We're ready for you to challenge the status quo!
The mission of a Syndigo Data Integration Engineer is to implement data ingestion, validation, and transformation pipelines. In collaboration with the Product, Development, and Enterprise Data teams, the Data Integration Engineer will design and maintain batch and streaming integrations across a variety of data domains and platforms. The ideal candidate is experienced in big data and cloud architecture, and is excited to advance innovative analytics solutions!
HOW WE'LL BE WINNING TOGETHER DAY-TO-DAY!
- Work with stakeholders to define and develop data ingestion, validation, and transformation pipelines (see the sketch after this list)
- Participate in solution and architecture design & planning
- Troubleshoot data pipelines and resolve issues in alignment with the SDLC
- Diagnose data issues by recognizing common data integration and transformation patterns
- Estimate, track, and communicate status of assigned items to a diverse group of stakeholders
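To give a concrete flavor of that day-to-day work, here is a minimal sketch of an ingestion, validation, and transformation pipeline in PySpark writing to Delta Lake. It is illustrative only: the storage paths, table names, and columns (product_id, updated_at) are hypothetical placeholders, not Syndigo's actual data model.

```python
# Minimal, hypothetical sketch of an ingest -> validate -> transform pipeline.
# All paths, table names, and columns are placeholders for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("product-ingest").getOrCreate()

# Ingest: read raw semi-structured data landed in cloud storage (ADLS Gen2 path).
raw = spark.read.json("abfss://landing@example.dfs.core.windows.net/products/")

# Validate: keep rows with a usable key; route the rest to a quarantine table.
valid = raw.filter(F.col("product_id").isNotNull())
invalid = raw.filter(F.col("product_id").isNull())
invalid.write.format("delta").mode("append").saveAsTable("quarantine.products")

# Transform: normalize types and deduplicate before publishing to the curated layer.
clean = (
    valid.withColumn("updated_at", F.to_timestamp("updated_at"))
         .dropDuplicates(["product_id"])
)
clean.write.format("delta").mode("overwrite").saveAsTable("curated.products")
```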
WE SHOULD TALK IF THIS SOUNDS LIKE YOU!
- 3+ years of experience developing large-scale data pipelines in a cloud environment
- Demonstrated proficiency in Scala (object-oriented programming) or Python, plus SQL or Spark SQL
- Experience with Databricks, including Delta Lake
- Experience with Azure and cloud environments, including Azure Data Lake Storage (Gen2), Azure Blob Storage, Azure Tables, Azure SQL Database, Azure Data Factory
- Experience with ETL/ELT patterns, preferably using Azure Data Factory and Databricks jobs (a streaming sketch follows this list)
- Fundamental knowledge of distributed data processing and storage
- Fundamental knowledge of working with structured, unstructured, and semi-structured data
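Since the role covers both batch and streaming integrations, here is a hedged sketch of a streaming ingest using Databricks Auto Loader (the cloudFiles source, which assumes a Databricks runtime). Again, every path and table name below is a hypothetical placeholder.

```python
# Hypothetical streaming ingest with Databricks Auto Loader; paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incrementally discover new JSON files as they land in ADLS Gen2.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation",
            "abfss://meta@example.dfs.core.windows.net/schemas/products")
    .load("abfss://landing@example.dfs.core.windows.net/products/")
)

# Write to a Delta table; availableNow processes the backlog and then stops,
# which makes the same code usable for scheduled batch-style runs.
(
    stream.writeStream.format("delta")
    .option("checkpointLocation",
            "abfss://meta@example.dfs.core.windows.net/checkpoints/products")
    .trigger(availableNow=True)
    .toTable("curated.products_stream")
)
```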
LOCATION:
- USA Remote
- Office Locations in Chicago, Nashville, Austin, Brookfield + more!
- Collaborate with multi-location, multicultural, multi-skilled, and multi-disciplinary teams