What You'll Do
- Build a scalable data product architecture capable of supporting both internal and external data consumers
- Modernize our data frameworks and integrations with Databricks and BigQuery
- Upgrade Apache Airflow and reduce toil for the developers who use it
- Develop and optimize data transformations using Apache Spark (PySpark/Scala)
- Build procedures and guidelines to help teams operate with data
- Identify bottlenecks in our development lifecycle and find solutions to improve them
- Drive innovation throughout the tech org by evangelizing and educating teams on best practices and new technologies
- Work directly with our data and FinOps teams to drive cross-team efforts
- Implement data governance, access control, and auditing using Databricks Unity Catalog
- Build and integrate automated, reusable data validation suites using data quality frameworks (Great Expectations or similar)
- Implement monitoring and anomaly detection systems for data quality, reliability, and performance
- Develop and manage REST APIs to support secure data access, automation, and integration
- Collaborate with data scientists, analysts, and software engineers to deliver governed, reusable data assets
- Implement monitoring, logging, and alerting for data workflows
- Optimize cost and performance of cloud-based data infrastructure
Who You Are
- Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience)
- 2+ years of experience in data engineering or a related role
- Strong hands-on experience with Databricks, Apache Spark, and BigQuery or Snowflake
- Proven experience with modern table formats such as Delta Lake and Iceberg
- A deep understanding of the data lifecycle and how teams operate with data
- Hands-on experience implementing data governance and metadata management using Databricks Unity Catalog
- Experience managing and extending Apache Airflow (custom operators, plugins, infrastructure)
- Experience with Kubernetes
- Solid experience with AWS cloud services, especially S3 and data-related services
- Experience with data validation and data quality principles and working with SLA systems
- Proficiency in Python and SQL
- Experience with data modeling, data lakes, and lakehouse architectures, and a strong understanding of distributed systems and big data processing
What We Do
Television remains a vibrant cultural influence and an essential source of entertainment and information worldwide. Tremendous growth in content choices, and in viewing platforms that allow us to watch anything, anytime, on any screen, has actually made it harder for viewers to discover and keep up with all the great programming available. It has also become more competitive for content providers to keep viewers' attention, and for marketers to make strong, measurable connections with their target consumers.
Technology that improves the viewing experience, enables content discovery, and addresses audience fragmentation across screens will strengthen television’s business model and relevance to consumers. Data is at the center of any solution to make TV better.
Samba TV's technology is built into Smart TVs and easily maps to smartphones and tablets. By recognizing what's on screen, Samba TV learns what viewers like and, using machine learning algorithms, enables discovery of shows and actors in a whole new way. Likewise, our data and measurement products are transforming the way stakeholders across the media landscape think about their business. Given the dramatic growth in streaming services, connected devices, time-shifting, and multi-screen viewership, our data products solve real problems and create a meaningful competitive advantage for our clients.