WHAT YOU WILL DO
- Design, develop, and maintain data pipelines for the end-to-end delivery of measurement reports and data licensing products to clients, using Apache Airflow, Databricks, and PySpark
- Configure and troubleshoot push delivery workflows, including database migrations, DAG configuration, GCS/S3 bucket management, and client-facing file delivery verification
- Build and operate agentic automation workflows to reduce manual operational toil, improve data validation, and accelerate delivery turnaround times
- Investigate and resolve production data issues by navigating complex systems spanning Airflow DAGs, PostgreSQL databases, cloud storage (AWS/GCP), and client-specific delivery configurations
- Manage and execute custom product delivery requests, including measurement requests, data licensing operations, matching file generation, cross-reference file creation, and client delivery setup
- Develop data validation and quality assurance tooling to ensure accuracy and consistency of custom datasets before they reach clients
- Write and maintain database migrations to update delivery configurations, report integrations, and client setup across staging and production environments
- Collaborate cross-functionally with product, measurement sciences, client services, and engineering teams to translate delivery requirements into reliable, automated solutions
- Document operational processes, runbooks, and delivery workflows to enable knowledge sharing and team scalability
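The validation and QA tooling described above might, at its simplest, look like the sketch below: a pre-delivery check that inspects a CSV file for missing columns and empty required values before it ships to a client. The `validate_delivery` function and its rules are purely illustrative, not Samba TV's actual tooling.

```python
import csv
import io

def validate_delivery(csv_text, required_columns):
    """Run basic QA checks on a delivery file before it reaches a client.

    Returns a list of human-readable issues; an empty list means the
    file passed. Both the function and its rules are hypothetical.
    """
    issues = []
    reader = csv.DictReader(io.StringIO(csv_text))
    header = reader.fieldnames or []

    # Check that every required column is present in the header.
    missing = [c for c in required_columns if c not in header]
    if missing:
        issues.append(f"missing columns: {', '.join(missing)}")

    rows = list(reader)
    if not rows:
        issues.append("file contains no data rows")

    # Flag rows with empty values in required columns.
    for i, row in enumerate(rows, start=2):  # line 1 is the header
        for col in required_columns:
            if col in header and not (row.get(col) or "").strip():
                issues.append(f"line {i}: empty value in '{col}'")

    return issues
```

In a pipeline like the ones described here, a check of this kind would typically run as an upstream Airflow task, failing the DAG run before bad data reaches the push-delivery step.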
WHO YOU ARE
- Bachelor’s degree in Computer Science, Engineering, Data Science, or a related technical field, or equivalent practical experience

- 3–5+ years of professional experience in data engineering, software engineering, or a related operational engineering role
- Strong proficiency in Python, with hands-on experience building and debugging data pipelines and automation scripts
- Experience with Apache Airflow for workflow orchestration, including DAG development, operator configuration, and troubleshooting failed runs
- Proficiency in SQL for data extraction, transformation, and database administration, including complex queries with joins, window functions, and JSONB manipulation
- Experience with cloud infrastructure (AWS and/or GCP), including S3/GCS bucket management, IAM role assumption, and ephemeral credential workflows
- Familiarity with Databricks and PySpark for large-scale data processing and transformation
- Experience with database migration workflows and version-controlled configuration management (Git)
- Strong debugging and problem-solving skills with the ability to trace issues across distributed systems (databases, orchestration tools, cloud storage, delivery endpoints)
- Ability to work independently, manage a queue of operational tickets, and prioritize based on SLA urgency
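As a rough illustration of the SQL proficiency described above (subqueries and window functions), the sketch below keeps only the most recent delivery per client and report using `ROW_NUMBER()`. It runs against an in-memory SQLite database; the `deliveries` table and its schema are invented for the example and do not reflect an actual delivery database.

```python
import sqlite3

# Hypothetical schema: one row per completed client delivery.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE deliveries (
        client_id TEXT,
        report    TEXT,
        delivered TEXT   -- ISO-8601 date
    );
    INSERT INTO deliveries VALUES
        ('acme',   'reach_report',  '2024-01-05'),
        ('acme',   'reach_report',  '2024-02-01'),
        ('globex', 'matching_file', '2024-01-20');
""")

# Window function: number each client's deliveries newest-first,
# then keep only the latest delivery per (client, report) pair.
latest = conn.execute("""
    SELECT client_id, report, delivered
    FROM (
        SELECT client_id, report, delivered,
               ROW_NUMBER() OVER (
                   PARTITION BY client_id, report
                   ORDER BY delivered DESC
               ) AS rn
        FROM deliveries
    )
    WHERE rn = 1
    ORDER BY client_id
""").fetchall()
```

The same pattern (partition, order, filter on rank) applies directly in PostgreSQL, where JSONB columns would typically hold per-client delivery configuration alongside rows like these.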
What We Do
Television remains a vibrant cultural influence and an essential source of entertainment and information worldwide. Tremendous growth in content choices, and in viewing platforms that let us watch anything, anytime, on any screen, has actually made it harder for viewers to discover and keep up with all the great programming available. It's also more competitive for content providers to hold viewers' attention, and for marketers to make strong, measurable connections with their target consumers. Technology that improves the viewing experience, enables content discovery, and addresses audience fragmentation across screens will strengthen television's business model and relevance to consumers.

Data is at the center of any solution to make TV better. Samba TV's technology is built into Smart TVs and easily maps to smartphones and tablets. By recognizing what's on screen, Samba TV learns what viewers like and, using machine learning algorithms, enables discovery of shows and actors in a whole new way. Likewise, our data and measurement products are transforming the way stakeholders across the media landscape think about their business. Given the dramatic growth in streaming services, connected devices, time-shifting, and multi-screen viewership, our data products solve real problems and create a meaningful competitive advantage for our clients.