Data Engineer

Reposted 3 Days Ago
Warsaw, Warszawa, Mazowieckie, POL
In-Office
250K-350K Annually
Mid level
AdTech • Machine Learning
The Role
As a Data Engineer, you will design and maintain data pipelines, automate workflows, and resolve issues in complex systems. You will collaborate with teams to enhance operational efficiency and ensure data accuracy.
Samba is an AI-powered media intelligence company on a mission to give marketers the complete picture of their audiences. Our AI indexes media consumption across millions of smart TVs and 2.5 billion web pages, combining that data with third-party signals through the Samba Knowledge Graph, a map of the real interests, behaviors, and purchase intent of 1.5 billion user profiles globally. Brands, agencies, publishers, and platforms use Samba to make smarter decisions across every stage of the marketing funnel.

We are seeking a Data Engineer to join our Technology Operations team. This team drives the operational delivery of Samba measurement and data licensing products—the core revenue engines of the business. As a Data Engineer, you will design, build, and maintain the data pipelines, automation workflows, and infrastructure that ensure reliable, on-time delivery of customized data products to clients. In addition, you will build and maintain agentic AI-driven workflows to automate repetitive operational tasks, enhance data validation, and streamline end-to-end delivery processes. You will work hands-on with production systems, debug complex delivery issues across distributed environments, and collaborate cross-functionally to continuously improve operational efficiency through both traditional engineering and intelligent automation.

WHAT YOU WILL DO

  • Design, develop, and maintain data pipelines for the end-to-end delivery of measurement reports and data licensing products to clients, using Apache Airflow, Databricks, and PySpark
  • Configure and troubleshoot push delivery workflows, including database migrations, DAG configuration, GCS/S3 bucket management, and client-facing file delivery verification
  • Build and operate agentic automation workflows to reduce manual operational toil, improve data validation, and accelerate delivery turnaround times
  • Investigate and resolve production data issues by navigating complex systems spanning Airflow DAGs, PostgreSQL databases, cloud storage (AWS/GCP), and client-specific delivery configurations
  • Manage and execute custom product delivery requests, including measurement requests, data licensing operations, matching file generation, cross-reference file creation, and client delivery setup
  • Develop data validation and quality assurance tooling to ensure accuracy and consistency of custom datasets before they reach clients
  • Write and maintain database migrations to update delivery configurations, report integrations, and client setup across staging and production environments
  • Collaborate cross-functionally with product, measurement sciences, client services, and engineering teams to translate delivery requirements into reliable, automated solutions
  • Document operational processes, runbooks, and delivery workflows to enable knowledge sharing and team scalability
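The pipeline-orchestration and validation responsibilities above can be sketched in plain Python. The task names, dependencies, and required fields below are illustrative assumptions, not Samba's actual pipeline; the dependency graph mimics how an Airflow DAG orders extract, validate, and deliver steps, using only the standard library:

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical delivery pipeline: each task maps to the set of tasks it
# depends on, the same ordering constraint an Airflow DAG expresses.
dag = {
    "extract_measurements": set(),
    "build_crossref_file": {"extract_measurements"},
    "validate_dataset": {"extract_measurements"},
    "upload_to_bucket": {"build_crossref_file", "validate_dataset"},
    "notify_client": {"upload_to_bucket"},
}

def run_order(tasks: dict[str, set[str]]) -> list[str]:
    """Return one valid execution order for the task graph."""
    return list(TopologicalSorter(tasks).static_order())

def validate_rows(rows: list[dict], required: set[str]) -> list[str]:
    """Simple QA gate: flag rows missing any required field before delivery."""
    errors = []
    for i, row in enumerate(rows):
        present = {k for k, v in row.items() if v not in (None, "")}
        missing = required - present
        if missing:
            errors.append(f"row {i}: missing {sorted(missing)}")
    return errors

order = run_order(dag)
print(order)
print(validate_rows(
    [{"client_id": "a1", "impressions": 10}, {"client_id": "", "impressions": 5}],
    required={"client_id", "impressions"},
))
```

In a real deployment the ordering would live in Airflow operators and the validation gate would run as its own task before any file reaches a client bucket.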

WHO YOU ARE

  • Bachelor’s degree in Computer Science, Engineering, Data Science, or a related technical field, or equivalent practical experience
  • 3–5+ years of professional experience in data engineering, software engineering, or a related operational engineering role
  • Strong proficiency in Python, with hands-on experience building and debugging data pipelines and automation scripts
  • Experience with Apache Airflow for workflow orchestration, including DAG development, operator configuration, and troubleshooting failed runs
  • Proficiency in SQL for data extraction, transformation, and database administration, including complex queries with joins, window functions, and JSONB manipulation
  • Experience with cloud infrastructure (AWS and/or GCP), including S3/GCS bucket management, IAM role assumption, and ephemeral credential workflows
  • Familiarity with Databricks and PySpark for large-scale data processing and transformation
  • Experience with database migration workflows and version-controlled configuration management (Git)
  • Strong debugging and problem-solving skills with the ability to trace issues across distributed systems (databases, orchestration tools, cloud storage, delivery endpoints)
  • Ability to work independently, manage a queue of operational tickets, and prioritize based on SLA urgency
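As a rough illustration of the SQL depth described above (window functions over delivery data), here is a minimal sketch using SQLite's window-function support via the standard library. The table and column names are invented for the example, and production work would target PostgreSQL, where JSONB operators would also come into play:

```python
import sqlite3

# Illustrative schema: per-client report deliveries with row counts.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE deliveries (client TEXT, delivered_on TEXT, row_count INTEGER);
    INSERT INTO deliveries VALUES
        ('acme', '2024-01-01', 100),
        ('acme', '2024-01-02', 120),
        ('globex', '2024-01-01', 80);
""")

# Window function: rank each client's deliveries by recency, then keep
# only the most recent one per client for a pre-delivery QA check.
rows = conn.execute("""
    SELECT client, delivered_on, row_count
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY client ORDER BY delivered_on DESC
               ) AS rn
        FROM deliveries
    )
    WHERE rn = 1
    ORDER BY client
""").fetchall()

print(rows)  # latest delivery per client
```

The `PARTITION BY ... ORDER BY ... DESC` pattern is the same one used in PostgreSQL to dedupe or audit per-client delivery history.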

Samba is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We strive to empower connection with one another, reflect the communities we serve, and tackle meaningful projects that make a real impact.

Samba may collect personal information directly from you as a job applicant. Samba may also receive personal information from third parties, for example in connection with a background, employment, or reference check, in accordance with applicable law. For further details, please see Samba's Applicant Privacy Policy. For residents of the EU, Samba Inc. is the data controller.

Top Skills

Apache Airflow
AWS
Databricks
GCP
PySpark
Python
SQL

The Company
HQ: San Francisco, CA
318 Employees
Year Founded: 2008

What We Do

Television remains a vibrant cultural influence and an essential source of entertainment and information worldwide. Tremendous growth in content choices, and viewing platforms that allow us to watch anything, anytime, on any screen, has actually made it harder for viewers to discover and keep up with all the great programming available. It is also more competitive for content providers to keep your attention, and for marketers to make strong, measurable connections with their target consumers. Technology that improves the viewing experience, enables content discovery, and addresses audience fragmentation across screens will strengthen television's business model and relevance to consumers. Data is at the center of any solution to make TV better.

Samba TV's technology is built into Smart TVs and easily maps to smartphones and tablets. By recognizing what's on screen, Samba TV learns what viewers like and, using machine learning algorithms, enables discovery of shows and actors in a whole new way.

Likewise, our data and measurement products are transforming the way stakeholders across the media landscape think about their business. Given the dramatic growth in streaming services, connected devices, time-shifting, and multi-screen viewership, our data products solve real problems and create a meaningful competitive advantage for our clients.
