Responsibilities
- Data Pipeline Operations
  - Design, build, and maintain robust and scalable data pipelines to ingest, transform, and deliver structured and unstructured data from multiple sources.
  - Ensure high-quality data by implementing monitoring, validation, and error-handling processes.
- Platform Engineering & Optimization
  - Create and update data models to represent the structure of the data.
  - Design, implement, and maintain database systems; optimize database performance, ensure data integrity, and troubleshoot and resolve database issues.
  - Build and manage data warehouses for the storage and analysis of large datasets.
  - Collaborate on data modeling, schema design, and performance optimization for large-scale datasets.
- Data Quality and Governance: Implement and enforce data quality standards. Contribute to data governance processes and policies.
- Scripting and Programming: Develop and automate data processes using programming languages (e.g., Python, Java, SQL). Implement data validation scripts and error-handling mechanisms (see the sketch after this list).
- Version Control: Use version control systems (e.g., Git) to manage codebase changes for data pipelines.
- Monitoring and Optimization: Implement monitoring solutions to track the performance and health of data systems. Optimize data processes for efficiency and scalability.
- Cloud Platforms: Work with cloud platforms (e.g., AWS, Azure, GCP) to deploy and manage data infrastructure. Utilize cloud-based services for data storage, processing, and analytics.
- Security: Implement and adhere to data security best practices. Ensure compliance with data protection regulations.
- Troubleshooting and Support: Provide support for data-related issues and participate in root cause analysis.
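
To give a concrete sense of the validation and error-handling work described above, here is a minimal Python sketch; the column names, rules, and quarantine path are hypothetical illustrations, not a description of Zeta's actual pipelines.

```python
import logging
from pathlib import Path

import pandas as pd

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("pipeline.validation")

# Hypothetical contract for an incoming batch of order records.
REQUIRED_COLUMNS = {"order_id", "customer_id", "amount"}


def validate_orders(df: pd.DataFrame) -> pd.DataFrame:
    """Return only the rows that pass validation; quarantine the rest."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        # Structural problems are unrecoverable downstream, so fail fast.
        raise ValueError(f"input is missing required columns: {sorted(missing)}")

    # Row-level checks: non-null keys and non-negative amounts.
    valid = df["order_id"].notna() & df["customer_id"].notna() & (df["amount"] >= 0)
    rejected = df[~valid]
    if not rejected.empty:
        # Quarantine bad rows for inspection rather than silently dropping them.
        Path("quarantine").mkdir(exist_ok=True)
        rejected.to_csv("quarantine/orders_rejected.csv", index=False)
        logger.warning("quarantined %d invalid rows", len(rejected))

    return df[valid]


if __name__ == "__main__":
    batch = pd.DataFrame(
        {
            "order_id": [1, 2, None],
            "customer_id": ["a", "b", "c"],
            "amount": [10.0, -5.0, 3.0],
        }
    )
    print(validate_orders(batch))
```

In practice, checks like these run as a pipeline step, with rejected rows routed to a quarantine location and surfaced through monitoring and alerting.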
Skills
- Expertise in data modeling, database design, and data warehousing; proficiency in SQL and programming languages such as Python, Java, or Scala.
- Cloud-native architecture expertise (AWS, GCP, or Azure), including containerization (Docker, Kubernetes) and infrastructure-as-code (Terraform, CloudFormation).
Experience and Qualifications
- Bachelor’s/Master’s degree in engineering (computer science, information systems) with 3-5 years of experience in data engineering, BI engineering, or data warehouse development.
- Excellent command of SQL and one or more programming languages, preferably Python or Java.
- Knowledge of Flink, Airflow, Apache Spark, and Athena/Presto; DBT is good to have (a minimal Airflow sketch follows this list).
- Experience working with Kubernetes.
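
To indicate the expected level of familiarity with the orchestration tools above, here is a minimal Apache Airflow DAG sketch; the DAG name, tasks, and schedule are hypothetical, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract() -> None:
    # Pull raw records from a source system (hypothetical step).
    print("extracting")


def transform() -> None:
    # Clean and reshape the extracted records (hypothetical step).
    print("transforming")


def load() -> None:
    # Write the transformed records to the warehouse (hypothetical step).
    print("loading")


with DAG(
    dag_id="orders_daily",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> transform -> load.
    extract_task >> transform_task >> load_task
```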
What We Do
Founded in 2015, Zeta is a provider of a next-gen credit card processing platform. Zeta’s cloud-native, fully API-enabled stack offers a comprehensive range of capabilities, including processing, issuing, lending, core banking, fraud detection, and loyalty programs. With a strong focus on technology, Zeta has over 1,700 employees and contractors, more than 70% of whom are dedicated to technology roles. Operating across the US, UK, Middle East, and Asia, Zeta has served a global customer base of 35+ clients, who have issued over 15 million cards on Zeta’s platform to date. Backed by prominent investors such as SoftBank Vision Fund 2 and Mastercard, Zeta has raised $280 million at a valuation of $1.5 billion.