Senior Data Engineer

Posted Yesterday
Kountríon, Trifylia
In-Office
Senior level
Information Technology • Business Intelligence • Consulting
Job Summary
As a Senior Data Engineer at iHorizons, you will design, develop, and maintain scalable data pipelines and architectures that power our AI and advanced analytics initiatives across government and private clients. Working closely with the AI & Data Science team, you will ensure high-quality, reliable, and secure data flow across batch and streaming systems, enabling data-driven model development, deployment, and business intelligence at scale for iHorizons and its clients.
Job Objectives
• Build and maintain robust, scalable data pipelines that power AI/ML workflows and analytics platforms.
• Ensure high-quality, governed, and observable data across the organization’s data ecosystem.
• Optimize distributed data processing systems for performance, reliability, and cost efficiency.
• Collaborate with AI engineers, analysts, and stakeholders to deliver trusted datasets and features.
• Drive continuous improvement in data architecture, tooling, and engineering best practices.
Job Responsibilities
Data Pipeline Development & Engineering
• Design, develop, and deploy scalable ETL/ELT pipelines for structured and unstructured datasets.
• Implement batch and real-time data processing solutions using modern frameworks.
• Build data ingestion systems from multiple sources such as APIs, databases, logs, IoT devices, and streaming platforms.
• Ensure data pipelines support AI/ML feature engineering and training workflows.
• Automate pipeline execution, monitoring, and orchestration using tools such as Apache Airflow.
• Build and manage data transformation workflows using modern tools such as dbt to support SQL-based data modeling within cloud data warehouses.
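To illustrate the kind of batch ELT work described above, here is a minimal sketch in plain Python, using SQLite as a stand-in warehouse. The source records, table name, and fields are hypothetical, not iHorizons systems; production pipelines would use an orchestrator such as Airflow and a real warehouse.

```python
import sqlite3

# Hypothetical raw records, e.g. extracted from an API or log source.
raw_events = [
    {"user_id": 1, "amount": "19.99", "ts": "2024-05-01T10:00:00Z"},
    {"user_id": 2, "amount": "5.00", "ts": "2024-05-01T11:30:00Z"},
    {"user_id": 1, "amount": "bad", "ts": "2024-05-02T09:15:00Z"},  # malformed row
]

def transform(rows):
    """Clean and type-cast rows, dropping records that fail validation."""
    clean = []
    for row in rows:
        try:
            clean.append((row["user_id"], float(row["amount"]), row["ts"]))
        except (KeyError, ValueError):
            continue  # in a real pipeline this would land in a dead-letter table
    return clean

def load(conn, rows):
    """Load transformed rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_payment (user_id INTEGER, amount REAL, ts TEXT)"
    )
    conn.executemany("INSERT INTO fact_payment VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(conn, transform(raw_events))
total = conn.execute("SELECT ROUND(SUM(amount), 2) FROM fact_payment").fetchone()[0]
print(total)  # the malformed row is excluded from the total
```

The extract/transform/load split shown here is the same separation of concerns that tools like dbt and Airflow formalize at scale.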
Big Data & Distributed Systems
• Develop distributed processing jobs using Apache Spark and Hadoop ecosystem tools.
• Work with streaming platforms such as Apache Kafka for real-time data delivery.
• Apply distributed computing principles including scalability, partitioning, and fault tolerance.
• Optimize workloads for performance and reliability across large-scale datasets.
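The partitioning principle listed above can be sketched with a toy key-based partitioner, similar in spirit (though not in implementation) to Kafka's default partitioner. The partition count and keys below are illustrative only.

```python
import hashlib

NUM_PARTITIONS = 4  # illustrative; a real topic's partition count is an operational choice

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map a record key to a partition deterministically.

    Deterministic hashing keeps all records for one key in order on a single
    partition, while spreading distinct keys across the cluster.
    """
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_partitions

# The same key always lands on the same partition...
assert partition_for("user-42") == partition_for("user-42")

# ...and a batch of distinct keys spreads across partitions.
keys = [f"user-{i}" for i in range(100)]
assignments = {k: partition_for(k) for k in keys}
print(sorted(set(assignments.values())))
```

This per-key determinism is what makes ordered, fault-tolerant parallel consumption possible: each partition can be replayed independently after a failure.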
Cloud Data Platforms & Infrastructure
• Build and manage cloud-native pipelines and warehousing solutions on GCP and Azure.
• Work with services such as BigQuery, Dataflow, Pub/Sub, Azure Synapse, Databricks, and Data Factory.
• Implement containerized deployments using Docker and Kubernetes.
• Support cost optimization and performance tuning of cloud-based data platforms.
Data Modeling & Architecture
• Design and implement enterprise-grade data lakes and data warehouses.
• Apply medallion architecture principles across bronze, silver, and gold data layers.
• Develop dimensional data models using Kimball methodology, including star and snowflake schemas.
• Ensure strong governance, data quality, lineage, and observability practices.
• Build reusable, scalable data models for analytics and AI feature stores.
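As a toy illustration of the star-schema modeling mentioned above, here is a fact table joined to two dimension tables, run through SQLite for self-containment. Table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension tables hold descriptive attributes.
    CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, country TEXT);
    CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, iso_date TEXT, year INTEGER);

    -- The fact table holds measures plus foreign keys to each dimension.
    CREATE TABLE fact_sales (
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        date_key     INTEGER REFERENCES dim_date(date_key),
        amount       REAL
    );

    INSERT INTO dim_customer VALUES (1, 'Acme', 'QA'), (2, 'Globex', 'GR');
    INSERT INTO dim_date VALUES (20240501, '2024-05-01', 2024);
    INSERT INTO fact_sales VALUES (1, 20240501, 100.0), (2, 20240501, 50.0), (1, 20240501, 25.0);
""")

# A typical star-schema query: aggregate the facts, sliced by dimension attributes.
rows = conn.execute("""
    SELECT c.country, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_customer c ON c.customer_key = f.customer_key
    JOIN dim_date d ON d.date_key = f.date_key
    WHERE d.year = 2024
    GROUP BY c.country
    ORDER BY c.country
""").fetchall()
print(rows)  # [('GR', 50.0), ('QA', 125.0)]
```

In Kimball terms, the narrow fact table carries measures at a declared grain, and the wide dimensions carry the attributes analysts slice by; a snowflake schema would further normalize the dimensions.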
Database Management & Optimization
• Work extensively with relational databases such as PostgreSQL, MySQL, and Oracle.
• Write complex SQL queries with advanced proficiency.
• Apply indexing strategies, query optimization, and performance tuning.
• Design efficient schemas aligned with normalization and warehousing standards.
• Support NoSQL database solutions where required, including MongoDB, Cassandra, Redis, and DynamoDB.
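One concrete form the indexing and query-optimization work above takes is comparing query plans before and after adding an index. The sketch below does this with SQLite's EXPLAIN QUERY PLAN; the table and index names are hypothetical, and the plan text varies slightly by SQLite version.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(i % 50, float(i)) for i in range(1000)],
)

def plan(sql: str) -> str:
    """Return SQLite's query plan for a statement as one string (detail is column 3)."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(amount) FROM orders WHERE customer_id = 7"

# Without an index, the filter requires a full table scan.
before = plan(query)

# An index on the filter column lets the planner switch to an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)

print(before)  # e.g. "SCAN orders"
print(after)   # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer_id=?)"
```

The same scan-versus-seek reasoning carries over to PostgreSQL, MySQL, and Oracle via their respective EXPLAIN facilities, with the added considerations of statistics and cost models.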
Documentation and Other Responsibilities
• Develop and maintain clear technical documentation for data pipelines, architectures, and implementations.
• Write high-quality, maintainable code aligned with established engineering standards and best practices.
• Ensure all solutions comply with iHorizons’ data security, privacy, and governance policies.
• Troubleshoot and resolve data pipeline and system issues through structured root-cause analysis.
• Collaborate with cross-functional teams to continuously improve platform reliability and delivery outcomes.
• Provide technical guidance and mentorship to junior engineers, supporting skill development and excellence.
Job Requirements
Educational Qualification
• Bachelor’s degree in Computer Science, Software Engineering, Information Systems, Data Science, or a related field.
• Master’s degree is an advantage, particularly in Data Engineering, AI, or Cloud Computing.
Certifications (Optional but Valuable)
Professional certifications are considered a strong advantage, particularly:
• Google Professional Data Engineer (highly valuable)
• Azure Data Engineer Associate
• Databricks Certified Data Engineer
• Apache Spark Certification
Previous Work Experience
• 6+ years of experience in data platform architecture design, enterprise-scale data ecosystems, cloud cost optimization, and mentoring junior engineers.
Skills & Abilities
• Strong foundational knowledge in data structures and algorithms, database systems, and distributed computing principles, forming the basis for building scalable and high-performance data platforms.
• Advanced proficiency in programming languages such as Python, Java, Scala, Shell scripting, and especially SQL, which is essential for this role.
• Strong hands-on experience working with relational database systems such as PostgreSQL, MySQL, and Oracle, including expertise in writing complex SQL queries, indexing strategies, query optimization, and data modeling techniques such as 3NF, star schema, and snowflake schema.
• Familiarity with NoSQL database technologies, depending on project needs, including platforms such as MongoDB, Cassandra, Redis, and DynamoDB.
• Proven ability to design and implement scalable ETL/ELT pipelines, with solid understanding of data warehousing concepts and experience building both batch and streaming data workflows.
• Experience using industry-standard tools and platforms such as Apache Airflow, Informatica, Snowflake, Google BigQuery, and Azure Synapse to support enterprise data integration and analytics.
• Strong knowledge of big data and distributed systems, including frameworks such as Apache Spark, the Hadoop ecosystem, and streaming platforms like Apache Kafka, with an understanding of distributed computing principles, scalability, and fault tolerance.
• Hands-on expertise with modern cloud data platforms, particularly Google Cloud Platform (BigQuery, Dataflow, Pub/Sub) and Microsoft Azure (Data Factory, Synapse, Databricks), which are critical for today’s data engineering environments.
• Infrastructure-level understanding of containerization and orchestration tools such as Docker and Kubernetes.
• Demonstrated ability to design data lakes and enterprise data architectures, including implementing medallion architecture (bronze, silver, gold layers), applying dimensional modeling approaches such as Kimball methodology, and ensuring strong practices in data governance, quality, and observability.
• Strong working knowledge of supporting engineering practices including version control (Git), schema design, and end-to-end data pipeline architecture.
• Strong analytical and problem-solving skills, with the ability to troubleshoot complex data and pipeline issues. 
• Excellent communication skills to explain technical concepts clearly to both technical and non-technical stakeholders.
• Strong collaboration mindset and interpersonal skills.

Top Skills

Python, Java, Scala, Shell Scripting, SQL, PostgreSQL, MySQL, Oracle, MongoDB, Cassandra, Redis, DynamoDB, Apache Airflow, dbt, Apache Spark, Hadoop, Apache Kafka, GCP, Google BigQuery, Dataflow, Pub/Sub, Azure, Azure Synapse, Data Factory, Databricks, Docker, Kubernetes, Snowflake, Informatica, Git

The Company
Doha, 13085
243 Employees
Year Founded: 1996

What We Do

iHorizons is a leading provider of business solutions and technology services across MENA and emerging markets. Headquartered in Doha, Qatar, we help organizations, businesses, and governments accelerate their business transformation to create a digital future. The ultimate outcomes are radically improved customer experiences and increased operational efficiencies.
