The Role
About the Role
We are seeking a Microsoft Fabric Data Engineer with 7+ years of experience for a lead role. The ideal candidate will be responsible for designing, developing, and deploying data pipelines, ensuring efficient data movement and integration within Microsoft Fabric.
Responsibilities
Data Pipeline Development: Design, develop, and deploy data pipelines within Microsoft Fabric, leveraging OneLake, Data Factory, and Apache Spark to ensure efficient, scalable, and secure data movement across systems.
ETL Architecture: Architect and implement ETL workflows optimized for Fabric’s unified data platform, streamlining ingestion, transformation, and storage.
Data Integration: Build and manage integration solutions that unify structured and unstructured sources into Fabric’s OneLake ecosystem. Utilize SQL, Python, Scala, and R for advanced data manipulation.
Fabric OneLake & Synapse: Leverage OneLake as the single data lake for enterprise-scale storage and analytics, integrating with Synapse Data Warehousing for big data processing and reporting.
Cross-functional Collaboration: Partner with Data Scientists, Analysts, and BI Engineers to ensure Fabric’s data infrastructure supports Power BI, AI workloads, and advanced analytics.
Performance Optimization: Monitor, troubleshoot, and optimize Fabric pipelines for high availability, fast query performance, and minimal downtime.
Data Governance & Security: Implement governance and compliance frameworks within Fabric, ensuring data lineage, privacy, and security across the unified platform.
Leadership & Mentorship: Lead and mentor a team of engineers; oversee Fabric workspace design, code reviews, and the adoption of new Fabric features.
Automation & Monitoring: Automate workflows and orchestration using Fabric Data Factory, Azure DevOps, and Airflow, ensuring smooth operations.
Documentation & Standards: Document Fabric pipeline architecture, data models, and ETL processes. Contribute to Fabric engineering best practices and enterprise guidelines.
Innovation: Stay current with Fabric’s evolving capabilities (Real-Time Analytics, Data Activator, AI integration) and drive innovation within the team.
Qualifications
7+ years of experience in data engineering, with a strong focus on Microsoft Fabric and related technologies.
Required Skills
Proficiency in SQL, Python, Scala, and R.
Experience with data pipeline development and ETL processes.
Strong understanding of data governance and security practices.
Ability to lead and mentor a team effectively.
Preferred Skills
Experience with Azure DevOps and Airflow.
Familiarity with Power BI and AI workloads.
Knowledge of Fabric Real-Time Analytics and Data Activator features.
Pay range and compensation package
Competitive salary based on experience and qualifications.
Equal Opportunity Statement
We are committed to creating a diverse and inclusive environment for all employees. We encourage applications from individuals of all backgrounds and experiences.
Primary Skills
- AKS, Event Hub, Azure DevOps, Cosmos DB, Azure Functions
The Company
What We Do
Brillio is a leader in global digital business transformation, applying technology with a human touch. We help businesses define internal and external transformation objectives and translate those objectives into actionable market strategies using proprietary technologies. With 2600+ experts and 13 offices worldwide, Brillio is the ideal partner for enterprises that want to quickly increase their core business productivity and achieve a competitive edge with the latest digital solutions.