Data Engineer with Fabric

Key Job Details

Role :

Location :

Level :

Employment type : Full Time

Job Description

We’re seeking a Microsoft Fabric Data Engineer to design, develop, and operationalize end-to-end analytics solutions on Microsoft Fabric. You will own the data lifecycle, from ingestion and transformation to semantic modeling, warehousing, real-time analytics, and BI, leveraging OneLake, Lakehouse, Data Factory, Synapse Data Warehouse, Real-Time Analytics (KQL), and Power BI.


Experience Required

  • 5–8 years

Key Responsibilities

Data Engineering & Modeling
  • Implement robust ELT/ETL pipelines with Git integration, CI/CD, deployment pipelines, and parameterization.
  • Develop high-quality semantic models (Direct Lake/Import/DirectQuery), DAX measures, and calculation groups for high-performance BI.
  • Optimize storage and compute in OneLake using shortcuts, mirroring, and incremental loading strategies (see the incremental-load sketch after this list).
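
For illustration only, here is a minimal sketch of an incremental (upsert) load into a Lakehouse Delta table from a Fabric Spark notebook. The table names (bronze.orders_raw, silver.orders), the order_id key, and the updated_at watermark column are hypothetical placeholders, not a prescribed design.

    # Python (PySpark): illustrative sketch; assumes a Spark runtime with Delta Lake available.
    from delta.tables import DeltaTable
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # 1. Find the high-water mark already loaded into the (hypothetical) target table.
    last_wm = (
        spark.table("silver.orders")
        .agg(F.max("updated_at").alias("wm"))
        .collect()[0]["wm"]
    )

    # 2. Read only new or changed rows from the (hypothetical) raw table.
    incoming = spark.table("bronze.orders_raw")
    if last_wm is not None:
        incoming = incoming.where(F.col("updated_at") > F.lit(last_wm))

    # 3. Merge the changes into the target Delta table on the business key.
    target = DeltaTable.forName(spark, "silver.orders")
    (
        target.alias("t")
        .merge(incoming.alias("s"), "t.order_id = s.order_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )
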
Real-Time & Streaming
  • Create Real-Time Analytics solutions using KQL, event streams, and integrations with streaming services to power low-latency dashboards and alerting (a brief KQL query sketch follows this list).
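
As one illustrative option rather than a prescribed approach, the azure-kusto-data Python SDK can run a KQL query against a Fabric Eventhouse/KQL database to drive simple alerting; the cluster URI, database, table, and threshold below are placeholders.

    # Python: illustrative sketch; endpoint, database, and table names are placeholders.
    from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

    cluster_uri = "https://<your-eventhouse>.kusto.fabric.microsoft.com"
    kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster_uri)
    client = KustoClient(kcsb)

    # KQL: error counts per device over the last 5 minutes, filtered to an alert threshold.
    query = """
    DeviceTelemetry
    | where ingestion_time() > ago(5m)
    | where Level == "Error"
    | summarize Errors = count() by DeviceId
    | where Errors > 10
    """

    response = client.execute("TelemetryDB", query)
    for row in response.primary_results[0]:
        print(f"ALERT device={row['DeviceId']} errors={row['Errors']}")
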
Governance, Security & Compliance
  • Implement data governance with Microsoft Purview: lineage, glossary, data classification, and access policies.
  • Enforce security best practices: RBAC, Microsoft Entra ID (formerly Azure AD) integration, conditional access, row/object-level security, and secrets management (a minimal secrets-retrieval sketch follows this list).
  • Define DR/backups, cost management, and monitoring/observability (workspace metrics, pipeline runs, Spark job performance).
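
To make the secrets-management point concrete, one common pattern is to resolve credentials from Azure Key Vault at runtime rather than hard-coding them in notebooks or pipeline definitions; the vault URL and secret name below are hypothetical.

    # Python: illustrative sketch; vault URL and secret name are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    vault_url = "https://<your-key-vault>.vault.azure.net"
    client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())

    # Fetch a connection secret at runtime so rotation happens in Key Vault,
    # never in notebook or pipeline code.
    source_password = client.get_secret("source-sql-password").value
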
Operational Excellence
  • Establish development standards, coding conventions, unit/integration testing (see the test sketch after this list), and reliability patterns.
  • Lead performance tuning for Spark, SQL, and BI models; troubleshoot cross-component issues end-to-end.
  • Mentor engineers and partner closely with analytics, product, and business stakeholders to translate requirements into reliable solutions.
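
As a minimal sketch of the unit-testing expectation, a pytest-style test for a small PySpark transformation is shown below; the add_order_total function and its columns are hypothetical examples, not existing code.

    # Python: illustrative pytest sketch for a hypothetical PySpark transformation.
    import pytest
    from pyspark.sql import DataFrame, SparkSession
    from pyspark.sql import functions as F

    def add_order_total(df: DataFrame) -> DataFrame:
        """Hypothetical transformation under test: line total = quantity * unit price."""
        return df.withColumn("order_total", F.col("quantity") * F.col("unit_price"))

    @pytest.fixture(scope="session")
    def spark():
        # A local Spark session keeps unit tests fast and cluster-free.
        return SparkSession.builder.master("local[2]").appName("unit-tests").getOrCreate()

    def test_add_order_total(spark):
        source = spark.createDataFrame(
            [("A-1", 2, 10.0), ("A-2", 3, 5.5)],
            ["order_id", "quantity", "unit_price"],
        )
        result = {r["order_id"]: r["order_total"] for r in add_order_total(source).collect()}
        assert result == {"A-1": 20.0, "A-2": 16.5}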

Required Qualifications

  • 5–8 years in data engineering/analytics; 3–5 years building at scale in Azure/Microsoft data stack.
  • Deep, hands-on expertise with Microsoft Fabric, including:
    • OneLake, Lakehouse, Delta Tables, Shortcuts/Mirroring
    • Data Engineering (Spark Notebooks, PySpark/Scala/SQL)
    • Data Factory (pipelines), Dataflows Gen2
    • Synapse Data Warehouse (T-SQL, ELT patterns)
    • Power BI (Semantic Models, DAX, Direct Lake, performance tuning)
    • Real-Time Analytics (KQL, event streams)
  • Strong skills in Python (PySpark), SQL/T-SQL, DAX, Git/GitHub/GitLab.
  • Experience with CI/CD (Azure DevOps/GitHub Actions), deployment pipelines, and infrastructure-as-code for analytics workspaces.
  • Solid understanding of data governance (Purview), security models (RLS/OLS, IAM), and cost optimization.

Preferred/Bonus Skills

  • Experience with Databricks, Azure Data Explorer, Event Hubs/Kafka, REST APIs, and Python packaging for reusable transformations.
  • Advanced Power BI capabilities (composite models, aggregations, calculation groups, query folding).
  • Knowledge of dimensional modeling (Kimball/Inmon) and data product thinking.
  • Exposure to Copilot in Fabric, AI integration, and augmenting analytics workflows with LLMs.

Education & Certifications

  • Bachelor’s/Master’s in Computer Science, Data Engineering, or a related field (or equivalent experience).
  • Certifications (nice to have):
    • Microsoft Certified: Fabric Analytics Engineer Associate
    • Microsoft Certified: Azure Data Engineer Associate (DP-203)
    • Microsoft Certified: Power BI Data Analyst Associate

Apply for the Data Engineer with Fabric role
