Build real-time analytics in Microsoft Fabric: Eventstream ingestion, KQL databases, Real-Time hub, and live Power BI dashboards for UAE & Saudi teams.
Quick answer: Microsoft Fabric Real-Time Intelligence is an end-to-end platform for ingesting, transforming, storing, and acting on streaming data — combining Eventstream, Eventhouse (KQL databases), Real-Time hub, Real-Time dashboards, and Activator into a single governed environment that replaces fragmented real-time architectures.
Most GCC enterprises run real-time analytics the hard way. Sensor data flows through Azure Event Hubs into a custom Stream Analytics job, lands in a separate database, and then feeds a Power BI dashboard that refreshes on a schedule. Each piece has its own billing model, its own security layer, and its own failure modes. When something breaks at 2 AM, three different teams get paged.
Microsoft Fabric Real-Time Intelligence consolidates that stack. Forrester recognized this in The Forrester Wave: Streaming Data Platforms, Q4 2025, naming Microsoft a Leader and noting that it "excels at messaging, analytics, governance, developer experience, business user experience, and more." The decisive factor was Microsoft's integration of batch and streaming data through OneLake, giving enterprises a single governed data layer for both historical and real-time workloads.
For GCC organizations running oil and gas sensor networks, logistics fleet tracking, retail transaction monitoring, or smart city IoT platforms, this matters. Real-time analytics is no longer a specialized engineering project — it is a platform capability within the same Fabric environment where your Power BI reports and data warehouses already live.
Quick answer: Data flows from event sources (Azure Event Hubs, IoT Hub, Kafka, MQTT) through Eventstream for no-code transformation, into an Eventhouse (KQL database) for sub-second querying, and out to Real-Time dashboards and Activator alerts — all within Fabric's governed data plane.
The architecture follows five stages: ingest, transform, store, visualize, and act.
Real-Time hub is the tenant-wide catalog for all streaming data in your Fabric environment. It surfaces every active data stream, Microsoft source, and Fabric event across your organization. Think of it as a searchable directory of everything that moves — sensor feeds, database change data capture (CDC) streams, application logs, workspace events.
The hub eliminates the common problem of streaming data being invisible to everyone except the engineering team that set it up. Business analysts, data scientists, and operations teams can browse, preview, and subscribe to streams without filing a ticket.
Eventstream is the no-code ingestion and transformation engine. It connects to external sources, shapes data in flight, and routes events to one or more destinations.
Supported sources as of early 2026 include:
| Category | Sources |
|---|---|
| Azure native | Event Hubs, IoT Hub, Event Grid, Service Bus, Cosmos DB CDC, Azure SQL DB CDC |
| Open-source / multi-cloud | Apache Kafka, Confluent Cloud Kafka, Amazon Kinesis, Amazon MSK, Google Cloud Pub/Sub |
| IoT protocols | MQTT (v3.1/v3.1.1), Azure IoT Operations |
| Database CDC | PostgreSQL, MySQL, MongoDB, SQL Server on VM, Azure SQL Managed Instance |
| Specialized | Cribl, Solace PubSub+, HTTP endpoints, custom endpoints |
Eventstream's processor supports filtering, field renaming, format conversion, windowed aggregations, deduplication, and content-based routing — all configured visually. Derived eventstreams let you publish transformed streams back to Real-Time hub for consumption by other teams.
Eventhouse is the analytics engine purpose-built for time-series and streaming data. Each Eventhouse contains one or more KQL databases that store structured, semi-structured, and unstructured event data with automatic time-based partitioning.
Data can be queried using native KQL (Kusto Query Language) or T-SQL. KQL is optimized for time-series analysis and includes built-in functions for anomaly detection, trend analysis, forecasting, and signal filtering — capabilities that would require custom Python code in a traditional data warehouse.
Fabric Real-Time dashboards connect directly to Eventhouse data and auto-refresh without scheduled intervals. They are lightweight, purpose-built for operational monitoring, and support natural language queries via Copilot.
For richer analytical reporting, Eventhouse data can also feed Power BI reports through DirectQuery. Note that Microsoft is retiring the legacy Power BI streaming dataset feature (new streaming datasets cannot be created after October 31, 2027), making Fabric Real-Time Intelligence the forward path for all live data scenarios.
Fabric Activator is the no-code alerting engine. It monitors data in Eventstreams, Eventhouse queries, or Power BI visuals and triggers actions when conditions are met — sending emails, Teams notifications, Power Automate flows, or launching Fabric pipelines. Latency is sub-second.
Quick answer: KQL (Kusto Query Language) is a read-optimized query language designed for large-scale time-series and log analysis, with native functions for anomaly detection, trend fitting, and forecasting that make it significantly faster than SQL for streaming data patterns.
KQL was originally built for Azure Data Explorer and is now the native query language for Fabric Eventhouse. Its syntax is pipeline-oriented — you chain operators left to right, which makes complex queries more readable than nested SQL subqueries.
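As a sketch of that pipeline style: the query below counts events per hour over the last day. The `SensorReadings` table and its `Timestamp` column are hypothetical placeholders, not part of any standard schema — adapt them to your own Eventhouse.

```kql
// Count events per hour for the last day.
// Each operator's output pipes left-to-right into the next.
SensorReadings                                          // hypothetical source table
| where Timestamp > ago(1d)                             // filter to the last 24 hours
| summarize EventCount = count() by bin(Timestamp, 1h)  // hourly buckets
| render timechart                                      // plot as a time chart
```

The equivalent SQL needs a derived hour expression and a GROUP BY clause; the KQL version reads top to bottom in the order the data flows.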
Key time-series functions include:

- `make-series` generates regular time-series buckets from raw events, with built-in handling for missing values
- `series_decompose_anomalies()` identifies outliers across thousands of time series in seconds
- `series_fit_line()` and `series_fit_2lines()` detect linear trends and trend changes — useful for monitoring scenarios where you need to identify when a sensor reading starts drifting
- `series_fir()` applies finite impulse response filters for moving averages and change detection; `series_iir()` handles exponential smoothing
- `series_decompose_forecast()` projects future values based on historical patterns

A practical example: monitoring pressure sensors across an oil pipeline network. In SQL, detecting an anomalous pressure trend across 500 sensors requires multiple CTEs, window functions, and statistical calculations. In KQL, it is a single query using `make-series` and `series_decompose_anomalies()` that returns results in seconds over millions of data points.
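A hedged sketch of that single-query pattern — the table and column names (`SensorReadings`, `Timestamp`, `Pressure`, `SensorId`) are illustrative assumptions, not a real schema:

```kql
// Flag anomalous 5-minute average pressure readings per sensor
// over the last 24 hours. Table and columns are hypothetical.
SensorReadings
| where Timestamp > ago(24h)
| make-series AvgPressure = avg(Pressure) default=0
    on Timestamp step 5m by SensorId
| extend (Flags, Score, Baseline) =
    series_decompose_anomalies(AvgPressure, 1.5)    // 1.5 = anomaly threshold
| mv-expand Timestamp to typeof(datetime),
            AvgPressure to typeof(double),
            Flags to typeof(int)
| where Flags != 0                                   // keep only anomalous points
| project SensorId, Timestamp, AvgPressure
```

`series_decompose_anomalies()` scores every series in parallel, so the same query covers 5 sensors or 5,000 without restructuring.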
KQL databases also support T-SQL for teams more comfortable with traditional SQL. However, the time-series and anomaly detection functions are KQL-native — there is no T-SQL equivalent for most of them.
Quick answer: Oil and gas sensor monitoring, logistics fleet tracking, retail transaction analysis, and smart city IoT are the four highest-value real-time use cases for GCC enterprises, each benefiting from Fabric's unified architecture and UAE North data residency.
GCC oil and gas operations generate massive sensor telemetry — pressure, temperature, flow rate, vibration, and gas composition readings from thousands of field instruments. The traditional approach pushes this data through SCADA systems into historians, with analytics running hours or days behind.
With Fabric Real-Time Intelligence, sensor data from IoT gateways flows through Azure IoT Hub or MQTT into Eventstream, transforms in flight (unit conversion, quality filtering, downsampling), and lands in an Eventhouse. KQL's anomaly detection can flag pressure deviations across an entire pipeline network in real time. Activator triggers maintenance workflows when readings cross threshold boundaries.
Microsoft's own predictive maintenance reference architecture documents this pattern — streaming IoT events into Eventhouse with ML models scoring in real time for equipment health prediction.
Dubai, Abu Dhabi, and Jeddah are major logistics hubs. Fleet operators need real-time position tracking, ETA predictions, cold chain temperature monitoring, and geofence alerts. Event data from vehicle telematics units (typically via MQTT or Kafka) flows into Eventstream, and KQL's geospatial functions calculate distances, detect geofence crossings, and generate route deviation alerts.
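A minimal geofence-detection sketch using KQL's built-in geospatial functions — the `FleetTelemetry` table, its columns, and the polygon coordinates are illustrative assumptions:

```kql
// Latest position per vehicle, flagged if inside a warehouse geofence.
// FleetTelemetry and the example polygon (near Dubai) are hypothetical.
let WarehouseZone = dynamic({
    "type": "Polygon",
    "coordinates": [[[55.27,25.19],[55.29,25.19],[55.29,25.21],
                     [55.27,25.21],[55.27,25.19]]]
});
FleetTelemetry
| where Timestamp > ago(10m)
| extend InZone = geo_point_in_polygon(Longitude, Latitude, WarehouseZone)
| summarize arg_max(Timestamp, InZone, Longitude, Latitude) by VehicleId
```

Pairing a query like this with an Activator rule on `InZone` turns geofence crossings into Teams alerts without custom code.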
Fabric's Map visualization in Real-Time Intelligence renders live fleet positions on interactive maps with custom layers — bubbles, heatmaps, and polygon overlays for warehouse zones and delivery territories.
Retail chains across the GCC process millions of point-of-sale transactions daily. Real-time analytics on transaction streams enables instant fraud detection (unusual patterns, velocity checks), dynamic pricing adjustments, and live inventory visibility across hundreds of stores.
Eventstream ingests transaction events from POS systems via Event Hubs or custom HTTP endpoints. KQL's pattern-matching and windowed aggregation functions identify suspicious transaction sequences — for example, multiple high-value transactions across different terminals within a short window.
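A sketch of such a velocity check — `Transactions` and its columns are hypothetical names, and the thresholds are arbitrary examples:

```kql
// Velocity check: cards with 3+ high-value transactions across
// multiple terminals inside a 10-minute window. Schema is hypothetical.
Transactions
| where Timestamp > ago(1h) and Amount > 5000
| summarize TxnCount = count(), Terminals = dcount(TerminalId)
    by CardId, Window = bin(Timestamp, 10m)
| where TxnCount >= 3 and Terminals > 1     // suspicious velocity pattern
| order by Window desc
```

In production the thresholds would come from the fraud team's risk models; the query shape stays the same.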
The UAE and Saudi Arabia are investing heavily in smart city infrastructure. Dubai's "Dubai Live" initiative integrates digital twins with predictive analytics for real-time city management. Abu Dhabi's Zayed Smart City Project uses IoT sensors for traffic flow, air quality, energy usage, and public utility monitoring. Saudi Arabia's NEOM project relies on extensive IoT sensor networks for its autonomous systems.
These platforms generate continuous telemetry that fits naturally into Fabric's architecture — MQTT sensors feeding Eventstream, KQL databases handling time-series analysis across thousands of sensor nodes, and Real-Time dashboards providing operational views for city management teams.
Quick answer: Microsoft Fabric supports all workloads — including Real-Time Intelligence — in the Azure UAE North (Dubai) region, with Multi-Geo enabling capacity placement across regions for organizations operating in multiple GCC countries.
As of March 2026, here is the Fabric availability picture for the Middle East:
| Region | Fabric Availability | Notes |
|---|---|---|
| UAE North (Dubai) | All workloads | Full Real-Time Intelligence support |
| Qatar Central (Doha) | Power BI only | Other Fabric workloads not yet available |
| Saudi Arabia | Expected 2026 | Three availability zones under construction |
For organizations that need all streaming data to remain within UAE borders — common for government entities and regulated industries — UAE North supports the complete Real-Time Intelligence stack: Eventstream, Eventhouse, Real-Time hub, Activator, and Real-Time dashboards.
Qatar-based organizations that need Real-Time Intelligence today must use a non-Qatar region for their Fabric capacity and apply Multi-Geo for Power BI content in Qatar Central. This is a meaningful architectural constraint — it means sensor data leaves Qatar for processing. For compliance-sensitive workloads, this needs explicit sign-off from your data governance team. Our data residency guide covers the compliance considerations in detail.
Azure Event Hubs and IoT Hub are available in UAE North, so the ingestion layer can run in-region even if the Fabric capacity sits elsewhere.
Quick answer: Real-time workloads consume Fabric Capacity Units (CUs) continuously — unlike batch workloads that spike and settle — so capacity planning must account for always-on Eventstream and Eventhouse compute, with F64 as the practical minimum for production streaming.
Real-time analytics has a fundamentally different consumption profile than batch reporting. A Power BI report refreshes eight times a day and consumes CUs in bursts. An Eventstream runs 24/7, and an Eventhouse scales its vCores based on ingestion volume and query load.
Eventstream billing has four components, documented by Microsoft:
| Operation | Rate | Notes |
|---|---|---|
| Eventstream Per Hour | 0.222 CU hours | Flat charge, only when data is flowing |
| Data Traffic Per GB | 0.342 CU hours | Ingress/egress volume including 24-hour retention |
| Processor Per Hour | Starting at 0.778 CU hours | Autoscales based on throughput and complexity |
| Connectors Per vCore Hour | 0.611 CU hours | Per source connector (excludes Event Hubs/IoT Hub) |
The processor rate is the one to watch. On the "Low" throughput setting, it starts at 0.778 CU hours and can autoscale up to 4x the base rate (approximately 3.1 CU hours) under high throughput. "Medium" and "High" settings start at higher base rates and scale further.
Eventhouse compute is measured as "Eventhouse UpTime" — the number of seconds the engine is active multiplied by the number of virtual cores in use. An autoscale mechanism adjusts vCores based on ingestion volume and query patterns.
An always-on Eventhouse with 4 vCores running for one hour consumes 14,400 CU seconds (4 vCores x 3,600 seconds). Storage is billed separately across two tiers: OneLake Cache Storage (for fast query performance, configurable via cache policy) and OneLake Standard Storage (for all retained data).
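The same arithmetic, expressed as a throwaway KQL `print` statement you can run in any queryset:

```kql
// Back-of-envelope: CU-seconds consumed by an always-on
// 4-vCore Eventhouse over one hour of uptime
print vCores = 4, ActiveSeconds = 3600
| extend CUSeconds = vCores * ActiveSeconds   // 14,400 CU seconds
```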
For a GCC enterprise running a moderate real-time workload (for example, 50 IoT sensors producing events every second, three Eventstream transformations, a single Eventhouse, and two Real-Time dashboards), sustained consumption lands at roughly 7-14 CUs, leaving room on an F64 capacity for batch workloads (data warehouse, Power BI refreshes, data pipelines). At higher sensor counts or sub-second ingestion rates, F128 or above may be necessary.
Use the Microsoft Fabric Capacity Estimator to model your specific workload before committing. For a breakdown of F-SKU pricing and reserved instance savings, see our Fabric licensing guide.
When capacity limits are reached, Fabric applies throttling in three stages: proactive (queries throttled, ingestion continues), reactive (both paused, no data loss), and extreme reactive (both paused, potential data loss after a period). For production real-time workloads, right-sizing capacity to avoid even proactive throttling is essential.
Quick answer: Real-Time dashboards are lightweight, auto-refreshing visualizations built for operational monitoring of Eventhouse data, while Power BI reports offer richer analytical capabilities but rely on scheduled refresh or DirectQuery — use Real-Time dashboards for live operations and Power BI for deeper analysis.
| Capability | Real-Time Dashboard | Power BI Report |
|---|---|---|
| Data source | Eventhouse (KQL database) | Multiple sources, including Eventhouse via DirectQuery |
| Refresh model | Continuous auto-refresh | Scheduled refresh, DirectQuery, or auto page refresh |
| Latency | Seconds | DirectQuery: seconds; Import: refresh interval |
| Query language | KQL | DAX |
| Copilot / NL queries | Yes | Yes |
| Alerting | Built-in Activator integration | Via Activator or data-driven alerts |
| Best for | Live operational monitoring | Strategic analysis, executive reporting |
The two are complementary, not competing. A typical implementation uses Real-Time dashboards on operations center screens for live monitoring, and Power BI reports for the same data consumed by managers and executives who need historical trends, drill-through, and formatted exports.
Eventhouse data is reflected in OneLake, which means the same data that feeds Real-Time dashboards can also be accessed by Power BI, Fabric notebooks, and data pipelines — without duplication.
Quick answer: Start with a proof of concept on a single data stream — connect one event source through Eventstream, land it in an Eventhouse, build a Real-Time dashboard, and set up one Activator alert — before scaling to production workloads.
Build internal KQL skills early: start with the fundamentals (summarize, bin, render timechart) and progress to time-series analysis (make-series, series_decompose_anomalies).

Real-time analytics implementations are multi-month engagements that touch ingestion architecture, data modeling, capacity planning, security, and visualization. For organizations new to streaming architectures, working with a partner experienced in both Fabric's Real-Time Intelligence stack and GCC data residency requirements can significantly reduce time to production. Beyond The Analytics delivers these implementations for enterprises across the UAE and broader GCC.
Yes. As of March 2026, Azure UAE North (Dubai) supports all Fabric workloads, including the complete Real-Time Intelligence stack — Eventstream, Eventhouse, KQL databases, Real-Time hub, Activator, and Real-Time dashboards. This means streaming data can be ingested, processed, stored, and visualized entirely within the UAE, meeting data residency requirements for regulated industries and government entities. Qatar Central currently supports Power BI only, with broader Fabric workloads expected as the region matures.
Real-time workloads run on any Fabric F SKU, but F64 is the practical minimum for production deployments. F64 provides 64 Capacity Units — enough to sustain moderate Eventstream ingestion, an Eventhouse, and concurrent dashboards alongside batch Power BI workloads. F64 also unlocks free viewer licensing, which matters for organizations distributing Real-Time dashboards to operations teams. For high-throughput scenarios (thousands of sensors, sub-second ingestion), F128 or higher is typical. Use the Microsoft Fabric Capacity Estimator to model your specific workload.
Both work, and most implementations use both. Eventhouse supports DirectQuery from Power BI, so you can build Power BI reports that query KQL databases with near-real-time latency. Real-Time dashboards are better for live operational monitoring — they auto-refresh continuously and are optimized for high-frequency data. Power BI reports are better for strategic analysis with richer visual formatting, drill-through, and distribution via the Power BI service. Microsoft is retiring legacy Power BI streaming datasets (no new creation after October 31, 2027), so Real-Time Intelligence is the forward-looking path for all streaming scenarios.
Eventstream supports over 25 source connectors as of early 2026, including Azure Event Hubs, Azure IoT Hub, Apache Kafka, Confluent Cloud Kafka, Amazon Kinesis, Amazon MSK, Google Cloud Pub/Sub, MQTT brokers, Azure Event Grid, Azure Service Bus, HTTP endpoints, and database CDC feeds (Azure SQL, PostgreSQL, MySQL, MongoDB, Cosmos DB, SQL Server on VM, Azure SQL Managed Instance). Custom endpoints allow any application to push events using a connection string, making Eventstream adaptable to virtually any streaming source.
KQL (Kusto Query Language) is purpose-built for time-series and log analytics, while SQL is a general-purpose relational query language. KQL has native functions for anomaly detection (series_decompose_anomalies), trend analysis (series_fit_line), forecasting (series_decompose_forecast), and signal filtering (series_fir, series_iir) that have no direct SQL equivalents. Its pipeline syntax is also more readable for complex analytical queries. However, Fabric Eventhouse also supports T-SQL queries for teams transitioning from SQL backgrounds. For most real-time analytics workloads — especially IoT sensor analysis and operational monitoring — KQL is significantly more efficient.
Yes. In The Forrester Wave: Streaming Data Platforms, Q4 2025, Forrester named Microsoft a Leader, highlighting that Microsoft "excels at messaging, analytics, governance, developer experience, business user experience, and more, enabling robust performance for real-time analytics and event-driven applications." Forrester specifically cited the integration of streaming and analytics within the Fabric platform as a differentiator, noting that the approach brings "dozens of services together under a single umbrella" making real-time development seamless.