
F003Yum
Helper I

Super long refresh times despite incremental refresh

Hi!

I'm having an issue with two models that I'm uploading into our premium capacity workspace with their refreshes taking way longer than they should. 

 

The first model is set to 24 months archive + 14 days incremental refresh. But it takes way too long to refresh its 14-day incremental window. 

Those 14 days would be around 450K rows across 45ish columns (lots of columns, even after removing a whole bunch). 

Refreshing one month (940k rows) takes 23 seconds. But when refreshing in the service it takes over 20 minutes.

 

When I check the query history in our Snowflake environment I can see that it is properly querying only those 14 days. So query folding seems to be happening, as far as I can tell. 

 

These are the execution details in power bi service:

{
"timeStart": "2026-03-11T07:55:44.0460000Z",
"timeEnd": "2026-03-11T08:14:13.7200000Z",
"durationMs": 1109674,
"externalQueryExecutionTimeMs": 84039,
"vertipaqJobCpuTimeMs": 1386031,
"queryProcessingCpuTimeMs": 9563,
"totalCpuTimeMs": 2366141,
"executionDelayMs": 294,
"approximatePeakMemConsumptionKB": 15487240,
"tabularConnectionTimeoutMs": 17999000,
"commandType": "Batch",
"discoverType": 129,
"queryDialect": 4,
"refreshParallelism": 6,
"vertipaqTotalRows": 3566793,
"intendedUsage": 2,
"mEngineCpuTimeMs": 142625,
"mEnginePeakMemoryKB": 2024484
}
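For reference, the headline numbers in this log work out as follows (a quick sketch; the field values are copied from the JSON above, and only the fields used are included):

```python
import json

# Subset of the execution details above, values copied verbatim.
log = json.loads("""
{
  "durationMs": 1109674,
  "externalQueryExecutionTimeMs": 84039,
  "vertipaqJobCpuTimeMs": 1386031,
  "totalCpuTimeMs": 2366141,
  "refreshParallelism": 6
}
""")

wall_clock_min = log["durationMs"] / 60_000
source_min = log["externalQueryExecutionTimeMs"] / 60_000
# CPU time can exceed wall-clock time because up to 6 partition jobs
# (refreshParallelism) run concurrently.
vertipaq_share = log["vertipaqJobCpuTimeMs"] / log["totalCpuTimeMs"]

print(f"wall clock:            {wall_clock_min:.1f} min")   # ~18.5 min
print(f"waiting on Snowflake:  {source_min:.1f} min")       # ~1.4 min
print(f"VertiPaq share of CPU: {vertipaq_share:.0%}")       # ~59%
```

In other words, only about a minute and a half of the 18.5-minute refresh is spent on the source queries; the rest is dominated by VertiPaq processing inside the service.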

 

VertiPaq CPU seems to be the main consumer here, which I don't understand. 

 

There are also no transformations at all in Power Query; it's just a date/time filter on the RangeStart/RangeEnd parameters and then selecting the columns needed. 

 

Also, for both models the Snowflake queries are very quick: all less than 0.5 sec, with some at most 2 sec.

Any ideas on what's going on or how to troubleshoot this further? 

Thanks!

 

I then have another issue with a table in another model that is set to 12 months archive, 14 days incremental. 

One month in this table is around 3 million rows, so we can assume that 14 days is around 1.5 million rows across 11 columns. 

This table is also loaded a second time, aggregated via a group by in Power Query (we need both for the report), down to only 254k rows for 1 month. Edit: this aggregated table is not referencing the first table. 

 

But this refresh is taking almost TWO HOURS. And again, when I check Snowflake's query history I still only see queries for those 14 days that should be refreshed, so it seems like the incremental query folding is working? 

 

When I check the execution details in power bi service it says this:

{
"timeStart": "2026-03-11T10:35:30.4850000Z",
"timeEnd": "2026-03-11T12:23:26.9100000Z",
"durationMs": 6476425,
"externalQueryExecutionTimeMs": 166693,
"vertipaqJobCpuTimeMs": 135484,
"mEngineCpuTimeMs": 87156,
"queryProcessingCpuTimeMs": 734,
"totalCpuTimeMs": 7364578,
"executionDelayMs": 6599793,
"approximatePeakMemConsumptionKB": 12280456,
"mEnginePeakMemoryKB": 1621644,
"tabularConnectionTimeoutMs": 17999000,
"commandType": "Batch",
"discoverType": 129,
"queryDialect": 4,
"refreshParallelism": 6,
"vertipaqTotalRows": 377166978,
"intendedUsage": 2,
"qsoReplicaVersion": 134177054007085840
}

 

And here we can see that the execution delay is 110 minutes, which is super weird, because when I checked the compute history for our server it was around 40-50% during that time, so I don't understand what it is waiting for? 
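Working the numbers from that log (a sketch; values copied from the JSON above) shows this refresh is almost entirely queuing rather than processing:

```python
# Values copied from the second execution log above.
duration_ms = 6_476_425        # ~108 min wall clock
delay_ms = 6_599_793           # executionDelayMs
vertipaq_cpu_ms = 135_484
m_engine_cpu_ms = 87_156

queued_min = delay_ms / 60_000
actual_work_min = (vertipaq_cpu_ms + m_engine_cpu_ms) / 60_000

# Note the delay can even exceed the wall-clock duration: with
# refreshParallelism = 6, several queued partition jobs presumably
# accumulate wait time concurrently.
print(f"queued:  {queued_min:.0f} min")       # ~110 min
print(f"working: {actual_work_min:.1f} min")  # ~3.7 min of engine CPU
```

So the engine only did a few minutes of actual work; the rest of the two hours was spent waiting.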

 

 

 

 

1 ACCEPTED SOLUTION

@v-sshirivolu Hi!

After some more experimenting and remaking all the calculated columns, reinstalling Power BI, etc., it now works. I also noticed during one of the tests where a refresh took 2 hours that it completed with a warning, and that warning was referencing a table that wasn't even loaded into the model. 

So it seems that perhaps some metadata or other stuff had been slightly corrupted? Either way, after the rebuild it's only taking 3 minutes to refresh, with the identical aggregation etc... 

 


16 REPLIES
v-sshirivolu
Community Support

Hi @F003Yum ,

I'd like to take a moment to thank @Juan-Power-bi for actively participating in the community forum and for the solutions you've been sharing. Your contributions make a real difference.
 

I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions.

 

Juan-Power-bi
Memorable Member

Good follow-up questions!
Model 1 — pivoting: It depends on what "pivot" means in context. If you mean transposing wide columns into rows (unpivoting), that would actually help a lot — fewer columns means less VertiPaq encoding work, and narrow tall tables compress far better than wide flat ones. If you mean something else, share more detail. But as a general rule: reducing from 45 columns to something leaner (even 20-25) would noticeably cut that VertiPaq CPU time.
Model 2 — the execution delay: You're right that the actual processing in the logs is trivial — the real work takes maybe 3-4 minutes total. The 110-minute wait is entirely executionDelayMs, which is capacity queuing, not your transformation. The transformation itself isn't the problem at all.
The reason capacity queuing happens even at 40-50% compute is that Premium capacity throttling isn't purely CPU-based — it uses a CU (Capacity Unit) smoothing algorithm that averages usage over a 24-hour rolling window. If earlier jobs consumed a burst of CUs, your capacity can be in a throttled state even while the instantaneous CPU looks fine. Check the Fabric Capacity Metrics app and look at the CU smoothing chart around the time of the refresh — you'll likely see the throttling period clearly there, even if the raw compute graph looks okay.
Short-term fix: schedule model 2's refresh during off-peak hours when the CU bucket has recovered.
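To illustrate the smoothing effect described above, here is a toy sketch. The window length, throttle limit, and usage numbers are all invented for illustration, and the real Fabric smoothing algorithm differs in detail; the point is only that a rolling average keeps a past burst "visible" long after instantaneous load drops:

```python
from collections import deque

# Toy illustration of CU smoothing: consumption is averaged over a rolling
# window, so an earlier burst keeps the smoothed value elevated even while
# current usage is low. All numbers here are made up for illustration.
WINDOW_HOURS = 24
THROTTLE_LIMIT = 15.0  # smoothed CUs/hour before throttling (invented)

hourly_cu = [200.0, 200.0] + [10.0] * 10  # 2-hour burst, then light load

window = deque(maxlen=WINDOW_HOURS)
for hour, cu in enumerate(hourly_cu):
    window.append(cu)
    smoothed = sum(window) / WINDOW_HOURS
    throttled = smoothed > THROTTLE_LIMIT
    print(f"hour {hour:2d}: now={cu:6.1f}  smoothed={smoothed:5.1f}  "
          f"throttled={throttled}")
```

From hour 2 onward the instantaneous load (10 CU) is far below the limit, yet the smoothed value stays above it because the burst is still inside the window — which is the pattern to look for in the Capacity Metrics app.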

@Juan-Power-bi 

Thanks again,

 

Yes, that's what I meant with pivoting. It did however have the effect of running out of memory when trying to refresh the model:

Data source error: Resource Governance: This operation was canceled because there wasn't enough memory to finish running it. Either reduce the memory footprint of your dataset by doing things such as limiting the amount of imported data, or if using Power BI Premium, increase the memory of the Premium capacity where this dataset is hosted. More details: consumed memory 24880 MB, memory limit 24775 MB, database size before command execution 824 MB

 

For model 2, I did try to recreate the model one table at a time, and at every step it worked perfectly; eventually the whole model was uploaded and it all worked great. The only thing that wasn't added were the measures; other than that the model was exactly the same. Except for the final step, when I added a DirectQuery connection to one of our large semantic models that feeds a couple of other reports. Then it stopped refreshing and we encountered the previously mentioned super long execution delay. 

Which is very weird, since it's working with a bunch of other reports? 

Also weird is that the version of this report that is in the production workspace also has this connection and there it works? 

Hi @F003Yum ,

What you’re seeing comes down to two separate issues. For the first model, the memory error after unpivoting shows that the problem isn’t just VertiPaq CPU but the memory required during processing. While unpivoting reduces columns, it significantly increases row count and cardinality, which can quickly push you over the capacity limit. Instead of unpivoting, a better approach here would be to keep the table narrower without increasing rows, for example splitting the 45 column table into a fact table and a few dimension or attribute tables, and reducing high cardinality columns like text or timestamps.
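To put rough numbers on why unpivoting hit the memory limit, here is some illustrative arithmetic. The 40-measure-column figure is an assumption, since the thread doesn't state exactly how the 45 columns split between keys and measures:

```python
# Rough arithmetic for the unpivot tradeoff. The thread mentions ~450K rows
# across ~45 columns for the 14-day window; assume ~40 of those are measure
# columns that would be unpivoted (an assumption, not stated in the thread).
rows = 450_000
measure_cols = 40

unpivoted_rows = rows * measure_cols
print(f"{unpivoted_rows:,} rows after unpivot")  # 18,000,000 rows

# On top of the 40x row count, the new attribute/value column pair tends to
# be high cardinality, which hurts VertiPaq compression -- consistent with
# the ~24 GB peak that exceeded the capacity's memory limit.
```

So a modest-looking wide table can easily become tens of millions of rows once unpivoted, which is why splitting into a fact table plus narrow attribute tables is usually the safer restructuring.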

For the second model, the key clue is that everything works fine until the DirectQuery connection to another semantic model is added. At that point, the dataset becomes a composite model, which introduces cross model dependencies and can lead to Premium capacity queuing. The long executionDelayMs means the refresh is waiting for resources rather than actually processing, even if CPU usage looks low. The reason it works in other reports or production is likely due to differences in capacity load or timing. The most reliable fix is to avoid using DirectQuery to another semantic model in this dataset, either pull the data directly from the source like Snowflake or move that DirectQuery piece into a separate dataset and keep this model import only. If that’s not possible, scheduling during off peak hours can help, but the root issue is architectural rather than performance tuning.

@v-sshirivolu 

Hi, thanks for the assist, 

 

I'm now mostly having issues with the second model. For some reason the first just started working on its own. 

 

The second is still behaving very weird. 

As previously mentioned, I tried creating a separate file and re-adding one table at a time, with all transformations and calculations plus the same incremental refresh settings, and it worked great that way. But then it got stuck when adding the connection to the other semantic model. 

 

I've now added the needed table from that model into this report, which is only 240k rows across 7 columns for one month. The incremental settings are 12 months archive, 14 days refresh, and the only transformations are some filters and removing columns. 

So this should be a very fast refresh. 

But yesterday it again took hours, and the second run failed after running for 5 hours!!

 

Here's the log of the successful run that took forever:

{
"timeStart": "2026-03-12T15:22:45.2800000Z",
"timeEnd": "2026-03-12T17:16:51.5990000Z",
"durationMs": 6846319,
"externalQueryExecutionTimeMs": 158280,
"vertipaqJobCpuTimeMs": 55469,
"mEngineCpuTimeMs": 44234,
"queryProcessingCpuTimeMs": 1156,
"totalCpuTimeMs": 7405500,
"executionDelayMs": 1701765,
"approximatePeakMemConsumptionKB": 12120131,
"tabularConnectionTimeoutMs": 17999000,
"commandType": "Batch",
"discoverType": 129,
"queryDialect": 4,
"refreshParallelism": 6,
"vertipaqTotalRows": 362835436,
"intendedUsage": 2,
"mEnginePeakMemoryKB": 1116324,
"qsoReplicaVersion": 134178094053182000
}

@v-sshirivolu

Here's another execution log from the Power BI service. This time there's no delay, but it's still taking two hours despite the Snowflake queries and processing totaling 20 minutes (this refresh was a full refresh across 12 months). 

 

{
"timeStart": "2026-03-19T09:05:03.7350000Z",
"timeEnd": "2026-03-19T11:07:23.4060000Z",
"durationMs": 7339671,
"externalQueryExecutionTimeMs": 572287,
"vertipaqJobCpuTimeMs": 279266,
"queryProcessingCpuTimeMs": 1750,
"totalCpuTimeMs": 9037406,
"executionDelayMs": 465,
"approximatePeakMemConsumptionKB": 13291915,
"tabularConnectionTimeoutMs": 17999000,
"commandType": "Batch",
"discoverType": 129,
"queryDialect": 4,
"refreshParallelism": 6,
"vertipaqTotalRows": 428585739,
"intendedUsage": 2,
"mEngineCpuTimeMs": 348906,
"mEnginePeakMemoryKB": 4295808
}

Hi @F003Yum ,
Based on what you’ve shared, the behavior aligns with documented Power BI behavior. For the first model, the memory error after unpivoting is expected, as increasing row count and cardinality can significantly raise memory usage during VertiPaq processing, even if columns are reduced. In such cases, keeping the model optimized with a star schema approach and reducing high cardinality columns is generally more effective. For the second model, adding the DirectQuery connection to another semantic model creates a composite model, which can introduce additional processing overhead and dependencies that impact refresh performance.

In addition, Premium capacity uses workload management and capacity limits, meaning refresh operations can be queued or delayed even when CPU usage appears normal, which is reflected in the executionDelayMs values you’re seeing. It’s also important to note that incremental refresh reduces the amount of data retrieved from the source, but partitions still need to be processed in memory, so overall refresh duration can still be affected by model size and design. Overall, this behavior is consistent with how Premium capacity and composite models operate, so focusing on model optimization and being mindful of composite model overhead should help improve performance.

@v-sshirivolu 

Hi!

Thanks for the reply and sorry for my delayed response. 

 

Regarding the second model, I'm still encountering the same issues even after removing the DirectQuery and instead importing the table from our database. 

The execution logs from previous posts are with that change already made. 

 

When doing some more testing this morning I ran into an issue with something that had previously worked during testing. 

 

It's a table where we have rows aggregated for every 15 minutes, and in Power Query I do a group by / aggregation up to every hour instead. 

 

Previously I tested adding incremental refresh one table at a time and uploading into our test workspace (in a brand new report file), and there it worked with incremental refresh on all tables. 

But this morning it failed with the above-mentioned group by. 

This report uses the same fact table three times: 

One version without transformations; this is the default table with 15-minute intervals and rows per ColumnB. One ColumnA value can have multiple rows for the same 15-minute interval, each with a different ColumnB. 

 

One version where we group by to get per-hour values with all ColumnB values together. This group by works fine with incremental refresh. 

 

And one final version where we group by to get per-hour values. This one is now failing, but it previously worked... 

Example of the default table (the real table has more rows for each quarter of the day, etc.):

 

Name  B           C         Value1  Value2  Hour
A     2026-03-23  12:15:00  1       110     12
A     2026-03-23  12:15:00  2       220     12
A     2026-03-23  12:15:00  3       330     12
B     2026-03-23  12:15:00  1       656576  12

 

After the third group by:

Name  B           Value1  Value2  Hour
A     2026-03-23  6       60      12
B     2026-03-23  6       56576   12

 

Hi @F003Yum ,

Thank you for the detailed update, it really helps clarify the issue. From what you’ve described, it seems the problem is likely related to query folding at the group by step, rather than capacity or composite model issues. Incremental refresh depends on Power BI pushing date filters to the source, but certain group by patterns or changes in column order or data types can prevent this. When folding breaks, Power BI pulls in more data and aggregates locally, which explains the failures and inconsistent behavior you’re experiencing, and why one grouped version works while another does not. Using the same fact table multiple times can also increase the load, as each reference might be evaluated separately. The main thing to check is whether folding is maintained in the failing query. If not, performing hourly aggregation closer to the source usually makes incremental refresh more reliable in these cases.
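A toy model of the difference described above (Python sketch; the row counts approximate the figures in this thread, not measured values):

```python
# Toy model: the date filter folds either way (Snowflake only ever sees the
# 14-day window, as observed), but whether Table.Group folds decides how
# many rows cross the wire and where the aggregation runs. Figures
# approximate the thread's second model (~3M 15-minute rows per month).
rows_per_month = 3_000_000
days_refreshed = 14

rows_15min = rows_per_month * days_refreshed // 30   # rows in the window
rows_hourly = rows_15min // 4                        # 4 quarter-hours/hour

print(f"group by folds to source: ~{rows_hourly:,} rows transferred")
print(f"group by runs locally:    ~{rows_15min:,} rows transferred,")
print("plus per-partition aggregation in the mashup engine")
```

The factor of four in transferred rows is modest here; the bigger cost when folding breaks is that every partition pays the local aggregation overhead separately.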

Hi @F003Yum ,

I hope the above details help you fix the issue. If you still have any questions or need more help, feel free to reach out. We're always here to support you.

 

Hi @F003Yum ,
I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions.

@v-sshirivolu  

Hi, sorry for the late reply. When I check our Snowflake query history, it does look like it's only sending queries for the intervals set up in the incremental settings. But there are some extra queries like the following that appear almost more often than the real query:

SHOW PRIMARY KEYS IN TABLE "DB"."Schema"."Table"

 

SHOW COLUMNS IN TABLE "DB"."Schema"."Table"

 

Here's an anonymised version of the query that messes up the load time (as previously mentioned, there's also a small number of simple calculated columns):

 
let
    Source =
        Snowflake.Databases(
            "your_account.region.azure.snowflakecomputing.com",
            "DATA_SOURCE",
            [Implementation = "2.0"]
        ),

    TargetDatabase =
        Source{[Name = "TARGET_DB", Kind = "Database"]}[Data],

    TargetSchema =
        TargetDatabase{[Name = "TARGET_SCHEMA", Kind = "Schema"]}[Data],

    FactTable =
        TargetSchema{[Name = "FACT_TABLE", Kind = "Table"]}[Data],

    #"Filtered on Date Range" =
        Table.SelectRows(
            FactTable,
            each [TimePeriod] >= RangeStart
                and [TimePeriod] < RangeEnd
        ),

    #"Selected Columns" =
        Table.SelectColumns(
            #"Filtered on Date Range",
            {
                "EntityID",
                "Date",
                "Hour",
                "TimePeriod",
                "DeviceID",
                "Measure1",
                "Measure2",
                "Measure3",
                "Measure4"
            }
        ),

    #"Grouped Rows" =
        Table.Group(
            #"Selected Columns",
            {"EntityID", "Date", "Hour", "DeviceID"},
            {
                {"Measure1_Per_Hour", each List.Sum([Measure1]), type nullable number},
                {"Measure2_Per_Hour", each List.Sum([Measure2]), type nullable number},
                {"Measure3_Per_Hour", each List.Sum([Measure3]), type nullable number},
                {"Measure4_Per_Hour", each List.Sum([Measure4]), type nullable number}
            }
        )
in
    #"Grouped Rows"

 

Hi @F003Yum ,

Your Snowflake history shows incremental refresh is working, and the SHOW COLUMNS and SHOW PRIMARY KEYS queries are expected metadata calls. The main cause of slow or inconsistent refresh is the Table.Group step, which can break query folding and force processing to the Power BI engine. When folding is lost, each partition executes locally, increasing refresh time even with incremental refresh. To improve performance, move aggregations to Snowflake via a view or native query and verify folding using View Native Query. Keeping transformations foldable lets the source handle execution efficiently.


Juan-Power-bi
Memorable Member

Great detail in those execution logs!

Maybe you can try this:

 

Stagger your refreshes so the two models don't run at the same time


Check the Fabric Capacity Metrics app to see if you're consistently hitting capacity limits during the refresh window


For model 1, trim columns and consider whether the 45-column wide table could be split into a fact + narrow attribute table


For model 2, check if the aggregated table really needs to be in the same semantic model or could be pre-aggregated upstream in a dataflow

@Juan-Power-bi 

Hi!

Thanks for the reply!

 

For model 1: Would it help to pivot the table? 

 

For model 2: I guess we could pre-aggregate it in snowflake but I'm confused as to why it's needed based on the log details. 

With my limited knowledge it seems like that transformation isn't taking that long when reading the log details? 

 

When I check the compute graph it's not over 40-50% load during the refresh, so I don't think that's the culprit.
