July '25 - Fabric and Power BI update

Just when I thought the product team might be slowing down for year end and the summer break, they've gone against my expectations and dropped not only a July feature update but a couple of further updates since.


Given this one wasn't expected, I'm not going to be able to do my usual breakdown of what is GA and what's not - but I'll try to call it out as we go along.

Up first we've got a fairly major update to the Spark compute - one that changes the billing approach.


Autoscale billing for Spark (GA - Generally Available)

For those that haven't seen it, this is a pretty big shift in the way that Spark compute is billed. Most of you are probably familiar with the SKU-based approach that Fabric follows: you pay for a particular capacity level and get a pool of capacity units (think of them as credits) to spend on the activities you run in Fabric. The idea is that it gets around the budgeting difficulty that consumption-based processing creates for the CFO.

Now that is great in theory, but in practice it can cause issues - someone may have burnt through all the credits by the time you come to run a job, and you have to sit and wait for more to become available.

That's where this feature comes in. It means that we can now stand up compute outside of the SKU system to run our Spark jobs - compute that is billed on a consumption basis.

That gives us the flexibility of both worlds: consumption-based pricing for those mission-critical jobs that have to run to keep the lights on, and SKU-based pricing for those jobs that can tolerate being throttled. It allows you to balance a predictable bill with making sure the stuff that has to run isn't blocked.

The downside is that it's going to be harder to predict the total cost of ownership upfront with this hybrid approach, and it puts even more importance on getting your Azure landing zone architected according to best practice - otherwise you risk not having the necessary billing controls in place.
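To make the trade-off concrete, here's a rough back-of-the-envelope sketch in Python. Every number in it is hypothetical (the SKU price, the autoscale rate, the CU-hours) - it's purely to show how a hybrid estimate might be stitched together, so check the official pricing page for real rates.

    # Back-of-the-envelope cost sketch - all rates and workloads below are
    # hypothetical placeholders, not real Fabric pricing.
    SKU_MONTHLY_COST = 5000.0          # flat monthly price for the capacity SKU
    AUTOSCALE_RATE_PER_CU_HOUR = 0.10  # assumed pay-as-you-go rate for autoscale Spark

    # Mission-critical Spark work moved onto autoscale billing, in CU-hours per month.
    critical_cu_hours = 12_000
    # Everything else stays on the SKU and can be throttled when the capacity is busy.

    autoscale_cost = critical_cu_hours * AUTOSCALE_RATE_PER_CU_HOUR
    total_cost = SKU_MONTHLY_COST + autoscale_cost

    print(f"SKU (predictable):       {SKU_MONTHLY_COST:10,.2f}")
    print(f"Autoscale (consumption): {autoscale_cost:10,.2f}")
    print(f"Estimated monthly total: {total_cost:10,.2f}")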

Domain tags (Preview)

Maintained at the domain level, these make it easier for consumers to filter and search for items across workspaces. A report, Lakehouse, etc. can be tagged as needed, and those tags can then be used as a filter on the workspace to make it easier to find the relevant objects.

Just make sure that you know the limitations before you start creating them.

OneLake catalog enhancements (GA)

This one is a very minor change. The update means that when you go into the OneLake catalog, it now respects the persona that you are in.

For example, if you go in as a Power BI user, you'll see the Power BI elements highlighted by default.

Personally, I wish they'd ditch the separate personas completely. With some of the UI respecting them and some not, it's starting to feel a bit messy.

For me, they need to move to two basic personas - consumers and developers - and update the licensing to reflect this.

Fabric data agent integration with Copilot Studio (Preview)

Pretty much what it says on the tin, and it makes sense, as it enables the A2A protocol - meaning that you can easily build agentic setups with the relevant data from Fabric exposed to each agent.

Data source instructions for data agents (Preview)

I'm hearing from a number of users that they aren't convinced by the results they are getting out of data agents when a complex model is involved - in specific situations, the answers simply aren't what they expect.

This is a feature that should help, by allowing developers to provide the AI with instructions around how to query tables, apply filters, interpret columns, or join datasets.
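To give a flavour, the sort of instruction text you might attach to a source could look something like the below - the table and column names here are entirely made up for illustration:

    When asked about revenue, use the FactSales table rather than FactOrders.
    Always filter FactSales to IsDeleted = 0 before aggregating.
    NetAmount is in GBP and already excludes VAT.
    Join FactSales to DimCustomer on CustomerKey, never on CustomerID.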

Streaming data set support for data agents (Preview)

Another one that's pretty self-explanatory: data agents now support streaming results, meaning that users get a live, updating view.

The trick with this one is going to be making sure it's clear which data sets are streaming and which aren't.

Alongside this, we've also had a few other UI enhancements.

Activator - Rule and object creation UI enhancements (GA)

With this one, the Fabric team have reduced the number of steps required to set up a rule once connected to a stream of events.

Activator - Teams channel and group support (GA)

Whilst this one seems like a small change on the surface, it's actually pretty big. Being able to send alerts to a group chat or channel makes administration much easier and helps ensure that key-person dependencies are eliminated.

Activator - Pass parameter values to Fabric items (Preview)

This allows us to call items such as pipelines and notebooks with dynamic values based on the parameters defined within Activator.
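On the receiving side, a notebook just needs a parameter cell whose defaults get overridden by whatever Activator passes in. A minimal sketch - the parameter names here are hypothetical:

    # Parameter cell in the target Fabric notebook - flag the cell as a
    # parameter cell so values passed in at run time override these defaults.
    device_id = "unknown"
    alert_threshold = 100

    # Everything downstream just works with whatever was passed in.
    print(f"Handling alert for device {device_id} (threshold {alert_threshold})")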

Data factory - Incremental copy (GA)

The copy job functionality can now undertake incremental copies. It is designed to do a full initial pull and then run incrementals from that point on.

At the moment, the documentation doesn't say what happens if an incremental fails for some reason. My assumption is that it will retry from the last successful attempt, but it's definitely something to factor into downstream activities if you are interested in using it.
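To be clear, the copy job handles all of this for you, but conceptually it's automating the classic watermark pattern - something along the lines of the notebook sketch below, where the table and column names are made up:

    from pyspark.sql import functions as F

    # Highest change timestamp already landed in the destination (None on the first run).
    last_watermark = (
        spark.table("lakehouse.orders")
             .agg(F.max("modified_at"))
             .collect()[0][0]
    )

    source = spark.table("source_db.orders")
    if last_watermark is None:
        batch = source  # first run: full initial pull
    else:
        batch = source.where(F.col("modified_at") > F.lit(last_watermark))  # incremental

    batch.write.mode("append").saveAsTable("lakehouse.orders")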

Upsert support and additional connectors for copy job

When using a copy job to pull data into Fabric, a number of connectors now support upserts into their destination - have a look at the documentation for more details.
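For context, an upsert into a Lakehouse destination is effectively a Delta merge. The connector does this for you, but a hand-rolled equivalent would look roughly like the following - the table and key names are invented for the example:

    from delta.tables import DeltaTable

    target = DeltaTable.forName(spark, "lakehouse.customers")   # destination Delta table
    updates = spark.table("staging.customers")                  # freshly copied batch

    (target.alias("t")
           .merge(updates.alias("s"), "t.CustomerKey = s.CustomerKey")
           .whenMatchedUpdateAll()       # existing keys get updated
           .whenNotMatchedInsertAll()    # new keys get inserted
           .execute())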

Alongside this, the copy job has gained a whole bunch more connectors. If it's something you are either using or looking to use, do check out this month's update blog.

Azure SQL DB mirroring over firewall (GA)

Hidden away at the end of the documentation is this little gem. I know a number of organisations that have been waiting for mirroring over both vNet and on-premises data gateways to be GA.

On top of that, the process for restarting a mirrored database after the capacity has been paused has been improved.






