September 2025 - Fabric and Power BI GA features (part 1)

This month I thought I'd combine the Fabric release notes with the Power BI release notes to give you a full view of what has been announced in the September releases.

This blog will cover features that have gone GA; a separate blog covering preview features will follow.

First up Fabric. 

Fabric 

General platform announcements

The first section in this month's blog covers the general announcements about the platform.

Multitasking UI updates

Microsoft have put this one further down their release notes; however, I'm going to push it right to the top. The reason is that although this particular feature is in preview, a lot of you will have already seen the changes it has brought to the user interface.

For example, we will start to see tabs across the top, so that every time you open a new Fabric object you get a new tab that is color-coded based upon the workspace that object is in. That's the most noticeable change for most developers and one everyone should be aware of, because it's the first thing you're going to see if you've not already been into Fabric today.

Govern tab in OneLake catalogue

It's great to see the OneLake catalogue start to get some governance features. The challenge I have with it is that a lot of it is currently focused on content that you've created yourself. I'd like to see this expanded to give you the option to either look at just your content or look at content across the estate.

Whilst I get why this hasn't been added, as it would create too large an overlap with Purview, not being able to see the estate at large does reduce how useful it is.

Domains API 

This month we've also had a new API added for creating and managing domains within Fabric. Whilst this functionality is mainly aimed at those using data mesh approaches, I personally think it's good practise to align your workspaces with domains and subdomains regardless of whether you're using a data mesh approach or not.

With this API now being generally available, we can manage those domains and the assignment of workspaces to them as part of our CI/CD pipelines in the likes of Azure DevOps or GitHub.
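As a rough sketch of what that could look like in a pipeline step - the endpoint paths and payloads below are based on my reading of the published REST API, and the token and IDs are placeholders, so do verify everything against the API reference on Microsoft Learn:

```python
import requests

FABRIC_API = "https://api.fabric.microsoft.com/v1"
# Placeholder: in a real pipeline you'd acquire this for a service principal
# (for example via azure-identity) and pull secrets from your CI/CD store.
token = "<access token with Fabric admin permissions>"
headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

# Create a domain (the display name is just an example)
resp = requests.post(
    f"{FABRIC_API}/admin/domains",
    headers=headers,
    json={"displayName": "Finance", "description": "Finance data products"},
)
resp.raise_for_status()
domain_id = resp.json()["id"]

# Assign workspaces to that domain by their workspace IDs
resp = requests.post(
    f"{FABRIC_API}/admin/domains/{domain_id}/assignWorkspaces",
    headers=headers,
    json={"workspacesIds": ["<workspace-guid-1>", "<workspace-guid-2>"]},
)
resp.raise_for_status()
```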

Purview protection policies for Fabric 

For those already using Purview protection policies, they are now GA and can easily be used across Fabric.

If you don't know what Purview protection policies are, they enable you to create classifications such as public, internal, and confidential. These can then be applied to Fabric items in order to apply the necessary security controls within the platform.

On top of this, when combined with domains, we can now apply a default protection level. This ensures that we get consistent information protection policies applied to all of our objects.

Purview data loss prevention policy

This feature helps monitor sensitive information such as PII and sends alerts to platform administrators if that data is being put into or taken out of the platform.

Great for those that have sensitive data within their platform and need to demonstrate compliance with legislation such as GDPR.

Variable libraries

This feature allows you to set different values for variables depending upon which environment you're in. It's great for helping with things like shortcuts, which can be a bit of a pain to maintain as part of CI/CD processes.

The big thing to remember with them is that the values are not stored encrypted. So if you go and look at your variable library within source control, you'll be able to see the values exactly as they appear in the UI. That means these really aren't the place to put anything secret; you should still be using a key vault, so this really isn't a replacement for that.

As part of GA we also get a couple of additional features, with variable libraries now being supported in Dataflow Gen2 and Copy job. This part isn't quite live yet but will begin rolling out on 30 September.
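To give a feel for how an environment-specific value gets consumed, here's a minimal sketch of reading a variable library from a notebook. The notebookutils.variableLibrary helpers are as I recall them from the notebook utilities documentation, and the library and variable names are made up, so check the docs before copying this:

```python
# notebookutils is available by default in Fabric notebooks.
# "ConnectionsLib" and "storage_account_url" are hypothetical names.
lib = notebookutils.variableLibrary.getLibrary("ConnectionsLib")
storage_url = lib.storage_account_url

# The same value can also be resolved inline using the $(...) reference syntax.
storage_url = notebookutils.variableLibrary.get("$(/**/ConnectionsLib/storage_account_url)")

# Whichever value set is active for this workspace (dev, test, prod) is what
# gets returned - handy for shortcut and connection paths that differ per stage.
print(storage_url)
```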

UI changes for deployment pipelines

This one is more of a UI change, but do be aware that at some stage the previous dialogue design is going to be deprecated. You will get 30 days' notice when that happens, but it does mean you're going to have to move at some stage.

New resources for Terraform 

Personally, I wouldn't be using Terraform for a lot of the new features that have been added; I would be implementing those in a development environment using GitHub integration and promoting them from there. The one exception is the new Terraform resource that's been added for OneLake shortcuts.

I need to look into the detail of the Terraform implementation more, but the potential to link variable libraries and manage shortcuts via Terraform gives us a really powerful way of promoting them through environments. That has always been the biggest challenge so far, and it seems as though it could well be solved.

New version of the Fabric CLI and move to open source

I covered this one in my previous blog around key Fabric announcements. If this is something you want to look into more, please go and have a look at that blog and at Microsoft Learn.

OneLake

We've only really had one feature covered in the blog this month, and it's the OneLake catalogue getting a secure tab. This allows you to see, at a very high level, who has access to which workspaces as well as the security roles that are mapped to workspaces and objects.

What this doesn't cover is the OneLake security model, which has been announced for a while and is still waiting to go into public preview. Whilst at Fabcon we were told it should be going into public preview soon, at the time of writing we've had a blog about it published and then removed. Hopefully by the time you're reading this, things will be a little bit clearer.

Data engineering

User data functions

User data functions have now gone GA, meaning they can really be used in production. For those that haven't come across them, previously we've done things like create helper notebooks and then called into them from our main notebooks to reduce code duplication. This effectively replaces that approach. My concern with using them in this way is around code lineage and how easy it will be to trace dependencies on these user data functions. Without using something like GitHub Copilot, will we end up with hidden code like we used to on-premises with database triggers?

Actually, this isn't the real power of these functions. Instead, the real power is that they enable write-back from Power BI without the need to create a Power App and embed it. For me, that's where I see a clear use case. The previous approach of embedding a Power App in a Power BI report has always been cost prohibitive in my experience.

The best new feature for these functions is the ability to test them in developer mode before they are deployed to the platform. Until now, the only way to test the code within them was to deploy them and then run them from within the platform.
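If you've not seen one before, here's roughly the shape of a user data function in the Python programming model - for example, something a Power BI report could call for write-back. The fabric.functions module and decorator are as I remember them from the documentation, and the function body is purely illustrative, so treat it as a sketch rather than a reference:

```python
import fabric.functions as fn

udf = fn.UserDataFunctions()

# A simple function that a Power BI report could invoke to approve an order.
@udf.function()
def approve_order(order_id: int, approver: str) -> str:
    # In a real function you'd add a connection decorator and write the
    # approval back to a Fabric SQL database or similar - omitted here.
    return f"Order {order_id} approved by {approver}"
```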

Python notebook

This allows us to use pure Python within notebooks, not to be confused with PySpark. As is always the challenge in Spark environments, when using traditional Python packages all code is executed on a single node - in this instance a node with two virtual cores and 16 GB of RAM.

For those that haven't come across this before, pure Python packages don't execute well in Spark environments because they cannot parallelise. That means the code executes on the driver node instead of the work being divided up and distributed across the worker nodes. To get the performance out of the Spark engine, data scientists need to use the Spark equivalents of the functions they want, where available. These Spark equivalents are designed to parallelise and make the most of your Spark clusters.
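As a quick illustration of the difference - the file path and column names here are made up:

```python
import pandas as pd
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Pure Python/pandas: the whole dataset is pulled into the driver's memory,
# so only one node ever does the work.
pdf = pd.read_parquet("/lakehouse/default/Files/sales.parquet")
pandas_totals = pdf.groupby("region")["amount"].sum()

# Spark equivalent: the work is split into tasks and spread across the
# worker nodes, so it scales with the cluster.
sdf = spark.read.parquet("Files/sales.parquet")
spark_totals = sdf.groupBy("region").agg(F.sum("amount").alias("total_amount"))
spark_totals.show()
```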

New notebook utils APIs

The biggest change to the notebook utils is the ability to run multiple notebooks in parallel. Personally, I've used this approach in the past to get the most out of my Spark-based solutions.
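At its simplest it looks like this - the notebook names are hypothetical, and the DAG keys are as I remember them from the documentation, so do double-check before using:

```python
# notebookutils is available by default in Fabric notebooks.
# Run two independent notebooks in parallel.
notebookutils.notebook.runMultiple(["Load_Sales", "Load_Customers"])

# A DAG can be passed instead when one notebook depends on others.
dag = {
    "activities": [
        {"name": "Load_Sales", "path": "Load_Sales"},
        {"name": "Load_Customers", "path": "Load_Customers"},
        {"name": "Build_Model", "path": "Build_Model",
         "dependencies": ["Load_Sales", "Load_Customers"]},
    ]
}
notebookutils.notebook.runMultiple(dag)
```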

Advanced Python IntelliSense

For those starting out with Python or PySpark, it's great to see this coming in. Whilst Copilot can do so much, personally I'm never satisfied with the code quality it generates, so having better IntelliSense to help during those correction passes is great to see.

Multi-source checkpoint for Notebook version history

This one's basically a fancy way of saying you can get to the version history in numerous different ways across Fabric. Not a huge update, but at least we can now access version history easily.

Python notebook real-time resource usage monitoring

This one only applies to Python notebooks rather than PySpark notebooks. But for those using the new Python notebook experience, it means you can now see what's going on in your single node as your code executes.

Environment public APIs

For those using Spark environments, you can now use external-facing APIs to manage them, meaning it becomes easier to add them to your CI/CD processes.
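For example, a release pipeline might list a workspace's environments and publish any staged library changes. The endpoint paths below are my best recollection of the public API, and the IDs and token are placeholders, so verify against the REST reference first:

```python
import requests

FABRIC_API = "https://api.fabric.microsoft.com/v1"
workspace_id = "<workspace-guid>"   # placeholder
token = "<access token>"            # placeholder, e.g. acquired via azure-identity
headers = {"Authorization": f"Bearer {token}"}

# List the environments in a workspace
envs = requests.get(
    f"{FABRIC_API}/workspaces/{workspace_id}/environments", headers=headers
)
envs.raise_for_status()
env_id = envs.json()["value"][0]["id"]

# Publish the staged changes (libraries, Spark settings) for that environment
requests.post(
    f"{FABRIC_API}/workspaces/{workspace_id}/environments/{env_id}/staging/publish",
    headers=headers,
).raise_for_status()
```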

Query mirrored databases in Spark notebooks

Previously, in order to query a mirrored database you had to set up a lakehouse with a shortcut into the mirror. With this latest update, you can now query mirrored databases directly from Spark notebooks.
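In practice that means something as simple as the snippet below - the database and table names are made up, and the exact three-part naming will depend on how the mirrored database surfaces in your workspace:

```python
# Query a mirrored database directly from a Spark notebook.
df = spark.sql("SELECT CustomerID, Region FROM SalesMirror.dbo.Customers")
df.show()
```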

Download files in LakeHouse Explorer

Whilst we've had ways to get files back out of OneLake, this new functionality makes it easier than ever. That means it's going to become more critical to have the right governance in place to control access to this particular feature.

Multi-LakeHouse experience

This one's a simple UI change to make it easier for those that have multiple lakehouses to work with them in one place.

Fabric Spark monitoring APIs

This feature basically makes it easier for those running multiple tenants or complex environments to monitor all of their Spark workloads in one location. If that sounds like you, do go and check out this month's release notes.

Spark run series analysis

This feature basically allows you to compare the performance of previous runs of a particular notebook, Spark job, etc. against, for example, the latest run. It's the tool that is going to be needed in order to improve performance and debug issues.


For those using Spark applications, a very similar feature has been announced this month as well.

Mirrored database and CI/CD support for data agents

Whilst this feature is still in preview, the addition of CI/CD support means it is one to watch. As the BIFocal podcast team pointed out, this support moves us a significant step forward towards data agents going GA. Personally, they are now at the stage where I would consider using them in production.

Migration assistant for Fabric data warehouse

Now that we have a migration assistant to move Synapse or SQL Server databases into Fabric, I would personally be considering migration activity - especially from Synapse.

Whilst Synapse is still being sold today, the product hasn't had an update for the last couple of years and my money is on no further updates being issued unless it's something critical.


With how long this blog is getting, I'm going to stop here and publish a second part that covers RTI and beyond from the Fabric blog plus Power BI announcements. The third part will cover anything in public preview across these two blogs.
