May '25 - Fabric and Power BI preview features
With the GA features and key Build announcements out of the way, we now turn our attention to the preview features. Again, I'm not going to cover the key announcements from Build in this blog - if you are interested in those, do go have a look at my previous blog on the matter.
First up, Power BI.
Power BI
Copilot and AI
As with all things Copilot, for UK readers, please do remember that you have to turn on out-of-region processing - and accept the risk of your data being processed in the US. For everyone else in the EU, this isn't an issue (your data is processed in Paris).
AI data schema
Part of the challenge we're going to face with Copilot over semantic models is doing all we can to ensure the semantic models it uses are optimised - to help further reduce the chances of hallucinations. The AI data schema is one of the tools to do this.
With this wizard we are able to:
- Simplify the schema by disabling tables, columns, measures, and hierarchies, reducing the data set that Copilot can see.
- Establish a data set of curated responses for common queries - meaning Copilot uses these verified answers instead of trying to work them out itself, so we can ensure known solutions are provided for the questions that get asked most often. These responses are also used to help train Copilot on your data set.
- Add business context and guidelines on how to analyse data through AI instructions. This allows Copilot to provide more tangible responses, whilst respecting specific guardrails.
Skill Picker
Added to Power BI Desktop, the skill picker allows you to test the impact that setting an AI data schema has on Copilot. Currently it focuses on testing three areas:
- Answering questions about the data (semantic model)
- Analysing report visuals
- Creating new report pages
Mark a semantic model as prepared for AI
This marks the semantic model as ready for Copilot - and is an especially important step in the standalone Copilot experience.
Reporting
Data writeback
With Fabric shifting to more of an operational platform, we can now use Fabric user data functions to write updates back to the platform - no more having to create a Power App and embed it in the report!
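To make that concrete, below is a minimal sketch of what a writeback function could look like, based on the Python programming model for Fabric user data functions. The connection alias, table and column names are all placeholder assumptions, so treat it as a shape rather than a finished implementation.

```python
# Minimal sketch: a user data function that writes a comment back to a
# Fabric SQL database. The "WritebackDB" alias and the table/column names
# are placeholders - swap in your own connection and schema.
import fabric.functions as fn

udf = fn.UserDataFunctions()

@udf.connection(argName="sqlDB", alias="WritebackDB")
@udf.function()
def write_comment(sqlDB: fn.FabricSqlConnection, report_id: str, comment: str) -> str:
    # Open a connection to the database bound to the "WritebackDB" alias
    conn = sqlDB.connect()
    cursor = conn.cursor()
    # Parameterised insert so report-supplied text can't inject SQL
    cursor.execute(
        "INSERT INTO dbo.ReportComments (ReportId, Comment) VALUES (?, ?)",
        (report_id, comment),
    )
    conn.commit()
    return "Comment saved"
```

A button in the report can then call this function with values captured from slicers or text inputs, instead of bouncing out to a separate app.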
Task flows
Another byproduct of Fabric becoming more operational is that we now gain the ability to automate tasks based on report interactions. From the tutorial, these seem pretty complex to set up today - hopefully we'll see this simplified going forwards.
With task flows, we can apparently interact with workflows, notifications, APIs and more. For example, if you have exec presentations built off Power BI reports, analysts could record comments on lower-level reports that are then automatically pulled into the exec report - whilst keeping a record for future reference.
Persisted sorting for field parameters
At last, no more resetting of sorting on visuals when field parameters are changed.
New functions with visual calculations
Personally, I'd try to avoid visual calculations, as to me they're hidden code that is only going to cause a maintenance headache later on. However, if you do use them, you now have access to LOOKUP and LOOKUPWITHTOTALS.
Updates to new list slicer
Yep, this one is still in preview, having been released back in October 2024. If you are using it, check out the updates.
Modeling
TMDL view enhancements
If you are already using the TMDL view that's in preview, you get:
- Context tooltips
- New formatting options
- Automatic code fix recommendations
- Compatibility upgrade prompts
- Renaming column changes
Check out the blog for more details.
Direct lake relationship improvements
When creating relationships, Power BI now tries to work out cardinality by counting rows. The problem is that it doesn't validate whether it got it right - so it's still critical to double-check every relationship.
On top of that, we get the usual "assume referential integrity" checkbox - allowing us to control whether the (blank) option shows up in filters or not.
Mixed mode semantic models
Power BI now supports both import and Direct Lake tables existing within one model. This means that we can now select the storage mode based on update frequency - e.g. for fact tables with slowly moving/static data we can use import mode, but for dimension tables we could use Direct Lake.
This allows us to minimise the cost of maintaining the cache that sits behind Direct Lake models.
Data connectors
We have updates to data connectors for:
- BigQuery
- Vertica
- Oracle
- Snowflake
Fabric
Platform
Shortcut transformations
This allows you to apply AI transformations and format changes within a shortcut - removing the need to create a notebook for a relatively simple transformation, and further reducing the number of copies of data stored in OneLake (and the associated costs, environmental impact, etc.).
Updates to user data functions
14 regions have had user data functions added to them. If this is something you've been waiting to test, have a look at the blog post to see if your region is on the list.
Beyond this, we have:
- Service principal support
- Private library support
Data science
Data Agent integration with Copilot Studio
You can now add data agents into custom agents built in Copilot Studio - simplifying the effort of integrating your data into your agentic AI solutions.
Real-time intelligence (RTI)
Continuous ingestion from Azure storage to Eventhouse
This extends current functionality so that when data is uploaded to a blob/file store it is automatically loaded.
Instead of having to build custom code to handle this as you would today, it will be available via the Get Data wizard.
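For context, the sketch below shows roughly the kind of custom code this replaces - queuing a newly landed blob for ingestion using the azure-kusto-ingest Python SDK. The cluster URI, database, table and blob details are all placeholder assumptions.

```python
# Rough sketch of the "custom code" route this feature replaces: queue a
# blob for ingestion into an Eventhouse (KQL database) using the
# azure-kusto-ingest SDK. URIs and names below are placeholders.
from azure.kusto.data import KustoConnectionStringBuilder
from azure.kusto.ingest import BlobDescriptor, IngestionProperties, QueuedIngestClient

# Ingest endpoint of the Eventhouse (placeholder)
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://ingest-<your-eventhouse>.kusto.fabric.microsoft.com"
)
client = QueuedIngestClient(kcsb)

props = IngestionProperties(database="SalesDB", table="RawEvents")

# SAS URL of the blob that just landed in Azure Storage (placeholder)
blob = BlobDescriptor(
    "https://<account>.blob.core.windows.net/<container>/events.csv?<sas>",
    size=10_000,  # approximate uncompressed size in bytes
)

# Typically you'd call this from an Azure Function triggered by Event Grid
# whenever a new blob arrives
client.ingest_from_blob(blob, ingestion_properties=props)
```

With continuous ingestion, that trigger-and-ingest plumbing becomes a configuration step in the wizard instead.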
Derived streams in Direct Ingestion mode
It sounds complex, but what this means is that you can use a derived stream as a destination before the data is consumed into an Eventhouse that is running in Direct Ingestion mode.
Databases
Native CDC support in copy job
For those using CDC, you can now set it up in the copy job wizard - meaning you can now choose between CDC and watermark-based incremental copy (alongside all the advantages that CDC brings).
Dataflow Gen2
Public APIs
Fabric now has a set of public APIs available for a wide range of use cases, making it easier to do things like:
- CI/CD
- Monitoring and alerts
- Error handling
If these are interesting, make sure you check out the Microsoft Learn documentation.
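As a flavour of what's possible, here's a quick sketch that calls the Fabric REST API to list the items in a workspace - a handy starting point for a monitoring script. The workspace ID and token acquisition are placeholders, and the Dataflow Gen2-specific endpoints are covered in the Learn docs rather than shown here.

```python
# Quick sketch: list the items in a Fabric workspace via the REST API,
# e.g. as the starting point for a monitoring/alerting script.
# The workspace GUID and access token below are placeholders.
import requests

WORKSPACE_ID = "<your-workspace-guid>"
TOKEN = "<azure-ad-access-token-for-the-fabric-api>"

resp = requests.get(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}/items",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# Each item comes back with its type (Dataflow, Lakehouse, Report, etc.)
for item in resp.json().get("value", []):
    print(item["type"], item["displayName"])
```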
Parameterisation
Dataflows now have the ability to add parameters - meaning values can now be passed from pipelines down into dataflows.
Lakehouse as an incremental refresh destination
Previously when using a Lakehouse as a destination you could only replace the table. Now you are able to upsert.
SharePoint files as a destination
So far we've been able to consume SharePoint files; now we can write to them.