Poojitha_P
New Member

Audit logs in Notebooks

In Microsoft Fabric Spark notebooks, writing small DML operations (insert/update) to an audit-log table can be slow on a Spark cluster, sometimes taking 30–120 seconds, compared to the same operations against SQL Database or SQL Server.

 

How can this be optimised so that notebook process status can be captured efficiently in the audit-log table?

1 ACCEPTED SOLUTION
v-saisrao-msft
Community Support

Hi @Poojitha_P,

Thank you, @Mauro89, for your insights.

Frequent small DML operations from Spark notebooks, such as audit-log inserts or updates, can result in many small files within Delta tables. This leads to increased task and metadata overhead, which can impact performance. According to Microsoft documentation, enabling Optimize Write helps by performing pre-write compaction, making it useful for tables with frequent small inserts or UPDATE and MERGE operations. Additionally, Auto-compaction can automatically compact fragmented files after writes, which helps maintain table performance when there are frequent small writes.
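The two settings above can be sketched as session-level Spark configuration in a Fabric notebook. This is a minimal sketch: the property names follow the Fabric and Delta Lake documentation linked in this reply, and should be verified against your runtime version. In a Fabric notebook the `spark` session object is predefined.

```python
# Enable Optimize Write (pre-write compaction) for the current Spark session,
# so many small inserts are coalesced into fewer, larger files before writing.
spark.conf.set("spark.microsoft.delta.optimizeWrite.enabled", "true")

# Enable auto-compaction, which compacts fragmented files after writes complete.
spark.conf.set("spark.databricks.delta.autoCompact.enabled", "true")
```

Note that session-level settings affect every Delta table written in that session, not just the audit-log table.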

Tune File Size - Microsoft Fabric | Microsoft Learn

Cross-Workload Table Maintenance and Optimization - Microsoft Fabric | Microsoft Learn
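If only the audit-log table needs this behaviour, the same options can instead be applied as Delta table properties, and the table can be compacted periodically as part of maintenance. A minimal sketch, assuming a predefined `spark` session and an illustrative table name `audit_log`:

```python
# Apply Optimize Write and auto-compaction to this table only,
# instead of changing behaviour for the whole Spark session.
spark.sql("""
    ALTER TABLE audit_log SET TBLPROPERTIES (
        'delta.autoOptimize.optimizeWrite' = 'true',
        'delta.autoOptimize.autoCompact' = 'true'
    )
""")

# Periodic maintenance: compact existing small files into larger ones.
spark.sql("OPTIMIZE audit_log")
```

Running `OPTIMIZE` on a schedule (for example, once a day) keeps the table healthy even if some writes bypass the automatic compaction.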

 

Thank you.


4 REPLIES
v-saisrao-msft
Community Support

Hi @Poojitha_P,

Checking in to see if your issue has been resolved. Let us know if you still need any assistance.

 

Thank you.

v-saisrao-msft
Community Support

Hi @Poojitha_P,

Have you had a chance to review the solution we shared earlier? If the issue persists, feel free to reply so we can help further.

 

Thank you.


Mauro89
Super User

Hi @Poojitha_P,

 

welcome to the community and thanks for your post.

 

I'm not sure I understand your question. May I ask you to clarify it a bit?
If it is more of a feature request I recommend you to post this in the ideas section of this community here: Fabric Ideas - Microsoft Fabric Community

 

Best regards!

PS: If you find this post helpful, consider leaving kudos or marking it as the solution.

 
