---
title: Fabric Notebooks troubleshooting guide
description: This article provides troubleshooting steps for common issues encountered in Fabric Notebooks.
ms.reviewer: deevij
ms.topic: troubleshooting
ms.date: 04/01/2026
ai.usage: ai-assisted
---

# Fabric notebooks troubleshooting guide

Use this guide to quickly find and fix common issues in Fabric notebooks. Each issue includes examples and next steps to help you fix problems fast.

## Error messages and resolution categories

This table lists common Fabric notebook error messages and the troubleshooting sections that address them.

| Error | Categories and resolution |
| --- | --- |
| Your session timed out after inactivity. | Timeouts |
| Your session expired. | General connectivity, Timeouts, and Session connectivity |
| Your notebook disconnected. | General connectivity and Timeouts |
| Failed to retrieve MWC token… | General connectivity |
| You're currently offline. | General connectivity |
| Can't connect to the collaboration server. | General connectivity |
| Error when shutting down kernel due to ajax error 410. | General connectivity |
| Access denied. | Access |
| Unable to fetch high concurrency sessions. | Access |
| Unable to save your notebook. | Save failures, Access, and Paused capacity |
| The capacity with ID \<ID\> is paused. | Paused capacity |
| Your organization has reached its compute capacity limit. | Paused capacity |
| Item not found. | Missing items |
| Cannot call methods on a stopped SparkContext. | Spark code issue |

## Notebook errors

The following sections describe common notebook errors and their suggested resolutions.

### Use Fix with Copilot for failed cells and Spark jobs

When a cell or Spark job fails, a **Fix with Copilot** action appears below the failed cell. It provides an error summary, root-cause analysis, and recommended fixes. You can review a diff for approval and optionally let Copilot auto-apply the suggested code changes. To access the action, select **Fix with Copilot** in the notebook UI, or open the Copilot chat pane.

### Copilot diagnostics

You can run `/fix` in Copilot chat to perform targeted diagnostics for a specific cell or the entire notebook. Copilot provides validation and step-by-step recommendations to help you resolve errors. For more information, see Diagnose notebook failures with Copilot.

## Timeouts

**Why it happens:**

Notebook sessions automatically shut down after a period of inactivity. By default, the timeout is 20 minutes.

**What to do:**

1. Rerun the notebook to restart the session.
2. Adjust the session timeout at the notebook or workspace level.
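Conceptually, the session behaves like an idle timer: each interaction resets the clock, and the session shuts down once the idle window elapses. The following pure-Python sketch illustrates the pattern only; it is not Fabric's actual implementation, and the class name `IdleTimer` is invented for illustration.

```python
import time

class IdleTimer:
    """Illustrative idle-session timer; not Fabric's actual implementation."""

    def __init__(self, timeout_seconds):
        self.timeout_seconds = timeout_seconds
        self.last_activity = time.monotonic()

    def record_activity(self):
        # Any interaction (for example, running a cell) resets the clock.
        self.last_activity = time.monotonic()

    def is_expired(self):
        # Once the idle window elapses with no activity, the session ends.
        return time.monotonic() - self.last_activity > self.timeout_seconds

# Fabric's default notebook session timeout is 20 minutes (1,200 seconds).
session = IdleTimer(timeout_seconds=20 * 60)
print(session.is_expired())  # → False right after activity
```

Raising the timeout (as described in the following procedures) simply widens this idle window.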

### Change the timeout at the notebook level

1. Open a notebook and start a session from the **Connect** toolbar menu.

2. Select the **Session ready** indicator in the lower-left corner.

3. Update the timeout duration in the dialog that appears.

    > [!NOTE]
    > The **Session ready** indicator is only visible when a session is active.

    :::image type="content" source="media/fabric-notebooks-troubleshooting-guide/session-timeout.png" alt-text="Screenshot of where to adjust the session timeout for a Fabric notebook.":::

### Change the timeout at the workspace level

1. Go to **Workspace settings**.

2. Select **Data Engineering/Science** > **Spark settings**.

3. On the **Jobs** tab, adjust the session timeout duration as needed.

    :::image type="content" source="media/fabric-notebooks-troubleshooting-guide/workspace-timeout.png" alt-text="Screenshot of where to adjust the workspace timeout for a Fabric notebook.":::

## General connectivity

**Why it happens:**

Network instability or a temporary backend delay.

**What to do:**

Retry after a few moments.
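If transient connectivity errors also surface in your own notebook code (for example, when calling a REST endpoint), retrying with exponential backoff often rides out brief network blips. The following is a minimal, generic sketch; `flaky_call` is a hypothetical stand-in for any operation that can fail transiently.

```python
import time

def retry_with_backoff(operation, max_attempts=4, base_delay=1.0):
    """Retry a transient operation, doubling the wait between attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # Give up after the final attempt.
            # Wait 1s, 2s, 4s, ... before retrying.
            time.sleep(base_delay * 2 ** (attempt - 1))

# Hypothetical example: an operation that succeeds on the third attempt.
attempts = {"count": 0}

def flaky_call():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("temporary backend delay")
    return "ok"

print(retry_with_backoff(flaky_call, base_delay=0.01))  # → ok
```

Keep the attempt count small: if the error persists after a few retries, it's likely not transient, and the other sections in this guide apply.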

## Session connectivity

**Why it happens:**

A session isn't connected.

**What to do:**

Start a session.

> [!TIP]
> Copilot in notebooks is context-aware of the workspace, attached lakehouse schemas, tables, files, notebook structure, and runtime state. It can provide guidance even before a session is started. A session is still required to execute cells, but Copilot can help you plan fixes and validate code before you start one.

You can start a session in any of three ways:

1. Select **Connect** to start a session, or attach to a high concurrency session, without running the notebook.

2. Select **Run all** to start a session and run all code cells in the notebook.

3. Select **Run** on a cell to start a session and run only that cell.

    :::image type="content" source="media/fabric-notebooks-troubleshooting-guide/start-session.png" alt-text="Screenshot of the methods to start a new session in a Fabric notebook.":::

## Access

**Why it happens:**

These failures can happen for any of the following reasons:

- Incorrect sign-in credentials
- Expired sign-in session
- Missing permissions for the notebook, lakehouse, or workspace
- Tenant restrictions

**What to do:**

1. Verify your sign-in: Ensure you're signed in with the correct Microsoft Entra ID (formerly Azure Active Directory) account associated with your Fabric environment.
2. Refresh your session: Sign out and sign back in to refresh your authentication token. Select your profile icon in the top-right corner of the window, and then select **Sign out**.
3. Check permissions: Confirm that you have the necessary role (for example, Contributor or Admin) for the resource you're trying to access, such as the notebook, lakehouse, workspace, or data warehouse.
4. Contact your administrator: If you're still blocked, ask your tenant or workspace administrator to:
   - Confirm your user role and access level.
   - Check for token expiration issues.
   - Ensure you're added to the correct Fabric tenant, especially if you recently joined the organization or switched accounts.
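When token expiration is the suspect and you're debugging programmatic access, you can inspect a JWT-style access token's `exp` claim directly. The helper below is a hypothetical diagnostic sketch: it decodes the payload without verifying the signature (never use this for authorization decisions), and the sample token is fabricated for illustration.

```python
import base64
import json
import time

def token_is_expired(jwt_token):
    """Read the exp claim from a JWT payload (no signature verification)."""
    payload_b64 = jwt_token.split(".")[1]
    # Restore the base64 padding that JWT encoding strips.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload["exp"] < time.time()

# Build a fabricated header.payload.signature token for demonstration.
def make_fake_token(exp):
    def part(claims):
        raw = json.dumps(claims).encode()
        return base64.urlsafe_b64encode(raw).decode().rstrip("=")
    return f'{part({"alg": "none"})}.{part({"exp": exp})}.signature'

print(token_is_expired(make_fake_token(exp=0)))                   # → True (expired in 1970)
print(token_is_expired(make_fake_token(exp=time.time() + 3600)))  # → False (valid for an hour)
```

If the token is expired, signing out and back in (step 2 above) issues a fresh one.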

### How to manage access in a Fabric workspace

To manage user access in a Fabric workspace, follow these steps:

1. Browse to Microsoft Fabric and sign in with your Microsoft account.

2. Open the workspace you need to manage: select **Workspaces** in the left navigation pane, and then select the workspace.

3. Hover over the selected workspace, select the ellipsis (**...**) button that appears to its right, and then select **Workspace access** from the menu.

    :::image type="content" source="media/fabric-notebooks-troubleshooting-guide/workspace-access.png" alt-text="Screenshot of the Workspace access menu option for a workspace.":::

4. A list of users and their assigned roles (Admin, Member, Contributor, or Viewer) appears. Update or add users as needed.

## Paused capacity

**Why it happens:**

An administrator paused the Fabric capacity.

**What to do:**

- Ask your Fabric administrator to resume the capacity.
- Steps (for admins):
  1. Go to the Microsoft Fabric Admin Portal.
  2. Navigate to **Capacities**.
  3. Select the paused capacity.
  4. Select **Resume**.

## Missing items

**Why it happens:**

The item was deleted or moved, or you don't have access to it.

**What to do:**

1. Use the global search box at the top center of the Fabric page to try to locate the item across all workspaces.
2. Contact the item owner to confirm whether the item still exists, and request access if needed.
3. If the item was deleted, the owner can restore it from version history or a backup, depending on workspace settings.

## Save failures

**Why it happens:**

Network connectivity dropped before changes were saved, or the session timed out.

**What to do:**

1. Check network connectivity: Save failures are often caused by temporary internet or service disruptions.

2. Save a copy: Duplicate the notebook to avoid losing unsaved changes.

3. Turn on AutoSave: AutoSave is on by default. Check the **Edit** menu to ensure it hasn't been disabled, so your changes are saved automatically at regular intervals.

    :::image type="content" source="media/fabric-notebooks-troubleshooting-guide/autosave.png" alt-text="Screenshot of the AutoSave button on the Edit menu in the Fabric user interface.":::

4. Create a checkpoint: If changes were saved before the failure, use the **Version history** feature to manually save a snapshot of the notebook.

   - Select the **History** button at the top right.

    :::image type="content" source="media/fabric-notebooks-troubleshooting-guide/history.png" alt-text="Screenshot of the History button in the Fabric notebook user interface.":::

   - Select **+ Version** to add a new snapshot of the notebook.

    :::image type="content" source="media/fabric-notebooks-troubleshooting-guide/new-version.png" alt-text="Screenshot of the + Version button in the Fabric notebook history window.":::

## Collaboration conflicts

**Why it happens:**

Another user modified the same notebook outside of a collaboration session, for example through VS Code, the Update Definition API, manual save mode, a deployment pipeline, or Git sync.

:::image type="content" source="media/fabric-notebooks-troubleshooting-guide/collaboration-conflict.png" alt-text="Screenshot of the collaboration conflict error in the Fabric notebook user interface.":::

**What to do:**

1. Select the **View changes** button on the error message bar, and choose the version to keep as the live notebook.
2. Select the **History** button at the top right of the window and use the **Version history** panel to find the externally saved version. You can then restore that version or save a copy of it.

## Spark code issue

You see an error indicating excessive query complexity.

**Why it happens:**

Spark's Catalyst optimizer produced a very large logical or physical plan, typically because a single query chains many transformations.

**What to do:**

Break down the query: refactor complex pipelines into smaller, staged queries. You can also use Copilot to surface performance insights (for example, data size considerations, efficient join strategies, and ways to avoid shuffles) and to suggest refactoring repeated logic into reusable functions. Consider using Copilot to validate the end-to-end workflow and propose staged query patterns.

**Example fix:**

Instead of chaining everything in one go, like this:

```python
# One deeply chained query: the optimizer must plan all of it at once.
df = (
    spark.read.parquet("...")
         .filter(...)
         .join(...)
         .groupBy(...)
         .agg(...)
         .join(...)
         .filter(...)
         .withColumn(...)
         .join(...)  # and so on...
)
```

Break it into parts:

```python
# Stage the pipeline: persisting an intermediate result truncates the plan,
# so each stage is optimized independently.
df1 = spark.read.parquet("...").filter(...)
df2 = df1.join(...).groupBy(...).agg(...)
df2.write.parquet("/tmp/intermediate1")
df3 = spark.read.parquet("/tmp/intermediate1").join(...).filter(...)
```

## Related content

Fabric known issues