Local Chat Assistant
Local, Ollama-powered AI and deterministic workbench for napari image-analysis workflows.
napari-chat-assistant adds a dock widget inside napari that understands the active viewer session, runs built-in image-analysis actions, generates executable napari Python code when a request goes beyond the current toolset, and lets users promote repeatable tasks into one-click shortcuts.
The goal is not to bolt a generic chatbot onto a viewer. The goal is to turn napari into a more practical analysis workspace for people who work with microscopy and other large multidimensional imaging datasets, especially users who want local AI help, reproducible workflows, direct control over their data, and fewer clicks per task.
What's New In 2.2.0
Version 2.2.0 makes the assistant easier to guide, easier to inspect, and easier to improve from real usage.
New in this release:
- Rate Result lets you quickly mark the latest result as Helpful, Wrong Route, Wrong Answer, or Didn't Work
- telemetry is easier to inspect with clearer session views for Model Activity, Intent Signals, Problems, and Raw Log
- the assistant can now suggest prompts that are more likely to trigger supported local workflows
- long-running model requests can now be stopped directly from the chat panel
- runtime guidance is clearer when generated code depends on packages that are not included by default
These updates improve local workflow discovery, feedback-driven refinement, and day-to-day control of the assistant.
For complete release history, see CHANGELOG.md.
Who It Is For
This plugin is built for:
- imaging core facility users
- researchers, staff scientists, and students working with imaging data
- teachers and educators running imaging demos or training sessions
It is also designed for users who prefer to describe analysis goals in natural language instead of memorizing commands or writing scripts.
It is especially useful when you:
- inspect large 2D or 3D image data in napari
- move between interactive viewing, measurement, and Python-based analysis
- want a local open-weight model instead of a cloud service
- need to save, reuse, restore, and teach common imaging workflows
- want fast deterministic actions and one-click shortcuts for repeated tasks
Why It Is Different
This plugin is built from practical imaging workflow needs, not from the idea of putting a generic chatbot beside a viewer.
It is designed around how work actually happens in napari:
- start from the data already open in the viewer
- inspect layers, objects, and regions of interest
- run the next analysis step through chat, actions, templates, shortcuts, or code
- review the result directly in the same session
- save useful workflows for later reuse
The assistant is grounded in the live napari session. It can inspect loaded layers, use ROI context, run built-in analysis actions, generate or refine viewer-bound Python when needed, and support deterministic one-click workflows through Actions and Shortcuts. The result is closer to an imaging workbench than a chat panel.
Interaction Model
The plugin now supports a deliberate spectrum of interaction styles:
- Prompt: AI-first natural language requests
- Code: direct viewer-bound Python for users who want exact control
- Templates: reusable examples and built-in starting points
- Actions: deterministic built-in functions that can be run directly
- Shortcuts: user-defined one-click action buttons for repeated daily work
This is now a core design principle of the plugin: reduce how many clicks and how much time it takes for a user to complete a task.
Interface Overview

The main dock is organized into a small number of practical work areas:
- Connection and model controls: select the local model, monitor status, and manage the backend connection.
- Layer Context: review the active workspace layers and insert exact layer names into prompts or code.
- Library: browse reusable Prompts, Code, Templates, and deterministic Actions.
- Shortcuts: keep your most-used actions as one-click buttons for repeated work.
- Session: save or restore workspace state and access activity, telemetry, and diagnostics.
- Chat: see the assistant transcript, generated code, and direct action feedback in one place.
- Prompt: enter natural language requests or paste Python for Run My Code and Refine My Code.
What You Can Do
Current workflows include:
- inspect the selected layer or named layers with structured summaries
- review live layer context and insert exact layer names into prompts or code
- run built-in tools for enhancement, thresholding, binary mask cleanup, measurement, projection, cropping, montage, presentation, and layer visibility control
- add non-destructive annotation overlays, including free text, particle labels, callout labels with leader lines, and boxed titles above the image
- use deterministic Actions for common workflows without depending on prompt phrasing
- build and save your own Shortcuts layouts for repeated one-click work
- inspect ROI context and measure or extract values from Labels, Shapes, and line-based workflows
- use interactive analysis widgets such as ROI Intensity Analysis, Line Profile Analysis, and Group Comparison Statistics
- access SAM2 setup, live preview, box prompting, points prompting, and mask refinement from the same workbench
- browse built-in prompt templates, code templates, and learning templates for plugin workflows, teaching, and model testing
- generate napari Python code when no built-in tool is the right fit
- paste and run your own viewer-bound Python from the prompt box with Run My Code
- repair or explain broken pasted Python with Refine My Code
- save, pin, tag, rename, and reuse prompts and code from the local Library
- use built-in synthetic data generators for repeatable testing, teaching, and workflow development
- learn from built-in content for microscopy, electron microscopy, imaging physics, quantitative imaging, statistics, academic prompting, and language support
- save and restore workspace state with a JSON manifest plus OME-Zarr assets for generated image and labels data
- use Atlas Stitch from the advanced menu for specialized stitching and export workflows
Example requests:
- Inspect the selected layer
- Preview threshold on the selected image
- Apply gaussian blur to the selected image with sigma 1.2
- Remove small objects from the selected mask with min_size 64
- Run watershed on the selected mask
- Measure labels table for the selected labels layer
- Annotate template_blob_labels with particle 1 to 4
- Annotate template_blob_labels using publication-style callout labels with leader lines
- Add title WT Group N=10 above the image on the left
- Inspect the current ROI
- Extract ROI values from the selected image using the current ROI
- Open ROI intensity analysis
- Initialize a SAM2 points prompt layer for the selected image
- Write napari code to plot object area by condition
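Requests like "Remove small objects from the selected mask with min_size 64" map to standard binary-mask operations. As an illustration only (this is not the plugin's actual implementation, which is not shown here), a minimal pure-Python version of remove-small-objects via 4-connected component labeling looks like this:

```python
from collections import deque

def remove_small_objects(mask, min_size):
    """Zero out 4-connected foreground components smaller than min_size.

    mask is a list of lists of 0/1; returns a new mask of the same shape.
    """
    rows, cols = len(mask), len(mask[0])
    out = [[0] * cols for _ in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill one component and collect its pixels.
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # Keep only components that meet the size threshold.
                if len(comp) >= min_size:
                    for y, x in comp:
                        out[y][x] = 1
    return out
```

In practice libraries such as skimage provide optimized equivalents; the sketch just shows what the min_size parameter controls.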
Local-First By Design
The assistant runs on local open-weight models through Ollama:
- no API key required
- no cloud dependency
- no internet requirement during normal use
- no image data leaves your workstation
This makes it a better fit for research and facility environments where users want privacy, controllability, and local reproducibility.
Installation
Requirements:
- Python 3.9+
- napari
- Ollama running locally
- one local Ollama model, such as nemotron-cascade-2:30b
Install Ollama and start the server:
macOS and Linux:
curl -fsSL https://ollama.com/install.sh | sh
ollama serve
Windows:
- download and install Ollama from https://ollama.com/download/windows
- start the Ollama application or service
Pull a model:
ollama pull nemotron-cascade-2:30b
Optional model alternatives:
ollama pull gpt-oss:120b
ollama pull qwen3-coder-next:latest
ollama pull qwen3.5
ollama pull qwen2.5:7b
Install the plugin:
pip install napari-chat-assistant
Development install:
git clone https://github.com/wulinteousa2-hash/napari-chat-assistant.git
cd napari-chat-assistant
pip install -e .
The plugin does not bundle Ollama or model weights. Larger models may require substantial RAM or VRAM.
Usage
- Start napari.
- Open Plugins -> Chat Assistant.
- Leave Base URL as http://127.0.0.1:11434 unless your Ollama server is elsewhere.
- Choose a model from the Model dropdown or type a model tag manually.
- Use Load if you want to warm the selected model before the first request.
- Start chatting, or use the Library for repeatable tasks and reusable code.
The assistant works best when prompts describe a concrete action. If you already have Python code, paste it into the Prompt box and use Run My Code. If pasted code fails or needs adaptation to the current viewer, use Refine My Code.
Examples:
- Inspect the selected layer
- Preview threshold on em_2d_snr_mid
- Apply gaussian denoise to em_2d_snr_low with sigma 1.2
- Fill holes in mask_messy_2d
- Remove small objects from mask_messy_2d with min_size 64
- Measure labels table for rgb_cells_2d_labels
- Annotate template_blob_labels with publication-style callout labels
- Create a max intensity projection from em_3d_snr_mid along axis 0
- Crop em_2d_snr_high to the bounding box of em_2d_mask with padding 8
- Extract ROI values from em_2d_snr_mid using em_2d_mask
- Prepare this viewer for image review
- Undo last workflow
Core Workflow
- Open your image or volume in napari.
- Use Layer Context to inspect loaded layers or insert exact layer names into the Prompt box.
- Ask for a concrete action, or browse Actions when you want deterministic execution.
- Use Templates for reusable prompt/code examples and Shortcuts for repeated one-click workflows.
- Use generated code, Run My Code, or Refine My Code when a task needs custom Python.
- Save useful prompts, code snippets, workspace state, and shortcut layouts for later reuse.
This keeps inspection, analysis, code review, and workflow reuse close to the current napari session.
Synthetic Data Templates
Use the Library Templates > Data area or built-in code snippets to load repeatable synthetic datasets.
Current built-in synthetic generators include:
- Synthetic 2D SNR Sweep Gray
- Synthetic 3D SNR Sweep Gray
- Synthetic 2D SNR Sweep RGB
- Synthetic 3D SNR Sweep RGB
- messy masks 2D/3D
These create named layers so you can test built-in tools quickly without hunting for sample data. Labels layers from these synthetic datasets can also be used as ROIs for ROI inspection and value extraction.
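The exact output of the built-in generators is not documented here, but the idea of an SNR sweep is easy to sketch: one clean image plus noisy copies at decreasing noise levels, each with a predictable layer name. A rough numpy-only illustration (function name, shapes, and naming scheme are assumptions, not the plugin's API):

```python
import numpy as np

def synthetic_snr_sweep_2d(shape=(64, 64), snrs=(1.0, 4.0, 16.0), seed=0):
    """Return {name: image} pairs: one clean blob image plus noisy copies.

    SNR here is approximated as signal amplitude divided by noise sigma.
    """
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    # A single Gaussian blob serves as the clean "signal" image.
    cy, cx = shape[0] / 2, shape[1] / 2
    clean = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 8.0 ** 2))
    images = {"clean": clean}
    for snr in snrs:
        noise = rng.normal(0.0, 1.0 / snr, size=shape)
        images[f"snr_{snr:g}"] = clean + noise
    return images
```

Inside napari each entry would typically become a named layer, e.g. `viewer.add_image(img, name=name)`, so built-in tools can be tested against known noise levels.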
Feature Summary
Built-in workflows include:
- layer inspection and live layer-context insertion
- Quick Controls for fit view, zoom, 2D/3D mode, axes, scale bar, tooltips, overlays, grid view, and layer visibility
- enhancement, thresholding, mask cleanup, connected components, labels measurement, projection, cropping, montage, and presentation helpers
- ROI inspection and value extraction from Labels, Shapes, and line-based workflows
- annotation overlays, including free text, particle labels, callouts, and boxed titles
- workspace save/restore with a JSON manifest plus OME-Zarr assets for generated image and labels data
- deterministic Actions, reusable Templates, and user-defined Shortcuts
- optional advanced workflows such as SAM2 and Atlas Stitch
Code generation workflows
When a request is not covered by a built-in tool, the assistant can return napari Python code instead of forcing the wrong tool.
Generated code can be:
- copied to the clipboard
- reviewed in chat
- executed from the plugin
- repaired or explained in place when you use Refine My Code on pasted or failed user code
You can also paste your own Python directly into the Prompt box and run it from the plugin with Run My Code, without switching to QtConsole.
Use assistant-generated code when you want a reusable script or need custom logic beyond the current built-in tools.
Use Run My Code when you already have Python you want to test quickly inside the current napari session.
Use Refine My Code when your own code fails validation, errors at runtime, or needs adjustment to the current napari viewer state.
Library, Actions, And Shortcuts
The Library stores repeatable prompts, code snippets, templates, and deterministic actions.
Useful behavior:
- single click loads a prompt or code snippet into the editor
- double click sends a prompt or runs a code snippet
- templates can be previewed, loaded, or run
- actions can be previewed, run, or added to Shortcuts
- saved and recent prompts/code can be renamed, tagged, pinned, or cleared
- shortcuts can be arranged, saved, loaded, and reused across sessions
This is now part of the core product direction: keep AI available, but let mature workflows become fast, button-driven, and reusable.
Optional Integrations
If napari-nd2-spectral-ome-zarr is installed, the assistant can open:
- the ND2-to-OME-Zarr export widget
- the Spectral Viewer widget
- the Spectral Analysis widget
This lets chat act as an entry point for Nikon ND2 conversion and spectral workflows without rebuilding those UIs inside this plugin.
Install links:
- GitHub: https://github.com/wulinteousa2-hash/napari-nd2-spectral-ome-zarr
- napari Hub: https://napari-hub.org/plugins/napari-nd2-spectral-ome-zarr.html
Experimental SAM2
Behavior:
- SAM2 is accessed from Advanced, not from the main toolbar
- SAM2 Setup is always available from Advanced
- SAM2 Live stays disabled until the backend is configured and passes readiness checks
- the rest of the assistant remains usable even if SAM2 is not configured
Current setup expects:
- a working Python environment that already includes the dependencies required by SAM2
- an external SAM2 project path
- a valid checkpoint path
- a valid config path
napari-chat-assistant now ships its own bundled SAM2 adapter in
napari_chat_assistant.integrations.sam2_adapter, so users only need the SAM2 repo,
checkpoint, and config files in the normal places.
The SAM2 Setup dialog now includes:
- Auto Detect to scan common local clone locations and fill likely project, checkpoint, and config paths
- Setup Help for short setup commands and field tips
Minimal install:
git clone https://github.com/facebookresearch/sam2.git && cd sam2
pip install -e .
Typical setup flow:
- Start napari from the environment that contains your SAM2 dependencies.
- Open Plugins -> Chat Assistant.
- Open Advanced -> SAM2 Setup.
- Click Auto Detect first.
- Confirm or edit the SAM2 project path, checkpoint path, config path, and device.
- Click Test.
- Save the settings.
- Open Advanced -> SAM2 Live when the backend reports ready.
How It Works
The assistant is designed to operate within constrained napari workflows rather than as a general-purpose chatbot.
The current strategy is:
- collect structured napari viewer context
- build deterministic per-layer profile objects from the current viewer state
- add bounded approved session memory when available
- send that context and the user request to a local Ollama model
- the model returns a structured JSON response that specifies one of:
- a normal reply
- a built-in tool call
- generated Python code
- run the selected registry-backed tool or expose the generated code through the UI
- update session memory from explicit user feedback or successful follow-up behavior
This keeps the assistant more grounded than a plain chat interface and makes common operations more reliable.
Current viewer state and explicit user clarification remain the primary source of truth. Session memory is selective and secondary, not a full transcript memory system.
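The routing step above — a structured JSON reply that is either a chat message, a built-in tool call, or generated code — can be sketched as a small dispatcher. The field names below are hypothetical for illustration; the plugin's real JSON contract is not documented here:

```python
import json

def route_response(raw, tools):
    """Dispatch a structured model reply to chat, a tool call, or code.

    `tools` maps tool names to callables, standing in for the plugin's
    registry-backed built-in actions.
    """
    msg = json.loads(raw)
    kind = msg.get("type")
    if kind == "reply":
        return ("chat", msg["text"])
    if kind == "tool_call":
        fn = tools[msg["tool"]]          # look up the registry-backed action
        return ("tool", fn(**msg.get("args", {})))
    if kind == "code":
        return ("code", msg["source"])   # surfaced in the UI for review
    raise ValueError(f"unrecognized response type: {kind!r}")
```

Constraining the model to a schema like this is what keeps common operations deterministic: only the final branch hands free-form code back to the user.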
Recommended Models
For a broader list of models tested during development, see docs/tested_models.md.
Good starting choices:
- nemotron-cascade-2:30b
- gpt-oss:120b
- qwen3-coder-next:latest
- qwen3.5
- gemma4:e4b
nemotron-cascade-2:30b is the current default. Larger models may improve reasoning or code generation but require more RAM or VRAM; smaller tags are better for constrained systems.
Current Limitations
- the dataset profiler is still Phase 1 and remains strongest on already-loaded napari layers rather than reader- or file-format-specific workflows
- TIFF vs OME-Zarr adapter behavior is not implemented yet
- ND2 and Zeiss reader-aware adapters are not implemented in this plugin
- the tool registry is in progress; some tools are now registry-backed, but the migration is not complete yet
- session memory is selective and bounded; it is not full conversation memory
- model output can still be inconsistent, especially when falling back to generated code
- some requests still miss built-in tools and fall through to code generation when a stronger built-in workflow would be preferable
- generated code can still fail if the model invents incorrect napari APIs or unsupported imports
- no image attachment or multimodal input pipeline yet
- performance optimization for very large 2D/3D datasets is still in progress
- hard native crashes in Qt/C-extension code may not be captured cleanly by the plugin crash log even when normal plugin errors are logged
Most reliable current workflow:
- use built-in tools for common layer inspection and mask/image actions
- trust current viewer context and current layer profiles over any remembered prior turn
- use the Library for repeated prompts, demo packs, and reusable code
- use generated code when you want explicit review and control
- use Run My Code when you already have working Python and want to test it directly inside napari
For demo and education workflows:
- ask for code that uses the current napari viewer
- avoid prompts that create a second napari.Viewer() or call napari.run()
- prefer docked widgets over unmanaged popup windows for histogram or SNR teaching tools
Troubleshooting
Ollama not running
If Test fails after restarting your computer, Ollama is usually not running yet.
Start it in a terminal:
ollama serve
Then return to the plugin and click Test again.
Pulling a model
Model downloads are intentionally handled outside the plugin.
To try a different model:
- browse tags at https://ollama.com/search
- type the tag into the plugin Model field if needed
- pull it in a terminal, for example:
ollama pull nemotron-cascade-2:30b
Then use Test to refresh the plugin state.
Logs and crash logs
The plugin writes two local log files:
- ~/.napari-chat-assistant/assistant.log
- ~/.napari-chat-assistant/crash.log
Use these together with the terminal traceback when diagnosing crashes or unclear UI failures.
Local model telemetry
If telemetry is enabled, the plugin writes lightweight local model events to:
~/.napari-chat-assistant/model_telemetry.jsonl
Telemetry is opt-in from Session -> Telemetry and includes summary, log, and reset controls for advanced users.
Generated code is also preflight-validated before execution for common dtype mistakes, unsupported napari imports, and unavailable viewer.* APIs. When validation blocks execution, the code remains visible and copyable for review or regeneration.
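One common preflight technique for checks like "unsupported imports" is to walk the code's AST before executing anything. The plugin's actual validator is not shown here; this is a minimal sketch using the standard-library ast module, with an illustrative allow-list:

```python
import ast

# Illustrative allow-list; the plugin's real policy may differ.
ALLOWED_MODULES = {"numpy", "scipy", "skimage", "napari"}

def preflight(source):
    """Return a list of problems found in generated code before execution."""
    problems = []
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: {exc.msg}"]
    for node in ast.walk(tree):
        names = []
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        for name in names:
            root = name.split(".")[0]   # e.g. "scipy.ndimage" -> "scipy"
            if root not in ALLOWED_MODULES:
                problems.append(f"disallowed import: {name}")
    return problems
```

Because the check happens before exec, the code can still be displayed and copied even when it is blocked, matching the behavior described above.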
Release
This package is published to PyPI so napari Hub can discover it.
For maintainer release instructions and PyPI publishing setup, see RELEASING.md.
Development
Build a release artifact:
python -m build
License
MIT.
Version:
- 2.2.0
Last updated:
- 2026-04-11
First released:
- 2026-03-27

