cell-AAP
Utilities for the semi-automated generation of instance segmentation annotations for neural network training. The utilities are built on top of UMAP, HDBSCAN, and a version of FAIR's Segment Anything Model with a fine-tuned encoder, developed by Computational Cell Analytics for the micro-sam project. In addition to providing utilities for annotation building, we train a network, FAIR's detectron2, to
- Demonstrate the efficacy of our utilities
- Be used for microscopy annotation of supported cell lines
Supported cell lines currently include:
- HeLa
In development cell lines currently include:
- U2OS
- HT1080
- Yeast
We've developed a napari application for using this pre-trained network and propose a transfer learning scheme for handling new cell lines.
We highly recommend installing cell-AAP in a clean conda environment. To do so, you must have miniconda or anaconda installed.

If a conda distribution has been installed:

- Create and activate a clean environment

      conda create -n cell-aap-env
      conda activate cell-aap-env

- Within this environment, install pip

      conda install pip

- Then install cell-AAP from PyPI

      pip install cell-AAP --upgrade

- Finally, detectron2 must be built from source, atop cell-AAP

      # For MacOS
      CC=clang CXX=clang++ ARCHFLAGS="-arch arm64" python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'

      # For other operating systems
      python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
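Optionally, as a quick sanity check of the build, you can confirm that PyTorch and detectron2 import inside the activated environment. This sketch only verifies the dependencies; it does not exercise cell-AAP itself:

```python
# Optional sanity check -- run inside the activated cell-aap-env environment.
import torch
import detectron2

print("torch:", torch.__version__)                    # the requirements call for >= 2.3.1
print("detectron2:", detectron2.__version__)
print("CUDA available:", torch.cuda.is_available())   # False is expected on CPU-only machines
```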
- To open napari, simply type "napari" into the command line (or launch it from Python, as sketched after this list); ensure that you are working in the correct environment
- To instantiate the plugin, navigate to the "Plugins" menu and select "cell-AAP"
- You should now see the plugin, where you can select an image, display it, and run inference on it
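If you prefer launching napari from Python instead of the command line, a minimal sketch is below. The plugin name string passed to `add_plugin_dock_widget` is an assumption based on the menu entry above and may need adjusting:

```python
import napari

viewer = napari.Viewer()

# Dock the cell-AAP widget without going through the Plugins menu.
# The plugin name "cell-AAP" is assumed to match the menu entry above;
# adjust it if your installation registers the plugin under a different name.
viewer.window.add_plugin_dock_widget("cell-AAP")

napari.run()  # start the event loop
```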
If running inference on large volumes of data, i.e. timeseries data >= 300 MB in size, we recommend proceeding in the following manner.
- Assemble a small (< 100 MB) substack of your data using Python or a program like ImageJ (see the sketch after this list)
- Use this substack to find the optimal parameters for your data (Number of Cells, Confidence)
- Run inference over the full volume using the discovered optimal parameters
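As a minimal sketch of the first step, a substack can be assembled with tifffile (already a cell-AAP dependency); the file names and frame range below are placeholders:

```python
import tifffile

# Placeholder file names and frame range -- substitute your own data.
full_stack = tifffile.imread("timeseries.tif")   # e.g. shape (T, Y, X) or (T, C, Y, X)
substack = full_stack[:20]                       # keep only the first 20 frames (< 100 MB)

tifffile.imwrite("substack.tif", substack)
print(substack.shape, f"{substack.nbytes / 1e6:.1f} MB")
```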
Note: Finding the optimal set of parameters requires some trial and error; to assist, we've created a table.
| Classifications $\Downarrow$ Detections $\Rightarrow$ | Too few | Too many |
|---|---|---|
| Dropping M-phase | Confidence $\Downarrow$ Number of Cells $\Uparrow$ | Confidence $\Downarrow$ Number of Cells $\Downarrow$ |
| Misclassifying M-phase | Confidence $\Uparrow$ Number of Cells $\Uparrow$ | Confidence $\Uparrow$ Number of Cells $\Downarrow$ |
Once inference is complete, the following colors indicate class prediction:
- Red: Non-mitotic
- Blue: Mitotic
- Purple: Interclass double prediction
Note: Interclass double predictions are often early prophase cells in which the network is not "confident". To mitigate such predictions, increase the minimum confidence threshold; this will typically cause most double predictions to regress to the Non-mitotic class.
Version:
- 0.0.8
Last updated:
- 27 November 2024
First released:
- 28 May 2024
License:
- Information not submitted
Supported data:
- Information not submitted
Plugin type:
GitHub activity:
- Stars: 0
- Forks: 0
- Issues + PRs: 0
Python versions supported:
Operating system:
- Information not submitted
Requirements:
- napari[all]>=0.4.19
- numpy==1.26.4
- opencv-python>=4.9.0
- tifffile>=2024.2.12
- torch>=2.3.1
- torchvision>=0.18.1
- scikit-image>=0.22.0
- qtpy>=2.4.1
- pillow>=10.3.0
- scipy>=1.3.0
- timm>=1.0.7
- pandas>=2.2.2
- superqt>=0.6.3
- btrack>=0.6.5
- seaborn>=0.13.2
- openpyxl>=3.1.4
- joblib>=1.0
- scikit-learn>=0.22
- cython<3,>=0.27