UNesT — NIfTI pipeline (whole-brain 133-class)
Whole-brain segmentation from T1-weighted MRI, 133 anatomical classes.
NestViT (3-level hierarchical transformer) + CNN decoder, 87 M params.
Full end-to-end: drop in your own .nii.gz brain T1w volume; WASM does
NormalizeIntensity (nonzero mean/std) + sliding window
(96³ patches, 70% overlap, constant weighting) + NIfTI pack.
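For reference, the normalize-and-pack bookends of that pipeline look roughly like this in Python with nibabel/numpy (a minimal sketch; the file names are placeholders and the WASM internals may differ):

```python
import nibabel as nib
import numpy as np

# Load a T1w volume (nibabel handles both .nii and .nii.gz).
img = nib.load("t1w.nii.gz")               # placeholder path
vol = img.get_fdata(dtype=np.float32)

# Nonzero-stat normalization: mean/std over nonzero voxels only, so the
# air background does not skew the statistics; zero voxels stay zero.
nz = vol[vol != 0]
vol = np.where(vol != 0, (vol - nz.mean()) / nz.std(), 0.0)

# ... 96³ sliding-window inference produces a label volume `seg` ...
seg = np.zeros(vol.shape, dtype=np.uint8)  # stand-in for the real output

# NIfTI pack: reuse the input's affine/header so the segmentation
# overlays the original scan at native resolution.
nib.save(nib.Nifti1Image(seg, img.affine, img.header), "seg.nii.gz")
```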
See also: single-patch parity/debug.
How to use:
① Load WASM →
② Load model weights →
③ Upload a brain T1w volume (NIfTI, gzipped or raw) →
④ Run sliding window →
⑤ Download segmentation.
✓ MONAI strict conformance:
whole-volume argmax agreement of 99.9999% vs MONAI's SlidingWindowInferer
(MPS reference, 256³ T1w, ≈16.8 M voxels — auto-verified at runtime and printed
below as "argmax agreement with MONAI/PyTorch").
NestViT forward rel_rms ≈ 2.1e-6 per patch vs PyTorch; both checks are sketched below.
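Both checks are plain elementwise comparisons. A numpy sketch of how they can be reproduced offline (the helper names and the exact rel_rms formula are assumptions, not the demo's verbatim code):

```python
import numpy as np

def argmax_agreement(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of voxels where two label volumes agree (the 99.9999% figure)."""
    return float((a == b).mean())

def rel_rms(x: np.ndarray, ref: np.ndarray) -> float:
    """Relative RMS error of a forward pass vs the reference (~2.1e-6 here)."""
    return float(np.sqrt(np.mean((x - ref) ** 2) / np.mean(ref ** 2)))
```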
Performance (M1 MacBook, 96³ patch forward, warmed):
Python MPS 513 ms/patch · Node/Chrome WebGPU ~600 ms/patch
(≈ 15% gap; the 87 M-param NestViT is dispatch-heavy).
GPU-resident argmax and tile-based accumulation keep host I/O out of the hot path.
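Conceptually, constant weighting plus tile-based accumulation reduces to summing patch logits and dividing by per-voxel coverage. A host-side numpy sketch of that logic (illustrative helper names; the demo keeps these buffers GPU-resident):

```python
import numpy as np
from itertools import product

def grid_starts(shape, patch=96, overlap=0.7):
    """Patch start coordinates (assumes every dim >= patch)."""
    def axis_starts(dim):
        stride = max(1, int(patch * (1 - overlap)))  # ~28-voxel stride here
        starts = list(range(0, dim - patch + 1, stride))
        if starts[-1] != dim - patch:                # last patch flush with edge
            starts.append(dim - patch)
        return starts
    return list(product(*(axis_starts(d) for d in shape)))

def accumulate(patch_logits, starts, shape, n_classes=133, patch=96):
    """Constant weighting: sum overlapping logits, divide by coverage count."""
    acc = np.zeros((n_classes, *shape), dtype=np.float32)
    cnt = np.zeros(shape, dtype=np.float32)
    for logits, (z, y, x) in zip(patch_logits, starts):
        acc[:, z:z + patch, y:y + patch, x:x + patch] += logits
        cnt[z:z + patch, y:y + patch, x:x + patch] += 1.0
    return np.argmax(acc / cnt, axis=0)              # final 133-class label map
```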
MONAI pipeline: unlike the nnU-Net models (dental/TopCoW), UNesT uses
MONAI's SlidingWindowInferer (96³ ROI, overlap 0.7, constant blending) —
no spatial resampling; inference runs at native NIfTI resolution.
Preprocessing is just NormalizeIntensityd(nonzero=True, channel_wise=True):
(x − mean_nonzero) / std_nonzero.
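A minimal sketch of the matching MONAI reference path, with a 1×1×1 Conv3d standing in for the real 87 M-param UNesT (assumption: the actual verifier loads the trained network from the checkpoint):

```python
import torch
from monai.inferers import SlidingWindowInferer
from monai.transforms import NormalizeIntensityd

# Stand-in network: any module mapping (N, 1, 96³) -> (N, 133, 96³) works here.
model = torch.nn.Conv3d(1, 133, kernel_size=1).eval()

norm = NormalizeIntensityd(keys="image", nonzero=True, channel_wise=True)
inferer = SlidingWindowInferer(roi_size=(96, 96, 96), overlap=0.7, mode="constant")

vol = torch.rand(1, 128, 128, 160)              # (C, D, H, W) at native resolution
x = norm({"image": vol})["image"].unsqueeze(0)  # add batch dim -> (1, 1, D, H, W)

with torch.no_grad():
    logits = inferer(x, model)                  # (1, 133, D, H, W)
    seg = logits.argmax(dim=1).squeeze(0)       # label volume, no resampling
```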
⚠ Download size: ~340 MB of model weights will be fetched below.
First load will take a while on a slow connection.
Axial slice viewer: input (intensity-normalized) · predicted segmentation (argmax).