Medical image AI that runs entirely in your browser. Quantification, segmentation, and reporting — without uploads.

For Clinical Use — Quant workstation
For Research — Model smoke catalog

Private by design

Privacy here isn't a promise — it's a structural property. The inference engine, the model weights, and your images all live in your browser. Nothing is uploaded because nothing can be: there is no server to upload to.

  • Images stay on your device — never uploaded, never inspected, never logged
  • No login, no account, no network required to run inference
  • First run downloads the workflow's model weights to your device (typically a few hundred MB per workflow); after that, inference runs fully offline
  • Close the tab and nothing remains on any server, because nothing was ever there

Built for on-device inference

A pure C / WebAssembly / WebGPU stack — the same engine philosophy as the rest of the MedFilm family. Multi-model pipelines fuse to a single canonical mask before reaching your screen. No backend, no inference server, no orchestration cluster.
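To make "fuse to a single canonical mask" concrete, here is a minimal sketch of one plausible fusion rule: per-voxel majority vote across the binary masks produced by each specialist model. The function name `fuseMasks` and the voting rule are illustrative assumptions, not the product's actual fusion algorithm.

```typescript
// Hypothetical fusion step: combine per-model binary masks (one byte per
// voxel, 0 or 1) into a single canonical mask by majority vote.
function fuseMasks(masks: Uint8Array[]): Uint8Array {
  const length = masks[0].length;
  const fused = new Uint8Array(length);
  for (let i = 0; i < length; i++) {
    let votes = 0;
    for (const mask of masks) votes += mask[i];
    // A voxel is kept only if more than half of the models marked it.
    fused[i] = votes * 2 > masks.length ? 1 : 0;
  }
  return fused;
}
```

Because the fused mask is a pure function of the model outputs, the same inputs always yield the same report, which is what makes the pipeline reproducible.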

  • Shares its DICOM core with the rest of the MedFilm family — same files, same numbers
  • Inference runs on WebGPU where available, with WASM + SIMD128 fallback
  • Workflows compose multiple specialist models into one reproducible report
  • Open-source models, bit-exact ports — no proprietary black boxes
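The backend selection described above can be sketched as a small decision function. The `Caps` shape and `pickBackend` name are assumptions for illustration; in a real page the flags would come from probes such as checking `"gpu" in navigator` and validating a tiny SIMD WebAssembly module.

```typescript
type Backend = "webgpu" | "wasm-simd128";

// Illustrative capability flags, assumed to be filled in by feature probes.
interface Caps {
  webgpu: boolean;   // e.g. "gpu" in navigator
  simd128: boolean;  // e.g. WebAssembly.validate() on a SIMD test module
}

// Prefer WebGPU when the browser exposes it; otherwise fall back to
// WASM with SIMD128. Return null when neither backend is available.
function pickBackend(caps: Caps): Backend | null {
  if (caps.webgpu) return "webgpu";
  if (caps.simd128) return "wasm-simd128";
  return null;
}
```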

Research & educational use

MedFilm.ai is provided for research, methodology development, and education. Outputs are quantitative measurements, not diagnoses. Clinical decisions remain the responsibility of a qualified physician reviewing the original imaging.

If you're a researcher: head to the model catalog, where each ported model has a smoke-test page that lets you drop a CT, CBCT, or X-ray file and inspect the segmentation directly.

If you're a clinician evaluating quantification workflows: open the workstation to run an end-to-end pipeline (preprocessing → models → measurements → report).

Open it now

No signup. No upload. Your data never leaves your device.

For Clinical Use — Quant workstation
For Research — Model smoke catalog