Grounded SAM demo
High-throughput image segmentation
I put together a minimal demo of Grounded SAM, which combines Grounding DINO text prompts with the Segment Anything Model (SAM). This enables rapid, automated extraction of regions of interest from images, e.g. to segment biological specimens. The demo contains installation instructions and a Jupyter notebook, all hosted on GitHub. Feel free to give it a try (the animation shows the SAM3 text prompt, but the principle is the same for Grounded SAM):
Note that without a GPU the model is quite slow (a minute or more per image); with one it runs fast, even on small or mid-size GPUs. I provide some test images of butterflies, which are highly standardized, but it should also work well on unstandardized images, e.g. from iNaturalist.
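For reference, the core of such a pipeline looks roughly like the sketch below, assuming the `groundingdino` and `segment_anything` packages are installed and model checkpoints have been downloaded locally. File names, the text prompt, and thresholds are illustrative placeholders, not the exact values from the demo; see the repo's installation instructions and notebook for the working setup.

```python
# Sketch of the Grounded SAM pipeline: Grounding DINO finds bounding boxes
# matching a text prompt, then SAM turns each box into a pixel-level mask.
# All file paths and thresholds below are placeholders.
import torch
from torchvision.ops import box_convert
from groundingdino.util.inference import load_model, load_image, predict
from segment_anything import sam_model_registry, SamPredictor

# 1. Detect boxes matching a text prompt with Grounding DINO
dino = load_model("GroundingDINO_SwinT_OGC.py", "groundingdino_swint_ogc.pth")  # placeholder paths
image_source, image = load_image("butterfly.jpg")  # placeholder image
boxes, logits, phrases = predict(
    model=dino, image=image, caption="butterfly",
    box_threshold=0.35, text_threshold=0.25,
)

# 2. Convert normalized cxcywh boxes to absolute xyxy pixel coordinates
h, w, _ = image_source.shape
boxes_xyxy = box_convert(
    boxes * torch.tensor([w, h, w, h]), in_fmt="cxcywh", out_fmt="xyxy"
).numpy()

# 3. Segment each detected box with SAM
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # placeholder checkpoint
predictor = SamPredictor(sam)
predictor.set_image(image_source)
masks = [
    predictor.predict(box=box, multimask_output=False)[0]  # boolean mask array per box
    for box in boxes_xyxy
]
```

Each entry in `masks` can then be used to crop or measure the corresponding region of interest, e.g. a single specimen per image.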
> GroundedSAM demo on GitHub <