LFM2.5-VL-1.6B WebGPU

Vision-Language Model in Your Browser

This demo showcases in-browser vision-language inference with LFM2.5-VL-1.6B, powered by ONNX Runtime and WebGPU.

Everything runs locally on your device with WebGPU acceleration; no data is ever sent to a server. A rough sketch of the loading and captioning flow is shown below.
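As a minimal sketch of what such an in-browser pipeline might look like, the snippet below uses Transformers.js (which wraps ONNX Runtime Web) to load a vision-language model on the WebGPU backend and caption a frame grabbed from a `<video>` element. The model id, loader class (`AutoModelForVision2Seq`), quantization settings, and prompt are assumptions for illustration and may differ from what this demo actually uses.

```ts
import {
  AutoProcessor,
  AutoModelForVision2Seq,
  RawImage,
} from "@huggingface/transformers";

// Hypothetical model id; substitute the ONNX export the demo actually loads.
const MODEL_ID = "LiquidAI/LFM2.5-VL-1.6B";

// Load the processor and model once, targeting the WebGPU backend.
const processor = await AutoProcessor.from_pretrained(MODEL_ID);
const model = await AutoModelForVision2Seq.from_pretrained(MODEL_ID, {
  device: "webgpu", // run inference on the GPU via WebGPU
  dtype: "q4",      // quantized weights keep the download and memory footprint small
});

// Caption the current frame of a <video> element (e.g. a camera capture).
async function captionFrame(video: HTMLVideoElement): Promise<string> {
  // Draw the frame onto an offscreen canvas and wrap it as a RawImage.
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")!.drawImage(video, 0, 0);
  const image = await RawImage.fromCanvas(canvas);

  // Build a chat-style prompt with one image placeholder.
  const messages = [
    {
      role: "user",
      content: [
        { type: "image" },
        { type: "text", text: "Describe this image in one sentence." },
      ],
    },
  ];
  const prompt = processor.apply_chat_template(messages, {
    add_generation_prompt: true,
  });

  // Preprocess text + image, then generate a short caption.
  const inputs = await processor(prompt, [image]);
  const outputIds = await model.generate({ ...inputs, max_new_tokens: 64 });

  // Decode only the newly generated tokens.
  const newTokens = outputIds.slice(null, [inputs.input_ids.dims.at(-1), null]);
  return processor.batch_decode(newTokens, { skip_special_tokens: true })[0];
}
```

For live captions, a demo like this would typically call `captionFrame` in a loop, waiting for each generation to finish before grabbing the next frame so the GPU is never running more than one request at a time.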


Captions

Start capturing to see live captions...