🤖 Qwen3.5 Client-Side
Run Qwen3.5 models entirely in your browser - no server required
Select Model:
Qwen3.5 - 0.8B (Fastest)
Qwen3.5 - 1.8B (Quick)
Qwen3.5 - 3B (Balanced)
Qwen3.5 - 7B (Better)
Qwen3.5 - 9B (Best)
⚡ Select a model to see details
Note:
The first load downloads the model weights (roughly 100 MB to 800 MB, depending on the model). Requires HTTPS or localhost. Uses the WebLLM library for client-side inference.
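A page like this typically wires the dropdown to WebLLM's engine loader. A minimal sketch of that flow, assuming the `@mlc-ai/web-llm` package and a prebuilt model ID (the exact Qwen model IDs available depend on the WebLLM prebuilt model list; the one below is illustrative):

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Model ID is an assumption — check the WebLLM prebuilt model list
// for the exact identifiers your installed version supports.
const modelId = "Qwen2.5-0.5B-Instruct-q4f16_1-MLC";

// Downloads and caches the weights on first load (hence the
// ~100 MB+ initial download), then compiles for WebGPU.
const engine = await CreateMLCEngine(modelId, {
  initProgressCallback: (report) => {
    // Drive the "Initializing..." status line from here.
    console.log(report.text);
  },
});

// OpenAI-style chat completion, running entirely in the browser.
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(reply.choices[0].message.content);
```

Subsequent loads are served from the browser cache, which is why only the first load incurs the large download. WebGPU (and therefore HTTPS or localhost) is required for the engine to initialize.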
Hello! I'm Qwen3.5 running entirely in your browser. Select a model from the dropdown above and start chatting!
Send
Initializing...
Mem: - MB
Model: -