🤖 Qwen3.5 Client-Side

Run Qwen3.5 models entirely in your browser - no server required

⚡ Select a model to see details
Note: The first load downloads the selected model (~100 MB to 800 MB). Requires HTTPS or localhost. Uses the WebLLM library for client-side inference.
Hello! I'm Qwen3.5 running entirely in your browser. Select a model from the dropdown above and start chatting!
Initializing...
Mem: - MB
Model: -
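The page above relies on WebLLM to download a model and run inference entirely in the browser. A minimal sketch of that flow is shown below, using the `@mlc-ai/web-llm` package's `CreateMLCEngine` API; the model ID is illustrative (WebLLM's actual prebuilt model list determines which IDs are available), and a WebGPU-capable browser served over HTTPS or localhost is assumed.

```typescript
// Minimal sketch: load a model with WebLLM and request one chat completion.
// Assumes a browser context with WebGPU support. The model ID below is an
// assumption — consult WebLLM's prebuilt model list for the IDs it ships.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  const engine = await CreateMLCEngine("Qwen2.5-0.5B-Instruct-q4f16_1-MLC", {
    // The progress callback is what would drive the "Initializing..."
    // status line in a UI like the one above.
    initProgressCallback: (progress) => console.log(progress.text),
  });

  // OpenAI-style chat completion, executed locally — no server round trip.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```

Because the model weights are fetched and cached on first use, the initial load is slow (hence the download-size note above), but subsequent sessions start from the browser cache.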