In addition to open-source models from all the Qwen model families, they also offer proprietary MoE (Mixture of Experts) models. Their flagship general-purpose model is Qwen2.5-Plus, while Qwen2.5-Turbo is their long-context model, capable of handling up to 1 million tokens of context. There's also Qwen2-VL-Max, which seems to be just Qwen2-VL 72B—though that's not confirmed.
Feature-wise, it's quite solid for an early release. It includes artifacts, document uploads, and image input. A standout feature, which I haven't seen outside of Chatbot Arena, is the ability to send the same prompt to multiple models (up to three) simultaneously. However, this feature is still rough around the edges: you can't continue the conversation with just one of those models afterward, since the interface doesn't support that yet.
Coming soon, the chatbot is expected to integrate search and image generation. It will be interesting to see whether they'll use FLUX again for image generation or develop their own model; we'll have to wait and see.
The service is entirely free, similar to Mistral and DeepSeek. Their goal isn’t to profit from subscriptions but to promote their API and gather additional fine-tuning data. For those concerned about privacy, Anthropic’s Claude remains the only option where chat data isn’t used for training.
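If you'd rather try the proprietary models through the API than the chat UI, a call looks roughly like the sketch below. This is only a sketch: it assumes Alibaba's OpenAI-compatible DashScope endpoint, the qwen-plus and qwen-turbo model aliases, and a DASHSCOPE_API_KEY environment variable, so check their current docs for the exact base URL and model names available to your account and region.

```python
import os

from openai import OpenAI

# Assumed setup: DashScope's OpenAI-compatible endpoint and the "qwen-plus"
# alias; verify the base URL and model names against Alibaba's documentation.
client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

response = client.chat.completions.create(
    model="qwen-plus",  # swap in "qwen-turbo" for the long-context variant
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Qwen2.5 model lineup."},
    ],
)

print(response.choices[0].message.content)
```

Because the endpoint follows the OpenAI chat-completions format, switching between the flagship and the long-context model is just a model-name change.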
Check it out at chat.qwenlm.ai