14 points | by andhuman 7 hours ago ago
1 comment
After GLM and Z.ai released huge models, it's thanks to the Qwen team that we have models that can run on low-end devices.
Qwen3.5-35B-A3 in particular looks great for cheaper GPUs, since a quantized version of it should need less than 32 GB of RAM.
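The "<32 GB" figure checks out with a quick back-of-the-envelope estimate. A minimal sketch, assuming roughly 4.5 bits per weight (typical of a Q4-class GGUF quant) and a couple of GB of overhead for KV cache and runtime buffers; all of these numbers are assumptions, not measurements:

```python
# Rough memory estimate for a quantized 35B-parameter model.
# 4.5 bits/weight approximates a Q4-class quant; the 2 GB overhead
# for KV cache and buffers is a loose assumption.
params = 35e9
bits_per_weight = 4.5

weights_gb = params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB
overhead_gb = 2.0
total_gb = weights_gb + overhead_gb

print(f"weights ~{weights_gb:.1f} GB, total ~{total_gb:.1f} GB")
```

That lands around 20 GB for the weights alone, which is why a 32 GB machine is a comfortable target for this model size.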