Qwen3.6-27B: Flagship-Level Coding in a 27B Dense Model
by mfiguiere on 4/22/2026, 1:19:58 PM
https://qwen.ai/blog?id=qwen3.6-27b
Comments
by: simonw
The pelican is <i>excellent</i> for a 16.8GB quantized local model: <a href="https://simonwillison.net/2026/Apr/22/qwen36-27b/" rel="nofollow">https://simonwillison.net/2026/Apr/22/qwen36-27b/</a><p>I ran it on an M5 Pro with 128GB of RAM, but it only needs ~20GB of that. I expect it will run OK on a 32GB machine.<p>Performance numbers:<p><pre><code>  Reading:    20 tokens, 0.4s, 54.32 tokens/s
  Generation: 4,444 tokens, 2min 53s, 25.57 tokens/s
</code></pre> I like it better than the pelican I got from Opus 4.7 the other day: <a href="https://simonwillison.net/2026/Apr/16/qwen-beats-opus/" rel="nofollow">https://simonwillison.net/2026/Apr/16/qwen-beats-opus/</a>
4/22/2026, 4:46:49 PM
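The throughput figures above are easy to sanity-check from the reported token counts and wall-clock times; a minimal sketch (the numbers are taken from the comment, not measured independently):

```python
def tokens_per_second(tokens: int, seconds: float) -> float:
    """Average decoding throughput over a full run."""
    return tokens / seconds

# Generation: 4,444 tokens over 2 min 53 s (173 s)
gen_rate = tokens_per_second(4444, 2 * 60 + 53)
print(round(gen_rate, 2))  # ~25.69 tok/s, in line with the reported 25.57
```

The small gap between the recomputed average and the reported 25.57 tok/s is expected: tools typically report decode-only throughput, excluding prompt processing and sampling overhead.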
by: anonzzzies
I wish all model announcements would show what (consumer) hardware you can run them on today, the cost, and tok/s.
4/22/2026, 3:16:38 PM
by: syntaxing
Been using Qwen 3.6 35B and Gemma 4 26B on my M4 MBP, and while they're no Opus, they do 95% of what I need, which is already crazy since everything runs fully local.
4/22/2026, 4:45:26 PM
by: jameson
What competitive advantage do OpenAI/Anthropic have when companies like Qwen/Minimax/etc. are open-sourcing models that show similar (though still below OpenAI/Anthropic) benchmark results?<p>Also, the token prices of these open-source models are a fraction of Anthropic's Opus 4.6[1]<p>[1]: <a href="https://artificialanalysis.ai/models/#pricing" rel="nofollow">https://artificialanalysis.ai/models/#pricing</a>
4/22/2026, 4:13:51 PM
by: sietsietnoac
Generate an SVG of a pelican riding a bicycle: <a href="https://codepen.io/chdskndyq11546/pen/yyaWGJx" rel="nofollow">https://codepen.io/chdskndyq11546/pen/yyaWGJx</a><p>Generate an SVG of a dragon eating a hotdog while driving a car: <a href="https://codepen.io/chdskndyq11546/pen/xbENmgK" rel="nofollow">https://codepen.io/chdskndyq11546/pen/xbENmgK</a><p>Far from perfect, but it really shows how powerful these models can get
4/22/2026, 3:43:14 PM
by: vibe42
Q4-Q5 quants of this model run well on gaming laptops with 24GB VRAM and 64GB RAM. You can get one of those for around $3,500.<p>Interesting pros/cons vs. the new MacBook Pros depending on your preferences.<p>And Linux runs better than ever on such machines.
4/22/2026, 4:04:33 PM
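The VRAM claims in the last two comments follow from a standard back-of-envelope estimate; a minimal sketch, assuming ~4.5 effective bits per weight for llama.cpp-style Q4 K-quants (the exact figure varies by quant type, and KV cache plus runtime overhead add several GB on top):

```python
def quant_weight_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight memory (GB) for a quantized dense model.

    Ignores KV cache, activations, and runtime overhead, which can
    add several GB depending on context length.
    """
    return params_billions * bits_per_weight / 8

# A 27B dense model at ~Q4 (~4.5 effective bits/weight):
print(round(quant_weight_gb(27, 4.5), 1))  # ~15.2 GB of weights
```

That lands in the same ballpark as the 16.8GB quantized file mentioned upthread, and explains why a 24GB card is "very close" rather than comfortable: long contexts push the total past 24GB.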
by: vladgur
This is getting very close to fitting on a single 3090 with 24GB of VRAM :)
4/22/2026, 3:17:14 PM
by: originalvichy
Good news!<p>Friendly reminder: wait a couple weeks to judge the "final" quality of these free models. Many of them suffer from hidden bugs when connected to an inference backend or bad configs that slow them down. The dev community usually takes a week or two to find the most glaring issues. Some of them may require patches to tools like llama.cpp, and some require users to avoid specific default options.<p>Gemma 4 had some issues that were ironed out within a week or two. This model is likely no different. Take initial impressions with a grain of salt.
4/22/2026, 3:19:33 PM
by: UncleOxidant
I've been waiting for this one. I've been using 3.5-27B with pretty good success for coding in C, C++, and Verilog. It's definitely helped in light of the reduced Claude availability on the Pro plan now. If their benchmarks are right, the improvement over 3.5 should mean I'll be using Claude even less.
4/22/2026, 3:45:08 PM
by: amunozo
A bit skeptical about a 27B model being comparable to Opus...
4/22/2026, 2:15:57 PM
by: pama
Has anyone tested it at home yet and wants to share early impressions?
4/22/2026, 3:07:56 PM
by: Mr_Eri_Atlov
Excited to try this; the Qwen 3.6 MoE they released a week or so back showed a noticeable performance bump over 3.5 in a rather short period of time.<p>For anyone invested in running LLMs at home or on a much more modest budget rig for corporate purposes, Gemma 4 and Qwen 3.6 are some of the most promising models available.
4/22/2026, 3:54:17 PM
by: spwa4
Unsloth quants available:<p><a href="https://unsloth.ai/docs/models/qwen3.6">https://unsloth.ai/docs/models/qwen3.6</a>
4/22/2026, 3:19:50 PM