Hacker News Viewer

TinyLoRA – Learning to Reason in 13 Parameters

by sorenjan on 3/27/2026, 12:11:12 PM

https://arxiv.org/abs/2602.04118

Comments

by: dollo_7

Not sure I buy it. First, the SVD needed to obtain U, Σ, V is computationally expensive, so this would only work if we are not finetuning very big models.

But my real concern is with the results. The "13 parameters" figure looks like bait, because it is a single result from finetuning a model on a very simple math benchmark, grade-school math (GSM8K), which is already saturated on every model. Besides, it seems to happen only for the Qwen family of models... It looks like GSM8K was part of Qwen's training set, and this TinyLoRA finetuning just made the last adjustments to fully surface that overtraining.
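For a rough sense of that cost, here is a hedged sketch (not the paper's code), with a hypothetical 4096x4096 matrix standing in for one projection of a ~7B model:

    # Illustration only: full SVD of a single transformer projection matrix.
    import torch

    W = torch.randn(4096, 4096)           # one attention/MLP projection
    U, S, Vh = torch.linalg.svd(W)        # roughly O(n^3); seconds per matrix,
                                          # and a large model has hundreds of them
    r = 1                                 # a rank-r truncation keeps the top-r directions
    W_r = U[:, :r] @ torch.diag(S[:r]) @ Vh[:r, :]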

4/1/2026, 5:35:35 AM


by: kgeist

>One theory is that the knowledge required to solve the task is already stored in the parameters of the model, and only the style has to change for task success

>In particular, learning to generate longer outputs may be possible in few parameters

Reminded me of: https://arxiv.org/abs/2501.19393

>we develop budget forcing to control test-time compute by forcefully terminating the model's thinking process or lengthening it by appending "Wait" multiple times to the model's generation when it tries to end. This can lead the model to double-check its answer, often fixing incorrect reasoning steps

Maybe, indeed, the model simply learns to insert the EOS token (or similar) later, and the capability is already in the base model
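A minimal sketch of that budget-forcing loop, with a hypothetical generate_fn standing in for whatever decoding API you use (my paraphrase, not the s1 authors' code):

    # Hypothetical sketch of budget forcing: instead of letting the model end
    # its thinking, append "Wait" and let it keep going for a few more rounds.
    def budget_forced_generate(generate_fn, prompt, max_waits=2):
        """generate_fn(text) returns the next chunk, stopping at the
        end-of-thinking marker (or EOS)."""
        text = prompt
        for _ in range(max_waits):
            text += generate_fn(text)
            text += " Wait,"               # override the attempt to stop
        return text + generate_fn(text)    # final round, then the answer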

4/1/2026, 5:34:48 AM


by: MASNeo

Is this an April Fools' publication?

4/1/2026, 6:28:34 AM


by: Xx_crazy420_xX

If I understand it correctly, the analogy could be: say we have an expert low-level programmer and we try to teach him algebra. Either we:

    - (SFT): give him an algebra book with new nomenclature, definitions, and syntax
    - (RL): let him learn algebra using C syntax

4/1/2026, 6:30:32 AM


by: measurablefunc

With four parameters I can fit an elephant, and with five I can make him wiggle his trunk so there is still room for improvement.

4/1/2026, 1:08:22 AM


by: a-t-c-g

The quality of custom models trained with proper reasoning datasets[0] is incredible now, even at small parameter counts (3-7B is the sweet spot).

[0]: cartesien.io or Salesforce's WebscaleRL

4/1/2026, 2:01:15 AM


by: matt123456789

Such low dimensionality of the LoRA vector must surely result in a close-to-linear modification of the KV calculation. This seems to me to imply that what we call "reasoning" is latent within the model. Clearly I didn't read the paper; I'm sure the authors address this.
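For reference, a standard LoRA update on one projection is exactly a linear, low-rank shift; a generic rank-1 sketch (not necessarily the TinyLoRA parameterization) makes the size gap concrete:

    # Generic rank-1 LoRA sketch: the adapted projection is W + B @ A.
    import torch

    d = 4096
    W = torch.randn(d, d)            # frozen pretrained projection (~16.8M values)
    A = torch.randn(1, d) * 0.01     # trainable rank-1 "down" vector
    B = torch.zeros(d, 1)            # trainable rank-1 "up" vector (zero-init)
    x = torch.randn(d)

    h = (W + B @ A) @ x              # adapted forward pass, still linear in x
    # Only A and B (2*d = 8192 values) train here; 13 parameters across the
    # whole model is far smaller still, so whatever changes must already be
    # latent in the frozen weights.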

4/1/2026, 3:54:29 AM


by: sachaa

If 13 parameters can unlock better reasoning, then we will not be "training" models, we'll be steering them. Most of the capability is already there.

The real unlock isn't TinyLoRA, it's what this implies: ultra-cheap, continuous adaptation. The bottleneck shifts from compute to having a good reward signal.

4/1/2026, 5:53:09 AM