Hacker News Viewer

The universal weight subspace hypothesis

by lukeplato on 12/9/2025, 12:16:46 AM

https://arxiv.org/abs/2512.05117

Comments

by: modeless

This seems confusingly phrased. When they say things like "500 Vision Transformers", what they mean is 500 finetunes of the same base model, downloaded from the Hugging Face accounts of anonymous randos. These spaces are only "universal" to a single pretrained base model AFAICT. Is it really that surprising that finetunes would be extremely similar to each other? Especially LoRAs?

I visited one of the models they reference and Hugging Face says it has malware in it: https://huggingface.co/lucascruz/CheXpert-ViT-U-MultiClass

12/9/2025, 4:32:48 AM


by: altairprime

For those trying to understand the most important parts of the paper, here are what I think are the two most significant statements, subquoted out of two (consecutive) paragraphs midway through the paper:

> "we selected five additional, previously unseen pretrained ViT models for which we had access to evaluation data. These models, considered out-of-domain relative to the initial set, had all their weights reconstructed by projecting onto the identified 16-dimensional universal subspace. We then assessed their classification accuracy and found no significant drop in performance"

> "we can replace these 500 ViT models with a single Universal Subspace model. Ignoring the task-variable first and last layer [...] we observe a requirement of 100 × less memory, and these savings are prone to increase as the number of trained models increases. We note that we are, to the best of our knowledge, the first work, to be able to merge 500 (and theoretically more) Vision Transformer into a single universal subspace model. This result implies that hundreds of ViTs can be represented using a single subspace model"

So, they found an underlying commonality among the post-training structures in 50 LLaMA3-8B models, 177 GPT-2 models, and 8 Flan-T5 models; they demonstrated that the commonality could in every case be substituted for those in the original models with no loss of function; and they noted that they seem to be the first to discover this.

For a tech analogy, imagine if you found a bzip2 dictionary that reduced the size of every file compressed by 99%, because that dictionary turns out to be uniformly helpful for all files. You would immediately open a pull request to bzip2 to have the dictionary built-in, because it would save everyone billions of CPU hours. [*]

[*] Except instead of 'bzip2 dictionary' (strings of bytes), they use the term 'weight subspace' (analogy not included here[**]) — and 'file compression' hours becomes 'model training' hours. It's just an analogy.

[**] 'Hilbert subspaces' is just incorrect enough to be worth appending as a footnote[***].

[***] As a second footnote.
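To make the first quoted passage concrete, here is a minimal sketch of that kind of projection, assuming you have a stack of flattened weight vectors from comparable layers (the shapes and data here are stand-ins, not the paper's code):

    import numpy as np

    # Hypothetical setup: 500 flattened weight vectors from the same layer
    # of 500 fine-tuned models, stacked into a (500, D) matrix.
    rng = np.random.default_rng(0)
    D = 4096
    W = rng.normal(size=(500, D))           # stand-in for real model weights

    # Find a k-dimensional "universal" subspace via SVD of the centered stack.
    k = 16
    mean = W.mean(axis=0)
    U, S, Vt = np.linalg.svd(W - mean, full_matrices=False)
    basis = Vt[:k]                          # (k, D) principal directions

    # Reconstruct an unseen model's weights by projecting onto that subspace.
    w_new = rng.normal(size=D)              # stand-in for an out-of-domain model
    coeffs = (w_new - mean) @ basis.T       # k coefficients
    w_reconstructed = mean + coeffs @ basis # back in the original D-dim space

The memory argument in the second quote falls out of this: you store the shared basis once plus k coefficients per model, instead of D weights per model.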

12/9/2025, 1:57:36 AM


by: augment_me

I think the paper in general completely oversells the idea of "universality".

For CNNs, the 'Universal Subspace' is simply the strong inductive bias (locality) forcing filters into standard signal processing shapes (Laplacian/Gabor) regardless of the data. Since CNNs are just a constrained subset of operations, this convergence is not that surprising.

For Transformers, which lack these local constraints, the authors had to rely on fine-tuning (shared initialization) to find a subspace. This confirms that 'universality' here is really just a mix of CNN geometric constraints and the stability of pre-training, rather than a discovered intrinsic property of learning.

12/9/2025, 7:55:51 AM


by: alyxya

I’ve had a hard time parsing what exactly the paper is trying to explain. So far I’ve understood that their comparison seems to be between models within the same family with the same weight tensor dimensions, so they aren’t showing a common subspace when there isn’t a 1:1 match between weight tensors in a ViT and GPT-2. The plots showing the distribution of principal component values presumably do this on every weight tensor, but that seems like an expected result: the principal component values show a decaying curve, like a log curve, where only a few principal components are the most meaningful.

What I don’t get is what is meant by a universal shared subspace, because there is some invariance regarding the specific values in weights and the directions of vectors in the model. For instance, if you were doing matrix multiplication with a weight tensor, you could swap two rows/columns (depending on the order of multiplication) and all that would do is swap two values in the resulting product, and whatever uses that output could undo the effects of the swap, so the whole model has identical behavior, yet you’ve changed the direction of the principal components. There can’t be fully independently trained models that share the exact subspace directions for analogous weight tensors because of that.
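A quick numeric check of the permutation argument (my sketch, not from the paper): permuting the hidden units of a two-layer network leaves its function unchanged while rearranging the weights themselves.

    import numpy as np

    rng = np.random.default_rng(1)
    W1 = rng.normal(size=(64, 32))   # first layer: hidden x input
    W2 = rng.normal(size=(10, 64))   # second layer: output x hidden
    x = rng.normal(size=32)

    perm = rng.permutation(64)
    W1p = W1[perm, :]                # permute hidden units in layer 1
    W2p = W2[:, perm]                # apply the matching permutation in layer 2

    relu = lambda z: np.maximum(z, 0)
    y  = W2  @ relu(W1  @ x)
    yp = W2p @ relu(W1p @ x)
    print(np.allclose(y, yp))        # True: identical function

    # But W1p is a row-permuted copy of W1: as a flattened vector it points in a
    # different direction of weight space, so independently trained models need
    # not line up in the same weight directions even when they compute the same thing.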

12/9/2025, 5:04:13 AM


by: masteranza

It's basically way better than LoRA in all respects and could even be used to speed up inference. I wonder whether the big models are not using it already... If not, we'll see a blow-up in capabilities very, very soon. What they've shown is that you can find the subset of parameters responsible for transfer of capability to new tasks. Does it apply to completely novel tasks? No, that would be magic. Tasks that need new features or representations break the method, but if it fits in the same domain then the answer is "YES".

Here's a very cool analogy from GPT 5.1 which hits the nail on the head in explaining the role of the subspace in learning new tasks, by analogy with 3D graphics:

    Think of 3D character animation rigs:
      • The mesh has millions of vertices (11M weights).
      • Expressions are controlled via:
          • "smile"
          • "frown"
          • "blink"
    Each expression is just:
        mesh += α_i * basis_expression_i
    Hundreds of coefficients modify millions of coordinates.
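The analogy maps directly onto code. A minimal sketch (not from the paper; the shapes and names are illustrative) of reconstructing task-specific weights as a base plus a few coefficients on shared basis directions, exactly like blend shapes on a mesh:

    import numpy as np

    rng = np.random.default_rng(2)
    n_weights = 11_000_000 // 1000   # scaled down so the demo runs instantly
    k = 16                           # number of shared "expression" directions

    base = rng.normal(size=n_weights)          # pretrained weights ("neutral mesh")
    basis = rng.normal(size=(k, n_weights))    # shared subspace ("smile", "frown", ...)
    alphas = rng.normal(size=k) * 0.01         # per-task coefficients

    task_weights = base + alphas @ basis       # k numbers steer millions of weights
    print(task_weights.shape, alphas.shape)    # (11000,) (16,)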

12/9/2025, 2:22:42 AM


by: kacesensitive

Interesting... this could make training much faster if there's a universal low-dimensional space that models naturally converge into, since you could initialize or constrain training inside that space instead of spending massive compute rediscovering it from scratch every time.
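A sketch of what "constrain training inside that space" could look like (PyTorch-style, assuming a fixed basis obtained elsewhere; nothing here comes from the paper):

    import torch

    D, k = 4096, 16
    basis = torch.randn(k, D) / D**0.5        # stand-in for a precomputed shared basis
    base = torch.randn(D)                     # stand-in for pretrained weights

    # Only k coefficients are trainable; the full weight vector is derived from them.
    alphas = torch.zeros(k, requires_grad=True)
    opt = torch.optim.SGD([alphas], lr=1e-2)

    for step in range(100):
        w = base + alphas @ basis             # reconstruct full weights on the fly
        x = torch.randn(32, D)
        loss = ((x @ w) ** 2).mean()          # toy objective standing in for a real task
        opt.zero_grad()
        loss.backward()
        opt.step()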

12/9/2025, 1:10:38 AM


by: VikingCoder

I find myself wanting genetic algorithms to be applied to try to develop and improve these structures...

But I always want genetic algorithms to show up in any discussion about neural networks...

12/9/2025, 1:12:25 AM


by: zkmon

Something tells me this is probably as important as "Attention Is All You Need".

12/9/2025, 9:09:53 AM


by: hn_throwaway_99

I read the abstract (not the whole paper) and the great summarizing comments here.

Beyond the practical implications of this (i.e. reduced training and inference costs), I'm curious whether this has any consequences for "philosophy of mind"-type stuff. That is, does this sentence from the abstract, "we identify universal subspaces capturing majority variance in just a few principal directions", imply that all of these various models, across vastly different domains, share a large set of common "plumbing", if you will? Am I understanding that correctly? It just sounds like it could have huge relevance to how various "thinking" (and I know, I know, those scare quotes are doing a lot of work) systems compose their knowledge.

12/9/2025, 4:28:22 AM


by: tsurba

Many discriminative models converge to the same representation space up to a linear transformation. Makes sense that a linear transformation (like PCA) would be able to undo that transformation.

https://arxiv.org/abs/2007.00810

Without properly reading the linked article: if that's all this is, it's not a particularly new result. Nevertheless, this direction of proofs is IMO at the core of understanding neural nets.
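The "same up to a linear transformation" point is easy to demo (a toy sketch, not the linked paper's method): two representations of the same data that differ by an invertible linear map can be aligned exactly with ordinary least squares.

    import numpy as np

    rng = np.random.default_rng(3)
    n, d = 1000, 32

    # Two "models" producing representations of the same inputs that differ
    # only by an (unknown) invertible linear transformation.
    Z = rng.normal(size=(n, d))               # shared underlying representation
    A1, A2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
    X1, X2 = Z @ A1, Z @ A2

    # A plain least-squares fit recovers the map between them almost exactly.
    M, *_ = np.linalg.lstsq(X1, X2, rcond=None)
    print(np.abs(X1 @ M - X2).max())          # ~0: the spaces are linearly equivalent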

12/9/2025, 5:36:00 AM


by: canjobear

What's the relationship with the Platonic Representation Hypothesis?

12/9/2025, 12:58:42 AM


by: nothrowaways

What if all models are secretly just fine-tunes of LLaMA?

12/9/2025, 2:44:55 AM


by: inciampati

The authors study a bunch of wild low-rank fine-tunes and discover that they share a common... low-rank!... substructure which is itself base-model dependent. Humans are (genetically) the same: you need only a handful of PCs to represent the vast majority of variation. But that's because of our shared ancestry. And maybe the same thing is going on here.
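A sketch of the analogy in code (illustrative only; the shapes and counts are made up): stack the weight deltas of many fine-tunes of one base model and look at how fast the singular values decay.

    import numpy as np

    rng = np.random.default_rng(4)
    n_models, D, k_shared = 200, 2048, 16

    # Simulate many fine-tunes of one base model: each flattened weight delta is a
    # random mix of a small shared pool of directions (the "shared ancestry").
    shared = rng.normal(size=(k_shared, D))
    deltas = rng.normal(size=(n_models, k_shared)) @ shared
    deltas += 0.01 * rng.normal(size=(n_models, D))      # a little idiosyncratic noise

    # The singular values of the centered stack collapse after ~16 directions,
    # just like a handful of PCs capture most human genetic variation.
    s = np.linalg.svd(deltas - deltas.mean(axis=0), compute_uv=False)
    print((s / s[0])[:24].round(3))                      # sharp decay after index ~15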

12/9/2025, 4:28:29 AM


by: mwkaufma

(Finds a compression artifact) "Is this the meaning of consciousness???"

12/9/2025, 1:30:24 AM


by: AIorNot

Interesting - I wonder if this ties into the Platonic Space Hypothesis recently being championed by computational biologist Mike Levin.

https://youtu.be/Qp0rCU49lMs?si=UXbSBD3Xxpy9e3uY

https://thoughtforms.life/symposium-on-the-platonic-space/

See also this paper on universal embeddings: https://arxiv.org/html/2505.12540v2

"The Platonic Representation Hypothesis [17] conjectures that all image models of sufficient size have the same latent representation. We propose a stronger, constructive version of this hypothesis for text models: the universal latent structure of text representations can be learned and, furthermore, harnessed to translate representations from one space to another without any paired data or encoders.

In this work, we show that the Strong Platonic Representation Hypothesis holds in practice. Given unpaired examples of embeddings from two models with different architectures and training data, our method learns a latent representation in which the embeddings are almost identical"

Also from the OP's paper, there is this statement:

"Why do these universal subspaces emerge? While the precise mechanisms driving this phenomenon remain an open area of investigation, several theoretical factors likely contribute to the emergence of these shared structures.

First, neural networks are known to exhibit a spectral bias toward low frequency functions, creating a polynomial decay in eigenvalues that concentrates learning dynamics into a small number of dominant directions (Belfer et al., 2024; Bietti et al., 2019).

Second, modern architectures impose strong inductive biases that constrain the solution space: convolutional structures inherently favor local, Gabor-like patterns (Krizhevsky et al., 2012; Guth et al., 2024), while attention mechanisms prioritize recurring relational circuits (Olah et al., 2020; Chughtai et al., 2023).

Third, the ubiquity of gradient-based optimization – governed by kernels that are largely invariant to task specifics in the infinite-width limit (Jacot et al., 2018) – inherently prefers smooth solutions, channeling diverse learning trajectories toward shared geometric manifolds (Garipov et al., 2018).

If these hypotheses hold, the universal subspace likely captures fundamental computational patterns that transcend specific tasks, potentially explaining the efficacy of transfer learning and why diverse problems often benefit from similar architectural modifications."

12/9/2025, 1:51:16 AM


by: RandyOrion

> From their project page:

> We analyze over 1,100 deep neural networks—including 500 Mistral-7B LoRAs and 500 Vision Transformers. We provide the first large-scale empirical evidence that networks systematically converge to shared, low-dimensional spectral subspaces, regardless of initialization, task, or domain.

I instantly thought of the Muon optimizer, which provides high-rank gradient updates, and of Kimi K2, which is trained using Muon, and I see no related references.

The 'universal' in the title is not that universal.

12/9/2025, 5:34:32 AM


by: Simplita

Curious if this connects with the sparse subnetwork work from last year. There might be an overlap in the underlying assumptions.

12/9/2025, 7:29:41 AM


by: nothrowaways

> Principal component analysis of 200 GPT2, 500 Vision Transformers, 50 LLaMA-8B, and 8 Flan-T5 models reveals consistent sharp spectral decay - strong evidence that a small number of weight directions capture dominant variance despite vast differences in training data, objectives, and initialization.

Isn't it obvious?

12/9/2025, 2:55:10 AM


by: horsepatties

I hope that this leads to more efficient models. And it's intuitive: it seems as though you could find the essence of a good model, and a model reduced to that essence would be more efficient. But this is theoretical. I can also theorize flying cars - many have - and it seems doable and achievable, yet I see no flying cars on my way to work.

12/9/2025, 4:01:09 AM


by: lucid-dev

Pretty funny if you ask me. Maybe we can start to realize now: "The common universal subspace between human individuals makes it easier for all of them to do 'novel' tasks so long as their ego and personality don't inhibit that basic capacity."

And that: "Defining 'novel' as 'not something you've said before', even though you're using all the same words, concepts, linguistic tools, etc., doesn't actually make it 'novel'."

Point being, yeah, duh: what's the difference between what any of these models are doing anyway? It would be far more surprising if they discovered a *different* or highly unique subspace for each one!

Someone gives you a magic lamp and the genie comes out and says "what do you wish for?"

That's still the question. The question was never "why do all the genies seem to be able to give you whatever you want?"

12/9/2025, 5:34:48 AM


by: tempestn

After reading the title I'm disappointed this isn't some new mind-bending theory about the relativistic nature of the universe.

12/9/2025, 8:15:41 AM


by: CGMthrowaway

They compressed the compression? Or identified an embedding that can "bootstrap" training with a head start?

Not a technical person, just trying to put it in other words.

12/9/2025, 12:50:46 AM


by: api

I immediately started thinking that if there are such patterns maybe they capture something about the deeper structure of the universe.

12/9/2025, 1:22:33 AM


by: ibgeek

They are analyzing models trained on classification tasks. At the end of the day, classification is about (a) engineering features that separate the classes and (b) finding a way to represent the boundary. It's not surprising to me that they would find these models can be described using a small number of dimensions and that they would observe similar structure across classification problems. The number of dimensions needed is basically a function of the number of classes. Embeddings in 1 dimension can linearly separate 2 classes, 2 dimensions can linearly separate 4 classes, 3 dimensions can linearly separate 8 classes, etc.
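A toy illustration of that dimension-counting claim (my sketch, not from the paper): with class centers at hypercube corners, d coordinates are enough to tell apart 2^d classes using d sign thresholds.

    import numpy as np
    from itertools import product

    # d embedding dimensions can distinguish 2**d classes if the classes sit at
    # the corners of a d-dimensional hypercube: d signs identify the class.
    d = 3
    corners = np.array(list(product([-1.0, 1.0], repeat=d)))   # 8 class centers in 3-D

    def classify(x):
        # d linear thresholds (sign of each coordinate) jointly pick one of 2**d classes
        bits = (x > 0).astype(int)
        return int("".join(map(str, bits)), 2)

    rng = np.random.default_rng(5)
    for label, c in enumerate(corners):
        samples = c + 0.1 * rng.normal(size=(100, d))   # noisy points around each corner
        assert all(classify(s) == label for s in samples)
    print("8 classes separated with 3 dimensions")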

12/9/2025, 1:42:28 AM


by: farhanhubble

Would you see a lower rank subspace if the learned weights were just random vectors?

12/9/2025, 1:46:16 AM


by: nextworddev

The central claim, or "Universal Weight Subspace Hypothesis," is that deep neural networks, even when trained on completely different tasks (like image recognition vs. text generation) and starting from different random conditions, tend to converge to a remarkably similar, low-dimensional "subspace" in their massive set of weights.
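One way to make "remarkably similar subspace" concrete (a sketch under my own assumptions, not the paper's procedure): take the top-k principal directions of two collections of weights and measure the principal angles between the two spans.

    import numpy as np

    def top_k_basis(weight_stack, k):
        """Orthonormal basis for the top-k principal directions of a (models x D) stack."""
        centered = weight_stack - weight_stack.mean(axis=0)
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        return Vt[:k]                      # (k, D)

    def principal_angle_cosines(B1, B2):
        """Singular values of B1 @ B2.T are the cosines of the principal angles."""
        return np.linalg.svd(B1 @ B2.T, compute_uv=False)

    rng = np.random.default_rng(6)
    D, k = 1024, 16
    shared = rng.normal(size=(k, D))       # pretend both model collections share these directions
    stack_a = rng.normal(size=(100, k)) @ shared + 0.1 * rng.normal(size=(100, D))
    stack_b = rng.normal(size=(100, k)) @ shared + 0.1 * rng.normal(size=(100, D))

    cosines = principal_angle_cosines(top_k_basis(stack_a, k), top_k_basis(stack_b, k))
    print(cosines)                         # values near 1.0 = nearly identical subspaces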

12/9/2025, 2:46:40 AM


by: odyssey7

Now that we know about this - that the calculations in trained models follow some particular forms - is there an approximation algorithm to run the models without GPUs?

12/9/2025, 2:29:19 AM


by: Atlas667

Imagine collectively trying to recreate a human brain with semiconductors so capitalists can save money by not having to employ as many people.

12/9/2025, 6:00:58 AM


by: ycombigrator

[dead]

12/9/2025, 8:37:50 AM


by: YouAreWRONGtoo

[dead]

12/9/2025, 7:12:49 AM


by: pagekicker

I asked Grok to visualize this:

https://grok.com/share/bGVnYWN5_463d51c8-d473-47d6-bb1f-666636d92e52

*Caption for the two images:*

Artistic visualization of the universal low-parameter subspaces discovered in large neural networks (as described in "The Unreasonable Effectiveness of Low-Rank Subspaces," arXiv:2512.05117).

The bright, sparse linear scaffold in the foreground represents the tiny handful of dominant principal directions (often ≤16 per layer) that capture almost all of the signal variance across hundreds of independently trained models. These directions form a flat, low-rank "skeleton" that is remarkably consistent across architectures, tasks, and random initializations.

The faint, diffuse cloud of connections fading into the dark background symbolizes the astronomically high-dimensional ambient parameter space (billions to trillions of dimensions), almost all of whose directions carry near-zero variance and can be discarded with negligible loss in performance. The sharp spectral decay creates a dramatic "elbow," leaving trained networks effectively confined to this thin, shared, low-dimensional linear spine floating in an otherwise vast and mostly empty void.

12/9/2025, 2:07:55 AM