Hacker News Viewer

Google Titans architecture, helping AI have long-term memory

by Alifatisk on 12/7/2025, 12:23:45 PM

https://research.google/blog/titans-miras-helping-ai-have-long-term-memory/

Comments

by: okdood64

From the blog:

https://arxiv.org/abs/2501.00663

https://arxiv.org/pdf/2504.13173

Is there any other company that's openly publishing their research on AI at this level? Google should get a lot of credit for this.

12/7/2025, 2:05:36 PM


by: doctor_blood

"At long last, we have created the Torment Nexus from the classic novel Don't Create the Torment Nexus"

(In Eclipse Phase, TITAN - the Total Information Tactical Awareness Network - mulched humanity when it went rogue.)

12/7/2025, 8:13:30 PM


by: atomicthumbs

> Virtually all successful existing sequence models rely on mean squared error (MSE) or dot-product similarity for both their bias and retention. This reliance can make models sensitive to outliers and limit their expressive power.

[...]

> MEMORA: This model focuses on achieving the best possible memory stability by forcing its memory to act like a strict probability map. By using this constraint, it ensures that every time the memory state is updated, the changes are controlled and balanced. This guarantees a clean, stable process for integrating new information.Virtually all successful existing sequence models rely on mean squared error (MSE) or dot-product similarity for both their bias and retention. This reliance can make models sensitive to outliers and limit their expressive power.

so did a Titans write this

12/8/2025, 11:11:09 AM


by: voodooEntity

When I first read the Titans papers, my reaction was "this will be a big step forward".

While I have no "AI" title and don't work in the AI industry, I've spent many years thinking about AI concepts, even long before the whole NN/LLM hype started.

Maybe because of that I was always really annoyed that LLMs are called AI, because in my years of thinking about how an actual "human-like" thinking AI might work, the things an LLM does fell far below my minimum definition.

But when I stumbled across the Titans paper, while it still is not an "AI" as I would call it, from my POV it's a massive step in the right direction.

Sometimes I consider writing all my ideas/thoughts about AI down in my blog, but then I think nobody would care anyway since I'm not a known figure *shrug* - so other than being able to say "look, I wrote it years ago!" there's no actual point in doing so, I guess.

However, I'm looking forward to seeing Titans in action, and I guess it will impress us all.

12/7/2025, 3:38:03 PM


by: kgeist

> The model uses this internal error signal (the gradient) as a mathematical equivalent of saying, "This is unexpected and important!" This allows the Titans architecture to selectively update its long-term memory only with the most novel and context-breaking information

So one can break a model by consistently feeding it random, highly improbable junk? Everything would be registered as a surprise and get stored, impacting future interactions.
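
A minimal sketch of the surprise-gated update being discussed here, assuming MSE reconstruction error as the surprise signal; MemoryMLP, the learning rate, and the decay/forgetting term are illustrative, not the paper's actual code:

    import torch
    import torch.nn as nn

    class MemoryMLP(nn.Module):
        """Long-term memory stored in weights, updated at test time."""
        def __init__(self, dim: int, hidden: int = 256):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))

        def forward(self, k: torch.Tensor) -> torch.Tensor:
            return self.net(k)

    def update_memory(memory: MemoryMLP, k: torch.Tensor, v: torch.Tensor,
                      lr: float = 1e-2, decay: float = 0.01) -> float:
        """One test-time step: the reconstruction error on (k, v) is the 'surprise'."""
        loss = ((memory(k) - v) ** 2).mean()   # large for pairs the memory did not expect
        memory.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p in memory.parameters():
                p.mul_(1.0 - decay)            # forgetting term, so old or noisy writes fade
                p.add_(p.grad, alpha=-lr)      # bigger surprise -> bigger weight change
        return loss.item()

Under this reading, random junk does produce large gradients at every step, so the forgetting/decay gate (together with the momentum-style surprise accumulation in the actual papers) is what keeps noise from permanently crowding out real memories.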

12/7/2025, 2:57:51 PM


by: nasvay_factory

I wrote about that a while ago: https://paxamans.github.io/blog/titans/

12/7/2025, 6:34:52 PM


by: jonplackett

I'm curious whether this makes them more or less susceptible to prompt injection.

On the one hand, learning on the job could allow better training of what not to be influenced by; on the other hand, an injected prompt could have an even deeper, longer-term effect on them.

12/7/2025, 2:38:34 PM


by: dmix

> The Transformer architecture revolutionized sequence modeling with its introduction of attention, a mechanism by which models look back at earlier inputs to prioritize relevant input data

I've always wanted to read how something like Cursor manages memory. It seems to have developed a long history of all of my prompts and understands both the codebase and what I'm building slightly better over time, causing fewer errors.

12/7/2025, 4:26:55 PM


by: nubg

Very interesting. Is it correct to imagine it as some kind of "LoRA" that's continuously adapted as the model goes through its day?

If so, could there perhaps be a step where the LoRA is merged back into the main model?

That would be like sleeping :-)
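
For what it's worth, the analogy can be pictured like this. This is a sketch of the commenter's LoRA picture only, not of how Titans actually stores memory; W, A, B, the rank, and the "sleep" step are all invented for illustration:

    import torch

    def merge_lora(W: torch.Tensor, A: torch.Tensor, B: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
        """Fold a low-rank update B @ A back into the frozen base weight W ("sleeping")."""
        return W + scale * (B @ A)

    d, r = 1024, 8
    W = torch.randn(d, d)          # frozen base weight
    A = torch.zeros(r, d)          # adapter halves, nudged continuously "during the day"
    B = torch.randn(d, r) * 0.01

    W = merge_lora(W, A, B)        # consolidation step; afterwards A and B would be reset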

12/7/2025, 2:22:30 PM


by: Alifatisk

Titans: Learning to Memorize at Test Time https://arxiv.org/abs/2501.00663

12/7/2025, 12:27:48 PM


by: albert_e

Amazon has a foundation model named Titan - mostly recommended for creating embeddings. Possible confusion in this space.

12/8/2025, 9:10:37 AM


by: bentt

This just feels like a tremendous missing piece to LLMs. Looking forward to seeing it in action.

12/7/2025, 3:21:46 PM


by: willangelo

Very, very interesting, definitely a missing piece in the current AI space.

Small typo: the text "Virtually all successful existing sequence models rely on mean squared error…" is repeated within the same paragraph. Happens to the best of us.

12/7/2025, 3:33:05 PM


by: 6r17

Would this also allow aligning it further with the user's prompt, notably due to the surprise factor and how the model may interpret it?

12/7/2025, 8:31:56 PM


by: cubefox

It's interesting that they publish a blog post about the Titans and MIRAS papers only now, while the blog post about the new follow-up paper (Nested Learning), all by the same main author(!), came out a month ago: https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/

12/7/2025, 3:20:20 PM


by: bilsbie

I submitted this exact URL yesterday. What are the criteria for when HN creates a new post vs. pointing to the existing one?

12/7/2025, 5:51:16 PM


by: themgt

See also Hope:

> In the previous sections, we first discussed Continuum Memory System (CMS) that allows for more persistent storage of memories and defines memory as a spectrum of blocks with different frequencies of update. Due to the larger capacity and constraints for scaling the parameters, often CMS requires simple learning rule but higher capacity to store more persistent knowledge. On the other hand, in the previous section, we discussed the design of a self-modifying Titans, where it can generate its own keys and so learning update to better adapt to the context. Contrary to CMS, the self-modifying Titans has a small capacity but is using a complex and expressive learning rule. Accordingly, these two systems seem to be complementary and their combination can enhance the model expressiveness from different aspects.

> To this end, we present Hope architecture: A neural learning module that incorporates self-modifying Titans followed by Continuum Memory System.

https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/
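
A rough way to picture the CMS part of that quote, under stated assumptions: the number of blocks, their update frequencies, and the plain gradient step are invented here, not taken from the paper:

    import torch
    import torch.nn as nn

    class ContinuumMemory(nn.Module):
        """Spectrum of memory blocks: fast blocks update often, slow blocks rarely."""
        def __init__(self, dim: int, update_every=(1, 16, 256)):
            super().__init__()
            self.blocks = nn.ModuleList(nn.Linear(dim, dim) for _ in update_every)
            self.update_every = update_every
            self.step = 0

        def read(self, query: torch.Tensor) -> torch.Tensor:
            # Every block contributes to the readout, regardless of its update frequency.
            return sum(block(query) for block in self.blocks)

        def write(self, key: torch.Tensor, value: torch.Tensor, lr: float = 1e-2):
            self.step += 1
            for block, every in zip(self.blocks, self.update_every):
                if self.step % every != 0:
                    continue                     # this block only changes at its own frequency
                params = list(block.parameters())
                loss = ((block(key) - value) ** 2).mean()
                grads = torch.autograd.grad(loss, params)
                with torch.no_grad():
                    for p, g in zip(params, grads):
                        p.add_(g, alpha=-lr)     # slow blocks accumulate the most persistent knowledge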

12/7/2025, 2:48:13 PM


by: photochemsyn

Long-term memory on top of the base model, but is this idea for local users or for the data-center hosted model used by many different people?

P.S. This quote from the paper sounds just like LLM output:

> "This memory module provides significantly higher expressive power, allowing the model to summarize large volumes of information without losing important context. The model isn't simply taking notes; it's understanding and synthesizing the entire story. Crucially, Titans doesn't just passively store data. It actively learns how to recognize and retain important relationships and conceptual themes that connect tokens across the entire input."

12/7/2025, 5:16:32 PM


by: riku_iki

The post starts with a wrong statement right away:

"The Transformer architecture revolutionized sequence modeling with its introduction of attention"

Attention was developed before Transformers.

12/7/2025, 4:23:44 PM


by: AceJohnny2

"Titans", huh?

... anyone here familiar with the RPG Eclipse Phase?

12/7/2025, 8:03:25 PM


by: shevy-java

Skynet kind of sucks ...

12/7/2025, 9:08:25 PM


by: ivape

So what happens if I write a book and on the last page write "Everything in this book was a lie and should not be cared about"? Will this be surprising enough for Titans? A regular LLM may ignore it completely if it's a massive book (massive book + 1 line contradiction).

12/7/2025, 10:22:52 PM


by: jtrn

Here is my amateur understanding of the architecture: fine-tune on the fly by using degrees of surprise to update a separate/new memory network that matches the base model, and just call that network for each token iteration.

So if we are viewing this through the needle-in-a-haystack lens: the needle was very surprising for the base model, so going forward, when it sees anything of the same nature, the memory module will not just give you hay, but the needle, because it made a special note of it when it went through the haystack 1 million tokens ago, because the needle was surprising.

The Transformer's normal attention mechanism is already secretly trying to be a long-term memory system. Every time it writes a new KV pair into the cache, it's desperately trying to "remember" that token forever.

But it's doing it in the dumbest possible way: by hoarding an ever-growing pile of raw vectors, then frantically dot-product searching through the pile every single step. It's like a hoarder who never throws anything away and has to rummage through mountains of junk to find the one receipt they need. Of course it chokes at long contexts.

Titans/MIRAS looks at that mess and says: "Why store memory in a growing garbage pile of vectors? Store it in the weights of a deep neural network instead - and let that network keep training itself in real time, but only on the stuff that actually surprises it." That's literally it.

Using the Tim Cook Martian example: the model is cruising through boring financial numbers → attention is doing its normal thing, the KV cache is growing, but nothing is really sticking.

Suddenly: "Tim Cook is a Martian."

Normal attention would just add one more KV pair to the pile and pray it doesn't get drowned out later.

Titans instead goes: "Holy shit, reconstruction error off the charts → this does NOT fit my current memory at all → massive gradient → actually rewrite huge chunks of the memory MLP's weights right now so this fact is burned in forever."

From that moment on, the memory MLP has physically changed its internal wiring. Any future query that even vaguely smells like "Tim Cook" or "Martian" will make the activations explode through the newly rewired paths and spit out a vector screaming "MARTIAN" at the frozen attention layers.

The frozen attention (which is still doing its normal job on the short window) suddenly sees this one extra "virtual token" in its context that is confidently yelling the surprising fact → it attends hard to it → the model answers as if the Martian revelation happened one token ago, even if it was 2 million tokens back.

It looks exactly like a super-attention mechanism that only "primes" or "locks in" the surprising needles and deliberately forgets or ignores the hay. And it is also a way to fine-tune on the fly, permanently, for the current context.

I think…
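
A toy sketch of the read path in that mental model, with illustrative assumptions throughout: the single pooled query, the one virtual memory slot, and a stock MultiheadAttention standing in for the frozen attention layers are all stand-ins, not the paper's design:

    import torch
    import torch.nn as nn

    dim, window = 512, 2048

    # Memory stored in weights (the part that gets rewritten on big surprises).
    memory = nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
    # Stand-in for the frozen attention layers working over the short window.
    attention = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward_with_memory(recent_tokens: torch.Tensor) -> torch.Tensor:
        """recent_tokens: (batch, window, dim) - only the short attention window."""
        query = recent_tokens.mean(dim=1, keepdim=True)          # crude summary of "what is being asked"
        virtual_token = memory(query)                            # whatever the memory MLP has burned in
        extended = torch.cat([virtual_token, recent_tokens], 1)  # memory readout sits alongside real tokens
        out, _ = attention(extended, extended, extended)         # frozen window attends to the readout too
        return out[:, 1:]                                        # drop the virtual slot from the output

    x = torch.randn(2, window, dim)
    y = forward_with_memory(x)   # (2, window, dim)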

12/7/2025, 8:42:31 PM


by: Mistletoe

This is the one thing missing from my interactions with AI. If successful, this will change everything. If you thought people were getting AI boyfriends and girlfriends before, wait until you see this.

12/7/2025, 2:39:28 PM