<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>LLM on Res Futuras</title><link>https://resfuturas.com/tags/llm/</link><description>Recent content in LLM on Res Futuras</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Sat, 21 Feb 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://resfuturas.com/tags/llm/index.xml" rel="self" type="application/rss+xml"/><item><title>A very high level introduction to LLMs</title><link>https://resfuturas.com/posts/a-very-high-level-introduction-to-llms/</link><pubDate>Sat, 21 Feb 2026 00:00:00 +0000</pubDate><guid>https://resfuturas.com/posts/a-very-high-level-introduction-to-llms/</guid><description>I&amp;rsquo;ve tried to write a complete article about LLMs from beginning to end, but after attempting it four times, I kept digressing and getting lost between the high-level flow and the technical details; it kept turning into a massive blob of information that&amp;rsquo;s really hard to follow.
So I&amp;rsquo;ve decided to make this a small (or big?) series of posts. Let&amp;rsquo;s go.
Big Picture Watch this video; it explains the key concepts better than I ever could. While the video is specifically about transformers, that&amp;rsquo;s the heart of the matter anyway: if you understand transformers, plus tokenization and the training logic, you are golden.</description></item></channel></rss>