<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Stories in Structure]]></title><description><![CDATA[Stories in Structure]]></description><link>https://storiesinstructure.com</link><image><url>https://cdn.hashnode.com/uploads/logos/68c1f8825145eb977febb99d/6741dea0-a336-4a78-99d3-53cad6aee6e9.png</url><title>Stories in Structure</title><link>https://storiesinstructure.com</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 09 Apr 2026 10:31:50 GMT</lastBuildDate><atom:link href="https://storiesinstructure.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[What are you talking about?]]></title><description><![CDATA["What are you talking about?"
That's essentially what our knowledge graph says when you hand it a question.
Naturally, you might try to ask it something. After all, it is a knowledge graph — it should]]></description><link>https://storiesinstructure.com/what-are-you-talking-about</link><guid isPermaLink="true">https://storiesinstructure.com/what-are-you-talking-about</guid><category><![CDATA[Stories in Structure]]></category><category><![CDATA[knowledge graph]]></category><category><![CDATA[Entity linking]]></category><category><![CDATA[semantic search]]></category><category><![CDATA[information retrival]]></category><category><![CDATA[graphrag]]></category><dc:creator><![CDATA[Agata Migalska]]></dc:creator><pubDate>Tue, 07 Apr 2026 00:29:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/69113848-70b5-4afb-90dd-ba3a10be6377.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>"What are you talking about?"</strong></p>
<p>That's essentially what our knowledge graph says when you hand it a question.</p>
<p>Naturally, you might try to ask it something. After all, it is a knowledge graph — it should be able to find relevant knowledge and produce an answer.</p>
<p>Yet, it doesn't understand.</p>
<p>Which is a bit embarrassing, because <a href="https://storiesinstructure.com/stories-in-structure-as-a-knowledge-graph">we built the graph from text</a> — specifically, my 2025 blog posts. But while the graph remembers entities and relationships between them, it doesn't naturally know what to do with a fresh sentence dropped in front of it.</p>
<img src="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/2e8cc4c7-dc6e-4aca-a601-1247763eda8b.png" alt="A cyberpunk city scene: a graph character facing a sentence character, clearly failing to understand each other" style="display:block;margin:0 auto" />

<p>On one side we have a graph, like this conceptual graph below with red nodes depicting posts, green nodes depicting introduced concepts, and blue nodes marking abstractions.</p>
<img src="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/c0b03a3a-51ca-4f98-9653-d24be17d3fac.png" alt="Conceptual graph of my 2025 blog posts" style="display:block;margin:0 auto" />

<p><em>Figure 1. Conceptual graph of my 2025 blog posts.</em></p>
<hr />
<p>On the other — a question. For example: <strong>What problem does Euler's theorem solve, and what constraint makes it solvable?</strong></p>
<p>These two don't align, because they <em>"speak different languages"</em>.</p>
<p>And that leaves us with a problem: <strong>how do we translate a question into something the graph can actually understand?</strong></p>
<h2>First attempt: Just extract entities</h2>
<p>If you recall the graph generation process, entities were extracted from fragments of my blog posts by an LLM given the ontology — the description of entity types and relationships. That worked. We got a graph, with some deduplication cleanup, but the core process was just: LLM + ontology + a simple prompt.</p>
<p>Let's apply the same process to this question: <strong>"What problem does Euler's theorem solve, and what constraint makes it solvable?"</strong></p>
<p>Three entities are found:</p>
<ul>
<li><p>Euler's Theorem (Concept)</p>
</li>
<li><p>Problem Solved By Euler's Theorem (Abstraction)</p>
</li>
<li><p>Constraint That Makes It Solvable (Abstraction)</p>
</li>
</ul>
<hr />
<p>These look like entry points to the graph. Except they aren't — at least not the last two.</p>
<p>The reason is simple: these entities don't necessarily exist in our knowledge graph. So we managed to translate the question into entities, but the graph still says, "I know nothing about that, sorry!"</p>
<h2>Ensuring common vocabulary</h2>
<p>What we are trying to solve here is known as <a href="https://en.wikipedia.org/wiki/Entity_linking"><strong>entity linking</strong></a> — mapping a natural language query to entities that actually exist in the graph.</p>
<p>Clearly, using the same language is not enough. If you wanted to borrow a "rubber" and I only had an "eraser", we wouldn't communicate very well even though we would be talking about the same thing.</p>
<p>The problem is not understanding the question — it's aligning it with the graph's vocabulary.</p>
<p>That leads to a natural extension of the prompt: tell the LLM what entities are in the graph. That way we can ensure a common vocabulary.</p>
<p>I set the prompt to: <em>"Given the following user question and a list of knowledge graph nodes, identify which nodes are relevant to answering the question."</em></p>
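<p>As a minimal sketch (the helper name and node list below are illustrative, not the actual implementation), the prompt can be assembled by listing the graph's node names, so the LLM selects from a known vocabulary instead of inventing entities:</p>

```python
# Hedged sketch of constrained entity linking: the LLM may only pick
# from node names that already exist in the graph. build_linking_prompt
# and the sample nodes are illustrative, not the blog's actual code.

def build_linking_prompt(question: str, node_names: list[str]) -> str:
    """Assemble a prompt that restricts the LLM to the graph's vocabulary."""
    nodes = "\n".join(f"- {name}" for name in node_names)
    return (
        "Given the following user question and a list of knowledge graph "
        "nodes, identify which nodes are relevant to answering the question.\n\n"
        f"Question: {question}\n\nNodes:\n{nodes}"
    )

prompt = build_linking_prompt(
    "What problem does Euler's theorem solve, and what constraint makes it solvable?",
    ["Euler's Theorem", "Eulerian Path", "Node Degree"],
)
```

<p>The completed prompt (question plus candidate nodes) is then sent to the LLM, whose reply is parsed back into node names.</p>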
<p><strong>Question:</strong> "What problem does Euler's theorem solve, and what constraint makes it solvable?"</p>
<p><strong>Found Entities:</strong></p>
<ul>
<li><p>Euler's Theorem (Concept)</p>
</li>
<li><p>Eulerian Path (Concept)</p>
</li>
<li><p>Traversal (Concept)</p>
</li>
<li><p>Single Stroke (Concept)</p>
</li>
<li><p>Graph (Concept)</p>
</li>
<li><p>Vertex (Concept)</p>
</li>
<li><p>Edge (Concept)</p>
</li>
<li><p>Undirected Graph (Concept)</p>
</li>
<li><p>Node Degree (Concept)</p>
</li>
<li><p>Odd Degree (Concept)</p>
</li>
<li><p>Even Degree (Concept)</p>
</li>
<li><p>abstract:Graph-Theoretic Properties and Degrees (Concept)</p>
</li>
<li><p>abstract:Graph and Network Structures (Abstraction)</p>
</li>
</ul>
<hr />
<p>At this point we can have reasonable faith that these entities actually exist in the graph — the LLM is selecting from a known list rather than inventing names.</p>
<p>But wait. <strong>Where did "Node Degree" come from?</strong></p>
<p>The LLM didn't translate the question. It skipped ahead and started answering it.</p>
<p>"Node Degree" isn't mentioned in the question — it's the kind of thing you'd need to know to <em>answer</em> the question. The LLM, trying to be helpful, jumped straight to reasoning about the solution rather than mapping what was asked.</p>
<p>On one hand, that likely produces a useful answer. On the other, it bypasses the knowledge graph entirely — which rather defeats the purpose. So what if we actually want only the entities that follow directly from the question?</p>
<h2>Being strict on requirements</h2>
<p>You have to give an LLM strict boundaries, or it will try to be helpful and read your mind.</p>
<p>So the new prompt becomes explicit:</p>
<p><em>"Given the following user question and a list of knowledge graph nodes, <strong>identify only the nodes that are explicitly or very directly named by the question itself</strong>. Do NOT include nodes that might be useful context or related background — only nodes whose concept is clearly present in the question."</em></p>
<p>Result:</p>
<p><strong>Question:</strong> "What problem does Euler's theorem solve, and what constraint makes it solvable?"</p>
<p><strong>Found Entities:</strong></p>
<ul>
<li>Euler's Theorem (Concept)</li>
</ul>
<hr />
<p>One entity. From a question with two distinct sub-questions.</p>
<p>There's a real gap between "what one says" and "what one means," and this prompt falls entirely on the literal side. Allowing some mind reading feels beneficial — but how much before the LLM goes overboard and answers the question for us?</p>
<p>That's a hard line to draw. Which suggests maybe entity linking isn't the right frame at all.</p>
<h2>Fundamentally different philosophy</h2>
<p>Instead of asking "what entities are mentioned in the question?", we ask <strong>"what entities feel similar to this question?"</strong>.</p>
<p>Following the GraphRAG community and, specifically, <a href="https://github.com/ksachdeva/langchain-graphrag/tree/main/src/langchain_graphrag/query/local_search">LangChain's implementation of local search in GraphRAG</a>, here's the idea:</p>
<ol>
<li><p>Encode all graph entities into embeddings (numeric vectors that capture meaning).</p>
<img src="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/8a2ee654-f74f-460c-b5b8-a17e0c4d2405.png" alt="" style="display:block;margin:0 auto" />
</li>
<li><p>Embed the question.</p>
<img src="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/997a6498-ae5d-4b4d-aa3c-1e1589cb44db.png" alt="" style="display:block;margin:0 auto" />
</li>
<li><p>Calculate the distance in meaning (i.e., semantic distance) between the entity embeddings and the question embedding.</p>
</li>
<li><p>Use the top <em>k</em> closest entities (e.g. <em>k=5</em>) as entry points to the knowledge graph.</p>
</li>
</ol>
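<p>The four steps above can be sketched in a few lines. The <code>embed()</code> stub below is an assumption standing in for a real sentence-embedding model; everything else is plain cosine similarity and a top-<em>k</em> cut:</p>

```python
from math import sqrt

def embed(text: str) -> list[float]:
    # Toy character-frequency "embedding" that keeps the sketch runnable;
    # a real system would call a sentence-embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: 1.0 means identical direction, 0.0 unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k_entities(question: str, entities: list[str], k: int = 5):
    # Rank every graph entity by similarity to the question, keep top k.
    q = embed(question)
    scored = [(e, cosine(embed(e), q)) for e in entities]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:k]
```

<p>Note that, unlike the prompt-based approach, this always returns exactly <em>k</em> entries, each with an explicit similarity score.</p>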
<p>This approach, called <a href="https://en.wikipedia.org/wiki/Semantic_search"><strong>semantic retrieval</strong></a>, looks at the graph's entities and finds the ones closest in meaning to the question.</p>
<p>So what are those <strong>closest entities to our question</strong>?</p>
<p><strong>Question:</strong> "What problem does Euler's theorem solve, and what constraint makes it solvable?"</p>
<p><strong>Top 5 Entities with similarity scores:</strong></p>
<ul>
<li><p>Euler's Question (Concept) - 0.599</p>
</li>
<li><p>Euler's Recipe (Concept) - 0.583</p>
</li>
<li><p>Euler's Theorem (Concept) - 0.580</p>
</li>
<li><p>Leonhard Euler (Concept) - 0.544</p>
</li>
<li><p>Euler 1735 Result (Abstraction) - 0.531</p>
</li>
</ul>
<hr />
<p>We now have two fundamentally different ways to enter the graph:</p>
<ul>
<li><p>map the question to known entities (entity linking)</p>
</li>
<li><p>or map it to similar meaning (semantic retrieval)</p>
</li>
</ul>
<p>They both produce entry points into the graph, but they get there differently. Entity linking tries to understand the question literally — and will either be too strict or too eager. Semantic retrieval doesn't try to understand the question at all; it just asks "what in the graph feels like this?"</p>
<p>Semantic retrieval also has a practical advantage: it returns exactly <em>k</em> results, no more, no less. Entity linking gives you however many the LLM decides to return.</p>
<p>These differences matter — but which approach actually leads to better answers? That's a harder question.</p>
<h2>Wrapping up</h2>
<p>The graph and the sentence speak different languages. To bridge them, I looked at two approaches: <strong>entity linking</strong> (map the question to named nodes) and <strong>semantic retrieval</strong> (find the nodes most similar in meaning).</p>
<p>Entity linking is precise but fragile — too literal and you get one node, too loose and the LLM answers the question for you. Semantic retrieval sidesteps that tension entirely by measuring meaning rather than matching names.</p>
<p>Both give us a way in. Which one leads somewhere more useful is what we'll find out next.</p>
]]></content:encoded></item><item><title><![CDATA[Stories in Structure as a Knowledge Graph]]></title><description><![CDATA[Your Stories in Structure Wrapped 2025
Over the past year, I kept returning to one idea: the graph-like nature of how we think.
When we think, we don’t retrieve isolated facts — we follow associations]]></description><link>https://storiesinstructure.com/stories-in-structure-as-a-knowledge-graph</link><guid isPermaLink="true">https://storiesinstructure.com/stories-in-structure-as-a-knowledge-graph</guid><category><![CDATA[Stories in Structure]]></category><category><![CDATA[knowledge graph]]></category><category><![CDATA[graphrag]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[Information Retrieval ]]></category><category><![CDATA[graph theory]]></category><dc:creator><![CDATA[Agata Migalska]]></dc:creator><pubDate>Tue, 03 Mar 2026 06:30:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/c169f5c0-016e-4adc-a59b-1580c96c5aec.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>Your <em>Stories in Structure</em> Wrapped 2025</strong></h2>
<p>Over the past year, I kept returning to one idea: the <strong>graph-like nature of how we think</strong>.</p>
<p>When we think, we don’t retrieve isolated facts — we follow associations between <em>things</em>. Take two seemingly unrelated ideas, like <a href="https://storiesinstructure.com/the-graph-of-your-eyesight">eye scanpaths</a> (paths our eyes make when looking) and <a href="https://storiesinstructure.com/climbing-routes-are-graphs">climbing routes</a>. At first, they feel worlds apart.</p>
<p>But let your brain pursue the topic for a few milliseconds longer — and <strong>boom</strong>. You found an association, a common denominator:</p>
<p><strong>Both are just directed path graphs.</strong></p>
<p>So, in essence, knowledge can be thought of as a graph — <strong>a knowledge graph</strong> (KG) <a href="https://dl.acm.org/doi/abs/10.1145/363196.363214">[1]</a>,<a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/bs.3830120511">[2]</a>, where <em>things</em> are nodes that are connected with other <em>things</em> through mental associations.</p>
<p>Since this is my first post in 2026, after a two-month break, why not create a <strong>graph of knowledge embodied in my posts in 2025?</strong></p>
<p>That immediately makes me ponder three questions:</p>
<ul>
<li><p>What would such a graph look like "on paper"?</p>
</li>
<li><p>How would one search it to find this <em>common denominator</em>?</p>
</li>
<li><p>And, finally, how is such a graph helpful in reasoning?</p>
</li>
</ul>
<p>Each question demands attention, and so the story will be divided into three chapters — three separate posts — investigating <strong>representation, retrieval, and reasoning</strong>.</p>
<p>This post focuses only on representation — <strong>construction of the graph itself</strong>; retrieval and reasoning will follow in subsequent posts. And while I know the ending of today's chapter, I don't yet know what will emerge in the follow-ups. Nevertheless, here we are, entering the world of knowledge graphs together to explore their appearance and use.</p>
<p>Come along as I explore this world and share my findings!</p>
<p>All code for graph construction and visualization is available in a public GitHub repository: <a href="https://github.com/storiesinstructure/knowledge_graphs">https://github.com/storiesinstructure/knowledge_graphs</a>.</p>
<h2><strong>What I want to achieve</strong></h2>
<p>The idea I have in mind is simple: <strong>take my blog posts of 2025 and transform them into a knowledge graph.</strong></p>
<img src="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/67d5c91a-50e0-4ac0-9e00-0996e8aee538.png" alt="Diagram showing a stack of files transforming into a graph of connected nodes via an arrow." style="display:block;margin:0 auto" />

<p>In other words: take a bunch of <strong>messy written text</strong> and transform it into <strong>nodes that clearly must represent something, and edges that connect these nodes for some undisclosed reason</strong>.</p>
<p>Let's untangle this mystery step by step.</p>
<p>At first, I wondered, what would I even <strong>imagine this graph to contain</strong>?</p>
<h2><strong>What does my <em>Wrapped 2025</em> graph contain?</strong></h2>
<p>If knowledge is a graph, then the first practical question unavoidably becomes: <strong>what exactly should exist in that graph?</strong></p>
<p>In my previous posts, the meaning of nodes and edges was simple. For example, when <a href="https://storiesinstructure.com/mapping-the-skies">investigating Santa's journey over the European sky to deliver Christmas presents</a>, nodes represented waypoints, and edges represented airway segments. The graph interpretation was tightly tied to a physical structure.</p>
<p>Here, however, we deal with a <strong>conceptual structure</strong>, and that means we have to decide on our own:</p>
<ul>
<li><p>what kind of nodes exist,</p>
</li>
<li><p>what kind of edges exist,</p>
</li>
<li><p>and what those types mean.</p>
</li>
</ul>
<p>This <strong>explicit definition of what types of entities exist and how they relate to one another</strong> is called an <a href="https://en.wikipedia.org/wiki/Ontology_(information_science)"><strong>ontology</strong></a>.</p>
<p>I thought: what if we had <strong>two such ontologies</strong>, both describing knowledge embodied in my posts, but <strong>focused on different aspects</strong>?</p>
<h3><strong>Conceptual Ontology</strong></h3>
<p>The first ontology I considered is focused on <strong>concepts</strong>, such as paths or graph traversal, and how they relate to one another.</p>
<p>Each <em>post</em> explored several such <em>concepts</em>. That gives us two entity types: <code>Post</code> and <code>Concept</code>, and the relationship: <code>Post → explores → Concept</code>.</p>
<p>Some <em>concepts</em> were related to one another, either in the same post or across posts. That can be expressed with <code>Concept → related_to → Concept</code>.</p>
<p>Sometimes, as in the above example with the connection between scanpaths and climbing routes, two or more concepts have a common abstraction. That brings in my last relation: <code>Concept → abstracts_to → Abstraction</code>.</p>
<pre><code class="language-plaintext">## Conceptual ontology 
​
Purpose: Surface recurring abstractions across domains.
​
Node types:
* Post
* Concept (path, traversal, constraint, visibility, locality)
* Abstraction (directed graph, flow, connectivity)
​
Relations:
* Post → explores → Concept
* Concept → abstracts_to → Abstraction
* Concept → related_to → Concept
</code></pre>
<h3><strong>Narrative Ontology</strong></h3>
<p>The second type of ontology that I imagined was focused on <strong>narration</strong>. Think of it as a selection of storytelling tools that make up a story.</p>
<p>Each <em>post</em> is tackling a specific <em>problem</em>, like: "I want to reconstruct the structure of my 2025 writing".</p>
<p>The <em>problem</em> builds <em>tension</em>. The current tension could be phrased as "If knowledge is graph-like, how do I even begin to construct a meaningful graph from messy prose?"</p>
<p>Then, the <em>tension</em> is resolved, everybody starts to breathe again and smile at one another. Unlike the conceptual ontology, which captures what is being discussed, this ontology focuses on <strong>how the story unfolds and on the emotions of a reader</strong>. Tension brings uncertainty, anticipation, curiosity, desire for meaning. It is released when the reader's <em>need for resolution</em> is satisfied: a mystery is explained, conflict ends, or a decision is made. Hence, <em>Tension → resolved_by → Resolution</em>.</p>
<p>Finding the <em>resolution</em> also concludes the <em>post</em>.</p>
<p>This reasoning results in the following ontology:</p>
<pre><code class="language-plaintext">## Narrative ontology
​
Purpose: Capture how things are explained. 
​
Node types:
* Post
* Problem
* Tension
* Resolution
​
Relations:
* Post → introduces → Problem
* Problem → leads_to → Tension
* Tension → resolved_by → Resolution
* Post → concludes_with → Resolution
</code></pre>
<h3><strong>Are my ontologies any good?</strong></h3>
<p>Having constructed this narrative ontology, I was still experiencing a lot of uncertainty. Shouldn't I have a <em>story</em> as a top entity, instead of the <em>post</em>? Shouldn't there be a direct relationship between the <em>post</em> and the <em>tension</em>?</p>
<p>And I don't know the right answers yet.</p>
<p>I decided to <strong>leave the ontologies as they are</strong>, instead of making them "perfect", and I expect that <strong>my further investigation</strong> on retrieval and reasoning (in the two follow-up posts) <strong>will lift the veil</strong>.</p>
<p>For now, I've established the types of nodes and edges, as well as their meaning. The next step is to <strong>construct the actual graphs</strong> and see what structure emerges.</p>
<h3><strong>First Attempt: Greedy Graph</strong></h3>
<p>To start, I needed to extract entities and relationships from text.</p>
<p>I cut my posts into ~2000-character fragments (roughly paragraph-sized). Then, I handed each piece and an ontology description to a Large Language Model (LLM). The LLM's job was to find entities (nodes) and relationships (edges) whose types appear in the ontology. This means that the LLM was not allowed to invent any new types. At the end, I added <code>Post</code> nodes programmatically: each post is connected to the entities extracted from its text.</p>
<p>For each ontology I got a graph. Knowing that my posts discussed intertwined ideas, I expected a lot of edges that link one post to another.</p>
<p>To my surprise, I saw very few of these connections in the conceptual graph (Fig. 1)!</p>
<img src="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/846d853c-df61-4d9e-9302-40942b875d3b.png" alt="Visualization of a clustered network graph with dense star-like communities connected by cross-links. Green edges connect numerous small nodes to central hub nodes, while red edges highlight radial structures within each cluster. Labels indicate the most common nodes: “node”, “edge&quot;, and a duplicated node &quot;nodes&quot;. " style="display:block;margin:0 auto" />

<p><em>Figure 1. Knowledge graph constructed with conceptual ontology from my 2025 blog posts. Nodes: red - posts, green - concepts, blue - abstractions</em>.</p>
<hr />
<p>The most connected <code>Concepts</code> were those of "Node" and "Edge", which is not surprising since I used these exact words in every one of my posts. That also shows that when exactly the same words were used across posts, they were unified during graph construction.</p>
<p>But I wasn't satisfied! I would want the "Nodes" <code>Concept</code> node to be unified with the "Node" one. Yet, it wasn't. <strong>There were obvious duplicates in my graph!</strong></p>
<p>The second problem I spotted is the blue <code>Abstraction</code> nodes floating around, disconnected from <code>Concepts</code> and without a direct connection to any <code>Post</code>. In my conceptual ontology, <code>Abstraction</code> was supposed to sit above <code>Concept</code>, which I modelled with the relation <code>Concept → abstracts_to → Abstraction</code>.</p>
<p>Having <code>Abstraction</code> nodes floating around means they can’t be reached from a post or concept during traversal.</p>
<img src="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/177a2cce-01c6-4bed-8a46-d388e76b5128.png" alt="Visualization of the conceptual graph with floating Abstraction nodes highlighted with cyan bounding boxes. Takeaway: The majority of abstractions are disconnected from the main graph component." style="display:block;margin:0 auto" />

<p><em>Figure 2. Most of the</em> <code>Abstraction</code> <em>nodes in the conceptual graph (marked with cyan boxes) are disconnected from the main graph component.</em></p>
<hr />
<p>Before we move on, let's have a look at the narrative graph (Fig. 3), because it is even more disconnected. The graph is made of five large <strong>components</strong> (i.e. parts of a graph that are fully separated from the rest of the graph) and three disconnected nodes (which, technically, are also graph components).</p>
<img src="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/f40813d3-ca0d-4832-b767-d60622b9500b.png" alt="Visualization of a clustered network graph with dense star-like communities connected by cross-links. Edges connect numerous small nodes to central hub nodes, while edges highlight radial structures within each cluster. " style="display:block;margin:0 auto" />

<p><em>Figure 3. Knowledge graph constructed with narrative ontology from my 2025 blog posts. Nodes: red - posts, green - problems, cyan - tensions, purple - resolution.</em></p>
<hr />
<p>Zooming into one post, <a href="https://storiesinstructure.com/climbing-routes-are-graphs">Climbing Routes are Graphs</a>, I found four (green) problems, each producing its own (cyan) tension and resolved by a (purple) resolution.</p>
<img src="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/5d64bef9-5656-4074-a556-1c51f665a969.png" alt="Network visualization of the narrative graph around the &quot;Climbing Routes are Graphs&quot; post node. Node colors: red - post, green - problem, cyan - tension, purple - resolution. Takeaways: the post has four independent cycles of Problem -&gt; Tension -&gt; Resolution." style="display:block;margin:0 auto" />

<p><em>Figure 4. Narrative graph component of the "Climbing Routes Are Graphs" post. Nodes: red - post, green - problem, cyan - tension, purple - resolution.</em></p>
<hr />
<p><strong>Summarizing my first attempt</strong>: I achieved the goal of creating a graph from my blog posts, but I'm far from satisfied with the results. Maybe it's the duplicates that mess things up?</p>
<h3><strong>Second Attempt: Deduplicated Graph</strong></h3>
<p>The hypothesized major culprit was the duplicates.</p>
<p>To deduplicate, for each entity type (<code>Concept</code>, <code>Tension</code>, <code>Problem</code>, etc.), I sent the list of node names to the LLM, with the task of identifying groups that are true duplicates (e.g. "SfM" and "Structure from Motion"). The LLM picks a canonical name for each group, and the graph is rewired so that the edges point to the new node. Finally, the old duplicated nodes are removed.</p>
<p>This process is called <a href="https://en.wikipedia.org/wiki/Record_linkage">entity resolution</a>, as it groups together entities with the same meaning, or <a href="https://en.wikipedia.org/wiki/Canonicalization">canonicalization</a>, as in addition to grouping — a new, canonical / standard name is assigned.</p>
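<p>The rewiring itself is mechanical once the groups are known. A minimal sketch, with illustrative data shapes (the LLM supplies the actual groups):</p>

```python
# Sketch of canonicalization: repoint every edge that touches a
# duplicate at its group's canonical name, then the duplicates vanish.
# The edge and group shapes below are illustrative.

def canonicalize(edges, groups):
    """Map (source, relation, target) edges onto canonical node names.

    groups: {canonical_name: [duplicate_names, ...]}
    """
    alias = {dup: canon for canon, dups in groups.items() for dup in dups}
    return [(alias.get(s, s), rel, alias.get(t, t)) for s, rel, t in edges]

edges = [("Post A", "explores", "Nodes"), ("Post B", "explores", "Node")]
merged = canonicalize(edges, {"Node": ["Nodes", "Node"]})
# Both posts now point at the single canonical "Node" concept.
```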
<p>What is the visual effect of this canonicalization?</p>
<img src="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/f3f962ee-7f77-44fd-97cd-777324d2fe3b.png" alt="Network visualization of the conceptual graph. Node colors: red - post, green - concept, blue - abstraction. Takeaway: graph is made of five large components that are connected by common concepts." style="display:block;margin:0 auto" />

<p><em>Figure 5. Canonicalized knowledge graph constructed with conceptual ontology from my 2025 blog posts. Nodes: red - posts, green - concepts, blue - abstractions</em>.</p>
<hr />
<p>The structure is still composed of five large components (each corresponding to a blog post), but now they are much more connected and the graph has become more spatial.</p>
<p>Reviewing a few examples of what has been merged, I'm also satisfied. See for yourselves: the new canonical name is on the left, and all merged-and-replaced nodes are on the right:</p>
<pre><code class="language-plaintext">Graph Without Edges ← Point Cloud, Sparse Point Cloud, Graph Without Edges
Photos ← Photo, Photos, Few Photographs, Photographs, Additional_Photos, Set Of Flat Images
Waypoint ← Waypoint, Waypoints, Named Waypoints
</code></pre>
<p>The narrative graph has also improved. Posts share either a tension or a resolution, and in some cases tensions have been unified into one. For example, in <a href="https://storiesinstructure.com/how-to-solve-the-house-puzzle">How to Solve a House Puzzle</a>, the problem of drawing a house and the problem of finding a route through all the bridges of Königsberg lead to the same tension of "Existence of an Euler Path in the Graph".</p>
<img src="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/4877ea8e-3a10-4402-bc3b-054df656b26f.png" alt="Visualization of the narrative graph. Takeaway: deduplication revealed the tension shared by two posts." style="display:block;margin:0 auto" />

<p><em>Figure 6. Canonicalized narrative graph neighborhood of the "Climbing Routes Are Graphs" post node. Nodes: red - post, green - problem, cyan - tension, purple - resolution.</em></p>
<hr />
<p><strong>Summarizing my second attempt:</strong> Deduplication fixed the obvious so the graph can actually connect. However, that hasn't solved the problem of floating abstractions.</p>
<h3>Third attempt: Hierarchical aggregation</h3>
<p>Why did I want abstractions?</p>
<p>That's a valid question to ask. I was curious what knowledge is captured in my knowledge graph <strong>without diving into details</strong>. I believe I'm not alone in the urge to have <strong>a summary</strong> or a means to <strong>grasp the general idea</strong>.</p>
<p>Although the LLM did not succeed in linking <code>Abstraction</code> to <code>Concept</code> nodes, the hunch persists: <strong>there are groups of nodes that have something in common.</strong></p>
<p>Are these groups <strong>visible in the structure of the graph</strong> itself?</p>
<p>I applied the Louvain community detection algorithm <a href="https://www.nature.com/articles/s41598-019-41695-z">[3]</a> (a community is a dense group of nodes) and handed the results over to the LLM to summarize. This resulted in 61 communities being found in the conceptual graph:</p>
<img src="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/e04468db-2e85-4eea-95ed-714cabc8018e.png" alt="Communities visualization in the conceptual graph. Takeaway: communities do not overlap between the graphs." style="display:block;margin:0 auto" />

<p><em>Figure 7. Communities detected in the canonicalized conceptual graph. 337 Concept nodes and 30 Abstraction nodes were grouped into 61 communities, ranging from a single member to a maximum of 35 nodes (Community 6).</em></p>
<hr />
<p>Zooming in again on a single post, "Climbing Routes are Graphs", almost all the <code>Concept</code> nodes associated with it were put into one community, grouping together notions around path graphs and Eulerian path existence, like so:</p>
<pre><code class="language-plaintext">Community 6 (35 members)

This community links concepts from graph theory—especially Eulerian paths, node degrees, and classic problems like the Bridges of Königsberg—with analogies from rock climbing and physical structures, using routes, ropes, bolts, bridges, and airways to illustrate graphs, edges, vertices, and traversals in an intuitive, real-world way.
</code></pre>
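<p>The detection step itself is compact. Assuming <code>networkx</code> (which may differ from the repository's actual implementation; the edge data below is illustrative), it boils down to:</p>

```python
import networkx as nx

# Sketch of community detection: Louvain groups densely connected nodes;
# each group's member names would then go to the LLM for a summary.
# The two "triangles" of edges below are illustrative data.

def detect_communities(edges, seed: int = 42):
    G = nx.Graph()
    G.add_edges_from(edges)
    # Louvain maximizes modularity, favoring dense groups of nodes.
    return nx.community.louvain_communities(G, seed=seed)

edges = [
    ("Euler's Theorem", "Eulerian Path"), ("Eulerian Path", "Node Degree"),
    ("Node Degree", "Euler's Theorem"),
    ("Scanpath", "Fixation"), ("Fixation", "Saccade"), ("Saccade", "Scanpath"),
    ("Euler's Theorem", "Scanpath"),  # a single cross-topic link
]
communities = detect_communities(edges)
```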
<p>In the narrative graph, nodes associated with the "Climbing Routes are Graphs" post form four communities concentrated around four different areas:</p>
<ul>
<li><p>Community 0 (3 members) models a rock climbing route as a graph problem</p>
</li>
<li><p>Community 1 (7 members) centers on understanding and determining the existence of an Eulerian path in a graph</p>
</li>
<li><p>Community 2 (3 members) frames climbing as the process of turning an abstract graph of possible movements into a concrete path</p>
</li>
<li><p>Community 3 (3 members) represents the modeling of visual data as sparse graphs</p>
</li>
</ul>
<img src="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/272cc482-b4de-4499-910b-791ed468e008.png" alt="Communities visualization of the narrative graph around the &quot;Climbing Routes are Graphs&quot; blog post. Takeaway: there are four distinct communities." style="display:block;margin:0 auto" />

<p><em>Figure 8. Four communities detected in the neighborhood of the "Climbing Routes Are Graphs" post node, in the canonicalized narrative graph.</em></p>
<hr />
<p>Clearly, communities are a way to summarize local groups of nodes, and so each post can be briefly summarized by its communities.</p>
<p>Communities, however, are still not able to capture that "eye scanpath" in one post and "directed path graph" in another post are both subtypes of "directed graph", unless it is explicitly stated in the text.</p>
<p><strong>Third attempt summary:</strong> Summarization - achieved, generalization - not yet.</p>
<h3><strong>Fourth attempt: Generalization with hypernyms</strong></h3>
<p>Let me quote myself :D</p>
<blockquote>
<p>Communities, however, are still not able to capture that "eye scanpath" in one post and "directed path graph" in another post are both subtypes of "directed graph", unless it is explicitly stated in the text.</p>
</blockquote>
<p>What that means is that:</p>
<ul>
<li><p>"eye scanpath" IS A "directed graph"</p>
</li>
<li><p>"directed path graph" IS A "directed graph"</p>
</li>
<li><p>but if I never said it explicitly in the text, the connection will NOT appear in the graph.</p>
</li>
</ul>
<p>It is telling us something: <strong>the hierarchy doesn't live in the text.</strong></p>
<p><strong>The hierarchy lives in domain knowledge.</strong></p>
<p>My blog talks about specific concepts; sometimes it mentions an abstraction, but oftentimes the <strong>generalizations are implied, not stated</strong>.</p>
<p>So if I want to have hierarchical structure and some "super" nodes that group several concepts together into a broader domain, I have to <strong>impose this hierarchy</strong>.</p>
<p>Using an LLM, I grouped nodes of each type into <strong>"broader conceptual categories that the members are instances or subtypes of".</strong> The LLM was free to come up with the name and the scope of these broader conceptual categories.</p>
<p>Effectively, I asked the LLM to cluster nodes and find a <strong>hypernym</strong> for each cluster (e.g. <em>animal</em> is a hypernym of <em>elephant</em>). Every node in a cluster is connected with an IS_A edge to the cluster's hypernym.</p>
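<p>Once the LLM has returned clusters, materializing them in the graph is straightforward. A minimal sketch, where the cluster names and members are invented for illustration:</p>

```python
# Hypothetical output of the LLM clustering step: hypernym -> members.
clusters = {
    "directed graph": ["eye scanpath", "directed path graph"],
    "sparse spatial graph": ["point cloud graph", "airway network"],
}

# Materialize one super-node per cluster plus IS_A edges to it.
nodes, edges = set(), set()
for hypernym, members in clusters.items():
    nodes.add(hypernym)                        # the new super-node
    for member in members:
        nodes.add(member)
        edges.add((member, "IS_A", hypernym))  # member IS_A hypernym

print(sorted(edges))
```

<p>Note that the super-node carries knowledge that was never in the text: the hierarchy is imposed by the LLM's domain knowledge, not extracted.</p>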
<p>The problem originated in the conceptual ontology, but the outcome is actually easier to visualize in the narrative graph.</p>
<p>For three separate <code>Problem</code> nodes related to sparse spatial graphs, we got a new <code>Problem</code> super-node "Sparse graphs, point clouds, and spatial neighborhoods", which generalizes these three problems and also creates a new link between two posts.</p>
<img src="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/9509e02e-d353-4355-a2e7-c0bb3f6fd901.png" alt="Visualization of problem super-nodes in the narrative graph. Takeaway: super-nodes group together multiple problems, making a link between posts." style="display:block;margin:0 auto" />

<p><em>Figure 9. Problems (round green nodes) and super-problems (rhombus green nodes) in the narrative graph, showing that several problems are sub-problems of a broader one. A broader problem may be discussed by more than one post (red nodes).</em></p>
<hr />
<p>Similarly, for six separate <code>Resolution</code> nodes, we got a new <code>Resolution</code> super-node "Graph theory and path representations", which generalizes these six resolutions and also creates a new link between four posts.</p>
<img src="https://cdn.hashnode.com/uploads/covers/68c1f8825145eb977febb99d/97b4c2b4-047f-481d-b3df-7f4ed44a3743.png" alt="Visualization of resolution super-nodes in the narrative graph. Takeaway: in an extreme case four posts' resolutions have a common hypernym." style="display:block;margin:0 auto" />

<p><em>Figure 10. Resolutions (round purple nodes) and super-resolutions (rhombus purple nodes) in the narrative graph, showing that several resolutions are sub-resolutions of a broader one. A broader resolution may release tensions that arose in more than one post (red nodes).</em></p>
<hr />
<p>While I couldn't find the "directed graph" super-node, the generalization I obtained with hypernyms is close enough to what I envisioned.</p>
<p>Interestingly, investigating both communities and hypernyms allowed us to see two different approaches to "zooming out" on the graph. Communities are based on edge density, so purely on the structure of the graph, whereas hypernyms are semantic — based on conceptual hierarchy.</p>
<h2>Conclusion</h2>
<p>I didn’t begin with a neatly defined pipeline. I began with a graph that didn’t look right. Each refinement step — deduplication, community detection, hierarchical grouping — emerged as a response to something that felt off.</p>
<p>But such a pipeline exists. The work of Edge et al. <a href="https://arxiv.org/abs/2404.16130">[4]</a> defines the pipeline order for graph creation in GraphRAG (Graph Retrieval Augmented Generation): <code>extract entities</code> → <code>resolve/merge duplicates</code> → <code>detect communities</code> → <code>summarize communities</code>. Microsoft GraphRAG documentation <a href="https://microsoft.github.io/graphrag/index/default_dataflow/">[5]</a> makes the pipeline even more explicit.</p>
<p>The biggest surprise was that structure doesn’t automatically "live in the text" and one has to decide where it comes from. Additionally, what felt natural to me (hypernyms) is not a common practice in the GraphRAG pipeline. That makes me wonder about the <strong>consequences of both designs for search and information retrieval</strong>.</p>
<p>Finally, what I find delightful is that we constructed a graph from texts that themselves explore graphs!</p>
<p>In my follow-up posts, I’ll treat these graphs as a retrieval layer and test what questions they answer.</p>
<h2><strong>References</strong></h2>
<p>[1] Quillian, M. Ross. "The teachable language comprehender: A simulation program and theory of language." Communications of the ACM 12.8 (1969): 459-476. <a href="https://dl.acm.org/doi/abs/10.1145/363196.363214">https://dl.acm.org/doi/abs/10.1145/363196.363214</a></p>
<p>[2] Quillian, M. Ross. "Word concepts: A theory and simulation of some basic semantic capabilities." Behavioral science 12.5 (1967): 410-430. <a href="https://onlinelibrary.wiley.com/doi/abs/10.1002/bs.3830120511">https://onlinelibrary.wiley.com/doi/abs/10.1002/bs.3830120511</a></p>
<p>[3] Traag, Vincent A., Ludo Waltman, and Nees Jan Van Eck. "From Louvain to Leiden: guaranteeing well-connected communities." <em>Scientific reports</em> 9.1 (2019): 5233. <a href="https://www.nature.com/articles/s41598-019-41695-z">https://www.nature.com/articles/s41598-019-41695-z</a></p>
<p>[4] Edge, Darren, et al. "From local to global: A Graph RAG approach to query-focused summarization." arXiv preprint arXiv:2404.16130 (2024). <a href="https://arxiv.org/abs/2404.16130">https://arxiv.org/abs/2404.16130</a></p>
<p>[5] Microsoft GraphRAG documentation <a href="https://microsoft.github.io/graphrag/index/default_dataflow/">https://microsoft.github.io/graphrag/index/default_dataflow/</a></p>
]]></content:encoded></item><item><title><![CDATA[Mapping the Skies]]></title><description><![CDATA[Christmas has passed, and I truly hope you got your presents — hopefully more elaborate than an annual supply of socks.
Now that your presents are here (socks or not), join me for a moment to ponder the logistics of Santa Claus.
So how does Santa del...]]></description><link>https://storiesinstructure.com/mapping-the-skies</link><guid isPermaLink="true">https://storiesinstructure.com/mapping-the-skies</guid><category><![CDATA[Flight planning]]></category><category><![CDATA[airspace]]></category><category><![CDATA[Stories in Structure]]></category><category><![CDATA[graph theory]]></category><category><![CDATA[aviation]]></category><category><![CDATA[Systems Thinking]]></category><category><![CDATA[navigation]]></category><category><![CDATA[SANTA CLAUS]]></category><dc:creator><![CDATA[Agata Migalska]]></dc:creator><pubDate>Tue, 30 Dec 2025 02:22:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767061745609/8e515624-dbfe-4720-bd07-4fba56019f88.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Christmas has passed, and I truly hope you got your presents — hopefully more elaborate than an annual supply of socks.</p>
<p>Now that your presents are here (socks or not), join me for a moment to ponder the <strong>logistics of Santa Claus</strong>.</p>
<p><strong>So how does Santa deliver your presents straight to your chimney?</strong></p>
<h2 id="heading-santas-base-your-chimney">Santa's Base → Your Chimney</h2>
<p>For Santa to get to your house, <strong>three things</strong> are required: a starting point, an ending point, and a route between the two. Does that already sound like <strong>traversing a graph</strong>?</p>
<h3 id="heading-starting-point">Starting Point</h3>
<p>As a European child, I was made to believe that Santa Claus lives in <a target="_blank" href="https://en.wikipedia.org/wiki/Lapland_(Finland)">Lapland</a>, in northern Finland. For the sake of argument, let's stick to this childhood belief and assume Santa's base is in Rovaniemi — his <a target="_blank" href="https://www.lapland.fi/visit/only-in-lapland/lapland-home-santa-claus-village/">"official" hometown</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767060385619/a9457045-ced9-414a-a938-67fdaf079807.png" alt="Location of Rovaniemi on the map of Europe" class="image--center mx-auto" /></p>
<p><em>Figure 1: Location of Rovaniemi. © OpenStreetMap contributors.</em></p>
<hr />
<h3 id="heading-ending-point">Ending Point</h3>
<p>The second assumption we have to make is that you actually are in possession of a chimney. If you are not, then... well — how about considering the closest chimney as your own?</p>
<h3 id="heading-why-europe">Why Europe?</h3>
<p>The third assumption is that this <strong>chimney is located in Europe</strong>. Not because Santa prefers Europe, but because <strong>Europe makes the structure of the sky explicit</strong>. Relatively small countries fragment airspace into national volumes, and dense traffic requires cross-border coordination. As a consequence, local authorities publish their maps and regulations, making the system easier to explore — even for laypeople like us.</p>
<p>Europe is not special — each major world region implements the same global framework, only differently. Regardless of the implementation, Santa’s problem remains unchanged: <strong>finding a safe path through a constrained, shared, three-dimensional space.</strong></p>
<h3 id="heading-visual-vs-instrument-based-navigation">Visual vs Instrument-Based Navigation</h3>
<p>Finally, we have to decide whether Santa flies using <strong>visual cues</strong> — such as terrain, landmarks, aerodromes, lights, and light patterns — or relies on <strong>instruments</strong> that provide attitude, speed, altitude, navigation, communication, and system awareness. In aviation, these two navigation modes are formally codified as <strong>Visual Flight Rules (VFR)</strong> and <strong>Instrument Flight Rules (IFR)</strong>.</p>
<p><strong>Can Santa and his reindeer operate in both navigation modes?</strong></p>
<p>Personally, I would be disappointed if they had not mastered both, given their many centuries of experience in present delivery.</p>
<p><strong>Which navigation mode would be more reasonable to choose?</strong></p>
<p>Flying using visual cues is a little like going off-road: it feels like freedom, but it does not scale well under time pressure, poor weather, or long distances. Flying on instruments, on the other hand, enables predictable routing through <strong>structured airspace</strong>, with defined coordination and separation.</p>
<p>Given that:</p>
<ul>
<li><p>there were <strong>172,901 flights worldwide</strong> on Christmas Eve 2025 (see Figure 2),</p>
</li>
<li><p>Christmas presents are of utmost priority,</p>
</li>
<li><p>and so is Santa’s safety,</p>
</li>
</ul>
<p>it is reasonable to assume that <strong>Santa relies on instruments and structured airspace whenever possible</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767060443865/dd6a07e4-c46e-4df2-8458-df42c204da8a.png" alt="Flightradar24 Statistics" class="image--center mx-auto" /></p>
<p><em>Figure 2: Total number of flights per day in 2025, based on statistics published by</em> <a target="_blank" href="https://www.flightradar24.com/data/statistics"><em>Flightradar24</em></a><em>.</em></p>
<hr />
<h2 id="heading-european-airspace-as-a-graph">European airspace as a graph</h2>
<p>We set forth the assumptions. We established the starting point, the ending point, and that it is plausible that Santa Claus operates in structured airspace under IFR.</p>
<p>To deliver presents to your chimney, <strong>Santa needs a route</strong>. And to determine that route, <strong>he needs to understand the underlying structure — the airspace graph</strong>. So let's explore it!</p>
<h3 id="heading-nodes-waypoints">Nodes: Waypoints</h3>
<p>Santa likely departs under local Air Traffic Control and then joins the airspace route network at a nearby waypoint. Looking at the chart below, you can spot the <em>ULROM</em> waypoint near the Rovaniemi airport. This could be Santa's first waypoint when heading south.</p>
<p>Waypoints are abstract nodes defined by coordinates (latitude and longitude) and identified by globally unique five-letter codes.</p>
<p><strong>These waypoints are nodes in the airspace graph.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767060478613/8aa5180d-0718-4730-ba51-60f913dbd4a4.png" alt="Rovaniemi area en-route chart" class="image--center mx-auto" /></p>
<p><em>Figure 3: Rovaniemi enroute chart (excerpt). Source: Finavia / ANS Finland,</em> <a target="_blank" href="https://www.ais.fi/eaip/"><em>eAIP Finland, ENR 6.1</em></a><em>, accessed 22.12.2025.</em></p>
<h3 id="heading-edges-airways">Edges: Airways</h3>
<p>Looking again at the Rovaniemi enroute chart above, you'll notice that some waypoints are connected. Look at the blue line that goes from <em>OSLIT</em> to <em>ULROM</em>, and then from <em>ULROM</em> to <em>RENVI</em>. This blue line, named <em>Y86</em>, is a defined airway. Moreover, it has a defined direction (denoted with arrows).</p>
<p>While there are only two defined airways in the vicinity of Rovaniemi, investigating the Helsinki area uncovers a different story.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767060497233/0520ff03-ea16-4231-ba18-3eda2a00d983.png" alt="Helsinki area en-route chart" class="image--center mx-auto" /></p>
<p><em>Figure 4: Helsinki area enroute chart (excerpt). Source: Finavia / ANS Finland,</em> <a target="_blank" href="https://www.ais.fi/eaip/"><em>eAIP Finland, ENR 6.1</em></a><em>, accessed 22.12.2025.</em></p>
<hr />
<p>There are many more defined (blue) airways in both inwards and outwards directions. These defined airways are <strong>one type of edges in the airspace graph</strong>.</p>
<p>Did I just say <strong>one type</strong>? Does that mean that there are <strong>other types</strong>?</p>
<h3 id="heading-edges-direct-connections">Edges: Direct connections</h3>
<p>To answer this question, let's zoom out even more and look at the entirety of Europe. What we can see here are those defined airways (shown as black lines) that are the <strong>explicit part of the airspace graph</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767060543189/004619cc-0d87-4a51-afa8-552ba243f52b.png" alt="Simplified map of the European air route network showing major routes and nodes" class="image--center mx-auto" /></p>
<p><em>Figure 5: Simplified view of the European route network. Adapted from the</em> <a target="_blank" href="https://www.eurocontrol.int/publication/eurocontrol-route-network-chart-ern-27-november-2025"><em>EUROCONTROL Route Network Chart (ERN), edition 27 November 2025</em></a><em>. © EUROCONTROL.</em></p>
<hr />
<p>But an aircraft does not always have to follow these fixed corridors.</p>
<p>Have you noticed <strong>curious blank spaces</strong>, for instance in France or in the Balkan area? These are the areas with no fixed corridors at all! Then how is it possible to navigate to a chimney in Rome?</p>
<p>What we are dealing with here is <strong>Free Route Airspace (FRA)</strong>, where aircraft may fly direct between permitted waypoints while still using predefined airways where available. Green lines in the above chart depict the boundaries of FRAs, and you can easily see that free routing is possible across much of Europe.</p>
<p>So even though we don't get these direct edges explicitly in the charts, they are there. <strong>These implicit direct connections constitute the second type of edges in the airspace graph.</strong></p>
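<p>Both edge types, published airways and direct segments, fit naturally into a single directed graph. A minimal sketch, using waypoint names from the charts above (the DCT leg itself is illustrative):</p>

```python
# Tiny directed airspace graph around Rovaniemi (cf. Figure 3).
# A published airway segment (Y86) and a free-route direct (DCT) leg
# are both just directed edges; the labels record the edge type.
edges = {
    ("OSLIT", "ULROM"): "Y86",
    ("ULROM", "RENVI"): "Y86",
    ("RENVI", "LUNIP"): "DCT",   # direct segment inside free route airspace
}

# Adjacency view for traversal.
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set())

def reachable(adj, start):
    """Depth-first search: all waypoints reachable from `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(adj[node] - seen)
    return seen

print(sorted(reachable(adj, "OSLIT")))
```

<p>Because the edges are directed, reachability is asymmetric: from <em>RENVI</em> you cannot get back to <em>OSLIT</em> in this toy graph, just as a one-way airway cannot be flown backwards.</p>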
<h2 id="heading-third-dimension-flight-levels">Third Dimension: Flight Levels</h2>
<p>So far, the airspace has been described as if it were flat — defined only by latitude and longitude. In reality, aircraft also move vertically, and this vertical dimension is structured using <strong>flight levels</strong>.</p>
<p>A flight level is a standardized altitude reference, expressed in hundreds of feet and based on a common pressure datum. Instead of thinking in absolute altitude above sea level, aircraft operate on these shared vertical layers, allowing multiple flows of traffic to safely occupy the same geographic space.</p>
<p>In graph terms, <strong>flight levels extend waypoints and routes into the third dimension</strong>. The same horizontal path may exist simultaneously at multiple vertical layers, with separation enforced by assigning different flight levels to different aircraft.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767060563516/a4dde877-0b4c-41b0-9fe7-37c6574b7820.png" alt="Comparison of flight levels" class="image--center mx-auto" /></p>
<p><em>Figure 6: Comparison of assorted flight level systems to scale by CMG Lee - Own work,</em> <a target="_blank" href="https://creativecommons.org/licenses/by-sa/4.0"><em>CC BY-SA 4.0</em></a><em>,</em> <a target="_blank" href="https://commons.wikimedia.org/w/index.php?curid=121718377"><em>Link</em></a><em>.</em></p>
<hr />
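<p>In code, this third dimension can be sketched by pairing each waypoint with a flight level; all waypoints and levels below are illustrative, not taken from any chart:</p>

```python
# Sketch: lift the 2D waypoint graph into 3D with flight levels.
flight_levels = [280, 300, 320, 340]   # FL280 = 28,000 ft, and so on
waypoints = ["ULROM", "RENVI", "LUNIP"]

# Each 3D node is a (waypoint, flight level) pair.
nodes_3d = [(wp, fl) for wp in waypoints for fl in flight_levels]

# The same horizontal segment exists once per vertical layer...
cruise_edges = [(("ULROM", fl), ("RENVI", fl)) for fl in flight_levels]

# ...and climb edges connect adjacent levels at the same waypoint.
climb_edges = [((wp, lo), (wp, hi))
               for wp in waypoints
               for lo, hi in zip(flight_levels, flight_levels[1:])]

print(len(nodes_3d), len(cruise_edges), len(climb_edges))
```

<p>Separation then amounts to assigning different aircraft to different <code>(waypoint, flight level)</code> layers of the same horizontal path.</p>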
<h2 id="heading-how-santa-integrates-with-structure">How Santa integrates with structure</h2>
<p>What remains to be done by Santa and his crew is to <strong>construct a flight plan</strong>. This flight plan is exactly the <strong>path</strong> that Santa has to take to <strong>traverse the graph from the starting point to the ending point</strong>: <strong>a sequence of named waypoints connected by a mix of predefined airways and direct segments.</strong></p>
<p>In the example flight plan below, Santa departs from his base in Rovaniemi, joins the airspace structure in <em>ULROM</em> at flight level 280 (28,000 ft), briefly follows a published airway <em>Y86</em> while climbing to FL320 by <em>RENVI</em>, then transitions into free-route airspace. From there, Santa completes the climb to FL340 by <em>LUNIP</em> and continues across Europe on a series of direct segments between waypoints, remaining at cruise altitude until leaving the en-route phase at <em>GERVA</em> to deliver presents to a chimney close to Rome.</p>
<pre><code class="lang-bash">F280 ULROM Y86 RENVI/F320 DCT LUNIP/F340 DCT VABER DCT NINTU DCT TUMGU DCT GERVA
</code></pre>
<p>This flight plan is for illustration purposes only (and is missing cruising speed). Real-world routes are validated against current airspace availability, restrictions, and traffic flows.</p>
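<p>As a toy exercise, the route string above can be split into legs with a small parser. The conventions it assumes (a leading <code>Fnnn</code> sets the initial level, tokens containing digits are airways, <code>WPT/Fnnn</code> changes level at a fix) are a simplification of the real ICAO route format:</p>

```python
def parse_plan(plan: str):
    """Split a simplified route string into (waypoint, level, via) legs."""
    tokens = plan.split()
    legs, via, level = [], None, None
    if tokens and tokens[0][0] == "F" and tokens[0][1:].isdigit():
        level = int(tokens[0][1:])             # initial flight level
        tokens = tokens[1:]
    for tok in tokens:
        if tok == "DCT":
            via = "DCT"                        # direct (free route) segment
        elif any(ch.isdigit() for ch in tok.split("/")[0]):
            via = tok                          # published airway, e.g. Y86
        else:
            waypoint, _, lvl = tok.partition("/F")
            if lvl:
                level = int(lvl)               # level change at this fix
            legs.append((waypoint, level, via))
            via = None
    return legs

plan = ("F280 ULROM Y86 RENVI/F320 DCT LUNIP/F340 "
        "DCT VABER DCT NINTU DCT TUMGU DCT GERVA")
for leg in parse_plan(plan):
    print(leg)
```

<p>The output is a walk through the airspace graph: each leg names the next node, the vertical layer, and the type of edge used to reach it.</p>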
<p>Nevertheless, once a flight plan is filed and approved, Santa and his reindeer can roam the airspace with no harm to either the crew or the cargo.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>The key take-away message from this post is that <strong>airspace</strong> is a <strong>directed graph</strong> with <strong>waypoints as nodes</strong>, <strong>airways</strong> (both explicit and implicit) <strong>as edges</strong>, and with <strong>flight levels that extend these nodes and edges into the third dimension</strong>.</p>
<p>Even if Santa Claus does not exist.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767060592224/d748ac8d-d58a-450e-8673-0041239cb20f.png" alt="Illustration of a person in Christmas socks sitting by a fireplace." class="image--center mx-auto" /></p>
<p><em>Figure 7: The hazards of chimney-based logistics. (Image generated with OpenAI’s DALL·E via ChatGPT.)</em></p>
<h2 id="heading-notes">Notes</h2>
<ul>
<li><p>Waypoints, along with their details, are listed in Chapter ENR 4.4 of the country’s electronic Aeronautical Information Publication -- eAIP, such as this <a target="_blank" href="https://www.ais.fi/eaip/">eAIP for Finland</a>.</p>
</li>
<li><p>In addition to abstract waypoints, there are also legacy nodes based on physical radio infrastructure. Those radio navigation aids are listed in Chapter ENR 4.2 of eAIP and are denoted with 2- and 3-letter codes.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[The Graph of Your Eyesight]]></title><description><![CDATA[The Value of Your Gaze
Have you ever wondered what you’re actually looking at when you look?
Have you ever wondered why someone would want to know what you’re looking at?
It's no surprise anymore that our behavioural data is valuable. Roughly speakin...]]></description><link>https://storiesinstructure.com/the-graph-of-your-eyesight</link><guid isPermaLink="true">https://storiesinstructure.com/the-graph-of-your-eyesight</guid><category><![CDATA[Saliency]]></category><category><![CDATA[Scanpaths]]></category><category><![CDATA[Scanpath prediction]]></category><category><![CDATA[Stories in Structure]]></category><category><![CDATA[graph theory]]></category><category><![CDATA[Visual Attention]]></category><category><![CDATA[Computer Vision]]></category><dc:creator><![CDATA[Agata Migalska]]></dc:creator><pubDate>Tue, 25 Nov 2025 04:16:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763939319068/5d88b432-6187-4083-8ea1-4f7454e2d84b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-the-value-of-your-gaze">The Value of Your Gaze</h2>
<p>Have you ever wondered <strong>what you’re actually looking at</strong> when you look?</p>
<p>Have you ever wondered <strong>why someone would want to know</strong> what you’re looking at?</p>
<p>It's no surprise anymore that our behavioural data is valuable. Roughly speaking, <strong>two worlds care most about where we direct our eyes</strong>. <strong>Scientists</strong>—from psychology and neuroscience to computer vision—study eye movements to <strong>understand</strong> how the brain consumes and digests information. Meanwhile, <strong>industry</strong> uses attention as a <strong>lever</strong>: advertisers, designers, retailers, game studios, and even car manufacturers want to know what people notice first and what they miss. <strong>One side seeks understanding; the other seeks influence.</strong></p>
<h2 id="heading-what-do-you-see-when-you-look">What Do You See When You Look?</h2>
<p>So what do you actually <em>see</em> when you look at an image? Do your eyes sweep from left to right, top to bottom? Do they lock onto whatever sits in the center? Are you selective? Or is your curiosity evenly spread across the whole image?</p>
<p>To explore this, here’s a small experiment. <strong>Look at the image below for five seconds</strong> and simply <strong>notice what your eyes settle on</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764038867535/18142915-776d-4eab-b30d-250d5efceba5.jpeg" alt class="image--center mx-auto" /></p>
<p>Figure 1. Image from Object and Semantic Images and Eye-tracking (OSIE) dataset.</p>
<hr />
<h3 id="heading-saliency">Saliency</h3>
<p>All of us look differently, so what caught your attention might be different from what someone else fixated on, but not <em>that</em> different — we all tend to focus on things that <strong>stand out: colour, luminance, pattern, or size.</strong> In scientific terms, whatever stands out in a scene and attracts attention has a property called <a target="_blank" href="https://www.oxfordbibliographies.com/display/document/obo-9780199772810/obo-9780199772810-0324.xml"><strong>saliency</strong></a>.</p>
<p>Saliency is often visualized using <strong>saliency maps</strong>. Intuitively, you can imagine multiple people looking at the same image and each fixation increasing a kind of “attention counter” around where it landed.</p>
<p>In practice, rather than incrementing single pixels, each fixation is turned into a small, blurred <strong>Gaussian blob</strong>, because eye movements are never perfectly precise. By adding these blobs from many viewers and normalizing the result, we obtain a heatmap showing <strong>which parts of the image were most likely to attract attention</strong>.</p>
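<p>A minimal sketch of that construction, with made-up fixation coordinates on a 100x100 image:</p>

```python
import numpy as np

def saliency_map(fixations, shape, sigma=10.0):
    """Add one Gaussian blob per (x, y) fixation, then normalize to [0, 1]."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    smap = np.zeros(shape)
    for x, y in fixations:
        smap += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return smap / smap.max()

# Invented fixations from several viewers: three land near (30, 40),
# one elsewhere, so the map should peak near the cluster.
fixations = [(30, 40), (32, 38), (31, 41), (70, 60)]
smap = saliency_map(fixations, (100, 100))
```

<p>Overlaying <code>smap</code> on the image (as in Figure 2) highlights where the blobs, and hence the viewers' fixations, pile up.</p>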
<p>Go ahead and <strong>compare</strong> what regions you looked at with what other people's eyes found attractive.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764042345049/6dd5acf8-1f94-4553-9316-615bc05eaff1.png" alt class="image--center mx-auto" /></p>
<p>Figure 2. Left: saliency map. Right: the same saliency map overlaid on the original image.</p>
<hr />
<p>Saliency tells us <em>where</em> people tend to look. But that's not the whole story. You didn’t look at all these points at once. Your eyes travelled — one point at a time.</p>
<p>What you looked at formed <strong>a sequence</strong>.</p>
<h3 id="heading-scanpath">Scanpath</h3>
<p>While saliency is fairly universal, <strong>the sequence</strong> your eyes constructed while jumping from one interesting region to another is much more of <strong>a personal trait.</strong> These eye jumps are called <a target="_blank" href="https://en.wikipedia.org/wiki/Saccade"><strong>saccades</strong></a>, and the points your eyes land on are <a target="_blank" href="https://en.wikipedia.org/wiki/Fixation_(visual)"><strong>fixations</strong></a>. Alternating fixations and saccades form a <strong>scanpath</strong>.</p>
<p>Below are two examples of such scanpaths.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764040955638/300be8a8-a763-4ecc-bac8-ff94611569d1.png" alt class="image--center mx-auto" /></p>
<p>Figure 3. Examples of scanpaths from two different people on the same image.</p>
<hr />
<p>We’ve seen something similar before.</p>
<p>Recall my post on <a target="_blank" href="https://storiesinstructure.com/climbing-routes-are-graphs"><strong>climbing routes as path graphs</strong></a>. There, we looked at a graph where security bolts were nodes connected with a rope. Here we are also observing a graph, but this time <strong>every fixation is a node and every saccade is an edge</strong> joining two consecutive fixations. Unlike the debatable case of climbing routes, this time it is definitely a <strong>directed graph</strong>.</p>
<p>Interestingly, recent research shows that humans often return exactly to previous fixation locations (Kümmerer and Bethge 2021). In this light, the graph we observe is a <strong>directed graph that can include cycles</strong>.</p>
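<p>In code, the fixation sequence directly yields the nodes and directed edges, and repeated fixations reveal returns; the coordinates below are invented:</p>

```python
# Invented fixation sequence; a repeated coordinate means the eyes
# returned exactly to a previous fixation location.
fixations = [(120, 80), (200, 150), (340, 90), (200, 150), (120, 80)]

# Every fixation is a node; every saccade is a directed edge
# between consecutive fixations.
nodes = set(fixations)
saccades = list(zip(fixations, fixations[1:]))

revisits = len(fixations) - len(nodes)   # fixations landing on known nodes
print(len(nodes), len(saccades), revisits)
```

<p>Here the walk A → B → C → B → A contains the directed cycle B → C → B, exactly the kind of return-to-previous-location behaviour the research describes.</p>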
<h2 id="heading-prediction">Prediction</h2>
<p>As I wrote at the beginning, two forces drive research on visual attention: science and industry. And both care for the same reason — our gaze is not random. It is <strong>structured</strong>. It can be measured, modeled, and, with the right tools, <strong>predicted</strong>.</p>
<h3 id="heading-from-scanpaths-to-sequences">From scanpaths to sequences</h3>
<p>A scanpath isn’t just a collection of points — it’s a sequence. Each fixation depends on the ones before it, just as each word in a sentence depends on the earlier ones.</p>
<p>This is why predicting gaze is conceptually similar to how language models work:</p>
<blockquote>
<p><strong>given the context and the sequence so far, what comes next?</strong></p>
</blockquote>
<p>In text, the next element is a word.</p>
<p>In visual attention, the next element is a fixation.</p>
<h3 id="heading-from-sequences-to-spatial-graphs">From sequences to spatial graphs</h3>
<p>But unlike text, <strong>scanpaths live in space</strong>. Every fixation is a node located somewhere in this space, and the saccades your eyes make form a <strong>directed graph</strong> through this space. Predicting attention, then, becomes a graph problem:</p>
<blockquote>
<p><strong>choose the next node, given the whole scene and the path so far.</strong></p>
</blockquote>
<p>This graph-based view lets us reason about attention as a spatial process — something language models don’t have to deal with.</p>
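<p>As a toy illustration of "choose the next node", here is a first-order transition-count model over invented region labels; real scanpath models (see Further Reading) condition on the whole scene and path, not just the last fixation:</p>

```python
from collections import Counter, defaultdict

# Made-up scanpaths recorded as sequences of fixated regions.
scanpaths = [
    ["face", "text", "face", "object"],
    ["face", "object", "text"],
    ["text", "face", "object"],
]

# Count how often each region is followed by each other region.
transitions = defaultdict(Counter)
for path in scanpaths:
    for current, nxt in zip(path, path[1:]):
        transitions[current][nxt] += 1

def predict_next(region):
    """Most frequent successor of `region` in the training scanpaths."""
    return transitions[region].most_common(1)[0][0]

print(predict_next("face"))
```

<p>This is the gaze analogue of a bigram language model: crude, but it makes the "next node given the path so far" framing concrete.</p>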
<h3 id="heading-why-this-matters-for-shopping">Why this matters for shopping</h3>
<p>Now consider what this means in the real world — especially in late November, when retailers run the world’s largest attention experiment. Saliency tells you <em>what stands out</em>, but it is <strong>your scanpath</strong> that determines which products you consider first, which you compare, and which you never see.</p>
<p>Retailers care because it affects revenue, but that's only a narrow slice of what gaze enables. Our gaze is central to exploration, communication, navigation, coordination, and social connection. So, ultimately, what you look at is extremely powerful in almost every domain of life.</p>
<h2 id="heading-summary">Summary</h2>
<p><strong>Your gaze leaves a trace</strong>: a directed graph constructed as your eyes traverse from one salient point to another. This graph can be learned, modeled, and predicted.</p>
<p>And that is precisely why <strong>your attention is valuable and in high demand</strong>.</p>
<p>So go enjoy letting your eyes wander across those <strong>Christmas presents</strong>!</p>
<p>Next time, we'll see <strong>how Santa manages to deliver all of them down your chimney</strong>. Ho ho ho!</p>
<h2 id="heading-further-reading">Further Reading</h2>
<h3 id="heading-scanpath-prediction-models">Scanpath prediction models</h3>
<ul>
<li><p>Yang, Zhibo, et al. "Unifying top-down and bottom-up scanpath prediction using transformers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. <a target="_blank" href="https://openaccess.thecvf.com/content/CVPR2024/html/Yang_Unifying_Top-down_and_Bottom-up_Scanpath_Prediction_Using_Transformers_CVPR_2024_paper.html">[Article at CVPR 2024]</a></p>
</li>
<li><p>Li, Peizhao, et al. "UniAR: A Unified model for predicting human Attention and Responses on visual content." Advances in Neural Information Processing Systems 37 (2024): 106346-106369. <a target="_blank" href="https://proceedings.neurips.cc/paper_files/paper/2024/hash/bff09ce4b210b185a265c9bcd58048bb-Abstract-Conference.html">[Article at NeurIPS 2024]</a></p>
</li>
<li><p>Cartella, Giuseppe, et al. "Modeling Human Gaze Behavior with Diffusion Models for Unified Scanpath Prediction." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2025. <a target="_blank" href="https://openaccess.thecvf.com/content/ICCV2025/html/Cartella_Modeling_Human_Gaze_Behavior_with_Diffusion_Models_for_Unified_Scanpath_ICCV_2025_paper.html">[Article at ICCV 2025]</a></p>
</li>
<li><p>Kümmerer, Matthias, Matthias Bethge, and Thomas SA Wallis. "DeepGaze III: Modeling free-viewing human scanpaths with deep learning." Journal of Vision 22.5 (2022): 7-7. <a target="_blank" href="https://jov.arvojournals.org/article.aspx?articleid=2778776">[Article in Journal of Vision]</a></p>
</li>
<li><p>Kümmerer, Matthias, and Matthias Bethge. "State-of-the-art in human scanpath prediction." arXiv preprint arXiv:2102.12239 (2021). <a target="_blank" href="https://arxiv.org/abs/2102.12239">[Article on arXiv]</a></p>
</li>
</ul>
<h3 id="heading-consumer-choice">Consumer choice</h3>
<ul>
<li><p>van der Laan, Laura N., et al. "Do you like what you see? The role of first fixation and total fixation duration in consumer choice." Food Quality and Preference 39 (2015): 46-55. <a target="_blank" href="https://dspace.library.uu.nl/bitstream/handle/1874/309285/1_s2.0_S0950329314001451_main.pdf?sequence=1">[PDF]</a></p>
</li>
<li><p>Nordfält, Jens, and Carl-Philip Ahlbom. "Utilising eye-tracking data in retailing field research: A practical guide." Journal of Retailing 100.1 (2024): 148-160. <a target="_blank" href="https://www.sciencedirect.com/science/article/pii/S002243592400006X">[Article in Journal of Retailing]</a></p>
</li>
</ul>
<h3 id="heading-saliency-in-neuroscience">Saliency in Neuroscience</h3>
<ul>
<li><p>Foulsham, Tom, and Geoffrey Underwood. "What can saliency models predict about eye movements? Spatial and sequential aspects of fixations during encoding and recognition." Journal of Vision 8.2 (2008): 6-6. <a target="_blank" href="https://jov.arvojournals.org/article.aspx?articleid=2158196">[Article in Journal of Vision]</a></p>
</li>
<li><p>Veale, Richard, Ziad M. Hafed, and Masatoshi Yoshida. "How is visual salience computed in the brain? Insights from behaviour, neurobiology and modelling." Philosophical Transactions of the Royal Society B: Biological Sciences 372.1714 (2017): 20160113. <a target="_blank" href="https://royalsocietypublishing.org/doi/full/10.1098/rstb.2016.0113">[Article link]</a></p>
</li>
</ul>
<h3 id="heading-osie-dataset">OSIE Dataset</h3>
<ul>
<li>Xu, Juan, et al. "Predicting human gaze beyond pixels." <em>Journal of Vision</em> 14.1 (2014): 28-28. <a target="_blank" href="https://jov.arvojournals.org/article.aspx?articleid=2193943">[Article in Journal of Vision]</a></li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Geometry of Evidence: A DIY 3D Reconstruction]]></title><description><![CDATA[In my previous post — Geometry of Evidence — we followed how investigative journalists and algorithms reconstruct truth from fragments. This time, let’s do it ourselves.
In this short guide, you’ll run a Structure-from-Motion pipeline using Meshroom,...]]></description><link>https://storiesinstructure.com/geometry-of-evidence-diy</link><guid isPermaLink="true">https://storiesinstructure.com/geometry-of-evidence-diy</guid><category><![CDATA[DIY]]></category><category><![CDATA[photogrammetry]]></category><category><![CDATA[Stories in Structure]]></category><category><![CDATA[Structure from Motion]]></category><category><![CDATA[3D reconstruction]]></category><category><![CDATA[Computer Vision]]></category><dc:creator><![CDATA[Agata Migalska]]></dc:creator><pubDate>Tue, 28 Oct 2025 07:00:53 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1761601990222/83e96094-de7e-4424-b9fa-eb852fece8b6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In my previous post — <a target="_blank" href="https://storiesinstructure.com/geometry-of-evidence">Geometry of Evidence</a> — we followed how investigative journalists and algorithms reconstruct truth from fragments. This time, let’s do it ourselves.</p>
<p>In this short guide, you’ll run a <strong>Structure-from-Motion</strong> pipeline using <a target="_blank" href="https://github.com/alicevision/meshroom/releases">Meshroom</a>, a free and open-source photogrammetry tool that recreates 3D geometry from photographs.</p>
<p>On our agenda:</p>
<ul>
<li><p><a class="post-section-overview" href="#diy">Run a Structure-from-Motion pipeline step by step.</a></p>
</li>
<li><p><a class="post-section-overview" href="#it-doesnt-work-">Troubleshoot common issues</a> (like the “white mesh” bug).</p>
</li>
<li><p><a class="post-section-overview" href="#further-exploration">Explore further resources and projects that shaped photogrammetry</a>.</p>
</li>
</ul>
<h2 id="heading-diy">DIY</h2>
<h3 id="heading-software">Software</h3>
<p>My preferred software for photogrammetry is <a target="_blank" href="https://github.com/alicevision/meshroom/releases">Meshroom</a>. It performs the entire pipeline and is free and open source.</p>
<p>(You can find links to other software and related tutorials in the <a class="post-section-overview" href="#further-exploration">resources section</a>.)</p>
<h3 id="heading-steps">Steps</h3>
<p><strong>What you'll need:</strong> a computer with a decent GPU and at least 10 GB of free disk space.</p>
<ol>
<li><p>Download <a target="_blank" href="https://github.com/alicevision/meshroom/releases">Meshroom</a> for your operating system.</p>
</li>
<li><p>Download the photo set of <em>The Waiting</em> sculpture (≈ 610 MB).</p>
<p> <a target="_blank" href="https://drive.google.com/uc?export=download&amp;id=15DXp-VQlkurkFg5jUrQbuVZwmZGB3kjj">Link to Google Drive</a></p>
<p> Unzip it into a folder — you’ll load this into Meshroom in step 5.</p>
</li>
<li><p>Start Meshroom and select <strong>Photogrammetry</strong> from the available pipelines.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761602081961/6873d36f-7dbd-4ab2-b0bc-4430685a7f71.png" alt="Select Photogrammetry from available pipelines" class="image--center mx-auto" /></p>
<ol start="4">
<li><p><strong>Save</strong> the project.</p>
</li>
<li><p><strong>Drag and drop</strong> the folder of images.</p>
</li>
<li><p>Hit <strong>Run</strong> and grab yourself a drink. It will take a moment for the pipeline to complete.</p>
<p> Observe the colours of the pipeline’s steps — green = success, orange = in progress, blue = awaiting, red = error.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761602135812/0db48903-b50f-40e3-b495-4665145b9873.png" alt="Photogrammetry pipeline execution" class="image--center mx-auto" /></p>
<ol start="7">
<li><p>Once the pipeline completes successfully, you will see the sparse cloud.</p>
</li>
<li><p>To view the <strong>mesh</strong>, double-click the <em>Meshing</em> component. It will appear in the <em>Scene</em> menu on the right-hand side, where you can toggle it on and off. If the mesh is white or not visible, head to the <a class="post-section-overview" href="#it-doesnt-work-">It Doesn’t Work 😱</a> section.</p>
</li>
<li><p>To view the <strong>textured mesh</strong>, double-click the <em>Textured Mesh</em> component. It will also appear in the <em>Scene</em> menu. If the mesh is white or not visible, head to the <a class="post-section-overview" href="#it-doesnt-work-">It Doesn’t Work 😱</a> section.</p>
</li>
<li><p>To view the <strong>dense point cloud</strong>, go to the <em>Meshing</em> component, scroll down to <em>Dense Point Cloud</em>, and double-click it. It will appear in the <em>Scene</em> menu on the right-hand side, where you can toggle it on and off.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761602179763/7702855a-efb0-4bb5-bec1-c7944b695b24.png" alt="Rendering a dense point cloud" class="image--center mx-auto" /></p>
<h2 id="heading-it-doesnt-work">It Doesn’t Work 😱</h2>
<p>❗ Unfortunately, the current Meshroom release has a bug — the meshes sometimes appear completely white. <strong>The meshes are fine, but we’ll need different software to render them.</strong></p>
<p>If your meshes are white or do not show up:</p>
<ol>
<li><p>Install <a target="_blank" href="https://www.meshlab.net/#download">MeshLab</a>.</p>
</li>
<li><p>In Meshroom, click the <em>Meshing</em> step and copy the path to your mesh.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761602214916/71681bc1-fd72-4cfc-86fc-87a0d75fa1b7.png" alt="Settings of Meshing" class="image--center mx-auto" /></p>
<ol start="3">
<li>In MeshLab, go to <em>File</em> → <em>Import Mesh</em>, and paste the path you just copied from Meshroom.</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761602247471/ef3460e9-005f-4d05-8d50-9e0673b5f1b9.png" alt="MeshLab mesh import" class="image--center mx-auto" /></p>
<ol start="4">
<li>In Meshroom, click the <em>Texturing</em> step and change the <em>File Type</em> to <strong>PNG</strong>. Copy the path to your <em>Mesh</em>.</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761602272834/5c6caaa9-8c48-42ff-b883-a377a8963880.png" alt="Settings of Texturing" class="image--center mx-auto" /></p>
<ol start="5">
<li><p>Recompute the <em>Texturing</em> step.</p>
</li>
<li><p>Once it completes, go back to MeshLab, choose <em>File</em> → <em>Import Mesh</em>, and paste the path you just copied.</p>
</li>
</ol>
<h2 id="heading-further-exploration">Further Exploration</h2>
<p>If you’ve made it this far — <strong>congratulations, you’ve built your first reconstruction</strong> 🎉</p>
<p>If you’d like to go deeper, below are a few excellent resources worth exploring.</p>
<h3 id="heading-computer-vision-theory">Computer Vision Theory</h3>
<ul>
<li><p>Szeliski, Richard. <em>Computer Vision: Algorithms and Applications.</em> Springer Nature, 2022. <a target="_blank" href="https://szeliski.org/Book">Free download</a>. See Section 11.4: Multi-frame Structure from Motion.</p>
</li>
<li><p>A 4-minute <a target="_blank" href="https://www.youtube.com/watch?v=JlOzyyhk1v0&amp;list=PL2zRqk16wsdoYzrWStffqBAoUY8XdvatV&amp;index=8">video explaining the problem solved by Structure-from-Motion</a> (from <em>First Principles of Computer Vision</em>).</p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/watch?v=MUadR35FFqk">Lecture on Structure-from-Motion</a> — deriving a sparse point cloud and calibrated cameras (CVRP Lab, NUS).</p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/watch?v=OpZs7kfjFPA">Lecture on Multi-View Stereo</a> — algorithms that estimate pixel-wise depth to densify the point cloud (CVRP Lab, NUS).</p>
</li>
<li><p>Two classic surface-reconstruction algorithms: <a target="_blank" href="https://hhoppe.com/poissonrecon.pdf">Poisson Surface Reconstruction</a> and the <a target="_blank" href="http://mesh.brown.edu/taubin/pdfs/bernardini-etal-tvcg99.pdf">Ball-Pivoting Algorithm</a>.</p>
</li>
</ul>
<h3 id="heading-projects-that-changed-the-landscape">Projects that Changed the Landscape</h3>
<ul>
<li><a target="_blank" href="https://cacm.acm.org/research/building-rome-in-a-day"><strong>Building Rome in a Day</strong></a> — a University of Washington project that showed it was possible to reconstruct an entire city from thousands of publicly shared tourist photos.</li>
</ul>
<h3 id="heading-between-lectures-and-tutorials">Between Lectures and Tutorials</h3>
<ul>
<li>Excellent <a target="_blank" href="https://www.youtube.com/watch?v=iJTqlb7gsWY&amp;list=PLYqCeHIaz7Pi2jpqsROsk064vmOsMPz9v">video on Topographic Point Clouds and Structure from Motion</a> by Ramon Arrowsmith (<a target="_blank" href="https://www.opentopography.org">OpenTopography</a>).</li>
</ul>
<h3 id="heading-software-tutorials-and-documentation">Software Tutorials and Documentation</h3>
<ul>
<li><p><a target="_blank" href="https://alicevision.org/#photogrammetry">AliceVision Photogrammetry Pipeline</a> — documentation describing each step and the algorithms behind it.</p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/watch?v=tCsnhsvGRio&amp;list=PLYqCeHIaz7Pi2jpqsROsk064vmOsMPz9v">Faraglione with Agisoft Metashape</a> (by Chelsea Scott) — introduces <a target="_blank" href="https://portal.opentopography.org/dataspace/dataset?opentopoID=OTDS.102018.32633.1">a drone photo dataset from Vulcano Island, Italy</a> you can also process in Meshroom.</p>
</li>
<li><p><a target="_blank" href="https://www.realityscan.com/en-US/uses">RealityScan</a> (formerly RealityCapture) — now free for individuals earning under $1 million USD per year. What’s remarkable about RealityScan is that it connects directly to the <strong>Unreal Engine</strong> ecosystem, letting you bring your 3D scans straight into interactive environments and games.</p>
<ul>
<li><p><a target="_blank" href="https://dev.epicgames.com/community/learning/tutorials/W4MR/realityscan-realitycapture-to-unreal-engine-beginner-s-guide-to-photogrammetry-workflow">RealityCapture to Unreal Engine: Beginner’s Guide</a></p>
</li>
<li><p><a target="_blank" href="https://dev.epicgames.com/community/learning/talks-and-demos/PYBZ/realityscan-mastering-photogrammetry-crafting-3d-models-with-realitycapture-unreal-fest-2024">Crafting 3D Models with RealityCapture – Unreal Fest Prague 2024</a></p>
</li>
</ul>
</li>
</ul>
<h2 id="heading-next-investigation">Next Investigation</h2>
<p>Every reconstruction is a small investigation — a way of recovering structure and understanding how things connect.</p>
<p>Next time, we'll put our own <strong>brain</strong> under scrutiny: we'll follow our eyesight and observe how picky, and how seemingly disordered, it is.</p>
<p>In other words, we'll look into the mechanisms of <strong>visual attention</strong>.</p>
]]></content:encoded></item><item><title><![CDATA[Geometry of Evidence]]></title><description><![CDATA[Scene of the Reconstruction
Have you watched the Bellingcat film? In case you haven’t, it follows a group of investigative journalists who uncover what really happened during the 2017 Atarib bombing in Syria. As tragic as the event was, their work is...]]></description><link>https://storiesinstructure.com/geometry-of-evidence</link><guid isPermaLink="true">https://storiesinstructure.com/geometry-of-evidence</guid><category><![CDATA[Stories in Structure]]></category><category><![CDATA[Structure from Motion]]></category><category><![CDATA[Computer Vision]]></category><category><![CDATA[photogrammetry]]></category><category><![CDATA[3D reconstruction]]></category><category><![CDATA[graph theory]]></category><category><![CDATA[investigative-journalism]]></category><dc:creator><![CDATA[Agata Migalska]]></dc:creator><pubDate>Tue, 21 Oct 2025 06:00:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760914748024/5458e1c2-99b8-4543-8dd1-ff5899618f1b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-scene-of-the-reconstruction">Scene of the Reconstruction</h2>
<p>Have you watched the <a target="_blank" href="https://bellingcatfilm.com">Bellingcat film</a>? In case you haven’t, it follows a group of investigative journalists who uncover what really happened during the <a target="_blank" href="https://www.bellingcat.com/news/mena/2017/12/22/targeting-civilians-public-market-al-atarib">2017 Atarib bombing in Syria</a>. As tragic as the event was, their work is truly astonishing. They managed to create a <strong>time-aware 3D reconstruction of the street</strong> from <strong>videos and photos</strong> recorded by local people during the attack.</p>
<p>That was mind-blowing. Don’t we need 3D scanners or LiDAR to do that? Can we really just stitch together a handful of imperfect photos to recover the geometry of an object — or of <strong>a city</strong>?</p>
<p>To see what I mean, explore these <a target="_blank" href="https://arij.net/investigations/gaza-project2/en/gaza-destruction-drone-journalists-3d/index.html">3D models of the recently destroyed Al-Shati and Jabalia districts in Gaza</a>, also produced by Bellingcat.</p>
<p><a target="_blank" href="https://arij.net/investigations/gaza-project2/en/gaza-destruction-drone-journalists-3d/index.html"><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760914824042/531542f2-880b-4124-845c-16919afe3db5.png" alt="Al-Shati" class="image--center mx-auto" /></a></p>
<p>Figure 1: Screenshot of the Al-Shati 3D model from <a target="_blank" href="https://arij.net/investigations/gaza-project2/en/gaza-destruction-drone-journalists-3d/index.html">"Killing the Journalist Won’t Kill the Story" — Documenting the Destruction in Gaza From the Sky</a>.</p>
<hr />
<p>Mind you, I watched the Bellingcat film seven years ago. Technology has evolved: <strong>AI models</strong> have become remarkably good at <strong>guessing plausible 3D forms from just a few shots</strong>, especially for familiar object classes. But they still <strong>struggle to reconstruct arbitrary objects or large scenes faithfully</strong>.</p>
<p>Today, I’d like to invite you to join me in an <strong>investigation</strong> — to see <strong>whether we can preserve one valuable thing in its three-dimensional form</strong>, and <strong>whether there’s something curious about the structure</strong> we uncover.</p>
<p><strong>In other words, let’s see how far curiosity and a few photographs can take us.</strong></p>
<h2 id="heading-investigators-toolbox">Investigator's Toolbox</h2>
<p>How do you rebuild a shape when all you have are photos? Think of this as the first step in any investigation: <strong>draw what you know, and see what’s missing</strong>.</p>
<h3 id="heading-sketching-the-truth">Sketching the Truth</h3>
<p>Let’s have a look at a photo of <em>The Waiting</em> sculpture to guide our thinking.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760914927887/62d74858-da6a-4c93-a7f1-e5d103697f1c.jpeg" alt="The Waiting sculpture (1979)" class="image--center mx-auto" /></p>
<p>Figure 2: <em>The Waiting</em> sculpture in Wrocław, Poland.</p>
<hr />
<p>From a single viewpoint, the sculpture is flat. Since we already have an idea of what people and armchairs look like, we can make an informed guess about its geometry — and this is exactly what AI models do: they make an informed guess based on the chairs and humans they were trained on.</p>
<p>To improve our certainty that the sculpture’s <strong>structure</strong> is as we imagine it, we would need more photos. Ideally, we would put our camera into <strong>motion</strong> and take photos from various angles and levels.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760914977747/0029d921-24fd-4520-ae82-3929ab0c206f.png" alt="The Waiting from various perspectives" class="image--center mx-auto" /></p>
<p>Figure 3: <em>The Waiting</em> from various viewpoints.</p>
<hr />
<p>Even if you’ve never seen this sculpture in the real world, you can form a pretty good idea of how it looks by analyzing this handful of photos.</p>
<p>We can even follow <strong>features</strong> — or simply <strong>characteristic points</strong> — of the sculpture as they move through the photos. For instance, look at the right elbow (yellow dot) and the left knee (red dot) of the woman, and observe how their visibility, locations, and distance change as the camera moves.</p>
<p>These are just two sample points; nothing prevents us from choosing more: the ear, the nose, a finger, a specific hair “bubble,” the arms of the armchair, bumps on the back of the armchair, and so on. For each pair of features, we can again investigate their visibility, location, and distance for each camera position.</p>
<p>That gives us a lot of constraints — a lot of equations. And then it’s like <strong>solving a Sudoku puzzle</strong>: the solution tells us that <strong>the knee has to be exactly here and the elbow exactly there — otherwise, the constraints wouldn’t be satisfied and the whole puzzle would be wrong</strong>.</p>
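<p>The "Sudoku" step can be sketched in a few lines of Python with NumPy. The two cameras below are hypothetical (identity intrinsics, a baseline along the x-axis, invented coordinates chosen purely for illustration): each observation of a feature contributes two linear constraints on its 3D position, and the null vector of the stacked constraint matrix is the only point that satisfies them all.</p>

```python
import numpy as np

# Two toy pinhole cameras, P = [R | t] with identity intrinsics (made-up numbers).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # camera shifted along x

X_true = np.array([1.0, 2.0, 5.0, 1.0])  # a homogeneous 3D point (e.g. "the knee")

def project(P, X):
    """Project a homogeneous 3D point into a camera's image plane."""
    x = P @ X
    return x[:2] / x[2]

# What each camera actually "sees".
x1, x2 = project(P1, X_true), project(P2, X_true)

def triangulate(P1, x1, P2, x2):
    """Recover the 3D point from two observations (DLT triangulation).

    Each image observation gives two linear equations; the point that
    satisfies all of them is the null vector of the stacked matrix A.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]           # null vector = the only consistent solution
    return X / X[3]      # back to homogeneous form with w = 1

X_est = triangulate(P1, x1, P2, x2)
```

Structure-from-Motion repeats this kind of solve for every matched feature across every camera pair, while also refining the camera positions themselves.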
<p>Once we solve the puzzle, we obtain a <strong>point cloud</strong> of <strong>features</strong> extracted from the photos. As if we were Claude Monet, <strong>sketching the sculpture in impressionist dots</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760915012311/a0fd4e34-defd-4e3f-857f-b635ebfadb27.png" alt="The Waiting sparse point cloud" class="image--center mx-auto" /></p>
<p>Figure 4: <em>The Waiting</em> sparse point cloud.</p>
<hr />
<p><strong>So far, we've <em>drafted the truth</em> and established the <em>key facts</em>. What comes next is the core of an investigator's job — we need to <em>connect the dots</em></strong>.</p>
<h3 id="heading-refinement">Refinement</h3>
<p>A sparse point cloud is truth reduced to coordinates — <strong>a graph without edges</strong>. Enough to prove existence, not enough to reveal form.</p>
<p>The next step is simple: <strong>connect the nodes</strong>.</p>
<p>At this stage, the process is straightforward. We look at known facts — our <em>features</em>, our <em>characteristic points</em> — and they serve as anchors (undeniable truth). Between those anchors lie pixels, and we can estimate where in space each of those pixels falls. That’s how we <strong>densify the point cloud</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760915036476/52e884f0-e6de-4f1f-a62f-3157ee8a6b2c.png" alt="The Waiting dense point cloud" class="image--center mx-auto" /></p>
<p>Figure 5: <em>The Waiting</em> dense point cloud.</p>
<hr />
<p>Every 3D point also shares a spatial relation with points beside it — they are <strong>spatial neighbors</strong>. In our graph, that relation is represented by an <strong>edge</strong>. Curiously, we are not interested in pairs of neighbors, but in <strong>trios</strong>. We aim to discover <strong>small communities of three mutual neighbors</strong> and connect them in a triangle.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760915079981/134c6af4-5a1e-4520-97b6-1f363e06b89e.png" alt="Closer look at the knee of the The Waiting sculpture mesh" class="image--center mx-auto" /></p>
<p>Figure 6: Closer look at the knee of the <em>The Waiting</em> sculpture mesh.</p>
<hr />
<p>From the <strong>standpoint of graph theory</strong> — this triangle is a small, three-node cycle. So the <strong>refined picture of truth is a graph filled with cycles</strong>, each cycle formed by three nodes, and every node belongs to at least one such cycle.</p>
<p>From the <strong>standpoint of computer graphics</strong> — this triangle is a mesh face. So the <strong>refined picture of truth is a mesh</strong> that can be rendered, 3D-printed, and stored on a hard drive.</p>
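<p>Both standpoints can be seen in a few lines of Python. The face indices below are invented for illustration, but the relationship they demonstrate is general: a mesh stored as triangles is, read as a graph, a set of three-node cycles whose union covers every node.</p>

```python
# A toy mesh: each face is a trio of mutual neighbors (a 3-node cycle).
faces = [(0, 1, 2), (1, 2, 3), (2, 3, 4)]  # three triangles sharing edges

# Derive the underlying graph: every face contributes three edges.
# frozenset makes the edges undirected, so shared edges are stored once.
edges = set()
for a, b, c in faces:
    edges |= {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))}

# Every node belongs to at least one face, i.e. at least one 3-cycle.
nodes_in_faces = {n for face in faces for n in face}
```

Three triangles would naively give nine edges, but the shared ones collapse, leaving seven: that sharing is exactly what stitches isolated points into a continuous surface.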
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760915058188/c16ca96b-eeb0-4e36-901f-d3a8f4faebe9.png" alt="The Waiting as a textured mesh" class="image--center mx-auto" /></p>
<p>Figure 7: <em>The Waiting</em> as a textured mesh.</p>
<hr />
<h2 id="heading-summary">Summary</h2>
<p>We’ve followed the path of an investigation.</p>
<p>Starting with fragments — photographs, viewpoints, features — we reconstructed structure.</p>
<p>At its core was <strong>Structure-from-Motion</strong>: the algorithm that relates features and cameras, solving the optimization puzzle that reveals both a <strong>3D point cloud</strong> and the <strong>positions and orientations of the cameras</strong>.</p>
<p>Around it, a sequence of other techniques helped refine the result: <strong>feature extraction</strong>, <strong>feature matching</strong>, <strong>Multi-View Stereo</strong> for densification, <strong>surface reconstruction</strong> for meshing, and <strong>texture mapping</strong> for color.</p>
<p>Together, they turned a set of flat images into a geometry of truth — a structure that lets us hold a fragment of reality in digital form.</p>
<p>That same logic is used by investigative journalists to recover what war or disaster tried to erase. But anyone can use it — to preserve a place, an object, or simply to see the hidden geometry of the world.</p>
<p>And as every investigation ends, a few new questions arise:</p>
<ul>
<li><p>What happens when we hand this process to AI?</p>
</li>
<li><p>How do we scale it up, and how do we reference our geometry to the real world?</p>
</li>
<li><p>What would it mean to look not just at what is <em>there</em>, but at what we <em>notice</em>?</p>
</li>
</ul>
<p>Because in the end, every reconstruction begins with <strong>attention</strong> — where we choose to look, and what we decide to connect.</p>
<p><strong>Before we turn to perception, in my next post we’ll take a brief detour into practice — a DIY reconstruction you can try yourself.</strong></p>
<p><strong>And after that, we’ll pay our attention to visual attention itself.</strong></p>
]]></content:encoded></item><item><title><![CDATA[Climbing Routes Are Graphs]]></title><description><![CDATA[Introduction
I understand your confusion. We were supposed to talk about graphs, and here I am, talking about rock climbing and climbing routes instead. I am a climber. Be it indoors or outdoors, be it rocks or mountains — doesn’t matter too much. Bu...]]></description><link>https://storiesinstructure.com/climbing-routes-are-graphs</link><guid isPermaLink="true">https://storiesinstructure.com/climbing-routes-are-graphs</guid><category><![CDATA[graphs]]></category><category><![CDATA[graph theory]]></category><category><![CDATA[climbing]]></category><category><![CDATA[rock climbing]]></category><category><![CDATA[Mathematics]]></category><category><![CDATA[storytelling]]></category><dc:creator><![CDATA[Agata Migalska]]></dc:creator><pubDate>Tue, 30 Sep 2025 06:00:25 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757971944325/ec9ba348-1c85-42da-b9eb-d3366a9537a7.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>I understand your confusion. We were supposed to <strong>talk about graphs</strong>, and here I am, <strong>talking about rock climbing and climbing routes instead</strong>. I am a climber. Be it indoors or outdoors, be it rocks or mountains — doesn’t matter too much. But I’m not going to talk about climbing <em>per se</em>. Instead, let me show you <strong>where you can find graphs in this sport</strong>.</p>
<h2 id="heading-from-rock-to-graph">From Rock to Graph</h2>
<p>It is still (kinda) warm outside, so let’s enjoy the sunshine and inspect this rock below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759182596846/e6566c8a-0745-4c91-9454-5286684e015e.png" alt="Route “Pesce d'Aprile”, Massone - Sector A, Arco, Italy." class="image--center mx-auto" /></p>
<p>Figure 1: Route “<strong>Pesce d'Aprile”,</strong> Massone - Sector A, Arco, Italy. More show-off photos available on the <a target="_blank" href="https://www.planetmountain.com/en/crags/arco-massone.html">Planet Mountain website</a>.</p>
<hr />
<p>I gave you no other choice but to notice the graph immediately — you can easily spot the red nodes and lime edges.</p>
<p><strong>The red nodes are where metal bolts are located</strong>. These bolts are part of the climber’s safety system: you clip the rope into them using a climbing quickdraw. In principle, if you accidentally fall, the last bolt clipped into is what you'll be hanging from.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1759097689732/300380db-ff2f-40cf-ab22-4025c2003da2.png" alt="Climbing security system - bolts, quickdraws, and rope." class="image--center mx-auto" /></p>
<p><strong>Figure 2: Climbing security system — bolts, quickdraws, and rope.</strong></p>
<hr />
<p>Bolts are <strong>conceptually connected by the imaginary path</strong> that goes from one bolt to the next. This path gives the climber an idea of where the route goes and guides them on which direction to take next.</p>
<p>When a climber ascends the route, the rope connects each bolt in sequence — and the imaginary graph becomes <strong>traversed</strong>.</p>
<p>In that moment, the graph stops being abstract. You’re not just observing the structure — you're <strong>moving through it, edge by edge</strong>.</p>
<h2 id="heading-recap-on-graphs">Recap on Graphs</h2>
<p>Let’s look back at what we’ve talked about in the <a target="_blank" href="https://hashnode.com/post/cmfm59d42000202kv80md1w67">introductory post on graphs</a>. What can we tell about this graph?</p>
<p>We observe a simple structure: two end nodes have <strong>degree 1</strong>, and all the nodes in between have <strong>degree 2</strong>. It's a textbook example of a <strong>path graph</strong> (<a target="_blank" href="https://doi.org/10.1201/9780203490204">Gross and Yellen, 2003, p.18</a>). A path graph is a type of tree, and all the nodes and edges can be laid out on a straight line — like the rope a climber takes from the bottom to the top.</p>
<p>Since there are exactly two nodes with odd degree, <strong>there exists an Euler path in this graph</strong> (see the <a target="_blank" href="https://hashnode.com/post/cmfm59d42000202kv80md1w67">introductory post on graphs</a>!) That's good news, because Euler's theorem proves that <strong>getting to the top is doable!</strong> (Now it is “merely” a matter of one’s skills…)</p>
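<p>You can verify Euler’s condition in a few lines of plain Python. The bolt count below is invented for illustration: for a connected graph, an Euler path exists exactly when the number of odd-degree nodes is 0 or 2.</p>

```python
# The bolted route as a path graph: nodes are bolts, edges the segments between them.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]  # six bolts, five segments

# Degree of each node = number of incident edges.
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

# Euler's condition for a connected graph: an Euler path exists
# iff the number of odd-degree nodes is 0 or 2.
odd_nodes = [n for n, d in degree.items() if d % 2 == 1]
has_euler_path = len(odd_nodes) in (0, 2)
```

Here the two odd-degree nodes are the ends of the route, which is reassuring: any Euler path must start at one of them and finish at the other, i.e. at the bottom and the top of the climb.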
<p>Finally, is it a <strong>directed graph</strong>? It’s a bit of a philosophical question. Typically, one would go up, in which case — yes. But you can't really prevent anyone from going down if they wish. Unusual? Sure. Impossible? Definitely not.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>When we map a climbing route, we're not just building a graph — we're building a <strong>specific kind</strong> of graph: a <strong>path graph</strong>.</p>
<p>You could say that climbing is about <strong>traversing an abstract graph and turning it into a concrete one</strong> — from one end of the path to the other.</p>
<p>Finally, you’ve probably noticed that the graphs discussed in this post are <strong>sparse</strong>, with few nodes and few edges. Next time, we’ll go to extremes and <strong>strip away all the edges</strong> 😱 We’ll explore <strong>point clouds</strong> and how to generate them using <strong>Structure from Motion.</strong></p>
]]></content:encoded></item><item><title><![CDATA[What is Stories in Structure about?]]></title><description><![CDATA[This blog explores how we see and understand structure — especially the structure of the world around us. It begins with data collected from sensors like cameras, LIDAR, and positioning and motion sensors, then modeled as graphs, revealing how things...]]></description><link>https://storiesinstructure.com/what-is-stories-in-structure-about</link><guid isPermaLink="true">https://storiesinstructure.com/what-is-stories-in-structure-about</guid><category><![CDATA[Visual Attention]]></category><category><![CDATA[graphs]]></category><category><![CDATA[Geospatial]]></category><category><![CDATA[Computer Vision]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[climbing]]></category><category><![CDATA[genai]]></category><dc:creator><![CDATA[Agata Migalska]]></dc:creator><pubDate>Tue, 16 Sep 2025 06:00:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757868805912/ff86442b-e032-4374-9c4d-538d97f0e904.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This blog explores how we see and understand <strong>structure</strong> — especially the structure of the world around us. It begins with <strong>data collected from sensors</strong> like cameras, LIDAR, and positioning and motion sensors, then <strong>modeled as graphs, revealing how things are connected, positioned, or navigated.</strong></p>
<p>From there, I dive into computational methods for analyzing these structures — how we <strong>model</strong> them, <strong>extract meaning</strong>, and sometimes even use them to build <strong>augmented reality experiences</strong>. You'll find posts on everything from <strong>Structure from Motion</strong>, <strong>LIDAR mapping</strong>, and <strong>GNSS tracking</strong>, to <strong>visual attention modeling</strong> and the way our eyes and minds respond to the environments we move through.</p>
<p>While much of this blog is grounded in <strong>geospatial systems</strong> (yes, feet on the ground — pun intended), I’ll occasionally take you beyond that scope to explore more abstract structures: <strong>airways in the sky</strong>, <strong>climbing routes</strong>, even <strong>family trees</strong>.</p>
<p><strong>Whether it’s visible or abstract, if it has structure, it has a story - and I’ll try to tell it.</strong></p>
<h2 id="heading-how-this-blog-works">How this blog works</h2>
<p>I aim to publish a new post <strong>every 2–3 Tuesdays</strong> — sometimes a deeper conceptual piece, sometimes a more hands-on follow-up where you can explore things yourself in Python or using open data.</p>
<p>Posts are grouped into themes such as:</p>
<ul>
<li><p><strong>Graphs &amp; Graph Neural Networks (GNNs)</strong></p>
</li>
<li><p><strong>3D Reconstruction</strong></p>
</li>
<li><p><strong>GNSS &amp; Positioning</strong></p>
</li>
<li><p><strong>Climbing</strong></p>
</li>
<li><p><strong>Visual Attention &amp; Perception</strong></p>
</li>
<li><p><strong>Graphs × GenAI</strong></p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1758149741356/f24579c2-1f42-4fdc-9f4e-999e46169c03.png" alt="Stories in Structure: post categories. Graphs and GNNs, 3D Reconstruction, GNSS and Positioning, Climbing, Visual Attention and Perception, Graphs × GenAI." class="image--center mx-auto" /></p>
<p>You'll see these tags on posts to help you navigate across topics or follow the threads that interest you most.</p>
<p>👉 Curious? Start with <a target="_blank" href="https://storiesinstructure.com/how-to-solve-the-house-puzzle">How to Solve the House Puzzle</a>.</p>
<hr />
<p><em>Thanks for being here. If you’d like to follow along, subscribe or check back every few Tuesdays. I hope you’ll find ideas here that feel both familiar and a little unexpected.</em></p>
]]></content:encoded></item><item><title><![CDATA[How to Solve the House Puzzle]]></title><description><![CDATA[Let’s play a game. Your task is to draw a house by connecting the dots without lifting your pen (or pencil, or mouse). You can visit a dot multiple times, but you are allowed to draw an edge only once!

Depending on which dot you choose as your start...]]></description><link>https://storiesinstructure.com/how-to-solve-the-house-puzzle</link><guid isPermaLink="true">https://storiesinstructure.com/how-to-solve-the-house-puzzle</guid><category><![CDATA[Graphs and GNNs]]></category><category><![CDATA[graph theory]]></category><category><![CDATA[Computer Science]]></category><category><![CDATA[puzzles]]></category><category><![CDATA[graphs]]></category><category><![CDATA[Mathematics]]></category><dc:creator><![CDATA[Agata Migalska]]></dc:creator><pubDate>Tue, 16 Sep 2025 06:00:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757974753069/1af1d730-188d-4e3e-bcf0-1eae92602b52.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Let’s play a game.</strong> Your task is to <strong>draw a house by connecting the dots without lifting your pen</strong> (or pencil, or mouse). You can visit a dot multiple times, but you are allowed to draw an edge <strong>only once</strong>!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757884971977/4d6bbfa5-2fe1-4d06-9dfd-b12bb4d5cc9e.png" alt class="image--center mx-auto" /></p>
<p>Depending on which dot you choose as your starting point — you’ll either succeed or fail. <strong>But why?</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757895723780/55b5a889-4710-406a-ad9f-8d8158b60c58.gif" alt="Animation of successful and unsuccessful house drawing" class="image--center mx-auto" /></p>
<p>What we’re doing here is <strong>traversing a graph</strong>. This is our <strong>house graph</strong>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757885286144/bf64e838-3fab-4183-b464-20b0c06371a6.png" alt="The house graph" class="image--center mx-auto" /></p>
<p>A <strong>graph</strong> is a structure made of <strong>nodes</strong> (also called <em>vertices</em>) and <strong>links</strong> (or <em>edges</em>), with each link connecting a pair of nodes. In this case, we have five nodes: A, B, C, D, and E, and eight edges: A-B, A-D, A-E, B-C, B-D, B-E, C-D, D-E.</p>
<p>What’s more, the house graph is an <strong>undirected graph</strong>, meaning you can traverse each link in either direction — from A to B or B to A.</p>
<p>Now, back to the question: <strong>why does starting from node A let you draw the house in one go, but starting from B doesn’t?</strong></p>
<p>To answer this, we need to look at something called <strong>node degree</strong> — the number of links connected to each node in an undirected graph.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757885884369/f54c0f37-8f19-4ac3-b836-8aa2518a0c29.png" alt="The house graph with node degrees" class="image--center mx-auto" /></p>
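<p>Since this blog promises hands-on follow-ups in Python, here is a minimal sketch (plain Python, no graph library needed) that encodes the house graph as an edge list and counts node degrees. The node names and edge list simply transcribe the picture above:</p>

```python
from collections import Counter

# The house graph: each undirected edge listed once.
edges = [("A", "B"), ("A", "D"), ("A", "E"),
         ("B", "C"), ("B", "D"), ("B", "E"),
         ("C", "D"), ("D", "E")]

# A node's degree is the number of edges touching it,
# so every edge adds 1 to the degree of both endpoints.
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

print(dict(sorted(degree.items())))
# {'A': 3, 'B': 4, 'C': 2, 'D': 4, 'E': 3}
```

<p>Two nodes (A and E) come out odd, just as in the picture.</p>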
<p>Back in 1735, <strong>Leonhard Euler</strong> showed that in order to traverse such a graph in a single go — without retracing steps — you must start at a node with an <strong>odd degree</strong> and end at the other node with an <strong>odd degree</strong>. All remaining nodes must have an <strong>even degree</strong>. (If every node has an even degree, you can start anywhere and you will end where you began.) You can read his original proof <a target="_blank" href="https://scholarlycommons.pacific.edu/euler-works/53">here</a>.</p>
<p>Since there are exactly two nodes with an odd degree (A and E) and all other nodes (B, C, and D) have even degrees:</p>
<ul>
<li><p>There <strong>is</strong> an Eulerian path in the house graph, meaning it is possible to draw all edges in one go.</p>
</li>
<li><p>You must either <strong>start drawing from A and end at E</strong>, or <strong>start at E and end at A</strong>.</p>
</li>
</ul>
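<p>Euler’s rule is easy to turn into code. This self-contained sketch (same edge list as before, counting odd-degree nodes; it assumes the graph is connected, which the house graph is) reports where a single-stroke drawing can start:</p>

```python
from collections import Counter

edges = [("A", "B"), ("A", "D"), ("A", "E"),
         ("B", "C"), ("B", "D"), ("B", "E"),
         ("C", "D"), ("D", "E")]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Euler's criterion (for a connected graph): a single-stroke
# drawing exists iff the number of odd-degree nodes is 0 or 2.
odd = sorted(node for node, d in degree.items() if d % 2 == 1)

if len(odd) == 0:
    print("Eulerian circuit: start anywhere, you end where you started.")
elif len(odd) == 2:
    print(f"Eulerian path: start at {odd[0]} and end at {odd[1]}, or vice versa.")
else:
    print("No single-stroke drawing exists.")
```

<p>For the house graph this prints the A-to-E rule we just derived.</p>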
<h2 id="heading-check-your-knowledge-bridges-of-konigsberg">Check your knowledge: <strong>Bridges of Königsberg</strong></h2>
<p>To test your understanding, try this rule on the famous <strong>Bridges of Königsberg</strong>. This is the problem that led Euler to formulate the rule in the first place. His question:</p>
<blockquote><p><strong><em>Can you walk through the city, crossing each bridge exactly once?</em></strong></p></blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757888989128/90ed35e7-a4d4-47f4-a759-7dd0b9733301.png" alt="Bridges of Königsberg. Image by Bogdan Giuşcă - Public domain (PD), CC BY-SA 3.0." class="image--center mx-auto" /></p>
<p>When we translate the city into a graph, pieces of land become nodes and bridges become edges, like so:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757890121273/de783e05-a0c3-486b-994e-a9db2dff9bcb.png" alt="Bridges of Königsberg with named nodes (pieces of land)" class="image--center mx-auto" /></p>
<p><strong>Let me know in the comments what the node degrees are, and whether you found the solution to Euler’s question!</strong></p>
<h2 id="heading-summary">Summary</h2>
<p>That was the shortest intro to graphs I could come up with :)</p>
<p>We've touched on:</p>
<ul>
<li><p><strong>Nodes</strong> (vertices)</p>
</li>
<li><p><strong>Edges</strong> (links)</p>
</li>
<li><p><strong>Undirected graphs</strong>, where movement is allowed in both directions</p>
</li>
<li><p><strong>Node degree</strong>, and how it affects traversal</p>
</li>
<li><p>Euler’s recipe for drawing a graph in a single stroke — the <strong>Eulerian path</strong>.</p>
</li>
</ul>
]]></content:encoded></item></channel></rss>