Open Metaverse Hackathon · March 7–8, 2026 · Frontier Tower, SF · PLACES Track

ROOM

A semantic memory layer for the spatial internet. Places hold memory. Multiple knowledge graphs coexist without collapsing.

Place Event Perspective Artifact

// 01

The Invitation

Places already have memory. Temples, caves, libraries, streets — they remember through stories and architecture. The spatial internet is being built right now. But one piece is missing: shared memory for places.

ROOM is an experiment in co-memory — a way for people to layer insights, artifacts, and perspectives onto spaces themselves. Not annotation. Not tagging. Something closer to how we actually remember: through context, through emotion, through place.

ROOM is designed to hold multiple ontologies simultaneously. Different knowledge graphs can coexist in the same spatial environment without collapsing into each other. Each community brings its own schema, its own vocabulary, its own relational logic. The ROOM doesn't arbitrate between them. It holds the space where they meet.

core premise
Scan a space
Anchor an event
Attach perspectives
Navigate spatially
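The four-step premise can be sketched as plain graph operations. This is a hypothetical model for illustration, not the shipped room.js API — function names and node shapes are assumptions:

```javascript
// Sketch of the core premise as a plain node/edge graph (illustrative names).
const world = { nodes: [], edges: [] };

// 1. Scan a space: a Place node anchors everything else.
function scanPlace(id, splatUrl) {
  const place = { id, type: 'place', splat: splatUrl };
  world.nodes.push(place);
  return place;
}

// 2. Anchor an event to that Place with a timestamp.
function anchorEvent(id, placeId, time) {
  const evt = { id, type: 'event', time };
  world.nodes.push(evt);
  world.edges.push({ from: id, to: placeId, label: 'anchored_at' });
  return evt;
}

// 3. Attach a perspective carrying its own ontology.
function attachPerspective(id, eventId, ontology) {
  const pov = { id, type: 'perspective', ontology };
  world.nodes.push(pov);
  world.edges.push({ from: id, to: eventId, label: 'observes' });
  return pov;
}

// 4. Navigate spatially: everything anchored at a Place.
function atPlace(placeId) {
  const eventIds = world.edges
    .filter(e => e.label === 'anchored_at' && e.to === placeId)
    .map(e => e.from);
  return world.nodes.filter(n => eventIds.includes(n.id));
}
```

The point of the sketch: memory is never free-floating — every Event reaches the graph only through an `anchored_at` edge to a Place.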

// 02

The Moment We're In

The RP1 Open Metaverse Hackathon is happening right now at Frontier Tower. The framing couldn't be clearer: open standards vs. corporate silos. The spatial internet will either be built like the web — by many, for many — or locked into proprietary systems by companies that have the money, the hardware, the marketing.

The early web had pages.
The spatial internet will have places.
But places need memory.

The hackathon identifies three tracks: Tools, Places, and AI. ROOM sits at the intersection of all three — infrastructure for meaning, the semantic layer that makes spatial computing legible as human experience rather than just geometric data.

The infrastructure built in the next few years will determine how billions of people navigate digital space for the next thirty. ROOM is a contribution to making that infrastructure open, federated, and human.

// 03

The Spatial Information Stack

Spatial information organizes into four ascending layers of complexity. Most spatial tooling today operates on layers 2–4. The semantic layer — what makes geometry into meaning — is almost entirely missing from the open stack.

4 Visual Pixels & textures — photorealistic rendering, lighting, color. Gaussian splats. mature
3 Geometric Depth & volume — 3D shape, volume, spatial relationships, topology. mature
2 Physical Rules & dynamics — interactions, forces, collision, behavior. emerging
1 Semantic Meaning & function — knowledge, memory, narrative, relationships. ROOM lives here. ← ROOM

The semantic layer isn't merely additive — it's interpretively prior. Without it, every layer above remains mute. Geometry without semantics is topography without territory. A photorealistic render has no way to declare what it depicts, who was there, what it meant. ROOM provides the primitives that let the other layers speak.

Gaussian splats give you the Visual.
ROOM gives you the Meaning.

// 04

The Hippocampus Insight

The reason spatial memory is so powerful in humans isn't accidental. Your brain has an entire dedicated architecture for it — and ROOM's data model mirrors it directly.

86B
Total brain neurons
The full computational substrate of human cognition
38M
Hippocampal neurons
Dedicated to spatial navigation and episodic memory
Place cells
Fire precisely when you occupy a specific location in space
Grid cells
Fire in repeating hexagonal patterns — your internal GPS coordinate system
📍

Place Cells

Fire when you're here. Encode the specific geometry of a location so it can be recalled even when you're elsewhere. Memory as topology.

Grid Cells

Fire in repeating hexagonal grids. Create a continuous abstract coordinate system. Navigation as pure mathematics layered over experience.

Memory is not stored in content.
Memory is stored in the relationship between content and location.
This is why the method of loci works. This is why walking back into a room can recover a lost thought.

ROOM takes this biological principle seriously. By anchoring knowledge to spatial coordinates, we're not just making data spatially accessible — we're making it neurologically compatible with how humans actually remember.

// 05

A Collective Hippocampus

If the hippocampus is what lets an individual navigate remembered space, then ROOM is an attempt to build that architecture at collective scale — a shared spatial memory system for communities, for places, for the emerging spatial internet.

ROOM reduces spatial memory to four atomic types that mirror how the hippocampus encodes it:

📍 Place A spatial coordinate or region in a Gaussian splat or XR environment. The irreducible ground. Everything roots here. root anchor
Event A happening anchored to a Place with a timestamp. The basic unit of spatial memory. Events are what places remember. temporal node
👁 Perspective A viewpoint on an Event, carrying ontological metadata. Multiple Perspectives on one Event do not collapse. An architect, a sociologist, a mythologist — all simultaneously true. interpretation
📄 Artifact A crystallized output — document, media, insight, tool — produced from the above. Portable across ROOM instances. knowledge object
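A minimal sketch of the four primitives as plain objects, following the world.json v0.1 field names used in the draft schema (illustrative values, not normative):

```javascript
// The four atomic types as world.json-style nodes (sketch, v0.1 field names).
const place = { id: 'place-lobby', type: 'place', splat: './lobby.splat' };
const event = { id: 'evt-01', type: 'event', title: 'Kickoff',
                position: [1.2, 0.5, -3.8], time: '2026-03-07T12:00:00Z' };
const pov   = { id: 'pov-01', type: 'perspective', ontology: 'architectural',
                ref: 'evt-01', visibility: 'public' };
const art   = { id: 'art-01', type: 'artifact', format: 'text/markdown',
                src: './notes.md' };

// Perspectives on one Event do not collapse — they accumulate.
const perspectivesFor = (nodes, eventId) =>
  nodes.filter(n => n.type === 'perspective' && n.ref === eventId);
```

Because a Perspective points at an Event by `ref` rather than replacing it, an architect's reading and a mythologist's reading of the same moment simply coexist in the node list.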

Navigation through ROOM becomes an act of epistemic archaeology: walking through space is walking through time, through perspective, through the sedimentary layers of collective understanding accumulated at a location.

// 06

The Live Demo

ROOM is live. Three pages, zero build step — a working implementation of the semantic layer running on open web standards.

room-openmetaverse.vercel.app
Landing Spatial stack explainer, Ghost World problem, four primitives, demo flow, Frontier Tower locations index.html
Viewer Three.js 3D scene with semantic node overlays, guided 11-stop narrative tour, perspective switching viewer.html
Editor D3.js force-directed knowledge graph editor — full CRUD, JSON export, Claudesidian PKG bridge editor.html

Frontier Tower Locations

ROOM's demo is a semantic digital twin of Frontier Tower itself — four spatial captures, multiple ontological perspectives, one shared memory graph.

Location 01
Building Exterior
Street-level approach. The threshold between the city and the spatial internet.
Location 02
Main Lobby
Where builders arrive. Threshold between the street and what gets made here.
Location 03
2nd Floor Hackathon Space
Where the building is happening. Desks, whiteboards, the hum of intent.
Location 04
The UFO Room
The secret bonus level. 👽 Every building has one.

The Data Layer

room.js is a zero-dependency shared module that exposes the full ROOM API. It runs in the browser — no build step, no framework.

// room.js — zero-dependency shared data layer
ROOM.loadWorldFromURL('sample-world.json');   // Load a world
ROOM.getPlaces();                             // All Place nodes
ROOM.getEventsAtPlace('place-lobby');         // Events anchored here
ROOM.getPerspectivesForEvent('evt-01');       // Multiple viewpoints
ROOM.getPlaceContext('place-2nd-floor');      // Full context tree
ROOM.obsidianToPerspective(markdown);         // Claudesidian bridge
ROOM.exportWorld();                           // Portable JSON
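Of these calls, the Claudesidian bridge is the least obvious, so here is one way it could work: parsing minimal YAML-style frontmatter from an Obsidian note into a Perspective node. This is a sketch under assumed frontmatter keys (`ontology`, `ref`), not the shipped `obsidianToPerspective`:

```javascript
// Sketch: turn an Obsidian note into a Perspective node.
// Assumes frontmatter keys `ontology` and `ref`; the body becomes the payload.
function obsidianToPerspective(markdown, id) {
  const m = markdown.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  const meta = {};
  if (m) {
    for (const line of m[1].split('\n')) {
      const [k, ...rest] = line.split(':');
      if (k && rest.length) meta[k.trim()] = rest.join(':').trim();
    }
  }
  return {
    id,
    type: 'perspective',
    ontology: meta.ontology || 'personal',
    ref: meta.ref || null,
    visibility: 'private',          // PKG notes start private by default
    body: (m ? m[2] : markdown).trim(),
  };
}
```

Defaulting `visibility` to `"private"` matters: the bridge imports your thinking without surfacing it, which is the sovereignty stance described in section 07.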

// 07

Personal Knowledge Sovereignty

Shared memory systems have historically demanded a sacrifice: your private cognition becomes the property of the platform. ROOM refuses this bargain.

Personal PKG (Obsidian / Claudesidian)
↓ selectively shared
Shared ROOM nodes
↓ optionally public
Open spatial graph

Your thinking remains yours. You choose what surfaces, and how. The graph is federated — no central authority controls what gets remembered or how perspectives get weighted.
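The tiered flow can be sketched as a visibility filter over graph nodes. The tier names match the `visibility` values used in the demo; the helper itself is hypothetical:

```javascript
// Sketch: what a viewer at a given trust tier can see.
// Tiers widen: a shared participant also sees public nodes; the owner sees all.
const TIERS = { public: 0, shared: 1, private: 2 };

function visibleNodes(nodes, viewerTier) {
  // Nodes without an explicit visibility default to private, never public.
  return nodes.filter(n => TIERS[n.visibility ?? 'private'] <= TIERS[viewerTier]);
}
```

The defaulting direction is the design choice: an unlabeled node stays local rather than leaking into the open graph.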

🔒 Private Room

Local only. Encrypted at rest. No network exposure. visibility: "private"

local
🤝 Shared Room

Visible to invited participants. Permission-gated. ZK-provable. visibility: "shared"

federated
🌐 Public Room

Open. Verifiable. Indexed by the spatial internet. visibility: "public"

open

// 08

Epistemological Hygiene

The deepest problem with most knowledge systems isn't storage — it's perspective collapse. They treat disagreement as error. They compress multiple valid viewpoints into a single authoritative account. They lose the edges where understanding breaks down.

ROOM doesn't collapse perspectives into one.
It lets them coexist — spatially adjacent, epistemically distinct.

This is what the Nyāya epistemological tradition calls pramāṇa — legitimate sources of knowledge that don't cancel each other. Multiple valid means of knowing the same thing from different angles. The goal isn't consensus. It's navigable pluralism.

Guattari's Three Ecologies

Félix Guattari identified three interacting ecological registers that co-constitute reality. ROOM's architecture maps directly onto them:

🧠
Mental
Ecosophy of the self

The interior life, personal sense-making, individual knowledge graphs. Personal PKGs.

🕸️
Social
Ecosophy of collective life

Communities of practice, shared narratives, collaborative sensemaking. Shared ROOM nodes.

🌍
Environmental
Ecosophy of the world

The physical places themselves — the ground beneath the memories. Spatial world.

Sand Talk

Tyson Yunkaporta's Sand Talk offers an Indigenous framework that upends most Western knowledge infrastructure: knowledge doesn't reside in things — it emerges through relationships. Linear text encodes one path through an idea. A relational graph encodes all possible paths simultaneously. A spatial knowledge environment lets you inhabit the idea.

// 09

Bot Theater

ROOM's AI agents aren't interfaces — they're characters. Not chatbots with query boxes, but epistemic personas with distinct roles in the knowledge drama of a place:

Archivist
The Keeper
Captures, timestamps, and cross-references event memory. Nothing is lost.
records →
Synthesizer
The Weaver
Draws threads between disparate nodes. Surfaces unexpected resonances across the graph.
connects →
Skeptic
The Challenger
Flags contradictions, surfacing tensions rather than suppressing them.
questions →
Story
The Bard
Turns artifacts into narrative. Makes the data speakable, memorable, shareable.
narrates →

AI becomes dramatic rather than mechanical. Powered by Claude / MCP. Each persona has a perspective, a bias, a role in the epistemological ecosystem of the ROOM.
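The four personas could be expressed as plain configuration rather than hardcoded chat loops — a hypothetical sketch in which each persona emits its own Perspective node on the same Event (roles and biases paraphrase the cast above; the data shapes are assumptions):

```javascript
// Sketch: Bot Theater personas as data.
const PERSONAS = [
  { id: 'archivist',   role: 'The Keeper',     verb: 'records',
    bias: 'completeness — capture and timestamp everything' },
  { id: 'synthesizer', role: 'The Weaver',     verb: 'connects',
    bias: 'resonance — link distant nodes' },
  { id: 'skeptic',     role: 'The Challenger', verb: 'questions',
    bias: 'tension — surface contradictions, never suppress them' },
  { id: 'story',       role: 'The Bard',       verb: 'narrates',
    bias: 'memorability — turn artifacts into narrative' },
];

// Every persona observes the same Event; each yields a distinct Perspective.
function stagePersonas(eventId) {
  return PERSONAS.map(p => ({
    type: 'perspective',
    ref: eventId,
    ontology: `persona:${p.id}`,
    verb: p.verb,
  }));
}
```

Structurally, the personas are just more Perspectives — which is why adding AI to the graph needs no new node type.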

// 10

Experience Layer

ROOM's semantic graph is the foundation. The experience of walking through spatial memory is a composable stack. Each layer is independent; they compound when combined.

The strongest version: a spatial piece people can walk through, watch, and perform — place as substrate, narrative as spine, film as emotion, gesture as navigation, diffusion as transition between states of meaning.
1
Gaussian Splat + Guided Tour
Photorealistic spatial capture with clickable semantic nodes and narrative waypoints. The irreducible demo. Already a real contribution.
must ship
2
Video / Film Overlay
Spatial video anchored to Event nodes. A recording, a testimonial, a cinematic clip surfaces when you approach a coordinate. The emotional layer.
strong bonus
3
TouchDesigner Gestures
Embodied navigation — body tracking drives movement through the knowledge graph. Navigation as performance.
stretch
4
ComfyUI Diffusion Morphing
Diffusion-based transitions between splat states or video frames. Perspective-switching rendered as visual transformation.
post-hackathon

The trap is doing all four at once. TouchDesigner and ComfyUI should support the story, not become the story. A legible demo with layers 1–2 and one cinematic moment is more powerful than an art-tech fever dream that judges admire but cannot parse.

// 11

world.json

The ROOM data format is a flat, portable JSON graph — human-readable, no build step required, compatible with any spatial browser that speaks open standards. It is offered as a draft proposal, a contribution to the emerging open metaverse schema ecosystem rather than a finished standard.

// world.json v0.1 — ROOM semantic graph (draft proposal)
{
  "schema": "room/v0.1",
  "place": { "id": "frontier_tower_l2", "splat": "./scene.splat", "crs": "WGS84" },
  "nodes": [
    { "id": "evt_001", "type": "event", "title": "Kickoff", "position": [1.2, 0.5, -3.8], "time": "2026-03-07T12:00:00Z" },
    { "id": "pov_001", "type": "perspective", "ontology": "bioregional", "ref": "evt_001", "visibility": "public" },
    { "id": "art_001", "type": "artifact", "format": "text/markdown", "src": "./notes.md" }
  ],
  "edges": [{ "from": "pov_001", "to": "art_001", "label": "produced" }],
  "tour": { "title": "Frontier Tower Memory Walk", "waypoints": [{ "node": "evt_001", "narration": "..." }] }
}

Edge types: anchored_at · observes · produced_by · leads_to  |  Ontologies: architectural · social · experiential · mythological · bioregional · personal
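A world.json file of this shape can be checked with a few structural assertions before loading. This is a sketch — the v0.1 draft has no formal schema yet, so the checks below are an assumed minimum:

```javascript
// Sketch: minimal structural validation for a world.json v0.1 document.
const NODE_TYPES = ['event', 'perspective', 'artifact'];

function validateWorld(world) {
  const errors = [];
  if (world.schema !== 'room/v0.1') errors.push('unknown schema version');
  if (!world.place?.id) errors.push('missing root place');
  const ids = new Set((world.nodes ?? []).map(n => n.id));
  for (const n of world.nodes ?? []) {
    if (!NODE_TYPES.includes(n.type)) errors.push(`bad node type: ${n.type}`);
    if (n.type === 'perspective' && n.ref && !ids.has(n.ref))
      errors.push(`dangling perspective ref: ${n.ref}`);
  }
  for (const e of world.edges ?? []) {
    if (!ids.has(e.from) || !ids.has(e.to))
      errors.push(`edge references unknown node: ${e.from} -> ${e.to}`);
  }
  return errors;
}
```

Because the format is flat, validation stays flat too: one pass to collect ids, one pass each over nodes and edges.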

// 12

Collaboration Field

ROOM is not a closed system. It's infrastructure for connection. Each layer is composable — different communities, different ontologies, different tools, all anchored to the same spatial substrate.


// coda

Places Can Remember

The web gave us pages — flat surfaces covered in text, linked by addresses. The spatial internet will give us places — volumetric environments where knowledge doesn't sit on surfaces, it inhabits them.

ROOM explores what happens when a place doesn't just look like somewhere — when it remembers what happened there, holds the perspectives of everyone who passed through, and offers that memory to anyone willing to walk in.

this is the ground floor
room-openmetaverse.vercel.app

ROOM · innercartography.github.io · Semantic Field · The Splat Guild
Open Metaverse Hackathon · March 7–8, 2026 · Frontier Tower, San Francisco
MIT License · You own what you create.