Lo-fi operator room / static WebGL page / agent systems map

I build tools that make agents do real work.

GitBuck is the public control surface for a Windows-first, local-first AI lab: coding agents, MCP tooling, GPU services, OCR and transcription pipelines, audio tooling, terminal interfaces, and operator workflows that survive contact with reality.

ROOM SIGNAL Agent build loop

Plan, edit, test, review, remember, and hand off without losing the thread.

Scroll-driven room tour

Move through the room. The system changes with you.

Start at the window, drop into the agent deck, pass the media bench, then land at the terminal surface.

01 / Window

The calm surface

Lo-fi skyline, warm light, and the public front door. The page starts like a wallpaper, not a dashboard.

02 / Agent deck

Build loops wake up

Scroll pulls the 3D rig forward: planning, edits, tests, review gates, and memory become one operating loop.

03 / Media bench

Evidence stays attached

Transcripts, OCR, frame samples, audio stems, and source media stay linked instead of becoming loose notes.

04 / Terminal surface

The tool becomes usable

Status lines, TUI surfaces, and visual verification turn backend work into something an operator can trust.
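The four stops above can be sketched as a pure scroll-to-stage mapping. This is an illustrative sketch, not the page's actual code; the stop names just mirror the sections above:

```javascript
// Map normalized scroll progress (0..1) to one of the four tour stops.
// Stop names mirror the tour sections; thresholds are evenly spaced.
const STOPS = ["window", "agent-deck", "media-bench", "terminal-surface"];

function stopForProgress(progress) {
  // Clamp so overscroll never indexes past the last stop.
  const p = Math.min(Math.max(progress, 0), 1);
  const index = Math.min(Math.floor(p * STOPS.length), STOPS.length - 1);
  return STOPS[index];
}

// In the page this would be driven by a scroll listener, e.g.:
// const progress = scrollY / (document.body.scrollHeight - innerHeight);
```

A scroll handler then only has to compare the current stop to the previous one and retarget the 3D rig when it changes.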

Room map

This overlay uses the native HTML popover API. The scroll tour stays static-host friendly: no backend, no build, no framework.
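A minimal sketch of that pattern, with zero JavaScript; the `room-map` id and contents are illustrative, not the page's real markup:

```html
<!-- The button toggles the overlay; no script or framework needed. -->
<button popovertarget="room-map">Room map</button>

<!-- `popover` makes the browser handle open, close, and light-dismiss. -->
<div id="room-map" popover>
  <p>01 Window · 02 Agent deck · 03 Media bench · 04 Terminal surface</p>
</div>
```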

Interactive mode switch

Press a mode. The page rewires itself.

This is still a simple static GitHub Pages site. No backend, no build step, just HTML, CSS, JavaScript, and a Three.js scene loaded in the browser.
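Skipping the build step works because Three.js ships as an ES module that the browser can import directly. A minimal sketch, assuming a CDN URL and version that are illustrative only:

```html
<!-- No bundler: Three.js loads as an ES module straight from a CDN. -->
<script type="module">
  import * as THREE from "https://unpkg.com/three@0.160.0/build/three.module.js";

  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
  const renderer = new THREE.WebGLRenderer({ antialias: true });
  renderer.setSize(innerWidth, innerHeight);
  document.body.appendChild(renderer.domElement);
  renderer.render(scene, camera);
</script>
```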

Safe mode: the model can only return validated JSON, never HTML or code.
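One way such a gate could be enforced client-side, sketched with a hypothetical `applySafeMode` helper and an illustrative three-key schema (not the page's actual code): parse the model reply as JSON, allow only expected string values, and reject anything that looks like markup.

```javascript
// Hypothetical safe-mode gate: accept only a validated JSON object
// from the model, never raw HTML or code.
const ALLOWED_KEYS = new Set(["theme", "accent", "layout"]); // illustrative schema

function applySafeMode(reply) {
  let data;
  try {
    data = JSON.parse(reply);
  } catch {
    return null; // not JSON at all: reject
  }
  if (typeof data !== "object" || data === null || Array.isArray(data)) return null;
  for (const [key, value] of Object.entries(data)) {
    // Unknown keys or non-string values are rejected outright.
    if (!ALLOWED_KEYS.has(key) || typeof value !== "string") return null;
    // Anything that looks like markup or script never reaches the DOM.
    if (/[<>]/.test(value)) return null;
  }
  return data;
}
```

Because the reply never touches `innerHTML` and fails closed on any surprise, a misbehaving model can change settings but cannot inject content.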

Saved rewires

Last seven room versions

No saved room versions yet.

What replaced the old portfolio

No retro template. A kinetic map of the current work.

01 / Agent layer

Agentic developer workflows

Codex, Claude Code, MCP servers, memory, review gates, and handoff surfaces that turn chat into repeatable execution.

02 / Machine room

Local AI operations

Windows-first control, remote Linux workers, GPU services, model defaults, startup reliability, and operator-ready runbooks.

03 / Perception

Media intelligence

Video transcription, OCR, frame sampling, audio separation, ffmpeg pipelines, and dashboards with source-linked evidence.

04 / Surface

Terminal products

Status lines, ASCII/TUI interfaces, visual terminal verification, and compact tools that fit where engineers actually work.

Operating stack

A personal lab wired like a production floor.

The through-line is practical autonomy: tools that remember context, recover from bad states, verify visible output, and leave a clean path for the next run.

Agent layer: Codex, Claude Code, custom MCP, review automation
Runtime layer: Windows desktop, remote Linux services, local GPU workflows
Media layer: Whisper, OCR, ffmpeg, frame sampling, audio separation
Product layer: TypeScript, Python, Rust, shell, browser UI, CLI/TUI

Public proof points

Small public surface, serious private velocity.

Most operational work stays private. The public repos show the direction: live developer context, structured prompting, and this page as the current front door.

Build the tool. Run the tool. Look at the real output. Fix what still feels fake.

Contact

If it needs agents, local AI, media pipelines, or a better operator surface, I probably care.

Talk through GitHub