Game Designer · AI Developer · Creative Technologist · Musician
Game designer at the AI frontier — building playable agents, generative worlds, and simulations that surprise and inform the user.
A trained reinforcement-learning agent plays a procedural platformer while its policy lights up in 3D. Trained on GCP using a single A100. Network activations can send MIDI to Ableton Live to trigger synthesizers.
Interact Watch · click to engage
Exploring the question: what does an LLM-driven game actually look like? A game of twenty questions where each run reveals new information you carry into the next conversation — meta-progression through dialogue.
Procedurally generated 2D platformer levels — solvable platforms, enemies, coins, and a goal laid out fresh each click. Local builds also drive a Nano Banana 2 reskin loop that regenerates every sprite from a text prompt.
An early v0 prototype exploring how to turn a flocking simulation into gameplay — Reynolds rules with predator pressure and tunable cohesion / separation / alignment.
Interact Click and drag
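The three Reynolds rules behind the prototype, sketched on the CPU (a minimal sketch with illustrative names, not the project's actual code):

```typescript
// Each boid steers toward the average position of its neighbors (cohesion),
// away from crowding (separation), and toward the average heading (alignment).
type Vec2 = { x: number; y: number };

const add = (a: Vec2, b: Vec2): Vec2 => ({ x: a.x + b.x, y: a.y + b.y });
const sub = (a: Vec2, b: Vec2): Vec2 => ({ x: a.x - b.x, y: a.y - b.y });
const scale = (a: Vec2, s: number): Vec2 => ({ x: a.x * s, y: a.y * s });

interface Boid { pos: Vec2; vel: Vec2; }

// Steering contribution for one boid, weighted by the tunable
// cohesion / separation / alignment gains mentioned above.
function steer(b: Boid, neighbors: Boid[],
               wCoh: number, wSep: number, wAli: number): Vec2 {
  if (neighbors.length === 0) return { x: 0, y: 0 };
  let center = { x: 0, y: 0 };
  let away = { x: 0, y: 0 };
  let heading = { x: 0, y: 0 };
  for (const n of neighbors) {
    center = add(center, n.pos);
    away = add(away, sub(b.pos, n.pos)); // push away from each neighbor
    heading = add(heading, n.vel);
  }
  const inv = 1 / neighbors.length;
  const cohesion = sub(scale(center, inv), b.pos);   // toward flock center
  const alignment = sub(scale(heading, inv), b.vel); // match mean velocity
  return add(add(scale(cohesion, wCoh), scale(away, wSep)),
             scale(alignment, wAli));
}
```

Predator pressure slots in as one more weighted term: a separation-style push away from the predator's position.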
A machine-learning challenge born from the realization that large language models are inherently bad at games of incomplete information with large possibility spaces. The challenge: build a neural network that can both play poker as a training opponent and work with the LLM to provide insights about game-theory-optimal play. Working toward a multi-day run on a 64× A100 cluster on Vast.ai to distill the solver data into a neural net.
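Poker solvers of this kind are typically built on counterfactual regret minimization, whose core primitive, regret matching, fits in a few lines (a sketch of the standard technique, not the project's actual solver):

```typescript
// Regret matching: normalize the positive parts of the accumulated
// regrets into a mixed strategy; with no positive regret, play uniformly.
function regretStrategy(regrets: number[]): number[] {
  const pos = regrets.map(r => Math.max(r, 0));
  const sum = pos.reduce((a, b) => a + b, 0);
  if (sum === 0) return regrets.map(() => 1 / regrets.length); // uniform fallback
  return pos.map(p => p / sum);
}
```

Iterating this at every decision point and averaging the strategies over time is what converges toward game-theory-optimal play.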
A procedural ASCII world you can walk around in. Terrain, characters, and dialog are generated on the fly by an LLM and rendered as text glyphs.
Interact Arrow keys · Space
A full-screen WebGPU Navier–Stokes fluid simulation. A custom compute shader models advection and convection in real time on the GPU.
Interact Click and drag
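The advection step can be sketched as semi-Lagrangian backtracing, a common choice for real-time fluid solvers (a CPU sketch of what a compute shader would do per texel; the grid layout and names are illustrative, not the project's shader code):

```typescript
// Semi-Lagrangian advection on a 2D grid: trace each cell backward
// along its velocity for one timestep and bilinearly sample the field there.
function advect(field: Float32Array, u: Float32Array, v: Float32Array,
                w: number, h: number, dt: number): Float32Array {
  const out = new Float32Array(w * h);
  const sample = (f: Float32Array, x: number, y: number): number => {
    // bilinear sample with clamped coordinates
    const cx = Math.min(Math.max(x, 0), w - 1);
    const cy = Math.min(Math.max(y, 0), h - 1);
    const x0 = Math.floor(cx), y0 = Math.floor(cy);
    const x1 = Math.min(x0 + 1, w - 1), y1 = Math.min(y0 + 1, h - 1);
    const fx = cx - x0, fy = cy - y0;
    const top = (1 - fx) * f[y0 * w + x0] + fx * f[y0 * w + x1];
    const bot = (1 - fx) * f[y1 * w + x0] + fx * f[y1 * w + x1];
    return (1 - fy) * top + fy * bot;
  };
  for (let y = 0; y < h; y++)
    for (let x = 0; x < w; x++) {
      const k = y * w + x;
      out[k] = sample(field, x - dt * u[k], y - dt * v[k]); // backtrace
    }
  return out;
}
```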
A dynamic WebGPU user-interface prototype using Voronoi cell noise and signed-distance fields to visualize icons. Driven by a custom compute shader.
Interact Move the cursor
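The field underneath is classic Worley-style cell noise: the per-pixel distance to the nearest feature point, whose iso-contours read as a signed-distance band (a CPU sketch with explicit points; the project evaluates this in a compute shader):

```typescript
// F1 cell-noise distance: for a query point, the distance to the
// nearest of a set of feature points.
function cellDistance(px: number, py: number,
                      feats: Array<[number, number]>): number {
  let best = Infinity;
  for (const [fx, fy] of feats) {
    const d = Math.hypot(px - fx, py - fy); // Euclidean distance to feature
    if (d < best) best = d;
  }
  return best;
}
```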
A Physarum polycephalum simulation rendered as living text. Agents deposit chemical trails on a WebGPU compute pass; the result reads as glyphs.
Interact Click and drag
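A single Physarum agent update, sketched on the CPU (parameter names are illustrative; the project runs this per agent in a WebGPU compute pass): sense the trail at three offsets, turn toward the strongest reading, move, deposit.

```typescript
interface Agent { x: number; y: number; heading: number; }

function stepAgent(a: Agent, trail: Float32Array, w: number, h: number,
                   sensorDist: number, sensorAngle: number,
                   turnRate: number, speed: number, deposit: number): void {
  // sample the chemical trail at an angular offset, with toroidal wrap
  const read = (ang: number): number => {
    const sx = ((Math.round(a.x + Math.cos(ang) * sensorDist) % w) + w) % w;
    const sy = ((Math.round(a.y + Math.sin(ang) * sensorDist) % h) + h) % h;
    return trail[sy * w + sx];
  };
  const f = read(a.heading);
  const l = read(a.heading - sensorAngle);
  const r = read(a.heading + sensorAngle);
  if (l > f && l > r) a.heading -= turnRate;      // steer left
  else if (r > f && r > l) a.heading += turnRate; // steer right
  a.x = ((a.x + Math.cos(a.heading) * speed) % w + w) % w;
  a.y = ((a.y + Math.sin(a.heading) * speed) % h + h) % h;
  trail[Math.floor(a.y) * w + Math.floor(a.x)] += deposit; // lay chemical
}
```

A separate diffuse-and-decay pass over the trail texture (not shown) is what makes the deposits spread and fade into the vein-like patterns.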
3D WebGPU boid flocking with GPU spatial hashing. Used a modified version of Andrej Karpathy's auto-research to build an optimization loop that maximizes particle count.
Interact Click and drag · open controls
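The spatial-hashing idea, sketched on the CPU (a map-based sketch; the GPU version would instead sort boids by cell index, but the structure is the same): bucket each boid by integer cell so a neighbor query scans only the 27 adjacent cells instead of all N boids.

```typescript
type Vec3 = [number, number, number];

function cellKey(p: Vec3, cellSize: number): string {
  return p.map(c => Math.floor(c / cellSize)).join(",");
}

// Build the grid: cell key -> indices of the boids inside that cell.
function buildHash(points: Vec3[], cellSize: number): Map<string, number[]> {
  const grid = new Map<string, number[]>();
  points.forEach((p, i) => {
    const k = cellKey(p, cellSize);
    const bucket = grid.get(k);
    if (bucket) bucket.push(i); else grid.set(k, [i]);
  });
  return grid;
}

// Candidate neighbors: everything in the 3x3x3 block of cells around p.
function neighbors(p: Vec3, grid: Map<string, number[]>,
                   cellSize: number): number[] {
  const [cx, cy, cz] = p.map(c => Math.floor(c / cellSize));
  const out: number[] = [];
  for (let dx = -1; dx <= 1; dx++)
    for (let dy = -1; dy <= 1; dy++)
      for (let dz = -1; dz <= 1; dz++) {
        const bucket = grid.get(`${cx + dx},${cy + dy},${cz + dz}`);
        if (bucket) out.push(...bucket);
      }
  return out;
}
```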
A real-time WebGPU raymarcher flying a chase-cam ship through a Mandelbox. Camera pose, fractal folds, AO, glow, and HDR are all live-tunable from the collapsed Fractal Controls panel; defaults are pre-tuned for atmospheric flight.
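The "folds" the controls expose are the Mandelbox's two standard operations, shown here in the common textbook formulation (constants and names are illustrative, not necessarily this project's tuning):

```typescript
// Box fold: reflect a coordinate back inside [-limit, limit].
function boxFold(v: number, limit: number): number {
  if (v > limit) return 2 * limit - v;
  if (v < -limit) return -2 * limit - v;
  return v;
}

// Sphere fold: returns the scale factor applied to the point.
// Inside minR2 -> fixed linear scaling; inside fixedR2 -> sphere inversion.
function sphereFold(x: number, y: number, z: number,
                    minR2: number, fixedR2: number): number {
  const r2 = x * x + y * y + z * z;
  if (r2 < minR2) return fixedR2 / minR2;
  if (r2 < fixedR2) return fixedR2 / r2;
  return 1;
}
```

Iterating boxFold on each axis, then sphereFold, then a scale-and-offset gives the distance estimate the raymarcher steps along.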
A point-cloud Gaussian-splat fly-through built on Three.js — particle size, opacity, fog, blending, ship vortex effector, and over 50 other parameters live-tunable from the in-iframe Settings panel. Defaults match the saved tuning from the standalone session.
3D viewport from a development dashboard — nine procedurally generated trees with wind, bloom, depth-of-field, and dust particles. WebGPU + Three.js TSL post-processing.
An autonomous shader iteration loop using vision-model image analysis to drive evolutionary refinement of GPU shaders. Each iteration generates two candidates, analyses both, picks the winner, and uses that shader as the basis for the next generation.
Interact Auto-cycles · arrows to step
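The selection loop distilled (hypothetical function names; in the real project the score comes from a vision model analyzing rendered frames):

```typescript
// Two-candidate evolutionary loop: mutate the parent twice, keep the
// candidate the analyzer scores higher, and use it as the next parent.
function evolve(seed: string, generations: number,
                mutate: (s: string) => string,
                score: (shader: string) => number): string {
  let parent = seed;
  for (let g = 0; g < generations; g++) {
    const a = mutate(parent); // candidate 1
    const b = mutate(parent); // candidate 2
    parent = score(a) >= score(b) ? a : b; // winner seeds next generation
  }
  return parent;
}
```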
An AI-driven jam tool. Type a vibe, Gemini writes verse / chorus / bridge with chords + lyrics. Synth, drum sequencer, and walking-bass generator all play live in the browser.
Knowledge-graph + semantic-search asset library for film and animation pre-production, powered by Nano Banana.
Stolen item tracker. Searches key online public marketplaces daily to try to recover my stolen items.
An infinite canvas for agentic drawing expression — a tool and a skill for open Claw agents to both read from and draw onto a giant shared canvas. A WebGPU-powered, full-stack platform on Cloudflare with account management and Stripe integration, including a custom Rust cloud renderer for the LOD tiles. Also built out a library of drawing algorithms for the agents to use.
A 3D Seattle cityscape rendered with Three.js + GPU instancing, overlaid with live civic data feeds — Fire 911, Police, parking violations, recent crimes, building permits, weather stations, code violations. Plasma / magma / inferno gradients, depth-of-field, and a view-finder for projected zoning skylines.
A multi-terminal UI prototype. When I'm coding across multiple terminals, context switching is crucial, and I find that spatially separating and anchoring terminals helps me switch between projects. This prototype pushes that concept further: 16 terminals at once that keep their relative spatial positions, rather than the most-recent-first list that many current AI chat experiences default to. Also used Gemini Live to allow switching between terminals using just voice.
Interact Click to switch between terminals. Press the G key to enable Gemini Live to switch terminals using voice.