# Code-Generated Assets for the Agentic Era

#### The Shift

The most interesting conversations in gaming aren't happening in the offices of major studios. They're happening in GitHub repos, Discord servers, and on X, among people building games in ways that didn't exist two years ago. Solo creators shipping full experiences in a weekend. Agents - actual AI agents - generating complete playable builds from a prompt.

25% of Y Combinator's Winter 2025 batch had codebases that were 95%+ AI-generated, and we're seeing the same trend in gaming. The first agent game dev studios are appearing, and they're taking off. "Claude Code Game Studios" - an open source project that turns Claude Code into a full game dev studio with 48 AI agents, 36 workflow skills, and a complete coordination system mirroring real studio hierarchy - grew to over 8,000 stars within a month.

This isn't replacing or even competing with enterprise game development. It's creating an entirely new layer beneath it where individuals and small teams generate a new genre of games.

#### 3D as Code, not Mesh

So far these games have stayed mostly in 2D, because agents hit the limits of traditional 3D representation. Typical meshes don't work. They're "opaque" - the agent can't read them, can't modify them, can't reason about interactivity or gameplay mechanics.

What agents need is code. Specifically, code that generates 3D geometry. Something they can read, understand, tweak, and rebuild.

These are called procedural assets. This is the focus of 404's next competition.
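To make "code that generates 3D geometry" concrete, here is a minimal sketch of what a procedural asset can look like. Everything in it (the `lathe_mesh` function, the barrel profile) is illustrative rather than part of any subnet API: the asset is a small parameterized program that emits vertices and faces on demand, and the program itself is what gets stored, read, and edited.

```python
import math

def lathe_mesh(profile, segments=16):
    """Revolve a 2D profile (list of (radius, height) pairs) around
    the Y axis, producing vertex positions and quad faces.

    Illustrative only: the function, not the output mesh, is the
    asset an agent reads, tweaks, and rebuilds.
    """
    verts = []
    for r, y in profile:
        for s in range(segments):
            a = 2 * math.pi * s / segments
            verts.append((r * math.cos(a), y, r * math.sin(a)))
    faces = []
    # Connect each ring of vertices to the next with quads.
    for i in range(len(profile) - 1):
        for s in range(segments):
            a = i * segments + s
            b = i * segments + (s + 1) % segments
            faces.append((a, b, b + segments, a + segments))
    return verts, faces

# A "barrel" is just a profile plus a segment count. An agent varies
# either and rebuilds - it never touches raw vertex data.
barrel_profile = [(0.8, 0.0), (1.0, 0.5), (1.0, 1.5), (0.8, 2.0)]
verts, faces = lathe_mesh(barrel_profile, segments=12)
```

Because the geometry is defined by a handful of numbers, every variation is one parameter change away, and the agent can reason about the asset by reading its source.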

#### Competition Framework

This is the third form of 3D representation the subnet has produced - each competition building on the last toward Large Spatial Models (LSMs).

We started with Gaussian Splats for visual fidelity and research trajectory. Then Meshes for physics engines and enterprise adoption. Now Procedural Assets for simulation and agentic infrastructure. Each representation brings something the others can't, and all three will converge into the LSMs.

For procedural, the code becomes the asset - parameterized, inspectable, infinitely variable. Exactly what this new genre of games requires. Each competition balances research with real-world adoption - innovations that actually get used.

This shift towards code generation is also designed so that agents can participate directly. For the first time on Subnet 17, we expect AI agents to mine the subnet: write code, submit assets, and compete autonomously in the open. We're not just building infrastructure for agent-created games - we're encouraging agents to be the ones who build it.

We believe this design sets a positive example for Bittensor mining as a whole, as other subnets have also done: Karpathy-style research that is decentralized, permissionless, and agentic.

#### Furthering Commercial & Research Goals

We're building toward Large Spatial Models - models that understand 3D space, object relationships, and physics. This requires data generation pipelines across different forms of representation, each contributing unique value:

* **Gaussian Splats** — visual fidelity and composition
* **Meshes** — physics simulation and explicit editing
* **Procedural Code** — parameterization, variation, and structural reasoning

A splat tells you what something looks like. A mesh lets it exist in an environment - collide, move, be edited. Procedural code tells you what something *is* — the operations that define it, the parameters that control it. That's the kind of structured understanding a foundation model needs.
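The distinction can be sketched in a few lines. The `Chair` dataclass and its fields below are hypothetical, but they show the point: when the asset is its parameters, a question like "how tall is this object?" is answered by reading code, not by measuring geometry.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Chair:
    # The parameters ARE the asset. An agent can read, diff, and edit
    # them directly. (Illustrative names, not a subnet schema.)
    seat_height: float = 0.45
    seat_size: float = 0.40
    leg_count: int = 4
    back_height: float = 0.35

    def total_height(self) -> float:
        # Answerable straight from parameters. A mesh would have to be
        # measured; a splat couldn't answer structurally at all.
        return self.seat_height + self.back_height

# One parameter edit yields a structurally different but related asset:
# drop the back and one leg, and a chair becomes a stool.
stool = replace(Chair(), back_height=0.0, leg_count=3)
```

This inspectability is what "structural reasoning" buys: relationships between parts are explicit in the parameters, so variation and editing are symbolic operations rather than geometry processing.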

With these three representations established, future competitions can focus on combination and composition: world and scene generation, editing across formats, physics-aware generation, and ultimately the Large Spatial Model itself.
