Thesis

The flywheel is the starting point.

Directional begins with one observable direction: AI usage rises as models improve, and model improvement requires the physical substrate to keep scaling. The firm studies where that pressure creates second-, third-, and fourth-order effects.

What the loop forces

Hardware must scale in quality and quantity.

  • Capability (better hardware): bandwidth, density, cooling, interconnect, and cluster scale.
  • Capacity (more hardware): memory, accelerators, racks, power, data centers, and equipment.
  • Investment work (find the narrow gates): where forced change meets scarce capacity or technical control.

Mechanism

Better models create their own demand.

Each generation of models unlocks use cases that were previously too brittle, too slow, or too expensive. As those use cases become useful, usage rises. Higher usage pulls more compute, memory, networking, and power. The resulting hardware investment enables the next generation of models.

This is why Directional treats AI as a compounding system rather than a single product cycle. The question is not whether one model release is overhyped. The question is where repeated turns of the flywheel force the real economy to change.

Memory as example

Some problems are both a frontier problem and a scale problem.

Memory shows the shape of the problem. Higher-bandwidth memory is a capability frontier because models need to move data faster. Total memory supply is also a capacity frontier because adoption can pull far more volume than a historically commodity-like market was ever expected to provide.

Context windows are expanding, KV cache becomes a first-order constraint, and HBM4 consumes far more wafer capacity per bit than commodity DRAM. The same pattern can appear elsewhere: making the thing is hard; making enough of it, at scale and at an acceptable cost, can be equally hard.
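The KV cache point is arithmetic: the cache grows linearly with context length, so long contexts turn it into a first-order memory cost. A minimal sketch, using hypothetical dimensions for a 70B-class decoder with grouped-query attention (80 layers, 8 KV heads, head dimension 128, fp16), not any specific model's published config:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_val: int = 2) -> int:
    # 2x for keys and values, stored per layer, per KV head, per token.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_val

per_token = kv_cache_bytes(80, 8, 128, 1)           # 327,680 bytes (~320 KB)
full_ctx = kv_cache_bytes(80, 8, 128, 128 * 1024)   # 40 GiB at 128k context
print(per_token, full_ctx / 2**30)
```

At these assumed dimensions, a single 128k-token sequence consumes 40 GiB of HBM before weights or activations, which is why expanding context windows pull directly on memory capacity.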

Per-GPU memory frontier: HBM capacity and bandwidth by generation
  • Hopper · H100 SXM (HBM3): 80 GB, 3.35 TB/s
  • Blackwell · B200 (HBM3e): 192 GB, 8 TB/s
  • Rubin GPU (HBM4): 288 GB, 22 TB/s

Platform memory scale: from Hopper boxes to rack-scale memory pools
  • DGX/HGX H100 · 8 GPUs (HBM3): 0.64 TB
  • GB200 NVL72 · 72 GPUs (HBM3e): 13.4 TB
  • Rubin NVL72 · 72 GPUs (HBM4, derived from 72 × 288 GB): 20.7 TB

The scale here is not illustrative: the units are real HBM capacities. The Rubin NVL72 figure is simple arithmetic from NVIDIA's per-GPU Rubin memory disclosure.
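The platform totals above are straight multiplication of GPU count by per-GPU HBM capacity. A minimal sketch, assuming decimal units (1 TB = 1000 GB); the GB200 NVL72 row is omitted because its published 13.4 TB total implies a usable per-GPU figure below the 192 GB headline:

```python
def platform_hbm_tb(gpus: int, gb_per_gpu: float) -> float:
    """Total platform HBM in TB, from GPU count and per-GPU capacity in GB."""
    return gpus * gb_per_gpu / 1000

print(platform_hbm_tb(8, 80))    # DGX/HGX H100: 0.64 TB
print(platform_hbm_tb(72, 288))  # Rubin NVL72: 20.736 TB, i.e. ~20.7 TB
```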

Value migration

The trade should not stay in one layer forever.

Early in a platform shift, the scarce tools can capture extraordinary value. Over time, value may migrate upward as infrastructure becomes more available and the application layer finds economically important uses. Cloud followed that pattern. AI may rhyme with it, but the path will be shaped by the physical constraints of intelligence at scale.

Directional criteria

The filter is narrow by design.

  • Atoms over bits
  • Narrow gates over broad themes
  • AI and AI derivatives only
  • Asymmetry over consensus comfort
  • Strong convictions, loosely held

These are not slogans for a broad thematic fund. They are constraints on what earns attention. Directional looks for places where the flywheel is forcing change and where the market has not fully priced the consequences.