About

I started my career in games when real-time 3D was genuinely new, not a mature industry with established pipelines, but a frontier where every project required inventing the process alongside the product. That early experience of operating without a playbook became a through-line: VR before it had a mainstream audience, mixed reality before the hardware was reliable, and now spatial AI before anyone has fully agreed on what it means. Twenty-five years later, the tools have changed completely. The instinct hasn't.

My foundation is unusual for this space. I studied photography and mixed media at the University of New Mexico under Patrick Nagatani, Tom Barrow, and art historian Eugenia Parry Janis, thinkers who understood that artists don't follow the evolution of their medium; they drive it. That conviction has shaped how I approach every technological shift in my career, from real-time 3D to VR to spatial AI. I don't see generative AI as a disruption to resist; I see it as the next phase of a conversation about how we make, see, and experience the world, one that, for me, started with photography and that continues to evolve as I do.

I'm currently conducting independent research into spatial constraint systems, testing whether explicit architectural knowledge injected at the prompt level can produce more spatially coherent AI-generated 3D environments. The work involves building rule libraries for specific architectural traditions, developing an AI-assisted interpretation layer that translates plain-language descriptions into rule-grounded prompts, and systematically evaluating outputs against defined spatial criteria.
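To make the pipeline concrete, here is a minimal sketch of the rule-grounding step: a small rule library keyed by architectural tradition, and a helper that appends explicit spatial constraints to a plain-language description before it reaches the generative model. Every name here (the `RULE_LIBRARY` structure, the `ground_prompt` helper, the example rules) is a hypothetical illustration, not the actual research code.

```python
# Hypothetical sketch of rule-grounded prompting. The rule library maps an
# architectural tradition to explicit spatial constraints; the helper injects
# those constraints into a plain-language scene description.

RULE_LIBRARY = {
    "japanese_teahouse": [
        "ceiling height stays low, roughly 2.1 m, to encourage seated posture",
        "entry passes through a small crawl-through door",
        "an alcove anchors one wall as the visual focal point",
    ],
    "romanesque_chapel": [
        "thick load-bearing walls with small, deeply recessed windows",
        "rounded arches over every opening",
        "a semicircular apse terminates the main axis",
    ],
}

def ground_prompt(description: str, tradition: str) -> str:
    """Append the tradition's spatial constraints to a scene description,
    producing a rule-grounded prompt for a generative 3D system."""
    constraints = "\n".join(f"- {rule}" for rule in RULE_LIBRARY[tradition])
    return (
        f"{description}\n\n"
        f"Spatial constraints ({tradition.replace('_', ' ')}):\n"
        f"{constraints}"
    )

prompt = ground_prompt("A quiet tea room at dusk, lit by lanterns.",
                       "japanese_teahouse")
print(prompt)
```

In practice the interpretation layer would select the tradition and the relevant subset of rules from the user's description rather than taking them as arguments, but the shape of the output — description plus explicit, checkable constraints — is the point.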

The underlying problem is one I find genuinely important: current spatial AI systems are powerful pattern matchers, but they don't yet reason about space the way a designer or architect does. Understanding that gap feels like the right work to be doing right now.