I've just started working with Gilles Retsin, CPO of Automated Architecture (AUAR). They're about to ship V3 of their construction robot - a full timber framing system that deploys on-site in a shipping container, runs on edge AI, and delivers panels to a housebuilder's spec cheaper and faster than a human crew.

From the builder's perspective, they're not buying a robot. They're not interacting with a digital tool. They're buying panels, on time, to spec. The complexity is entirely on AUAR's side of the transaction - interlocking robotics, physical AI and software across multiple systems, deploying into an environment full of people who didn't sign up to be part of a technology experiment.
That last part is what I keep coming back to.
Twenty years of AR product development taught a hard lesson. Google Glass failed not because the technology didn't work, but because the design culture considered only the wearer. It didn't consider the contested environment the wearer walked into - the colleague across the desk, the stranger on the street, the social contract being violated. Product teams had to rapidly develop what Doteveryone called consequence scanning: a structured practice for mapping the intended and unintended effects of a product on everyone it touches, not just the primary user.
AUAR are doing this from first principles. Their robot doesn't operate in a controlled environment. It operates inside someone else's site, someone else's programme, someone else's crew. The general contractor's superintendent has a Gantt chart. The framing foreman has tolerances he trusts. The crew have a rhythm. The design challenge isn't just the machine - it's every seam between the machine and the humans around it.
This is the AR problem turned inside out. Not projecting a digital layer onto a physical world. Deploying an intelligent physical system into a social world.
What strikes me most is how AUAR are using AI to compress their learning rate across all of this complexity simultaneously. They're tracking multimodal data from the robot, the construction teams, and the forward-deployed engineers on site - and using it to iterate across hardware, software, and operational layers in parallel. Design as accelerant, not just design as curation.
The product cultures of 2016 had a screen-based, single-user mindset, using first-generation digital product tools to manage design processes - tools that tended toward blunt simplification of the complex issues AR raised. The physical AI product teams of 2026 can bring design intuition and a commitment to quality to bear on a vastly richer picture of how their products exist across systems and land in the real world.
There's a pattern worth watching here. It was received wisdom in the 2010s to quote the Design Management Institute's Design Value Index, which showed design-centric companies outperforming the S&P 500 by 211% over a ten-year period. The argument was always about software - Figma beating Adobe, Slack winning on feel, Procore dominating construction management not through features but through the experience of using it.
My prediction is that physical AI will see the same dynamic, at greater magnitude. The binding constraint in robotics right now is not technical capability. It is the human experience of deploying, operating, and living alongside these systems. The teams that figure out design - not aesthetics, but the full social and operational texture of how their products land in the world - will be very hard to catch.