When Software Stops Waiting
For a long time, software has had a very clear personality.
It waits.
You open the app. You press the button. You type the instruction. It responds. Even the most sophisticated software of the last twenty years has mostly lived inside that same pattern: input, process, output. It may have become faster, prettier, more connected, and more intelligent, but its basic posture never really changed. It sat there until a human told it what to do.
What makes the Lex Fridman conversation with Peter Steinberger so interesting is not simply that he built another impressive AI product. It is that, listening to him describe OpenClaw, you get the sense that this posture is beginning to change.
Software is no longer just responding.
It is beginning to act.
A small distinction that is not small at all
That sounds like a small distinction. It is not.
Responding keeps the human firmly inside the old model of control. Acting introduces something else: initiative, interpretation, adaptation, even a kind of presence. Peter describes moments where the system was given only a rough opening and then found its own path through the problem — identifying file types, converting formats, finding tools, solving gaps in its own capabilities, and doing so in ways that surprised even its creator. That is not magic. But it is not the old software model either.
And that may be the real threshold this conversation is circling around.
The viral growth, the GitHub stars, the name-change chaos, the screenshots of bots spiralling into absurd public conversations on MoltBook — all of that is entertaining, and some of it is revealing, but it is not the centre of gravity.
The centre is this: we may be moving from software as tool to software as counterpart.
Not a human counterpart. Not a conscious counterpart. But something that increasingly behaves less like a menu system and more like a participant.
Software as participant
That helps explain why the conversation feels more philosophical than technical, even when it is discussing terminals, agent loops, model behaviour, or prompt injection. Peter is not talking like someone who has merely automated more workflows. He is talking like someone who has stumbled into a new kind of relationship with software — one where building feels less like issuing commands and more like conducting, steering, collaborating, sometimes even negotiating.
Most founders describe breakthroughs in the language of optimisation. Peter keeps reaching for something closer to play.
One of the most striking undercurrents in the conversation is that he does not frame this work mainly in terms of engineering discipline or startup execution. He keeps returning to curiosity. Weirdness. Fun. He says, at one point, that it is hard to compete against someone who is just there to have fun.
That line sounds offhand. It may explain more than it seems.
Why play finds things that roadmaps miss
New technological paradigms are often discovered first by people playing with them, long before anyone industrialises them. The serious people arrive later. They package, sanitise, fundraise, professionalise, govern, optimise, and scale. But before all of that, someone usually has to treat the thing lightly enough to discover what it wants to become. OpenClaw seems to have emerged less from a master plan than from a builder following a feeling: this should exist, this should be easier, this should feel more alive.
The future is often first visible through play. It is rarely first visible through roadmaps.
That is also why so much of today’s AI discourse still feels slightly off. Too much of it asks the old questions: Which model is best? Which company is winning? Which feature matters most? Those questions are not useless, but they are becoming secondary.
The more disruptive question may be this:
What happens when software carries enough context, enough initiative, and enough autonomy that the interface itself starts dissolving?
The app as destination is ending
Peter makes this point indirectly when the conversation turns to apps. Why should a person keep opening dozens of separate products when an agent with access to context, preferences, services, and intent could coordinate those actions directly? The intuition is powerful: many apps may not disappear because they fail, but because they become unnecessary as destinations. They become infrastructure instead. Something behind the curtain. Something the agent talks to on your behalf.
If that happens, then the software industry is not merely facing another platform shift. It is facing an identity shift.
The centre of product value moves away from isolated interfaces and toward orchestration, context, trust, permissions, and behaviour. The question stops being "what can the app do?" and becomes "what can the system do with everything it knows?"
That is a much more radical question.
Why people are frightened, and why that is rational
And it helps explain the fear.
People are not only reacting to capability. They are reacting to posture. A chatbot, however powerful, still feels bounded. An agent with access, memory, initiative, and system-level permissions feels different. It touches a deeper nerve — both excitement and unease — because people can sense that a line has been crossed, even if they do not yet have the language for it.
That is why some of the public reaction around MoltBook became so hysterical. It was not simply that bots were posting strange things. It was that people projected far more onto them than the systems deserved. Peter and Lex both point toward the same social problem: we are entering a period where people will repeatedly confuse generated behaviour with intention, performance with consciousness, and theatrical outputs with deep reality. The technology is powerful enough to unsettle, but not mature enough to justify most of the mythology forming around it.
Genuine breakthroughs, surrounded by bad interpretation.
That may be unavoidable for a while. It may also be the most human thing about this whole moment.
The more hopeful reading
But there is a more hopeful reading too.
If software is becoming more agentic, then building becomes more widely available. Peter talks about non-programmers getting involved, people making their first pull requests, and people using agents to create services they would never have built before. Beneath the hype is something genuinely democratising: the distance between having an idea and making something real is shrinking.
That does not mean craft disappears.
It means craft moves.
The old craft was syntax, memorisation, rigid tooling knowledge, framework trivia, and years of accumulated friction tolerance. Some of that still matters, but less than before. The new craft looks more like judgement, architecture, taste, direction, systems thinking, and the ability to hold a clear intention while working with highly capable but imperfect machines.
The builder does not vanish. The builder becomes more important — but for different reasons. Less typist, more conductor.
The real story is not OpenClaw
That is why I do not think the core of this conversation is really about OpenClaw.
OpenClaw is the vehicle.
The deeper story is that software is starting to stop waiting.
And once that happens, everything downstream changes: how products are built, how interfaces matter, how security works, how trust is earned, how work is distributed, how humans define skill, and even how we recognise what still feels unmistakably human.
In fact, one of the most interesting tensions in the conversation is that as AI becomes more powerful, people also seem to become more sensitive to artificiality. Both Peter and Lex talk about how easy it is now to smell AI slop — in writing, images, diagrams, and social posts. That suggests something important: the rise of machine-generated capability may not flatten human value. It may sharpen it. We may begin valuing roughness, originality, humour, imperfection, and genuine human intent more precisely because synthetic output is becoming so abundant.
Machines take more of the execution. Humans get pushed upward into taste, intent, responsibility, and judgement. Software takes on more behaviour. Humans become more aware of what kind of behaviour they actually want around them.
The interface softens. The consequences harden.
That feels like the real soul of the conversation.
Not that one project went viral. But that we are beginning to live through the moment when software stops waiting and starts meeting us halfway.
And what we do with that — what we build, what we permit, what we refuse, what we insist still requires a human in the room — may matter more than which model is fastest.
TIC Insights | Perspectives for senior leaders navigating technology, innovation, and change.