Former OpenAI CTO Mira Murati’s vision for her new startup, Thinking Machines Lab, which is exploring a new approach to AI focused on human-centered interaction.
Former OpenAI CTO Mira Murati is now working towards a very different idea of what advanced AI should look like. Instead of building systems that quietly replace human effort, she is betting on AI that stays connected to people, listens closely, and works with them in real time.
After leaving OpenAI in 2024, she started a new venture, Thinking Machines Lab, built around a bold idea: the next stage of AI should not feel like typing into a chatbot. It should feel like talking to something that actually understands how humans speak, pause, interrupt, and change direction mid-thought.
“AI should not replace humans, it should work with them”
Murati’s vision breaks from the direction many big AI labs are taking. While companies like OpenAI, Google, and Anthropic are building models that can complete entire tasks on their own, she believes the better path is keeping humans actively involved.
She says, “At some point, we will have super-intelligent machines.” But she also believes the safest and most promising path is not to remove humans from the process but to keep them actively involved as AI becomes more powerful.
She points to the idea that there are many possible outcomes ahead, not just one fixed future. And the difference between those outcomes will come down to how closely humans stay connected to the systems they are building, guiding them instead of stepping aside.
So what exactly is she building?
Thinking Machines Lab calls its core system “interaction models.” These are designed to understand communication as it happens, not after it is fully typed or spoken.
Instead of treating speech like text to be processed later, the system listens continuously through microphones and cameras, picking up tone, timing, and even interruptions.
Here’s what sets it apart:
- It follows conversations in real time instead of waiting for full prompts
- It can respond while a person is still speaking
- It understands shifts in topic without losing context
- It uses visual and audio input together, not just text

The goal is to make AI feel less like a tool and more like a participant in a conversation.
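To give a rough sense of what “responding without waiting for a full prompt” could look like, here is a minimal sketch. All names and trigger rules here (`interaction_loop`, replying when a question mark appears, signalling every five words) are illustrative assumptions, not Thinking Machines Lab’s actual system:

```python
def interaction_loop(transcript: str):
    """Consume a live transcript word by word instead of waiting for a full prompt."""
    events = []
    buffer = []
    for word in transcript.split():  # stand-in for a real streaming audio feed
        buffer.append(word)
        if word.endswith("?"):
            # React mid-conversation: a completed question triggers an early reply.
            events.append(("reply", " ".join(buffer)))
            buffer = []
        elif len(buffer) % 5 == 0:
            # Small listening signal while the speaker is still talking.
            events.append(("backchannel", "mm-hm"))
    if buffer:
        events.append(("reply", " ".join(buffer)))
    return events

events = interaction_loop("Are you free tomorrow? Let us plan the trip")
# → [("reply", "Are you free tomorrow?"), ("backchannel", "mm-hm"),
#    ("reply", "Let us plan the trip")]
```

The point of the sketch is the control flow: the system acts on partial input as it arrives, rather than batching everything into one prompt-and-response cycle.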
Can AI really keep up with natural human flow?
Early versions show signs of “backchanneling,” which means the AI can give small signals like acknowledgements or brief responses while the user is talking. This is how humans naturally show they are listening.
To support this, the system uses a dual-layer design:
- One layer handles fast conversation and live responses
- Another layer thinks more deeply in the background for complex reasoning
Experts say this structure mirrors how humans think, reacting quickly on the surface while processing deeper meaning underneath.
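A hedged sketch of how such a dual-layer split might be wired up, assuming a fast path that acknowledges immediately and a slow path that reasons in a background thread (the function names and canned replies are hypothetical, not the lab’s implementation):

```python
import queue
import threading
import time

def fast_layer(utterance: str) -> str:
    """Fast path: an immediate, backchannel-style acknowledgement."""
    return "Mm-hm, go on..."

def slow_layer(utterance: str) -> str:
    """Slow path: placeholder for deeper reasoning (a real system would call a model)."""
    time.sleep(0.1)  # simulate heavier computation
    return f"Considered response to: {utterance!r}"

def handle(utterance: str, out: queue.Queue) -> None:
    # The fast layer replies right away so the conversation never stalls.
    out.put(("fast", fast_layer(utterance)))
    # The slow layer runs in the background and delivers when ready.
    worker = threading.Thread(target=lambda: out.put(("slow", slow_layer(utterance))))
    worker.start()
    worker.join()

out: queue.Queue = queue.Queue()
handle("Can you book me a table for two?", out)
print(out.get())  # ('fast', 'Mm-hm, go on...')
print(out.get())  # the slower, fuller response arrives afterwards
```

The design choice the sketch illustrates is decoupling latency from depth: the quick layer keeps the turn-taking rhythm alive while the deeper layer catches up.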
The approach could be especially important in places like India, where voice-based interaction is growing fast and many users prefer speaking over typing.
Murati’s future vision
As Business Fortune observes, the new AI technology is still in its early stages, and there are open questions around accuracy, long conversations, and reliability. If Murati’s vision works, the next generation of AI may not wait for instructions. It may listen, adapt, and respond as conversations unfold, becoming less of a machine that answers and more of a partner that understands.