Soul Zhang Lu is Out to Humanize AI

If any technology could claim the Latin phrase “Veni, vidi, vici,” it would be artificial intelligence.
Today, AI has seeped into every aspect of life, including what was once considered a human-only bastion: emotional connection.
The CEO and founder of Soul App, Zhang Lu, discovered the potential of this up-and-coming technology way back in 2016.
Among the very few social networking apps to use AI at the time, Soul harnessed the technology to support the unique digital connectivity format the platform was built on.
The goal was to enable people to easily find kindred souls in the app’s user base, and AI presented an easy yet effective way to achieve this.
Soul Zhang Lu’s AI engine, LingXi, could make sense of large volumes of data and pick out users who shared similar interests.
The success of these preliminary efforts carved the path for Soul App’s full-throttle foray into AI-driven social connectivity.
While a lot of water has passed under the bridge since then, what is making waves right now is Soul’s recent upgrade of its end-to-end full-duplex voice interaction model.
Quite simply, this upgrade is a turning point in how digital companionship is experienced and understood.
Because this upgrade does away with the turn-based dialogue limitation of the model, it enables AI chatbots to hold natural conversations that feel spontaneous, emotionally expressive, and are deeply attuned to the user’s context. To achieve this result, Soul Zhang Lu’s engineers set three foundational goals:
- Creating an emotional AI that would be capable of real empathy instead of mere scripted responses.
- Designing a model that can provide an authentic human-like interaction experience.
- Tweaking the model to the point where the AI doubles as both a friend and an ally.
This three-pillared approach to the upgrade was essentially a philosophical shift in AI design as it was intended to move AI from utility-based to relationship-based. In a nutshell, while past models were designed to “do things for” users, like perform certain tasks or answer questions, Soul’s engineers wanted their AI to “be with” the users, offering a presence that can support, engage, and adapt.
To create an emotional AI, it was necessary to tackle the delayed response that mars the performance of most voice models.
Traditional AI-based communication tools have long suffered from an annoying mechanical limitation.
These conversational agents follow a rigid pattern: the user speaks, the AI listens to the words, interprets them, and then responds according to its training.
This multi-step process understandably leads to a delay between user input and AI output. The result is an interaction that is unmistakably robotic and leaves users with the feeling of chatting with a tool instead of a companion.
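The delay described above can be sketched in a few lines. This is a minimal illustration, not Soul's implementation; the stage names (`transcribe`, `generate_reply`, `synthesize`) and their latencies are hypothetical stand-ins that simply show why a sequential pipeline makes the user wait for the sum of every stage:

```python
import time

# Hypothetical stand-ins for the three stages of a turn-based voice pipeline.
# The sleep() calls simulate per-stage processing latency.

def transcribe(audio: str) -> str:
    time.sleep(0.3)  # simulated speech-to-text latency
    return f"text({audio})"

def generate_reply(text: str) -> str:
    time.sleep(0.5)  # simulated language-model latency
    return f"reply({text})"

def synthesize(reply: str) -> str:
    time.sleep(0.2)  # simulated text-to-speech latency
    return f"audio({reply})"

def turn_based_exchange(user_audio: str) -> tuple[str, float]:
    start = time.perf_counter()
    text = transcribe(user_audio)   # step 1: listen fully, then transcribe
    reply = generate_reply(text)    # step 2: interpret and compose
    audio = synthesize(reply)       # step 3: speak only after 1 and 2 finish
    return audio, time.perf_counter() - start

audio, delay = turn_based_exchange("hello")
print(audio, f"{delay:.1f}s")  # total delay is at least 0.3 + 0.5 + 0.2 = 1.0s
```

Because each stage blocks the next, the reply cannot begin until the slowest path completes, which is exactly the robotic pause users notice.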
To counter this issue, Soul Zhang Lu’s engineers decided to break away from this process altogether. The recent upgrade allows Soul to support true full-duplex voice communication.
This means that users and the platform’s AI can speak at the same time, interrupt one another organically, and shift topics fluidly.
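Conceptually, full duplex means the listening and speaking paths run at the same time rather than in turns. The sketch below is a hypothetical illustration of the "barge-in" behavior this enables (it is not Soul's model): two concurrent coroutines share the floor, and an incoming user utterance interrupts the agent mid-sentence.

```python
import asyncio

# Hypothetical full-duplex sketch: speaking and listening run concurrently,
# and a shared Event lets the listener interrupt the speaker ("barge-in").

async def speak(words, interrupted: asyncio.Event, transcript: list):
    for word in words:
        if interrupted.is_set():
            transcript.append("[agent yields mid-sentence]")
            return
        transcript.append(f"agent: {word}")
        await asyncio.sleep(0.1)   # pacing between spoken words

async def listen(interrupted: asyncio.Event, transcript: list):
    await asyncio.sleep(0.25)      # user starts talking while the agent speaks
    transcript.append("user: wait, actually...")
    interrupted.set()              # barge-in: signal the speaker to stop

async def duplex_exchange() -> list:
    interrupted = asyncio.Event()
    transcript: list = []
    # Both directions of the conversation run at the same time.
    await asyncio.gather(
        speak(["so", "as", "I", "was", "saying"], interrupted, transcript),
        listen(interrupted, transcript),
    )
    return transcript

for line in asyncio.run(duplex_exchange()):
    print(line)
```

The design point is that neither side owns the channel: the speaker checks for interruption on every word, so the exchange can change direction the instant the user speaks.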
Because the technology listens not just for words, but for emotional cues, it boasts some remarkable capabilities. For instance:
- From joy and curiosity to sadness, the AI can adapt its vocal tone fluidly throughout a conversation.
- It also does not always respond in formal or scripted sentences. In fact, the model accurately mirrors the way real people speak, complete with pauses, filler words, and tonal shifts.
- The system is designed to adjust its emotional expression in real time based on the user’s tone and context. So, the responses feel more empathetic and genuine.
Furthermore, the fact that the model is built on a Unified Autoregressive Architecture allows it to perceive not only what the user is saying but also when and how it is said. And, what does this sound like in practice?
Well, the AI might cough awkwardly before replying to an emotional question, mimicking the hesitancy of a real person, or it may interject with casual expressions or social affirmations (“yeah,” “totally,” “I get that”) without waiting for a formal prompt.
The model can also use past interactions to adjust its tone. When all of these tweaks are put together, the net effect is a conversation that feels alive and grounded in reality rather than programmed behavior.
This upgrade will undoubtedly sit well with the users of Soul Zhang Lu’s platform, many of whom have been asking for an AI chatbot that sounds more human. Add to this the findings of a 2025 survey conducted by Soul:
- Over 70% of the platform’s users were open to befriending AI, and
- Around 40% were already using AI products for emotional companionship daily
These figures make it easy to see how this upgrade is, in reality, an effort to meet a growing need expressed by the platform’s young users.
Be that as it may, Soul Zhang Lu is candid in stating that AI should not and will not replace human connections.
The idea, all along, has been to create a social environment where AI and humans coexist and add value to the overall experiential ecosystem.
It goes without saying that, for now, Soul’s many users are intrigued by the idea of communicating with an ever-present, emotionally responsive, and contextually aware AI companion.
And if that is not enough, Soul Zhang Lu’s engineers have already clarified that further enhancements to the voice model will include multi-party interaction capabilities.
After all, why limit AI to one-to-one conversations when it can also be made a part of a spirited debate or a lively conversation among friends?