On April 23rd, EXEED held its "SHIFT TO TOMORROW" brand launch event at the Shanghai International Auto Show, showcasing the AiMOGA Robot. In the days leading up to the event, the AiMOGA Robot testing lab was in the final phase of pre-exhibition preparations. Three AiMOGA robots underwent a full range of evaluations, completing tasks such as voice interaction, multi-agent coordination, and obstacle avoidance, while engineers meticulously verified each piece of execution logic to ensure every detail would perform flawlessly on-site.
Jointly developed by EXEED's parent company, Chery Group, and the AiMOGA Robot team, the AiMOGA Robot was born with a clear mission: to carry the intelligent capabilities of complete vehicles into the field of embodied robotics. Chery Group's deep expertise in autonomous driving, perception systems, control architecture, and supply chain integration has become a powerful enabler, equipping AiMOGA with full-stack intelligence, from semantic understanding to physical execution.
At this year’s auto show, the AiMOGA Robot will make its debut in a multi-robot collaborative formation, offering a comprehensive look at its evolution from lab prototype to real-world deployment.
Upgraded Multi-Robot Coordination: One Command, Many Robots in Sync
“In the past, one robot would receive a voice command and act on it. Now, a single robot understands your instruction—and then dispatches others to complete the task together,” explained the head of the AiMOGA Intelligent Systems team. At this year’s auto show, audiences will witness for the first time a three-robot collaborative team, executing tasks such as guest reception, water delivery, and guided navigation—powered by unified large-model language understanding and centralized task scheduling.
Achieving this kind of "multi-robot, single-brain" coordination is no small feat. The system lead acknowledged, "The real challenge lies in enabling a group of robots to make task decisions within one model and execute actions in sync. We developed a scheduling algorithm and a large-model integration framework that allow the robots to function like a team chat group, receiving unified instructions and dynamically dividing responsibilities."
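For readers curious how such "one brain, many bodies" scheduling might be structured, below is a minimal sketch assuming a central scheduler that decomposes one instruction into subtasks and hands each to an idle robot. All names (FleetScheduler, Robot, plan_subtasks) are hypothetical, and a lookup table stands in for the large model; this illustrates the pattern, not AiMOGA's actual framework.

```python
# Minimal sketch of centralized multi-robot task scheduling.
# All names are hypothetical; the real AiMOGA framework is not public.
from dataclasses import dataclass, field


@dataclass
class Robot:
    name: str
    busy: bool = False
    queue: list[str] = field(default_factory=list)

    def dispatch(self, subtask: str) -> None:
        # A real system would publish this over the fleet's messaging
        # layer (e.g. a ROS topic or an RPC call) instead of printing.
        self.queue.append(subtask)
        self.busy = True
        print(f"{self.name} <- {subtask}")


def plan_subtasks(command: str) -> list[str]:
    # Stand-in for the large model: decompose one spoken instruction
    # into ordered subtasks. A lookup table fakes the decomposition.
    playbook = {
        "receive the guest and offer water": [
            "greet the guest at the entrance",
            "fetch a bottle of water",
            "guide the guest to the booth",
        ],
    }
    return playbook.get(command, [command])


class FleetScheduler:
    """One 'brain' that hears a command and divides it across robots."""

    def __init__(self, robots: list[Robot]):
        self.robots = robots

    def handle(self, command: str) -> None:
        for subtask in plan_subtasks(command):
            # Simple policy: give each subtask to the first idle robot,
            # falling back to the first robot if all are busy.
            robot = next((r for r in self.robots if not r.busy), self.robots[0])
            robot.dispatch(subtask)


scheduler = FleetScheduler([Robot("robot-1"), Robot("robot-2"), Robot("robot-3")])
scheduler.handle("receive the guest and offer water")
```

Run as-is, this prints one subtask assignment per robot, mirroring the "team chat group" dispatch described above.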
This breakthrough lays a technical foundation for future deployment of humanoid robots in high-interaction environments such as 4S dealerships, commercial service spaces, and even as home assistants.
From “I Want Water” to “Here’s Your Water”: Closing the Loop on Intelligent Task Execution
Understanding a command is only the first step—what truly tests a robot’s intelligence is its ability to act.
“When a user says ‘I want some water,’ our large model must not only comprehend the meaning, but also translate it into precise, executable actions: Which object needs to be picked up? What obstacles must be avoided? What’s the optimal path to reach the target?” explained the head of AiMOGA’s Planning and Control Algorithms.
Today, the AiMOGA Robot is equipped with an end-to-end control framework, enabling accurate motion planning and execution. In the water-fetching scenario featured at this year’s auto show, the robot will identify the correct bottle using multimodal perception, then execute a seamless action flow—obstacle avoidance, grasping, pouring, and handoff—all in one continuous sequence.
Especially noteworthy: by integrating visual, auditory, and spatial data, the robot can understand and act on complex natural-language instructions like "I'd like the black bottle," accurately identifying the specified item and completing the task independently.
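To make that perception-to-action loop concrete, here is a hedged sketch: a request is grounded against objects a vision system has already detected, then translated into the fixed action sequence described above. The detection schema, function names, and keyword-matching heuristic are assumptions for illustration, not the AiMOGA control stack.

```python
# Illustrative sketch: from "I'd like the black bottle" to a handed-over cup.
# The schema and matching rule are invented for illustration only.


def ground_target(instruction: str, detections: list[dict]) -> dict | None:
    # Multimodal grounding stand-in: match words in the instruction
    # ("black", "bottle") against attributes of detected objects.
    words = instruction.lower()
    candidates = [d for d in detections if d["label"] in words]
    for det in candidates:
        if det["color"] in words:
            return det
    return candidates[0] if candidates else None


def fetch_plan(target: dict) -> list[str]:
    # The continuous action flow described above, as an ordered sequence.
    return [
        f"plan a path to the {target['color']} {target['label']} at {target['pose']}",
        "avoid obstacles along the way",
        "grasp the bottle",
        "pour water into a cup",
        "hand the cup to the guest",
    ]


detections = [
    {"label": "bottle", "color": "white", "pose": (1.2, 0.4)},
    {"label": "bottle", "color": "black", "pose": (1.5, 0.1)},
]

target = ground_target("I'd like the black bottle", detections)
if target is not None:
    for step in fetch_plan(target):
        print(step)
```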
±5 cm Navigation Accuracy: Breaking the Industry's "Demo Dilemma"
“A lot of robot demonstrations you see today rely on backstage remote control or pre-set trajectories,” revealed the head of Navigation and Perception at AiMOGA. “We wanted to move beyond this industry-wide issue of so-called ‘pseudo-autonomy’.”
Over the past year, the AiMOGA team has continuously iterated its navigation and control algorithms, building on Chery Group’s autonomous driving technology platform. As a result, the robot is now capable of autonomous obstacle avoidance, real-time mapping, and dynamic path planning in complex environments.
Today, the AiMOGA Robot has achieved ±5 cm navigation accuracy, a significant improvement over the typical domestic benchmark of ±20 cm.
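Read literally, that spec implies a simple arrival criterion: a waypoint counts as reached only when the estimated pose lies within 5 cm of the goal. The toy check below illustrates the idea; the numbers and function are illustrative, not AiMOGA's navigation code.

```python
# Toy illustration of a +/-5 cm arrival criterion for navigation.
import math

TOLERANCE_M = 0.05  # +/-5 cm navigation accuracy


def reached(goal: tuple[float, float], pose: tuple[float, float]) -> bool:
    # Declare the waypoint reached only within the 5 cm radius.
    return math.dist(goal, pose) <= TOLERANCE_M


goal = (3.00, 1.50)                  # target spot beside the show car
print(reached(goal, (3.03, 1.52)))   # True: ~3.6 cm off, within tolerance
print(reached(goal, (3.15, 1.62)))   # False: ~19 cm off, near the +/-20 cm norm
```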
This level of precision allows the robot to walk directly to a vehicle, avoid crowds, and stop in place to deliver clear vehicle explanations—ensuring a reliable smart experience for visitors at the auto show. More importantly, it paves the way for deployment in more challenging environments such as shopping malls, office buildings, and retail stores.
From Voice Interaction to Intelligent Sales: Building an AI Employee Who Knows Cars and Sells Them
Beyond basic interaction and physical capability, the AiMOGA Robot will debut at Auto Shanghai as an "Intelligent Product Consultant." Powered by large-model semantic understanding, it can not only fluently answer general questions like "What are the highlights of this booth?" but also make smart vehicle recommendations based on Chery's internal product knowledge base.
“We’re building a robot equipped with a domain-specific automotive knowledge graph—one that can translate natural language questions into product recommendation strategies,” explained the head of AiMOGA’s intelligent systems.
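One plausible reading of that design, sketched below: map phrases in a visitor's question to attribute tags, then rank entries in a product knowledge base by tag overlap. The schema, tags, and placeholder model names are invented for illustration; Chery's internal knowledge graph is not public.

```python
# Hypothetical sketch: natural-language question -> recommendation strategy.
# The knowledge base, tags, and model names are placeholders.

KNOWLEDGE_BASE = [
    {"model": "model-A (placeholder)", "tags": {"family", "suv", "long-range"}},
    {"model": "model-B (placeholder)", "tags": {"sporty", "coupe", "performance"}},
]

# Phrases a visitor might use, mapped to product attribute tags.
INTENT_TAGS = {
    "family": {"family", "suv"},
    "road trips": {"long-range"},
    "sporty": {"sporty", "performance"},
}


def recommend(question: str) -> list[str]:
    # Collect the tags implied by the question, then rank models by
    # how many of those tags each one satisfies.
    wanted: set[str] = set()
    for phrase, tags in INTENT_TAGS.items():
        if phrase in question.lower():
            wanted |= tags
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda car: len(car["tags"] & wanted),
        reverse=True,
    )
    return [car["model"] for car in ranked if car["tags"] & wanted]


print(recommend("I need a family car for road trips"))
# -> ['model-A (placeholder)']
```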
This vision of an AI employee who both understands cars and knows how to sell them is becoming a key component of Chery Group’s push toward a global intelligent ecosystem. By integrating deeply with OEM knowledge systems, AiMOGA is evolving into an industry-grade intelligent assistant. As the technical lead emphasized, “We’re not chasing the flashiest specs—we’re focused on building a cost-effective, scalable intelligent robot that’s ready for real-world deployment.”
At the auto show, the AiMOGA Robot will take the stage in a multi-robot coordinated formation, showcasing the technological depth and industrial potential of a homegrown Chinese AI robotics brand.