

Researchers from Stanford University and Meta’s Facebook AI Research (FAIR) lab have developed a breakthrough AI system that can produce natural, synchronized motions between virtual humans and objects based solely on text descriptions.

The new system, dubbed CHOIS (Controllable Human-Object Interaction Synthesis), uses the latest conditional diffusion model techniques to produce seamless and precise interactions such as “lift the table above your head, walk, and put the table down.”

The work, published in a paper on arXiv, provides a glimpse into a future where virtual beings can understand and respond to language commands as fluidly as humans.

“Generating continuous human-object interactions from language descriptions within 3D scenes poses several challenges,” the researchers noted in the research paper.


They had to ensure that the generated motions were realistic and synchronized, that human hands maintained appropriate contact with objects, and that the object’s motion was causally linked to the human’s actions.

How it works

The CHOIS system stands out for its unique approach to synthesizing human-object interactions in a 3D environment. At its core, CHOIS uses a conditional diffusion model, which is a type of generative model that can simulate detailed sequences of motion.

When given an initial state of human and object positions, along with a language description of the desired task, CHOIS generates a sequence of motions that culminate in the task’s completion.

For example, if the instruction is to move a lamp closer to a sofa, CHOIS understands this directive and creates a realistic animation of a human avatar picking up the lamp and placing it near the sofa.

What makes CHOIS particularly unique is its use of sparse object waypoints and language descriptions to guide these animations. The waypoints act as markers for key points in the object’s trajectory, ensuring that the motion is not only physically plausible, but also aligns with the high-level goal outlined by the language input. 
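To make the conditioning concrete, here is a minimal, hypothetical sketch of how such a sampler might be invoked: the initial human and object state, the language description, and the sparse waypoints are bundled together and fed to a conditional diffusion model. The class and function names below are assumptions for illustration, not the authors’ released code.

```python
# Hypothetical CHOIS-style interface (names are illustrative assumptions).
from dataclasses import dataclass
import numpy as np

@dataclass
class InteractionCondition:
    text: str                     # e.g. "move the lamp closer to the sofa"
    init_human_pose: np.ndarray   # (J, 3) initial joint positions
    init_object_pose: np.ndarray  # (4, 4) initial object transform
    waypoints: np.ndarray         # (K, 3) sparse waypoints on the object's path

def synthesize_interaction(model, cond: InteractionCondition, num_frames: int = 120):
    """Sample a synchronized human-object motion sequence from a
    conditional diffusion model (a stand-in for the real sampler)."""
    motion = model.init_noise(num_frames)        # start from Gaussian noise
    for t in model.timesteps():                  # iterative denoising loop
        motion = model.denoise(motion, t, cond)  # conditioned on text + waypoints
    return motion  # per-frame human poses and object transforms
```

The key design choice this sketch highlights is that the waypoints constrain only a few points along the object’s trajectory, leaving the diffusion model to fill in plausible human motion between them.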

CHOIS’s uniqueness also lies in its advanced integration of language understanding with physical simulation. Traditional models often struggle to correlate language with spatial and physical actions, especially over a longer horizon of interaction where many factors must be considered to preserve realism.

CHOIS bridges this gap by interpreting the intent and style behind language descriptions, then translating them into a sequence of physical movements that respect the constraints of both the human body and the object involved.

The system is especially groundbreaking because it ensures that contact points, such as hands touching an object, are accurately represented and that the object’s motion is consistent with the forces exerted by the human avatar. Moreover, the model incorporates specialized loss functions and guidance terms during its training and generation phases to enforce these physical constraints, a significant step toward AI that can understand and engage with the physical world in a human-like manner.
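One hedged way to picture such a guidance term: at each denoising step, a differentiable contact loss measures how far the predicted hand joints are from the object’s surface, and its gradient nudges the sample toward physically consistent contact. The helpers `hand_joints` and `object_points` below are hypothetical, and the paper’s actual loss terms may differ in form.

```python
import torch

def contact_guidance(motion, hand_joints, object_points, weight=1.0):
    """Gradient of a simple hand-object contact loss, used to steer
    samples during the diffusion model's denoising steps."""
    motion = motion.detach().requires_grad_(True)
    hands = hand_joints(motion)     # (T, 2, 3) left/right hand positions
    points = object_points(motion)  # (T, P, 3) object surface points per frame
    # distance from each hand to its nearest object point, per frame
    dists = torch.cdist(hands, points).min(dim=-1).values
    loss = weight * dists.mean()
    (grad,) = torch.autograd.grad(loss, motion)
    return grad  # subtracted (scaled) from the sample at each step
```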

Implications for computer graphics, AI, and robotics

The implications of the CHOIS system for computer graphics are profound, particularly in the realm of animation and virtual reality. By enabling AI to interpret natural language instructions and produce realistic human-object interactions, CHOIS could drastically reduce the time and effort required to animate complex scenes.

Animators could potentially use this technology to create sequences that would traditionally demand painstaking keyframe animation, which is both labor-intensive and time-consuming. Furthermore, in virtual reality environments, CHOIS could lead to more immersive and interactive experiences, as users could command virtual characters through natural language and watch them execute tasks with lifelike precision. This heightened level of interaction could transform VR experiences from rigid, scripted events into dynamic environments that respond to user input in a realistic fashion.

In the fields of AI and robotics, CHOIS represents a giant step toward more autonomous and context-aware systems. Robots, often limited by pre-programmed routines, could use a system like CHOIS to better understand the real world and execute tasks described in human language.

This could be particularly transformative for service robots in healthcare, hospitality, or domestic environments, where the ability to understand and perform a wide array of tasks in a physical space is crucial.

For AI, the ability to process language and visual information simultaneously to perform tasks is a step closer to achieving a level of situational and contextual understanding that has been, until now, a predominantly human trait. This could lead to AI systems that are more helpful assistants in complex tasks, able to understand not just the “what” but also the “how” of human instructions, adapting to new challenges with a level of flexibility previously unseen.

Promising results and future outlook

Overall, the Stanford and Meta researchers have made notable progress on an extremely challenging problem at the intersection of computer vision, natural language processing (NLP), and robotics.

The research team believes that their work is a significant step towards creating advanced AI systems that simulate continuous human behaviors in diverse 3D environments. It also opens the door to further research into synthesizing human-object interactions from 3D scenes and language input, potentially leading to more sophisticated AI systems in the future.
