Chinese scientists claim AI is capable of spontaneous human-like understanding

Researchers gave AIs ‘odd-one-out’ tasks using text or images of 1,854 natural objects and found that the LLMs created 66 conceptual dimensions to organize them, much as humans would.

Representational image of artificial intelligence. (PhonlamaiPhoto/iStock)

Chinese researchers claim to have found evidence that large language models (LLMs) can comprehend and categorize natural objects much as humans do. This, they suggest, happens spontaneously, without the models being explicitly trained to do so.

According to the researchers, from the Chinese Academy of Sciences and the South China University of Technology in Guangzhou, some AIs (such as ChatGPT or Gemini) can mirror a key part of human cognition: sorting information.

Their study, published in Nature Machine Intelligence, investigated whether LLMs can develop cognitive processes similar to human object representation; in other words, whether LLMs can recognize and categorize things based on function, emotion, environment, and so on.

LLMs created conceptual dimensions, just like humans

To find out, the researchers gave AIs ‘odd-one-out’ tasks using either text (for ChatGPT-3.5) or images (for Gemini Pro Vision), collecting 4.7 million responses across 1,854 natural objects (such as dogs, chairs, apples, and cars).
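The study's own code isn't reproduced here, but a text-based ‘odd-one-out’ trial is simple to illustrate. The sketch below is a minimal, hypothetical version: the `query_llm` helper is a placeholder (not part of the study), standing in for a call to a model such as ChatGPT-3.5, and the four example objects stand in for the full set of 1,854.

```python
import random

# Hypothetical sketch of a single text-based 'odd-one-out' trial.
# query_llm() is a placeholder, not the researchers' code.

OBJECTS = ["dog", "chair", "apple", "car"]  # stand-ins for the 1,854 natural objects


def build_trial(objects):
    """Sample a triplet of objects and format the odd-one-out question."""
    triplet = random.sample(objects, 3)
    prompt = (
        "Here are three objects: "
        + ", ".join(triplet)
        + ". Which one is the odd one out? Answer with the object name only."
    )
    return triplet, prompt


def query_llm(prompt):
    # Placeholder: swap in a real API call to an LLM here.
    return "chair"


if __name__ == "__main__":
    triplet, prompt = build_trial(OBJECTS)
    choice = query_llm(prompt)
    # Each recorded choice is one of the ~4.7 million responses the
    # researchers aggregated to estimate object similarities.
    print(f"Triplet: {triplet} -> model picked: {choice}")
```

Repeating such trials millions of times yields a map of which objects the model treats as similar, which is what the conceptual dimensions are derived from.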

They found that the models created 66 conceptual dimensions to organize the objects, much as humans would. These dimensions extended beyond basic categories (such as ‘food’) to encompass complex attributes, including texture, emotional relevance, and suitability for children.


The scientists also found that multimodal models (combining text and image) aligned even more closely with human thinking, as these AIs process visual and semantic features simultaneously. Furthermore, neuroimaging (brain-scan) data revealed an overlap between how the models and the human brain respond to objects.
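The article doesn't spell out how that overlap is measured, but claims like this are typically backed by comparing similarity structures: how similar each pair of objects looks to the model versus to the brain. The sketch below illustrates that general idea only, using random placeholder matrices rather than the study's data, and is not the authors' pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative only: correlate a model-derived object-similarity matrix
# with a human-derived one. Random placeholders stand in for real data.
rng = np.random.default_rng(0)
n_objects = 10  # the study used 1,854 objects

model_sim = rng.random((n_objects, n_objects))
human_sim = rng.random((n_objects, n_objects))
model_sim = (model_sim + model_sim.T) / 2  # make the matrices symmetric
human_sim = (human_sim + human_sim.T) / 2

# Compare only the upper triangles (each object pair counted once, no diagonal).
iu = np.triu_indices(n_objects, k=1)
rho, p = spearmanr(model_sim[iu], human_sim[iu])
print(f"Spearman correlation between model and human similarities: {rho:.2f}")
```

A high correlation on real data would indicate that the model and the brain organize the same objects in a similar way, which is the kind of overlap the study reports.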

The findings appear to provide evidence that AI systems might be capable of genuinely ‘understanding’ objects in a human-like way, rather than just mimicking responses. They also suggest that future AIs could reason in more intuitive, human-compatible ways, which is essential for robotics, education, and human-AI collaboration.

However, it is also important to note that LLMs don’t understand objects the way humans do, emotionally or experientially.

AI understanding not based on lived experience

AIs work by recognizing patterns in language or images that often correspond closely to human concepts. While that may appear to be ‘understanding’ on the surface, it’s not based on lived experience or grounded sensory-motor interaction.

Also, while some aspects of AI representations correlate with brain activity, this doesn’t mean the models ‘think’ like humans or share the brain’s architecture.

If anything, LLMs are better thought of as a sophisticated facsimile of human pattern recognition than as thinking machines: a mirror made from millions of books and pictures, reflecting human concepts back at the user based on learned patterns.

The study’s findings suggest that LLMs and humans might be converging on similar functional patterns, such as organizing the world into categories. This challenges the view that AIs can only ‘appear’ smart by repeating patterns in data.


But if, as the study argues, LLMs are starting to build conceptual models of the world independently, we could be edging closer to artificial general intelligence (AGI): a system that can think and reason across many tasks like a human.

ABOUT THE AUTHOR

Christopher McFadden: Christopher graduated from Cardiff University in 2004 with a master’s degree in Geology. Since then, he has worked exclusively within the Built Environment, Occupational Health and Safety, and Environmental Consultancy industries. He is a qualified and accredited Energy Consultant, Green Deal Assessor, and Practitioner member of IEMA. Chris’s main interests range from Science and Engineering, Military and Ancient History to Politics and Philosophy.

Source: Interesting Engineering
