Scientists have found no traces of a genuine thinking process in AI responses.
Experts from Arizona State University (USA) have urged against treating the outputs of artificial intelligence (AI) models as a thinking process. The study was published on the preprint platform arXiv.
A team of researchers led by Subbarao Kambhampati stated that the text sequences generated by AI models may look like the product of human thought, but noted that such texts are produced statistically and carry no real semantic content or algorithmic significance. The researchers described the anthropomorphization of these intermediate tokens as a form of "cargo cult" thinking.
In the study, the American scientists described the responses of AI services as "surface-level text fragments" rather than meaningful traces of a thinking process. According to them, neural networks show no signs of cognitive activity: what looks like step-by-step reasoning is merely a byproduct of optimization.
The scientists also showed that even when models produce "complete gibberish" as intermediate output, they can still deliver the correct final answer to the user. In addition, developers of neural networks have their models insert words like "hmm" and "uh-huh" to give users the impression that they are interacting with an intelligent being. The researchers concluded that AI developers should spend more time improving performance rather than making a model's operation resemble human thought.