A Financial Times headline reads, "AI in finance is like moving from typewriters to word processors" (June 16, 2024). Despite all the excitement, however, I don't think we are that far along ("Ray Kurzweil on how AI will change the real world," The Economist, June 17, 2024). At the very least, questions remain about "generative" AI, which IBM defines as "deep learning models that can generate high-quality text, images, and other content based on the data they are trained on."
The conversational and grammatical abilities of an AI bot like ChatGPT are impressive. The bot seems to write and converse better than a good number of humans. He (or she, but a bot has no gender, so I use "he" as a neutral pronoun) is said to perform efficiently at tasks of object identification and classification, as well as simple coding. He is a very sophisticated program. However, he relies heavily on a vast database, in which he electronically makes countless comparisons. I had the opportunity to see that his analytical and artistic abilities have limitations.
Sometimes these limitations are surprising. I recently spent a few hours with the latest version of DALL-E (the artistic side of ChatGPT), trying to get the following request fulfilled:
Create an image of a strong individual (a woman) walking in the direction opposite to a crowd led by a king.
He just couldn't understand. I had to elaborate, rephrase, and explain again and again, ending with the revised instructions below:
Create an image of a strong character (a woman) walking in the direction opposite to a nondescript crowd led by a king. The woman is in the foreground, walking proudly from West to East. The crowd led by the king is in the close background, walking from East to West. They are going in opposite directions. The camera is to the South.
(By "close background," I meant "close to the background." Nobody is perfect.)
When tested, DALL-E was able to repeat my instructions, but he failed to notice obvious errors in his visual representations, as if he did not understand them. He produced numerous images in which the woman walked in the same direction as the king and his retinue, merely on the other side. The first image below shows an interesting example of this basic misunderstanding. When the bot finally drew an image of the woman and the king walking in opposite directions, following my instructions (reproduced as the second image below), the king's retinue had disappeared. Children learning to draw are better at recognizing their mistakes when these are explained to them.
I wrote that DALL-E behaved "as if he did not understand," and that is exactly the problem. This machine, which is in fact a piece of code and a big database, does not understand. What he does is amazing compared with what computer programs have been able to do until now, but it is not thinking or understanding; it is not intelligence as we know it. It is very advanced computation. ChatGPT does not know that he is thinking; that is, he does not think, and he cannot understand. He just repeats the patterns he finds in his database. He draws analogies, but that does not amount to thinking: thinking implies analogy, but analogy does not imply thinking. It is therefore not surprising that DALL-E did not suspect the individualist interpretation of my instructions, which I had not stated explicitly: a sovereign individual refusing to follow a crowd that owes allegiance to a king. A computer program is not an individual and does not understand what being an individual means. As the main image of this post suggests (which DALL-E also drew after repeated promptings, and which is reproduced below), AI does not have, and probably never will have, a Cartesian understanding of cogito ergo sum ("I think, therefore I am"). And this is not because he cannot find Latin in his database.
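To make the "repeating patterns" point concrete, here is a deliberately crude sketch in Python: a bigram chain, far simpler than the neural networks behind ChatGPT and not how those systems actually work, but it illustrates the principle that a program can fluently recombine word sequences it has already seen without understanding any of them.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which word follows which in the training text."""
    words = text.split()
    table = defaultdict(list)
    for a, b in zip(words, words[1:]):
        table[a].append(b)
    return table

def generate(table, start, n_words, seed=0):
    """Emit words by repeating observed patterns; nothing is 'understood'."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words - 1):
        followers = table.get(out[-1])
        if not followers:  # no pattern ever seen after this word: stuck
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# A tiny, made-up training corpus for illustration only.
corpus = "the king leads the crowd and the crowd follows the king"
model = train_bigrams(corpus)
print(generate(model, "the", 6))
```

Every sentence this toy produces is grammatical-looking only because every adjacent word pair was copied from its data; it could never refuse to follow the crowd, because no such pattern exists in its corpus.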
Nowhere in his database, it seems, could DALL-E find a robot with a cactus for a head, something that would have been easy to imagine for the other Dalí, Salvador.
Of course, no one can predict the future or how AI will develop. Caution and humility are necessary. Advances in computation will probably produce things we now consider miracles. But what we know about thinking and understanding suggests that electronic machines, however useful, will probably never do anything intellectual. What "artificial intelligence" lacks is intelligence.
******************************