ChatGPT and other networks. From good form to bad sense.
To put it very simply, networks only roughly approximate
what they "know". Networks are very good at producing the form of
a result, but very poor at producing its sense.
Look at the photo.
© Original image
It is a very plausible image, with one edge joined smoothly to the other. The approximation looks very natural, yet very funny. Why? What exactly is the nature of the funny? It is funny because the result makes no sense. Funny things happen when a non-working system is built according to certain well-known rules: a system that has the appearance of a working one. The next question is how "to pump in" the sense. Can iteratively saturating the system with new rules compensate for the lack of sense, so that it works more and more accurately? It will be interesting to see how such a machine's complexity and its quality of work get along together. If such a machine never emerges, that will delineate the boundaries of the human mind more clearly. And that is more important than the machine itself.