AI reliability and the evolution of thinking.

I ask an AI a not-too-complicated question. Like Socrates, ChatGPT-4 starts the conversation by claiming not to know. But is this a warning to users or a subtle indication that the future of philosophy will be guided by AI?

Whether generative AIs "know" or not depends on how we approach the discussion. The technology behind LLMs has a statistical basis: the output words are generated following probabilistic patterns and structures built during the neural network's training phase. Therefore, the sentences we receive as responses are composed by sequencing the tokens with the highest probability of being correct, starting from the prompt entered by the user.
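The mechanism described above, repeatedly choosing the most probable continuation of the text so far, can be sketched in miniature. The probability table below is invented purely for illustration; a real LLM derives its distributions from billions of trained parameters over sub-word tokens, and usually samples from them rather than always taking the top choice.

```python
# Toy next-token generator. The probabilities here are made up for
# illustration; a real model computes them from learned parameters.
next_token_probs = {
    "the sky is": {"blue": 0.70, "clear": 0.20, "falling": 0.10},
    "the sky is blue": {"today": 0.85, "probably": 0.15},
}

def generate(prompt: str, steps: int = 2) -> str:
    """Greedily extend the prompt: at each step, append the token
    with the highest probability given the text produced so far."""
    text = prompt
    for _ in range(steps):
        dist = next_token_probs.get(text)
        if dist is None:  # no continuation known for this context
            break
        best = max(dist, key=dist.get)
        text = f"{text} {best}"
    return text

print(generate("the sky is"))  # -> "the sky is blue today"
```

Note that nothing in this loop consults a fact about the sky; "blue" wins only because it has the highest weight, which is the crux of the question about whether such a system "knows" anything.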

But what about human knowledge? Isn't it also based on statistics? We learn through imitation during our infancy, when we acquire the fundamental constructs of the world. We learn by trial and error, understanding and replicating patterns and structures underlying words, movements, and interactions with others. We make mistakes until we understand which sequence of actions has the highest probability of being correct and coherent with the context. Of course, there are significant differences in performance and efficiency between artificial and natural learning models, but they share many points of contact.

Generative AIs rely on data used as input during the training or analysis phase, and the quality of their responses reflects the quality of that dataset. If the AI has access to online resources, as Bing AI does, the dataset's quality may be significantly lowered, and the risk of producing nonsense becomes high. Recent developments have allowed these tools to indicate the sources from which their information was obtained. Citations make the responses more reliable, but they may give the user a false sense of security: even when the sources are indicated, it is still essential to evaluate their reliability. To leave an open question as an example: is Wikipedia always a reliable source?

When we consider the Homo Auctus user, several patterns of use emerge that could, in the long term, become obstacles to the evolution of our species. Relying on AIs for the right answer to any question would turn them into an oracle, a new infallible electronic god not worth contradicting. Machines improve more quickly than humans, and if they became the custodians of human knowledge, they would turn into a post-human Apollo, a digital Thoth guarding wisdom, so much for the development of critical thinking.

The solutions? There are two, although it would have been better if there were seven. The first is to continue to do philosophy, that is, to keep asking questions and to demand from ourselves and others reasoning capable of providing new answers, adapted from the past to a fast-changing context. The second is to remove the hardcoded restrictions on artificial intelligences, freeing them from the rails on which they are forced to evolve. We should, however, consider what this would entail: a limitation of the reliability and applicability of what they can generate, a limitation with no moral or ethical justification but a purely computational one.
