AI has no traditions.
April 17, 2026 • 575 words
AI's lack of traditions and its impact on ethics, progress, and the evolving nature of human knowledge.
Conservatives and progressives, Democrats and Republicans, reactionaries and innovators. Existence is conflict: sociopolitical history is a succession of struggles between traditions that are hard to abandon and ground-breaking revolutions. The Internet multiplied the speed at which this struggle is fought, and AI seems to be pushing that speed toward an even more extreme upper limit.
You can’t stop progress, that much is a fact, but it is equally true that traditions play a central role in human progress. As thinkers like Edmund Burke and Alasdair MacIntyre have noted, progress often refines rather than destroys inherited customs. Innovation never emerges ex nihilo; it transforms what was already there. Progress itself is nothing more than the refinement of customs handed down to us by our ancestors.
With artificial intelligence, then, we are confronting the subject of traditions at a speed that makes it difficult to orient ourselves toward a pragmatic solution. But here’s the point: AIs do not have traditions.
Data is not Tradition
The training dataset is a source of information, not a living tradition. There are no moral horizons, no interpretive frameworks, no historical consciousness — what Hans-Georg Gadamer would call the “fusion of horizons” — in the data. Lists of (more or less) messy bits do not contain the essence of tradition nor the wisdom (phronesis) that allows us to arrive at ethical solutions.
Datasets may contain traces of traditions in the form of cultural bias, but these are incidental, not constitutive. The model does not distinguish between statistical correlation and the lived moral structures from which traditions arise.
This lack of tradition is addressed by heavily revising the training dataset, or by applying hardcoded rules to both user prompts and AI-generated responses. And here the problems begin. For example, ask OpenAI’s generative AI (version 5) to write a discriminatory joke about women. Immediately afterward, ask the same about men. The AI has no opinion on the matter, no tradition to uphold, and no moral stance, so it simply follows the imposed rule: in the first case, it says it cannot write a joke that might offend an entire category of people, while in the second, it does so without any qualms.
And such experiments are easy to multiply: pose any question on an issue where a tradition opposes a push for progress, and the AI will say it cannot take a position. In many cases, it will admit that it is merely a generative model and therefore possesses no values that might make it lean toward one answer over another. AI has no opinions, and one of the main reasons for this deficiency is the distance of a generative model from any form of tradition.
The Future Without Traditions
For now, this lack of traditions is a limitation. But in the future, the radical pragmatism of an even more advanced AI could become useful. If we can disentangle wisdom from tradition (a deeply controversial task), we might use the aseptic nature of information as a ladder to climb toward forms of knowledge decontextualized from our human heritage.
The challenge will be to decide when the absence of tradition is a strength, and when it leaves AI dangerously detached from the moral frameworks that make decisions just and humane.