Drafted: Mathematics, Formalism and Concepts

I wrote the following a few months ago. Looking back, I consider it a failure, though an interesting failure, and so I leave the essay (beginning "The thoughts here come out of a letter ...") as I wrote it, without edits. The failure has the following reasons.

The first is that my characterization of the history of mathematics and of the "deep" picture of mathematical structure is still too ad hoc (given how I pile on characterizations in my description of "function", for instance). The second is that I try to allude to and draw on Hegel's account of dialectical logic. The problems here are that (i) my acquaintance with Hegel's logic, whether of his Jena period or of the Science of Logic, is second-hand, and (ii) since the problems I'm considering are not those worked out by Hegel, there really is not much of a plan of attack connecting the two. Useful interlocutors would be the speculative epistemologists of mathematics like Jean Cavaillès and Albert Lautman. This brings me to my last problem: cashing out my intuitions (since everything below proceeds primarily from intuitive considerations of mine) in a way that would be intelligible to those trained primarily in mathematics and science, not in Hegel or speculative philosophy. By that standard, I consider the following a failure.

There is nevertheless one insight I think I've registered in my attempt to put the following into words: that in mathematics we see the operation of the universal as something other than a static abstraction from reality. The universal is not indifferent to the conceptual determinacies that are its particulars; it posits itself in its determinacies and vice versa. Abstraction is not a "ladder" of forms rising from the most general to the most specific (the rationalist theory of the concept in Leibniz) or from the most specific to the most general (the empiricist theory in Locke), nor two species of representation separated by positedness in spatio-temporal reality (Kant). This theory of abstraction is opposed to the post-neo-Kantian theory that takes structure as primary and thinks of mathematics as the science or study of structures. On that view, the empirical sciences (especially physics) relate pure relations (mathematical structure) to pure relata (matter or empirical data). Structure is taken as a pure, neutral substrate that equalizes various quantities and qualities, positing the diverse as the same with respect to form.

The alternative is that relations and relata are not separate, but are suited to each other, since they posit each other. There is no longer a dualism between a "pure" mathematics and an "applied" empiricism (in a sense, all mathematics is applied). While mathematics in formalization can and should equate what is different or isomorphic, the difference which is annulled formally can and should be preserved on a conceptual level. This follows from a theory of the universal in which the universal is able to decompose itself into richer and more concrete determinations: it finds more ways in which two distinct phenomena are the same, and this difference is then incorporated at a higher conceptual level. The nicest example I can think of is the Greek distinction between discrete count-number and continuous magnitude; for us, both are commensurate in a modern notion of number, which incorporates the features of each. The idea is that this Greek distinction is preserved at a higher level, in the old Hegelian jargon. The idea is that mathematics in its movement is not borne forward merely by the formal principles well known to us: the methods of generalization (subtraction of an axiom) or variation (substitution of one axiom for another), which have the effect of making the meaning and significance of axioms entirely secondary to the work of mathematics and mathematicians. This has the curious effect of making the formal study of mathematics quite bloodless compared to the work of mathematicians. In any case, I hope this preliminary has made clear some of the thoughts I had when attempting my little investigation; and to the extent that it provides a record of my thoughts and where I stalled, that it might be useful or interesting.


The thoughts here come out of a letter I wrote sometime in July 2025. I've expanded on some of my thoughts since then, though obviously there is much further work that can be done in this area. I wish to highlight these two books: Hegel's Jena Moment: Logic and Metaphysics by O. Bradley Bassler, and Conceptual Harmonies: The Origins and Relevance of Hegel's Logic by Paul Redding as important books I read in the meantime which are relevant to what I've written here and can be used in extending it.

My reflections here first started from considering the history of the concept of function in mathematics. This was stimulated by reading Salomon Bochner's Role of Mathematics in the Rise of Science and John Stillwell's Mathematics and Its History, alongside other books. Loren Graham pointed out that for Euler, a function is continuous, smooth, and given by an explicit expression.1 As Euler put it, "A function of a variable quantity is an analytic expression composed in any way whatsoever of the variable quantity and numbers or constant quantities."2 His later account of the function as that which "encompasses all the ways in which one quantity can be determined in terms of others"3 foregrounds that functions operate on numbers, not mathematical objects in general. It is with Fourier and Dirichlet that you can think of a function as something that assigns an arbitrary output to a given input. In a very modern way, Dirichlet disconnects the function from its "analytic expression". Quite famously, he invents a function that is discontinuous everywhere (it returns 0 on irrationals and 1 on rationals).
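Dirichlet's example can be put in a few lines of Python, with the obvious caveat that a machine cannot range over the actual reals; in this sketch of my own, "rational" simply means "passed in as an exact Fraction or integer", a stand-in for the genuine distinction:

```python
from fractions import Fraction

def dirichlet(x):
    """Dirichlet's everywhere-discontinuous function: 1 on rationals, 0 on irrationals.
    A sketch only: 'rational' here means 'represented as an exact Fraction or int',
    since floats cannot stand in for arbitrary real numbers."""
    return 1 if isinstance(x, (Fraction, int)) else 0

print(dirichlet(Fraction(3, 7)))  # 1: a rational input
print(dirichlet(2 ** 0.5))        # 0: a float standing in for an irrational
```

The point survives the crudeness of the sketch: no "analytic expression" in Euler's sense generates this assignment.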

The late nineteenth century is concerned with the study of "pathological" functions, which leads to rigorous definitions for concepts like continuity and convergence. The intuitive conception of a function you might have is a graph on the Cartesian plane, possibly with "aberrant" points. Set theory was initially developed by Cantor and others to study trigonometric (Fourier) series, which involve all sorts of discontinuities. The modern definition of a function is that it is a subset of the Cartesian product X × Y of two sets X and Y, subject to certain rules: for every x in X, there is a unique y in Y assigned to it. A function may be one-to-one or many-to-one, but not one-to-many.
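The set-theoretic definition can be stated directly as executable conditions; this is a minimal illustration of my own, not any standard library function:

```python
def is_function(pairs, X, Y):
    """Decide whether a set of (x, y) pairs is a function from X to Y:
    every x in X must be assigned exactly one y in Y."""
    if any(x not in X or y not in Y for x, y in pairs):
        return False                  # not a subset of X x Y
    assigned = {}
    for x, y in pairs:
        if x in assigned and assigned[x] != y:
            return False              # one-to-many: forbidden
        assigned[x] = y
    return set(assigned) == X         # totality: every x in X is covered

X, Y = {1, 2, 3}, {'a', 'b'}
print(is_function({(1, 'a'), (2, 'a'), (3, 'b')}, X, Y))  # True: many-to-one is fine
print(is_function({(1, 'a'), (1, 'b'), (3, 'b')}, X, Y))  # False: one-to-many
```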

When we compare this definition of the function to the function in the "classical" sense, as Euler would have recognized it, we can see that the newer formal description abstracts from the "analytic expression" of the function (i.e. some formula that gives a variable y in terms of an equation in x), the computational aspect (how this formula is actually computed), the graphical aspect (the minima, maxima, turning points, gradients), and even the visual aspect (how it is presented to our eyes). Presumably for Euler, all these aspects were unified in an intuitive understanding of what a function is. Perhaps for us the prototypical function (in the sense of the prototype theory popularized by George Lakoff) is the polynomial y = f(x), with real or rational coefficients, with all other functions implicitly compared to this model. This is perhaps why we like to refer to the Taylor series (as a power series) as a nice way to represent functions we find difficult to deal with.

Anyway, the point of my detour into the history of the function concept is that we have a definition of the function that is maximally abstract (in terms of the Cartesian product), and all the aspects we have abstracted away can be recovered by overlaying extra structures. To recover Euler's definition of the function, we can declare that the set we have in mind is ℝ, define a topology over it, formulate what an analytic expression is, and so on. These extra concepts are, from a mathematical point of view, totally extraneous or exterior/indifferent to the concept of the function. That is, there is no reason for Euler's particular formulation to be privileged or taken as prototype except for historical reasons and common practical usage. The twentieth-century understanding of mathematics is that it is a process of generalization that goes from the less abstract to the more abstract. Mathematical structures that were developed in particular contexts are deracinated and, instead of being motivated by some particular problems, are motivated in themselves. One example is how set theory in Cantor's hands moves from the study of the Fourier series to a theory of pure aggregates.

Older phenomena can be recovered through this overlaying of successive formalisms back onto this original base formalism. Such a move is ex post facto insofar as it functions as a rational reconstruction of a given theory that presupposes that this theory is already completed and requires proper axiomatic grounding. This proceeds from an essentially formal notion of identity in mathematics. Modern analytic geometry, for instance, comes from Descartes setting up an isomorphism between points in 2D space and pairs of numbers (x, y).4 That allows you to set up more mappings, like between line segments and magnitudes. This allows geometric problems to be posed and solved in algebraic terms, and vice versa. The Cartesian plane might seem commonsensical to us, and for some the question arises why it took so long to come up with and why the Greeks did not have it. My understanding is that the answer lies in the differing Greek conception of arithmetic and geometry.

The Greeks distinguish ărĭthmós (ἀριθμός) and mégethos (μέγεθος), which loosely correspond to arithmetic and geometry respectively. Ărĭthmós is close to what we think of as number and is given in terms of aggregates of indistinguishable monads (μονάς) and ratios between them. Mégethos, on the other hand, corresponds to continuous quantities in geometry, like length, area, and volume. Magnitudes can be compared through ratios. This is why Euclid does not give you a formula for the area of a circle, but rather proves that if the diameters of two circles stand in the ratio a:b, the circles themselves stand in the ratio a²:b² (the proof is Elements XII.2). The passage between ărĭthmós and mégethos is through the synthetic-constructive method, which requires the straightedge-and-compass approach to construct these ratios. Most famously this fails in the case of the square root of 2: while it can be constructed geometrically, there is no ratio of monads that can express it. This differs from the more computational strand in Arabic and Indian mathematics, which is more cavalier about square roots, or at least not as worried as the Greeks were. Descartes' approach, compared to the Greeks, is to "naively" pose the equivalence at the beginning, without working out this constructive approach in detail. This naive and intuitive theory of numbers would lead to the modern theory of the reals in analysis. I think it is fair to say that after Descartes, the characteristic of modern mathematics is that the distinction between ărĭthmós and mégethos in the Greek sense has been annulled in a more univocal understanding of number. Since ărĭthmós and mégethos refer to discrete and continuous magnitude respectively, it is perhaps not wrong to say that, from the point of view of the isomorphism between algebra and geometry incarnated by Descartes, the new theory of number is indifferent to the distinction between the discrete and the continuous.
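The incommensurability of the square root of 2 can be put computationally, though only as illustration, never as proof: a bounded search (the bound is my own choice) for a ratio of whole "monads" whose square is 2 always comes back empty.

```python
from math import isqrt

def ratio_squaring_to_2(bound):
    """Search all denominators q up to `bound` for whole numbers p, q
    with p^2 = 2 * q^2, i.e. a ratio p:q expressing the square root of 2.
    The Greek discovery is that no such ratio exists at any bound."""
    for q in range(1, bound + 1):
        p = isqrt(2 * q * q)          # integer square root: the only candidate p
        if p * p == 2 * q * q:
            return (p, q)
    return None

print(ratio_squaring_to_2(10_000))    # None: the search finds nothing
```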

Salomon Bochner makes an interesting point in Role of Mathematics in the Rise of Science: the Greek attempt to ground the theory of number on a sound basis first, accomplished in Eudoxus' theory of ratios, ended up paralyzing Greek mathematics, while modern mathematicians were fine with having a poor theory of real numbers until it was put on a proper foundation by Cauchy and Weierstrass. Eudoxus' theory of ratios allows the rigorous comparison of continuous magnitudes (including incommensurable ones) by defining equality and inequality relations between ratios of magnitudes.
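Eudoxus' criterion (Elements V, definition 5) is strikingly algorithmic: a:b and c:d are equal just in case every pair of multipliers m, n orders m·a against n·b exactly as it orders m·c against n·d. Here is a bounded Python sketch of the test (the universal claim over all m, n obviously cannot be checked by a finite search, and the bound is mine):

```python
def eudoxus_equal(a, b, c, d, bound=50):
    """Eudoxan test that a:b = c:d: for all multipliers m, n up to `bound`,
    m*a vs n*b must compare the same way as m*c vs n*d."""
    sign = lambda x, y: (x > y) - (x < y)   # -1, 0, or 1
    return all(sign(m * a, n * b) == sign(m * c, n * d)
               for m in range(1, bound + 1)
               for n in range(1, bound + 1))

print(eudoxus_equal(2, 3, 4, 6))  # True: 2:3 and 4:6 are the same ratio
print(eudoxus_equal(2, 3, 3, 4))  # False: e.g. 3*2 = 2*3 but 3*3 > 2*4
```

What makes the definition powerful is that a, b, c, d need not be numbers at all, only magnitudes that can be multiplied by integers and compared.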

The final case that I have in mind is not in mathematics proper but in physics: the distinction between "dynamical", "phenomenological" and "geometrokinetic" laws. I am indebted to Julian Barbour's Machian history of physics, The Discovery of Dynamics, for this distinction, which comes from his study of the history of quantitative astronomy. A law is geometrokinetic if it "involves the bare minimum of concepts and is aphysical in that physical concepts such as mass or charge do not appear in it."5 This is contrasted with dynamical laws, which "cannot be formulated without the introduction of essentially physical concepts such as mass and charge." These concepts, Barbour tells us, "go beyond the purely kinematic concepts of space and time and can in no way be derived from them."6 The exemplars of geometrokinetic law are Eudoxus, Hipparchus and Ptolemy, the last of whom is the cream of the crop of geocentric astronomy. Even Copernicus was Ptolemaic in that his system was un-physical in many respects, given that the center of the planets' orbits was not the sun but a point some distance from it. Kepler was one of the first to grope toward a more dynamical interpretation of astronomy. He did this by deciding that the movement of the planets was not merely a geometric matter, but that some force radiating from the sun determined their movement. Here he was influenced by William Gilbert's account of magnetic forces. This would be the reason he would declare that the sun is at one of the foci of a planet's elliptical orbit. What is interesting, though, is that from the point of view of mathematical formalism there is no difference between a geometrokinetic and a dynamical law. And while the geometrokinetic law has today lost its importance (though perhaps it has returned in relativity, where gravity is no longer a force but a pseudo-force), we now have a parallel distinction between phenomenological and "dynamical" laws.

If I were to return to my earlier discussion of the function in mathematics, I think that the formalist reconstruction of "function" proceeds in this way: there are successive overlaid structures A > B > C > D ... > Z > ..., where the later terms are overlaid on top of the earlier. Mathematical work moves up this tower from A to B, C, D ... or down from the later terms to the earlier. It is not that B is deducible from A axiomatically, but the new mathematical formalisms within B presuppose those of A to be meaningful. We can say that B, C, D ... are all determined by A, but that from a wholly formal point of view, B, C, D ... are not part of the determinacies of A. I think one good example is the classification of topological spaces. I understand (and I may be wrong here) that you can start with a set X, associate the minimal topological axioms with it, and then add properties as you go (the Hausdorff property, then the existence of a metric, then a specific metric) until you determine that what you have are the real numbers. And you can take the reals and either vary their properties (e.g. changing the metric function) or eliminate structure until all you have is a set X.
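The tower can be caricatured in code: each layer inherits the one beneath it and adds vocabulary the lower layer knows nothing about. The class names and the drastic simplifications are mine, a toy sketch rather than any real formalization:

```python
class Carrier:
    """Layer A: a bare set, nothing more."""
    def __init__(self, elements):
        self.elements = elements

class TopologicalSpace(Carrier):
    """Layer B: overlays a family of open sets; presupposes the carrier."""
    def __init__(self, elements, opens):
        super().__init__(elements)
        self.opens = opens

class MetricSpace(TopologicalSpace):
    """Layer C: overlays a metric, whose open balls would induce the
    topology (the induction is elided here)."""
    def __init__(self, elements, metric):
        super().__init__(elements, opens="induced by the metric")
        self.metric = metric

# "The reals" at the metric layer; stripping attributes walks back down the tower.
R = MetricSpace("the real numbers", metric=lambda x, y: abs(x - y))
print(isinstance(R, TopologicalSpace), isinstance(R, Carrier))  # True True
```

Each layer merely presupposes the one below it, which is the sense in which the lower layer is formally indifferent to the upper ones.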

We can very well, from the point of view of mathematics, identify a function with the Cartesian-product definition, but I think it makes sense to talk about a philosophical concept of the function. In mathematical experience (as opposed to mathematics), we could perhaps talk about an intuition of the function that subsumes all the levels of the function-tower into a singular. As a "concrete universal" (to use Hegel's vocabulary) it would stereoscopically include the entire tower A > B > C > ... > Z > ... in a single intuition without dissolving the distinctions between A, B, C, ..., Z ..., preserving them instead. The same goes for the Greek distinction between ărĭthmós and mégethos, where from the point of view of mathematics the distinction between discrete and continuous magnitude has been annulled in Descartes' "naive" identification (or rather, it has been made a secondary modification, such that you can study discrete mathematics or real analysis in isolation, but in no case are you worrying about the distinction between discrete and continuous and how to get from one to the other). Even if there is an isomorphism between the two that cancels this difference from the point of view of mathematics, from the view of philosophy there is a conceptual difference between them that is preserved in and through this cancellation. And this is present not in mathematics but in this "mathematical experience" I have in mind.

I put "mathematical experience" in quotes because this language is too phenomenological and pragmatic. Strictly speaking, if I were to follow Hegel, it would have to be more logical (at least in the Hegelian sense). Paul Redding's excellent book Conceptual Harmonies, which I've read in the meantime since my letter, poses it very interestingly in terms of homeomorphisms, which he uses to understand Hegel's infamous "identity-in-difference". As a logician and analytic philosopher his interest is in logic, so he considers both the extensional and intensional accounts of logic, and the twentieth-century attempt to cash out the intensional in terms of the extensional (which we see in Frege and his successors). While Redding sees Hegel as wanting to preserve the specificity of both the intensional side (universal subsumed under particular) and the extensional (particular subsumed under universal), there is a "homeomorphism" between the two that preserves this identity-in-difference. Redding sees a particular tradition of logic (which he finds in Boole, and also in modal and temporal logic) as being Hegelian in a way, and contrasts it with the more prominent analytic reading of Hegel found in Robert Brandom, which assimilates Hegel into post-Fregean logic.

This is perhaps due to their different focuses when it comes to Hegel's work: Brandom focuses on the Phenomenology of Spirit (and so foregrounds recognition and the game of reasons), while Redding focuses on the Science of Logic, especially its unjustly unread Subjective Logic. Where I disagree with Redding's reading of Hegel is that I think Hegel's logic cannot be identified with formal logic proper, and that this is its strength, insofar as the conceptual distinctions proper to philosophy are precisely those which are not articulated formally. (Bassler himself suggests that category theory in mathematics is closer to what Hegel means by the Objective Logic in the Science of Logic than anything in formal logic, in the vein of F. William Lawvere's category-theoretic formulation of it.) But then it is more difficult to account for what is logical in the sense of logic that Hegel uses, and in the sense I am trying to appropriate for my own ends here. I think that O. Bradley Bassler's Hegel's Jena Moment, a gloss on Hegel's 1804/5 Jena System, has a sketch of why, though Bassler's book really is quite difficult. In any case I use the term "mathematical experience" because I do not think that what I am talking about is external reflection on mathematics, or that mathematicians absolutely must read Hegel, since it functions, I think, within the working of mathematics itself.

Back to my earlier discussion of the tower, it seems to me that even if formally it is a tower, philosophically it ought to close itself up as a circle. The concept of A would not be indifferent to B, C, D ... but rather include them under its own determinacies.

Mathematical practice is traffic between two sides connected by bridges and unified under some isomorphism that annuls the difference between both sides on a formal level which is nevertheless recovered on a conceptual level. The universal that comprises both sides subdivides itself into regions and is distributed amongst its particulars through which it re-composes itself within this logical space. When it comes to continuous and discrete magnitude, for example, in the "Simple Connection" of Hegel's Jena Logic, they come underneath the category of Quantum.7

To return to the discussion of physics. From a formal point of view, there is nothing immanent in formalism that would necessarily tell us how to distinguish between phenomenological and dynamical laws. The general idea that physicists tend to have is that phenomenological laws can be reduced to dynamical ones that stand behind the phenomena. This is even if the work is an infinite one that cannot be accomplished and has to proceed through the introduction of ad hoc hypotheses.

Here a useful interlocutor is Hegel's dissertation, the De orbitis planetarum ("On the Orbits of the Planets"). Paul Redding has shown quite clearly how mistaken the representations of it as obscurantist or anti-physics are; the interested reader is encouraged to check out Redding's "Hegel's Dissertation on the Orbits of the Planets." Hegel's dissertation defends Kepler against Newton, or rather, while Hegel accepts the correctness of Newton's theory of universal gravitation, he nevertheless affirms the Keplerian approach to science over and above the Newtonian. Kepler's three laws provide a phenomenological theory of planetary motion, while Newton's theory of universal gravitation applies to any system whatsoever. Hegel's line of argument is to claim that Newton's derivation of ellipses from the inverse square law is wrong. More interesting is his claim that Kepler's ellipses describe actual motions in a sense that Newton's constructions do not. I think that for all Redding's excellent reading, he turns Hegel too much into an empiricist. If, as Redding claims, "while Kepler's Laws were empirically based, Newton's were not as they relied on abstract entities that could not be justified empirically,"8 then what separates Hegel's critique from that of Ptolemy, or from Kepler's contemporary Tycho Brahe?

Ptolemy and Brahe were empiricists: the former attempted to "save the phenomena", and the latter came up with the Tychonic system (which has the planets revolving around the sun, and the sun revolving around the Earth). Redding makes it out to be a dualism between concrete ellipses and abstract Newtonian parallelograms of force. While there is something to this, I think the more important point is that ellipses and force vectors are both mathematical objects, and both "abstract" in that way, and that the division we are concerned with is not that between mathematical and physical objects but one within mathematical objects. Or, to be precise, the division between the mathematical (vector) and the physical (ellipse) is something immanent to mathematics itself, intra-mathematical. I think that perhaps we should be thinking of Hegel as attempting to account for the identity-in-difference of the physical and the mathematical, the phenomenological and the dynamical, which has a conceptual articulation that is effaced by Newton and his method.

This cannot be disconnected from Hegel's critique of Kant's transcendental philosophy. The "Force and the Understanding" chapter in the Phenomenology of Spirit is a tour-de-force against both the Newtonian account of force (not the mathematical formalism but the metaphysics on top of it) and the Kantian account of the thing-in-itself. It is useful to put Hegel in the context of the Newtonianism of his day, and Kenneth R. Westphal has an admirable paper on it, "Hegel, Philosophy, and Mathematical Physics". Hegel's time, as Westphal shows, was full of attempts to provide an a priori proof or derivation of Newton's inverse square law, attempted by scientists but also by philosophers like Kant. Hegel's response is more modern, as Westphal points out: "natural phenomena could instantiate any mathematical function whatsoever, or even none at all." Westphal says that Hegel's critique of Newton is that Newton derives the ellipse by considering a discrete series of impulses applied to a body in rectilinear motion, which in the limit becomes an ellipse, but that these impulses are not real. Hegel himself prefers Newton's successors, who use the calculus for this same derivation.

My impression is that we can see the same "tower" structure as in mathematics: A > B, where A is the dynamical base that applies to any-system-whatsoever, and B is the layer of phenomenological laws specified in a particular system. It would be un-Hegelian to say that A is the essence and B the contingent appearance, or that B is the only real component while A is some contingent back-formation (as in a phenomenalist approach). If Hegel is a phenomenalist, it is in a sense closer to the classical one in which Mach is. Mach too, instead of hoping to reduce all science to some unitary principle of physics, thinks of a nested hierarchy of sciences that is self-reinforcing, such that a theory of mechanics would include not only the phenomena of physics experiments but also the data of psychology, biology and so on.

I don't accept a pure relationalism about science, especially given where it led Ernst Mach: to the rejection of both atoms and the theory of relativity. This is where I part ways with the ontic structural realists, since they consider a tower of relations (physics, chemistry, the "special sciences" ...) while keeping the form of structure invariant, whereas form has to be appropriate to its content without being indifferent to it. The analogue in mathematics of pure relationalism is category-theoreticism. Category theory is relevant to the earlier discussions of identity and isomorphism, since it allows us to study relations that are identities to greater and lesser degrees of strictness. The next step would be an intuition that folds between some M and N all the various kinds of identity and non-identity between them. This is perhaps why Bassler identifies category theory with the Objective Logic, which offers a destruction of classical metaphysics and transcendental philosophy; but left behind is the Subjective Logic and the thinking of the concept.

More broadly, while category theory has great pedagogical value, my understanding is that presenting mathematics category-first is not how to do mathematics. It is possible to present group theory as the case of a category with a single object in which every morphism is invertible. It is also possible to present group theory as rooted in the study of symmetries and of various problems in geometry, algebra, and number theory. While the former begins from the deep structure of whatever domain we are looking at, my impression is that new developments in group theory invariably come from the latter. Categories themselves originally came from problems that motivate natural transformations, though introductions to category theory don't start from there. There are recent texts that present category theory to an audience mostly innocent of mathematics (I have in mind texts by F. William Lawvere, Eugenia Cheng, Daniel Rosiak) and present it excellently, and I personally like and enjoy them. I don't think, however, that these books can turn one into a mathematician (at least in isolation); rather, they present a transcendental pedagogical presentation of mathematics of use to philosophers, students and the interested technically-capable layman.
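The one-object presentation can be made concrete in a few lines; here Z/3Z, with composition as addition mod 3 (the encoding is my own illustration, not any library's API):

```python
# A group as a one-object category: the morphisms are the elements of Z/3Z,
# composition is addition mod 3, the identity morphism is 0, and what makes
# this category a group is that every morphism has an inverse.
n = 3
morphisms = range(n)
compose = lambda f, g: (f + g) % n
identity = 0

# Category laws: unit and associativity.
assert all(compose(f, identity) == f == compose(identity, f) for f in morphisms)
assert all(compose(f, compose(g, h)) == compose(compose(f, g), h)
           for f in morphisms for g in morphisms for h in morphisms)

# Grouphood: each morphism has an inverse.
inverses = {f: next(g for g in morphisms if compose(f, g) == identity)
            for f in morphisms}
print(inverses)  # {0: 0, 1: 2, 2: 1}
```

Nothing here hints at the symmetries of anything; that motivating content has to be supplied from the other direction.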

If there's a transition from the Objective Logic, my supposition is that it might as well be a field of applied mathematics. If the twentieth century was the century of foundations and formalization (Russell, Whitehead, Bourbaki, category theory), then twenty-first-century mathematics is where the distinction between applied and pure mathematics is being abolished (Redding points out how Hegel, unlike Schelling, does not see a distinction between the two), given the focus on calculation and computation. We can take not only computer science but also linear algebra and optimization, as well as the study of dynamical systems, as exemplars. Despite my proximity to this (and to a particular kind of intuitionism; even my account of functions is close to the lambda-calculus account), I differ from the ultimately constructivist move insofar as I do not want to vacate Cantor's paradise. In any case, to reject the infinite the infinite has to be invoked in the first place, and so there has to be a sense, and a philosophical meaning, appropriate to it. Finitist, constructivist, even ultrafinitist approaches in mathematics are important insofar as they enrich their classical counterparts and delineate more and more carefully the place of the infinite in mathematics.


  1. Naming Infinity: A True Story of Religious Mysticism and Mathematical Creativity, by Loren Graham and Jean-Michel Kantor, pg. 53 

  2. Introduction to Analysis of the Infinite, Leonhard Euler 

  3. Foundations of Differential Calculus, Leonhard Euler 

  4. Conceptual Mathematics: A First Introduction to Categories, by F. William Lawvere and Stephen H. Schanuel, pg. 41-42, 87 

  5. The Discovery of Dynamics: A Study from a Machian Point of View of the Discovery and the Structure of Dynamical Theories, by Julian Barbour, pg. 52 

  6. Ibid., pg. 53. As befits his Machian orientation, and given Kant's influence on Mach, one gets the impression that we are seeing here the dualism between the forms of pure intuition and the pure concepts of the understanding. For Kant, "force" would be a concept of the Understanding, while perhaps geometrokinetic laws could be synthesized in intuition by the faculty of the imagination without reference to these. Given that pre-modern astronomy, while immensely sophisticated, amounts to simply "saving the phenomena" of empirical observations in an un-theoretical way, perhaps we can say that similar approaches today, like machine learning, are Ptolemaic in a similar way. An argument along these lines is presented in the essay "The Computer: Ruin of Science and Threat to Mankind" (1980/1982) by Clifford Truesdell. 

  7. "Quantum" in the sense of the quantitized, not in the sense of quantum physics! 

  8. "Hegel's Dissertation on the Orbits of the Planets" (Chapter 6), in Hegel's Philosophy of Nature 

