Recently, there has been [some discussion](https://hci.social/@chrisamaphone/113753758271682405) around the concepts of "synthetic" and "analytic" methods in mathematics and the formal sciences. In the course of throwing my hat into the ring of these discussions, I found I had much to say on the topic, and so have collected my thoughts into this post. To begin with, then, let us ask a basic question.

# What is Synthetic Mathematics?

Per [the nLab](https://ncatlab.org/nlab/show/synthetic+mathematics), a "synthetic" approach to mathematical theories is one which prioritizes the axiomatic description of structures, as opposed to more traditional "analytic" approaches in which these structures are built out of primitive mathematical objects. In the typical set-theoretic foundations of modern mathematics, this latter approach amounts to constructing one's intended objects of study out of sets, and studying the properties of the particular sets so constructed. By contrast, in a synthetic approach, one directly axiomatizes the essential properties of one's intended objects of study and works from there to derive consequences of these axioms.

The go-to example for those seeking to explain the difference between analytic and synthetic mathematics is geometry. Euclidean geometry is the classic example of a *synthetic* theory of geometry, in that it defines the fundamental concepts "point," "line," etc., of geometry not by constructing them out of more basic building blocks, but instead by stating axioms taken to be characteristic of these concepts. On the other hand, in Cartesian (plane) geometry, points are defined as ordered pairs of real numbers, lines are defined as solution sets of linear equations, etc.

On the face of it, then, the analytic approach may seem strictly superior to the synthetic. Was not Cartesian geometry a great advance over Euclidean geometry, one that paved the way for novel developments in algebraic geometry and beyond? Or perhaps one may question the robustness of the distinction between analytic and synthetic itself ([one would not be the first to do so](https://en.wikipedia.org/wiki/Two_Dogmas_of_Empiricism)). After all, even though Cartesian geometry analyzed Euclidean geometry in terms of the arithmetic of real numbers, it left the concept of "real number" unanalyzed, a state in which it would remain until the late 19th century, with Dedekind's construction of the real numbers as Dedekind cuts and Cantor's definition of real numbers as equivalence classes of Cauchy sequences. In both of these cases, the previously "synthetic" theory of real arithmetic was analyzed in terms of set theory. But then again, the frameworks of set theory into which these latter concepts were analyzed were also axiomatic, which (despite some reformulation in the face of the paradoxes of naïve set theory) they remain to this day. Is this not a case, then, of synthetic turtles all the way down?

# Why Synthetic Mathematics?

First of all, I don't think "analytic" and "synthetic" – as applied to mathematics – should be taken as absolute terms, but rather as denoting concepts *relative* to a particular level of abstraction. The usual set-theoretic definition of a topological space, for example, can be seen, on the one hand, as a "synthetic" characterization of objects like the set of real numbers, or, on the other, as an "analytic" definition of the whole *category* (in both the formal and informal sense) of topological spaces.
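To make this dual reading concrete, here is a minimal sketch in Lean 4 of the standard definition of a topological space. No libraries are assumed; the names (`TopSpace`, `IsOpen`, etc.) are my own rather than Mathlib's, and subsets are coded as predicates to keep the sketch self-contained:

```lean
-- A minimal sketch (Lean 4, no imports): the usual set-theoretic
-- definition of a topological space, with subsets coded as predicates.
structure TopSpace where
  carrier : Type
  -- which "subsets" (predicates) of the carrier count as open
  IsOpen : (carrier → Prop) → Prop
  -- the whole space is open
  isOpen_univ : IsOpen (fun _ => True)
  -- open sets are closed under binary intersection
  isOpen_inter : ∀ s t, IsOpen s → IsOpen t →
    IsOpen (fun x => s x ∧ t x)
  -- open sets are closed under arbitrary unions of families
  isOpen_unionF : ∀ F : (carrier → Prop) → Prop,
    (∀ s, F s → IsOpen s) → IsOpen (fun x => ∃ s, F s ∧ s x)
```

Read "downward," this analyzes the notion of space into set-like data and thereby pins down particular spaces such as the real line; read "upward," it is precisely the interface that the objects of any sufficiently "spatial" category are expected to implement.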
Likewise, the definition of a ring in abstract algebra can be seen, on the one hand, as a synthetic theory of arithmetic, and on the other hand as an analytic definition of the *category* of rings, etc. From this perspective, the main point of distinguishing "analytic" from "synthetic" is that it requires us to be precise about the level of abstraction we are working at. Are we interested in proving a fact about "the" category of topological spaces in ZFC, or in proving something about any category whose objects are sufficiently *spatial*? And even if we are interested in ultimately proving a fact specifically about topological spaces in ZFC, might a cleaner/simpler proof be possible if we first abstract from this "concrete" setting to one that captures certain essential details of our intended model?

Roughly speaking, a synthetic definition is an *interface,* while an analytic definition is an *implementation.* The importance of synthetic methods is then just an instance of the well-known principle in Computer Science that working via *interfaces* rather than directly on *implementations* is the key to proper abstraction, modularity, and simplicity.

Now, in the context in which it is usually used in modern parlance, "synthetic mathematics" means applying this distinction at the level of *foundations themselves,* so that, rather than targeting one specific mathematical foundation (e.g. sets), we define an interface for a whole *class* of mathematical foundations that reflect the essential characteristics of the "mathematical universes" in which our intended objects of study live. This makes this sort of synthetic mathematics a kind of metalinguistic abstraction for mathematics. Of course, the ability to define and reason about such interfaces for foundations still supposes some ambient foundational framework of logic, etc. However, experience teaches that the desiderata for such a "foundation of foundations" are rather different from those of "first-order" foundational systems themselves. I'm reminded of the old joke about Lisp – that it's a horrible language to write normal programs in, but a great language to define a language that's very good for writing the programs you want. So synthetic mathematics does not do away with our current foundations so much as it *contextualizes* them, by moving to a higher level of abstraction in which to define interfaces for foundations themselves.

Now, to return to the main question of this post: when is such a high level of abstraction useful? Paradoxically (or perhaps [*dialectically*](https://ncatlab.org/nlab/show/adjoint+modality)), I think high-level metalinguistic abstraction of this sort is useful at opposite ends of our level of sophistication in understanding mathematical concepts. On the one hand, when we initially have only an informal or ill-specified idea of some concept we seek to cast in mathematical form, shifting our focus from "what is this concept like in itself?" to "what must the mathematical universe be like in order for this concept to make sense?" can allow us to make headway on precisifying the concept (bringing this back to the philosophical roots of the analytic/synthetic distinction, this shift has a distinctly [*Kantian*](https://en.wikipedia.org/wiki/Copernican_Revolution#Metaphorical_usage) flavor). Once we have an interface for languages capable of expressing this concept, this then gives us access to the "low-hanging fruit" theorems & constructions that follow more-or-less immediately.
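As a toy illustration of the interface/implementation reading and the low-hanging fruit it yields, here is a Lean 4 sketch of a small fragment of the ring interface, a theorem proved once against that interface, and the integers as one implementation. Again the names (`AddCommMonoidish`, `add_zero'`) are hypothetical, chosen to avoid clashing with any library:

```lean
-- A fragment of the ring interface, presented synthetically as a class.
class AddCommMonoidish (R : Type) where
  add  : R → R → R
  zero : R
  add_comm : ∀ a b : R, add a b = add b a
  zero_add : ∀ a : R, add zero a = a

open AddCommMonoidish

-- "Low-hanging fruit": proved once against the interface, this holds
-- automatically in every implementation.
theorem add_zero' {R : Type} [AddCommMonoidish R] (a : R) :
    add a zero = a := by
  rw [add_comm, zero_add]

-- An analytic implementation: the integers satisfy the interface.
instance : AddCommMonoidish Int where
  add      := Int.add
  zero     := 0
  add_comm := Int.add_comm
  zero_add := Int.zero_add

-- The generic theorem now applies to the concrete model for free.
example (n : Int) : Int.add n 0 = n := add_zero' n
```

The `example` at the end is the payoff: a fact proved once at the level of the interface specializes to the integers (or to polynomials, matrices, etc.) with no further work.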
However, once the low-hanging fruit has been picked over, we shall eventually seek to understand how this concept manifests itself in particular instances. And this shall lead us to increasingly complex analytic constructions, which exhibit new patterns and concepts seeking clarification. Then once again, isolating these novel concepts and axiomatizing the language necessary to express them can help manage this complexity, essentially by making what previously may have been difficult theorems/constructions in one language into low-hanging fruit in another. This, as I see it, is the fundamental interplay of analytic and synthetic notions at the heart of mathematics, logic, and formal science.

Notably, there are certain instances at the frontiers of subjects like [HoTT](https://homotopytypetheory.org/book/) where such synthetic methods are seemingly *required* to capture certain central concepts in (e.g.) higher category theory. For instance, there is no known way of defining (semi)simplicial types, ∞-categories, etc., internally in terms of the ∞-groupoid structure on types in HoTT, and all extant solutions to this problem proceed by creating a synthetic type-theoretic language in which some such higher-categorical structures are taken as primitive (cf. [this paper](https://arxiv.org/abs/2311.18781) and [this paper](https://arxiv.org/abs/1705.07442)).

I'll close with this quote (one of my all-time favorites) from Pierre Deligne, about the mathematical method of Alexander Grothendieck:

> *"From [Grothendieck] I have learned not to take glory in the difficulty of a proof: difficulty means we have not understood. The idea is to be able to paint a landscape in which the proof is obvious."*