I don’t usually do purely mathematical posts, but this topic really fascinated me and I want to share it.

If you think about it, math is a formal language we created to help us describe the world. It is extremely rigorous, and, very importantly, it seems to make sense to us.

In the early 1900s, Bertrand Russell and Alfred North Whitehead worked on Principia Mathematica, an attempt to rigorously prove what we consider fundamental truths in math. For example, the proposition behind 1 + 1 = 2 doesn't show up until more than 360 pages in! That is way more rigour than something so obvious seems to deserve. But it is the perfect segue for me to introduce the terms syntactics and semantics.

Semantics is anything that deals in meanings – interpretations, observations, what a statement says about the world – as opposed to deriving one truth from another purely through mathematical rules. The opposite, deriving new truths from established ones by mechanically applying rigorously checked rules without interpreting the statements at all, is to deal in the syntactics.
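
To make the contrast concrete, here is a small sketch in Lean (my choice of tool here, not something from the original discussion). The first proof establishes p ∧ q → q ∧ p syntactically, by assembling a proof term out of inference rules; the second establishes the Boolean analogue semantically, by checking every row of the truth table.

```lean
-- Syntactic route: construct a proof term using only inference rules
-- (destructure the And, rebuild it in the other order).
example (p q : Prop) : p ∧ q → q ∧ p :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩

-- Semantic route: for Booleans, just check all four truth-table rows.
example : ∀ p q : Bool, (p && q) = (q && p) := by
  intro p q
  cases p <;> cases q <;> rfl
```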

This highlights the contrast between what we find obvious and what Russell and Whitehead devoted over 360 pages to. It may be obvious to us – semantically – that 1 + 1 = 2, but to a mathematician, a pressing question is whether that truth can be reached by taking some even more fundamental truths and transforming them using mathematical rules – that is, syntactically.
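
Modern proof assistants make this tangible. As a sketch (not Principia's actual derivation, which starts from pure logic), here is a Peano-style construction in Lean: define the natural numbers from zero and a successor, define addition, and 1 + 1 = 2 falls out by unfolding the definitions.

```lean
-- Peano-style naturals built from scratch: a zero and a successor.
inductive N where
  | zero : N
  | succ : N → N

-- Addition defined by recursion on the second argument.
def add : N → N → N
  | a, N.zero   => a
  | a, N.succ b => N.succ (add a b)

def one : N := N.succ N.zero
def two : N := N.succ one

-- 1 + 1 = 2 holds purely by computation on the definitions above.
theorem one_plus_one : add one one = two := rfl
```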

We want the semantics and the syntactics to agree with each other. If we were able to disprove 1 + 1 = 2 syntactically, it would mean that our mathematical rules are broken, and that other propositions we thought were true might not be. We also want to be able to conclude decisively, starting only from fundamental mathematical truths, that 1 + 1 = 2. If we cannot, it means that our language is not powerful enough to model the real world.

These properties of a system are called soundness and completeness respectively. We want math to be both sound and complete: we want to prove only true statements, and we want to be able to prove all true statements.
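
Stated abstractly, the two properties are just the two directions of an implication between provability (the syntactic side) and validity (the semantic side). Here is a sketch in Lean, where Provable and Valid are placeholder relations of my own, not definitions from any particular logic:

```lean
-- Sound: everything we can prove is actually true.
def Sound (Formula : Type) (Provable Valid : Formula → Prop) : Prop :=
  ∀ φ, Provable φ → Valid φ

-- Complete: everything that is true can be proved.
def Complete (Formula : Type) (Provable Valid : Formula → Prop) : Prop :=
  ∀ φ, Valid φ → Provable φ
```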

A related idea is that of syntactic equivalence and semantic equivalence. Two statements are syntactically equivalent if we can reach either one from the other using only mathematical rules, never interpreting the statements at any point. Correspondingly, two statements are semantically equivalent if they mean the same thing – they come out true in exactly the same interpretations.
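
De Morgan's law is a nice example of both. ¬(p ∧ q) and ¬p ∨ ¬q are semantically equivalent, since they agree on every row of the truth table, and – as a sketch in Lean – each can also be derived from the other syntactically:

```lean
-- Semantic equivalence: the two Boolean forms agree on all four rows.
example : ∀ p q : Bool, (!(p && q)) = (!p || !q) := by
  intro p q
  cases p <;> cases q <;> rfl

-- Syntactic equivalence: derive each direction with inference rules
-- (one direction needs a classical case split on p).
example (p q : Prop) : ¬(p ∧ q) ↔ ¬p ∨ ¬q := by
  constructor
  · intro h
    by_cases hp : p
    · exact Or.inr fun hq => h ⟨hp, hq⟩
    · exact Or.inl hp
  · intro h hpq
    cases h with
    | inl hp => exact hp hpq.1
    | inr hq => exact hq hpq.2
```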

We want semantic equivalence to entail syntactic equivalence (completeness) and vice versa (soundness). Even though they are different ideas, they are easy to confuse, because in a good syntactic system such as math the two go hand in hand. Math feels like it is sound and complete.

I just said that math feels like it is sound and complete, but mathematicians are crazy: Gödel's incompleteness theorems show that any consistent system powerful enough to express arithmetic cannot be complete, and cannot prove its own consistency either. I first heard about these ideas in this Veritasium video. As smart as mathematicians are, I feel like they created a footgun for themselves with this.

In any case, math isn’t just hocus pocus. It gives us so many interesting things, such as the idea of syntactics and semantics. Knowing the distinction between the two gives us words to describe systems and reason about their correctness. The system in question need not always be math. It could be a programming language, a state machine, computer hardware, etc.