What in the Actual Fuck Is the Soundness Theorem in Metalogic? (Explained to Myself like I’m a Dumbass): Part 1

Before you attempt to understand this article, make sure you first read my previous article, “What the Fuck Does “Proof” Even Mean? (And Why It Doesn’t Guarantee Truth).”

I’m going to attempt the impossible in this post: explain something that confused me and pissed me off so much in grad school that it became one of the (many) reasons I quit the program. But it’s been a decade since I left, and now that there’s no pressure to publish papers, meet deadlines, and pass the damn courses, it’s time to revisit this shit with a vengeance.

Metalogic — what in the actual fuck does that even mean?

We’ll lay down some foundational metalogic concepts first so we can tackle the soundness theorem within what’s called the Hilbert system, not just as a vague example, but with as much precision as possible—while translating every piece of symbol garbage into normal human English.

The calculator analogy

Say you have a calculator. You punch in:

2 + 2

and it spits out:

4

Cool. That’s just doing math.

That’s like doing a normal logic problem: you follow rules → you get an answer.

But metalogic asks something way more annoying:

“How do you know the calculator won’t randomly give you 5 or 10 or 17 one day?”

Not just for 2 + 2.

But for:

  • 17 + 83
  • 999 + 1
  • ANY numbers you plug in

So we’re not checking answers.

We’re checking whether the system itself ever screws up.

What we need to show

To prove the system is reliable, we need four things:

  1. Premises
  2. Axioms
  3. Rule Case
  4. Inductive Step

I’ll explain what the hell these things mean using analogies along the way.

(1) Starting assumptions (premises)

These are just things you’re told to start with.

Math analogy:

You get a math problem. It says assume that x = 5.

You don’t question the math problem and say shit like, “How do you know that x = 5? Can you prove it? No? Then shut the fuck up.”

Instead, you say, “Cool, let’s just say that x = 5 is true.”

Chess analogy:

Say that you’re playing a game of chess. You’re given a starting position: the white queen is on one square, the black queen is on another, the rooks are in the corners, and so on. You don’t say, “How do we know that’s where these pieces begin?” You just assume those positions are correct and move on.

Logic example:

Statement 1: If you eat Taco Bell, you’ll have diarrhea.

Statement 2: You eat Taco Bell.

…you don’t question whether these two statements are true. You assume they’re true for the sake of argument. If we want to add a few scary logic symbols, we write:

Γ = { Taco Bell, Taco Bell → diarrhea }

Translation (without the hieroglyphics)

Γ (the big scary Greek letter gamma) just means:

“the set of starting assumptions”

So this is just a fancy way of saying:

“Here are the things we’re starting with. This is like ‘assume x = 5’ in a math problem or a given starting position in a game of chess. Don’t question them (for now).”

Even simpler

Γ =

  • Taco Bell
  • Taco Bell → diarrhea (i.e., if Taco Bell, then diarrhea)

That’s it.

No magic.

Important clarification (so nobody loses their mind)

We are NOT saying:

“Taco Bell is actually true in real life”

We are saying:

“Assume it’s true. Now let’s see what follows.”

One-line takeaway

Premises are just the starting point of the game—not guaranteed truth, just assumed truth.

(2) It starts with stuff that can’t be wrong (axioms)

Axioms are different from premises.

Premises are just assumptions you start with.

Axioms are patterns that come out true no matter what you plug into them.

Math analogy:

x = x.

No matter what number you plug in for x, it always comes out true. There is no situation where a thing is not equal to itself.

Chess analogy:

There are some things built into the chess system itself. One piece can’t occupy two squares at once. Two pieces can’t occupy the same square. If that happened, the game would already be broken before it even began.

Now here’s where my past self would start getting pissed off:

“Why the fuck are we allowed to just say this can’t be wrong? Aren’t we just assuming that?”

That sounds like what’s happening—but that’s not actually what’s going on. In logic and metalogic, the answer comes down to what counts as a possible situation.

What the hell is a “possible situation”?

In logic, a “possible situation” does NOT mean:

  • some weird sci-fi universe
  • or “anything you can imagine”

It means something much more boring:

a way of assigning true or false to statements

That’s it.

That’s what truth tables are doing.

So we’re not assuming—it’s more like checking every possible truth assignment

Take a typical logical axiom:

“If A, then (if B, then A)”

Now we check ALL possible truth assignments:

A | B | B→A | A→(B→A)
T | T |  T  |    T
T | F |  T  |    T
F | T |  F  |    T
F | F |  T  |    T

This means:

  • If A = true and B = true, then A→(B→A) is true.
  • If A = true and B = false, A→(B→A) is still true.
  • If A = false and B = true, A→(B→A) is true yet again.
  • If A = false and B = false, A→(B→A) is true too.

We’ve exhausted all the possible situations, or all possible truth assignments. Whatever true/false shit you might assign A and B, A→(B→A) always comes out true.

In other words:

Does this thing EVER come out false?

No.

Not even once.

So we’ve proved that this is a legit axiom. That means we’ve proved that what we’re starting with can never spit out garbage.
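If you want, you can make a machine grind through those four rows instead of doing it by hand. A minimal Python sketch (the `implies` helper is mine, just encoding the material conditional from the truth table above):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

# Run through every possible truth assignment for A and B.
results = [implies(A, implies(B, A)) for A, B in product([True, False], repeat=2)]
print(results)  # [True, True, True, True]: A→(B→A) never comes out false
```

Four assignments, four Trues. That exhaustive check is exactly what "legit axiom" means here.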

Important clarification

You might say, “Hey, dumbass, what if I define a situation where A and ¬A are both true?”

For example:

A | ¬A
T |  T
F |  F

Cool.

But then:

you’ve changed the rules of the game

You’re no longer doing classical logic.

And if you change the rules, then yes:

you have to redo the whole soundness proof for that new system

Final translation of “axiom”

Axioms aren’t trusted because we feel like it. They’re trusted because, given the system’s definition of truth, we can check every possible case—and they never come out false.

(3) The rules never mess things up (rule case)

Now we check the rules.

Math analogy:

Suppose you know:

  • 2 + 2 = 4
  • 4 + 4 = 8

Now you apply a rule:

“If a = b, then you can replace a with b”

So from:

  • 2 + 2 = 4

you’re allowed to treat 4 as interchangeable with 2 + 2.

So (2 + 2) + 4 = 8.

This math rule never messes up.

Chess analogy:

Bishops move diagonally.

So if your bishop starts on a black square, it will always stay on black squares.

And if your bishop starts on a white square, it will always stay on white squares.

So the rule “bishops move diagonally” will never randomly make a bishop jump from a white square to a black square, or from a black square to a white square, or do some other crazy shit.

In logic, here’s the big rule:

Modus Ponens

A

A → B (that is, “If A, then B”)

Therefore:

B

Concrete example

Metalogic is gay.

If metalogic is gay, Raymond’s ass is demolished.

Therefore:

Raymond’s ass is demolished.

Now here’s the important part

This isn’t about THAT specific sentence.

It’s about the form.

You can swap in anything:

Example 2

You eat Taco Bell.

If you eat Taco Bell, you’ll have diarrhea.

Therefore:

You’ll have diarrhea.

Example 3

I didn’t sleep.

If I didn’t sleep, I’m miserable.

Therefore:

I’m miserable.

Example 4

I make poop jokes.

If I make poop jokes, I’m childish.

Therefore:

I’m childish.

The key question

Is there ANY situation where:

A is true

“If A then B” is true

but B is false?

No.

There is no such situation.

Again, in intro to logic class, you can use a truth table to check this:

A | B | A → B
T | T |   T
T | F |   F
F | T |   T
F | F |   T

The only row where A is true and A → B is true is the first row. And in that row, B is true too.

Therefore:

This rule NEVER turns truth into bullshit.
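The same truth-table check can be automated. A minimal Python sketch that hunts for a counterexample (again using an `implies` helper of my own for the material conditional):

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

# Hunt for a counterexample: A true, A → B true, but B false.
counterexamples = [(A, B) for A, B in product([True, False], repeat=2)
                   if A and implies(A, B) and not B]
print(counterexamples)  # []: modus ponens never turns truth into falsehood
```

The list comes back empty: there is no truth assignment where the inputs are true and the output is false.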

(4) If every step is good, the whole thing is good (inductive step)

This is where grad school says:

“We proceed by induction”

and everyone (well, me, at least) dies inside.

Again, we’ll first use the math analogy.

Start with:

  • 2 + 2 = 4
  • 4 + 4 = 8
  • 8 + 8 = 16

Notice something:

You’re always adding an even number to an even number.

And every time:

you get an even number

Now the key idea

If:

  • even + even = even (this NEVER fails)

Then:

no matter how many times you repeat this process, you’ll ALWAYS get an even number.

That’s the inductive step

You’re not checking:

  • just the first step
  • just the second step

You’re saying:

“This pattern keeps working forever.”

And how do we know it works forever? Not because we’ve checked a bunch of numbers, got tired, and said, “Fuck it, it works for all numbers to infinity.” It works because we get the form of the math. That is,

2a + 2b = 2(a + b)
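You can spot-check that form mechanically. A minimal Python sketch (the loop only illustrates the identity; the algebra 2a + 2b = 2(a + b) is what actually proves it for all numbers):

```python
# Spot-check the form 2a + 2b = 2(a + b) over a grid of values.
# The loop only illustrates the identity; the algebra is what proves it.
for a in range(100):
    for b in range(100):
        assert 2*a + 2*b == 2*(a + b)  # the identity itself
        assert (2*a + 2*b) % 2 == 0    # so even + even stays even
print("even + even came out even in all 10,000 checked cases")
```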

Steel staircase analogy:

You’re building an infinite staircase. The first step you build is made of solid steel. It doesn’t collapse. The second step is built the exact same way. It doesn’t collapse. And every step after that is built exactly the same way as the one before it. So it’s impossible for the staircase to ever collapse.

Why?

Not because you checked a few steps and said, “Cool, it works for all the steps.” It’s because the way each step is built is identical to the next.

Chess analogy:

If every move you make is legal, and every move is made according to the same rules, then the entire game stays legal. The board won’t suddenly sprout a third king or summon a UFO to suck away your pieces halfway through the game.

Translation

If every step in your reasoning is safe, then the final result is safe.

Conclusion

If your eyes glazed over, don’t worry. Just keep this in mind:

Every line in a logical proof is either:

  • a given assumption/premise (like x = 5)
  • an axiom (like If A, then (if B, then A))
  • the result of applying a rule (like modus ponens)

And induction is the method that lets us say:

if every step is safe, then the whole proof is safe

So if premises don’t start us with bullshit, axioms never spit out bullshit, and rules never turn truth into bullshit, then the whole logical system we’re checking is legit.

In other words:

If you can prove it (using the logical system you’re checking) (⊢), it’s true (⊨).

That’s all for today. In a future blog post, I’ll take you through the proof for the soundness theorem step by step, using the Hilbert system.

What the Fuck Is an Argument? (And Why “Deductive = General to Specific” Is Bullshit)

People love saying:

“Deductive arguments go from general to specific.”
“Inductive arguments go from specific to general.”

This is repeated so often that even professors say it.

Unfortunately, it’s wrong.

And if I hear one more person say that, I’m gonna have a fucking seizure.

First: what an argument actually is

An argument = a set of statements, a.k.a. premises, that lead to a conclusion.

So this is an argument:

Premise 1: If I hear one more person say “deductive arguments go from general to specific,” I’m gonna have a fucking seizure.

Premise 2: I hear one more person say “deductive arguments go from general to specific.”

Conclusion: Therefore, I’m gonna have a fucking seizure.

The form, or structure, of the argument is:

  1. H → S
  2. H
  3. Therefore, S

That logical move, by the way, is called modus ponens. It’s a valid and “duh” move, yes, but the Latin makes it sound deep, and it’s exactly the kind of structure all deductive logic is built on.

In logic, an argument is NOT defined as:

  • yelling
  • a claim (like “postmodern art is valuable, but postmodernism is not”–sounds deep, but that’s just a claim/statement)
  • an opinion (including the ones everyone and their uncle feel entitled to express)
  • disagreement

What deduction REALLY is

Deductive validity

Deduction means:

If the premises (of an argument) are true, then the conclusion must be true.

In other words:

The truth of the premises guarantees the truth of the conclusion.

So, if ‘H → S’ (premise 1) is true, and if H (premise 2) is true, then S (conclusion) must be true.

Specifically, that’s what we call a deductively valid argument.

Notice I kept on italicizing the word if. That’s because that’s super important–after all, one or more of these premises might not be true. For example, is premise 1 (“If I hear one more person say ‘deductive arguments go from general to specific,’ I’m gonna have a fucking seizure”) true? No. So deductive validity doesn’t mean that the conclusion of your argument is true. It just means, again, that

if the premises are all true, then the conclusion must be true.

Deductive soundness

But what if you have a valid argument and its premises are all true? For example:

Premise 1: If I don’t have any money, I can’t pay the mortgage. (TRUE)

Premise 2: I don’t have any money. (TRUE)

Conclusion: I can’t pay the mortgage. (BOTH LOGICALLY AND IN REALITY GUARANTEED TO BE TRUE)

Then this deductive argument is what is called sound.

A word on that bullshit definition of “deduction”

Note that in neither the seizure example nor the mortgage example is anything “going from general to specific.” That’s because “going from general to specific” is simply not the correct definition of “deduction,” nor is it useful for a deeper understanding of how arguments work. But some teachers and professors will still use that definition no matter how many times I object. Whatever. Let’s move on.

What induction REALLY is

Inductive logic does not use labels like “valid” or “sound.” Instead, we use words like “weak,” “strong,” and “apt.” This different terminology is important because, unlike deduction, induction is probabilistic.

Inductive weakness

Consider this argument:

Premise 1: Some dogs bark.

Premise 2: Guai Guai is a dog.

Conclusion: Guai Guai (probably) barks.

This is an inductively weak argument because of what the word “some” means.

“Some dogs” might be just 1% of dogs. “Some dogs” might even be just 0.01% of dogs. There simply isn’t enough reason for us to accept the conclusion that Guai Guai probably barks because, by definition, probably = roughly put, “more likely than not” or “high enough likelihood given the evidence.” And the word “some” is simply too weak to conclude that Guai Guai’s barking is probable.

Inductive strength

Now what about this one?

Premise 1: Most Martians like Dr. Seuss.

Premise 2: Zorp is a Martian.

Conclusion: Zorp (probably) likes Dr. Seuss.

This is an inductively strong argument. “Most Martians” doesn’t mean some exact number like 51%. It just means a strong majority—enough to make the conclusion actually likely. So that–and the fact that Zorp is a Martian–makes it probable that Zorp likes Dr. Seuss.

Inductive aptness

What happens when you have an argument that is inductively strong and all its premises are true? That’s when you have an inductively apt argument. Let’s go back to the dog argument and change it a little:

Premise 1: Most dogs bark.

Premise 2: Guai Guai is a dog.

Conclusion: Guai Guai (probably) barks.

This dog argument has a form that is identical to that of the Martian argument, so it is obviously inductively strong. But unlike the Martian argument, the dog argument has premises that are all true. Hence, it is inductively apt.

A word on that bullshit definition of “induction”

Note that none of these inductive arguments “go from specific to general.” That’s because, like the bullshit-y definition of deduction, the bullshit-y definition of induction is not very helpful for a deep understanding of how arguments work. And yet, textbooks teach that definition. Perhaps I will be having that fucking seizure after all.

Another type of argument: abduction

There’s at least one other type of argument–the abductive argument–that we can discuss. Abduction, or inference to the best explanation (IBE), aims to reach conclusions based on the best available explanation. Ockham’s Razor–the principle that the best explanation is the simplest one, the one that makes the fewest assumptions–is a case in point. For instance:

Suppose you walk into your living room and see that:

  • The floor is wet
  • Your dog is shaking water everywhere
  • There’s a knocked-over bowl of water

You could come up with many explanations:

  • A pipe burst
  • Someone broke in and spilled water
  • Your dog knocked over the bowl

But one explanation stands out as the best:

Your dog knocked over the bowl and made a mess.

That’s an abductive argument:

The floor is wet and the bowl is knocked over.
If the dog knocked over the bowl, that would explain all this shit.
Therefore, the dog probably knocked over the bowl.

Notice what’s happening here:

You’re choosing the explanation that best fits the evidence.

A brief note on abduction and probability

Abduction is often treated as probabilistic, even if people don’t always say it that way.

When you say “this is the best explanation,” what you usually mean is something like:

This explanation makes the observed evidence more likely than the alternatives.

That’s exactly the kind of reasoning philosophers like Elliott Sober analyze using probability. (Check out his book, Ockham’s Razors.)

So if someone tells you that abduction is “not probabilistic,” they’re either oversimplifying or just wrong.

Conclusion

If you forget most of what I just said, remember this one thing:

The difference between deductive and inductive arguments has nothing to do with “general vs. specific.” It has to do with certainty vs. probability.

That’s all for today. And keep thinking.

Why You Shouldn’t Fucking Yawn When You Hear “Logic,” and Why There Are Multiple Types of “Logics”

I’m not going to teach you how to do logic in this post. That would take many hour-long lessons just to cover the tip of a dick-shaped iceberg called “logic.”

Instead, I want you to be able to recognize different types of logic, the same way you can recognize German, Spanish, Arabic, Mandarin, and Korean—even if you don’t actually speak any of them.

Most people hear the word “logic” and immediately check out. Their brain goes:

“Hey, did you know there’s a sneaker sale this weekend?”

But fuck that. Logic is about clear thinking, and clear thinking is basically a superpower.

So let’s at least figure out what the different “dialects” of logic even are. We will briefly discuss the gist of:

  1. Classical Logic
  2. Modal Logic
  3. Intuitionistic Logic
  4. Relevance Logic
  5. Paraconsistent Logic

and give fuzzy logic an honorable mention at the end.

1. Classical Logic (The Default Setting)

This is the one most people unknowingly use.

It has two big rules:

(1) Big Rule 1: No contradictions allowed

You cannot have:

“Pringles are good”
and
“Pringles are not good”

both true at the same time, in the same sense.

This is called the Law of Non-Contradiction.

(2) Big Rule 2: No middle ground

A statement is either:

  • true, or
  • false

No in-between. No “kinda true.”

This is called the Law of Excluded Middle.

The really crazy rule: explosion

If you do allow a contradiction:

P and not-P

then everything becomes provable.

Yes—literally anything.

Example of Explosion

Start with:

It is raining AND it is not raining

From this, we can get:

It is raining
It is not raining

So far, so normal.

Now here’s the key move:

From:

It is raining

you can say:

It is raining OR the moon is made of cheese

(This is called addition—and yeah, it’s weird but legit.)

Now combine that with:

It is not raining

And use a rule (disjunctive syllogism):

If:

  • A or B
  • not A

Then:

  • B

So:

  • “It is raining OR the moon is made of cheese”
  • “It is not raining”

Therefore:

The moon is made of cheese

Boom. You just “proved” nonsense.
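Here is a toy Python sketch of that derivation: sentences as strings, the two rules as functions. (The names `addition` and `disjunctive_syllogism` are just my labels for the rules above; this is an illustration, not a real proof checker.)

```python
# Toy model of the explosion derivation: sentences are strings,
# inference rules are functions.

def addition(a, b):
    # Addition: from a, infer "a or b" -- b can be ANY sentence.
    return ("or", a, b)

def disjunctive_syllogism(disj, denied):
    # Disjunctive syllogism: from "a or b" plus the denial of a, infer b.
    tag, a, b = disj
    assert tag == "or" and denied == a
    return b

rain = "it is raining"
cheese = "the moon is made of cheese"

# The contradiction hands us rain AND the denial of rain.
step1 = addition(rain, cheese)              # "it is raining OR the moon is made of cheese"
step2 = disjunctive_syllogism(step1, rain)  # we also deny rain, so...
print(step2)  # the moon is made of cheese
```

Two perfectly legal rules, one contradiction as fuel, and out pops cheese-moon nonsense.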

Moral of the story:

Contradictions break the system.

This is called:

  • Ex Falso Quodlibet
  • aka Explosion

One-liner:

Classical logic = “No contradictions, no gray area, or everything goes to shit.”

Now we level up.

Instead of just asking:

“Is this true?”

we ask:

“Does this HAVE to be true?”
“Could this be true?”

2. Modal Logic (Must / Might Logic)

While classical logic talks about things like what is actually true (like the fact that many Singaporean drivers are assholes), modal logic talks about things like:

  • You must wear a seatbelt.
  • Some Singaporean drivers might be assholes.
  • Some car accidents could happen.

Classical logic is like:

“Cool story, but I only understand true/false.”

Modal logic is like:

“Let’s handle that shit properly.”

The Two Main Modal Words

□ (box) = MUST / necessarily

◇ (diamond) = POSSIBLE / maybe

Here’s a simple application to get you started:

□A = “A has to be true in all the situations we’re considering”

◇A = “A could be true in at least one situation”

And here’s an example:

“The toilet must be clogged.”

This means: in every possible situation we’re considering, the toilet is clogged.

Here’s another one:

“The toilet might be clogged.”

This means: there is at least one possible situation where the toilet is clogged.

The key idea: “possible worlds”

This is the weird part, but we’ll keep it simple.

Modal logic imagines:

different ways reality could be

Not sci-fi necessarily. Just:

  • what might be true
  • what could have been true
  • what has to be true

Think of:

different “versions of reality”

Like:

  • World 1: it’s raining
  • World 2: it’s sunny
  • World 3: it’s cloudy

Now define the operators

◇A (possible A)

There is at least ONE world where A is true

□A (necessary A)

In EVERY world, A is true

Example

Statement:

“2 + 2 = 4”

  • □(2+2=4) → true (it’s true in all worlds)
  • ◇(2+2=4) → also true

Here’s another statement:

“It is raining”

then:

  • □(raining) means: it is raining in all accessible worlds
  • ◇(raining) means: it is raining in at least one accessible world

But suppose from the current world you can access three worlds:

  • World 1: raining
  • World 2: not raining
  • World 3: not raining

Then:

  • □(raining) is false, because not all accessible worlds have rain
  • ◇(raining) is true, because at least one accessible world has rain

And if you want to be a tad nerdier, here’s how we symbolize this raining stuff:

□(raining) = it’s raining in every accessible world

¬□(raining) = it’s not raining in at least one accessible world

◇(raining) = it’s raining in at least one accessible world

□¬(raining) = it’s raining in no accessible world
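That three-world example is simple enough to run. A minimal Python sketch (worlds modeled as truth assignments, per the "boring" definition of a possible situation above):

```python
# Evaluate □ (box) and ◇ (diamond) over a toy set of accessible worlds.
# Each world is just a truth assignment for "raining".
accessible_worlds = [
    {"raining": True},   # World 1
    {"raining": False},  # World 2
    {"raining": False},  # World 3
]

def box(statement, worlds):
    # □statement: true in EVERY accessible world.
    return all(w[statement] for w in worlds)

def diamond(statement, worlds):
    # ◇statement: true in AT LEAST ONE accessible world.
    return any(w[statement] for w in worlds)

print(box("raining", accessible_worlds))      # False: Worlds 2 and 3 are dry
print(diamond("raining", accessible_worlds))  # True: World 1 has rain
```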

3. Intuitionistic Logic (a.k.a. “prove it or shut up” logic)

Core Attitude

“Don’t tell me something is true unless you can actually prove it.”

What They Hate

In normal logic, you can say:

“Either A is true OR A is false”

Even if you have zero evidence.

Intuitionists say:

“Nope. That’s lazy as hell.”

In kiddy language, we can say:

Normal logic:

“Either there’s treasure in that box or there isn’t.”

Intuitionistic logic:

“Have you opened the fucking box?”

If not:

“Then don’t claim shit.”

What changes?

In classical logic:

You can prove things using tricks like:

“If assuming NOT-A leads to nonsense, then A must be true”

This is called proof by contradiction.

In intuitionistic logic:

“No. That doesn’t count.”

They say:

“You didn’t prove A.
You just showed that not-A is bad. That’s not the same thing.”

Simple example

Classical logic says:

“There exists a number that has property X”

even if you can’t name it.

Intuitionistic logic says:

“Show me the number or shut the fuck up.”

One-line summary

Intuitionistic logic =
“No proof? No truth. Stop bullshitting.”

4. Relevance Logic (a.k.a. “don’t say random shit” logic)

Core attitude:

“The reason (A) should actually be related to the conclusion (B).”

What they hate

In classical logic, this is allowed:

From a false statement, you can prove ANYTHING.

Example:

  • “2+2=5”
  • therefore: “I am the King of Mars”

This, again, is called explosion.

Relevance logicians say:

“What the actual fuck? These are unrelated.”

Weird conditional (“if…then…” statement) problem: kiddy version

Normal logic:

“If unicorns exist, then I’m a sandwich”
This is true if unicorns don’t exist.

Relevance logic:

“Dude… unicorns and sandwiches have nothing to do with each other.
This is bullshit. Reject.”

What changes?

Relevance logic demands:

A must actually have something to do with B.

You can’t just glue random sentences together with “if.”

Example

Classical logic allows:

“If 2+2=5, then the sky is blue.”

Relevance logic says:

“Nope. That’s not a real conditional. That’s just nonsense dressed up.”

One-line summary

Relevance logic =
“Your premise better actually connect to your conclusion, you dumbass.”

Why this pissed people off

Some philosophers said:

“Wait… real life systems have contradictions all the time.”

Examples:

  • laws that conflict
  • people who believe inconsistent things
  • messy databases
  • paradoxes

And they thought:

“Why should ONE contradiction destroy EVERYTHING?”


5. Paraconsistent Logic

Core idea:

“Even if there’s a contradiction, don’t let everything go to shit.”

What they reject

They reject this rule:

From A and not-A, you can prove anything

They say:

“No. That’s way too extreme.”

Kiddy version

Normal logic:

“If your notebook has ONE contradiction, we burn the whole fucking notebook.”

Paraconsistent logic:

“Relax. One bad page doesn’t mean the whole notebook is garbage.”

Concrete example

Suppose your system says:

  • “This website is safe”
  • “This website is not safe”

Classical logic says:

“Cool, now I can prove the website is run by aliens.”

Total nonsense is allowed.

What they are trying to do

They are trying to:

contain the damage

Instead of:

letting one contradiction explode into total chaos

Another kiddy version

Classical logic:

One rotten apple → burn the whole farm

Paraconsistent logic:

One rotten apple → throw it out, keep eating

Important: they are NOT saying

“Contradictions are good”

They are saying:

“Contradictions shouldn’t destroy everything”

Why this actually matters

Because in real life:

  • legal systems contradict themselves
  • large databases have errors
  • people hold inconsistent beliefs

If we used classical logic strictly:

everything would become meaningless instantly

So paraconsistent logic says:

“Let’s build a system that can survive inconsistency.”

One-line Summary

Paraconsistent logic =
“Even if shit contradicts, don’t let the whole system lose its mind.”

Final comparison (all five now)

Logic Type     | Attitude
Classical      | “Contradiction = everything explodes”
Modal          | Add “necessity and possibility”
Intuitionistic | “No proof = no truth”
Relevance      | “No connection = bullshit”
Paraconsistent | “Contradiction ≠ total disaster”

Conclusion and a brief note on fuzzy logic

These are five major types of philosophical logics. (And there are more… many more.) Some of these logics extend classical logic. Others reject parts of it entirely. In the future, we will cover them in more depth as well as briefly discuss fuzzy logic, which, contrary to popular belief, is not a synonym for “sloppy thinking.” (It’s a specialty that math and engineering people love and has legit philosophical relevance… just not center stage.)

That’s all for now!

What the Fuck Does “Proof” Even Mean? (And Why It Doesn’t Guarantee Truth)

This is the kind of weird shit logicians study in something called “metalogic.”

Briefly: Much confusion arises from the term “prove.” If I can prove something, doesn’t that just mean that that thing is true?

Not necessarily.

In logic (and analytic philosophy), “prove” has a different meaning from everyday usage. All it means here is: I can arrive at some conclusion if I follow the rules in a system.

Imagine that you’re playing a game called “Math for Morons.” The game is the aforementioned system, and the rules of that system include the following:

  • 1 + 1 = 2
  • 1 + 1 = 3

According to the rules of the “Math for Morons” system, you can then prove 2 = 3. But that doesn’t mean that it’s true that 2 = 3. The point being? Proof (according to the definition of “proof” in logic class) doesn’t guarantee truth.

And if you can prove something in the logical sense but still not arrive at the truth? That means the system is broken. A system like “Math for Morons,” whose rules let you prove statements that contradict each other, is what logicians call inconsistent.

Logicians even have a symbol for this kind of provability: ⊢ (the single turnstile).

But what if it’s the other way around? That is, something is true, but you can’t prove it.

This is what logicians call incompleteness.

Incompleteness:

  • does NOT mean “true but very hard to prove” (like the fact that my dad is an asshole—hard to prove, but not what logicians mean here)
  • means “true but you can’t prove it using the rules you have in your system/game.” In other words: the truth is out there, but your system is too weak to reach it.

So imagine that in your “Math for Morons” system, there are no rules at all about even and odd numbers.

But it’s TRUE that

“2 is an even number.”

Then even though it’s true that 2 is even, you can’t prove it within the system, because the system simply doesn’t have the tools to express or derive that fact.

Logicians have a different symbol for this kind of truth: ⊨ (the double turnstile).

Roughly speaking, this doesn’t mean “you can prove it.”

It means:

“This has to be true, no matter how you interpret things.”

So metalogic asks a deeper question:

Do the things you can prove (⊢) line up with the things that are actually guaranteed to be true (⊨)?

Or are you just pushing symbols around and hoping for the best?

By the way, in standard (“classical”) logic, there’s a nice result:

If you can prove something using the rules (⊢), then it really is true in the relevant sense (⊨).

Logicians call this soundness.

But that’s another can of worms for another day.

Why Government Authority Might Be Bullshit (Huemer, with minimal jargon)

This is a simplified version of Chapter 1 of The Problem of Political Authority by Michael Huemer.

The goal is the same as always: take something that’s usually taught in dense academic language and make it actually understandable. (Admittedly, though, Huemer is already great at making difficult philosophy accessible. This version just makes it even more accessible.)

The Problem of Political Authority

by Michael Huemer

Abridged by Raymond Chuang

PART I: The Illusion of Authority

1. The Problem of Political Authority

1.1 A Political Parable

Let’s start with a story. You live in a small village. There is a lot of crime. Bad guys steal and destroy people’s things. It looks like no one is doing anything about it. So you and your family take some guns and go catch some bad guys. You point your gun at a bad guy and lock him in your basement. You do that for some other bad guys. You give the bad guys food so they don’t get too hungry. You plan to keep the bad guys in your basement “for a few years to teach them a lesson” (Huemer 3).

After doing the same thing for a few weeks, you go around your neighborhood. You ask your neighbor, “Have you noticed that there are fewer bad guys?” He nods. You say, “Well, you should thank me.” You tell him how you have been catching bad guys by using your guns and locking them in your basement. Your neighbor looks at you with a strange look. You continue: “Now, I’m here because you have to give me money…because I caught all those bad guys. This month, you have to give me $100.”

Your neighbor stares at you and doesn’t move. You tell him, “If you don’t give me the money, then you are also a bad guy, and I will lock you in my basement with the other bad guys.” You let your neighbor see that you have a gun, and you tell him that if he does not give the money, you will force him to give you the money.

If you did something like this, how would your neighbors react? Would they be happy to give you the money?

Not likely. Most likely, you would notice these things: First, almost nobody would think they owe you anything. Some people might pay because they don’t want to be locked in your basement. Others might pay because they hate the other bad guys. But almost nobody would think it is their duty to give you money. If some neighbor does not pay you, other neighbors would more likely say good things about them than bad things.

Second, most people would think that what you did was crazy. They would think that your demanding money is extortion and that your locking the bad guys in your basement is kidnapping. And because you acted this way while expecting people to thank you and pay you, people will think you really are crazy.

“What does this story have to do with political philosophy?” (Huemer 4). In the story, you acted like a simple government. Of course, you did not do all the things that a larger government does, but you did two of the most important things that governments do: (1) you punished bad guys who hurt other people or don’t listen to you, and (2) you collected money so that you can keep catching bad guys. If the government does these two things, these two things are known as the criminal justice system and the tax system. If you do these two things, these two things are called kidnapping and extortion.

It looks like the things you did are the same things a government does. But most people would think it’s okay for the government to do these things, and not okay for you to do these things. Most people support the government when it catches bad guys, and most people feel like they need to pay their taxes. Most people also think that the government should punish tax evaders, and they think it is the government’s right to do so.

This shows how most people feel about the government. Most people feel that governments are ethically allowed to do things that other people can’t do. Also, most people feel that we have special obligations to the government. Most people think we don’t have these obligations to other people, even if these other people act the same way the government does. This is not just about the law. It is also not just about what people can get away with. “The point is that our ethical judgments” are very different when it comes to government and nongovernment people (Huemer 4). When nongovernment people do some things, people think it’s bad. When the government does some things, people think it’s good.

Why do we give the government this special moral status? Does it make sense to give them this special moral status? “This is the problem of political authority” (Huemer 5).

1.2 The concept of authority: a first pass

In ordinary moral thinking, what is the difference between your actions and the government’s actions? One idea is that, even though your actions and the government’s actions might look the same, they are actually different. That is, they are different behaviors. For example, you might think that a difference is that you didn’t give the bad guys fair jury trials. Maybe that is why what you did is bad, and what the government does is okay or good.

A second idea is that you are doing the same thing that the government does, but the only difference is who is doing those things. You are doing something bad in the story because, even if you acted just like the government, you are still not the government.

This second idea is what I call political authority. Political authority has two important aspects:

  1. Political legitimacy: the government has the right to make laws and enforce them by coercion — “in short, the right to rule” (Huemer 5).
  2. Political obligation: citizens must obey the government, even when they normally don’t have to obey other nongovernment people.

“If a government has ‘authority’, then both (i) and (ii) exist: the government has the right to rule, and the citizens have the obligation to obey” (Huemer 6).

There is a difference between political obligations and moral obligations. For example, it is illegal to murder. When there is a government, it is both our political obligation and our moral obligation not to murder people. If there is no government, then we have no political obligation not to murder, but we still have a moral obligation not to murder. But there are other things that we must do or not do just because the law says so. These are political obligations, not moral obligations. For example, suppose the government tells you to pay $1,000 in taxes. Even if you think the taxes are too high, you do not feel you have the right to pay less. And if you think the taxes are too low, you do not feel you have to give the government extra money. So, from most people’s point of view, paying taxes is a political obligation.

If you believe that the government has political authority, you don’t have to think that political authority is unconditional or absolute. You also don’t need to think that all governments have political authority. For example, you might think that the government has authority only if it respects human rights and allows citizens to participate in politics, so tyrannical governments have no authority. You might also think that even okay governments cannot force you to do certain things, and that citizens don’t have to obey the government if it tells them to commit murder.

However, we should still remember that most people think that the government has lots of political authority and that it can make people do things that it would be wrong or unethical for a nongovernment person to make them do.

Theories of Meaning Part 1: What the Hell Does “Meaning” Even Mean? (Without the Bullshit)

This is a profane, simplified version of part of the Stanford Encyclopedia of Philosophy article “Theories of Meaning.” The goal here is simple: strip away unnecessary jargon so that people—myself included—can actually understand what the hell is going on, without dumbing things down too much. This is Part 1 of a series. I’ll come back to other parts of the article over time—but each post will stand on its own.

The One-Sentence Version

There are two different questions about meaning:

  1. What does this word mean?
  2. Why does it mean that?

Philosophers constantly mix these up, and that’s where a lot of confusion—and yes, bullshit—comes from.

Two Kinds of Theory of Meaning

Philosophers use the phrase “theory of meaning” in at least two different ways. To avoid confusion, we’re going to separate them cleanly:

  • Semantic theory = What does this shit mean?
  • Foundational theory = Why the fuck does it mean that?

That’s the whole game.

What Is a Semantic Theory?

Take a simple example:

“Dog” = a (usually) furry animal that barks

That’s the kind of thing a semantic theory does. It tells you what words and sentences mean.

Think of it like an ultra-precise dictionary. It answers questions like:

  • What does “dog” mean?
  • What does “snow is white” mean?
  • When is a sentence true?

It’s just describing meanings. No deep explanation yet. Just mapping words to what they mean.

What Is a Foundational Theory?

Now we go one level deeper:

Why does the word “dog” mean what it means?

Is it because:

  • we use it that way?
  • society agrees on it?
  • there’s some connection between words and the world?
  • something psychological is going on in our heads?

Now we’re asking a completely different kind of question.

A foundational theory of meaning is trying to explain:

What makes it the case that words have the meanings they do in the first place?

So instead of:

“What the fuck does ‘dog’ mean?”

we’re asking:

“Why the fuck does ‘dog’ mean that?”

That’s not description anymore. That’s explanation.

The Anthropologist Example

This is where things get really clear.

Imagine an anthropologist studying some distant tribe.

Step 1: Just describe the rules

They figure out:

  • Slurping = polite
  • Burping = fine
  • Farting = not fine

That’s just a description of how things work in that culture.

That’s like a semantic theory.

Step 2: Explain the rules

Now the anthropologist asks:

Why are these the rules?

Why this system instead of a different one?

Maybe it’s because of:

  • culture
  • social pressure
  • history
  • power structures
  • evolutionary factors

Now they’re explaining the system, not just describing it.

That’s like a foundational theory.

These Two Things Are Different (But Related)

Let’s make this really clear:

  • A semantic theory is like a dictionary
  • A foundational theory is like asking why the dictionary works the way it does

They are different jobs.

But—and this matters—they can still influence each other.

When Semantic Theories Affect Foundational Theories

Suppose your dictionary says:

“Dog” = an animal that moos and becomes beef

Okay, something has gone horribly wrong.

Now you might ask:

What the hell explains this messed-up system?

So problems at the semantic level can push us to rethink our foundational explanation.

When Foundational Theories Affect Semantic Theories

Now flip it.

Suppose you discover that the meanings in your “dictionary” were formed in some unreliable way—bad evidence, confusion, whatever.

Then you might say:

Maybe our definitions themselves need to be fixed.

So your theory of how meaning works can force you to revise your actual definitions.

Quick Summary

  • Semantic theory = what words mean
  • Foundational theory = why they mean that

Different questions. Different jobs. Constantly confused.

The Skeptic Bomb

Now for the fun part.

Some philosophers—like W. V. O. Quine and Saul Kripke—basically say:

“What if meaning isn’t even a real thing?”

As in:

  • There may be no objective fact about what words really mean

If they’re right, then:

  • There’s no semantic theory (nothing real to describe)
  • There’s no foundational theory (nothing real to explain)

Everything collapses.

We’re not going down that rabbit hole right now.

That shit gets deep fast.

Final Thought

If you take nothing else away from this:

Don’t confuse “What does this mean?” with “Why does it mean that?”

Philosophers do it all the time.

You don’t have to.