All models are wrong, but some are useful
Model fetishism triggers me.
Quantum Mechanics has entered the chat
Isn’t that mostly probability as well?
Surprisingly that is a controversial view. Most physicists insist QM has nothing to do with probability! But then why does it only give you probabilistic predictions? Ye old measurement problem, an entirely fabricated problem because physicists cannot accept that a theory that gives you probabilities is obviously a probabilistic theory.
The wavefunction is entirely deterministic, and we don’t fully understand where the probabilistic measurement happens. The Copenhagen interpretation makes it probabilistic but is not proven.
(even many worlds doesn’t explain why we ourselves only see one macroscopic section of the wavefunction)
In any statistical theory, the statistical distribution, which is typically represented by a vector that is a superposition of basis states, evolves deterministically. That is just a feature of statistics generally. But no one in their right mind would interpret the deterministic evolution of the statistical state as a physical object deterministically evolving in the real world. Yet, when it comes to QM, people insist we must change how we interpret statistics, and nobody can give a good argument as to why.
We only “don’t fully understand where the probabilistic measurement happens” if you deny it is probabilistic to begin with. If you just start with the assumption that it is a statistical theory then there is no issue. You just interpret it like you interpret any old statistical theory. There is no invisible “probability waves.” The quantum state is an epistemic state, based on the observer’s knowledge, their “best guess,” of a system that is in a definite state in the real world, but they cannot know it because it evolves randomly. Their measurement of that state just reveals what was already there. No “collapse” happens.
The paradox where we “don’t know” what happens at measurement only arises if you deny this. If you insist that the probability distribution is somehow a physical object. If you do so, then, yes, we “don’t know” how this infinite-dimensional physical object which doesn’t even exist anywhere in physical space can possibly translate itself to the definite values that we observe when we look. Neither Copenhagen nor Many Worlds have a coherent and logically consistent answer to the question.
But there is no good reason to believe the claim to begin with that the statistical distribution is a physical feature of the world. The fact that the statistical distribution evolves deterministically is, again, a feature of statistics generally. This is also true of classical statistical models. The probability vector for a classical probabilistic computer is mathematically described as evolving deterministically throughout an algorithm, but no sane person takes that to mean that the bits in the computer’s memory don’t exist when you aren’t looking at them, or that an infinite-dimensional object that doesn’t exist anywhere in physical space is somehow evolving through the computer.
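The classical case can be sketched in a few lines. This is a toy illustration, not a model of any particular machine: the stochastic matrix below is made up, but it shows how the *distribution* over a probabilistic computer’s states evolves deterministically while the bits themselves are always in one definite state.

```python
import numpy as np

# Hypothetical two-bit probabilistic computer: each step applies a
# (column-)stochastic matrix to the probability vector over the four
# basis states 00, 01, 10, 11. The matrix is invented for illustration.
step = np.array([
    [0.9, 0.1, 0.0, 0.0],
    [0.1, 0.8, 0.1, 0.0],
    [0.0, 0.1, 0.8, 0.1],
    [0.0, 0.0, 0.1, 0.9],
])

p = np.array([1.0, 0.0, 0.0, 0.0])  # we know the memory starts as 00

# The distribution evolves deterministically: the same p always maps
# to the same step @ p. Only our *description* is spread over basis
# states; the physical bits hold one definite value at every step.
for _ in range(3):
    p = step @ p

assert np.isclose(p.sum(), 1.0)  # still a valid probability distribution
```

Nobody looks at `p` spreading out over the basis states here and concludes the bits are in four places at once.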
Indeed, the quantum state is entirely decomposable into a probability distribution. Complex numbers aren’t magic, they always just represent something with two degrees of freedom, so we can always decompose it into two real-valued terms and ask what those two degrees of freedom represent. If you decompose the quantum state into polar form, you find that one of the degrees of freedom is just a probability vector, the same you’d see in classical statistics. The other is a phase vector.
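A minimal sketch of that decomposition, using an arbitrary illustrative state: the squared magnitudes form an ordinary probability vector, the angles form the phase vector, and together they reassemble the original state exactly.

```python
import numpy as np

# Any complex quantum state splits into two real-valued vectors:
# a probability vector (squared magnitudes) and a phase vector (angles).
# This particular state is just an arbitrary example.
psi = np.array([1 + 1j, 1 - 1j, 2 + 0j]) / np.sqrt(8)

probs = np.abs(psi) ** 2   # an ordinary probability distribution
phases = np.angle(psi)     # the second degree of freedom

assert np.isclose(probs.sum(), 1.0)

# Polar form reassembles the state with nothing left over:
reconstructed = np.sqrt(probs) * np.exp(1j * phases)
assert np.allclose(reconstructed, psi)
```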
The phase vector seems mysterious until you write down time evolution rules for the probability vector in quantum systems as well as for the phase vector. The rules, of course, take into account the previous values and the definition of the operator that is being applied to them. You then just have to recursively substitute the phase vector’s evolution rule into the probability vector’s. You then find that the phase vector disappears, because it decomposes into a function over the system’s history, i.e. a function over all operators and probability vectors at all previous time intervals going back to a division event. The phase therefore is just a sufficient statistic over the system’s history and is not a physical object, as it can be defined in terms of the system’s statistical history.
That is to say, without modifying it in any way, quantum mechanics is mathematically equivalent to a statistical theory with history dependence. The Harvard physicist Jacob Barandes also wrote a proof of this fact that you can read here. The history dependence does make it behave in ways that are a bit counterintuitive, as it inherently implies a non-spatiotemporal aspect to how the statistics evolve, as well as interference effects due to interference in its history, but they are still just statistics all the same. You don’t need anything but the definition of the operators and the probability distributions to compute the evolution of a quantum circuit. A quantum state is not even necessary, it is just convenient.
If you just accept that it is statistics and move on, there is no “measurement problem.” There would be no claim that the particles do not have definite states in the real world, only that we cannot know them because our model is not a deterministic model but a statistical model. If we go measure a particle’s position and find it to be at a particular location, the explanation for why we find it at that location is just because that’s where it was before we went to measure it. There is only a “measurement problem” if you claim the particle was not there before you looked, then you have difficulty explaining how it got there when you looked.
But no one has presented a compelling argument in the scientific literature that we should deny that it is there before we look. We cannot know what its value is before we look as its dynamics are (as far as we know) random, but that is a very different claim than saying it really isn’t there until we look. This idea that the particles aren’t there until we look has, in my view, been largely ruled out in the academic literature, and should be treated as an outdated view like believing in the Rutherford model of the atom. Yet, people still insist on clinging to it.
They pretend like Copenhagen and Many Worlds are logically consistent by writing an enormous sea of papers upon papers upon papers, where it only seems “consistent” because it becomes so complicated that hardly anyone even bothers to follow along with it anymore, but if you actually go through the arguments with a fine-tooth comb, you can always show them to be inconsistent and circular. There is only a vague aura of logical and mathematical consistency on the surface. The more you actually engage with both the mathematics and the academic literature on quantum foundations, the clearer it becomes how incoherent and contrived attempts to make Copenhagen and Many Worlds consistent actually are, and how no one in the literature has actually achieved it, even though many falsely pretend they have done so.
I’m pretty sure this goes against the properties proven of entanglement (Bell test) and how far entanglement can propagate, but I don’t know enough about quantum mechanics to explain why this explanation is incompatible with entanglement.
However, I don’t currently see how this at all explains computing with superpositions; if it’s just statistics a superposition can never exist, so entanglement doesn’t exist; so quantum algorithms wouldn’t be possible, but we know they are.
I’m pretty sure this goes against the properties proven of entanglement (Bell test) and how far entanglement can propagate, but I don’t know enough about quantum mechanics to explain why this explanation is incompatible with entanglement.
If you don’t know anything about the topic then maybe you shouldn’t speak on it. Especially when claiming you have debunked peer reviewed papers from Harvard physicists like Jacob Barandes.
However, I don’t currently see how this at all explains computing with superpositions; if it’s just statistics a superposition can never exist
Superposition is a property of statistics. Even classical statistics commonly represents the system’s statistical state as a linear combination of basis states. That’s just what a probability distribution is. If you take any courses in statistics, you will superimpose things all the time. This is a mathematical property.
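To make that concrete with a deliberately trivial example: a classical weighted coin, written as a linear combination of basis states, already has exactly the structure people call a “superposition.”

```python
import numpy as np

# A classical coin with P(heads) = 0.3, expressed as a linear
# combination of the two basis states of the system.
heads = np.array([1.0, 0.0])
tails = np.array([0.0, 1.0])

state = 0.3 * heads + 0.7 * tails  # a superposition of basis states

assert np.isclose(state.sum(), 1.0)
```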
so entanglement doesn’t exist; so quantum algorithms wouldn’t be possible, but we know they are.
Quantum advantage obviously comes from the phase of the quantum state. If you remove the phase from the quantum state then all you are left with is a probability distribution, and so there would be nothing to distinguish it from a classical statistical theory. But the phase is, again, a sufficient statistic over the system’s history. The quantum advantage comes from the fact that you are ultimately operating with a much larger information space, since each instruction in the computer is a function over the whole algorithm’s history back to the start of the quantum circuit, rather than just the current state of the computer’s memory at that present moment.
I kinda boil it down to discrete energy packets distributed in an area as field values, and the collapse occurs when two discrete packets interact
What if two packets interact with each other? If you claim a collapse occurs, then entanglement could never happen, and so such a viewpoint is logically ruled out. If you say a collapse does not occur but only occurs if you introduce a measurement device, then this is vague without rigorously defining what a measurement device is, but providing any additional physical definition will then introduce something into the dynamics which is not there in orthodox quantum mechanics, so you’ve moved into a new theory and are no longer talking about textbook QM.
Logic isn’t MAEth.
Butt MAEth is logic. Don’t misunderstand my meaning.
This language is literally cognitive shit.
Economics: Our findings are just as rigorous as these other sciences we swear!

Well spoken, no less.
I once called economics a pseudoscience in a reddit comment and some libertarian-capitalist type got suuuper butthurt about it.
He said I don’t understand the word pseudoscience. I said, “no I understand it just fine. You don’t understand economics.”
His only response was to call that a “no, you” argument. Dunning-Kruger on full display.
Oh, I see I’m not the only one who views it that way. It’s always nice to see some people who have professional credibility expressing a similar opinion.
Also, I didn’t know the “Nobel Prize in Economics” wasn’t really a Nobel Prize at all (it’s not awarded by the Nobel Foundation! They basically just appropriated the name…)
I always thought it was strange that there was one at all (or seemed to be one), and I didn’t particularly like the credibility it seemed to lend to a field that doesn’t deserve it, but it makes so much more sense now to know it’s just a psyop run by a bank.
He was just a delusional living knot bot.
It’s amazing how nonsensical the actual foundational axioms of modern day economics are.
Classical economics tried to tie economics to functions of physical things we can measure. Adam Smith, for example, proposed that because you can recursively decompose every product into the physical units of time it takes to produce, all the way down the supply chain, any stable economy should, on average (not in the individual case), roughly buy and sell in a way that reflects that time, or else there would necessarily be physical time shortages or waste, which would lead to economic problems. We thus may be able to use this time parameter to make quantifiable predictions about the economy.
Many people had philosophical objections to this because it violates free will. If you can predict roughly what society will do based on physical factors, then you are implying that people’s decisions are determined by physical parameters. Humans have the “free will” to just choose to buy and sell at whatever price they want, and so the economy cannot be reduced beyond the decisions of the human spirit. There was thus a second school of economics which tried to argue that maybe you could derive prices from measuring how much people subjectively desire things, measured in “utils.”
“Utils” are of course such ambiguous nonsense that eventually these economists realized that this cannot work, so they proposed a different idea instead, which is to focus on marginal rates of substitution. Rather than saying there is some quantifiable parameter of “utils,” you say that every person would be willing to trade some quantity of object X for some quantity of object Y, and then you try to define the whole economy in terms of these substitutions.
However, there are two obvious problems with this.
The first problem is that to know how people would be willing to substitute things rigorously, you would need an incredibly deep and complex understanding of human psychology, which the founders of neoclassical economics did not have. Without a rigorous definition, you could not fit it to mathematical equations. It would just be vague philosophy.
How did they solve this? They… made it up. I am not kidding you. Look up the axioms for consumer preference theory whenever you have the chance. It is a bunch of made up axioms about human psychology, many of which are quite obviously not even correct (such as, you have to assume that the person has evaluated and rated every product in the entire economy, you have to assume that every person would be more satisfied with having more of any given object, etc), but you have to adopt those axioms in order to derive any of the mathematics at all.
The second problem is one first pointed out, to my knowledge, by the economist Nikolai Bukharin: an economic model based around human psychology cannot even be predictive, because there is no logical reason to believe that the behavior of everything in the economy, including all social structures, is purely derivative of human psychology. That is, there is no reason to rule out a back-reaction whereby preexisting social structures and environmental factors people are born into shape their psychology, and he gives a good proof by contradiction that this back-reaction must exist.
The idea that you can derive everything based upon some arbitrary set of immutable mathematical laws made up in someone’s armchair one day that supposedly rigorously details human behavior that is irreducible beyond anything else is just nonsense. No one has ever even tested any of these laws that supposedly govern human psychology.
it’s also interesting how increasingly absurd economics gets the further it dissociates from reality.
people are dying
BUT THE DOW
Adam Smith was actually far more progressive than the neoliberal/capitalist propaganda like to portray him as. They basically cherry-pick his work and present it out of context to support arguments that are actually contrary to many of the points he was making…
When I said “classical economic theory” I meant more like “conventional economic theory,” so encompassing the absurdities you mentioned here.
Like, they’ll say “Economies naturally cycle through periods of growth and degrowth” to justify periods of inflation, but then when those periods of inflation are artificially extended to further enrich the shareholders (and artificially inflated, even!), they’ll conveniently ignore the whole “periods of degrowth” side of the coin, and if anything even remotely has a chance of causing deflation, it’s denounced as an anathema because “it would cause a recession!”
Corporations benefit from economies that harm consumers. Corporations should never be given control over economic policies. However, neoliberal economic policies are basically designed to help the corporations while hurting consumers. And it’s all founded upon conventional economic theories.
That’s how you end up with a Federal Reserve that says things like “Unemployment is a good thing, because if everyone has too much money to spend on things, it could cause inflation,” yet never addresses the standard business practice of increasing prices while cutting costs all to make “number go up” so that the shareholder value increases each quarter and the C-suite can get bigger bonuses…
They say things like “We have to raise prices to keep up with inflation,” but no, that’s literally just contributing to artificial inflation, which is apparent when you look at their profit margins and how they’ve increased since 2020 when everyone started freaking out about inflation…
Basic foundational “observations” by Economics aren’t based on the Scientific Method.
I wish the Scientific Method didn’t have “Method” in the name because while it is a sensible name it also is misleading.
Science is “method agnostic”; a promising new method may uncover other methods and theories that totally pull the rug out from under old theories and methods. That is a necessary and sometimes brutal aspect of scientific progress.
Economics, because it began and is sustained for the most part as a system of methods searching for justification for their continuation, is largely incapable of undergoing these necessary “method resets” that come periodically in any scientific discipline.
Chemistry can admit that atoms aren’t tiny planets with electrons orbiting like moons because Chemistry didn’t start as the pursuit of finding evidence that atoms are like solar systems and fleshing out the theory that atoms are like solar systems.
Thus, no matter whether locally good science is being done in economics, it is undermined by the uncomfortable need to preserve the survival of the foundational contextualizing methods and axioms it implicitly invokes, shielding them from the truths uncovered. That vice plagues any human endeavor, consciously and subconsciously, and it not only keeps Economics from being a real science, it also largely sucks the oxygen out of the room for actually scientifically rigorous study of these phenomena.
Alchemy is a great analog to compare Economics to. Alchemists, in the pursuit of trying to figure out how to turn things to gold, did interact with and in some ways advance chemistry, but alchemy could never divest itself from its own pre-existing beliefs and methods as chemistry discovered more and more of the universe and began to accurately predict more and more of it.
If alchemy were capable of discarding old methods to pursue understanding phenomena more lucidly and precisely, chemistry would probably be called “alchemy” in English nowadays, and alchemy would be called “pseudo-alchemy”.
Economics equates to alchemy: both express a desire for a system of methods, axioms, and explanations to produce a certain end goal, and that forms fatal shackles to the follies of the past.
Any time I’ve attempted to argue for alternative economic paradigms (not just alternative economic systems, but actually rethinking the fundamental assumptions and theories by which we study and attempt to understand economic systems and phenomena), lazy thinkers hit me with the “nuh uh, that’s not what [classical economic theory] says! You don’t know what you’re talking about.”
It’s a thoughtless appeal to authority lacking any substance. The word for that is “dogma.”
I think a major casualty of the war on science funded primarily by fossil fuel interests has been that the kneejerk pro-science response has become a lazy appeal to authority.
People say “99% of scientists all agree listen to them you are not worthy of having an opinion on this!” and while it is arguably true lol, it also sends a message undermining to the interests of science.
Science is the practice of skepticism, not of finding facts and crusading under their banner in a materialist campaign of conquest. Facts are rather the inevitable residue left after science has subjected theories to extended and diverse torturous inquisition.
I wish people defended science by saying it isn’t a set of Correct Facts but a system of Skepticism that has thoroughly examined a shared body of knowledge, and that if the more fantastic-sounding theories within that arena of “skeptical melee” haven’t been dismantled, you can probably trust that they are real, as fantastic as they sound.
When you shorten this, it sounds like an appeal to authority where the scientists are given undue authority, but it is not the same thing. What matters is the environment of genuine skepticism that scientific theories and “facts” are subjected to in order to establish their validity.
Ugh, yes, when I was in university I had the audacity to attempt to have original thoughts, and everyone was like “Nuh uh, no one has ever said that anywhere in the source material.”
But it’s like “Someone said A, another person said B, and a third person said C. I’m just putting those together in a new way and telling you ABC.” But they’re like “None of the sources say ABC.” So I’m like “Look at the world around you, and you can clearly see that ABC.” And they’re like “that’s just anecdotal, not a peer-reviewed double-blind study.”
I called it academic gatekeeping. I also said it’s gaslighting ourselves into ignoring reality. They didn’t like either of those things. They seemed to think I was some flat earth anti-vaxxer (I’m not).
Modern academia has become downright anti-intellectual and extremely averse to divergent or non-conforming outlooks. It’s kinda sad.
I like the term Scientific filter. Theories get endlessly filtered through experimentation until we get purer and purer truth.
What definition of pseudoscience would capture economics without capturing medicine, ecology, or meteorology?
Everyone’s just using models here, and the way we incorporate statistical observations to define the limits of the models’ scope, and refine the models over time, or reject the models entirely, applies to economists, meteorologists, seismologists, and many branches of actual human medicine.
Popper would define pseudoscience as predictions that can’t be falsified, but surely that can’t apply to the idea of the weatherman predicting rain and being wrong, right?
Kuhn came along and argued that science is about solving problems within paradigms, and sometimes rejecting paradigms in scientific revolutions (geocentrism vs heliocentrism, Newtonian physics versus Einstein’s relativity), but it wasn’t a particularly robust test for separating out pseudoscience.
Lakatos went further, explaining how model-breaking observations can be handled within the structure of how science performs its work (limiting the scope of the model, expanding the complexity of the model to fit the new observations, proposing specific exception handlers), but he also observed the difference between the hard core of a discipline, in which attempts at refutation are not tolerated, and the auxiliary hypotheses where scientists are free to test their ideas for falsifiability.
But when you use these ideas to try to understand how science works, I don’t think economics really stands out as less scientific than cancer research or climatology or other statistically driven scientific disciplines.
To quote the other commenter here, basic foundational “observations” by Economics aren’t based on the Scientific Method.
In what way? And how does that differ from how medicine measures pain?
Namely, the scientific method relies on inductive reasoning and foundational economics relies heavily on deductive reasoning.
The difference isn’t the data itself, it’s what they do with it. Medicine takes subjective, self-reported pain scales and plugs them directly into rigorous, double-blind, randomized controlled trials where they isolate variables to test a strictly falsifiable hypothesis.
Foundational economics, on the other hand, takes subjective concepts like “utility” or “rational self-interest” and uses them as unfalsifiable, deductive assumptions to guess how massive, open systems work.
Basically, you can put a new painkiller in a placebo-controlled trial to scientifically prove if it reduces that subjective pain, but you can’t put a macroeconomy in a petri dish to run a controlled, repeatable experiment on supply and demand.
This difference invites a lot of woo.
Plenty of medical science doesn’t lend itself well to double blind studies. In vivo infection models can’t ethically be tested with double blind studies, and can only be observed. Lots of medicine advances through observational studies, too, like almost anything relating to nutrition or lifestyle or trauma. There’s no double blind study on how survivable car accidents are.
Plus double blind studies themselves don’t necessarily have any kind of explanatory power (see the entire field of anesthesia where we know how much of each anesthetic it generally takes to put people under, but we don’t know the underlying mechanism it uses to make people go under). Or, for that matter, Tylenol (whose mechanism of action remains a mystery).
That’s just it, though. Outliers are treated fundamentally differently between them: they are treated as bugs in economics, but as features in medicine.
If a “universal” drug fails for a specific group, medicine views that outlier as a falsification that proves the rule is incomplete. They use the exception to fix the theory.
Foundational economics does the opposite: it treats axioms like “rational actors” as holy scripture, so when people don’t behave like the math says they should, the economists just dismiss them as “irrational” and keep the model exactly the same.
Even if we don’t know the mechanism behind Tylenol, we can still falsify whether or not it works. You can’t falsify a “rational actor” because the moment someone does something weird, you just move the goalposts. Medicine is trying to map the territory; foundational economics insists the map is right and the territory is just acting up. It’s barely based in reality.
There are several physical autonomic responses that demonstrate feeling pain that can be measured objectively.
It’s not just faces on a numbered chart
A weatherman predicting rain has made a falsifiable prediction, how does that relate to Popper?
how does that relate to Popper?
When a weatherman’s prediction is falsified, the model itself is not disproven. The fact that the practitioners of that discipline stick with it even when a prediction is falsified starts to look like the pseudoscience side of Popper’s falsifiability criterion.
hahah same for psychology
Do they claim that though? Neo-liberal economists often adopt praxeology openly, and the other ones are mostly deluding themselves.
Bloodywood well regard that guy in Gaddaar.
Three months after broadcasting that song, WWE bought the UFC.
Co-Cain is the most hilarious joke ever planted around.
fucking computer science is going from on par with mathematics to worse than biology
“why do you guys do it that way”
“Look because if we don’t sacrifice the goat on Thursday the code breaks, idk what to tell you”
turns out the Thursday goat service brings in Dianne from networking, who remembers she needs to reboot a specific device weekly, but it’s not documented anywhere. When Dianne doesn’t do this, everyone freaks out and grabs another goat to sacrifice, which brings her back, because who is she to say no to some good goat, and the cycle is continued and reinforced
Aye, Iris.
Engineering: We only care if it works, even if it breaks math/physics/chemistry/biology.
If pi is not exactly three why hasn’t my bridge fallen?
Checkmate, mathematicians.
You can bother figuring out why. Or I might be forced to in order to iterate…
The failure rate falls within the tolerances
And the tolerances are set so big that the failures are covered.
Physics: oh, and if you look close enough, it’s actually all probability too.
or far away enough…

Economics: the law is true as long as people believe it’s true.
Kind of like fairies, when you think about it
Or Orks!
Alternatively, with capitalism giving all the power to the richest: “The law is true because I’ll hurt you if you try to defend yourself and I have plenty of class traitors to help me.”
Meanwhile the mathematicians who got a bit too close to Philosophy are still arguing about which logic to use and whether a proof by contradiction is even a proof at all.
Ehh…
Gödel basically showed we can never know which “mathematics” is the “correct one”.
“Proven true assuming my axioms are true” is closer to reality.
Exactly.
HERE’S A THEOREM: IF IT’S PROVEN, IT’S TRUE EVERYWHERE, FOREVER
But at the same time, even if it’s true everywhere forever, it might still not be provable, because Gödel.
I think saying that a theorem is true presumes the axioms from which it was proven and so the entire system is “true everywhere forever”.
I often find it helpful to think of chess as my axiomatic system. When we say the king is in checkmate, it presumes that we accept all the underlying rules of chess. And these pieces that theoretically form a checkmate will always do so forever… Assuming the usual rules of chess, assuming they’re unchanging, etc.
When you put things in terms of chess, these “deep” statements about “math” often become banal. And it works for any game that’s a “formal system” (eg. most board games).
even if it’s true everywhere forever, it might still not be provable, because Gödel.
No. Gödel’s completeness theorem says that if something is true in every model of a (first-order) theory, it must be provable. Gödel’s incompleteness theorem says that for every sufficiently powerful theory, there exist statements that are true in some models but false in others, and those can’t be provable.
The key word is “everywhere”.
Worse: If the chosen axioms are contradictory, then the theorem is effectively worthless.
And it is impossible to know whether axioms are consistent. You can only prove that they are not.
You can go deeper. To prove anything, including the consistency or inconsistency of a theory, you need to work within a different system of axioms, and assume that it is consistent, etc.
But that’s math. And its proof is math. And that proof is true everywhere forever.
I see philosophy as a place to make nonrigorous arguments. Eventually, other fields advance enough to do away with many philosophical arguments, like whether matter is infinitely divisible or whether the physical brain or some metaphysical spirit determines our actions.
Since this is a question that math hasn’t advanced enough to answer, we can have a philosophical argument about whether other fields will eventually advance enough to get rid of all philosophical arguments.
I see philosophy as a place to make nonrigorous arguments.
Wait do you think Bertrand Russell and Alan Turing and Kurt Gödel weren’t making philosophical arguments?
They are clearly mathematical. Starting with definitions and axioms and deriving results from there using mathematical statements.
They are clearly mathematical.
Sure. But they’re also philosophical. The categories aren’t mutually exclusive. Consider basic set theory, which is both mathematics and philosophy.
They all debated the question of what being mathematical means their whole lives.
And we determined that the resulting incompleteness proofs are valid mathematical proofs whose logical correctness has been verified by computer. https://formalizedformallogic.github.io/Catalogue/Arithmetic/G___del___s-First-Incompleteness-Theorem/#goedel-1
They already knew that. You’re treading an old, worn-out logical positivist path, one inspired by Wittgenstein, who worked closely with Russell (both were mathematicians and philosophers). Wittgenstein later saw his error, rejected his positivist followers, and explained that truth is not a correspondence to facts; rather, meaning is derived from use in language. This applies to all languages, formal and informal, including math and logic.
I see philosophy as a place to make nonrigorous arguments.
It’s the other way around: math is where you just ignore questions about what makes sense, what knowledge is, what truth is, what a proof is, how scientific consensus is reached, what the scientific method should be, and so on. Instead, you just handwave and assume it will all work out somehow.
Philosophy of mathematics is where these questions are treated rigorously.
Of course, serious mathematicians are often philosophers at the same time.
You’re just covering my third paragraph. Yes, everybody is a philosopher because we don’t have the tools to do away with philosophical arguments entirely yet.
Once a mathematical proof has been verified by computer, there is no arguing that it is wrong. The definitions and axioms directly lead to the proved result. There is no such thing as verifying a philosophical argument, so we develop tools to lift philosophical arguments into more rigorous systems. As I’ve shown earlier, and as another commenter added to with incompleteness, this is a common pattern in the history of philosophy.
I explicitly refer to your second paragraph.
Yes, you absolutely can argue with computer-verified proofs. They are very likely to be true (the same as truth in biology or sociology: a social construct), but to be certain, you would need to solve the halting problem to prove the program and its compiler correct, which is impossible. Proving incompleteness with computers isn’t relevant, because it wasn’t in question, and it doesn’t do away with its epistemological implications.
It is not necessary to solve the halting problem to show that a particular lean proof is correct.
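A concrete way to see this: the halting problem is about deciding the behavior of *arbitrary* programs, whereas verifying a *given, finished* proof term is a finite computation that always terminates. A minimal Lean sketch:

```lean
-- Checking this proof term is a finite computation: the kernel
-- reduces `1 + 1` and `2` to the same normal form and compares them.
-- No halting-problem oracle is needed to verify a finished proof;
-- only *searching* for proofs of arbitrary statements is undecidable.
theorem one_plus_one : 1 + 1 = 2 := rfl
```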
Lean runs on C++. C++ is a Turing-complete, compiled language. It and its compiler are subject to the halting problem.
I’m a chemist, and I just gave a class to students today. The main topic of the whole lesson was this: we have all these theories and methodologies; we are not going to study how they work and how to use them. Instead, let’s discuss all the limitations they have and when they do not work.
Former chemistry student here. In chemistry, every single thing you ever do gets multiplied by a ridiculously big number. Even a few drops of water hold on the order of 10^22 molecules (a full mole, 6.02×10^23, is only about 18 mL of water). So even the tiniest chemical reactions are massive exercises in parallel processing, and measuring in human-scale units means you might miss by a few sextillion in either direction.
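The back-of-the-envelope arithmetic, assuming a drop is roughly 0.05 mL:

```python
# Rough count of water molecules in a few drops of water.
# Assumptions: 1 drop ~ 0.05 mL, density of water ~ 1 g/mL.
AVOGADRO = 6.022e23        # molecules per mole
MOLAR_MASS_WATER = 18.015  # grams per mole
DENSITY_WATER = 1.0        # grams per mL

volume_ml = 3 * 0.05                              # three drops
moles = volume_ml * DENSITY_WATER / MOLAR_MASS_WATER
molecules = moles * AVOGADRO

print(f"{molecules:.2e} molecules")               # on the order of 10^21–10^22
```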
Isn’t it amazing internal combustion engines…ever work?
This is why engineers are insufferably smug.
For being trained to think within tolerances?
I wish doctors would do the same.
Social sciences: Mayhaps, but only for very specific conditions once in time
The author’s barely disguised ~~fetish~~ preconceived notion
There are 3 positions: mayhaps, yes, and no.
Yes and no hate each other, but for some reason they hate mayhaps more
Bisexual problems.
Psychology furiously staring from the corner but afraid to speak lest it be made to sit at the folding table with Astrology and Tarot readings.
Psychiatry in the front of the room covered in blood and sitting on a pile of cash.
They’re sharing a table with economics.
Math also fails sometimes; we’ve had to invent new math along the way, because math is only correct within the constraints of how we currently understand it. If those constraints are challenged, math evolves.
For example, imaginary numbers weren’t a thing for a good while, and some stuff didn’t work correctly. All math stands upon 1+1=2; we don’t know if that always holds true, but for now we assume it.
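A tiny illustration of the imaginary-number example: before they were accepted, an equation like x² + 1 = 0 simply had “no solution.” Python’s built-in complex type makes the once-“impossible” root concrete:

```python
import cmath

# The square root of -1 has no answer among the reals;
# in the complex numbers it is just the imaginary unit i (1j in Python).
root = cmath.sqrt(-1)

assert root ** 2 == -1     # the "wonky number" behaves consistently
print(root)                # 1j
```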
In fact, the entire foundation of math – its system of axioms – has had to be fixed due to contradictions existing in previous iterations. The most well known perhaps being Russell’s paradox in naive set theory: “Let X be the set of all sets that do not contain themselves. Does X contain itself?”
In fact, there have been many paradoxes that had to be resolved by the set theory we use today.
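Russell’s paradox can even be sketched in code. If a “set” is encoded as its membership predicate, asking whether the Russell set contains itself never bottoms out; in Python the contradiction surfaces as unbounded recursion:

```python
def russell(f):
    # A "set" is encoded as its membership predicate.
    # `russell` is then the set of all sets that do not contain themselves.
    return not f(f)

try:
    russell(russell)  # "does the Russell set contain itself?"
    verdict = "terminated"
except RecursionError:
    # If it contains itself, it doesn't; if it doesn't, it does.
    # The self-reference never bottoms out.
    verdict = "no consistent answer"

print(verdict)
```

Modern set theories (like the ZFC axioms used today) avoid this by restricting which membership predicates define sets.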
There are no correct axioms. You can change the axioms as you wish and make your own Math 2.0, and you will be able to apply it to anything that follows those axioms; finding things that follow them is the only hard part. We define 1+1=2, and it is true because we define it that way. If it does not hold in some physical system, then you are applying correct math to a system that doesn’t work with that math (i.e., you are the problem for assuming the same axioms hold for the real system).
I might go even further and say there’s no “math.” There is a wide variety of axiomatic systems (e.g., games). None has the sole claim to being “math.” Maybe they’re all “math.”
(On the other hand… I guess any system that contains the “natural numbers” would be sufficient for the bulk of what’s widely considered “math”.)
I’ll pick my side. They are all mathematics
(I believe “math” and “mathematics” are different. What is commonly called math is just numbers and stuff, but mathematics fits the general description of all axiomatic systems.)
Example, imaginary numbers weren’t a thing for a good while and some stuff didn’t work correctly
And here’s Lewis Carroll to regale us with a tale that absolutely won’t be misunderstood and taken at face value by later generations about how foolish these silly mathematicians are with their wonky numbers.
Axioms serve as a starting point.
Economics. You forgot Economics. Here’s a bunch of rules.
Let’s assume you are an economist. Now, if you first overestimate and then underestimate, on average your estimates are correct, ceteris paribus.
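The joke in numbers, with entirely made-up figures: two forecasts that are each wrong by the same amount in opposite directions have a mean error of zero, so “on average” the economist was right the whole time.

```python
# Hypothetical figures for illustration only.
true_gdp_growth = 2.0          # the "actual" value, in percent
forecasts = [4.0, 0.0]         # first an overestimate, then an underestimate

errors = [f - true_gdp_growth for f in forecasts]
mean_error = sum(errors) / len(errors)

print(mean_error)  # 0.0 — each forecast wrong, the average "correct"
```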
Economics: Figure out what rich people want to hear and get funded forever.
As an economics student, I tend to disagree.
Economists are themselves economic agents. Their minds, by the nature of the market, are often split between what they understand to be correct analysis and what their paymaster wants to hear. Add competition to that, and we get two kinds of people: one who sees but cannot talk, and one who talks but is blindfolded. That’s shit, but that’s how things are.