- Some may put it forward as an authority saying the opposite of what it actually says (either out of prideful ignorance of it, or, for those who know it, in contradiction to what they know);
- others, aware of the discrepancy, may be eager to throw the existing success of physics overboard and take refuge in the most hazardous speculations, finding satisfaction in prophesying the future coming of some nicer physics that would better fulfill their expectations by finally saying the opposite of what current physics says.

First we need to explain how Mathematics is the God of Physics, and which Mathematics
we are talking about. Mathematics is generally the only way to form any clear concept of a regularity
law, unaffected by any fantasies about how any given regularity law should be interpreted in each
case. So it is essentially required as a cornerstone of naturalistic expectations.

Then, I would roughly divide mathematics into 2 kinds: the "low" and the "high" styles of mathematics.
This division is of course not strict. Different criteria can be considered for this distinction, which
are not always equivalent, but are interestingly correlated.

One criterion is to label as "low" the finitistic mathematics (which deals with finite systems only), in other words the mathematics of algorithms. "High" mathematics is then the mathematics which involves infinite or continuous systems.

Another criterion is to label as "high" the mathematics of symmetric systems, and as "low" that of asymmetric ones. The correlation between the two criteria can be illustrated by the following example, which is actually a toy model of mathematical facts that play an effective role in theoretical physics.

It is possible to approximate a circle by a regular polygon with a large number of sides. This approximates the continuous symmetries of the circle by the still numerous symmetries of the regular polygon. However, when the circle is replaced by a sphere, this possibility breaks down: the 3D analogues of regular polygons are the Platonic solids, which are very few. So the symmetry group of the sphere cannot be approximated by any group of exact symmetries of a large but finite system: whatever system you may take in trying to approximate a sphere, as soon as more than a quite small number of the sphere's symmetries act on it, infinitely many more of these symmetries still do, and indeed essentially all of them. When symmetry matters, no small escape from continuity is mathematically conceivable.
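These standard group-theory facts can be made concrete with a small sketch (Python, numbers given only for illustration): the symmetry count of a regular n-gon grows without bound, while the rotation groups of the Platonic solids are capped at order 60.

```python
# Symmetries of a regular n-gon (rotations and reflections): the dihedral
# group D_n, of order 2n, which grows without bound as n does, so the
# circle's continuous symmetry can be approximated arbitrarily well.
def polygon_symmetries(n):
    return 2 * n

# In 3D the analogous route is blocked: beyond cyclic and dihedral groups,
# the only finite rotation groups are those of the Platonic solids, with
# orders capped at 60 (dual solids share the same rotation group).
platonic_rotations = {
    "tetrahedron": 12,
    "cube": 24, "octahedron": 24,
    "dodecahedron": 60, "icosahedron": 60,
}

assert polygon_symmetries(10**6) == 2 * 10**6  # as many as we like
assert max(platonic_rotations.values()) == 60  # no approach to the sphere
```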

Daringly, a third criterion is sociological: low-style math is the math of non-mathematicians, for whom math is "just a tool", while high-style mathematics is that of mathematicians, who see math as their home and as a reality in its own right.

In these terms, the large anti-mathematical trend I mean to point out among naturalists consists in dismissing the value of high-style mathematics, essentially for its touch of mysticism, to focus on (only accept the legitimacy of) low-style mathematics. Or, since it may look bad to make this position explicit, many of them express it completely implicitly, by giving descriptions of mathematics or physics as if low-style mathematics were the only mathematics that existed, or sufficed to express physics.

However, the laws of physics that have been found belong to high-style mathematics. This is what I see as actually meant by the famous Unreasonable Effectiveness of Mathematics in the Natural Sciences (Wikipedia; a page I wrote, with notes and references on the debate).

So physicists did not seem to expect this; but why? I can at any rate offer the following clue as to why high-style math is likely to be better suited than low-style math to express a law of Nature: high-style math is needed for a law to meaningfully *serve as a law of Nature*, because that style is needed for a law to carry its own necessity (the virtue of logical unbreakability).

Indeed, low-style mathematical laws would be unable to carry their own necessity:

- Take any algorithm and run it on some defective hardware: then nothing remains to prevent any law expressed by the software from being occasionally broken at any time and place in the computation.
- Without symmetries, every case differs from every other case, so that any law that worked on previous cases has no reason to still work in new cases.

Concretely, here is how this fact answers the people who imagine the conservation of energy as a postulate: consider a universe A with a certain distribution of energy at a given time *t*. Can we imagine that at just the next time *t*+d*t*, the energy appears to vary at one given place, in violation of the conservation of energy, everything else being the same? We can proceed by figuring out a different universe B which at that time looks very similar to A, with essentially the same distribution of energy, except for a different concentration of energy at that precise place. Now how can we transition from universe A to universe B? Of course, we might proceed just by destroying universe A at time *t*, so that it no longer exists from then on, then creating universe B to start existing only from that same time, as if it were old while it isn't.
However, that may or may not satisfy us, for the following reason: once we have taken our big scissors to cut the space-times of A and B at hopefully "the space-like slices marked by the same time all over the place", keeping just the past side of A and the future side of B, we still need a way to... kind of glue both pieces together. And this is where we run into trouble. Even allowing ourselves to do it completely arbitrarily, it is just mathematically impossible to do it without violating the laws of physics not just at one place, but more or less everywhere.
Concretely, any energy added at one place must come from somewhere else.
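In standard field-theoretic terms, this local character of conservation is expressed by a continuity equation (a textbook formula, recalled here for concreteness), where *ρ* is the energy density and **j** the energy flux:

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0
```

An isolated jump of *ρ* at one point, with the flux **j** unchanged everywhere else, simply fails to satisfy this equation: the change at each place is entirely accounted for by what flows through its neighborhood.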

That would not even be much of a problem from a low-style mathematical perspective: seeing the laws of physics as a computation, the cosmic cut-and-glue process, no matter how dirty, can just be done by switching off the cosmic computer (Doomsday), modifying its memory as much as we need, and then restarting it all Last Thursday. By the way, if consciousness were a computation, as naturalism claims, then for the same reason there would be no way to ensure the validity of our own thoughts (especially our memory), which would condemn us to skepticism and hyperbolic doubt. And skeptics really do subscribe to hyperbolic doubt when, faced with the case of Pam Reynolds (who had OBE perceptions while her brain was totally inactive), they put forward the possibility that the brain builds up memories of perceptions that were never real in order to fill the gaps of a given scenario. This idea that memories can be, and even sometimes have been, completely built up with all their vividness about experiences that were never real, is indeed a logical consequence of the hypothesis of the material nature of memory, and has no essential difference from Last Thursdayism.

The conservation of the electric charge has the same necessity as the conservation of energy. Maybe not all other physical laws have the same spectacular character of necessity, yet it remains hard to mathematically conceptualize, in quantum field theory, any process leading to an outcome which the laws would forbid. A semi-exception to this is given by the "wave-function collapse" which we shall comment on below.

That is why and how the laws of physics belong to a high style of mathematics and need to be accepted as such. This does not contradict the possibility of re-expressing them algorithmically, which is indeed another requirement for a mathematical theory to qualify as a physical law: it must be possible to effectively compute its predictions. Yet some deep structural difference remains between the law in its high-style formulation and any algorithm which can produce its predictions. Namely, the former looks much more *elegant* than the latter.

But what does it mean for some mathematics to be elegant? Interestingly, this is precisely a typical example of an essential topic of disagreement between naturalism and supernaturalism: the status of qualia.

A famous illustration is given by Mary's room thought experiment: is a sensation fully explained by the physical description of its neuronal stimulation pattern, or is there something more to learn by having this sensation as one's own personal experience? In other words, is a sensation, such as that of a color, anything more than the pattern of neuronal stimulations which forms it in the brain? This question may be hard to settle, because we do not have at hand a ready scientific description of the neuronal stimulation patterns of sensations.

Another famous example of the same question is "What is it like to be a bat?".

Yet another example of, I would say, roughly the same question is so much easier to actually try that... probably most people have already done it. Here it is: the difference between a melody and its written score. Someone can have learned music theory, then read a music score, know in principle how it should be played on the piano, be familiar with the sound of the piano and how each note sounds, and yet not figure out how nice the written melody is until actually hearing it. Of course, good musicians can manage to read a music score in the sense of playing it in imagination and thus effectively feeling it, but that takes special training or effort. I even once watched on a screen the display of the sound analysis of a piece of music I was hearing. I am not sure how faithful this display actually was, or whether it lost many fine crucial aspects of the sound, but I was very puzzled to experience how hard it seemed to figure out the connection between what I heard and what I saw...

So, when perceived by the right senses, a structure can induce a feeling (a quale, a subjective appearance) that is something more than the sum of the parts the structure was made of. Yet this feeling is not something complex like the structure was, but something simpler, and of a completely different nature.

Now the point of this reminder is that I see a very similar phenomenon crucially concerning the core sciences: mathematics and physics. Mathematics is much more than the sum of its theorems, and the laws of physics are much more than the sum of their experimental predictions. And the way they are more is not anything more complex, but something much simpler than the parts they are made of, and at the same time something transcending these parts: *they make sense*. In other words, I find that there is some special quale of what it is like to be a mathematician, and of what it is like to be a physicist. There is a quale of the activity of exploring high-style mathematics, which may be more or less lacking in that of low-style mathematics. Metaphorically speaking, the laws of physics are written in symbols of color, whose qualia we need to feel in order to fully understand these laws, even though, strictly speaking, no qualia exist among physical objects, the laws of physics, or any mathematical theory or entity.

Of course I cannot prove these things, since it is generally impossible to prove the existence of some qualia to those who do not happen to perceive them. For this reason, even those who perceive them may remain doubtful, either when challenged to justify them in debates, or even to themselves.

Now this is an important dimension of the opposition I see between Skepticism and Science: skeptics ignore and even despise the heart of science. For the core of the sciences is math and/or physics, the core of physics is mathematics, the core of mathematics is its high-style fields (some of which are also the part of math which matters for physics), and the core of these high-style fields of math is the qualia we can experience by studying them. And these qualia are a general light of scientific understanding which transcends all particular scientific facts or methods. Skeptics, with their favorite methods, miss that completely. Mathematical concepts and theorems are neither proven by double-blind randomized testing nor falsifiable, and yet they perfectly belong to Science.

An important quale of mathematics is the sense that mathematics forms its own reality. This is usually called mathematical Platonism. Unfortunately, the usual attempts of philosophers to describe it are quite terrible, usually made of claims which I would dismiss as category mistakes rather than as meaningfully true, false or debatable. So, let me offer my own formulation of mathematical Platonism. Actually, I would split it into two theses which may be considered independently of each other:

- Mathematics comes with its own ontology: its own type of objects (the mathematical objects, which include the mathematical structures, but are devoid of any qualia), and its own *concept of mathematical existence* relevant to qualify these objects. This mathematical ontology is meaningful and legitimate (valid) independently of both our physical universe and the thinking activity of any mathematician, while the concept of mathematical existence is distinct from, though possibly similar to, our other concepts of existence relevant to qualify other kinds of objects (not purely mathematical ones, which may carry diverse qualia).
- The world of mathematics has an objective, coherent and unifying architecture behind its formal diversity of possible theories, by which the main possible controversies find natural solutions (some of these solutions take the form of pluralism and are still rather satisfactory, with only relatively minor issues left).

But this second point, once made, supports the credibility of the first point, as there is no less reason to believe in the mathematical reality than in the physical reality, once it is found that mathematics has all the qualities in the name of which the physical universe is usually accepted as real (unless I missed something). Moreover, the "unreasonable effectiveness of mathematics", with the observed crucial role of high-style math in theoretical physics, supports mathematical Platonism. Yet this support may not be very clear because, inside the world of mathematics, the precise part which provides what I will here call the *native mathematical ontology* is the tandem of set theory and model theory; it is not the same as the parts of mathematics which matter in physics, so that the two ontologies (the mathematical and the physical) do not coincide. This gap will be commented on further below.

But the validity of mathematical Platonism, either with the native mathematical ontology or with some variant more suited to physics, remains controversial among both philosophers and physicists. For example, Peter Woit strongly supports the value of high-style math, while Carlo Rovelli and Lee Smolin oppose mathematical Platonism by denying high-style math. (Both Rovelli and Smolin work on Loop Quantum Gravity, a tentative approach to quantum gravity based on a kind of discretization of space-time, though from what I saw on Wikipedia it seems unclear whether it has succeeded at anything at all.)

Along such lines, here is an excerpt of the debate

- "

A much more caricatural situation occurred one day when I discussed with a philosophy student. He was skeptical of the claim that the sciences, especially physics with its mathematical theories, achieved any success in understanding the universe. He thought that theories of physics are made up, and that physicists just give themselves the illusion of verifying their theories: since they need their theories to interpret data, their reasoning would be circular; there would be no way to prove theories. I tried to explain that it really makes precise sense to say that we have successful theories verified by observation, and that the criterion for this is the success of these theories in reducing the entropy of observed data. But he did not understand what I meant, because he had not studied the concept of information entropy. So he would need to study it first. But he dismissed this request, claiming that if I need mathematics to define the concept of entropy and thereby justify the success of mathematics in physics, then my reasoning is circular. Well, in the same way, animals could claim that human language does not make any sense: if we tried to explain what human language can be useful or meaningful for, we would need to use human language for the explanation, so the argument would be circular as well.
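To give a hedged toy picture of what this entropy criterion means (my own illustration, not part of the discussion): once a theory fits the data, only the residuals need to be encoded, and their empirical entropy is far lower than that of the raw observations.

```python
import math
from collections import Counter

def empirical_entropy(values, bin_width=1.0):
    # Shannon entropy (in bits) of the histogram of the values
    bins = Counter(math.floor(v / bin_width) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# Toy "observations": positions of a body in uniform motion, x = 3t, plus tiny noise
data = [3.0 * t + 0.1 * (-1) ** t for t in range(100)]

raw = empirical_entropy(data)  # values spread over ~300 units: high entropy
# The "theory" x = 3t leaves only the residuals to encode:
residuals = [x - 3.0 * t for t, x in enumerate(data)]
res = empirical_entropy(residuals, bin_width=0.05)  # residuals cluster near 0

assert res < raw  # the theory compresses the data
```

The numbers here are arbitrary; the point is only the direction of the inequality, which is what "verified by observation" cashes out to in information-theoretic terms.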

In a similar vein, many philosophers of mathematics are very fond of so-called *intuitionistic logic*™ (a phrase so trademarked by one of the most absurdly anti-intuitive ideologies ever) and/or finitism, giving these topics an extremely oversized importance compared to their interest for mathematicians. As far as I could see, such philosophies are mainly about arbitrarily denying the validity of much of mathematics or its proofs, for hardly any serious or fruitful reason beyond the fun of this denialism. I mean, I can accept the study of what can be done with finitistic mathematics as a possible specialized topic in mathematics among hundreds of others; however, those who focus on philosophical issues making a fuss of their skepticism about the law of excluded middle are really wasting their time. Of course, everyone is free to undertake such an exploration of the mathematical world in ways restricted to these viewpoints of mentally transabled people. However, I would beg them not to go as far as expecting the taxpayers to provide for their metaphorical wheelchairs, because the world hopefully has more useful jobs for people who do not really like math than the job of just making a fuss about it.

More generally, aside from such extreme cases, a kind of obscurantism can be found to have taken the role of orthodoxy in academic philosophy, where the respective statuses of knowledge and ignorance have essentially been switched: consideration is only given to reasonings based on the purest scientific ignorance and immaturity of understanding. This is well aligned with the general principles of skepticism, of the form "what can I discover and verify if I am the most stupid and ignorant person possible", taking such a framework as a prerequisite for the validity of judgements. It is on this basis that some ill-defined concept of "naturalism" is regarded by philosophers as the most advanced thought of the time, raising the enigma of how this obscurantist orthodoxy of naturalism should be articulated with the scientific orthodoxy of mathematical Platonism. Hence the great debate about which of the two oxymorons, "Naturalized Platonism versus Platonized Naturalism", would be a better fit.

Now, here are some explanations I can think of for this attitude, by some physicists and many philosophers, of rejecting mathematical Platonism while still recognizing the physical (the so-called "concrete objects", as if this concept of a concrete object made any clear sense, a presumption of meaningfulness which reflects much ignorance of physics) as "more real" than the mathematical, or even as the only reality.

The possibly main explanation is that so many people are more familiar with the physical than with the mathematical. At least, it takes more work, and usually comes later in life, to become familiar enough with mathematics to perceive it as a case for mathematical Platonism than it took to gain familiarity with the physical and accept the reality of the physical on that basis.

Then, for those who finally become familiar enough with math for it to weigh as much as physics, the reluctance to endorse mathematical Platonism may of course be partly explained by the general difficulty of switching views. But more precisely, the difficulty here comes from the question it raises of how all these things can fit together. Namely, the difficulty comes from the temptation of monism: the assumption that there should be only one kind of reality, since a plurality of ontologies raises difficulties in figuring out their articulations. Once the mathematical and the physical are thus seen as competing for the role of the unique reality able to contain everything, the physical does indeed seem a better candidate than the mathematical.

Yet Mathematical Monism, also called the Mathematical Universe Hypothesis, has some supporters, most of whom also support the Many-Worlds interpretation of quantum physics. These should be distinguished from the computationalists, who are roughly the most common kind of naturalists, and who may be considered more or less mathematical monists too, but low-style ones (finitists), thus generally offset from genuine theoretical physics. This position is usually labelled the "Simulation hypothesis" or "Digital physics". Among its proponents, Gerard 't Hooft and H. Pierre Noyes are the only physicists I can see, while the others are usually clueless in physics (Stephen Wolfram, Brian Whitworth...).

On the other hand, Roger Penrose offers a trio of realities : the mathematical, the physical and the mental. This may sound good, but the problem is how they relate to each other. He describes them as cyclically dependent on each other, which is quite a mysterious architecture.

Now, mathematical Platonism brings an ontological challenge to naturalism (especially
physicalism), as it is *a clue about ontology*, thus a unique chance to bring light to the
previously mentioned ontological
question on the nature of consciousness. This challenge consists in the following questions:

- If physical systems are mathematical structures, or at least describable by them, then how could their ontology differ from the mathematical one? This question may seem unclear, since it can seemingly be solved by cheaply relabelling the physical ontology as a "mathematical" one. But the problem remains that it differs from the native ontology of mathematics.
- The discrepancy becomes even more acute concerning the ontology of consciousness: even if the mathematical nature of physical reality were not clear, the assumption that consciousness emerges from physical processes would anyway oblige us to define consciousness as a mathematical structure which is not natively "understood" by physical ontology, because it comes about "by accident". This would make it incomprehensible for the ontology of consciousness, famously pointed out as "I think, therefore I am", to be anything more than a mathematical one.

- One is to just be happily ignorant of the issue, as was my skeptic debater, who had never heard of the Mathematical Universe Hypothesis before I told him about it (though he thought I was the ignorant one for not having studied cognitive science, which is mute on ontological issues);
- another is to dismiss it as nonsense, not taking any mathematical ontology seriously;
- yet another is to develop interpretations of these ontologies, twisting the understanding of either or all of them so that they appear to coincide.

- As in my text on the Many-Worlds interpretation (assuming this interpretation to be correct), physical existence appears as a quantity divisible between the parallel worlds, where "some of these exist more than others"; while the native mathematical ontology gives equal recognition to all possibilities, so equal that it is insensitive to any such trick as considering different numbers of faithful copies of the same system (and even this trick could not match the requirements of many-worlds anyway).
- Mathematical ontology is ordered by its own growing-block flow of time, but this time of mathematics plays no role in the mathematical expression of the laws of physics (to the point that most physicists have no idea that such a time of mathematics exists); these laws describe the "time" of physics as being of the same nature as space (thus appearing to support eternalism for physical time), raising the question of how physical time could happen to look otherwise. Namely, physical time also feels to our consciousness like another growing-block flow, similar to the native mathematical time (and, once computers are made, able to emulate the computational time of finitistic mathematics), but it appears orthogonal to the native mathematical time when considering how the two are articulated at the piece of mathematics which forms the fundamental laws of physics. Of course, we have a well-known answer from thermodynamics, which provides a time orientation from outside the laws by using the Big Bang as the fixed initial state of everything, but... as an explanation by emergence, its relevance for ontology remains unclear, at least if this question of the status of physical time was meant as a genuinely ontological one.

Among these numerous problems are, for example:

- Is there anything wrong (impossible in principle) with Last Thursdayism ?
- Same question with Solipsism
- Same question with Boltzmann Brains
- Does it make sense in principle to imagine transportation as in Star Trek: scanning, transfer as information, reconstruction at destination, destruction of the original? Would there be anything wrong with the last step occurring a bit late?
- If the spontaneous collapse interpretation formed an acceptable ontology (or why not?), then what would be wrong with slow collapse?

Aside from these ontological issues, another quale which emerges from the study of both mathematics and physics is that of recognizing the situation described in the previous section: that the laws of physics indeed form a metaphorical lock, and that no mathematical key (candidate complement to the laws of physics) can fit into it. This is the unified understanding behind diverse no-go theorems against classical realism, and the feeling of a "conspiracy" in the face of all the experiences of trying diverse candidate mathematical keys and seeing them fail to fit.

A first approach is the idea of accepting as plausible the message which happens to be rather clearly given to us by Nature, rather than going for the headlong rush of seeing it as an accidental conspiracy of appearances hiding opposite kinds of underlying facts.

More precisely, this approach is the message of Logical Positivism, which was needed to motivate the success of modern physics away from imaginary obstacles: if some hypothetical parameter stubbornly and perfectly escapes measurability across a given range of diverse experiments (with no good "natural reason" for this), then it is reasonable to dismiss this parameter from the "explanations", i.e. from the expression of the laws of physics relevant to these experiments. Then the success of the theories which do not use this parameter, in terms of both mathematical consistency and confirmed predictions, validates their terms.

An important point, which so many people may be led to underestimate by their naturalistic prejudices, is how amazingly great the success of our theories of modern physics is, explaining so well so many experiments on the basis of a formalism which rejects such naively expected parameters. Now, this success is so well known to anyone caring to get informed, and other physicists have done this supporting work already, that any persistent belief in the underlying existence of these parameters in some "true" fundamental laws of physics which remain to be discovered is totally irrational. This is why I see no duty to develop this point in much detail myself.

So here is the resulting picture:

- Since there is no physical measure of the simultaneity of distant events (as expressed by Special Relativity), we should accept that the laws of physics contain no such concept, but only the concept of independence (separation by space-like intervals), and that such independent events cannot have any physically causal effect on each other (physics can only describe them as correlated through common causes from the intersection of their past light cones).
- The state of any physical system precisely consists in the mathematical object called a "density matrix", which gathers all possibly available information on the probabilities of any result of any possible measurement (so, this object being precisely made of the data of what is observable, there would be no sense in doubting its reality). Physically, measurements involve a process described by the concept of decoherence, yet the two are conceptually distinct:
- Decoherence only results in the situation described by many-worlds: a state practically indistinguishable from a persisting coexistence (classically probabilistic superposition) of all possible measurement outcomes, themselves not objectively distinguished from each other.
- But conscious observers perceive measurements differently: as a "wave-function collapse", which involves first a distinction of a list of possibilities, then a choice and actualization of only one of them. This wave-function collapse cannot occur before decoherence, otherwise some breach of the known laws could be observed. However, this condition remains mathematically ill-defined (not even letting us formally define some cases of clear fulfillment), because decoherence is an emergent process (at the limit of interactions with a large chaotic environment). Yet it should occur, in order to let the very mathematical structure of these physical states match the basic ontological reason for this structure (which is precisely the structure of the data of probabilities of results of any possible future measurements), escaping the ontological difficulties of many-worlds (divisibility of existence as a quantity). This leaves as a "mystery" for physics the question of when and how it can occur. The laws of physics do not describe this transition process, which needs to be arbitrarily postulated from outside the laws. This suggests that it must belong to a non-physical level of reality (it may be called a metaphysical process, occurring "outside space-time", rather than a physical process).

- If two distant particles from an entangled pair are independently measured (the "measurement" events being separated by a space-like interval), then the correlations of their results, given this impossibility of advance determination, cannot be accounted for by local causalities; since physical causalities are local, this confirms the non-physical nature of the process of wave-function collapse.
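The distinction between decoherence and collapse can be made concrete with a toy sketch (a standard dephasing channel, my own illustration, not a model of any specific system): decoherence drives the off-diagonal coherences of a density matrix to zero, leaving the diagonal outcome probabilities intact, but it never selects one outcome; that selection is precisely the collapse left unexplained.

```python
import numpy as np

# Density matrix of a qubit in the superposition (|0> + |1>)/sqrt(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)  # [[0.5, 0.5], [0.5, 0.5]]

def dephase(rho, p):
    # One step of a dephasing channel in Kraus form: with probability p the
    # environment "records" the basis state, damping the off-diagonal
    # coherences while leaving the outcome probabilities (diagonal) intact.
    K0 = np.sqrt(1 - p) * np.eye(2)
    K1 = np.sqrt(p) * np.diag([1.0, 0.0])
    K2 = np.sqrt(p) * np.diag([0.0, 1.0])
    return K0 @ rho @ K0.T + K1 @ rho @ K1.T + K2 @ rho @ K2.T

for _ in range(50):
    rho = dephase(rho, 0.3)

assert np.isclose(np.trace(rho), 1.0)  # still a valid state
assert np.allclose(np.diag(rho), 0.5)  # outcome probabilities unchanged
assert abs(rho[0, 1]) < 1e-7           # coherence gone: a classical-looking mixture
# ...but nothing here picks ONE outcome: that transition lies outside the dynamics
```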

Now, my skeptic debater appeared quite reluctant towards such a logical-positivist attitude. So he expressed his skepticism towards these great advances of modern science as follows:

- Non-local hidden variables (Bohmian mechanics)
- Spontaneous collapse

Besides these, there is the range of super-deterministic locally causal theories, which is usually not considered by specialists but which, compared to the above, would have the great advantage of not involving instantaneous action at a distance. It happens to be his favorite. Here are some excerpts of his replies:

- "

And maybe what I build is trivial too. But it is an exercise that I rarely come across, therefore it is missing. When we can no longer go down into the nature of things to understand them, it is sometimes interesting to construct hypothetical examples and see what we could conclude from them. But in any case I want to make it clear that I am not naive about what I think I accomplish. I never said I had the slightest quantitative theory here. There is not 10% usable stuff. It is the approach that I find interesting, and the illustrations of mechanisms it gives.

*Overall I like superdeterminism as an explanation for entanglement. Methodologically
speaking, I find it interesting to wonder what one could observe as bizarre behavior if one
were, as an observer, included in a global determinism. It is possible that one would observe
nothing in particular, but it is also possible to observe behaviors like entanglement.*"

The clearest reason is about quantum computation, which cannot be accounted for by locally causal theories (assuming behaviors given by some kind of classical computation with locally limited resources). Actually I had to raise the issue in the discussion, because he was not aware of it: he did not even know there was such a concept as quantum computation, which may sometimes be more powerful than classical computation. His first reaction was:

- "

We are already talking about complexity for algorithms that are not certain. Ideally I thought that there were quantum algorithms that completely cancel the non-solutions by destructive interference and therefore could give solutions with certainty. In theory there is nothing against it; in practice that does not seem to be the case. And therefore talking about complexity with a form of uncertainty is not very clean; it is as if I said that I had a sure way to win at roulette at the casino, by doubling my bet each time. In theory I am not lying, it is the case; in practice it is a form of scam.

Then Shor's algorithm, for example, uses a quantum circuit designed for a given N, and that too I find a little debatable: to speak of the complexity of a machine, or of an algorithm, or of both, if we allow ourselves to change the machine as N changes.

From a circuit point of view, it is much less direct than with a traditional algorithm. When you hit the limit it is not enough to "add something": you have to rethink the thing as a set with more qubits; it is not a simple addition of qubits to a system to make a bigger one, and the same for logic circuits.

So there is initially a real problem of definition; at least it is not trivial to define and compare complexities in a context of this kind. But I keep looking for an ideal algorithm: the thing that does something completely useless from a practical point of view but that properly demonstrates the gain in the most canonical way possible."
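Such an "ideal, practically useless but canonical" demonstration actually exists: the Deutsch-Jozsa algorithm decides with a single oracle query whether a function on n-bit inputs is constant or balanced, where a classical algorithm may need up to 2^(n-1)+1 queries. Here is a toy statevector simulation of its phase-oracle form (my own sketch for illustration; the function names are mine, not from the conversation):

```python
# Toy statevector simulation of the Deutsch-Jozsa algorithm (phase-oracle form).
# f maps integers 0..2^n-1 to 0 or 1, and is promised to be either constant
# or balanced (1 on exactly half the inputs).

def deutsch_jozsa(f, n):
    N = 2 ** n
    # Hadamards on |0...0> produce the uniform superposition
    amps = [1 / N ** 0.5] * N
    # Single oracle query: phase flip on inputs where f(x) = 1
    amps = [(-1) ** f(x) * a for x, a in enumerate(amps)]
    # Final Hadamards: the amplitude of |0...0> is the normalized sum of phases
    amp0 = sum(amps) / N ** 0.5
    # Measuring |0...0> with probability 1 means f is constant; 0 means balanced
    return "constant" if abs(amp0) ** 2 > 0.5 else "balanced"

print(deutsch_jozsa(lambda x: 0, 4))      # constant function
print(deutsch_jozsa(lambda x: x & 1, 4))  # balanced function (last bit)
```

The simulation itself takes time exponential in n, which is precisely the point: the quantum device answers in one query what the classical emulation must grind through the whole state space to reproduce.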

To excuse him, the above excerpt is from January 2018, thus 20 months before Google's breaking news on the topic, which finally forced everyone to hear that it had to mean something. On the other hand, more knowledgeable physicists can be better aware of this discrepancy between the predictions of quantum mechanics and what can be expected from classical computational theories. Namely, 't Hooft himself wrote, in a comment on P. Woit's blog article in 2012, that his ideas "

But what a strange universe it would be, that would basically work in one kind of way (local classical computation), but on top of that would be programmed in such an extraordinary way as to maintain the appearance of perfectly following a completely different kind of laws (quantum mechanics) for seemingly all experiments, until the day when its success in doing so would completely break down because of special circumstances which make it run out of the computing power needed to maintain this appearance. Actually I once saw a physicist (Valerio Scarani, if I remember well) make a similar remark about the relevance of experiments to test the violation of Bell's inequalities: it would be so strange to see quantum mechanics perfectly predict everything in all previous experiments, including those on entanglement with defective devices failing to meet the conditions for violating Bell's inequalities, and then suddenly break down when devices are perfected enough to meet them. Now, since experiments confirmed the expectation that this paradoxical prediction of quantum mechanics would keep working on perfected devices, against classical realistic expectations, why would the case of quantum computing be different?

This leads us to the other answer I see: the violation of Bell's inequalities (the prediction of quantum mechanics which is the whole point of the concept of "superdeterminism": the claimed virtue of a local deterministic law would be to match this prediction). Why this is a really strong problem is not so easy to explain clearly, but I will try, by analogy with a different, more simply expressible mathematical problem that is subject to the same issues. It is just one arbitrary example from an endless range of similar problems.

Here is the example. Let us call *conspirational number* any nonzero integer whose
exponential only has finitely many 3s in its decimal expansion.

Now, does a conspirational number exist? The point is, any good mathematician would be confident that no such number exists, and yet would not be able to write a proof of this. It is even worth hesitating over whether this conjecture is provable at all. Presumably no simple proof exists, so any proof would be extremely complicated. But the point is that mathematicians do not need such a proof, since their intuition suffices, instead of a proof, to know the non-existence of conspirational numbers anyway.
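Though nothing short of a proof settles the conjecture, a quick empirical check illustrates the intuition behind it. The digits of exp(n) behave, as far as anyone can tell, like random digits, so about one in ten should be a 3, and 3s should never stop appearing. Here is a small sketch (my own illustration, using Python's arbitrary-precision `decimal` module) counting 3s among the leading digits:

```python
from decimal import Decimal, getcontext

# Count occurrences of the digit '3' among the first k decimal digits of exp(n).
# If the digits behave "randomly", roughly k/10 of them should be 3s.
def count_threes(n, k=500):
    getcontext().prec = k + 20  # extra guard digits for safe rounding
    digits = str(Decimal(n).exp()).replace(".", "")[:k]
    return digits.count("3")

for n in (1, 2, 5):
    print(n, count_threes(n))  # each count is in the vicinity of 500/10 = 50
```

Of course no finite count proves anything; the check only shows why the "conspiracy" required for the 3s to eventually stop looks so implausible.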

Now, superdeterministic laws are like conspirational numbers: mathematical intuition (the qualia of what it is like to be a mathematician) can give confidence that such a law cannot exist, despite the lack of proof of this impossibility. The reason is essentially the same, both for the confidence in the conjecture that no superdeterministic law can exist, and for the lack of means to prove this non-existence. Of course I cannot prove the validity of this analogy between the two problems; I am simply confident in it, just as I cannot prove, but am confident, that conspirational numbers do not exist.

Here, by a superdeterministic law, I do not even require one which matches many details of the Standard Model of particle physics. I expect toy models to be impossible as well. Well, maybe not the absolutely most straw ones. So let us be more precise: my conjecture is the non-existence of any toy model of a superdeterministic law (i.e. one which statistically violates Bell's inequalities) that would be Turing-complete on the visible level, with a lower bound on the efficiency with which it can process visible computation. Here by "visible" I mean as opposed to the "hidden" character of hidden variables. So I mean a law of a world in which it is possible to build a visible computer, in the same sense as the computers we have in our world, able to make classical computations (and to use the result as a choice of direction in which to measure one entangled particle).

Here is a quote from the conversation:

- "

The precise aspects of science picked up by skepticism are somehow naturally those most likely to be picked when approaching science in a religious manner, that is, as an object of evangelization. They may be the clearest answers to the question of what are the most straightforward things to share if you want to spread a scientific mindset. In doing so, skeptics may have overlooked the fact that this question of how to most clearly and easily share a scientific mindset may be a wrong question about such a mindset, while their dear goal of popularizing science may be a kind of oxymoron: the better something fits the holy format of popularization, the less likely it may be to reflect a proper scientific mindset. In particular, popularization concerns are more typical of extroverts, while a proper scientific mindset may actually require some introversion instead. Other crucial aspects of science thus missed would be much harder, or sometimes impossible, to share in such popularized manners:

- An open-ended range of hundreds of methods instead of just a few
- Extensive enough background knowledge (familiarity) on the relevant subject matters
- Discernment skills (either from innate intelligence or long training), including on how to apply the diverse possible methods and principles appropriately
- And of course, once you have officially taken as your objects of worship a few scientific methods and principles (either good, or biased, like the focus on empiricism at the expense of theoretical concerns), which are usually so convenient to put forward as a good excuse to defend your prejudices against the conclusions of people who followed different methods, you should not just throw them out of the window for the few cases when they turn out to refute your prejudices.

I pointed out that scientific methods could be usefully complemented, in some cases, by methods and inspirations from libertarianism. Indeed, some good ideas and tools can be found there. Some of its principles are already exemplified by the success of free-market economies over the Soviet ones; along similar lines, some more advanced social technologies remain to be designed and implemented through a better use of the information technologies which already came as a fruit of science.

Unfortunately, current mainstream libertarian ideologies fell into similar traps with respect to the potential of libertarian solutions as skepticism did with respect to scientific ideals: the trap of perverting their best principles by essentializing them and neglecting the need for thorough analysis to discover the appropriate ways of applying them. Namely, many fell to the temptation of extrapolating their confidence in the few classical methods of the free market, justified by the successes we know in classical cases, into dreaming of a straightforward universal applicability of these methods in all cases. They can rightly point out some cases where these methods were attacked for either wrong or controversial reasons, such as the institution of rather heavy taxes to maintain sub-optimal systems of social security and official education; and extrapolate from there to less wisely dismiss as similar "attacks on liberties" any other calls for regulation, namely those driven by environmental concerns.

So in the discussion, I drew the parallel between both ideologies as follows:

- Should God have taken up the duty to create the world with the precise details
needed to ensure that there can exist a category of people enjoying the guarantee of always
being the right ones under just one condition: that they apply "the right method", well known in
advance, against anyone else with a different line of thought?

This reminds me of the ultraliberals who essentially (not explicitly, of course) believe that since freedom in general is a fundamental value, and the free market proves in many cases the most effective method for everything to go well in the best of all worlds, therefore God had to conceive the laws of climate dynamics so as to always prove right the fanatics of the best methods, namely those of the free market.

Ironically, this strange hypothesis that the universe would happen to be adapted to the expectations of methodologists, for the intellectual comfort of investigators (in terms of explorability: the superiority of a fixed exploration method), would actually be very bad news for these investigators. Indeed:

- It would make their work boring;
- It would open the way to the automation of their work, so that machines could ultimately outcompete them and leave them jobless.

So this quest for truth, undertaken under human conditions, is just a game. It does not really aim to uncover these truths, providing an access to them which did not exist before, since all (or at least most) relevant truths to be so uncovered were already known before the game started, and will be disclosed again anyway once the game is over. Instead, the real goal of the game is to play. It needs to have some other, visible goals, and these visible goals need to appear serious, since this is needed to keep the motivation and opportunity to play the game seriously.

This game of the quest for truth, like any other game, is neither fair nor unfair:

- It is not fair, in the sense that there is no such thing as a fair share of abilities and of the diverse other sources of chances (circumstances...) between people, to ensure that the resulting visible success (reaching the correct truth) would be any proper measure (criterion) of the merits of players or of any specially valuable quality;
- It is not unfair, in the sense that there is no such thing as a supreme privilege given to winners and denied to losers, such as a right of entrance to heaven. Thus, there is really no such thing as a charitable duty to correct other people and drive them to correct beliefs in order to "save" them either. The diversity of chances being no secret, it is well taken into account when assessing the deeper interest behind a given level of visible success.

In particular, different people may happen to be at different stages of their spiritual evolution (in particular, of their series of reincarnations). It can seem funny to watch children's mistakes; but everyone has been a child someday, and the experience of committing and struggling with mistakes can be a necessary part of the learning process.

Let me reply to the possible suspicion that, by the above picture, I would be undermining the sense of seriousness (meaningful purpose) of scientific investigation. I happen to be very serious by nature, tempted to take everything extremely seriously. This went to the point of leading me to spoil my life trying to follow purposes and requirements which claimed to be serious but actually weren't, such as "giving my life to God" with Evangelical Christianity, and following the academic system up to a PhD as if that were needed to fulfill my wish of doing great science in my life, when it was actually a huge waste of energy. After such disastrous attempts to integrate myself into the university system, I finally left it for good, disgusted by the reign of vanity and the lack of meaningful purpose in so much of what goes on there. I already commented above on the mess of scientific popularization. I also reported elsewhere the degree of vanity I found in academia, in both scientific research and teaching activities. Indeed, what is the sense of passing an exam, really? What is the sense of repeating the same lecture every year, on the same topic which thousands of other teachers are also teaching around the world, and still doing it as badly as decades ago, without anyone giving any serious thought to the needed restructuring? What is the sense of racing to publish a given finding in a given popular research field, in the hope of doing it either before or at the same time as others, so as to be listed as one more of its co-discoverers? What is the sense of doing some "great" work which will only interest the curiosity of a handful of specialists of the same topic, but has no chance of being of any use to the rest of mankind anyway?
What is even the sense of all the specialists of a given field focusing their work on publishing stuff to be read by their peers working on the same field around the world, while none of them even cares to help maintain a list of the existing research teams in this field (I was the one doing it for them, despite being out of that system)?

One vanity for another, let us compare the value of rational investigation to that of passing
an exam. What really matters there is not, in itself, the discovery of the right result, since
the truths to be discovered are anyway no mystery from the viewpoint by which the ultimate
value of the investigation will finally be appreciated. What matters instead is the method
which is followed. Not because the right method is fixed in advance, but precisely because
it isn't: you have to invent your own, and see how it goes. You have a large freedom in both
your choices of target questions and of methods to investigate them, and no secret will
remain about this.

A possibly legitimate method is to copy your work from your neighbor's. Indeed this
can be valuable in two ways. One way is to save your time, using the fruits of his
work as a basis for your own other works, the different game you want to play,
where you can then develop and test different skills. The other way is to test your skill
at discerning which works are worth copying from, and which aren't.

But if you altogether undertake to get and wear dark glasses, proclaim that these glasses
are the brightest of all (since only some of the brightest lights are visible through them),
and complain, against your neighbor who disagrees with you on this, that you cannot
see whether his work is worth copying from when you look at it through your glasses,
then you may be the one actually failing.
