Using terms inconsistently

So, I feel the need to defend my uses of “=” and “≠”, mainly because I am differentiating uses of ≠ but not differentiating uses of “=”. If I differentiated, say, the identity “A=A” from the identity “B=B”, it would mean not that these were different identities, but that there were two kinds of identity. That is a can of worms I am not ready to open yet. But the choice makes more sense if you think of the concept of equals, including identity, as being the opposite of making distinctions, which is what identifying kinds of identity would be. This is what makes equals universal, while not-equals is always special, even though I am aware that this undermines the concept of difference, making (allowable) inconsistencies with my previous post.

We can say, for example, that 1+2=3 and 2+3=5, and then that 1+2 ≠ 2+3, which in effect means that the two equals signs are for different equalities. No problem there, and the difference is kept with the symbol ≠, which is the way it should be. But how can I defend that there shouldn’t be different kinds of identity, when there should be different kinds of distinctions? Well, is the way that “A=A” holds any different from the way that “B=B” holds? I would say no. And, again appealing to intuition: there is a big difference between the way a question is different from a neutron star and the way a lemon is different from a lime. So I am committed to the idea that there are different kinds of differences, but there is only one kind of sameness.

Mathematical Relationship between Logical Pluralism and Vagueness

There can be a pluralism of logics because ≠ ≠₂ ≠₁, where ≠ is classical negation. This can be justified because “1 ≠ 2” and “1 is not 2” are basically the same statement. However, “not 2” has different meanings in different logics, so 1 ≠ 2 means something different in paraconsistent logic (we’ll use da Costa’s C1). We mark this difference by writing 1 ≠₁ 2, and claim that the difference between classical negation (≠) and paraconsistent negation (≠₁) is marked ≠₂. Hence ≠ ≠₂ ≠₁.

I mentioned in a previous post that we cannot generalize this statement into ≠ ≠ ≠, because it is a contradiction. Nevertheless, a sense in which ≠ ≠ ≠ is true is the sense ≠ ≠₂ ≠₁.

Now, vagueness is the situation where it is not clear whether 1 ≠ 2. To illustrate, take this curve:

[Figure: graph_avg_weight1, a curve that can be read as either one heap or two.]

Here there is vagueness over whether we have two “heaps” or one. This thing could be 2 or it could be 1, and so in a sense 2 = 1. How is this handled by logical pluralism? Paraconsistent logic would allow that A: “1 is not 2” and ~A: “1 is 2” each have a sense in which they are true, while classical logic would explode. The reason for this is entirely based on the difference in negation. C1 creates a new sense each time a true possibility is negated, making the negation of a possibly true row in the truth table have two senses, one in which the negation is true and another in which the negation is false.

(A ≠ ~A) ≠₂ (A ≠₁ ~A)
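To see the contrast in negations concretely, here is a minimal sketch in Python. It compares classical two-valued consequence with a three-valued, LP-style consequence relation; LP is only a simple paraconsistent stand-in of my choosing, since C1’s machinery of “senses” is more involved than a truth table.

```python
from itertools import product

# Three values for the LP-style stand-in: T (true), B (both/glut), F (false).
T, B, F = "T", "B", "F"
DESIGNATED = {T, B}  # values that count as "true enough" to preserve

def neg(v):
    # LP negation: swaps T and F, leaves the glut value fixed
    return {T: F, B: B, F: T}[v]

def entails(premises, conclusion, atoms, values):
    """Valid iff no valuation makes every premise designated and the conclusion not."""
    for vals in product(values, repeat=len(atoms)):
        world = dict(zip(atoms, vals))
        if all(p(world) in DESIGNATED for p in premises) and conclusion(world) not in DESIGNATED:
            return False
    return True

A     = lambda w: w["A"]
not_A = lambda w: neg(w["A"])
B_    = lambda w: w["B"]

print(entails([A, not_A], B_, ["A", "B"], [T, F]))     # True: classical explosion
print(entails([A, not_A], B_, ["A", "B"], [T, B, F]))  # False: explosion blocked
```

Classically, A together with ~A entails anything; in the three-valued stand-in, the valuation where A takes the glut value blocks that inference, which is the non-explosive behavior described above.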

The point is that how you handle negation changes how vagueness is handled (or not handled). A difference in negation also gives rise to an entirely different logic. Vagueness can be described completely as a failed distinction/negation, so that even though we want “1 is not 2”, vagueness makes this distinction fail. A different negation yields a different way for the distinction to fail, but no logic “solves” vagueness completely. This is the mathematical relationship between vagueness and logical pluralism.

This may be made clearer with another example. Vagueness is the situation when a distinction fails, which can be described as the failure to distinguish = from ≠, so = = ≠.

Now, for classical logic, if vagueness renders 1 = 2 we can prove = = ≠; likewise in paraconsistent logic there is no problem having = = ≠ as another non-explosive contradiction. This means in particular that ≠ = = and ≠₁ = = (each inequality sign collapses into the equality sign). Substituting, we get that things can get so vague we can’t tell the difference between ≠ and ≠₁; in other words, ≠ = ≠₁. And from our previous statement ≠ ≠₂ ≠₁ we have ≠₂ = =, and vagueness is now mathematically related to logical pluralism.
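Laid out in display form, using the notation introduced above, one way to read the chain of the last two paragraphs is:

```latex
\begin{align*}
&=\;=\;\neq &&\text{vagueness: the distinction between $=$ and $\neq$ fails}\\
&\neq\;=\;= \quad\text{and}\quad \neq_1\;=\;= &&\text{in particular, each inequality collapses into equality}\\
&\neq\;=\;\neq_1 &&\text{substituting: the two inequalities cannot be told apart}\\
&\neq_2\;=\;= &&\text{so the mark of the difference between logics is itself vague}
\end{align*}
```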

The symmetry between Completeness and Vagueness

The (Problem of the) Heap of Sand is the classical representation of vagueness, where you create an inference: if you take away a grain from the heap, then it remains a heap. This is a good inference for a long time, but it eventually breaks down due to the vagueness of what a heap is (eventually you will have taken so many grains away that it won’t resemble a heap anymore). And you can’t even tell exactly when the inference will break down. So, as you can see, vagueness has a very precise and non-vague formulation. Something similar to this strange property of vagueness (that it is a clear and distinct concept, even though it is a concept about indistinctness) is common for mathematical concepts. That is, mathematical concepts are chosen and defined so that they contain the terms necessary to get round their own difficulties (their faults or cracks), as all concepts are imperfect. Example: “Completeness”.
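As a toy illustration, here is what happens the moment you force the heap predicate to be precise; the cutoff of 100 grains is an arbitrary assumption of mine, and the arbitrariness is exactly the point.

```python
def looks_like_a_heap(grains: int) -> bool:
    # A crude stand-in for the vague predicate "heap"; the sharp
    # cutoff of 100 grains is an assumption no one actually believes.
    return grains >= 100

grains = 10_000
while looks_like_a_heap(grains):
    grains -= 1   # the sorites step: take away one grain, still a heap

print(grains)  # 99: the inference broke here, but only because we smuggled
               # in a sharp cutoff; the vague predicate supplies no such line
```

Any precise model of the predicate behaves this way: the inference fails at some exact grain count, which is just what the vague concept refuses to supply.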

The difference between vagueness and completeness is:

1) The term completeness is in direct opposition to the claim that every concept has a “fault” or “crack”; in this it is a “perfect” term. The definition of completeness, however, must be vague, because the terms used to define completeness all have cracks.

2) The definition of vagueness, by contrast, is perfect, but the term refers to an essential fault of all terms, one that is always perceived (perception is always vague). Vagueness is defined as how the if-then fails: the “heart of logic” (the paragon of precision) and the most reliable way to preserve truth. It is clear and distinct in its description of how the if-then fails. Even though each of the terms in the definition of vagueness has cracks, the definition is precise.

The term vagueness refers perfectly to its referent because it is well defined as an indistinctness; the definition of completeness is vague because the term refers to something that can’t be perceived, a perfect whole.

Stuff, and Higher-Order Vagueness

At the heart of real analysis and the study of real numbers is a confusion between points and “stuff,” or how points can perfectly describe distance/space/extension. Put the other way around, how can extension be reduced to points? Or how can we know the extension (with points) of a length/width of an object?

This question is caught up in measurement, and in the relationship between word and object. I have seen people give (real) number a privileged position between word and object, but I would argue that number suffers from the same flaws that language suffers from, including vagueness. One reason is that there is vagueness in what to call “one” (see the classical “heap of sand” in a previous post); further, there is vagueness in when something is countable or uncountable. Quantities like the size of an electron cloud, an amount of water, or a human height are uncountable, but what about popcorn or puffed rice? These could be counted, but should we be counting the grains? Should we count water molecules? Should we go back to a particle model for electrons so we can count them? This “should” becomes more pointed when we consider the ancient belief that counting or taking measurements of humans directly endangers them (Feyerabend 1999). Why was it believed that being unsure which side of the microscope you are on is dangerous?

Vagueness runs right through these issues from the finer points of graduate school mathematics to the ethical issue raised above. And of course it does, since we would like number to draw a clear line between word and object, point and “stuff”. Strangely, insight into these distinctions can be garnered by understanding just how troubling (and what) vagueness is.

Enter higher-order vagueness. The question has been put to me: “Well, the vagueness between, say, ‘high up’ and ‘not high up’ (this can easily be pictured on a Cartesian graph with a line gradually going down from left to right) can be dealt with by adding a third ‘uncertain’ value/region.” The trouble here, as is well known, is that adding an “uncertain” region only adds two new borderline cases: one between “high up” and “uncertain”, and another between “uncertain” and “not high up”, so that new “higher-order” uncertainty regions have to be added for these borderline cases. And now we have introduced new sources of vagueness, etc. Ultimately, the pursuit of conquering higher-order vagueness by exchanging borderlines (points) for intervals (stuff/extension) is a vagueness between points and intervals.
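A small sketch of the regress, with made-up cutoff numbers (90, 100, and 110 are assumptions of mine, standing in for wherever the lines get drawn):

```python
def classify_two_valued(height: float) -> str:
    # one sharp borderline, at 100
    return "high up" if height > 100 else "not high up"

def classify_three_valued(height: float) -> str:
    # adding an "uncertain" region between two cutoffs
    if height > 110:
        return "high up"
    if height < 90:
        return "not high up"
    return "uncertain"

# The two-valued scheme had one sharp borderline (at 100); the three-valued
# scheme now has two (at 90 and at 110), each as sharp, and as arbitrary,
# as the one it replaced.
print(99.9, classify_two_valued(99.9), "|", 100.1, classify_two_valued(100.1))
for h in (89.9, 90.1, 109.9, 110.1):
    print(h, classify_three_valued(h))
```

Each repair of a borderline by an interval simply relocates the borderline to the ends of the interval, which is the regress described above.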

As that sinks in, realize that a vagueness between points and intervals is a general problem reproducible anytime vagueness rears its ugly head. If we draw the connections from points to words and from intervals to the “stuff” of objects, we find that the line between words and objects is vague, which is also well known. What is new here is that we found this well known vagueness by investigating vagueness itself in a general way.

To summarize, I would say that the vagueness between word and object is an essential or “stock” vagueness that crops up any time we are in vague territory, and it is at the heart of the analysis of “real” numbers. In this sense, vagueness is an important ultimate concept for mathematics, and it ought to be mentioned in analysis textbooks that vagueness is what the book is about.

 

 

—The following is a tribute to my father, Kevin Barnhurst, who passed a year ago this month–

I decided to make this post a tribute to him.

Dad and I were working on this essay (http://journals.sagepub.com/doi/full/10.1177/1464884916689150) when he died.

Dad was a flawed human being, but one comfort is that I almost exclusively remember good things about him, and feel pleasure in remembering him. I know that’s good for both of us. We were at odds a lot when I was a kid. I went through a different kind of school system, engineered to dumb down the American population, and entered college a logical positivist by default, but underneath all that wash I was deeply skeptical of my “education”. For Dad the situation was reversed: his family didn’t trust his decision to enter college. For him, school was how to become educated; for me, what education I have was a result of conversation (with him and many others). I probably would not have gone to college at all if Dad hadn’t pushed me hard to apply. That was one of the strange things about Dad: he was very forceful, which only made me more stubborn, but he softened later in life and knew how to make his force felt in a strangely soft way.

We kept a long tradition of holding protracted conversations in the evenings and into the night. I owe my intellectual development primarily to him, and it is strange how long it took me, all the way to the last few years of his life, to realize what a gift that was and to reach an understanding that allows me to respect his work.

Magical Thinking in Mathematics

The goal for today is to prove that magical thinking is rampant in mathematics. First of all, let’s define magical thinking. I would say that magical thinking is a kind of metaphorical thinking, as in the metaphor “My heart is the sun”, only with the added idea that writing these words/making the metaphor exerts some force towards making the metaphor true, to some degree or in some sense. Magical thinking is the claim that saying “My heart is the sun” actually warms my heart.

Now the way that mathematics uses magical thinking is to start with a metaphorical idea of difference. For example, the difference between a “raven” (1) and a “writing desk” (2) metaphorically (not actually) is the difference between the “north star” (3) and the “form of thinking called questioning” (4). It is fairly intuitive that the difference between (1) and (2) is different from the difference between (3) and (4), but mathematics amalgamates all differences together into one concept with metaphor. And it is a particular kind of metaphor that asserts that difference actually works that way.

Even though 3 and 5 are less different (a difference of 2) than 3 and 9 (a difference of 6), these differences are not taken into account in the traditional mathematical symbol for difference, the ≠. Traditionally 3 ≠ 5 just as much as 3 ≠ 9, so the identity of difference, ≠ = ≠, is enforced.
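In code, the point takes two lines: the inequality operator reports only that a difference exists, not how much.

```python
print(3 != 5, abs(3 - 5))   # True 2
print(3 != 9, abs(3 - 9))   # True 6
# '!=' collapses the two very different magnitudes of difference (2 versus 6)
# into the same undifferentiated 'True'
```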

Mathematics asserts an ultimate concept “Difference” that is universal—it works for any situation where there is difference, making any difference “complete” and it does so by metaphorically joining disparate differences. Hence, it falls under my definition of magical thinking.

I am doing the opposite of what Derrida did with his Différance. Derrida added senses to difference, allowing it a history and a place in language; I am suggesting that we subtract, or better, utterly divide Difference into differences.

The rest of the sciences follow suit, of course, since mathematics is the language of the sciences. My advisor for my M.S. in mathematics once said “mathematics is the poetry of the sciences.”

Politics of Mathematics

Proof by contradiction follows from the law of excluded middle ((p or not-p) is universally true), the characteristic law of classical logic. The basic reasoning of proof by contradiction is: in order to prove p, we derive a contradiction from ~p (not-p).
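Schematically, the reasoning runs as follows; the final step is the distinctly classical one (over intuitionistic logic it is equivalent to the law of excluded middle):

```latex
\begin{align*}
1.\;& \neg p     && \text{assumption, for contradiction}\\
2.\;& \bot       && \text{a contradiction derived from } \neg p\\
3.\;& \neg\neg p && \text{discharge the assumption (1)--(2)}\\
4.\;& p          && \text{double-negation elimination (the classical step)}
\end{align*}
```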

G. H. Hardy described proof by contradiction as “one of a mathematician’s finest weapons”, saying “It is a far finer gambit than any chess gambit: a chess player may offer the sacrifice of a pawn or even a piece, but a mathematician offers the game.” (Wikipedia, “Proof by contradiction”)

Interestingly, the United States has become known for using exactly this type of logic ever since the nuclear era began. Of course we have the Cuban missile crisis, where Kennedy offered ‘the game’, in this case the destruction of the world, as a possibility to Khrushchev.

Destruction by climate change has increasingly entered into our calculations, so that during the 2016 elections we were faced with a choice between the Republican party (which is blocking the Paris Agreement from really taking off) and Hillary Clinton. And the result seems to be that life in America is so bad that people are willing to try the alternative.

And logic can help: not classical logic, but constructive logic, which rejects the Law of Excluded Middle and proof by contradiction. Changing our minds (our reasoning) is a very important ingredient in changing our politics. Constructive logic takes the perspective of building things as more important, more fundamental, than universal or global laws. Because of this, the “world” of a constructive reasoner is only this or that construction, and the progressive stages of continuing construction. In this sense the geography of construction is “incomplete.”

This incompleteness is fundamental and draws the rest of mathematics into reformulation (Bishop 1967). Things like space and time need a new kind of real numbers to describe. If our mathematics were different, the upgrading of nuclear weapons and their resultant destructive power by a factor of 3 would not be considered “an incredible feat of human intelligence” (Noam Chomsky) because we would have a different understanding of intelligence (reasoning power), and empty-headed “intelligent” weapons developers, along with coldly calculating CIA heads would be following different rules. Offering the game would no longer be considered “one of our finest weapons”.

Synthesis of Vagueness and Logic

by Andrew Nightingale, November 14th, 2559

CC: Dr. Khajornsak Buaraphan, Dr. Parames Laosinchai, Dr. Patchayapon Yasri

The problem with Heidegger’s “enframing” attempt (that science enframes nature, and in any frame the phenomena are still vague) is that certain kinds of vagaries do not entail paradigm shift. “Though discovering life on the moon would today be destructive of existing paradigms (these tell us things about the moon that seem incompatible with life’s existence there), discovering life in some less well-known part of the galaxy would not.” (Kuhn 1970, p. 95) However, some vagueness does warrant paradigm shift, because “Ambiguity [between terms and the world], … turns out to be an essential companion of change.” (Feyerabend 1999, p. 39)

Precision, on the other hand, is not an argument in favor of a theory, because “In fact, so general and close is the relation between qualitative paradigm and quantitative law that, since Galileo, such laws have often been correctly guessed with the aid of a paradigm years before apparatus could be designed for their experimental determination.” (Kuhn 1970, p. 29) The measurements are predicted with intense precision, and then the experiment carried out is an elaborate, highly overdetermined one that has only one possible interpretation within the paradigm.

Vagueness is apparent to the naked eye, but it is traditionally opposed to what can be grasped rationally. “In general, Leibniz had followed the other great rationalists in interpreting perception as a confused form of thinking. Like Descartes, he had treated the deliverances of the senses as sometimes clear but never distinct.” (Walsh; Edwards Ed. 1972, p. 307) However, vagueness is a clear and distinct concept, and it seems that it is also in complete agreement with the “deliverances of the senses.” Thus, in the sense of mathematics that Whewell and others held, vagueness is a truly mathematical concept, that is,

“…in mathematics there was no difference between objective reality and subjective knowledge; the human mind was completely in tune with external fact.” (Richards 1980, p. 362) Rational thought and empirical observation are brought together into one concept: vagueness. This old idea of mathematical truth has changed drastically now. With Gödel’s theorems, it became clear that an absolutist view (that mathematics is absolutely true and unchanging) had become untenable. One stronghold of the old sense in which mathematics is true (Whewell’s) can be found in the mathematician Brouwer’s intuitionism. According to Brouwer (and Kant before him), the experience of time is accessed to fill the empty formalisms of mathematics, giving it meaning and truth. Vagueness is another source of mathematical truth. It may be that vagueness between two things is present in Brouwer’s intuition of a “twoity,” the beginning of intuitionist arithmetic.

What do I mean by vagueness? The ancient representation of vagueness is the problem of the heap of sand. When you have a heap of sand, you have a relatively safe inference that if you take one grain from a heap, then you will still have a heap. As the story goes, eventually taking grains of sand will show this if-then statement to be faulty, because you will no longer have a heap of sand. Why does the classical “if, then” fail us here? There is an analogy between the heap of sand example and the calculation of a real number according to a rule. Also, this question gains importance when reflecting that “Logical consequence [the if, then] is the central concept in logic. The aim of logic is to clarify what follows from what.” (Stephen Read, Thinking about Logic, as quoted in Beall & Restall 2006, Kindle edition) According to Beall and Restall, logical consequence can be clarified in more than one way, giving rise to more than one equally valid (if applied in different situations) formulation of logical consequence. “We must reconcile ourselves to the fact that every precise definition of [logical consequence] will show arbitrary features to a greater or less degree.” (Tarski, as quoted in Beall & Restall 2006) Additionally, probability theory is not a solution to the vagueness of logical consequence, because

…probability theory might provide a canon for evaluating degrees of belief, … Nonetheless, probability theory cannot be a complete answer here, for we also make assertions and denials (and hypotheses and many other things besides), and these may also be evaluated for coherence, using the norms of deductive logic. In particular, we hold that it is a mistake to assert the premises of a valid argument while denying the conclusion… (Beall Restall 2006, Kindle Edition)

The solution to the vagueness of logical consequence, rather, lies in logical pluralism. Logical consequence brings true conditions to their true conclusions, but logical consequence itself is conditioned, and ultimately forms the structure of what can be intelligibly conditioned. Since phenomena are inherently vague, and logical consequence is vague until arbitrarily made precise, there is no clear difference between form and substance, ideas and things.

Read more: https://questionsarepower.org/2016/08/19/the-problem-of-difference/


References

Beall, J. C., & Restall, G. (2006). Logical pluralism. Oxford: Clarendon Press.

Edwards, P., & Walsh, W. H. (1972). The encyclopedia of philosophy (2nd ed., Vol. 4). New York: Macmillan.

Feyerabend, P., & Terpstra, B. (1999). Conquest of abundance: A tale of abstraction versus the richness of being. Chicago: University of Chicago Press.

Kuhn, T. S. (1970). The structure of scientific revolutions. Chicago: University of Chicago Press.

Richards, J. L. (1980). The art and the science of British algebra: A study in the perception of mathematical truth. Historia Mathematica, 7(3), 343-365. doi:10.1016/0315-0860(80)90028-2

Losing the Language-Game

Wittgenstein argues that mathematics is a language game, one based not only on language but on “forms of life.” And forms of life are rules of the game that are perhaps socially constructed, but definitely without doubt. They are given somehow and from who knows where. Proving a theorem is to invent a rule that is also without doubt:

“What is unshakably certain about what is proved? To accept a proposition as unshakably certain—I want to say—means to use it as a grammatical rule: this removes uncertainty from it.” (Wittgenstein 1978, 170)

Proving adds to or informs our forms of life. This means that forms of life are created… can they be destroyed? I’m saying perhaps we built the wrong forms of life, perhaps there is still a nagging wonder about the things supposedly settled. And the question begins the process of changing our forms of life. In my first post I proposed bringing the “?” into mathematical language. As laid out in that post, https://questionsarepower.org/2014/08/ , the “?” operation would introduce a controlled retreat from mathematical logic. I showed how the “?” can free us from the Liar Paradox, a language game that is an endless cycle in search of a truth value.

The axiom of completeness is exactly such a form of life that should be questioned. It metaphorically closes off the possibility of leaping from moment to discrete moment. The axiom of completeness is like a poetic spell on the mind that prevents natural movement through time, since moving through time continuously, without the possibility of a leap, generally means “downhill.” Real analysis is a losing language-game.
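For reference, the axiom being questioned here is, in its usual textbook form:

```latex
\textbf{Axiom of Completeness.} Every nonempty set $A \subseteq \mathbb{R}$ that is bounded
above has a least upper bound $\sup A$; that is, there is an $s \in \mathbb{R}$ with
(i) $a \le s$ for every $a \in A$, and (ii) $s \le b$ for every upper bound $b$ of $A$.
```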

I have argued against the “least upper bound”; Dedekind cuts are hardly different from this notion, but it shows the cunning of mathematicians in pushing their agenda, “everything is number,” by using essentially the same argument in so many different ways. Dedekind cuts are sets of rational numbers with no maximum; that is, if r is in the “cut” called A, then there exists a rational s in A such that r < s. This is similar to saying that the number generated by adding another digit to the decimal expansion of r is also in A.

Cuts essentially assert a least upper bound (or the “…” in a decimal expansion) with judicious use of the “<” symbol; it is merely a rewording. Dedekind offered these “cuts” as real numbers and asserted that they exist.
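Here is a small sketch of that “no maximum” property for the cut usually associated with √2 (the function names are mine; the textbook definition of a cut also asks for the set to be nonempty, proper, and downward closed, which this set satisfies):

```python
from fractions import Fraction

def in_cut_A(r: Fraction) -> bool:
    """Membership test for the cut for sqrt(2): every rational r with r <= 0,
    together with the positive rationals satisfying r*r < 2."""
    return r <= 0 or r * r < 2

def larger_element_still_in_A(r: Fraction) -> Fraction:
    """Given r in the cut, return some s in the cut with r < s,
    witnessing that the cut has no maximum."""
    if r <= 0:
        return Fraction(1)            # 1 is in the cut and exceeds any r <= 0
    return (2 * r + 2) / (r + 2)      # if r*r < 2 then r < s and s*s < 2

r = Fraction(0)
for _ in range(5):                    # climb a little way up the cut
    s = larger_element_still_in_A(r)
    assert in_cut_A(s) and r < s
    r = s
    print(r, float(r))                # 1, 4/3, 7/5, 24/17, 41/29 ... toward sqrt(2) from below
```

The witness (2r + 2)/(r + 2) stays below √2 whenever r does, so the climb never reaches a last element of the cut.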

In one of many analysis texts I’ve read,

“The real numbers were defined simply as an extension of the rational numbers in which bounded sets have least upper bounds, but no attempt was made to demonstrate that such an extension is actually possible. Now, the time has finally come. By explicitly building the real numbers from the rational ones, we will be able to demonstrate that the Axiom of Completeness does not need to be an axiom at all; it is a theorem! There is something ironic about having the final section of this book be a construction of the number system that has been the underlying subject of every preceding page… We all grow up believing in the existence of real numbers, but it is only through a study of classical analysis that we become aware of their elusive and enigmatic nature. It is because completeness matters so much… that we should now feel obliged—compelled really—to go back to the beginning…” (Abbott 2001, p. 244, emphasis mine)

Who is being compelled? The book is built so that you have to assume the Axiom of Completeness for a very long and arduous time before it gets to the meat of the problem. More importantly, this quote shows the circularity of mathematics, from axioms to theorems and back to axioms. Theorems are merely explicit parts of the axioms. With a shuffling of words the Axiom of Completeness turns into a theorem, but certainly the theorem is more explicit and involves more description of what the Axiom is, which is left for the very end of a book devoted to assuming the Axiom.

And how explicit is it really? The cuts are defined as any set of rational numbers with no maximum. Is that explicit? How many sets are like that? And there is an interesting example of a cut: take the set of rational numbers r such that r² < 2 when r is positive, else r is in the set (call this set A). Compare with a similar set where r² ≤ 2 (B). This is exactly the sort of thing that mathematicians enjoy, the “almost false.” Dedekind cuts fail to distinguish these two sets, but Dr. Abbott continues with his claim that Dedekind cuts make real numbers explicit. Strangely enough, while here the difference between < and ≤ might not matter, elsewhere in the theorem of completeness it matters greatly. First of all, many of the cuts can be distinguished by using <, as in the set of any rational r < 2, which is a cut; but r ≤ 2 is not a cut, since 2 is its maximum and 2 is in the set. How do we describe the cut where √2 is the least upper bound without this ambiguity? We can’t. Both the symbol √2 and the set A involve algebraic operations without a clear (or even necessarily a single) solution (or least upper bound). We can prove that √2 is not a rational number, but to say that, whatever √2 is, it is a least upper bound of A is to forget that this is what we are trying to prove.
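A quick check of the claim that the two sets cannot be told apart, in Python (the grid of rationals sampled below is arbitrary; the underlying reason no difference can ever appear is that no rational squares to exactly 2):

```python
from fractions import Fraction
from itertools import product

def in_A(r: Fraction) -> bool:          # the '<' version
    return r <= 0 or r * r < 2

def in_B(r: Fraction) -> bool:          # the '<=' version
    return r <= 0 or r * r <= 2

# The two membership tests could only disagree at a rational r with r*r == 2,
# and there is no such rational, so every sampled point agrees.
disagreements = [Fraction(p, q)
                 for p, q in product(range(-50, 51), range(1, 51))
                 if in_A(Fraction(p, q)) != in_B(Fraction(p, q))]
print(disagreements)   # [] -- as sets of rationals, A and B are identical
```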

Now, let’s look at the order of Dedekind cuts. For cuts A and B,

A ≤ B is defined to mean A ⊆ B.

Let A be the set defined by r² < 2, as above: is A a strict subset of B, the set defined by r² ≤ 2? It certainly seems like it ought to be. It can be reasoned that A contains less (of what?) than B. We want our different algebraic expressions to have distinguishable numerical values; that is ultimately the motivation for the real numbers, but in this case we don’t have that. The choice of ≤ in the definition is of course very careful. If the definition used < we would have a potential counterexample. Figuring out the exact difference, in certain cases, between “=” and “<” is swept under the rug. Luckily mathematicians can add definitions to counter this particular example, but how many other examples of vagueness are there? The definition of a Dedekind cut is so general (not explicit) that there may be many other problems.

Since we don’t know that B is a cut, we cannot claim that it is a real number, nor that it represents a least upper bound. How do we know that A is a cut? We know that either way √2 is not a rational number. We know that there is no least upper bound “next” to B, that if a number is adjacent to √2, they are so close as to be the same number. How, then, can √2 be explicit; how do we know that there is a unique and determinate answer to √2? The truth is, vagueness sets in as the r in B get closer to B’s upper bounds. The upper borderline is vague, like any empirical borderline, but somehow we are compelled into believing there is a unique and determinate upper bound to B.

“Wittgenstein argues that logical necessity—be it computing an algorithm, proving a theorem, drawing a deductive inference, or whatever—concerns the following of a rule. Rule following raises the issue of the compulsion to reach a conclusion that is fixed and, if not predetermined, then at least unique and determinate.” (Ernest 1998, p. 80) Assuming there is a unique and determinate √2 makes the expression of such a view merely a language-game, not anything profound. The only way to have a philosophical thought about the language game of arithmetic is to resist this compulsion, and with that act the entire edifice of real numbers crumbles.

Dedekind cuts are based on a rule for determining a very large infinity of sets; so too is √2 a large calculation. Wittgenstein wrote about rule following, saying that “What you are saying, then, comes to this: a new insight—intuition—is needed at every step to carry out the order… a new decision was needed at every stage” (Wittgenstein 1953, 75). The compulsion for a student to follow a rule such as calculating the value of √2 is never fully understood, because of the decision-making process: we think we understand the rule and give up calculating. But if we were driven to continue calculating √2, doubts would inevitably pop up, perhaps merely because we have a life to live besides carrying out this rule; and the objections that pop up will be well-informed ones, from someone who has done a lot of exploration into the rule. In Wittgenstein:

“…following a rule and agreeing (perhaps implicitly) to its conventional underpinnings…also involves a decision that the new application can legitimately be subsumed under existing rules, for rules underdetermine their applications.” (Ernest 1998, p. 88)

We have invented a substance ‘the real numbers’ that is so well ordered that it can obey algebraic rules without any decision. Thus the decision to adopt the ‘real numbers’ is a very important and determining one. Feyerabend argued that a proof resembles a tragedy. It is internally consistent and inevitable. Comedy, on the other hand, is the stuff of continuity. And the place comedy had in the Greek world was:

Everything commonly realistic, everything pertaining to everyday life, must not be treated on any level except the comic…As a result the boundaries of realism are narrow. And if we take the word realism a little more strictly, we are forced to conclude that there could be no serious literary treatment of everyday occupations and social classes…of everyday scenes and places…everyday customs and institutions…of people and its life. (Auerbach 1946, p 31)

Reducing comedy to tragedy has dire consequences for the reducing culture.

Dedekind played on words with “Dedekind cuts.” Now, what does he mean when he says that a “cut” exists? Does he mean the knife that cuts, or the space between the two parts that were once one? Or does he mean the ground on which the surgery took place, the “operating table,” as Foucault put it?

If he means the space created by a cut, I have already mentioned the trouble with different “cuts”: that different cuts are different from each other, that there exist at least two different real numbers. This means that ≠ ≠ ≠, and it undermines the notion of identity in mathematics.


Kant and Continuity

In Kant’s Critique of Pure Reason he relies heavily on the continuity of time. For Kant, arithmetic is the a priori form of time, and so it is of utmost importance that arithmetic can describe continuity, or else it is not known whether time is continuous. The function of the Axiom of Completeness is to ensure that all positions in a continuous progression can be expressed with an arithmetical number, thus arithmetic can describe mathematical continuity. What follows is the various ways that Kant depends on mathematical continuity.

  1. Kant’s famous question “How are synthetic a priori judgments possible?” (First Edition xvii) is given credence because of (to him) examples of such judgments: geometry and arithmetic.
  2. Continuity of time. Arithmetic, to Kant, is the a priori form we impose on phenomena in time. If arithmetic cannot describe continuity, time cannot be continuous.
  3. Application of his categories to phenomena. How can a category, for example existence, or an a priori form for time, be applied to experience? They can, according to Kant, because of a “transcendental schema,” which is a “transcendental determination of time.” (Second Edition 181) The schema is “properly, only the phenomenon, or sensible concept, of an object in agreement with its category.” Thus being phenomenal means being in time; Kant’s worlds are phenomenal, and the categories find their sense primarily because of our a priori sense of time.
  4. Induction over time. Kant, with Hume, sees that connecting experiences-in-time together (synthesis) is required for empirical knowledge. They both notice that this principle, “induction,” is neither an analytic nor an empirical principle. Kant argues against Hume’s belief that experiences-in-time are discrete, instead saying that without continuous (or connected) experience of time we would not be able to synthesize events.
  5. Causation, which is just a particular kind of connection between events (a connection according to rules, rules determined by our categories, categories given sense because of the a priori form for time) is possible because events are, in general, connected and continuous.
  6. Synthesis and Knowledge. “Synthesis in general is the mere result of the power of imagination, a blind but indispensable function in the soul; without which we would have no knowledge whatsoever, but of which we are scarcely ever conscious.”  (Second Edition 103) When asking how synthesis becomes knowledge, it is in the intuition of data according to the a priori forms of space and time. To Kant, knowledge is not possible without being connected by a unitary consciousness, which in turn requires that time is not discrete.

    “For this unity of consciousness would be impossible if the mind in knowledge of the manifold could not become conscious of the identity of function whereby it synthetically combines it into one knowledge. The original and necessary consciousness of the identity of the self is thus at the same time a consciousness of an equally necessary unity of the synthesis of all appearances according to concepts, that is, according to rules, which not only make them necessarily reproducible but also in so doing determine an object for their intuition…”(First Edition 107-108)

  7. The self, or unity of consciousness, depends on a background of external things at rest or unchanging, having a “permanence” that requires the a priori form of time for resting to continue.
  8. Conservation (“Permanence”) of substance—no total generation nor destruction. Kant again uses the unity or continuity of time, saying that objects at rest, not changing but passing through time continuously, provide the background for grasping change.
  9. A realist world. The background provided by this permanence is the basic way we perceive the external world.
  10. Because the form of time is an a priori, we have the result that the external world and the ideal world are interdependent.
  11. Descartes seems to believe that mind is the first thing we are aware of, while it is necessary to infer the existence of external things. Thus external things are open to doubt (indistinct?). Kant argued that because we need the permanence of external things to infer a self, external things are not open to doubt. In the “Anticipations of Perception” section Kant asks how sensations can have a determinate degree (not indistinct).

Geometry and logic were also generally considered perfectly settled fields in Kant’s time, and were important to Kant’s philosophy. This has changed drastically, however, with the emergence of non-Euclidean geometries and other logics.

To see my argument against the Axiom of Completeness, and against a continuous passage of time, see https://questionsarepower.files.wordpress.com/2016/03/many_roads_from_the_axiom_of_completenes-2.pdf

If we hope to retain some of the meaning of mathematics by asserting that arithmetic is about the a priori synthetic intuition of time, we cannot do so by asserting the mathematical continuity of time, because mathematical continuity is a spatial notion. For example, the Intermediate Value Theorem is meaningless without spatial imagination. How can I be sure that there is an infinitely small period of time, the now, where the past meets the future intermediately? Such a belief is totally against the experience of moments in time, which are atomic before being divided after the fact. Trying to interpret “intermediate value” by referring to non-discrete “real” numbers has already departed from the immediate intuition of time. The idea of completeness of arithmetical numbers, that there exists a least upper bound, does not reveal itself in time, and I believe Brouwer agrees on this point.
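For reference, the theorem mentioned above is usually stated as follows; the appeal to a picture of a curve crossing a horizontal line is what the paragraph above calls spatial imagination.

```latex
\textbf{Intermediate Value Theorem.} If $f : [a,b] \to \mathbb{R}$ is continuous and $y$ lies
between $f(a)$ and $f(b)$, then there exists $c \in [a,b]$ with $f(c) = y$.
```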

I think that the feeling of “flow” from, say, a breeze or putting your foot in a stream is the experience of temporal continuity, but separating its temporal aspect from its spatial aspect is not so easily done, and maintaining one’s focus on the experience of purely temporal continuity is quite difficult. Usually one’s concentration breaks and, with it, the moment as well. No matter one’s ability to concentrate, one cannot remain awake forever. Unifying atomic moments after the fact is a feat of the intellect, but it is not the primal intuition framing an event. I am saying that an atomic moment is unbroken and continuous, but a purely temporal intuition of continuity is not captured by mathematical continuity.

Brouwer’s constructive reconstitution of the continuum from the memory of life-moments that have “fallen apart” seems to acknowledge that the intuition of continuity cannot be completely defined. I also feel a deep scruple in pretending that continuity can be defined; such a definition would mean the “end” of time. It is a marvel that mathematicians, who should be less pretentious than philosophers, make such a pretense. The feeling of continuous time should be left to mysticism. For philosophy, atomism of moments is the best we can expect.

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.