TAKEN FROM    "Black Holes and the Universe"

George Johnson's SCIENCE forum at THE NEW YORK TIMES.

This math was discussed in #584-#641 of the site.

To see the discussion at the site, register for the forums at http://nytimes.com   (this takes a few seconds) and enter

_________________________________________________________________________

rshowalter - 10:56am Jun 2, 1998 EST (#582 of 612)
Robert Showalter showalte@macc.wisc.edu

dkawalec - (#581) This ISN'T a comment on whether you're right or not. But I think your intuition, as intuition, is pretty good.

Theoretical physics has been sweating about infinities for practically all of this century, off and on. They happen elsewhere, too, including in engineering, where Planck's constant can't realistically be involved.

A problem with saying "let's just not talk about infinitesimal lengths or volumes" is that it pretty well disconnects arguments from the apparatus of calculus, which is handy apparatus at many times and places.

That bothered Albert Einstein.

Bob

budrap - 06:12pm Jun 11, 1998 EST (#584 of 612)

rshowalter (#582)- When you think about calculus in those terms, it's sort of analogous to Ptolemy's epicycles, isn't it? Calculus seems to get the right answers without providing a very useful/informative map of the territory.

This then raises the question: is there a Copernican/Keplerian analog that might allow us to more clearly comprehend a part of physics that calculus now masks in infinitesimals?

Bud

rshowalter - 08:08pm Jun 11, 1998 EST (#585 of 587) Robert Showalter showalte@macc.wisc.edu

Budrap (#584) says that

"Calculus seems to get the right answers without providing a very useful/informative map of the territory. "

I'd take out his "useful/informative" words. Calc is often useful and informative in the sense of providing right mappings. I'd say this instead:

"Calculus often seems to get the right answers, but it is terribly nonintuitive, almost black magical, sometimes even a treacherous "map" of the territory. Not only that, it blows up sometimes, for reasons nobody understands."

Then Budrap asks

"is there a Copernican/Keplerian analog that might allow us to more clearly comprehend a part of physics that calculus now masks in infinitesimals? "

There's reason to hope so, and reason that a change is needed, but we CAN'T chuck all of calculus, I don't think. Here's why.

Arithmetic's arithmetic.

Do a little arithmetic, and before you know it, you're doing algebra.

Not much algebra happens before you're working with polynomials.

Try to connect polynomials to geometry, and almost before you know it, you're at the polynomial derivative formula. If you ask for the inverse (and if you keep playing, you'll have to) you'll have the polynomial integration formula.

Take the polynomial derivative and integral formulas, and add the algebra you've been working with before, and you can do most of the calculus that gets done in the real world. (This is how computers DO practically all the calc they do - that is, with polynomial series and a very few formulas.)
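
To make that concrete, here's a minimal sketch in Python (my own illustration, written for this post; the function names are mine, not anybody's standard library). It shows the polynomial derivative and integral formulas as plain bookkeeping on coefficient lists, and then evaluates sin and cos from a short polynomial series, roughly the way a computer's math routines do it.

import math

# p(x) = c[0] + c[1]*x + c[2]*x**2 + ...  (coefficients stored low order first)

def poly_eval(c, x):
    """Evaluate a polynomial by Horner's rule."""
    result = 0.0
    for coeff in reversed(c):
        result = result * x + coeff
    return result

def poly_derivative(c):
    """The polynomial derivative formula: d/dx of c[k]*x^k is k*c[k]*x^(k-1)."""
    return [k * c[k] for k in range(1, len(c))]

def poly_integral(c, constant=0.0):
    """The polynomial integration formula: c[k]*x^k integrates to c[k]*x^(k+1)/(k+1)."""
    return [constant] + [c[k] / (k + 1) for k in range(len(c))]

# Computers lean on polynomial series.  Here is sin(x) from a few Taylor terms:
# sin(x) ~ x - x^3/3! + x^5/5! - x^7/7!
sin_series = [0.0, 1.0, 0.0, -1.0/6.0, 0.0, 1.0/120.0, 0.0, -1.0/5040.0]
cos_series = poly_derivative(sin_series)   # differentiating the series gives the cos(x) terms

print(poly_eval(sin_series, 0.5), math.sin(0.5))   # 0.47942... both ways
print(poly_eval(cos_series, 0.5), math.cos(0.5))   # 0.87758... both ways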

So far so good. Can't chuck anything so far, I don't believe.

But after that, there's plenty of reason to complain, and the complaints are of very long standing. The problems have gotten worse and worse since the 1690's, and have been disastrous since the 1860's.

Again and again people get into

Computational problems that have to do with infinitesimals that don't make sense, and infinities that don't make sense.

Conceptual problems that have to do with the notion of infinitesimals.

*****

When you try to apply physical intuition to calculus, if you think carefully, you're in trouble. The physical notion of the derivative has a distinct resemblance to a certain fabled bird. Things that make sense in the small seem to disappear at infinitesimal scale.

What do you mean by "length at a point?" What lengthitude can a point have?

What do you mean by "area at a point?" What extensiveness can a point have?

What do you mean by volume at a point?

How about ratios of these ephemeral and unbelievable things? Derivatives are ratios, and derivatives of physical quantities involve ratios of "quantities" that are unimaginable or undefined at derivative scale.

Nobody likes this, and not only is the intuition impossible, you can get wrong answers, too.

You can "prove" that coupled relations don't have any effects when you know darn well that they do. Clerk Maxwell fought with this for years, and eventually just wrote down "infinitessimal" quantities that he knew were finite and important by experiment, ignoring the math. But there were OTHER terms, and he didn't have any rationale for what to do with them. He left them out, but kept worrying about them, and I think he was right to do so. (This ad hoc "derivation" is how the electromagnetic equations of "classical physics" were born. Maxwell wouldn't have been a bit surprised to have these equations found wanting.) Bad stuff happens in celestial mechanics and elsewhere, too.

You can "prove" that some other effects are infinite, when they are negligibile. Problem is, when you chuck these effects, you don't have any very tight rationale for doing so. Some of this happens in EE, and I've shown experimentally that an infinity in a conduction equation is finite. A person can pretty easily reproduce that one, but selling it will depend on "willingness to see."

Anyway, quantum electrodynamics is full of papers that go roughly like this:

We were going along, and we found an infinity, and that was no good so we found another way to calculate almost the same thing, and that was an answer we could live with, by making uncheckable assumptions (which are within our rules) so we hereby present our result!

This can work very well, except that, all through the theory, you have quantities that have different values when they are calculated in different ways. They should have the same values.

Einstein thought so, too.

Bob

triskadecamus - 11:54am Jun 15, 1998 EST (#587 of 587)

I am sure that it isn't very important, but long, long ago, when I first studied Algebra, and first came across the concepts of infinity and infinitesimal, my Algebra teacher, a dear sweet lady, would revile us, and smack us with a ruler, if we used the word infinity as if it were a number. Eventually, we asked how we should use it, and her reply stood me in good stead for the balance of my study of mathematics.

Infinite means "I don't know, but I think it's real big."

It would be so nice if it simply meant incalculably large, but it really doesn't. It means our arithmetic does not cover this case. It means, however distasteful it is to the mind of a Mathematician, I don't know. The "I think it's real big" part is to make the mathematicians feel better.

She (my algebra teacher) had a very firm educational methodology, and by the end of a single semester, I had stopped using Infinity as a number. I think the problems in integrating quantum and macrocosmic theory lie in not recognizing that our math does not deal with the case. Perhaps there is an arithmetic of infinite values, which will make all these terribly difficult problems seem trivial, and will allow for the much dreamed of elegant description of forces which has no exceptions. Without that tool, though, it might be that our view is so distorted by our vision, we are lost chasing phantoms created by our own description.

I do not claim to have an insight into the nature of such a tool, nor even proof that it could be created.

Tris

rshowalter - 02:51pm Jun 16, 1998 EST (#588 of 589) Robert Showalter showalte@macc.wisc.edu

triskadecamus - (#587) your points are wonderful. I have many happy memories about firm educational methodologies myself, and the math-folks I've talked to all do, too. If people were like gods, these firm methodologies might not be necessary, but since most folks are more forgetful than one might wish, and less meticulous than one might hope, firm educational methodologies happen sometimes, when a teacher CARES about getting something learned. Some of my math training was administered by a caring teacher and friend with deep respect for the educational wisdom of the United States Marine Corps. That wisdom, when brought to bear, can produce certain kinds of reliable learning. I'll NEVER say that a calculation can be done "in principle," for instance. I used to say that much too much. Now I'm "cured" of that usage, and some of the psychology around it. Everything considered, that cure's probably useful. Still, the problem with instruction like that is this: If you've been taught something by administration of ordeal and punishment, and it is wrong, you've got a hard time unlearning it. Whole disciplines can be taught things like that. That learning can be a strength, but that learning also carries risks.

Tris, you say " I am sure that it isn't very important . . . " and then start talking about stuff that IS very important. You're pointing out that the "territory" of mathematics isn't the same as the "territory of the familiar, measurable world." As you point out, the notions of "infinity" and "infinitesimal" are examples of the difference. You're pointing out a "map-territory" distinction. You've put your finger, I think, on exactly where the tough problems are in mathematical modeling of the physical world, and exactly where the hope for progress is.

Notions of "map" and "territory" need to be clearer in applied math than they are. You can trace some map-territory muddles back, if you've a mind to, three hundred years. I've done some of that. From there, it's a short jump back to the Greeks, who had some of these same muddles 2500 years ago. I've given the Greeks a lick and a promise, but it seems they had the same problems keeping "what is representation" and "what is represented" straight that modern folks have, right up to the present time.

Tris, you ask about the need for new "tools" in math, and I'd like to clarify WHERE that need is, and relate the issue to discussions we've had in these forums often, on issues that are important. You can say "calculus works fine" and be right, in one place, and go someplace else and say "calculus is a mess" and be right there. A lot of well mapped territories are like that. Things are true one place that aren't true some other place.

If you'll recall, in (#585) I went through the main line connections of calculus to pure math. Everything seemed tight and solid, and WAS tight and solid:

Arithmetic works. ........... Algebra, the arithmetic of symbols, works.

Polynomials work............... Geometry works.

Analytical geometry works, and from there it is a short jump to the polynomial derivative and integral formulae.

Get that far, and with a little patience the rest of calculus, or almost all the rest of calculus, follows cleanly and neatly.  What could be solider than that? Note, I'm talking about pure axiomatic math.

Then, in (#586) I say:

Solid pure math or not, every time you try to apply the blessed, exalted tools of the calculus to a PRACTICAL problem, you find you can't understand what you're doing, and often enough you get clobbered by false infinitesimals and false infinities. This has been going on for centuries, and blighted parts of the lives of a lot of people trying to make sense of nature, including some big names and a passel of smaller folk in all sorts of fields.

No contradictions so far, if one can remember that pure math and the measurable world aren't the same and can be clear about the differences. But the notion that "pure math" and the "measurable world" are DIFFERENT has been very unclear in the history of mathematics and mathematical physics. That difference has to be clarified and mapped out, before some problems of longstanding in math-physics- engineering can be clarified. Clean up the mapping, using ordinary human methods and ordinary human care, and the solution falls right into your hand.

As I slog through my life, I become more and more convinced that George Johnson's admonition about maps and territories deals with THE most central obligation, challenge, and source of hope in the sciences. He seems dead on when the subject is getting math to interface with measurable reality.

" Scientists must constantly remind themselves that the map is not the territory, that the models might not be capturing the essence of the problem, and that the assumptions built into a simulation might be wrong. "

This advice is tough to even define in particular circumstances. What is "represented?" What is "representation?" What of cases where representations must themselves be represented? In specific cases, none of these questions have easy answers. Answers require care, and careful bookkeeping. We need something to keep score with. I believe that Gregory J. Chaitin has been profoundly right to develop the tools of EXPERIMENTAL MATHEMATICS that stand beside the axiomatic methods, but provide a cross-check on logic and show things that are not proven, but are true. When the axioms drop away, the experimental tools stand alone, and are especially important.
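
Here is a toy sketch, in Python, of the spirit I mean (my own illustration, nothing like Chaitin's actual constructions): instead of proving an identity from axioms, you bombard it with random test values and see whether it ever fails. That proves nothing, but it keeps score, and it catches a false "derivation" immediately.

import random

def worst_disagreement(lhs, rhs, trials=10000):
    """Largest |lhs(x) - rhs(x)| found over many random test points."""
    worst = 0.0
    for _ in range(trials):
        x = random.uniform(-10.0, 10.0)
        worst = max(worst, abs(lhs(x) - rhs(x)))
    return worst

# A true identity survives the bombardment ...
print(worst_disagreement(lambda x: (x + 1.0)**2, lambda x: x*x + 2.0*x + 1.0))   # ~1e-13, roundoff only

# ... and a false one is caught at once, no axioms required.
print(worst_disagreement(lambda x: (x + 1.0)**2, lambda x: x*x + 1.0))           # ~20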

The muddled relation between "certain mathematics" and mathematical difficulties in application occurs exactly because the difference between the realm of the measurable and the realm of pure axioms has not been clarified. Those realms need to be defined, and their interface defined. At the interface between these regimes, entities occur that are more complicated than numbers. They are the entities that are our direct interface between our measurement procedures and our symbols. If these entities are understood, modeling can be done without false infinitesimals and false infinities.

The KEY questions, here and often elsewhere, are:

What is representation?

What is represented? ........and

How do we do experiments and how do we score them?

I feel that these are questions that have been asked too seldom about mathematics and its applications.

Bob

budrap - 11:22pm Jun 16, 1998 EST (#590 of 593)

Bob,

" Scientists must constantly remind themselves that the map is not the territory, that the models might not be capturing the essence of the problem, and that the assumptions built into a simulation might be wrong. "

It seems to me this statement clearly illustrates the obstacle that twentieth century science placed in its own path by putting the cart of mathematics ahead of the horse of experimentation and observation.

Scientists wouldn't have to remind themselves that the map is not the territory if they were focused primarily ON the territory.

There wouldn't be this great difficulty distinguishing between what is represented and what is representation if scientists spent the majority of their time in open minded observation and experimentation, rather than cooking up ever more fanciful theories for which they then design ever more elaborately tailored 'experiments' whose sole purpose is to find some thread of corroborative evidence, no matter how slim and no matter what the cost.

Science must rest securely on an empirical footing before it picks up the useful and invaluable, when properly used, tool of mathematics.

Bud

rshowalter - 08:22am Jun 17, 1998 EST (#591 of 593) Robert Showalter showalte@macc.wisc.edu

Bud, I'll be saying some other things, but I wanted to respond to one thing you said RIGHT AWAY. You said:

"Science must rest securely on an empirical footing before it picks up the useful and invaluable, when properly used, tool of mathematics. "

Yessir! But sometimes, when it counts, just the opposite is done.

By the 1960's, observed waveforms and a mass of data indicated that neural conduction HAD to have significant inductance. (There had to be a big coefficient times the derivative of current with respect to time in the conduction equation.) A much respected NIH officer and scientist, Wilfred Rall, did a careful derivation based on the math error that I'm fixing and "proved" that the inductance was 0. If that math had been right, Rall would have been right. Here's a major point.

THE MATH DOMINATED THE EXPERIMENTAL EXPERIENCE !!!!!!

Rall made his equation stick, too. The whole field deferred to the math. (It made a difference that Rall was at NIH, and used his position aggressively to defend his theory, but he couldn't have done what he did if people hadn't been deferential to the math EVEN IN THE FACE OF A GREAT DEAL OF CONTRADICTORY DATA, MUCH OF IT OLD.)

For the last thirty years, the anatomical, biochemical, and genetic aspects of neurophys have grown beautifully, and productively. The parts that have had to depend on electrophysiological details have been in stasis. People are stuck with a grossly wrong differential equation.
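
To make the stakes concrete, here is a toy circuit sketch in Python (my own illustration, with made-up numbers; it is NOT Rall's cable equation, and not the corrected neural equation either). It shows what a series-inductance term does to a step response: with the L*dI/dt effect set to zero, the voltage just creeps up exponentially; with a finite term the response is second order and can overshoot and ring. That kind of qualitative change in waveform shape is the sort of thing at stake in the dispute.

def step_response(R, C, L, t_end=5e-3, dt=1e-7):
    """Peak voltage across C, fed through R (and optionally L), driven by a unit step.
    L == 0:  R*C*dv/dt + v = 1                  (first order: plain exponential rise)
    L  > 0:  L*C*d2v/dt2 + R*C*dv/dt + v = 1    (second order: can overshoot and ring)"""
    v, dv = 0.0, 0.0
    peak = 0.0
    for _ in range(int(t_end / dt)):
        if L == 0.0:
            dv = (1.0 - v) / (R * C)
        else:
            d2v = (1.0 - v - R * C * dv) / (L * C)
            dv += d2v * dt
        v += dv * dt
        peak = max(peak, v)
    return peak

print(step_response(R=1e4, C=1e-7, L=0.0))    # about 0.99: never exceeds the final value
print(step_response(R=1e4, C=1e-7, L=10.0))   # about 1.16: overshoot - the waveform shape changed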

Budrap, my love of math goes deep, but in science, when math becomes master, whole fields can go blind. And if math is a god, no one from the outside will question the blindness.

In science, math is a beautiful tool. For many purposes, an indispensable tool. But Budrap, you're absolutely right about how it should be regarded. In the sciences, math ought to be a tool SUBORDINATED to experimental discipline.

All the map-territory care that can be mustered needs to be brought to bear, so that the tool is used rightly at the INTERFACE between measurable reality and mathematical abstraction.

Bob

rshowalter - 04:57pm Jun 17, 1998 EST (#592 of 593) Robert Showalter showalte@macc.wisc.edu

In (#589) I said "be back after lunch" but I've taken longer. Partly I've been thinking about the wisdom of George Mitchell, formerly Senate Majority Leader, now U.S. Ambassador to Ireland and the man who may be more responsible than any other for the current chance for peace in Northern Ireland. Of the negotiations leading up to that agreement, Mitchell spoke of patience, and the need to repeat everything that could reasonably be said, to all the parties, many times. Undergraduates might think that sort of pattern unacceptable, but when persuasion on difficult matters has to happen, that kind of patience may be needed. Effective politicians have a lot of that kind of sophisticated patience. Meetings of the minds, and changing of minds, come hard. If you say "there's a mistake at the interface between pure math and applications" and you want to be listened to, you have some talking to do. Steve Kline taught me that. Steve did make a tough case once, that has been important for our culture since, and he did his best to help me sort out and make this case about math until he died.

I've also hoped that I might be a little clearer than before. In these forums, I've had my mind exposed to some other minds, and some surpassingly clear writing. In (#589) I said:

The muddled relation between "certain mathematics" and mathematical difficulties in application occurs exactly because the difference between the realm of the measurable and the realm of pure axioms has not been clarified. Those realms need to be defined, and their interface defined.

I'm going to use the word "real" without defining it, and I'll ask people who want clarification to go to a good dictionary and check what the word means in the contexts where I use it. I'll call "scientific mathematics" by another name - "engineering math" - because the engineers are more formal and much less casual about their math than the scientists. I'd also like to refer to "analogical mathematics."

Pure math is real: there is a domain of pure mathematics that is testable, consistent, and well defined in its own terms. When people say that pure math is certain, and they mean "certain unless a mistake is made, and certainly testable" they have a right to the "certainty" they claim in the abstract domain in which pure math exists. The foundation of pure math is a few axioms, which may include those needed to define the integers, the arithmetical rules

a + b = b + a ...............................................ab = ba

a + (b + c) = (a + b) + c ....................................(ab)c = a(bc)

a(b + c) = ab + ac

and (sometimes for teaching convenience) Euclid's axioms and the geometrical notions these axioms convey. AN ENORMOUS EDIFICE OF CONNECTED CONCLUSIONS CAN BE AND HAS BEEN BUILT FROM THESE AXIOMS. That edifice can be verified on its own terms. It is as real as the game of chess, and much more useful. Even so, this is a "real" edifice that is abstract - it exists apart from any application to a particular object or specific instance. Pure mathematics is abstracted, or separated, from embodiment. The axioms of pure mathematics are NOT sufficient for the measurements we do in our engineering and our science.

When dunlapg -(Pi in the Sky #682) said

"that mathematics isn't a mere human construct, but that it is an "alternate universe" which we explore"

he was speaking sensibly about pure mathematics.

rshowalter - 05:05pm Jun 17, 1998 EST (#593 of 596) Robert Showalter showalte@macc.wisc.edu

How do you get from the abstraction of pure math to math connected to physical law? Dunlapg (#682) suggests one way, as have some others in these forums. That way is analogy. There is a certain innocence to analogy, but it cannot support all the mathematical-physical logic that we do. Another way, which is unavoidable, more powerful and potentially more misleading than analogy, is engineering mathematics. If you have to combine mathematical representations in physical law, you have to use engineering mathematics. Let's talk of analogy first.

PHYSICAL "LAWS" AS ANALOGIES:

The idea that pure math is a source of analogies is useful, and in many ways the idea is a safe one, because the authority of analogies is so provisional. Surely pure math IS a rich source of potential analogies! Think of pure math as an "analogy template factory" as a manufacturing engineer might think of it. The fundamental premise of manufacturing engineering is this:

"Standard procedures, applied to standard conditions, yield standard results."

So here are some standard conditions for our "analogy template factory":

There are a lot of linear relations in the world, so that should mean that linear algebra can be useful. Linear algebra is useful. Many things exist in space - that should make geometry useful. Geometry is useful. Lots of relations hold true as patterns when values input to the relations change - that should make elementary algebra useful. Elementary algebra is useful. Many interactions occur between measurables that are continua - that should make differential calculus and integral calculus useful. The differential and integral calculus are useful.

And so on. There are plenty of physical "laws" that match well to pure mathematical forms, for no provable reason, but for experimental reasons. These same "laws" can be rejected when they fail to match experiments. These relations, alone, would offer plenty of good reason to study mathematics. Nor is there anything intrinsically dangerous about these mathematical analogies, as long as the experimental checks of them are occasionally done. NO ONE WOULD CALL THESE ANALOGIES "REAL" IN THE SAME SENSE WE'D CALL PURE MATH REAL. THESE ANALOGIES, USEFUL AS THEY ARE, JUST HAPPEN TO FIT. They have the flimsiness of maps, not the solidity of territories.

There's a problem with "physical law as unprovable, unconnected, ungeneralizable analogy." Sometimes, as a practical matter, people wish to connect relations, generalize relations, and go far beyond what can be measured for many logical steps. That happens very often, in fact. When you have to put your maps together in various ways, you need more formality than analogy alone can give you.

People like Einstein and other theoretical physicists DERIVE results in logical sequences that go beyond the measurable for many, many steps. So do engineers. How would an electrical engineer designing a communication link with a satellite check her calculations experimentally? She couldn't possibly. She has good reason to trust those calculations, even when they get complicated, because she has engineering mathematics that she can trust. Even so, that engineering math, which physicists use too, has a mistake buried in it that has caused difficulties for centuries. The difficulty comes just at the interface between measurable things and their abstract representations. The difficulty only happens when coupled relations are represented, and then can be too small to see. But the difficulty can blow up, as it does in neurophysiology and some other places. The mistake occurs when measured quantities have to be combined in finite increment equations and then mapped into formulas manipulated by abstract mathematics.

rshowalter - 01:19pm Jun 18, 1998 EST (#594 of 596) Robert Showalter showalte@macc.wisc.edu

Analogies are comfortable. They say

" X resembles Y, for no particular reason...... And Y is handier to think about, or manipulate. ... So we'll use Y! ... It fits! .... Come look."

An analogy has no coercive logical force. It works as well as it works.

The applied physics that is engineering relies on many such analogies that usably summarize physics that may be much more complicated at finer scales. Resistance of a wire is a simple example - complicated goings-on at the atom, electron, and lattice levels result in measurable behavior that we encode in Ohm's law, a linear relation between current and voltage drop per unit length.
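
A minimal sketch of that analogy in action (the numbers are generic handbook values for copper, used only for illustration): all the lattice-level complexity is packed into one measured constant, the resistivity, and the linear law does the rest.

resistivity = 1.68e-8      # ohm-metres, roughly copper - a measured constant, not a derived one
length      = 10.0         # metres of wire
area        = 1.0e-6       # square metres of cross-section
current     = 2.0          # amperes

resistance   = resistivity * length / area     # R = rho * L / A
voltage_drop = current * resistance            # Ohm's law: V = I * R
print(resistance, voltage_drop)                # about 0.168 ohm and 0.336 volt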

Even so, the convenience and innocence of analogy doesn't cover necessary kinds of descriptive work. For math-physics analogies, there are practical limitations, and limits to how theory-free they can be.

The analogies themselves require us to use some essentially abstract theoretical constructs involving dimensions - constructs that are NOT part of pure math.

If we have to put several "analogy based rules" TOGETHER, we have to work on the basis of formal rules if we are to work at all. More generally, if we have to put several equations together, whether they are based on analogies or not, we have to use formal rules if we are to work at all. And we HAVE to assume that these formal rules have coercive force, if we are to proceed at all.

Here are some examples.

Newton didn't know HOW gravity worked, and said so. But he knew (and guessed and believed) that planetary motions fit an inverse square law. He knew (and guessed and believed) that his law of inertial motion was right. So far, that's analogy, written as convenient abstractions. When Newton TOOK his gravity relation, and his law of inertia, and COMBINED them for problems of interest, he was doing FORMAL mathematics, according to FORMAL rules (as best he could) that were intended to be, and were thought to be, and mostly were, logically coercive. And he got a long way pretty fast, in a world where many, many things fit his new mathematical tools. Anyway, it wasn't long after he started working that he got past analogy, and into formal procedural manipulations. Newton lost his analogical innocence almost as soon as he started doing problems. (Newton also hit problems with multiple orbiting bodies early - his formality "showed" that coupling interactions between any three planets were always vanishingly small.)
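
Here, as a hedged sketch of my own (not Newton's procedure, and in units scaled so G times the central mass is 1), is what "combining the two laws" amounts to in practice: the inverse-square force and the law of motion, stepped forward together, trace out a closed orbit.

import math

GM = 1.0                    # gravitational constant times the central mass, scaled to 1
x, y = 1.0, 0.0             # start one unit out ...
vx, vy = 0.0, 1.0           # ... at circular-orbit speed
dt = 1.0e-4

for _ in range(int(2.0 * math.pi / dt)):     # roughly one orbital period
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3      # the inverse-square law, read as an acceleration
    vx += ax * dt                            # the law of motion, one small step at a time
    vy += ay * dt
    x += vx * dt
    y += vy * dt

print(x, y)    # ends up close to the starting point (1, 0): the two laws together close an orbit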

Newton, for most people, was the greatest of physicists and the greatest of mathematicians. When he lived, no one was clear about the difference. The next greatest mathematical physicist, in Feynman's view, and Einstein's view, was James Clerk Maxwell.

Maxwell was very well trained in math and was indoctrinated in the importance of connecting math to the world. (He had a wonderful math education.) Maxwell set for himself the job of converting the observations of Michael Faraday to mathematical form. Faraday's wonderful experiments could be connected by "laws" based on analogical mappings - Maxwell constructed these relations. The instant he had to put these relations together, he was doing, and trying to work out, formal mathematical operations. It was a matter of faith for him, and for those around him, that these formal operations, once properly done, would have coercive logical force in the same sense the operations of pure math have coercive force. Maxwell was clear about his faith, partly because he was in mathematical trouble for the last twenty years of his life. When he combined relations, he got terms that "had to be" 0 according to the math, when at least one of those terms had to be finite, and important experimentally. Problems and all, what Maxwell did in electromagnetics has been essential to the development of our civilization.

rshowalter - 01:21pm Jun 18, 1998 EST (#595 of 596) Robert Showalter showalte@macc.wisc.edu

Einstein's work offers examples of "models as assumed analogies" at the start of logical work, that convert to coercive formality.

Think of Einstein's special relativity. Starting from facts about how the speed of light acts in different observational frames, the facts convert to equations. So far, so innocent. But the logical innocence of analogy is soon lost. ONCE THE EQUATIONS ARE MANIPULATED THEY **HAVE** TO BE BEING MANIPULATED BY FORMAL RULES WITH COERCIVE LOGICAL FORCE. Otherwise, the manipulations are bootless.

Journals and texts are full of such sequences because the mathematical structure of the world requires them. One may start with equations (analogies or not) that are ad hoc constructs. When one MANIPULATES those equations, one commits oneself, one's co-workers, and one's customers to coercive logical rules. One has reason to hope those rules are right.

Usually they are. When we fly in airplanes and drive cars we bet our lives on these rules. In countless applications, these rules work well. Even so, there have been long-standing problems. So far, as a culture, we've coped rather than confronted these problems. They caused Newton, Maxwell, and Einstein problems, and they still cause problems in neurophysiology and elsewhere.

The following question is intended to be stark, but I believe that it is absolutely serious:

What is the RIGOROUS, PROVED logical foundation of the mathematics that scientists and engineers use - the mathematics that manipulates equations representing physical relations?

The answer, as of now, is that our culture has no established and well verified logical foundation for applied mathematics at all. The mathematical-physical foundations we stand on are ad-hoc constructions that are much less trustworthy than they appear. Our formalities may look exactly like the formalities of pure math. But they exist in logic sequences that are entirely unconnected to the axioms on which pure math rests.

When we derive an equation representing a physical model, reasoning from a sketch and other physical information, we write down symbols and terms representing physical effects. We may write down several stages of symbolic representation before we settle on our "finished" abstract equation. As we write our symbols, we implicitly face the following question:

Question: WHEN can we logically forget that the symbols we write represent a physical model? WHEN can we treat the equation we've derived from a physical model as a context-free abstract entity, subject only to the exact rules of pure mathematics?

We can never do so on the basis of rigorous, certain, clearly applicable axioms. There are no such axioms. We cannot avoid making an implicit assumption that says

"THIS equation can be treated as a valid abstract equation, without further concern about its context or origin, because it seems right to do so, or because it is traditional to do so. We have made the jump from concrete representation to valid abstraction HERE."

This assumption may happen to be right in the case at hand. But the assumption is not provably true from the axioms and procedures of pure mathematics. People go ahead and make these sorts of assumptions as they work. They cannot avoid doing so. Right or wrong, they are making "experimentally based" assumptions in their representation-derivations. People have made these implicit assumptions without recognizing the essentially experimental nature of their proceedings. It is better that this experimental nature be recognized, so that consistency checks can be applied to the unprovable steps. Any inconsistencies involved with these implicit steps may then be identified.

rshowalter - 01:24pm Jun 18, 1998 EST (#596 of 596) Robert Showalter showalte@macc.wisc.edu

Historically, that implicit experimental mathematics has almost never been considered (except by Maxwell.) It needs to be CAREFULLY considered. Following the example of G.J. Chaitin, we can explore mathematical relations that are beyond our established axioms. We'll find that for mapping, we need something more than pure math. We need a mathematical domain that accommodates dimensional numbers, and the special dimensional entities that encode our measurement procedures. These entities, that Steve Kline and I have called natural law operators, have no axiomatically provable existence. They are not the same as numbers. They exist beyond the axioms. They are EXPERIMENTALLY necessary to connect our experimentally measurable world with the usages of pure mathematics. The EXPERIMENTS necessary to establish this "domain of the measurable" are NUMERICAL and CONSISTENCY experiments involving consistency relations within the math itself.

Consistency with physical experiments is important, but a separate and logically second issue. If we are asking for logically coercive mathematical tools (and we are) those mathematical tools must be consistent with themselves. Axiomatic inference fails beyond the reach of the axioms. Even beyond the axioms, consistency tests continue to be powerful. Consistency tests haven't been much used in the past, and I've spent some time working some of them out.

If you're thinking about a "Copernican revolution" in mathematical representation, here it is:

There HAS to be a separate mathematical domain, abstract but beyond the axioms of pure mathematics, that accommodates the usages of measurement-representation. Call this new domain the "measurement domain." Mapping into the measurement domain is logically necessary before equations can be reduced to the domain of pure mathematics. The measurement domain includes entities, the natural law operators, that are not numbers. The algebra of the natural law operators differs from the algebra of numbers in such a way that many infinities and infinitesimals in current derivations are finite.

Perhaps it would be easier to call this a "Columbian revolution". The measurement domain is a new world. Not a very fancy world, not a very difficult world. Once you know that world is there, and know a few rules, it is possible to connect the real world of measurable things and the "meaningless game" of pure mathematics more cleanly. Specifically, you can connect COUPLED physical relations in consistent ways that haven't been possible or reliable before.

Bob

psrain - 05:32am Jun 19, 1998 EST (#597 of 600) Terror, Intrigue, Mars... http://www.psrain.com

Sometimes, as in the complex mode index plane of the scattering matrix, infinities are necessary to the description of real physical phenomena. Those "poles" yield up the wave numbers of excitations which propagate energy. I love singularities like these Regge trajectories. They stand up and shout "here, you fool!" They dominate every neighborhood about them, and so, basically determine the analytical properties of the thing which yields up physical description in the form of formulas. My point, of course, is that mathematical singularities occurring in equations describing physical phenomena are many times exceedingly desirable. What would Fourier Transformation be without poles? How could irreversible thermodynamic behavior be expressed without the singularities in the inverse Laplace Transformation and the limiting of the contour to the causal portion of the response plane?
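
The most standard textbook instance of such a benign pole, set down as a bare illustration (not the scattering-matrix case, just the simplest one-pole system): the transfer function

G(s) = \frac{1}{s + a}

has a single pole at s = -a, and its impulse response is the inverse Laplace transform

g(t) = \mathcal{L}^{-1}\{G\}(t) = e^{-a t}, \qquad t \ge 0 .

The pole's location alone fixes the behavior: a pole in the left half-plane gives decay, a pole on the imaginary axis gives sustained oscillation, and a pole in the right half-plane gives growth.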

rshowalter - 12:03pm Jun 19, 1998 EST (#598 of 600) Robert Showalter showalte@macc.wisc.edu

Good point Rain! (#597) 0's and infinities are an essential part of mathematical applications, at least from analytical geometry on up. I'm worried about false infinities and false 0's that happen in the derivation of differential equations from coupled physical models. But other infinities and 0's do have important and convenient uses.

Let's talk about infinities and 0's that do make sense, before talking about the ones that are longstanding mistakes. We can start somewhere around the baby algebra course taught to Triskadecamus (#587) by the firm educational methodology many of us love so well.

Suppose you represent a plane (a paper, say) and write two perpendicular lines as axes, one horizontal (the x axis) and the other vertical (the y axis). Now if a line is rotated on that plane, its slope with respect to those axes changes. (The same line, judged by different axes, would have different slopes.)

The slope of a line gets larger and larger as it approaches the vertical (approaches parallel with the y axis). In the vertical limit, that line's slope, dy/dx, is infinity. The infinity involved is bigger than any number you could specify by any means, but the infinity is a sensible mathematical entity, that you can focus with a limiting argument if you wish.

If you rotate that line so that it approaches parallel with the horizontal (x) axis, its slope approaches 0, and when the line is parallel to the x axis, its slope is 0.

If you happened, for analytical reasons, to have a separate read on the derivative of the line as it rotated, you'd be able to tell how the line related to the x and y axes by looking at its slope (derivative) without seeing the axes themselves. When things get a little more abstract, (and they do quickly when you start doing linear algebra) that kind of thing happens. We talk about geometries without pictures. We talk about n dimensional geometries that we cannot picture. Infinities and 0's connected to slopes (derivatives) and partial derivatives tell you that you're parallel to particular axes, and perpendicular to other axes. That's not only convenient, it's essential. PSRAIN points that out.
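
A small numerical sketch of the rotating line (my own illustration): the slope is just tan(theta), where theta is the angle from the x axis, and it grows past any number you care to name as the line nears the vertical.

import math

for degrees in [0, 30, 60, 85, 89, 89.9, 89.99]:
    slope = math.tan(math.radians(degrees))
    print(degrees, slope)

# 0 -> 0.0      (parallel to the x axis: slope is 0)
# 30 -> 0.58,  60 -> 1.73,  85 -> 11.4,  89 -> 57.3,  89.9 -> 573,  89.99 -> 5730
# ... growing without bound as the line approaches parallel with the y axis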

In abstract representations, a person has good reason to respect his poles, and love his 0's, and not doubt the analytical usefulness of either of them, unless they cause trouble, or unless they're part of a representation that doesn't work. Even so, I'm not real clear about the beautiful and magical aspects of poles and zeros. They tell you where the axes are in your analysis, and that's good. A little farther along, they tell you where your equilibria are (and where your points of maximum instability are) and that's also good. They do some other similar sorts of things. But I think Rain is waxing a little on the poetic side when he speaks as follows:

"I love singularities . . . . . . .. They stand up and shout "here, you fool!" They dominate every neighborhood about them, and so, basically determine the analytical properties of the thing which yields up physical description in the form of formulas."

rshowalter - 12:06pm Jun 19, 1998 EST (#599 of 600) Robert Showalter showalte@macc.wisc.edu

The singularities I've met haven't been so informative. Rain, you might spend a few minutes perusing a book that has filled many pleasurable evenings of mine:

ANALYSIS OF NONLINEAR CONTROL SYSTEMS by Dunstan Graham and Duane McRuer, Dover.

This book is full of poles and zeros, none of which spoke to me in the informative manner you suggest. The book shows how powerful linear control theory (full of poles and zeros) is and also shows how much trouble you get into, and how fast the trouble comes, when you deal with even simple nonlinearities. There are many good parts of the book, but my favorite single sentence is this:

"In spite of the many words that have been used here to elaborate the discussion, and perhaps to hide a fundamental fact, both the study and the physical synthesis of nonlinear control systems are, still today, in a somewhat unsatisfactory state." ...p435

These words were written in 1961, but they could have been written yesterday. The bees, the birds, and (especially) the bats, are way ahead of us. (Either that, or the computer in a bat's brain is MUCH faster than the fastest machines we can muster.)

Analogous mathematical models, with which physicists must be concerned, are no better. I don't have any solution to the mathematical stumps that can happen in this sort of analysis. It often happens that a problem seems to be set up right, but the math simply defeats you. As computer simulation gets better, there are getting to be fewer of these problems but there are still a prodigious number of them, and many of them are important.

People forget how easily these stumps happen. Groups of people can do that, too. They get to talking about things that are "understood in principle" and calculations that "can be done in principle" and if they keep talking like that for a while, they can miss mistakes and incompletenesses that they'd find if they only got down and dug.

A problem, if they talk this way much, is they can feel "SURE" of things they don't really know to be true on the basis of evidence. They can reassure any doubters within their midst. "Rock solid truth" can be manufactured in this way.

rshowalter - 12:09pm Jun 19, 1998 EST (#600 of 600) Robert Showalter showalte@macc.wisc.edu

Psrain is right. Infinities and zeros have essential uses in mathematical representation. The uses Psrain referred to were in the SOLUTION of equations.

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

What about the DEFINITION of equations?

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Particularly, what about the DEFINITION of the equations, that is, the derivation of the equations, from PHYSICAL MODELS directly connected to measurement?

These derivational questions involve separate problems. In derivation of differential equations from coupled physical models, false infinities DO occur. False zeros DO occur. Both cause logical and practical problems.

Troublesome infinities have been central to physics this century.

Selected Papers on QUANTUM ELECTRODYNAMICS J. Schwinger, ed. Dover

sets out really interesting papers, from 1927 to 1948. A lot of good guys sweated bucketfuls, and stayed up nights, worrying about the infinities discussed in the QED literature. Not all infinities are the friendly kind Rain refers to.

I'll be discussing both a false infinity and a false zero that occur in the derivation of the line conduction equation, because the false infinity is easy to test, and the false zero is medically disastrous, and similar to false zeros that bothered Maxwell. Both are involved with the logic of DERIVATION of an equation, not the logic of solution of that equation under some specific circumstance.

When a scientist or engineer pulls some equations out of a handbook and manipulates them, that manipulation, right or wrong, is isomorphic to a manipulation in pure mathematics, and might as well be pure mathematics.

It is a different case when an equation must be derived. It is a different case when equations must be COMBINED in ways that require decisions about cross-effects. Then, when we derive an equation representing a physical model, reasoning from a sketch and other physical information, we write down symbols and terms representing physical effects. We may write down several stages of symbolic representation before we settle on our "finished" abstract equation. As we write our symbols, we implicitly face the following question:

Question: WHEN can we logically forget that the symbols we write represent a physical model? WHEN can we treat the equation we've derived from a physical model as a context-free abstract entity, subject only to the exact rules of pure mathematics?

We can never do so on the basis of rigorous, certain, clearly applicable axioms. There are no such axioms. In the past, standard practice has been to make an implicit assumption that says

"THIS equation can be treated as a valid abstract equation, without further concern about its context or origin, because it seems right to do so, or because it is traditional to do so. We have made the jump from concrete representation to valid abstraction HERE."

We can do better than that, and there are important reasons why we must do better than that. We can learn what we need to know by insisting that our mathematics be SELF CONSISTENT according to the same consistency logic that has been a mainstay of toolmaking and surveying for centuries.

It turns out that we have to map our representation from the physical model FIRST into a measurement domain that includes natural law operators, and algebraically simplify it there, according to the rules that apply to natural law operators, before we map that equation into the abstract domain of pure mathematics. Otherwise, for coupled models, we may generate false infinities, or can eliminate terms that are finite, and that can be important.
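
The natural law operator algebra itself is in the papers, not in this post. But a toy sketch in Python can at least show, in miniature, what it means to carry the measurement context into the algebra instead of stripping it off at the start. This is nothing more than ordinary dimension bookkeeping - far weaker than the natural law operators, and offered only as an illustration of the kind of accounting the measurement domain demands: quantities that keep their dimensions attached, combine them by definite rules, and refuse operations that make no dimensional sense.

class Quantity:
    """A number that carries its physical dimensions, e.g. {"m": 1, "s": -1} for a velocity."""

    def __init__(self, value, dims):
        self.value = value
        self.dims = dims

    def __add__(self, other):
        if self.dims != other.dims:          # adding metres to seconds is refused
            raise ValueError("dimensionally inconsistent addition")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        dims = dict(self.dims)
        for unit, power in other.dims.items():
            dims[unit] = dims.get(unit, 0) + power
        return Quantity(self.value * other.value, {u: p for u, p in dims.items() if p != 0})

    def __repr__(self):
        return "%g %s" % (self.value, self.dims)

distance = Quantity(3.0, {"m": 1})
time     = Quantity(2.0, {"s": 1})
speed    = Quantity(1.5, {"m": 1, "s": -1})

print(speed * time)               # 3 {'m': 1} - the dimensions combine by rule
print(distance + speed * time)    # 6 {'m': 1} - legal: both terms are lengths
# print(distance + time)          # raises ValueError: the bookkeeping catches the nonsense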

rshowalter - 07:19pm Jun 19, 1998 EST (#601 of 602) Robert Showalter showalte@macc.wisc.edu

I've said that

" There HAS to be a separate mathematical domain, abstract but beyond the axioms of pure mathematics, that accommodates the usages of measurement- representation. Call this new domain the "measurement domain." Mapping into the measurement domain is logically necessary before equations can be reduced to the domain of pure mathematics."

At one level, a statement like this is strange, and even revolutionary. At another level, it's old hat. The difference between the tangible world and the abstract world of mathematics has been talked about since Newton's time. I'd bet one could find such statements by a member of every generation since Newton's. How can such a statement be focused? Clerk Maxwell needed such a focus, and struggled to get it for twenty years or more. It was his main modeling problem, and his failure to handle it impressed generations of physicists after him, as Feynman remarked. NOBODY can be expected to be smarter than Maxwell, and Maxwell was pretty clear where his problem was, and what it was. Still, he stayed stumped.

Here is James Clerk Maxwell, writing a year before his death: ( DIMENSIONS, Encyclopedia Britannica, 9th ed. 1878)

"There are two methods of interpreting the equations relating to geometry and the other concrete sciences.

"We may regard the symbols which occur as of themselves denoting lines, masses, times &c; or we may consider each symbol as denoting only the numerical value of the corresponding quantity, the concrete unit to which it occurs being tacitly understood.

"If we adopt the first method we shall often have difficulty in interpreting terms which make their appearance during our calculations. We shall therefore consider all the written symbols as mere numerical quantities, and therefore subject to all the operations of arithmetic during the process of calculation. But in the original equations and the final equations, in which every term has to be interpreted in its physical sense, we must convert every numerical expression into a concrete quantity by multiplying it by the unit of that kind of quantity."

According to the first, more literal method Maxwell cites, we have "difficulty" interpreting some (cross effect) terms. In fact, we cannot interpret them at all. We are stopped. THEREFORE we make an assumption. We decide to act AS IF our physical quantity representing symbols may be abstracted into simple numbers in our intermediate calculations. This assumption usually works. But it is a pragmatic assumption with no logically rigorous basis at all. Sometimes it fails, and Maxwell knew that. Here's the gruesome choice Maxwell faced:

On Maxwell's first assumption, we have terms that are difficult (impossible) to interpret.

On Maxwell's second assumption, these terms fit readily into our calculus apparatus, and can quite often be "shown" by a limiting argument to be infinitesimal or infinite (sometimes when Maxwell KNEW the terms couldn't be infinitesimal or infinite.)

This was no casual problem for Maxwell, but a central concern of his for two very industrious decades. George Hart said, in 1995, that the problem had not improved or changed in any way after more than a century. (MULTIDIMENSIONAL ANALYSIS, Springer-Verlag, 1995, pp 17-18.)

Resolution of this class of problems takes careful attention to the difference between MAPS (that are REPRESENTATIONS) and TERRITORIES (that are WHAT IS REPRESENTED.)

It takes recognition of the fact that, even when the axioms are inapplicable, consistency tests are both possible and necessary.

The necessary logic for solving this class of problems, from a mathematical side, was pioneered by Kurt Gödel and G.J. Chaitin. The necessary logic, from a measurement side, was pioneered by P.W. Bridgman, and applied and focused by S.J. Kline and me.

Kline and I have solved the problem, and the papers showing the solution have been checked to some degree by people who have read this forum. No errors have been pointed out. These papers are discussed and hotkeyed in three Pi in the Sky entries (#674-676): rshowalter "Pi in the Sky" 6/1/98 3:35pm

To fit the solution into a useful context, I think it is useful to consider some of the thoughts and difficulties of P.W. Bridgman, Nobelist in physics, arch-realist, experimentalist extraordinaire, and a central figure in the definition of the "engineering" view of "modeling rigor" - the notion that right understanding of a model IS an understanding of how to measure the quantities discussed in the model, neither more nor less. This is a very different notion of rigor from the one the pure mathematicians have. Both when I think Bridgman had it right, and when I think he didn't, it seems to me that his ideas throw an interesting light on the questions:

How do you go from a measurable model (territory) to a representation (map) in abstract mathematics, and wind up with a map that represents the territory well?

and

When you do that, how do you explain how you did it?

Bob

rshowalter - 03:57pm Jun 20, 1998 EST (#603 of 605) Robert Showalter showalte@macc.wisc.edu

Our society has never been clear, or felt clear, about exactly how mathematical theory and experiment fit together. Thinkers who've tried to get clear have shown that they were not. We've used the connection between mathematized theory and the measurable world as a magic that somehow works. There's no hope of avoiding some philosophically unclear or arbitrary elements in the math- reality connection. But we CAN hope to get the grammar, the procedural steps, in the connection between the measurable world and the mathematical abstraction more clear than they are.

We need to get some nuts and bolts relations right.

People and institutions are conflicted about math and experiment. Most folks seem pretty comfortable about experiments, except that experiments are so expensive and difficult to do.

Math is different. Everybody I know, when you push them, is uncomfortable and conflicted about math. (Me too.) High and low, folks worship math, fear it, hate it, and have other feelings about it. But, except for some narrow areas where people become accustomed to it, folks seldom deal with math as a matter-of-fact tool. Math isn't a comfortable tool. It is too powerful, for one thing. We don't understand how it works, for another. Even in the best hands, it can refuse to act.

There are stories about guys who fall in love with, and get addicted to, beautiful ladies who sometimes take them to the heights of rapture. Other times, for no reason the men can see, the ladies clobber them. Well, for people, the Queen of the Sciences can be like that. The old Queen may do wonderful things for you. But if you think you can predict her, you haven't tried to woo her. She can put the freeze on you when you least expect it, and leave you with dry heaves for as long as you care to heave. Helpful she is; an ordinary tool, she isn't.

I just described math in a magical, medieval way, using language very different from the language people apply to things that they understand and have mastered.

There's no hope of avoiding some philosophically unclear or arbitrary elements in the math-reality connection. There's no hope of avoiding all fear. But we CAN do better than we've done. We CAN hope to get the grammar, the procedural steps, in the connection between the measurable world and the mathematical abstraction more clear than they are.

It helps to know that we've got some problems.

budrap - (#590) said some fascinating, powerful things that have been haunting me. Budrap says that

"Twentieth century science placed (an obstacle) in its own path by putting the cart of mathematics ahead of the horse of experimentation and observation. "

He goes on:

" Scientists wouldn't have to remind themselves that the map is not the territory if they were focused primarily ON the territory. . . . . . . . . . There wouldn't be this great difficulty distinguishing between what is represented and what is representation if scientists spent the majority of their time in open minded observation and experimentation rather than cooking up ever more fanciful theories for which they then design ever more elaborately tailored 'experiments' whose sole purpose is to find some thread of corrobatory evidence no matter how slim and no matter what the cost. ............ Science must rest securely on an empirical footing before it picks up the useful and invaluable, when properly used, tool of mathematics. "

I agree with most of this, and yet, POWERFUL men worked hard to avoid having this math-worship happen. It isn't just that scientists became bad or stupid or lazy folks. It wasn't that science folks just "chose to put an obstacle in their path." All those reservations make Budrap's comments more interesting, not less.

rshowalter - 04:00pm Jun 20, 1998 EST (#604 of 605) Robert Showalter showalte@macc.wisc.edu

You could find a more powerful figure in science administrative history than James Conant if you thought hard, maybe, but Conant was big. James Conant defines science as follows:

SCIENCE: ....."that portion of accumulative knowledge in which new concepts are continuously developing from experiment and observation and lead to further experimentation and observation."....................... "Certain Principles of the Tactics and Strategy of Science" Chapter 4 in ON UNDERSTANDING SCIENCE James B. Conant, 1947.

That notion of science seems to fit Budrap's requirements. Conant's definition doesn't fit much science then or now. If you want a stark contrast to Conant's resolutely empirical definition, look at Feynman's "Classical Physics," table 18-1, Vol. 2, The Feynman Lectures on Physics. In Feynman's view, very respectable then and since, ALL of classical physics was expressed by:

Maxwell's 4 equations (as organized by Heaviside) at the top,

conservation of charge,

an electromagnetic force law,

Newton's law of motion with Einstein's modification, and

the gravity force law.

(Contextual note: after a year and a quarter of hard instruction, with Feynman working harder than anybody, a small percentage of Caltech students, who were still following after most had been thoroughly lost, got to say "Aha! Right there! ALL OF CLASSICAL PHYSICS in half a page.")

(A famous engineering experimentalist in the '30s measured the duration of a combustion event. It was .1 millisecond. It took him all day.)

When Feynman displayed that half page, he meant its title seriously: ALL of classical physics. If ALL of classical physics can be expressed on half a printed page, in the form of groups of symbols not clearly connected to cases, we are dealing with a kind of science POWERFULLY disconnected from Conant's kind of science.

Feynman's "science" is a powerful kind of science, too. By and large, people respect it as much, or more, than Conant's kind.

How do these "sciences" coexist? Not always well, not always clearly, not always logically. How DOES the math connect to the measurable science, anyway? There are no accepted explanations that really work. George Johnson's gotten that completely right.

Budrap (#590) makes complaints that would make sense if math had some of the status, and some of the immunities, of magic, and in science it sometimes does. The concerns Budrap cites are fair concerns. One problem is that math, when it works in physical description, can be so powerful and efficient. That raises the value of theory work, but can also intimidate. Another problem is that math, right or wrong, is often quite easy to write: folks can spin out theoretical papers pretty fast, if they've a mind to, and they can keep on doing so long after the experimentalists have been stumped. (Lindley's THE END OF PHYSICS is largely about this.) Another problem comes from university usages. Most papers, for most paper writers, mostly matter as chits for educational employment and advancement in academic departments, or in government operations modeled on academic departments. A sort of Gresham's law can occur, with the "cheap chits" of math-theory displacing the "expensive chits" of experimental work. That would fit the pattern Budrap complains of. But that problem occurs, mostly, because math-theory is DISCONNECTED from experiments.

Experimental discipline is a universally respected ideal that is hard to apply, or impossible to apply, because the connections between math-theory and experiment aren't tight enough for map-territory matchings.

rshowalter - 04:03pm Jun 20, 1998 EST (#605 of 605) Robert Showalter showalte@macc.wisc.edu

What do we have, beyond "ABRACADABRA" when we make the connection between math-theory and experiment? Why do we need to know? Why?

There are practical reasons we ought to know. Maxwell needed to know. Engineers trying to deal with COUPLED circumstances where the couplings really matter need to know. In neurophysiology, the conduction equation now used understates inductance by factors as large as 10^18, with disastrous consequences, because the right answer isn't known.

There are also cultural reasons we would like to know how math and reality connect. Suppose, at some future time, standard instruction asked high school students to do the following in a week or two:

"O.K. kids. Suppose that ALL the equations of natural law were somehow lost, but you still knew the math we've taught you. The data was not lost. How would you quickly and cleanly derive the basic equations of classical physics, JUST FROM THE DATA? We've got the data organized. We'd like Newton's laws, and the laws of moving and stationary charges."

Today academics would fail such a test. They'd take refuge in the notion that the equations were "acts of genius." (In science, after some years, "acts of genius" are supposed to become obvious routine.) If we REALLY knew how to do these things, we could teach people to do them too. Our culture would be more comfortable, and more unified, if these things were widely understood.

We need to know enough about the math-reality connection to describe valid connections across that boundary in clear, procedural ways. If, in the end, the steps are in any way obscure or unclear, they are unsatisfactory.

If we knew that much, Budrap's complaint would be entirely fair. But if we knew that much, Budrap's complaint might be unnecessary, too.
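(An aside, to give the flavor of that exercise. Here is a tiny sketch, in Python, of the EASY half of the job; the data, the names, and the 1% noise figure are all invented. Once somebody has told you the FORM of a law, pulling the constants and exponents out of the data is routine. Telling you the form is the part nobody knows how to teach.)

# Toy version of "derive the law from the data": synthetic, hypothetical data.
# We pretend the table of measurements survived but F = m*a did not, and we ask
# the data what fits F = C * m^p * a^q.  (Guessing that FORM is the hard part.)
import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(0.5, 10.0, 200)            # masses, kg
a = rng.uniform(0.1, 50.0, 200)            # accelerations, m/s^2
F = m * a * rng.normal(1.0, 0.01, 200)     # forces, with 1% measurement noise

# log F = log C + p log m + q log a  ->  ordinary least squares
X = np.column_stack([np.ones_like(m), np.log(m), np.log(a)])
(logC, p, q), *_ = np.linalg.lstsq(X, np.log(F), rcond=None)
print(f"C ~ {np.exp(logC):.3f}, p ~ {p:.3f}, q ~ {q:.3f}")   # expect roughly 1, 1, 1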

*********************

P.W. Bridgman was a fascinating figure, because he thought intensely about the math-reality connection, and got to the point where he didn't see that there was a conceptual problem of any kind. Even so, he couldn't do a lot of problems, and knew it. It's interesting what he saw, and what he missed. There was a place he walked all over, that had deeper problems than he suspected. Everybody else missed it too.

When we measure many of the quantities that we use, the results of our measurements aren't just numbers. They are more complicated than that, dimensionally and in other ways.

Bob

rshowalter - 07:42pm Jun 20, 1998 EST (#606 of 607) Robert Showalter showalte@macc.wisc.edu

Accomplished old men, who know they are going to die, sometimes write books. P.W. Bridgman wrote an exceedingly interesting one two years before he died of cancer. Whether he knew of his doom when he wrote THE WAY THINGS ARE (Harvard, 1959) I don't know. The last chapter starts with this:

" We now find ourselves, at the end of our long analysis, in a position to offer a tentative and partial answer to the problem dimly shadowed forth in the INTRODUCTION, namely, to find the source of the weakness or ineptness of all human thinking."

The man who wrote that was still doing difficult experiments with great confidence and success. For all the confidence, for all the success, in spite of the Nobel Prize, Bridgman's mind stayed focused, as it had for years, on "the weakness or ineptness of all human thinking" including that thinking aided by instruments that extend the senses. Bridgman, an experimentalist as meticulous, inventive, confident, and successful as anyone is likely to name, had a scientific style that seems as certain as it could be. He knew his own mind, and was uncompromisingly solitary - he published 260 papers, only two with co-authors. He was WONDERFUL with his hands, and his setups were brilliant. He made much of his most precise, impressive equipment himself. To my mind, Bridgman was also as intuitive and intense as any poet. I think he is one of the most interesting physicists of this century. Bridgman thought long, hard, and carefully about some of the same things that have occupied philosophers, Berkeley included, from the Greeks on. His name is now a symbol for a simple idea. He's revered partly for that simple idea, and partly for something else.

The Britannica (1985) quotes the simple idea that people remember about Bridgman from his THE LOGIC OF MODERN PHYSICS:

"In general, we mean by any concept nothing more than a set of operations; the concept is synonymous with the corresponding set of operations."

Bridgman used this notion of operationality as a pattern to fit to all experimental circumstances and all scientific thought and analysis. To him, usages with experimental equipment were operations. Thoughts were, too. In a sense, this was taking the ancient Greek admonition "define your terms!" and pushing it very hard. He showed that this pushing could be enlightening, comforting, and practically useful. He pushed it the more because he was so doubtful about people's ability to see and understand even those things that eyes and instruments showed them, much less anything beyond.

rshowalter - 07:46pm Jun 20, 1998 EST (#607 of 607) Robert Showalter showalte@macc.wisc.edu

Bridgman was much concerned with the fallacy of misplaced concreteness in the sciences. He saw that definition of many scientific ideas was muddled conceptually and from a measurement point of view, and showed many examples. Fighting that muddle, he became something like the patron saint of experimental precision and care.

Bridgman's basic ideas of attainable reality were similar to map makers' notions, or instrument makers' notions. Bridgman practiced what he preached as an instrument maker. He built accurate, reliable, explainable, trusted pressure measuring instruments up to 400,000 atmospheres, making invention after invention in order to do so. Here was the CENTRAL thing he knew about calibrating and perfecting a measurement instrument.

The instrument had to pass loop tests.

Different cycles or trajectories, ending at the same place, should yield the same final reading. This is the same test surveyors have applied for centuries. This is a kind of test applied again and again in the making of precision tools. Bridgman didn't invent the loop test. But he showed by example and forceful argument how fundamental loop tests were, and insisted that people understand.
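For concreteness, here's a minimal loop test in that spirit, as a sketch in Python. The conversion factors are the standard ones; the "instrument," the chain of steps, and the tolerance are hypothetical. Chase a value around a closed cycle and demand that it come back to where it started.

# A minimal loop test, Bridgman-style: run a reading around a closed cycle of
# transformations and demand closure within tolerance.
def loop_test(start, steps, tol=1e-9):
    x = start
    for step in steps:
        x = step(x)
    return abs(x - start) <= tol * abs(start)

m_to_ft  = lambda m:  m * 3.280839895      # standard conversion factors
ft_to_in = lambda ft: ft * 12.0
in_to_m  = lambda i:  i * 0.0254

print(loop_test(123.456, [m_to_ft, ft_to_in, in_to_m]))        # True: the loop closes

# A miscalibrated step fails the loop test, and tells us to go find out why and fix it.
bad_ft_to_in = lambda ft: ft * 12.1
print(loop_test(123.456, [m_to_ft, bad_ft_to_in, in_to_m]))    # False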

When I think of Bridgman, I think of hard thought, and endless care, and the notion that with enough thought, and enough care, all scientific problems relating to the real measurable world could be mastered. Exacting as that ideal is, life isn't quite that easy. Bridgman knew that. But beyond what care could do, for Bridgman, lay the consolations of philosophy, not problems to be solved.

But Bridgman, the most matter-of-fact of men, taught two other lessons that stick with me.

No matter how we try, our contact with the world will ALWAYS be somewhat provisional, in ways we cannot as animals predict. We always have reason for some suspicion of our ideas about "reality."

even so,

Our instruments (in HIS sense, including our ideas and mathematical operations) do what they do. When an instrument fails a loop test, we should find out why, and fix it.

We can deal with our mathematical instruments in the same spirit we deal with other tools. We can test them, and fix them so that they pass the tests we can apply to them.

Pure math, that we use instrumentally, is a "game" or "logical system" that is testable, consistent, and well defined in its own terms. The foundation of pure math is a few axioms, which may include those needed to define the integers, the arithmetical rules

a + b = b + a, and ab = ba

a + (b + c) = (a + b) + c, and (ab)c = a(bc)

a(b + c) = ab + ac

and (sometimes for teaching convenience) Euclid's axioms and the geometrical notions these axioms convey. Pure math results can be verified by rules referred exactly to the axioms. It is as real as the game of chess.

What logical instruments do we have to CONNECT our measurements to pure math?

If there's a real world behind those measurements, and that seems exceedingly likely by now, that connection has to be something real, something specific.

If the real world is as mathematical as it appears to be, that connection has to be something of sharp mathematical precision.

That connection will have to be BEYOND the axioms of pure mathematics, which say nothing of the circumstances we measure. We won't be able to determine that connection axiomatically, no matter how mathematically precise it may be. We WILL be able to apply experimental math, and loop tests in particular, to our investigation of that connection.

rshowalter - 02:18pm Jun 21, 1998 EST (#608 of 608) Robert Showalter showalte@macc.wisc.edu

Note: This is classical physics. Before possible difficulties in "higher" physics are addressed, it makes sense to deal with the logical requirements of classical physics and engineering.

Before the question

What logical instruments do we have to CONNECT our measurements to pure math?

can be asked clearly, one needs to answer another question, well beyond the axioms of pure mathematics.

A) How do we arithmetize our world? How do we measure, and how do we express our measurements?

Clerk-Maxwell thought harder about that than anyone before him, and worked out dimensional notation standard to this day. A short quote from Maxwell will answer questions under A) above.

From measurement it is a short step to the expression of natural laws, the borderline between axiom-free usages and mathematical usages that we treat as abstract and axiomatic pure math. That short step, the formation of natural law operators, has seemed unremarkable. I'll quote D.C. Ipsen's clear and standard recounting of the steps that form entities that I'll call natural law operators.

These steps seem entirely obvious and trouble-free if the distinction between map and territory is forgotten, and if axioms are assumed to apply where they do not.

But when the natural law operators are considered as the axiomatically unsupported entities that they are, it makes sense to investigate these steps, and the natural law operators that they bring into being. For clarity, I'll define the natural law operators formally. We can ask these questions about the natural law operators:

Why can we do arithmetic with natural law operators? On what authority?

What do we know about that arithmetic under various circumstances?

We have NO axioms that give authority to ANY of the usages of natural law operators. We've assumed that we've had rules for their use, more-or-less casually, on the basis of a map-territory muddle. But experimental mathematics can guide us toward the rules we need, and rule out internally inconsistent usages that cannot be correct.

Once experimental math is applied to the natural law operators, we can answer the question:

What logical instruments do we have to CONNECT our measurements to pure math?

The natural law operators are the logical instruments that we use to make that connection. Once experimental math is applied to the natural law operators, we can see that the natural law operators are not the same as numbers. When they are properly used, infinities and infinitesimals that have been problematic for centuries are avoided.

rshowalter - 03:06pm Jun 21, 1998 EST (#609 of 609) Robert Showalter showalte@macc.wisc.edu

How do we arithmetize our world? How do we measure, and how do we express our measurements? People have considered this sort of question since Galileo's time. Fourier did so with insight and care. After 1860, James Clerk Maxwell thought harder about measurement and dimensions than anyone before him, because his problems made him do so. Maxwell worked out dimensional notation standard to this day. Here are excerpts from the beginning of Maxwell's A TREATISE ON ELECTRICITY AND MAGNETISM:

"1.] Every expression of a Quantity consists of two factors or components. One of these is the name of a certain known quantity of the same kind as the quantity to be expressed, which is taken as a standard of reference. The other component is the number of times the standard is to be taken in order to make up the required quantity. The standard quantity is technically called the Unit, and the number is called the Numerical Value of the quantity.

"There must be as many different units as there are different kinds of quantities to be measured, but in all dynamical sciences it is possible to define (BRIDGMAN WOULD SAY "EXPRESS") these units in terms of the three fundamental units of Length, Time, and Mass. Thus the units of area and volume are defined respectively as the square and the cube whose sides are the unit of length. ...................................................................................................................................... ...................................................................................................

"2.] In framing a mathematical system we suppose the fundamental units of length, time, and mass to be given and deduce all the derivative units of length, time, and mass from these by the simplest attainable definitions.

"The formula at which we arrive must be such that a person of any nation, by substituting for the different symbols the numerical values of the quantities as measured by his own national units, would arrive at a true result.

"Hence, in all scientific studies it is of the greatest importance to employ units belonging to a properly defined unit system, and to know the relations of these units to the fundamental units, so that we may be able at once to transform our results from one unit system to another. .......................................................................

P. W. Bridgman spent much time elaborating on Maxwell's definitions and pointing out how necessary careful definition of measurement procedures actually was. It is one thing to express the dimensions of a quantity in standard units. It is another thing to define the measurement, and by doing so define what the quantity means. For instance, the units of torque and energy are identical, though torque and energy are entirely distinct physical ideas. Both have units of (Length^2 · Mass)/Time^2. For both the energy and torque definitions, there is geometrical information necessary to a full definition, but not set out in the units. The unit definitions are encoded notations that achieve compactness but lose information.
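To make Maxwell's "numerical value times unit" idea, and the torque/energy point, concrete, here's a bare-bones sketch in Python. The class and the names are hypothetical, not anybody's standard library. A quantity is carried as a number plus exponents of Length, Mass, and Time, which is exactly what the unit bookkeeping captures, and the last line shows what it loses.

# A quantity as "numerical value times unit," after Maxwell: a number plus
# (Length, Mass, Time) exponents.  Hypothetical sketch, not a library.
from dataclasses import dataclass

@dataclass(frozen=True)
class Q:
    value: float
    dims: tuple          # (L, M, T) exponents

    def __mul__(self, other):
        return Q(self.value * other.value,
                 tuple(a + b for a, b in zip(self.dims, other.dims)))

    def __truediv__(self, other):
        return Q(self.value / other.value,
                 tuple(a - b for a, b in zip(self.dims, other.dims)))

    def __add__(self, other):
        if self.dims != other.dims:
            raise ValueError("cannot add quantities with different dimensions")
        return Q(self.value + other.value, self.dims)

metre, kg, second = Q(1.0, (1, 0, 0)), Q(1.0, (0, 1, 0)), Q(1.0, (0, 0, 1))
newton = kg * metre / (second * second)

energy = Q(5.0, (0, 0, 0)) * newton * metre    # 5 joules
torque = Q(5.0, (0, 0, 0)) * newton * metre    # 5 newton-metres
print(energy.dims == torque.dims)              # True: the units alone can't tell them apart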

The issues of measurement Maxwell, Bridgman and many others have discussed are ENTIRELY distinct from and unconnected to the axioms of pure mathematics. Bridgman pointed out that the world of measurement is a complicated and precise world.

The physical world of the measurable and the "meaningless game" of pure mathematics are distinct. There is no strict logic connecting them.

But with a little algebra, a connection between them seems to appear. That appearance has been taken for granted since Newton's time. D.C. Ipsen describes that algebra very clearly.

rshowalter - 05:16pm Jun 21, 1998 EST (#610 of 612) Robert Showalter showalte@macc.wisc.edu

Suppose you've got two entities ENTIRELY unconnected from the axioms of pure math. Do you get an entity that's connected to the axioms of pure math if you divide one into the other? People assume just that. The assumption works very well, but not perfectly.

D.C. Ipsen, a much respected engineer at U.C. Berkeley, describes the process clearly in UNITS, DIMENSIONS, AND DIMENSIONAL NUMBERS (McGraw- Hill, 1960). I don't know how McGraw-Hill did with the book, but almost forty years after it was published, this book is taken out of the U.W. engineering library many times, year after year. Here is Ipsen (I substitute some symbols to fit this website's typography.)

8.2 The formulation of Fundamental Laws

"As we have seen, the formulation of fundamental laws and definitions is somewhat subject to the whim of the formulator. To understand the structure of physical relations, it is necessary to recognize what aspects of form are fundamental. ..........................................................

".....................................Our present concern is how to arrive at a formulation that is compatible with the intrinsic meaning of the variables involved. In other words, we are not so much concerned with the existence of conventions as with their right to existence.

"The viscous shear law provides a simple example of how a physical notion may be expressed mathematically. The law states that in a Newtonian fluid the rate of distortion of the fluid is proportional to the shear stress. .................... For a unidirectional flow the rate of distortion is equal to the transverse derivative of velocity; therefore for such a flow the law may be stated in a numerical form as

s = u (dV/dy)    (8.1)

"where u is a factor of proportionality.

Ipsen goes on to define that important factor of proportionality

"The shear stress s, the velocity V, and the transverse coordinate y may be related to the variable in the equation as follows:

s = sn lbf/ft^2    (8.2)

V = Vn ft/sec    (8.3)

y = yn ft    (8.4)

(yn, Vn, and sn are the numerical parts of y, V, and s respectively.)

"If we combine these equations appropriately with equation 8.1, we may write

s/(dV/dy) = un lbf·sec/ft^2

"Since the left hand side of the equation represents a physical variable, the right hand side must as well. We therefore give it a name (viscosity) and a symbol

u = un lbf·sec/ft^2

"With this identification we may write the law in the form of a physical equation

s=u (dV/dy)

!_!_!_!_!_!_! NOTE THE FOLLOWING PARAGRAPH !_!_!_!_!_!

"To reach this point, we have presumed only what is implicit in Eqns (8-2) to (8- 4); namely that the product of a number and a unit has a mathematical meaning. Beyond that we have assigned a symbol to the product of a particular number and unit and thereby defined a new physical variable. Experimentally, we discover this new variable to be a property of the fluid.

No matter how reasonable this may seem, there is NO axiomatic justification for what has just been done.

What has been done is "self evident" ONLY if you strip away all the measurement details involved in the "territory" of these physical relations, and deal only with the stripped down (encoded) "maps" of abstract equations, written without measurement detail. There is NO AXIOMATIC OR LOGICAL REASON why this encoding is permitted.

What has been done is very important. By means of properties defined in this way, we define physical laws that we treat mathematically AS IF they are statements in pure abstract mathematics.

Steve Kline and I have decided to call these properties "natural law operators" to focus attention on how special (and extra-intuitive) they actually are.

We have found that all natural law operators are defined according to the same pattern:

DEFINITION: A natural law operator is a "dimensional transform ratio number" that relates two measurable functions numerically and dimensionally. The natural law operator is defined by measurements (or "hypothetical measurements") of two related measurable functions A and B. The natural law operator is the algebraically simplified expression of {A/B}, as defined in A = {A/B} B. The natural law operator is a transform relation from one dimensional system to another. The natural law operator is also a numerical constant of proportionality between A and B (a parameter of the system). The natural law operator is valid within specific domains of definition of A and B that are operationally defined by measurement procedures.

Example: A resistance per unit length determined for a specific wire for ONE specific length increment and ONE specific current works for an INFINITE SET of other length increments and currents on that wire (holding temperature the same.)
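Here's that wire example as a little numerical sketch in Python. The wire, the numbers, and the function names are all hypothetical. One pair of measurements defines the operator; the same operator then works for other increments and currents on the same wire.

# Define a natural law operator (resistance per unit length) from ONE measurement,
# then check it against other length increments and currents.  Hypothetical wire.
R_PER_M_TRUE = 0.017     # ohm/m, the wire's actual property (unknown to the experimenter)

def measure_voltage(length_m, current_a):
    """Stand-in for a lab measurement on one specific wire at one temperature."""
    return R_PER_M_TRUE * length_m * current_a

L0, I0 = 2.0, 0.5                                   # one length increment, one current
r_per_m = measure_voltage(L0, I0) / (L0 * I0)       # the operator {V / (L * I)}

for L, I in [(0.01, 3.0), (7.5, 0.1), (250.0, 1.2)]:
    assert abs(r_per_m * L * I - measure_voltage(L, I)) < 1e-12
print("one operator, many increments and currents: OK")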

Here are some directly measurable natural law operators (often referred to as properties):

mass, density, viscosity, bulk modulus, thermal conductivity, thermal diffusivity, resistance (lumped), resistance (per unit length), inductance (lumped), inductance (per unit length), membrane current leakage (per length), capacitance (lumped), capacitance (per unit length), magnetic susceptibility, emittance, ionization potential, reluctance, resistivity, coefficient of restitution, . . . .

There are many, many more.

The natural law operators are not axiomatic constructs. They are context-based linear constructs that encode experimental information.

Look at how the natural law operators are derived, taking into account the measurement details they encode, and the fact that they exist BEYOND the axioms of pure mathematics.

What logical right do we have to assume we know all the arithmetical rules that pertain to the natural law operators? We have none, unless we appeal to experimental tests.

What logical right do we have to assume that the natural law operators are "just numbers?" We have no such right, unless we appeal to experimental tests.

The natural law operators ARE our interface between measurements and the equations that we call "natural laws" and manipulate as abstract equations.

Since Newton's time, we've sometimes had difficulties at the interface between our mathematical descriptions and the measurable things we've set out to describe, particularly in coupled circumstances. The INTERFACE between pure math and the world of the measurable is the "scene of the crime" as far as these problems are concerned. The mathematical entities that do the interfacing between the measurable world and pure math are the natural law operators. We should look at them carefully. We have no axioms to help us do so. We CAN still do experimental mathematics, and insist on rules that pass loop tests.

Here are two standards for loop tests.

1. Self consistency of physical description requires that wholes equal sums of the parts.

2. Self consistency of physical description requires that dimensional equations expressed in different units be consistent with each other.

When the rules for natural law operators are adapted to pass these standards in a way consistent with other things we know, false infinities and false infinitesimals are eliminated.
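Both standards are easy to state as checks. Here's a hedged sketch in Python; the segmented wire, the numbers, and the choice of unit systems are hypothetical.

# Loop-test standard 1: the whole must equal the sum of its parts.
r_per_m, total_len = 0.017, 10.0                            # ohm/m, metres
whole = r_per_m * total_len
parts = sum(r_per_m * seg for seg in [1.0, 2.5, 0.5, 6.0])  # segments that add up to 10 m
assert abs(whole - parts) < 1e-12

# Loop-test standard 2: the same physical statement, written in two unit systems,
# must agree when converted back.
FT_PER_M = 3.280839895
r_per_ft = r_per_m / FT_PER_M
len_ft   = total_len * FT_PER_M
assert abs(r_per_ft * len_ft - whole) < 1e-9                # same resistance, either way

print("both loop-test standards pass")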

When the rules for natural law operators are adapted to meet these standards, we have GOOD reason to trust our analogies between physically derived equations and symbol-for-symbol copies of these equations in the domain of pure mathematics. We'll eliminate analogies that are consistently wrong in the mathematical sense, and sometimes wrong enough to matter.

We'll have much more right to talk about "rigorous" mathematical physics, or "rigorous" engineering physics. We'll also be able to solve coupled problems that have defeated us before.

rshowalter - 01:35pm Jun 22, 1998 EST (#614 of 617) Robert Showalter showalte@macc.wisc.edu

We're at a point where some issues raised by George Johnson and other members of this forum become crucial. I've been saying that when coupled equations derived from physical models are written, and converted to finite increment equations prior to reduction to differential equations, the finite increment equations have to be algebraically simplified in a special way, to accommodate special properties that the natural law operators happen to have. False infinitesimals and false infinities have occurred because we haven't known this.

If you want to go through the nuts and bolts right now, you can read

A Modified Equation for Neural Conductance and Resonance at http://www.wisc.edu/rshowalt/memax

and especially one of its appendices

Appendix 2: REPRESENTING PHYSICAL MODELS AS ABSTRACT EQUATIONS: PROCEDURES INFERRED FROM EXPERIMENTAL MATHEMATICS at http://www.wisc.edu/rshowalt/mea2

The natural law operators are defined there. The loop tests that determine the properties of the natural law operators are set out there. There is also an Appendix 4, hotkeyed from memax above, that connects this work to neuroscience applications.

******************************************************************

Here I'd like to talk about some practical-philosophical issues that take some care, and are especially important because, in this century, they've sometimes been muddled for whole groups by application of what Triskcadamus called a "firm educational methodology." As much as anyone I know of, George Johnson has asked that these practical-philosophical ideas be attended to. George has done so with a clear sense of what both theory and experiment can do, and what they can do together. I think the world would be a better place if the following admonition of Johnson's was on the masthead of most academic journals.

" Scientists must constantly remind themselves that the map is not the territory, that the models might not be capturing the essence of the problem, and that the assumptions built into a simulation might be wrong. "

The admonition is tough to execute, and goes strongly against a very powerful nihilistic and even anti-rational tradition that has grown in science, especially in the sciences where experiments are difficult and costly in every way. To follow George's admonition, we need serviceable answers to fundamental questions. Here are some that these New York Times forums have been concerned with.

"How DOES and how SHOULD our ideas of mathematics CONNECT with measurable reality"

We cannot know what to expect of the measurable universe but we SHOULD know what to expect of the mathematics and mathematized tools we use to model it.

We ought to expect clear answers to the following question

"How does MATHEMATICS, defined in proper, usable detail, connect to MAPS and TERRITORIES, and measurement procedures, and questions of experimental mathematics?

If there are difficulties with these questions, we ought to be able to expect those difficulties to be clearly expressed and understandable.

We don't have tools for predicting the universe that we've not yet looked at, though we can make guided guesses.

We DO have self consistency tests that permit us to be SURE of well checked pure mathematics.

If we insist on consistency (as we must to avoid wild quantitative absurdities) we CAN infer the mathematical properties of the natural law operators that are our interface between the measurable world and the certain logical world of mathematics. (We can specify the properties of a CONSISTENT interface.)

***

This is NOT a statement about the universe beyond us, but a statement about the mathematical tools with which we partly describe it. We can go beyond the axioms, and connect to the pure mathematics based on axioms, using experimental mathematics. G. J. Chaitin has shown how experimental math works within the domain of the axioms, to show things that are true, but not axiomatically provable. Similar procedures continue to make sense in domains beyond the axioms. Results of mathematical experiments cannot prove with axiomatic certainty, but they can disprove. When mathematical experiments show counterexamples to an assumption, that assumption has been ruled out. Since Newton, we've been making mistakes, and sometimes these mistakes have mattered.

The mistakes are mathematical mistakes, yielding mathematical inconsistencies, and can be checked for error as certainly as a chess move can be checked. Consistent rules can be checked for consistency with the same certainty.

We've been mapping the world with faulty surveying tools. We can fix these tools, and there are moral reasons why we must. The fix is done, and slowly, but surely, the checking is proceeding.

There are certain kinds of nihilism, or systematic doubt, that don't apply to the game of chess (which works in its own terms) or to pure math (which works in its own terms.) There are questions that we may have to ask about the universe at large that we shouldn't have to ask about mathematized tools we use to map that universe.

On the flap of George Johnson's FIRE IN THE MIND, a book I've read several times with respect and pleasure, there is this:

"Are there really laws governing the universe? Or is the order we see imposed by the prisms of our nervous systems, a mere artifact of the way evolution wired the brain? Do the patterns found by science hold some claim to universal truth, or would a visitor from another galaxy find them as quaint and culturally determined, as built on faith, as the world's religions? "

In FIRE IN THE MIND, George compellingly describes some worlds, but not the only worlds. Not, for instance, the world of the Boeing engineer or the Patent Office. My position is related to George's, but different. I believe that there IS such a thing as mathematical truth in terms of specified axioms, and such a thing as mathematical consistency in entities tightly related to the domain of pure (axiomatic) math. Beyond that, I'll retreat to a GrouchoMarxian stance, and say

"Either that, or my luck's been terrible (or wonderful.) "

That is, I believe that the mapping tools with which we mathematize the world can be made TRUE in their own terms. For everything else, we look at our evidence and make our fallible judgements on essentially statistical grounds. Our judgements are statistical at bottom, not because the universe is statistical necessarily, but because the universe is beyond our exact knowing.

The maps we make with these tools are, at best, constructions, and may be incomplete or imperfect in many ways for many reasons. The more we "remind (ourselves) that the map is not the territory" and check our work, the better our constructions are likely to be.

rshowalter - 01:45pm Jun 22, 1998 EST (#616 of 617) Robert Showalter showalte@macc.wisc.edu

Let me comment on George's lines.

"Are there really laws governing the universe?

.......There seem to be some. We believe that on GrouchoMarxian grounds.

"Or is the order we see imposed by the prisms of our nervous systems, a mere artifact of the way evolution wired the brain?

......................................People get lots of things to work, so however significant the distortion imposed by our nervous arrangements may be, we have to know a great deal on GrouchoMarxian grounds.

...................................So far as mathematical order goes, I'd argue that our senses CAN fool us when we try to do pure math, when our imaginations fail, when we switch a definition, or when we miss in some other way. But I still feel that there IS a kind of stark, real truth in pure mathematics, defined in pure mathematical terms and limited to the pure mathematical field. For example, arithmetic in a computer does what it does.

"Do the patterns found by science hold some claim to universal truth, or would a visitor from another galaxy find them as quaint and culturally determined, as built on faith, as the world's religions? "

What would the great Marx say about this? Science has to be as fallible (and as reliable) as humanity itself. We know a lot of things (justified on STRONG GrouchoMarxian grounds, with plausible shards of logic attached). Since we can't prove ANY of what we "know" about the universe outside of pure mathematics, it seems plausible that mistakes, acts of faith, and some of the other indicia of humanity may be associated with our science. The more you believe in (or hope for) "the order of the universe," the more you should suspect and hope for embedded mistakes in current science.

**************

I don't know if Heinzs agrees with this position. A lot of scientists would seem to.

************

So the world is beyond our exact knowing. BUT MATH WE CAN CHECK.

ENTITIES THAT INTERFACE EXACTLY WITH MATH CAN BE CHECKED EXACTLY FOR CONSISTENCY.

************

We can be philosophical and doubtful about our knowledge of the world, and it is good if we are.

If we are philosophical and doubtful about our ability to check axiomatic math, or entities that match with axiomatic math, that is a different story. That is just being intellectually lazy.

rshowalter - 01:47pm Jun 22, 1998 EST (#617 of 617) Robert Showalter showalte@macc.wisc.edu

Even so, the temptation to be intellectually lazy, or nihilistic, is real in the present case. I'm saying that a mistake has been made in the mapping tools by which we've mathematicized our world.

It started as an oversight by Isaac Newton.

The oversight caused a lot of trouble in the 18th century, and Laplace had to work very hard to put together the magnificent patch that is perturbation theory.

The oversight caused Maxwell, and the entire mathematics community, big trouble in the 19th century, and led to a "crisis of analysis" where the mathematicians essentially stopped working on calculus, and left it as a problem for another century.

The oversight has caused much trouble in the 20th century, most notably, for me, a mistake in neurophysiology that has impoverished an entire field, and lost some chances.

What I'm saying is entirely checkable, in the same sense that any other math is checkable, but the checking is hard to get, not because the checking is intellectually difficult, but because I'm bringing unwelcome news to some people and some groups, and some of that news goes against indoctrinations that have been inculcated with Triscademus' "firm educational methodology."

My sense is that there will be some digging out to do, starting in celestial mechanics from about 1720, but that once the fixes are implemented, the sciences will be easier, more powerful, more understandable, and more fun.

The checking I've been asking for is happening, sometimes explicitly, sometimes by default. Because the work makes some uneasy, and people are people, it takes a while. Still, it is happening.

Steve Kline and I concluded that, if the interface between our mathematical maps and our territories was to be CONSISTENT with the mathematical maps themselves, that interface needed to be more clearly defined, and rules about it had to be changed. That point is entirely a matter of math, logically disconnected from experimental issues.

The math case, and some evidence is presented in A Modified Equation for Neural Conductance and Resonance at http://www.wisc.edu/rshowalt/memax

and especially one of its appendices

Appendix 2: REPRESENTING PHYSICAL MODELS AS ABSTRACT EQUATIONS: PROCEDURES INFERRED FROM EXPERIMENTAL MATHEMATICS at http://www.wisc.edu/rshowalt/mea2

The experimental justification of the point depends on evidence, of which there is a great deal, and willingness to consider that evidence, which is naturally harder to get, but which does come. For people willing to look, there has been plenty of support for the position for many years. More tests can be done. But willingness (and ability) to look is a problem, for reasons George Johnson, Thomas Kuhn, and others have described.

The point Steve Kline and I have worked on usually makes no difference at all in scientific description. But in some places, that have been sources of confusion and problems, it makes a big difference. Maxwell's equations near atomic scale are one example. Neurophysiology is another example. An entire class of coupled engineering problems, especially in turbulent fluid mechanics, offer other examples.

Bob

rshowalter - 12:22pm Jun 23, 1998 EST (#618 of 622) Robert Showalter showalte@macc.wisc.edu

Some people say that there are no perfect analogies. But in mathematical modeling, we work with analogies. Analogies are ALL we have to work with. Perfect analogies are exactly what we want.

Within limited domains, we have some perfect analogies, as far as we can determine that by measurement.

People who use classical electromagnetic theory manipulate an ANALOGY to EM waves, not the waves themselves. They expect that analogy to be PERFECT. Within defined frequency limits, that expectation is rewarded. Satellite communication, TV, radio, and much else would be impossible otherwise.

Einstein's special relativity didn't happen because he had the universe in his head. He had an analogy to the universe in his head, and on paper, and he manipulated that analogy-model. We believe he had an EXACT analogy.

We want more EXACT analogies.

We want the mapping tools we use to construct our analogies to be EXACT.

We want our mapping tools to be EXACT so that we can form new analogy-maps that are as good as we can make them.

We want our mapping tools to be EXACT so that we can re-examine old analogy- maps that we have reason to be concerned about.

If we find an error in the mapping tools we use to construct our analogies, that will mean that analogies we've thought were EXACT are inexact. The inexactness need not be a concern everywhere. Most places, the error might be small, even much too small to measure. Some other places, the error might be very big, and well worth fixing.

Steve Kline and I have been saying that there's an error in the arithmetic of the natural law operators we as a culture have been using. Specifically, our culture has misinterpreted finite increment equations in our physical models of coupled circumstances. When the finite increment has been in the numerator of the term, that's produced a false infinitesimal. When the finite increment has been in the denominator of the term, that's produced a false infinity. The fix is easy.

The shock hasn't seemed to be so easy. I talked some while ago to a very able young academic who said "If that's true, gravity would fail, the planets would fly out of their orbits, the world would end ......" The model in his head, as he saw it, was so tightly knit together that to suggest a change in the arithmetic of finite increment equations was to blow up all the physical models of order in his mind. The fact was, this young academic was agitated by my suggestion, and not very willing to talk about what I was suggesting. Some other people have been the same. Some of them I respect a great deal. Even so, when you look at the SIZE of the new terms Steve Kline and I would suggest, our fix seems innocuous indeed. After looking hard we haven't found a single case where it disturbs physical models that already WORK WELL. Often enough, we suggest new cross-terms, but usually these cross-terms are TINY - too small to measure, by the best means available today or foreseeable in the future. (People are familiar with this sort of thing. Consider the sine series. The series goes on forever, but after a while the terms get very small.)

However, in cases where current models DON'T work well, our fix sometimes comes into play. Computational fluid mechanics, which owes more to Steve Kline than to any other man, is one of those places. Neurophysiology is another place, and there the fix is a matter of life, death, and understanding (the effective inductance of neurons is now underestimated by as much as 10^18.)

I do not know and do not believe that the fix Steve Kline and I are suggesting displaces any calculations THAT ACTUALLY WORK WELL anywhere in science or technology. If anyone has an example of such inconvenience to offer, I'd be most interested. Since I haven't looked everywhere, there may be such examples, but I don't know them.

I regard the Showalter-Kline work as an innocuous clarification of mathematical modeling technique.

Our fundamental premise is this: We want the analogies we use in mathematical modelling to be EXACT. We want the mapping tools we use to construct our analogies to be EXACT. That means we want the most exact interface possible between our measurable models and the pure math we use in our analogies.

Gödel suggested and Chaitin has shown that even in the domain of pure mathematics, where the axioms rule, experimental math can offer a complementary set of tools, and show things that the axioms do not. When the axioms fade away, experimental math remains, and still guides us. That's useful.

rshowalter - 04:23pm Jun 23, 1998 EST (#620 of 622) Robert Showalter showalte@macc.wisc.edu

Note: Steve Kline and I are not the first to notice difficulties in the analogy between dimensional quantities and numbers. Particularly distinguished work has been done by George W. Hart, who reviews much of the other work done in this field in his

MULTIDIMENSIONAL ANALYSIS: Algebras and Systems for Science and Engineering ........Springer-Verlag 1995.

Hart's work focuses on linear algebra rather than the derivation of differential equations, and keys mostly to the dimensional fact summarized in the following old saw about dimensional numbers:

"You can't add apples and oranges, but you can multiply them!"

Anyone interested in physical modeling is likely to learn from Hart's book, especially his first two chapters. I did.

Anyone interested in linear algebra in dimensional contexts will find Hart's book important and essential.

Reasons to doubt our mathematical modeling have been around a long time, and, as Hart points out, we've done as well as we have because our habits, quite often, have been better than our formalities. Unfortunately, in the derivation of differential equations our habits and our formalities have been the same.

rshowalter - 04:25pm Jun 23, 1998 EST (#621 of 622) Robert Showalter showalte@macc.wisc.edu

As things become more complicated there often have to be more rules to describe those things. We know that from experience. Quite often, we can sort out why the additional rules and specifications have to happen.

A very simple example is position in a dimensional space. Position along a line from a fixed origin takes one dimensioned number to specify. (Engineers will tell you that in pure math, the unit is the "arb.") Position in two dimensions for specified axes takes two coordinates. For each additional coordinate, an additional number is needed for specification.

In engineering specification, as objects get more complicated, you need more independent views, more detail in the views, and you need to be more careful comparing the views.

The natural law operators are more complicated than numbers.

The natural law operators are ratios of measurable quantities THAT EXPRESS NATURAL LAWS. Natural Law Operators might be said to have the following "logical dimensions" or "logical aspects."

1. Aspects determined (from the definition) by abstract dimensional arithmetic: numerical value and dimensions.

2. Strict logical connection to the measurement procedures that defined each of the measured numbers that define the natural law operator.

3. Strict logical connection to the physical law they represent in a particular context.

Who's to say that 2. and 3. above are unimportant? On what basis? Whether these additional aspects matter or not, the natural law operators are more complicated and detailed entities than simple real numbers. We knew nothing about them when we were born, and may not have learned everything about them since. They are NOT axiomatic entities. It can't hurt to test them to see how they work.

Now, what do we know about the rules for adding and subtracting terms with natural law operators?

What do we know about the rules for interpreting terms that involve multiplication and division of combinations of natural law operators and spatial increments (of space or time)?

We know NOTHING of these operations on axiomatic grounds. We may wishfully assume that, whatever the rules are, they'll interface cleanly with pure abstract mathematics. So much of our world makes sense in terms of pure math analogies that we've got grounds to expect that assumption is right.

The number of sets of arithmetical rules that could meet THAT criterion is almost limitless.

We need more specification to determine these rules.

We need the specification especially when we're putting several equations together, as we must under coupled circumstances. Putting together differential equations, which are already abstracted, seems to be a trouble-free process.

Putting together finite increment equations, which are the kind of equations that represent actual measurements, seems to be a consistently troublesome process.

Here's the problem. When putting together two equations representing two physical laws at finite scale in a specific circumstance, we can get terms that involve products of two or more natural law operators and spatial increments that may occur one or more times in either the numerator or denominator. It is easy to say "these are just numbers" and go on, but the easy way isn't generally satisfactory. If a finite increment is in the denominator of a term, and you go to the infinitesimal limit, you get an infinity. Odds are you'll know that this "infinity" has to be finite, and likely negligible. If a finite increment is in the numerator, and you go to the infinitesimal limit, the term works out to be 0, even if you know very well that the term has to be finite, and may be important. This happened to Clerk Maxwell.

Neither the infinitesimals nor the infinities are acceptable.
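The limit behavior being complained about is easy to exhibit numerically. Here's a small sketch in Python; k1 and k2 are hypothetical stand-ins for natural law operators in a coupled finite-increment term.

# The two failure modes, numerically: an increment left in the numerator drives the
# term to 0; an increment left in the denominator drives it toward infinity.
k1, k2 = 3.0, 4.0

for dx in [1.0, 1e-3, 1e-6, 1e-9]:
    in_numerator   = k1 * k2 * dx     # -> 0 as dx -> 0 (a false infinitesimal, if the
                                      #    physics says the term should stay finite)
    in_denominator = k1 * k2 / dx     # -> infinity as dx -> 0 (a false infinity)
    print(f"dx = {dx:g}:  k1*k2*dx = {in_numerator:g}   k1*k2/dx = {in_denominator:g}")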

When we apply sensible rules (that the whole must equal the sum of its parts, and that quantities must be dimensionally consistent with themselves in loop tests), we find out that the arithmetic of natural law operators is a little more complicated than we'd thought. We can see results that fail loop tests, and learn to avoid them. Once we do that, false infinities and false infinitesimals cease to happen.

Then, physical modeling forms a clean interface to the calculus of pure mathematics. As you may recall, the calculus of pure mathematics was derived according to the following logic:

Arithmetic's arithmetic.

Do a little arithmetic, and before you know it, you're doing algebra.

Not much algebra happens before you're working with polynomials.

Try to connect polynomials to geometry, and almost before you know it, you're at the polynomial derivative formula. If you ask for the inverse (and if you keep playing, you'll have to) you'll have the polynomial integration formula.

Take the polynomial derivative and integral formulas, and add the algebra you've been working with before, and you can do most of the calculus that gets done in the real world. You can pretty much derive the rest.

Perfect as pure mathematics, but not necessarily a perfect match to a physical model. The physical model representation has to be validly set up so that it exactly satisfies the arithmetical rules the pure math has.

To get this EXACT ANALOGY in the equation representation of the physical circumstance you are modeling, you need to algebraically simplify coupled terms at a scale that makes sense in terms of the loop tests. That's easy.

Of the infinite number of scales you could choose for algebraic simplification, pick unit scale.

Or look at it differently in a way that gets the same answer.

Define a point form of the increments that is unity, as it mathematically has to be. Algebraically simplify in point form.

Either way gives the same answer. THEN you have arithmetic that IS an exact analog to the arithmetic of pure mathematics.
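The full procedure, and the argument for it, are in the appendix linked above. Here is only a generic algebraic illustration, in Python with sympy, of why simplifying before taking the limit matters; the symbols are hypothetical, and nothing here is claimed to be the whole method.

# Generic point, not the whole procedure: simplify the coupled term algebraically
# BEFORE taking the limit, and the spurious 0 and infinity disappear.
import sympy as sp

k1, k2, dx = sp.symbols("k1 k2 dx", positive=True)
term = (k1 * dx) * (k2 / dx)                 # increment appears in numerator and denominator

print(sp.limit(k1 * dx, dx, 0))              # 0: one factor alone, limited too early
print(sp.limit(k2 / dx, dx, 0))              # oo: the other factor alone
print(sp.limit(sp.simplify(term), dx, 0))    # k1*k2: simplify first; the limit is finite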

This trick could have saved J.C. Maxwell and some of his successors a lot of time.

I would have found it useful myself.

Bob

________________________________________________

________________________________________________

budrap - 05:09pm Jun 28, 1998 EST (#623 of 623)

Bob,

I have unfortunately, due to time constraints, been unable to keep up with your considerable output. What you have to say is most interesting and deserves serious consideration, not only by me but anyone concerned with the current state of modern science.

Rather than address your latest postings, which I have not yet had time to fully digest, I would like for the moment to refer back to something you said in #598 that I believe lies at the heart of calculus' limitations as a map to the physical territory it describes:

The slope of a line gets larger and larger as it approaches the vertical (approaches parallel with the y axis). In the vertical limit, that line's slope, dy/dx, is infinity. The infinity involved is bigger than any number you could specify by any means, but the infinity is a sensible mathematical entity, that you can focus with a limiting argument if you wish.

What is wrong here I think is that a qualitative difference is being reduced to a quantitative one, with the result being the purely imaginary and therefore nonsensical "infinity". What I mean by this is that the difference between any x-axis interval and no interval is primarily a qualitative one, something like a phase change, and attempting to treat it as quantitative in nature yields a meaningless and, worse, misleading result. The result of division by zero is not an unimaginably large number. Division by zero cannot have a quantitative result because zero is itself not a number but the absence of a number, or in this case the absence of an interval.

What I would like to suggest is an alternative concept to be introduced to both math and science wherein the result of a division by zero is understood to be not an "unimaginably large number" but rather "inherently indeterminate", this to be represented by an appropriate symbol such as |?|. The idea here is to introduce formally into both disciplines the fundamental notion that there will always be some things that we cannot know.

It seems to me that some such approach is going to be absolutely necessary if we are ever to come to terms with the "weirdness" of quantum physics, not to mention modern cosmology.

Bud

rshowalter - 02:04pm Jun 29, 1998 EST (#624 of 628) Robert Showalter showalte@macc.wisc.edu

budrap - (#623) makes a wise and fascinating suggestion, and I've been scratching my head about how to respond. The suggestion illustrates some problems the technical culture has come to, and some reasons why a really clear distinction between "map" and "territory", and between "abstract math" and "physical, measurable circumstance" has to be made. I find Bud's suggestion a perceptive response to many problems, but maybe not the best answer to the problem he sets out.

Here's a fact worth noticing. We get in trouble with obviously misleading "infinities," some of which we know must be finite, some of which we know must be small, when we REPRESENT a physical circumstance by a mathematical procedure that we map into pure math.

In the domain of pure math itself, the notion of infinity works fine.

In our measurements, we never measure an infinity, or anything close.

******.....It is at the interface, representing a measurable circumstance in an abstract mathematical mapping, that we get into trouble. ......********

We don't need to get rid of infinity in pure math, where it works and we couldn't get rid of it anyway. In pure math, we're dealing with "meaningless" or "abstract" symbols. Those symbols have no necessary relation to physical things but they work according to rules. We can talk about infinity in pure math without having that notion be "real." Bud, you talk about results that are "purely imaginary and therefore nonsensical." "Purely imaginary" is right. "Therefore nonsensical" isn't a good fit, talking about pure math. In pure math, it is the purely unconnected, or abstract, or "imaginary" that you are talking about. Nothing wrong about that, so long as you remember that math folks really mean it when they say that they play "meaningless games played with meaningless marks." You have to be careful what you map to those "meaningless games."

(If pure math ever seems "real" to you in the physical sense, you should sit down and try to get over it. We aren't good about keeping the difference between "maps" and "territories" straight when things get complicated. Sometimes, I'm afraid, pure math gets to seem real to physics people and their customers. That may seem satisfying, but it's unsafe. )

rshowalter - 02:06pm Jun 29, 1998 EST (#625 of 628) Robert Showalter showalte@macc.wisc.edu

Here's an example of something that makes perfect sense in pure math, but is not of this world. On the x-y plane, consider a line from 0,0 to 0,1.

Map every rational number onto the line from 1,0 to 1,1.

Map every irrational number onto the line from -1,0 to -1,1.

Any problem? No problem at all, except you have to remember that we're talking about a distinction (between the rationals and the irrational numbers) that does not correspond in any sensible, measurable way to ANYTHING in the real world. The irrational-rational number distinction is not a real world, measurable distinction. There's no contradiction involved, but the abstract domain and the measurable real world are fundamentally different. There are similar examples elsewhere in pure math, too.
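One way to see that the rational/irrational distinction isn't a measurable one: anything a digital instrument, or a ruler read to finite precision, hands you is already an exact rational number. A small Python aside; the "reading" is hypothetical.

# Any finite-precision reading is an exact rational.  The value a floating-point
# "measurement" actually stores is a ratio of two integers.
from fractions import Fraction

reading = 0.1                      # a hypothetical instrument reading
print(Fraction(reading))           # 3602879701896397/36028797018963968
# No measurement procedure lands on "an irrational number" as such; irrationality
# belongs to the abstract map, not to the territory we measure.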

Well, the notion of "infinity" in pure math makes no more measurable sense in the measurable world than "irrationality" or "rationality" applied to the number line. Still, the notion of infinity works fine in the abstract domain, and is indispensable. Here goes. We're talking "meaningless marks" here, according to rules.

There is no "largest number." Demonstration: For any number M, you can always multiply that number by a factor of N. And then you can do it again, and again, without limit. Whatever you may call "largest number" can be enlarged again.

There is no "smallest nonzero number.' Demonstration: For any number M, no matter how small, you can always divide by N. Whatever you may call "smallest number" can be divided again.

Reciprocals exist. For any nonzero N, N times 1/N = 1.
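
Those three demonstrations are easy to act out with exact arithmetic on a computer. Here's a minimal Python sketch, with arbitrary starting values, just to show the rules in action:

from fractions import Fraction

M = Fraction(10**100)      # a candidate "largest number"
print(M * 2 > M)           # True: it can always be enlarged again

m = Fraction(1, 10**100)   # a candidate "smallest nonzero number"
print(0 < m / 2 < m)       # True: it can always be divided again, and stays nonzero

N = Fraction(7, 3)
print(N * (1 / N) == 1)    # True: reciprocals exist for any nonzero N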

Now, Bud said something right as follows, but note that the difficulty is in MAPPING from the measurable to the abstract.

Here is a problem that (Bud believes) lies at the heart of calculus' limitations as a map to the physical territory it describes:

The slope of a line gets larger and larger as it approaches the vertical (approaches parallel with the y axis). In the vertical limit, that line's slope, dy/dx, is infinity. The infinity involved is bigger than any number you could specify by any means, but the infinity is a sensible mathematical entity, that you can focus with a limiting argument if you wish.

As an argument strictly in the abstract domain, this is fine. If one considers the x-y plane of a piece of paper as a symbol of the abstract domain, this is still fine. If "delta x" has a nonzero numerical value, no matter how small, the arithmetic makes sense, and the limiting notion of "infinity" makes corresponding abstract sense. But if you ask "is this real?" you have to be careful what you mean by "real."

Bud goes on to make statements that aren't right about pure math, but that are motivated by an entirely sensible body of correct observations. In pure math, the infinity makes sense. The result of division by zero, in this case, is an unimaginably large number, and we can see that by looking at smaller and smaller values of "delta x" for the same length line as it becomes "vertical" in the imaginary world of pure math.
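
To put plain numbers on that picture: hold the rise fixed at 1 and let the run, "delta x", shrink. The slope 1/delta x outgrows any bound you care to name, but for every nonzero delta x it is still an ordinary, finite number. A minimal Python sketch (the particular sequence of delta x values is arbitrary):

rise = 1.0
for k in range(1, 7):
    dx = 10.0 ** (-k)      # the run, shrinking toward 0
    slope = rise / dx      # grows without bound, but stays finite while dx != 0
    print(f"dx = {dx:.0e}   slope = {slope:.0e}")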

So long as we stay in the "purely imaginary" world of pure math, this is not nonsensical. But if we forget what is map and what is territory, we can start generating nonsense very quickly. If we make a mistake in our mapping (and our culture has been doing so) work can also become nonsensical quickly.

Bud then makes a suggestion that is extremely interesting and perceptive. I believe that the more you know of the difficulties in theoretical physics this century, the more perceptive and interesting the suggestion is.

There are two parts to what Bud suggests, and I've got reservations and additions to add about the first part. The second part, which Budrap states last, I find enormously attractive. In fact, I think this last part is worth a LOT of thought and FORMAL attention. Bud suggests a notation

"represented by an appropriate symbol such as |?|. The idea here is to introduce formally into both disciplines the fundamental notion that there will always be some things that we cannot know. "

Cannot know, or is it, sometimes, do not know? Often, you can't tell the difference. Bud's right that a big, fat |?| might make sense beside a goodly number of infinities. (Infinities that correspond to physical things are more than just a little suspect - especially if you're worried about conserved quantities like energy and momentum.) People always seem to be talking about things they haven't solved that are soluble "in principle". If they knew how to be specific about how things were "insoluble in principle" their principles might get clearer, and it might make for progress, maybe.

There are infinities, embedded in long calculations where procedures and map-territory distinctions may all have been muddy, that merit a |?| because they're likely to be part of a mistaken calculational sequence. There are also likely to be infinities embedded in relations (especially in QM) that aren't completely known yet. There are kinds of calculations QM people have been trying to do for decades (in orbital calculation, for instance) that haven't cracked. A person either has to expect that the people are inadequate, or that they have a problem with assumptions, knowledge, or tools. After work has gone on a while, I'm generally inclined to bet that the people are good, but that the tools and other background may not be.

The |?| symbol makes sense to me. (!.!.!.!) If people get to asking "What's the |?| about?" folks might figure some things out.

Budrap also suggests that we write off everything we're now calling an infinity, and question them all. Sometimes people already do something less honest, but a little like what Budrap suggests. Only they don't forthrightly label infinities with a |?|. They call the infinities 0, and chuck them, instead. (At least I've seen this happen in engineering.) People say:

"I don't like this unimaginably big number I just calculated, so I'll call it 0."

If that number, properly calculated, is different from zero, that can be a mistake. Whether it makes for problems quantitatively or not, it puts a big |?| on the logical structure, which is, after all, supposed to be logically coercive. (Stated another way, it would be nice if we could build on, and trust, our logical structures.)

It seems to me that the |?| on the logical structure is a big philosophical-moral-practical-operational issue. If people get used to having math arguments blow up on them, for no reason at all, and then if people get used to doing magical, indefensible things, you can't get them to check detailed arguments, and you've lost (I'm serious here) a lot of the unity of the technical-intellectual culture. I think problems with calculus have been damaging our culture, in this way, since about 1860.

!.!.!.!.!.!

If map-territory distinctions are accurate, and people know the natural laws they're dealing with, and know what natural law operators are, it should be possible to get rid of bogus infinities, and calculate valid numbers, instead.

Infinities happen now when a finite increment equation, modeled from physical circumstances, happens to have a spatial increment in the denominator. If natural law operators were just exactly like numbers, this next step, which yields an infinity, would seem right:

(WRONG, STANDARD PROCEDURE:) Take the limit of the expression. The spatial increment, in the denominator, is 0 in the limit, so the term is infinite in the limit.

That's not a permissible procedure, because THERE ARE ADDITIONAL RESTRICTIONS ON THE ARITHMETIC OF NATURAL LAW OPERATORS. PRODUCTS AND RATIOS OF NATURAL LAW OPERATOR(S) AND SPATIAL INCREMENTS HAVE TO BE DONE AT UNIT SCALE (OR AT POINT FORM). ONLY AFTER THIS ALGEBRAIC SIMPLIFICATION DO YOU HAVE A COMPOUND NATURAL LAW OPERATOR TO EVALUATE FURTHER IN THE EQUATION.

Correct procedure: Evaluate all compound expressions into valid compound natural law operators, algebraically simplifying at unit scale. THEN, when the term is in a one-to-one correspondence with the arithmetic properties of pure math, you can take the limit, and map the equation, symbol for symbol, into the domain of pure math.
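
The specific rules for natural law operators aren't in the sketch below - those are the claim Steve and I make, and they take more machinery than a few lines. But the general computational moral, that you simplify the crossterm algebraically before you take the limit, can be acted out with ordinary symbolic algebra. A minimal Python/SymPy sketch, where K and dx are purely illustrative stand-ins for a finite coefficient from the model and the spatial increment:

import sympy as sp

dx = sp.symbols('dx', positive=True)
K = sp.symbols('K', positive=True)   # illustrative stand-in for a finite coefficient

term = (K * dx) / dx                 # a crossterm with the spatial increment top and bottom

# Simplify algebraically FIRST, then take the limit: the result stays finite.
simplified = sp.simplify(term)       # -> K
print(sp.limit(simplified, dx, 0))   # K

# Take limits of the factors separately and you get the indeterminate 0 * oo,
# which is where the bogus "infinity" talk starts.
print(sp.limit(K * dx, dx, 0))       # 0
print(sp.limit(1 / dx, dx, 0))       # oo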

This ISN'T a full solution to the weirdness of QM and its applications. That weirdness is surely there. Who knows what might be required to sort everything out? But if we sort out mistakes that we CAN eliminate, that's progress. If we expand Maxwell's equation and figure out what that means, that's likely to be progress. By getting rid of many of the infinities buried in our calculations, so that the remaining |?|'s stand out more clearly, we'll have a better chance of sorting out what remains to be done.

_______________

Here's a fairly simple experiment. A term in the line conduction equation that electrical engineers have been chucking as an infinity is finite. It is capacitance along the line due to the dielectric properties of the conductor itself. As the equation is set up now, one constructs a parallel plate capacitor model, takes the limit as separation distance goes to 0, and "calculates" an infinite self capacitance (and a corresponding zero conduction velocity, which everybody knows is wrong). Since the infinity is wrong, EE's call this term zero, by default.

When you know the correct way to deal with natural law operators and spatial increments, the term is finite, and not always negligible. It isn't negligible when you're concerned with conduction of electrical waves, at low frequencies, through a lousy conductor like water. For fairly pure water, and frequencies in the kilohertz range, this capacitance accounts for conduction velocities about 10,000 times slower than the speed of light. That's a LOT slower than EE's are trained to expect.
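
For a rough sense of scale, even the ordinary textbook treatment of a resistive line with shunt capacitance (the standard RC "telegrapher" limit, not the extra self-capacitance term being argued for here) gives low-frequency phase velocities far below light speed: v = sqrt(2*omega/(R*C)). A minimal Python sketch with made-up, purely illustrative R and C values, not measurements of anything:

import math

def phase_velocity(f_hz, R_per_m, C_per_m):
    """Low-frequency phase velocity of a lossy RC line: v = sqrt(2*omega/(R*C))."""
    omega = 2.0 * math.pi * f_hz
    return math.sqrt(2.0 * omega / (R_per_m * C_per_m))

# Made-up, illustrative values for a poorly conducting line (NOT measured data):
R = 1.0e5    # series resistance, ohms per meter
C = 1.0e-9   # shunt capacitance, farads per meter
c = 3.0e8    # speed of light, meters per second
for f in (1e3, 1e4, 1e5):
    v = phase_velocity(f, R, C)
    print(f"{f:8.0f} Hz   v = {v:10.3e} m/s   c/v = {c / v:8.0f}")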

When I did the test, I used a glass tube (1/4" to 2" doesn't make a difference within my resolution) about a meter long or so, full of water. I used AgCl electrodes (old, beat-up pre-1934 quarters, Cloroxed). I put a sine wave in one end, and measured the attenuated wave at the other end with a very high impedance pickup. I used a good dual channel scope. I looked at the travel time.

The test is fairly easy. I found that, for people who understand what is being done, it is a fine relaxer of jaw muscles. Alas, it gives other observers neither pleasure nor distress.

I had a H*ll of a time getting anybody to look at this test and setup, though it did make for a few believers. That was years ago.

Water molecules flop around, and that shifts the curve a bit. I measured speeds about a factor of 10,000 down from light speed, which was the speed people expected. Measurements were about what I calculated, and I was fighting other battles, so I didn't get it perfect.

A carefully done test, with some conductive solid, like boron, would be better. If you can get rid of stray capacitances, and convince people that you've done that, tests on such a solid should be a fairly direct test of the Showalter-Kline math theory.

It would be great to get people to look at this sort of thing seriously.

budrap - 10:37pm Jun 29, 1998 EST (#629 of 630)

Bob,

You stated in #626:

The slope of a line gets larger and larger as it approaches the vertical (approaches parallel with the y axis). In the vertical limit, that line's slope, dy/dx, is infinity. The infinity involved is bigger than any number you could specify by any means, but the infinity is a sensible mathematical entity, that you can focus with a limiting argument if you wish.

As an argument strictly in the abstract domain, this is fine.

My argument is precisely that this is as wrong mathematically as it is in the physical domain. Why? Because zero is not a number - it represents the absence of a number - it represents nothing. To ask how many nothings there are in something (which is what division by zero amounts to) is to pose an inherently meaningless question that is logically equivalent to "How many apples are there in a banana?" or "How many horses in a cow?"

Now you can certainly argue that it is possible to add zero to a number and get an intuitively meaningful answer:

How much is something plus nothing? The original something seems a logical and reasonable answer. So x + 0 = x

How much is something minus nothing? Again the original number or quantity seems to be intuitively correct. And, x - 0 = x

And if you aggregate zero instances of some number or quantity? Zero or nothing seems correct here. So, x(0) = 0

But when you get to division by zero all logic flies out the window. How many zeros are there in 1? An infinite number. And how many in 100? An infinite number also. Of course that means that zero times infinity = 1 and also = 100 and 1000 and any other number you can think of. There is no sound logical footing here on which to base any rational system of thought. Infinity is the Achilles heel of both modern science and modern mathematics.
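
For what it's worth, computer arithmetic has to take a stand on exactly these cases, and it mostly either refuses the question or hands back a bookkeeping marker rather than a magnitude. A minimal Python sketch of the standard IEEE-754 behavior (nothing here is special to this discussion):

import math

# Exact division by zero is simply refused:
try:
    1 / 0
except ZeroDivisionError as err:
    print("refused:", err)

# IEEE floating point instead carries special markers, not magnitudes:
inf = math.inf
print(inf > 1e308)   # True: the marker compares above every ordinary float
print(inf * 0)       # nan: "zero times infinity" is deliberately left undefined
print(inf - inf)     # nan again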

I fully comprehend the appeal, the seductiveness even, of the limiting argument, but given the logical results I think it would be advisable to jettison the infinite baggage we are now burdened with and seek elsewhere for an answer to the problem of division by zero. |?| may not be the answer after all, but I like it because it fulfills a need for an expression that captures the inherent limitation of all knowledge, which seems to me to be a built-in feature of existence.

Bud

pbrower2 - 11:14am Jun 30, 1998 EST (#630 of 630)

The problem with black holes is that much about them is in fact infinite due to the 'division by zero' stuff which they imply.

rshowalter - 05:19pm Jun 30, 1998 EST (#632 of 633) Robert Showalter showalte@macc.wisc.edu

Budrap, you're making me think, and think hard. What you say makes sense. The transition between "0" the number and "0" the logical operator sure can happen quickly and invisibly, can't it? That's interesting, and troublesome, too. You make great points, and I'll be back to you.

dewaite - 11:14am Jul 1, 1998 EST (#633 of 633)

Bob and Bud -

I have been following your discussion with interest. At least, I have been following it as best I can. You guys are both possessed of an immense amount of technical knowledge that I will never have.

In any event, your discussion reminded me of the following, from my post #58 in this forum:

The comments of bmayo 14582 about time bring to my mind a passing thought I have had on occasion: Is there any truly objective reality that physics has not already considered? Or is all of this discussion of relativity, singularities (clothed and naked) and quarks really just a "word game" that is being played with theoretical constructs that are being created to describe features of the universe that simply defy our understanding?

It also reminded me of the following, which is from a post by avoice, #81:

The whole concept of infinity as it deals with things is odd to me. I am not a physicist nor mathematician but I can talk to these people. If you are talking about very high numbers, vast magnitudes, I can see no difficulty in ascribing them to objects, but if we are talking about infinity what is it that we are identifying? I think it is a mistake of physicists to take abstract mathematical concepts like infinity and rely on them to describe concrete objects or sets of objects. How would you in principle test whether something had an infinite mass anyway? What criteria for measuring would you use?

Am I correct in thinking that you guys are agreeing with us?

- Dave

rshowalter - 03:42pm Jul 1, 1998 EST (#634 of 643) Robert Showalter showalte@macc.wisc.edu

budrap (#629), you're right. Infinity should be used as a marker when it is used at all, and should never, never, never be thought of as a magnitude. Comparing infinities is a confusion between quality and quantity. Anything worth modeling is worth modeling with arithmetic. So long as one approximates 0 with a number (be it ever so small) and remains within the usages of arithmetic, all is well.

Infinity is extra-arithmetical.

Division by zero is undefined. (Maybe it is better to say that the construction that represents division doesn't exist.)

Division by zero can always be avoided, and for any specific modeling, to an arbitrarily good, logically solid approximation. A "zero approximation" number can be as small as you choose, and once you use it, your logic stays straight.
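
To make that concrete, here's a minimal Python sketch of the "zero approximation" habit: a formula that divides by x gets evaluated at a small nonzero stand-in instead of at 0, every step is ordinary finite arithmetic, and the answers settle down as closely as you please. The function and the epsilon values are just illustrative choices.

import math

def f(x):
    # A formula whose denominator would be 0 at x = 0.
    return math.sin(x) / x

# Stand in for 0 with a small nonzero epsilon; make it as small as you like.
for eps in (1e-2, 1e-4, 1e-6, 1e-8):
    print(f"eps = {eps:.0e}   f(eps) = {f(eps):.12f}")

# The printed values settle toward 1, and no step ever divides by zero.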

Even so, infinity (maybe with a |?| attached) has its uses. A lot of calculus is symbol manipulation with symbols that stand for quantities, but are not quantitative in themselves. Infinity may be seen as such a symbol. But the symbol for infinity doesn't stand for a quantity, it stands for some sort of unfinished or unspecified arithmetic.

0 IS a perfectly respectable number. For any N, N - N = 0. Addition of 0, subtraction of 0, and multiplication by 0 are all sensible, as you point out.

Limiting arguments in calculus make sense, but for cases that represent physical circumstances, infinitesimals (quantities whose limit is 0) are much more prevalent than infinities. And these 0's can make sense.

For physical representations, infinities should mean "check this again!"

I LIKE your notion of a |?| symbol. There's plenty of ignorance around, some avoidable with more work, some not. It would be a fine thing to try to label it, to help keep track of both kinds, especially since, so often, we can't tell which is which.

_______________

Dave (#633) your comments are wonderful, and centering. You ARE correct that I'm agreeing with you and those you cite. I'd guess Bud may feel the same.

A wonderful thing about these forums is that they contact the Zeitgeist, the common body of thoughts and ideas that interest us in common, and concern us in common. George Johnson fingers muddles and contradictions in science that interest people. THE NEW YORK TIMES publishes George's essays, and by doing so makes clarity more likely for the whole culture. Then, in these forums, we talk. Sometimes, as the talk goes on, notions focus. I think that's happened here.

Let me connect to an admonition Johnson wrote about some jobs that are central, and worth doing, and interesting, too. George said:

" Scientists must constantly remind themselves that the map is not the territory, that the models might not be capturing the essence of the problem, and that the assumptions built into a simulation might be wrong. "

"Proteins Outthink Computers in Giving Shape to Life," NEW SCIENTIST, March 27, 1997.

We've been talking about times when George's admonition is harder than people have noticed.

Here's a central point, humanizing, yet sometimes hard to imagine. In your head, you have representations. You have "maps", not "territories". No matter how hard this is to remember, your brain is not "wider than the sky" as it is in Emily Dickinson's poem, even though it may seem to be. Your brain may MAP things wider than the sky, but that's a very different thing. No matter how real your feelings may be to you, the universe is not inside your head. Only representations can be. And you can be confused about how those representations fit together. Sometimes, for some things, representations in your head can be objectively wrong. We aren't well built, as animals, to remember this. But we need to.

Our minds are limited. Other people, including people who are heroes to us, come similarly equipped. Heroes in the past, who may seem magical to us, were the same. Here's a central mathematical, scientific, and philosophical question: How can people think and keep track of the world, and make solid, testable conclusions about it, WITHIN OUR LIMITATIONS AS HUMAN BEINGS? How can we use our minds in ways we can trust, and see when we've reached the limits of the knowable?

Such questions aren't answerable in general, but in specific cases we can know some specific, useful things.

The first thing, of course, is to see that there ARE limits to the knowable. There may or may not be limits to the imaginable. But if we're talking about "knowing" we've got limits, and so do all other human beings. A lot of clergymen are clear about this. A lot of scientists are, too. Plenty of things you might wish to know are, in your circumstances, unknowable.

You CAN make solid inferences when you have the tools for making the solid inferences. And if you can TEST your logical tools, you can have reason to trust what they tell you. Unless you can TEST the tools you infer with, you can't trust them. In a research lab shop, I saw this, and liked it:

"It has been truly said that you can't make what you can't measure. How would you know if you'd made it or not?"

That goes for logic, too.

rshowalter - 04:41pm Jul 2, 1998 EST (#636 of 643) Robert Showalter showalte@macc.wisc.edu

In the essays that anchor these forums, Johnson has been very clear about circumstances where, in Johnson's view, people have gone beyond what they actually know, and gotten into muddles.

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

With help from Bud, I've been dealing with a particular kind of muddle. How do you GET TO the realm of "Pi in the Sky," the realm of pure abstract mathematics, FROM the measurable world?

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

If that step's magical, and we don't even have systematic ways of testing the way we take that step, then there are sure-enough DEEP QUESTIONS about some foundations of science.

Steve Kline and I haven't taken any of the magic away from the long jump from the concrete to the abstract. But we HAVE come up with a systematic way of TESTING how we make that step from the concrete to the abstract. And we've found an inconsistency in our mapping procedures, and figured out how to fix it.

We don't know "where" the abstract world is. We don't know "what and where" the measurable world is, in all the ways we'd like to. We know this. The MAPPING TOOLS we use to represent the measurable world in abstract math HAVE TO BE CONSISTENT WITH THEMSELVES.

rshowalter - 04:44pm Jul 2, 1998 EST (#637 of 643) Robert Showalter showalte@macc.wisc.edu

Dave, In your post #58, you ask:

................................"Is there any truly objective reality that physics has not already considered? Or is all of this discussion of relativity, singularities (clothed and naked) and quarks really just a "word game" that is being played with theoretical constructs that are being created to describe features of the universe that simply defy our understanding?"

Let me break it down, and try to answer:

"Is there any truly objective reality that physics has not already considered?

I'd say that there MUST ALMOST CERTAINLY BE a "truly objective reality" that we can try to represent. (Too much order otherwise. Either that, or our luck's been wonderful.) I think a number of physicists, Bridgman first among them, have thought carefully about this "measurable reality". We can only test it, and try to map it. We can make representations, and "trust" them until we find they're wrong.

Just because I think a notion of "reality" is a tenable statistical inference, and believe in it, doesn't mean everybody does. There are quantum mechanics people who say "reality does not exist." So far as I can tell, their logic goes as follows:

"We're having trouble matching up our logic and measurements, and these are basic particles we're having trouble with. We've had to chuck basic notions of reality to get a set of rules that work for the fairly few systems we can calculate. But we really know that our logic is the best it can be, since we did it. So reality must not exist."

Maybe so, but I don't know any engineers who buy that. Maybe because I worry much more about classical physics, I don't happen to buy it. I've pegged a mistake in the math that caused the "failure of classical physics" that lets me keep on believing in reality for a while, yet, anyway.

Dave, you go on:

"Or is all of this discussion of relativity, singularities (clothed and naked) and quarks really just a "word game" that is being played with theoretical constructs that are being created to describe features of the universe that simply defy our understanding? "

If "just a representation" is "just a word game" to you, I think that is ALL that people can ever do. That's not necessarily so bad. Still, some "word games" or "representations" are better than others. We can test for CONSISTENCY, and ask for SELF CONSISTENCY and CONSISTENCY WITH WHAT WE KNOW. Taking your points in order:

I don't see anything wrong with relativity.

I think the odds are good that the "singularities" are mistakes - misrepresentations of physical circumstance.

Quarks describe a pattern, and if something else turns out to be better, it will be much the same.

Dave, you cited avoice, #81:

The whole concept of infinity as it deals with things is odd to me. I am not a physicist nor mathematician but I can talk to these people. If you are talking about very high numbers, vast magnitudes, I can see no difficulty in ascribing them to objects, but if we are talking about infinity what is it that we are identifying?

That sounds just right to me. Avoice goes on:

"I think it is a mistake for physicists to take abstract mathematical concepts like infinity and rely on them to describe concrete objects or sets of objects. How would you in principle test whether something had an infinite mass anyway? What criteria for measuring would you use? "

Right on! Dave, I think you and avoice have it right. Let's talk about the "mistake" you refer to just above. You're describing a map-territory error. One where people haven't been paying enough attention, and at an interface that has something treacherous about it.

First off, avoice is right that nobody tests an infinity, no way, no how. Not even "in principle." That makes the infinity "unreal" by Bridgman's much-respected standard.

Avoice has also pegged a central problem with these infinities - the connection between map and territory is casual, ad hoc, and muddled. Not only that, it gets people into trouble.

Before now, the bridge from pure math to the measurable world has rested on raw faith, plus tests against measurable data when those tests could be done. (Lots of times, those tests can't be done, especially if you're talking about a nuclear particle, or some star light-years away.)

Pure math (like chess) is something you could construct from a set of rules and assumptions, and define from those rules, and it works in its own terms. That was and is fine, but pure math has no obvious or logically traceable connection to the measurable world at all. Even so, people have to use pure math to represent measurable goings-on.

At the interface between pure math, and measurement, we've had troubles with calculus, and have had infinities that didn't make sense, and infinitesimals that didn't make sense. The problem goes back so far that, when it started, real estate values in Manhattan were low.

If you ask "how can you go beyond your axioms" here's an answer. You can ask your systems to be CONSISTENT WITH THEMSELVES. You can do loop tests, to test your mathematical procedures, just like surveyors have been doing loop tests to "prove" their surveys, and just like instrument people have been doing to check the linearity and consistency of their instruments.

When you do those loop tests, you learn some things.

You conclude that there have to be entities at the interface between the measurable world and abstract math, called natural law operators. That's an experimental conclusion, tested by experimental math - computational loop tests.

You conclude that the arithmetic of finite increment equations including natural law operators has a special restriction - crossterms have to be algebraically simplified at unit scale (point form). That's an experimental conclusion, tested by experimental math - computational loop tests.

Once you learn these new things, old problems with infinitesimals and infinities are solved.

With this new knowledge, we CAN make solid inferences in an area that's been ad hoc and treacherous before. With this new knowledge, we have the tools to make solid inferences we couldn't make before. We can TEST these logical tools in terms of SELF CONSISTENCY, with loop tests.
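
The surveyor's habit is easy to imitate numerically: run a chain of operations around a closed loop and check that you come back to where you started, within tolerance. The Python sketch below shows only the generic idea of a computational loop test (a chain of plane rotations whose angles sum to zero), not the particular loop tests Steve and I ran on mapping procedures:

import math
import random

def rotate(x, y, theta):
    """Rotate the point (x, y) by theta radians about the origin."""
    c, s = math.cos(theta), math.sin(theta)
    return c * x - s * y, s * x + c * y

# A closed loop: these angles sum to zero, so the chain should bring
# every starting point back to itself, up to roundoff.
angles = [0.3, 1.1, -0.7, -0.3, -1.1, 0.7]
tol = 1e-9

random.seed(0)
for _ in range(5):
    x0, y0 = random.uniform(-10, 10), random.uniform(-10, 10)
    x, y = x0, y0
    for a in angles:
        x, y = rotate(x, y, a)
    error = math.hypot(x - x0, y - y0)
    print(f"loop closure error = {error:.2e}   closes = {error < tol}")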

And Dave, we find out that we agree with what you and Avoice said a long time ago. Sometimes, I think, intuition can be amazingly good. I think yours is, and Avoice's is.

Bob

dewaite - 09:31am Jul 3, 1998 EST (#639 of 643) The tao that can be spoken is not the eternal Tao.

Bob -

Thanks.

- Dave

rshowalter - 10:24am Jul 3, 1998 EST (#640 of 643) Robert Showalter showalte@macc.wisc.edu

Dave, I missed a part. You asked in your post #58

" ................................"Is there any truly objective reality that physics has not already considered?"

I didn't answer that part of your question. I'm slow, but finally I noticed that I hadn't. I've been thinking about your words. The semantics of the phrases "truly objective reality" and "already considered" are treacherous. I can't be exactly sure of all the things those words might mean. Let me take a shot at responding to what I think you meant to ask about. To me, your question relates pretty closely to the questions

"Is Pi in the Sky"

or

"Is MATH objectively real, and objectively everywhere?"

It seems to me that math has to be "objectively real" in a useful sense - I feel, like another contributor did, that "math is a separate universe that we explore."

If that's too high-flown, I also think that "the game of Chess is a separate small universe, that we explore." How does the location-word "where" fit with "Chess"? Is Chess "everywhere"? Is it "anywhere"? I don't feel the "wheres" fit well, but still, Chess seems real, and testable in its own terms. Because it is testable, it seems provably real. Does it have to exist in space? In the usual senses of "exist?" Am I being muddled here? Sure. Still, in a sense that seems real to me, Chess is real. It is a set of rules, a geometry, and some systems of operations.

Pure math seems to me to be similar. Pure math seems real, and testable in its own terms. Because it is testable, it seems provably real. Does pure math have to exist in space? In the usual senses of "exist?" Maybe not. Anyway, in a sense that seems real to me, pure math is real. Pure math is a set of rules, including or generating a geometry, and a body of operations and precedents. To me, that means math is real, but not material like a plum pudding.

Is pure math anywhere, nowhere, or everywhere? Is it in the sky? We know that an enormous amount of physical process, in the sky and far, far away, FITS patterns that we map as pure math. For me, that means that pure math is "in the sky" in useful senses. But not all senses. Pure math is an imaginary world, that we may explore, but that is, nonetheless, without direct contact to ANY physical thing.

For us, pure math "has to be there," subjectively and by statistical logic, because it connects to rules, and because it is useful for representation in the way that it is. Still, we've had difficulty at the interface between math and the measurable, and we've talked about that.

Now Dave, you asked in your post #58

" ................................"Is there any truly objective reality that physics has not already considered?"

rshowalter - 10:25am Jul 3, 1998 EST (#641 of 643) Robert Showalter showalte@macc.wisc.edu

If Pure math is "a truly objective reality" then there is another "reality" that is "truly objective" in the same sense, that physics has not sufficiently explored. It is the mathematical world of dimensional numbers, especially the measurable domains, including detailed measurement procedures and natural law operators. George Hart has said a lot about this "reality" as it relates to linear algebra (See Multidimensional Analysis Springer-Verlag 1995), and Steve and I have talked about it with respect to mapping from measurements to differential equations, especially when the models involve coupling.

James Clerk Maxwell struggled with this world. He never seems to have been clear that there were two different domains that he was talking about, and he had a frustrating time. P.W. Bridgman thought a lot about this "domain of the measurable." The domain of the measurable was vivid for Bridgman, and he made it vivid for a lot of other people, including engineers today. Steve Kline wrote an influential book, SIMILITUDE AND APPROXIMATION THEORY (McGraw Hill 1966, Springer-Verlag 1984) about doings in the domain of the measurable. (In the first paragraph, Steve ASSUMES the differential equations are right.) After Steve wrote that book, he and I talked about some problems that remained in our exploration of that domain, and some difficulties with that domain, too. But this domain of the dimensional and the measurable, which is, like pure mathematics, a separate universe to be explored, has been much less considered than the domain of pure math. To deal with it, you need to use experimental math. Axioms can't help you. We've done some of that experimental math.

I believe that to represent a physical model in pure math,

you FIRST have to represent it as one or more equations in the domain of the measurable,

THEN you have to algebraically simplify these equations (using the rules for natural law operators) into a form that is arithmetically isomorphic to pure math (isomorphic to the domain of the algebra) and

THEN you can map the equations of the model, symbol-for-symbol into the domain of pure math, without misrepresentations.

I believe that, once the rules for the natural law operators are known, this should be trouble-free. So much works so well that my guess is that the problem Steve and I have found and fixed is the only one left. We were surprised to find just the one problem, and so have other people been.

The fix for the problem is easy: you algebraically simplify crossterms of finite increment equations at unit scale (in point form).
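
As a bare-bones worked illustration of that order of operations (a generic crossterm, not any particular physical law, with K standing in for a finite coefficient supplied by the model and Delta x for the spatial increment):

\[
\frac{K\,\Delta x}{\Delta x} = K \quad\text{(simplify first)}, \qquad
\lim_{\Delta x \to 0} K = K \quad\text{(finite)},
\]
whereas taking the limits of the factors separately gives the indeterminate form
\[
\lim_{\Delta x \to 0}(K\,\Delta x)\cdot \lim_{\Delta x \to 0}\frac{1}{\Delta x} = 0 \cdot \infty,
\]
which is where a spurious infinity can enter.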

Anyway, this domain of the dimensional and the measurable may really be a "truly objective reality" that physics has not already considered, or not considered carefully enough. I think it is sensible to look at it that way.

Dave, thanks!

Bob

dewaite - 11:59am Jul 7, 1998 EST (#642 of 643) The tao that can be spoken is not the eternal Tao.

Bob -

By "truly objective reality" I meant a rock. Like the kind you can kick as you say, "I refute him thus!"

- Dave

rshowalter - 01:31pm Jul 7, 1998 EST (#643 of 643) Robert Showalter showalte@macc.wisc.edu

Fair enough. But in that case, there is a great deal on which physics and other science depends that is NOT "truly objective reality," including every mathematical aspect of science.

You may want to define your "truly objective reality" in that way.

But if you do so, and you care to describe in detail what you see, you have to use "non-objective" things such as pure math and measurable domains as well.

I said that math and the domain of the measurable "seemed real to me". I didn't say I could take a kick at math, and stub my toe.

(It may interest some, who object that we can't find God with a telescope, that we can't find mathematics with a telescope either.)

Bob