An error has been made at the interface between the measurable world and our culture's equation representations of it. Our culture's limiting arguments have been applied to invalid terms, and as a result those terms have been mislabeled as 0's or as infinities.

               This disclosure asks for CHECKING of a mathematical physics argument on which much depends, under circumstances where peer review is problematic.    If we are right, medicine and science are in gross error in some of their models of brain function.    Other serious scientific errors seem likely.    Attention of a wide audience seems justified.    The first part of this disclosure summarizes our work, and problems we have had getting it checked.    We then set out papers, made available by FTP, that we are asking to be checked.    These papers and the material in them have been significantly checked already, and some background on that checking is set out.    The papers have been checked to a limited degree by NATURE, which has objected to them on grounds of format but supported them on matters of content.    We then set out our mathematical physics modeling in enough detail so that it may be understood and checked in one piece.    We ask that this work be checked.    We have made a basic discovery, which may be described in PHYSICAL or in FORMAL terms.


There can be PHYSICAL interactions between several kinds of physical laws that occur over a length, over an area, over a volume, or over a period of time.    We must be able to represent these interactions correctly in differential equations, because differential equations are our culture's fundamental medium of scientific representation.      Differential equations are defined at points. Lengths, areas, volumes, and finite times all disappear at points in space-time.   Currently, these inherently spatial interaction effects are wrongly thought to become either 0 or infinite at points, and so they are either wrongly dismissed as 0's or wrongly labelled as infinities.    We show that these inherently spatial interactions yield real finite effects, and we show how to represent these effects in differential equations.


We show, in symbols and by numerical example, that our culture's conventional limiting arguments, which are essential parts of our mathematical modeling procedures, are now degenerate because they are applied to terms that are not defined.   The terms are not defined because they include groups (of dimensional parameters(1) and spatial increments) that are not defined.  Terms that include these groups must be properly interpreted and defined in terms of dimensional theory before the conventional limiting process can be validly applied.    When this is done, these terms, which are now dismissed as infinitesimals or labeled as infinities, are finite.
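The conventional limiting behavior that this disclosure questions can be made concrete with a small numerical sketch.    The values of R and C below are assumed purely for illustration; the sketch shows only the conventional arithmetic, in which the group R*C*dx shrinks without bound as dx goes to 0, which is why such terms are conventionally discarded as "infinitesimal."

```python
# A minimal sketch of the conventional limiting argument (illustrative
# values only). R and C are per-unit-length line parameters; dx is the
# spatial increment of a finite increment equation.
R = 1.0e6   # resistance per unit length, ohm/m (assumed for illustration)
C = 1.0e-9  # capacitance per unit length, F/m (assumed for illustration)

for dx in [1.0, 1e-3, 1e-6, 1e-9]:
    group = R * C * dx
    print(f"dx = {dx:g} m  ->  R*C*dx = {group:g}")

# Conventional conclusion: the group vanishes in the limit dx -> 0, so the
# term it multiplies is dismissed. The disclosure's claim is that this
# arithmetic is not defined for such groups of dimensional parameters, so
# the limit is being applied invalidly.
```

This sketch reproduces the conventional reasoning only; it does not implement the authors' proposed "point form" alternative, which is described later in the document.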

More specifically:

            When we derive DIFFERENTIAL EQUATIONS from PHYSICAL models, we start by setting up FINITE INCREMENT EQUATIONS that represent the model completely (or, if the equations involve some recursive relation, that represent the model to some specified degree of completeness).

               Before we can use these FINITE INCREMENT EQUATIONS we must know what they mean, term for term.   We must be able to represent EVERY TERM by a specific measurement procedure. (EVERY TERM must make sense in terms of measurement theory.)

                ONLY THEN can we take the limit of the FINITE INCREMENT EQUATION and reduce it to a VALID DIFFERENTIAL EQUATION.
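The three steps above can be illustrated generically.    The voltage profile v below is an assumed illustration, not one of the papers' models; the point is only that a finite increment ratio, which is measurable in principle, converges under the conventional limiting process to the derivative that appears in the differential equation.

```python
# Generic illustration of the three steps: a finite increment expression
# is reduced to a derivative by the conventional limiting process. The
# ratio (v(x+dx) - 2*v(x) + v(x-dx)) / dx**2 converges to the second
# derivative v''(x) as dx shrinks. The profile v(x) = sin(x) is assumed
# purely for illustration.
import math

def v(x):
    return math.sin(x)  # illustrative voltage profile (an assumption)

x = 1.0
exact = -math.sin(x)  # v''(x) for v = sin
for dx in [0.1, 0.01, 0.001]:
    finite = (v(x + dx) - 2*v(x) + v(x - dx)) / dx**2
    print(f"dx = {dx}: finite-increment value = {finite:.6f}, exact = {exact:.6f}")
```

The disclosure's contention is not that this limiting process is wrong here, but that it is applied to some terms (the "crossterms" discussed later) whose arithmetic is not defined before the limit is taken.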

We are asking to be checked.    There are two appendices.    Appendix 1 shows the rejection-limited endorsement letter written to us by NATURE.    Appendix 2 is a short description of our backgrounds.

           A mistake has been made, just at the interface between the measurable world and our culture's mathematical representations of it.    The mistake is now embedded in the reflexes of science and engineering.   Usually the mistake does no harm, but in neural science and some other places the mistake makes for enormous misunderstandings.   There are number-like entities, called dimensional parameters, that are our culture's interface between the measurable and our culture's equation representations of it.    The dimensional parameters are not just numbers, but have limitations on the arithmetic that can be validly done with them.    Terms occur in our culture's derivations that have been thought to be defined as written, but that involve undefined arithmetic.    Now, our culture dismisses such terms, or calls them infinite, on the basis of limiting processes that use this undefined arithmetic.    These terms must be defined so that they are consistent with dimensional theory before the limiting process is applied. When this is done, these terms are finite.

          The idea that "the dimensional parameters are not just numbers" may seem colorless and abstract.    But big human stakes rest on the issue.    Our culture's models of signal transmission in the axons and dendrites of the brain depend on an equation of neural transmission, the Kelvin-Rall equation, that grossly misrepresents neural behavior(2) and that can understate effective line inductance by 10^18 : 1 or more

                   (  1,000,000,000,000,000,000 : 1 or more. )

The mistake has made it impossible for our culture to understand some basic things about the brain, despite much effort.    This mistake has resulted in misfocused medical research and suboptimal medical procedures.    Places where inductive effects matter particularly include ventricular fibrillation, the most common immediate cause of death in the industrialized countries, and epilepsy. These disease states are not well understood now, and cannot be understood, until the gross error in the Kelvin-Rall transmission equation is corrected.    The idea that "the dimensional parameters are not just numbers" therefore touches issues of injury, life, and death on a large scale.    The proper use of the dimensional parameters matters in other technical fields, as well.

        The work set out here is closely connected to issues of simulation discussed in REALITY BYTES.   Science and engineering both depend on valid simulations.    The following "BIG QUESTION" must be implicitly asked and answered many times for scientific reasoning to go on.


Here are the definitions of "concrete" and "abstract" used above:


           A concrete circumstance, or concrete model, refers to embodiments that can be seen or imagined, and that can always be put into some clear relation to our senses. Architectural specifications and engineering specifications are concrete specifications of concrete circumstances, and usually include drawings, descriptions that require language, and equations. The equations may be as simple as numerical equalities written in an implicit form, but may be complicated.

          Abstraction, which is separation from embodiment, strips away all connection to our senses, and strips away all intuitions we may have that depend on our senses. Abstraction is extrasensory.    One association with the word "extrasensory" is "magical". Another association with the word "extrasensory" is "unnatural."    No matter how carefully it is done, abstraction is "magical" in this restricted sense.    No matter how carefully it is explained, people new to an abstraction are likely to find that abstraction somehow uncomfortable, because it seems unnatural.    For reasons like these, "abstraction" and "faith" are associated notions, for good or ill.    The intrinsically extrasensory usages of abstract reasoning have been a fruitful part of science for centuries, and must continue to be.

Here are "magical, faith-based answers" to the BIG QUESTIONS OF SIMULATION:

The "magical, faith-based answers" to the BIG QUESTIONS OF SIMULATION set out above are the answers that our culture most often teaches and uses.    These faith-based answers are very often justified by experiment.    But not all our experiments do what our analysis leads us to expect.    So we may need more careful and more detailed answers to the BIG QUESTIONS OF SIMULATION.    We cannot eliminate all need for faith, but we can work for simulations that check what can be checked.    Particularly, we may need to learn how to carefully derive the differential equations that apply to a model, rather than take them on faith.

           George Johnson's FIRE IN THE MIND explores the following question, and finds reason for us to be uneasy about the answers:

On the basis of faith-based answers to simulation, one cannot take a step-by-step approach to answering this kind of question for a particular case.    The faith part is easy to establish, but the evidential validity of that faith cannot be examined on faith-based terms.

            Much of Johnson's work asks the following question, and his forum REALITY BYTES is devoted to it:

On the basis of faith-based answers to simulation, one cannot take a complete step-by-step approach to answering this question with respect to particular cases.    The participants in REALITY BYTES are plainly unclear about what the relationship between model and reality really is, and are not sure what it should be.   It is worth reading the whole of REALITY BYTES in order to see the depth, breadth, and seriousness of the muddle.

             Based on much experience, George Johnson states a belief, which we share, about finding and validating the truth in science.

Johnson admonishes scientists to check their work.    Still, as a practical matter, he makes an impossible request for business-as-usual science.    So long as our simulation procedures are the faith-based ones, we as members of our culture do not have well defined maps.    We as members of our culture cannot be sure about what we mean by our territories.    Although we can sometimes compare our results to experiments, our procedures, definitions, and results may be fuzzy in undefined and unsuspected ways.    We as a pure scientific culture, committed to abstraction, are unclear about what is a "fit" between theory and experiment, and what is a "mismatch."

            Scientists and engineers have become so accustomed to abstract math that they are insensitive to limitations that it has because it is abstract.    Notions that are much used become familiar.    After notions become familiar, they somehow seem concrete to us.    This can be radically misleading.    Here is a point that may be "obvious" at one level, but that is typically forgotten:


             Let's consider what differential equations are, and what human animals can know about them.    A differential equation is defined at a point in space and time.    A point is an extrasensory notion.    Lines and space-filling geometries are degenerate at points.      No measurements can be made at true points.

             Using the formality of the calculus, finite circumstances are modeled by integrating differential equations, defined at points, over intervals of space and time.    The integration procedure, like the definition of the derivative, is nonintuitive and extrasensory.

             People become accustomed to derivatives and differential equations.    Sometimes they seem to believe that science IS a system of differential equations.    Richard Feynman showed a table listing Newton's laws and Maxwell's equations in a starkly notated differential equation form.    He labelled it CLASSICAL PHYSICS (and clearly MEANT to say "all of classical physics")(3).    People forget, and are taught to forget, how strange derivatives and differential equations really are by sensory standards.

            No one can really visualize a point, much less a derivative.   A derivative is defined at a point. Derivatives are defined in terms of notions that do not seem to make any sense when applied to points.

Derivatives are sketched (in images that always have area) as related rates of change of some quantity that may be impossible to think of at a point.

A confused student, baffled by these contrasensory notions, is in an important sense safer and wiser than the professor or scientist who claims full and casual mastery of the calculus apparatus, not because she has mastered these difficulties, but because she has forgotten them.    At least the student will be clear that derivatives are abstract.    The professor may have forgotten even this.

            Derivatives are abstract.    Differential equations, and integrations of differential equations, are abstract.    If we are to do map-territory matching on our simulations, as Johnson so reasonably suggests we should, we must think about stages of simulation that happen BEFORE the system gets expressed in the abstract form of differential equations.    We must also think some about how we convert any abstract manipulations we may do back into concrete statements that we may apply to concrete models.    We must go beyond the "magical, faith-based answers" that are now our culture's standard answers to simulation.

                  We have been working on more detailed and complete answers to the "BIG QUESTIONS OF SIMULATION" for a long time.    We've been building on simulation procedures that engineers have used for a long time.    We've been working to clarify and extend these procedures for the following practical reasons:

Here is a MUCH MORE DETAILED step-by-step answer to the BIG QUESTIONS OF SIMULATION.    We explain how it is possible to map a concrete model of some aspect of the world, directly interpretable in terms of measurement, into the abstract world of mathematical analysis, where models are disconnected from direct definition by measurement.    The application of abstract results to concrete cases is also described. We show how MISMAPPINGS can now occur, and show how these mismappings may be avoided.

For some derivations, the step-by-step procedure below is easy, and gives answers no different from the current procedures.    In these easy cases, the procedure costs little, and may be useful as a check.    For harder derivations, the step-by-step procedure below takes work. The harder the derivation is, the more likely it is to be needed.

The procedure occurs in steps.

CONCRETE MODEL FORMATION:   First, one needs a fully specified finite physical model (concrete in the sense that it can be understood by measurement).    In the process of forming this model, one prepares finite increment equations, set out in a consistent system of units, that make sense in terms of measurement.    (Current procedures sometimes do not make sense in terms of measurement.)    ONLY AFTER THE FINITE INCREMENT EQUATIONS MAKE SENSE IN TERMS OF MEASUREMENT can one apply limiting procedures to convert the concrete, measurable finite increment equations into abstract differential equations.

FORMATION OF THE ABSTRACT MODEL:   One takes the limit of the finite increment equations that make sense in terms of measurement, and gets differential equations in an abstract domain that is beyond measurement.

MANIPULATION OF THE ABSTRACT MODEL:   One manipulates these abstract differential equations, according to the rules of the abstract calculus, in the ways that are useful for the simulation at hand.

CONCRETE APPLICATION OF THE ABSTRACT RESULTS:   If the previous steps are rightly done, mapping of the abstract results into the fully specified concrete model can be done using a routine symbol-for-symbol mapping.
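The four steps can be walked through on a toy problem.    The simple RC discharge below, with assumed component values, is an illustration chosen by the editor, not one of the authors' neural models; it shows a concrete finite increment equation, its abstract differential-equation limit, an abstract manipulation (the closed-form solution), and the mapping of the abstract result back to the concrete model for checking.

```python
# Toy walk-through of the four steps, using an RC discharge with assumed
# values.
#   Step 1 (concrete model): finite increment equation
#           dQ = -(Q/(R*C)) * dt,  meaningful over a finite dt.
#   Step 2 (abstract model): its limit, dQ/dt = -Q/(R*C).
#   Step 3 (abstract manipulation): solution Q(t) = Q0 * exp(-t/(R*C)).
#   Step 4 (concrete application): map the abstract result back and check
#           it against the concrete finite increment model.
import math

R, C, Q0 = 1.0e3, 1.0e-6, 1.0  # ohm, farad, coulomb (assumed)
tau = R * C

# Steps 1 and 4: concrete finite-increment simulation.
dt, t, Q = tau / 1000.0, 0.0, Q0
while t < 3 * tau:
    Q += -(Q / tau) * dt
    t += dt

# Step 3: abstract closed-form result, evaluated at the same final time.
Q_abstract = Q0 * math.exp(-t / tau)

print(f"finite-increment Q = {Q:.6f}, abstract Q = {Q_abstract:.6f}")
```

In this easy case, as the text notes below, the step-by-step procedure gives the same answer as the conventional one; the disclosure's claimed differences arise only in COUPLED models with crossterms.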

Here is the procedure and its background in more detail:


         A concrete physical model directly connected to the details of measurement must be a finite scale model.    The finite scale requirement occurs for two related reasons.    First, geometrical details of specification involve lines and space-filling geometries that become degenerate at the point scale of a differential equation. Second, ALL our measurement procedures are finite scale procedures.    No real measurements occur instantaneously, or at a true geometrical point.    For these reasons, equations representing a concrete physical model in step-by-step detail must be finite increment equations.    The measurements must be expressed in a system of consistent units(4).

A (necessarily finite) concrete physical model must be completely specified before concrete finite increment equations can be derived from that model.

If and only if all the terms in the concrete finite increment model equations are reduced into measurement-theory consistent form, then the model equations can be mapped, using a conventional limiting process, into differential equations in the abstract domain in which differential equations exist.


              Interpretation of finite increment equations has had the following hidden problem, which has resulted in mismappings that produce invalid differential equations.    In order to interpret ALL the terms in concrete finite increment model equations in a manner consistent with measurement theory, we must know the following NEW information:


After this group interpretation is done, one has concrete finite increment model equations that are reduced into measurement-theory consistent form.     THESE CONCRETE EQUATIONS CAN NOW BE MAPPED, USING THE CONVENTIONAL LIMITING PROCESS, INTO DIFFERENTIAL EQUATIONS THAT EXIST IN THE ABSTRACT DOMAIN OF THE CALCULUS.


Calculation according to the rules of analysis can then be validly done, on these valid differential equations.


If a result of analysis is to be applied to the measurable model from which it traces, groups in terms that act as if they are dimensional parameters in the domain of calculus can be interpreted as constructively derived dimensional parameters in the measurable model system.    Once this interpretation is made, the calculus model may be mapped back into the measurable model on a term for term basis, for use or checking.

                 This more detailed procedure does not and cannot address the question

That question asks about matters removed from our senses, our intuitions and our measurements. The usefulness of abstract math may always remain a wonder that we have to take on faith.

                  But the procedure DOES address the question

We believe that the procedure above accounts for the accountable.    This accounting catches mistakes that are catchable, and much reduces the amount of faith we have to have in our derivations. It lets us recheck our derivations systematically when we have reason to suspect them.

                We have found major answers in neural science based on this procedure.    We are asking that our answers be checked.    We are also asking that the procedure above be discussed seriously enough, and widely enough, so that it may be established or shown wrong.

                   Our new procedure produces some new terms in some COUPLED physical models.    In most technical calculations, judged by measurable standards, the new results are numerically indistinguishable from the old procedure's results.    (In these cases, our procedure generates finite terms that would be strictly 0 according to the old limiting procedure, but these finite terms are VERY small.)    But sometimes these finite terms are large.    In the neurosciences, these terms lead to enormously different conclusions from the ones the conventional process leads to.

                  The differences between the new procedure and the old one come because of the discovery that the dimensional parameters are not just numbers, but are subject to arithmetic restrictions (or, and this is the same thing, must be interpreted in terms of measurement theory(5)).    The requirements of measurement theory make the mathematics that applies directly to concrete circumstances different from abstract mathematics.

The new finite terms we find correspond to some old and basic questions.    Here is the fundamental PHYSICAL issue that current procedures resolve incorrectly.

As of now, our culture cannot represent such coupled relations in differential equations.    Therefore, we as a culture implicitly assume that these integrated effects do not exist.    (Other times we implicitly assume that these spatial interaction effects are infinite.)    The procedure we show above, which includes measurement theory and knowledge of the dimensional parameters, shows that these coupled effects do exist.    These coupled effects are finite.    The procedure shows how to represent these effects in differential equation form.

              This issue of coupled effects concerned James Clerk Maxwell for many years.    Maxwell, the interpreter of Faraday, was a highly trained and very able mathematician and also, and almost independently, an enormously intuitive physicist who trusted his senses.    His provisional response to mathematics' answer to the coupling problem was to reject mathematics when he had to.   In what follows(7), Maxwell includes electrodynamics within dynamics:

             Maxwell did not believe that all the quantities that the mathematicians were calling zeros (infinitesimals) were really zeros.     Maxwell acted on that belief.     His electromagnetic equations include crosseffect terms that Maxwell felt certain must exist on "dynamical" grounds.    These important terms have never been derived from a realistic model by calculus.    Maxwell's electromagnetic equations laid the groundwork for 20th century electrical technology, and much of subsequent physics.    The failure of these electromagnetic equations at atomic scales was interpreted as a fundamental, irrecoverable failure of the classical physics tradition, and that failure is closely connected with the origin and justification of current theories of quantum mechanics.

             Maxwell seems to have known how shaky the theoretical basis for his electromagnetic equations was.    Nor did he seem sure that he'd accounted for all the terms those equations ought to have.   Perhaps for this reason, Maxwell looked hard, seeking an error in derivations that struck out crosseffects he felt had to be finite.   He found a VERY suspicious place to look.   He discusses it, in an encyclopedia piece where controversy had to be avoided, a year before his death in 1879 (DIMENSIONS, Encyclopedia Britannica, 9th ed.):

             According to the first, more literal method Maxwell cites, we have "difficulty" interpreting some (cross effect) terms that interested Maxwell particularly.    Indeed, with no more information than Maxwell had, we cannot interpret them at all.    We are stopped.

             THEREFORE we make an assumption, based on faith and experience.    We make that assumption along with Maxwell, giants before him (Newton, Laplace, Laguerre, and Fourier) and workers since.    As a culture, we decide to act AS IF our physical-quantity-representing symbols may be abstracted into simple numbers in our intermediate calculations.    This ASSUMPTION has produced equations that fit experiment innumerable times.    But it remains a pragmatic ASSUMPTION with no logically rigorous basis at all.

               The assumption CANNOT be justified by claiming universal success for our culture's analysis.   Our culture's analysis has generated more successes than a person could reasonably understand and count, but our culture's analysis has also failed (often inexplicably) more often than any person could review or count.    The reporting career of George Johnson, and the text of REALITY BYTES, provide many examples.    Computational fluid mechanics, most fields of engineering, and most fields of physical and biological science offer many examples where analysis fails or cannot be brought to bear, for reasons that seem unclear to all concerned.

                 The strangeness of the assumption that the symbols that we use to represent measurable things are "just numbers" may help explain why some smart, careful students, who wish to carefully and redundantly trace decisive stages of logic as they imagine them and learn them, can distrust mathematical modeling procedures, can refuse to learn and use mathematical modeling, and can come to fear mathematics.    These students can see how measurements (or measurable quantities) can be set out as numbers.    They can see, or think they can see, how quantities can interact according to patterns, and how these patterns can be symbolized by terms in equations.    But at the start of their reasonings and visualizations, quantities are more than stark numbers - they exist in contextual detail, that detail can be expanded to measurement procedures and apparatus, and the terms correspond to pictures.    In the course of calculation, in an untraceable jump made without any detailed explanation or justification, dimensional quantities previously linked to measurement are stripped into simple, decontextualized numbers.    These decontextualized numbers appear in terms that cannot be pictured by any physically connected step-by-step process, although image analogies corresponding to these terms can sometimes be constructed.    These issues bothered Maxwell, and have bothered us, for conceptual reasons and for computational reasons as well. In Maxwell's electromagnetic studies, and in neurophysiology today, the standard mathematical modelling assumptions yield very wrong answers.

To review:

           Maxwell was unhappy with his second assumption, and searched for ways to make his first assumption computationally workable until his death.    Maxwell was NOT convinced that crosseffect terms that mathematicians were calling infinitesimal really were infinitesimal.    But he did not have a mathematically coherent reason to doubt the standard mathematics.    Maxwell had not seen that some symbols he was using, just as if they were simple dimensional numbers, were not just dimensional numbers.

             Here is a crossterm, one of many considered later.    This crossterm is encountered in the modeling of a conductive line (a wire or a neuron).    R is resistance per unit length. C is capacitance per unit length.    R and C are defined as ratios of measurables, in (fairly complex) measurement procedures invisibly encoded when the letters R and C are written.    R and C are dimensional parameters.    x is a length along the line. v is voltage. t is time.    James Clerk Maxwell called crossterms like this "difficult to interpret" in 1879.

Crossterms like this make no sense in terms of any measurement procedure.   We as a culture now dispose of them by a limiting argument that assumes they are validly written, when they are not.    As a culture, we now say that

And so terms like this are discarded as "infinitesimal."    We discard these terms without clearly defining what they mean first.

             With knowledge that the dimensional parameters have arithmetical restrictions, we proceed differently.    It happens (and this is new knowledge) that mixed products of R, C, and length are only defined in point form (or when interpreted by an operational measurement procedure, which produces the same thing).    The arithmetic of the limiting argument above, which looks so compelling, is undefined and actively misleading.    The correct interpretation of the crossproduct, writing out R and C so that their numerical and unit parts are visible, proceeds as follows.    We put the length in point form, which is (1 meter)_p for a meter length unit.

We then do arithmetic to evaluate the product of R, C and the point form of length as we usually do dimensional arithmetic: we multiply numerical parts, and add dimensional exponents.    We get a finite term.   That term may be too small to detect (for most values of R and C).   Still, the term is finite rather than zero.   In neurophysiology there are terms like this, that we as a culture have discarded, that are large and important.

(The idea that (1 meter)_p is a point form of length (and NOT the same as a length of 1 meter) is nonintuitive, and we'll justify it later in two logically independent ways.  We'll justify it by a logic of interpolation.    We'll justify it independently from measurement theory, which takes into account what dimensional parameters are, and how they are computed. Before these justifications, we show how we may interpret such terms.)
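The dimensional arithmetic invoked above, multiplying numerical parts and adding unit exponents, can be sketched in a few lines of code.    The Quantity class, the unit labels, and the values of R and C below are editorial assumptions for illustration; the sketch implements only ordinary dimensional arithmetic, and does not attempt to model the authors' "point form" semantics.

```python
# Minimal sketch of dimensional arithmetic: a quantity is a numerical part
# plus a dict of unit exponents; a product multiplies the numerical parts
# and adds the unit exponents. Class, units, and values are illustrative
# assumptions, not the authors' notation.
class Quantity:
    def __init__(self, value, units):
        self.value = value
        self.units = dict(units)  # e.g. {"m": 1, "s": -1}

    def __mul__(self, other):
        units = dict(self.units)
        for u, exp in other.units.items():
            units[u] = units.get(u, 0) + exp
            if units[u] == 0:
                del units[u]  # drop cancelled dimensions
        return Quantity(self.value * other.value, units)

    def __repr__(self):
        parts = " ".join(f"{u}^{e}" for u, e in sorted(self.units.items()))
        return f"{self.value:g} {parts}".strip()

R = Quantity(1.0e6, {"ohm": 1, "m": -1})   # resistance per unit length
C = Quantity(1.0e-9, {"F": 1, "m": -1})    # capacitance per unit length
length = Quantity(1.0, {"m": 1})           # stands in for a unit of length

product = R * C * length
print(product)  # numerical parts multiplied, unit exponents added: finite
```

Under this arithmetic the product is a small but finite dimensional quantity, which is the outcome the passage above describes: "We get a finite term."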

            We've been applying limiting arguments to undefined terms, and have been getting wrong answers.    We have to set out terms like this in a way that makes measurable sense before applying limiting arguments to them.    It turns out that, for these terms, we "get to point scale" by substituting point forms of our spatial variables directly.    (Or by doing an operational measurement procedure that amounts to this.)    At point scale, our arithmetic is defined.    This would seem natural and logical to any fresh student, and no more arbitrary than many other things she has to learn.    But people are not accustomed to it.    We are asking that people consider a new rule that restricts their arithmetic.    Discussion of this can quickly come to resemble an "argument about religion."    Perhaps this isn't surprising.

               People often learn their mathematics as a matter of faith, based on patterns of seeing that may be incomplete or wrong, but that may seem compelling to the people who accept them. Mistakes based on "mathematically perfect" derivations can be hard to see, and can last a long time.    For example, an important mistake in fluid mechanics lasted 150 years, defeated some famous analysts, and still persists in some elementary physics texts.    The inviscid equations of Leonhard Euler were used to predict flows.    Those equations "show" that wings have neither drag nor lift.    The analysis was corrected by Ludwig Prandtl in 1904, and his change made modern fluid mechanics possible.    The change took new insights (the idea of the boundary layer) and new techniques (asymptotic perturbation theory).    The Euler analysis made a deeply buried (and plausible) assumption (no viscous effects, even at the walls) that wrecked predictions.    It was a long time, even after Prandtl pointed out the right answer, before everyone could see the buried mistake, see Prandtl's better analysis, and learn to use Prandtl's breakthrough.    Even though everybody knew that the Euler equations weren't working, there was still resistance.    A later modification of flow theory, by Kline(8), took fifteen years and the work of several coworkers and many graduate students to become established.    Again, most of the fluid mechanikers who resisted the new way really knew the old way was wrong, at some level.    Even so, there was much resistance.    Established ways of thought and reflex patterns of work are hard to change.

             The resistance to special treatment of the dimensional parameters has been far beyond anything in our experience.    We have difficulty because people do not know that the old analysis has a problem.    Many regard the possibility of the problem we speak of as unbelievable.    We question limiting arguments based on ill defined terms, and limiting arguments have a special and high status in mathematics for historical reasons.   We are asking for modification of patterns that are very deeply habituated.     If one looks at a trained analyst at work, in either the "pure" or engineering sciences, one will see limiting arguments and the arithmetical usages around them applied with unconscious-reflexive facility, as spontaneously as breathing.    These processes are "done from the wrist" as if they were spinal reflexes.    One would not be looking at a trained analyst otherwise.    Analysts judge each other by this sort of reflexive facility.

           We would all be lost without our reflexes, which may derive from culture but which become a part of us.    Once a person is committed to a reflex, it is awkward to consider it, or to doubt it, or to try to change it.    In our experience, if one asks a trained analyst to change her limiting procedures, or the arithmetical usages around them, one encounters some of the same difficulties one might have if one asked that analyst to stop breathing.    For one thing, the analyst will perceive the request as an attack on her competence.    For another, the analyst will perceive the request as a crazy, literally unthinkable request.    The fact that the old processes have been involved in much successful work will be taken as reason to trust the old procedures beyond doubting.    The notion that our limiting processes are associated with flawed arithmetic in some cases will be dismissed.    If one persists, and one is taken more seriously, one is likely to encounter fear responses.    If one is taken very seriously, one may encounter panic responses combined with disorientation responses, closely followed by various defensive and hostile responses.    We have seen these responses from individuals committed to the current analytical usages. We have seen these responses from groups committed to these current analytical usages.   These are not responses that fit people for detached judgement.

             We are dealing with a matter that is a large scale issue of life and death.    It is an issue of great scientific interest.    Our conclusions need to be checked.    For mathematicians, physicists, and many engineers these logical issues are not easily addressed because they are linked to contextual, reflexive, institutional issues that are intense but that are not simple.    For such people, we are saying something that is not only "unbelievable," but also "unseeable" in the paradigm conflict sense Kuhn and others have described.

             People who are not so reflexively engaged, and who are not so indoctrinated, are needed to look at the work, and check it step-by-step.    People not blinded by their reflexes and their indoctrinations will be better able to judge the argument and evidence here than the "experts" so blinded.   People interested in truth, who see the stakes, but do not feel they have a lot to lose if the argument goes one way rather than another will be better able to judge the argument and evidence here than the "experts" who are also parties-in-interest.   We care about our results too much to be fully trustable judges of them.   We want our results to be right. Some expert physicists and mathematicians will care about our results too much to be fully trustable judges of them, because if we are right, we will invalidate much of the work they have done, that they rely on, and that they revere.   Such experts will want our results to be wrong.

            No one is to blame for these entirely human motivations.   Even so, we need umpires, who can look at the work from a certain distance, and care for the truth.   The medical implications of this work are so large that right answers are what should matter here.

            We have been in correspondence with George Johnson since December of last year. We've decided that the work requires checking from a broadly based audience that includes mathematicians and physicists, but does not include them exclusively.    We have concluded that an extensive disclosure-submission in a New York Times forum under George Johnson is a proper way to address that audience.    We have found that we cannot reasonably rely on peer review, unassisted, in its usual form for this particular case.    Peer review is valuable but not perfect.    We strongly support peer review as the standard pattern of scientific and professional engineering evaluation.    One of us (Kline) has more than 170 peer reviewed articles, and has reviewed at least as many.    (Kline, no stranger to peer review, is one of the leaders in fluid mechanics, especially computational fluid mechanics, this century.)    But in our particular and unusual case we are, reluctantly but definitely, attacking the "invisible colleges" of physics and mathematics on decisive ground.    We are doing so as engineers, that is, as outsiders.    Stakes are high and, in our experience, emotions related to the work also run high. Peer review was never meant for this.    We need, in addition to checking from mathematicians and physicists, checking by quantitatively competent people who are NOT mathematicians and are NOT physicists.    There are many such people in the United States.    The mathematics that we disclose is not intrinsically difficult once it is pointed out.    It may be checked in a matter-of-fact fashion by professionals of all kinds.    But the new mathematics may be most difficult for mathematicians and physicists, because it requires that they change deeply established patterns of thought and reflex.

             Practical implications of our work include a reinterpretation of important aspects of neurophysiology, some plain matters of life and death, some as interesting as memory.    We say that the effective inductance of small neural lines is now understated by factors of 10^10 to 10^19 (that is, effective neural inductance is understated by 10,000,000,000:1 to 10,000,000,000,000,000,000:1).    We believe that serious mistakes in neural science and medicine have been made, and are being made, because of this mistake and other mistakes that follow from the same mathematical misunderstanding.    For scientific, medical, and moral reasons, it is important that we be checked in this.


             Readers of George Johnson and REALITY BYTES may not be surprised when we say that there is an error at the interface between the measurable world and our culture's mathematical modeling.  They may be surprised when they see how basic the error is, and how old it is. We were surprised that the error has existed since Isaac Newton's time.    Since that time it has been assumed that the "mapping patterns" built into analysis somehow exactly fit the "territories" of the measurable world.    The reason for the fit has been thought vague, or magical.

             The fit has been better grounded than many have thought.    Where the map-territory fit exists, the fit is there, in the cases we've examined, because we as a culture have always had sharp, mechanistic "mapping tools" that we have reason to trust from measurement experience. However, we as a culture have not recognized these ubiquitous "mapping tools" as the logically special entities that they are.    We as a culture have sometimes used these tools with facility. As a culture, we have never used them with understanding.    Nor have we as a culture always used these mapping tools perfectly.

             The linkage between the measurable world and our culture's more-or-less abstracted equation representations of that world occurs via dimensional parameters, common number-like entities, used for centuries, that encode and abstract experimental information in our culture's physical laws.    These dimensional parameters ARE the link between the physical laws that can be measured, and (more or less) abstracted equations.    The dimensional parameters have seemed to be so trouble-free that no one has looked very hard at them.    We found that we had to do so.

Here are some directly measurable dimensional parameters:

These dimensional parameters are not abstractions: each of them corresponds to the detailed context of a measurement procedure. (Usually, you'd need a sketch, and some instructions, to describe that measurement procedure.)    The dimensional parameters are each expressible in units, and the unit system can be reduced to units of length, mass, and time.    But the units notated in a dimensional parameter are only defined in the context of a SPECIFIC, and sometimes very detailed, measurement procedure.
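The point that a dimensional parameter carries units and a measurement context, not just a numerical value, can be sketched as a data structure.    This is an illustration only: the class, its field names, and the sample value (roughly the resistance per unit length of a 1 mm^2 copper wire) are our own assumptions, not drawn from the papers under discussion.

```python
# Sketch: a dimensional parameter as (value, unit exponents, measurement
# context) rather than a bare number.  Illustrative names and values only.
from dataclasses import dataclass


@dataclass(frozen=True)
class DimensionalParameter:
    value: float       # numerical value in a consistent unit system
    units: dict        # exponents over base dimensions M (mass), L (length), T (time), Q (charge)
    procedure: str     # shorthand for the measurement procedure that defines it


# Resistance per unit length of a wire: ohm/metre reduces to M L T^-1 Q^-2
# in a mass-length-time-charge base system.  0.017 ohm/m is roughly right
# for a 1 mm^2 copper wire, used here purely as an illustrative value.
R = DimensionalParameter(
    value=0.017,
    units={"M": 1, "L": 1, "T": -1, "Q": -2},
    procedure="voltage drop / (current * length); fixed temperature; homogeneous wire",
)
print(R.value, R.units)
```

The `procedure` field stands in for the sketch-plus-instructions the text says a real measurement context requires.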

          In addition to the simple dimensional parameters, there are also compound dimensional parameters, made up of products or ratios of dimensional parameters and other dimensional variables taken together. RC in equations 2-3 above is a compound dimensional parameter.

              A famous class of the compound dimensional parameters is the dimensionless numbers, such as the Reynolds number used in fluid mechanics.    These "dimensionless numbers" exist in a strongly specified dimensional context: they exist in consistent dimensional systems (systems that may be reduced to length, mass, and time and a tightly specified list of measurement procedures.)    The dimensionless parameters are defined in terms of context-complete, context-specific measurement procedures applied to a particular circumstance.    For instance, a particular Reynolds number will apply to a particular airplane model at a particular angle of attack at a particular velocity for a particular fluid, and to other geometrically similar systems where ratios of inertial to viscous forces are the same.    A Reynolds number or other dimensionless number has an exponent of 0 for all of the dimensions in the consistent dimensional system that it is defined in.   The numerical value of the Reynolds number or other dimensionless number is exactly the same for any other consistent dimensional system subject to EXACTLY the same measurement procedures for EXACTLY the same physical circumstances.    The "dimensionless" numbers exist in a dimensional world that must be specified in much detail.    They are not "just numbers" in the sense of "just abstract numbers."    Again for emphasis: the dimensionless numbers and dimensional parameters relate to concrete, measurable things.    They are not abstract.    They are connected to context and specific embodiment.    All the dimensional parameters can be defined according to the following pattern.

Example:    Resistance per unit length of a wire, R, is the ratio of voltage drop per length of that wire to current flow over that length of wire.    A resistance per unit length determined for a specific wire for ONE specific length increment and ONE specific current works for an INFINITE SET of other length increments and currents on that wire (holding temperature the same, and assuming wire homogeneity.)    R is the dimensional parameter in Ohm's law for a wire.    Other physical laws have other dimensional parameters.
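The example above can be written out as a calculation.    The calibration numbers are hypothetical; the point is that ONE increment and ONE current fix R for the infinite set of other increments and currents on the same homogeneous wire.

```python
# Sketch of the Ohm's-law-for-a-wire example: R determined from one
# calibration measurement predicts voltage drops for other lengths and
# currents on the same homogeneous wire.  Numbers are illustrative.
def resistance_per_length(voltage_drop, current, length):
    """R = (voltage drop) / (current * length), in ohm per metre."""
    return voltage_drop / (current * length)


# One calibration measurement: 0.34 V drop over 10 m at 2 A.
R = resistance_per_length(0.34, 2.0, 10.0)

# The same R then predicts the drop for a different increment and current:
predicted = R * 5.0 * 25.0   # 5 A over 25 m
print(R, predicted)
```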

            The dimensional parameters encode experimental information into a compact, linear form.    This form may be denoted by a symbol exactly like the symbols used to denote simple dimensional numbers, and that is now done.    Such symbols, representing encoding of context-bound measurement information, are now used exactly as dimensional numbers inside our culture's physical equations.    The distinction between the dimensional parameters and ordinary dimensional numbers exists functionally and has done so from the first equation definitions of physical law. However, to our knowledge that distinction has not been clearly defined before.    So far as we can tell, the distinction has not been thought important.    Standard notation has always ignored the distinction, so that the distinction has been out of sight and out of mind.

       If one asks "how do our culture's equations connect to the measurable world," a major question in REALITY BYTES, the simple (but not too simple) answer is "via our dimensional parameters, and the specifications and contextual limitations built into them."
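The earlier claim about dimensionless numbers, that a Reynolds number keeps exactly the same numerical value in any consistent dimensional system for exactly the same physical circumstance, is easy to check numerically.    A minimal sketch, with flow values we chose to be roughly those of sea-level air (not values from the text):

```python
# Sketch: the Reynolds number computed for the SAME physical circumstance
# in two different consistent unit systems (SI and CGS) has the same
# numerical value.  Flow values are illustrative.
def reynolds(density, velocity, length, viscosity):
    """Re = rho * v * L / mu, the ratio of inertial to viscous forces."""
    return density * velocity * length / viscosity


# Air past a 1 m chord at 30 m/s, SI units (kg/m^3, m/s, m, Pa*s):
re_si = reynolds(1.225, 30.0, 1.0, 1.81e-5)

# The same circumstance in CGS units (g/cm^3, cm/s, cm, poise):
re_cgs = reynolds(1.225e-3, 3000.0, 100.0, 1.81e-4)

print(re_si, re_cgs)   # numerically equal, as the text asserts
```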

             Symbols that denote dimensional parameters and identical symbols denoting ordinary dimensional numbers have been dealt with as if they were interchangeable in all respects.    They are not.    It has not been understood that the dimensional parameters are not just numbers, but are subject to restrictions on the arithmetic that can be validly done with them.    They must be interpreted so that they fit the requirements of dimensional theory.

            Dimensional parameters work in the arithmetically expected way when they are used in the exact manner for which they were defined.     For more complicated conditions, where several dimensional parameters and spatial increments must be interpreted together, the arithmetic must be done in POINT FORM.  

            "Arithmetic" applied to these more complicated conditions that is not in POINT FORM is invalid and misleading.


           As a culture, we haven't understood what the dimensional parameters were, and how they work.    As a result of this misunderstanding, which has been deeply buried in our culture's notation, we as a culture have used limiting arguments that are incorrect, sometimes calling finite terms infinitesimal (0), sometimes calling finite terms infinities.    We've had little reason to doubt those limiting arguments, because in many, many cases, our culture's numerical results have been experimentally perfect.    However, there are cases, and neurophysiology is one of them, that have worked very badly because the limiting procedure, in these cases, has been grossly misleading.
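The conventional limiting step being questioned can be made concrete.    In a coupled finite-increment relation, a crossterm carries two factors of delta x where a first-order term carries one; the standard derivation divides through by delta x and lets delta x go to zero, at which point the crossterm's share vanishes.    The sketch below, with parameter values we chose arbitrarily, only illustrates that conventional scaling argument; it does not adjudicate the dispute.

```python
# Conventional scaling of a crossterm against a first-order term in a
# coupled increment relation.  R and G values are arbitrary placeholders.
R = 2.0e6    # resistance per length
G = 5.0e-7   # leakage conductance per length

ratios = []
for dx in (1.0, 1e-2, 1e-4, 1e-6):
    first_order = R * dx               # one factor of dx
    crossterm = (R * dx) * (G * dx)    # two factors of dx
    ratios.append(crossterm / first_order)

print(ratios)   # each ratio equals G*dx, shrinking linearly with dx
```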

            We have spent years searching for this insight together.    Even after we found the result, we had a hard time believing it, for the same reasons that make the result difficult for other people.    We kept working because we had to solve this problem if we were to understand central issues in neurophysiology and elsewhere.

            In many cases, the implications of our new derivational procedure are small because the new terms it produces, though finite in the mathematical sense, are also negligibly small.    However, the same procedure applied to the neural transmission equation radically changes the logic of neurophysiology, where some of the new terms are very large.    Rederivation of the neural conduction equation appears to make much of brain function understandable that has not been understandable before.


       The following papers are available in postscript form by FTP ( ftp . These papers are revisions of papers submitted to NATURE  (discussed below.)    We would be grateful to anyone who would check them.    The mathematical argument that follows in this transmission is somewhat more evolved, and perhaps more clear, than that in the mathematical papers submitted to NATURE, but has less detailed references.    We ask that the text in this transmission be checked especially.    We are interested in checking by mathematicians and physicists, but also by engineers and any of the very large number of people who can follow a technical argument, and trace it step-by-step.    (The first author, MRS, believes that the U.S. Patent Office would be the best adapted and staffed of all the institutions in American society to check this material, if the USPO could be persuaded to undertake the work.    Other institutions, including military organizations, could do the checking well.    The Society of Automotive Engineers, or a mathematically oriented engineering concern, could also check this work well. )


A.   MODELING OF PHYSICAL SYSTEMS ACCORDING TO MAXWELL'S FIRST METHOD                                                                 by   M.R.Showalter and S.J.Kline

           This paper sets out the material at a level that concerned James Clerk Maxwell.    It is, we believe, at a level that would most concern a new student, or someone, not a mathematician, looking at the issues involved.


          This paper defines what a dimensional parameter is, and goes through the detailed reasons why the dimensional parameters are subject to arithmetical restrictions that invalidate our conventional limiting arguments.

C.    If equations derived according to Maxwell's 1st method are right, inferences from experiments are only valid over a RESTRICTED range.   by M.R. Showalter and S.J. Kline

           This paper shows, by numerical example, how the change to the intensification procedure can be either totally insignificant , or of dominant significance, depending on the numerical size of the dimensional parameters involved in the physical case at hand.

The biggest single motivation of our work is set out in the following biologically oriented papers.   

E.      REASONS TO DOUBT THE CURRENT NEURAL CONDUCTION MODEL                                               by   M.R. Showalter

           The Kelvin-Rall passive neural conduction equation (line conduction equation) has the status of doctrine in the neural sciences. Kelvin-Rall has long had evidence standing against it. Standing for it, in the main, is faith in its mathematical derivation, which rests on a limiting argument, thought to be certain, that we call false. In this paper we review evidence against Kelvin-Rall, both old and new. The evidence reviewed favors the Showalter-Kline (S-K) neural equation that we derive.

F.    A NEW PASSIVE NEURAL EQUATION. Part a: derivation        by   M.R. Showalter

            When the arithmetic limitations of the dimensional parameters are accounted for (when full consistency with measurement theory is achieved) the line conduction equation gains new terms.    Under neural conditions, some of these terms are huge.

G.    A PASSIVE NEURAL EQUATION: Part b: neural conduction properties                by   M.R. Showalter

            The S-K neural conduction line equation is evaluated, using standard electrical engineering methods.    Logically interesting properties VERY different from those of Kelvin-Rall are shown.


    When one combines paper G. with anatomy and other neurophysiological knowledge, much logic of brain function, in health and disease, "almost falls into your hand." (Subject to the unit system change below, and in view of the capacitance of the neural membrane, fluid cleft, glial membrane interface.)

        Papers E-G are in the wrong units, but are provided essentially as submitted to NATURE.    The data and graphs show curves and values that are near to correct, but they do not account for two opposing and balancing issues: the need to calculate values in the MKS unit system, and the need to account for the capacitance of the neural membrane-fluid cleft-glial membrane assembly, rather than the capacitance of a single neural membrane alone.    These papers are in CGS-coulomb-volt units, the customary units for biological calculation.

        A checker asked a question about dimensional consistency.    We found that, for dimensional consistency of our crossterms, the only consistent unit system available was the Meter-Kilogram-Second-Coulomb-Volt (MKS-Georgi) system.    When the change to MKS-Georgi units was made, the magnitude of our crossterms changed, and our calculated conduction velocities fell to 1/100th of those we calculated in CGS units, velocities much lower than measured velocities.    Either our product of the numerical values of line resistance and line capacitance was wrong by 100-fold, or our theory was wrong.    Our line resistance couldn't be very wrong, and that left us looking for a two-order-of-magnitude reduction in capacitance.    Showalter has found that reduction, and in doing so has recognized the reason for the ubiquity of the glial cells surrounding unmyelinated neurons, a longstanding mystery in the neurosciences.    The requirement for MKS units is set out below.    The function of glia, and the fluid cleft, are treated in

I.    The Glial membrane-fluid cleft-neural membrane arrangement cuts effective neural capacitance, greatly increasing signal conduction velocity and greatly reducing the energy requirement per action potential.                    M.R. Showalter
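The direction of the capacitance effect claimed for the glial arrangement follows from the standard series-capacitor formula: capacitances in series combine to less than the smallest of them.    The sketch below uses that textbook formula with values we invented (a roughly textbook 1 uF/cm^2 for each membrane and a deliberately small, hypothetical cleft term); it illustrates the mechanism's direction only, not the paper's actual numbers.

```python
# Standard series-capacitor formula: 1/C_eff = sum over i of 1/C_i.
# All values are invented for illustration; c_cleft is chosen small so
# that it dominates the series sum.
def series_capacitance(*caps):
    return 1.0 / sum(1.0 / c for c in caps)


c_neural = 1.0    # uF/cm^2, a textbook order of magnitude for a membrane
c_cleft = 0.02    # hypothetical fluid-cleft term
c_glial = 1.0     # uF/cm^2

c_eff = series_capacitance(c_neural, c_cleft, c_glial)
print(c_eff)      # well below c_neural alone
```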

            These biologically oriented papers discuss "matters of life and death."      Medical and research decisions in the neural sciences rely, very often, on the Kelvin-Rall equation.    We say that Kelvin-Rall presents a grossly misleading picture of how neurons actually work.    This should be checked.    The issue matters almost everywhere in the neural sciences.    Two interesting examples where misunderstanding due to Kelvin-Rall seem particularly serious are ventricular fibrillation, the largest single immediate cause of death in advanced countries, and epilepsy.

                We are asking to be checked.    Everything we conclude hinges on the following question:   Are the dimensional parameters arithmetically restricted entities, or are they not?

            That question hinges, in turn, on other questions.    Do we have to model concrete circumstances concretely before mapping them into abstract mathematics?    If we do, are there special rules that apply to concrete dimensional equations that do not apply to abstract equations?    We answer yes.    We are asking to be checked.

Checking so far:

       This posting is not our first effort to get this mathematics checked.    We don't believe that George Johnson would permit this posting if he were not satisfied that we'd worked to get checking elsewhere.    Nor is our math unchecked.    Nor have uncorrected mistakes in the mathematics been found by anyone.    (People have expressed aversion to our work, but that is a different thing. ) Nonetheless, the mathematics, which is at the core of physical modeling, has not been accepted or seriously discussed, by mathematicians or physicists.    We do not think it right to review our efforts to get this math checked in detail here.    We can say that, for six years, we've worked hard to get this math, and work leading up to it, critically reviewed and discussed. We've attempted to get checking of the work in every effective way we've been able to think of.    For the last year particularly, after the arithmetical limitations of the dimensional parameters were clearly identified, we've tried to get the work checked.   We've not been called wrong for explicit, traceable reasons that could stand up to examination (except for a much appreciated question about scale choice, which has led us to use MKS-Georgi units.) Nonetheless, the work has been dismissed, and seems to have been treated as unthinkable.    Mathematicians and physicists of distinct good will have had this reaction, and sometimes their reactions have involved visible signs of personal stress. Our work has been "undiscussable."    Late last year, one of us (MRS) sent a series of e-mail messages, describing the math in some detail, and soliciting checking of it, to every member of the Department of Mathematics at the University of Wisconsin, Madison and to most of the neuroscientists at U.W.    
These email messages, and a background essay on checking efforts sent to George Johnson in January 1997, are available to interested parties.    There were no responses to my requests from the mathematicians or physicists solicited, nor did substantial help from the neuroscientists materialize.    People watching, including a mathematically gifted Dean, felt that there was some obligation for the mathematicians to respond if they had found a mistake.    Those who examine the transmissions may agree.    Nonetheless, the mathematics, which is at the core of physical modeling, and which has immediate and large medical and scientific implications, was not discussed.

             If we could have done our work without threatening mathematical foundations, we might have been able to get our work checked and accepted.    Efforts to assist the first author, M.R.S., some above-and-beyond-the-call-of-duty by professorial standards, were made by two gifted mathematicians and, especially, by an able and kind physicist.    Insofar as it was possible to offer help relevant to our neural modeling without acceptance of the "unthinkable" and "wild" math presented, here, that help was offered.    But when we found that a basic mistake at the interface between physical modeling and math was there, and could not be avoided, help ceased, and was replaced by resistance.    Our work could not be considered, discussed, or checked.

            Experience leads us to believe that the conceptual taboos we have faced, that have stood in the way of getting our work checked and considered, are very widespread in the "invisible colleges" of mathematics and physics, and in those "invisible colleges" that defer to mathematics and physics.

            In the ideal, when a mistake is somehow found in a field, steps are taken to rigorously, completely test it, "let the chips fall where they may."    In the ideal, after people have gone out of their way to check the work, they go out of their way, at all levels of the field, to communicate it and see that it is communicated, both formally and informally, if they find out that it matters.    According to this ideal, the news then spreads like wildfire.    Unless we err, and we are asking to be checked, we've encountered as fundamental a mistake as has ever gone undetected in mathematical physics.    However, because of the taboos we've encountered in our work, we've encountered behavior that seems to us to be the exact antithesis of the "ideal behavior" described above.    We're asking for help outside of conventional channels for that reason.

           Although admiration for George Johnson's wonderful writing, and the close and complementary fit between his FIRE IN THE MIND and Steve Kline's CONCEPTUAL FOUNDATIONS FOR MULTIDISCIPLINARY THINKING (Stanford, 1995) was a reason why we contacted him in December of last year, another reason we contacted Johnson was to solicit his help in getting our math checked.      There has been considerable correspondence between us since that time, much of it touching, directly or indirectly, on the subject of checking.    For some time, our thoughts have been those expressed in the following paragraph that I wrote to NATURE in February.

I did not say, but could have, that THE NEW YORK TIMES is, just as plainly, the Five-Star-General of newspapers.

            On the issue we've faced, where we must ask people to confront a massive and fear-laden new truth, we have long felt that our best hopes rested with NATURE, or THE NEW YORK TIMES, or both.       Here, we are getting help from both.

             On February 20, we submitted to NATURE papers closely similar to seven of the eight papers set out above.    This was a highly unconventional, even outrageous, thing to do.    NATURE submissions are expected to be single-paper submissions.    We knew that.    Although we did, of course, hope for publication of some of the work we submitted, publication of our submissions, as submitted, was not our first priority.    More than anything else, we were hoping to get the core mathematics CHECKED.    In the transmittal letter, I wrote:

Later, I wrote:

            If our over-massive submission to NATURE seems a desperate act, it was.    The act may serve as a measure of our view of the difficulties we faced getting our mathematics checked.    We needed checking, and we needed that checking at a high enough level so that the work could be taken seriously in neural medicine and elsewhere.    We had to appeal to an elite institution that was multidisciplinary, one that did not owe allegiance to any one discipline, one that did not share the fears of the disciplines, but that judged the disciplines.    And so we submitted to NATURE, with a copy to George Johnson.    (NATURE was informed of the copy to Johnson.)

           Our papers have not been peer reviewed by NATURE.    However, they have been thoughtfully, carefully, and respectfully considered by editorial staff at NATURE.    An effort seems to have been made, and time seems to have been taken, in an attempt to find a way to publish them.   NATURE's senior physical science editor took time to write a remarkably supportive, helpful letter, that was a limited endorsement letter as well as a rejection letter.    That letter, which is deeply appreciated, is shown in Appendix A.    Within NATURE's operational constraints, they may have done absolutely everything they could have for us.    In any event, the letter is helpful.    The primary reason to include this letter here is to reinforce the value of further checking.    Our math is not difficult in the ordinary sense of "hard for someone new to the field."    Our math is difficult in the sense that it is "unthinkable" and "threatening" to mathematicians, physicists, and those who defer to them. NATURE objected to matters of format in our submission that were unacceptable by their standards. NATURE would not or could not ask its reviewers to review informal papers, or the information in them.    Even so, they went on to consider the work.    Although further review may have brought up objections on substance, no objections on substance were mentioned in their letter.    This is not peer review, but it does go some way toward establishing plausibility of the work.

NATURE's letter supports the following chain of argument.

           We are asking that our work be CHECKED.    Indeed, because of the importance of the issue, the mathematics should be FOUGHT OUT IN PUBLIC TO A CLEAR DECISION.    The decision should be clear enough to enough people and institutions so that it affects other decisions in medicine, research, and elsewhere.    We are dealing with a stark matter of life and death in neural medicine, and with an important issue in many other places.    A fight to a decision is justified.    This posting is an attempt to move toward such a clear decision.

We set out our mathematics-physics interface modeling in more detail below.   We ask that this be checked.

         Here is James Clerk Maxwell, writing a year before his death in 1879 (DIMENSIONS Encyclopedia Britannica, 9th ed.):

To review:

        Maxwell's second assumption may be shown wrong mathematically by finding an inconsistency in the arithmetical usages it assumes.    We have found such an inconsistency.    The dimensional parameters, symbol constructs that we've been using just as if they are simple numbers, ARE NOT JUST DIMENSIONAL NUMBERS, AND HAVE SPECIAL RULES.    The limiting arguments that we as a culture have used employing the dimensional parameters are wrong in general, because they are applied to some arithmetically undefined terms.    When we define these terms the inconsistency is eliminated.

How crossterms happen in the derivation of coupled equations from models:

           Here is how the "difficult to interpret" crossterms come about in a particular case, the derivation of electrical line equations from a physical model set out interpreting "the symbols which occur as of themselves denoting lines, masses, times &c."

To derive a differential equation from a physical model in classical physics (still the dominant physics in practical work) we as a culture (especially as an engineering culture) argue as follows. We generally do so without formal distinction between Maxwell's first and second methods:

             Our procedure, discussed before, is an elaboration of standard derivation pattern 1-3 above. Let's proceed to infer a finite increment equation for electrical transmission along a line, a case that is inherently coupled.    We proceed according to Maxwell's first method, using symbols and arithmetical operations to represent a physical situation, not just manipulating abstract and disembodied number-symbols.

Fig. 1 shows a neural conductor (axon or dendrite considered as a transmission line).    A tubular membrane is filled with and surrounded by an ionic fluid.    The fluid inside the tube carries current (and signal) and has resistance R and electromagnetic inductance L per unit length.    The outer fluid is grounded.    The membrane separating these conducting fluids has capacitance and leakage conductance per unit area.    (For a tube of fixed circumference, these per-unit-area membrane properties reduce to the per-unit-length parameters G and C listed below.)    We speak of the following variables and dimensional parameters:


v = voltage      i = current      x = position along the line      alpha = arbitrary length interval

Dimensional Parameters:

R = resistance/length        L = electromagnetic inductance/length

G = membrane conductance/length        C = capacitance/length

We find it convenient to write these as an input specification list like this:      (v, i, x ; R, L, G, C)
with the variables listed before the semicolon, and the dimensional parameters afterwards.    For this line conduction model, we assume that  R, L, G, and C are each smoothly and uniformly distributed over the length of conductor being modeled.    (We are modeling a homogeneous case, assuming continuity and differentiability of the model at any scale, however small.    With these assumptions, we take a step toward abstraction for our still concrete model.)
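The input specification list can be rendered as a plain data structure, variables before the semicolon and dimensional parameters after.    The numerical values below are placeholders of our own, not values from the papers.

```python
# Sketch: the input specification list (v, i, x ; R, L, G, C) as a data
# structure.  Names follow the text; numerical values are placeholders.
line_model = {
    "variables": ("v", "i", "x"),   # voltage, current, position along the line
    "dimensional_parameters": {
        "R": 2.0e6,    # resistance / length
        "L": 1.0e-3,   # electromagnetic inductance / length
        "G": 5.0e-7,   # membrane conductance / length
        "C": 3.0e-7,   # capacitance / length
    },
    "assumptions": "R, L, G, C smoothly and uniformly distributed along x",
}
print(sorted(line_model["dimensional_parameters"]))
```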

       Fig. 1 shows an arbitrarily chosen length alpha, which we will call delta x because that is commonly done.   alpha is picked from other indistinguishable lengths.    (For consistency, the length delta x is a number times the unit of measure used in the calculation in the x direction.)   We are giving length a two-symbol name that includes a separate number.     (Note a suspicious circumstance: other entities that have numerical values in our culture's calculations are denoted as single symbols, and do not have separate numerical symbols (such as ) associated with them.)

         We need finite difference equations, delta v/delta x and delta i/delta x.    For the finite equations, we'll be writing out terms that have usually been understood to exist, but that have been called infinitesimal and neglected.    We keep track of these terms until we find out what these symbols mean, and how they must be interpreted.    The logic that generates crossterms in coupled cases is like the following:

From such interactions, it follows that:

In present practice, when we as a culture derive a differential equation from such a coupled relation, we say:

                   i over the interval is a function of v at x and x+delta x

                                            (and nothing more.)

The truncation implied in the words "and nothing more" follows from the second method Maxwell cites, because the crossterms are infinitesimal under that assumption.

           Let's derive voltage and current equations that include crossterms.    We'll see why the crossterms cause us Maxwell's "difficulty in interpretation."    We'll write our voltage and current functions as v(x,t) and i(x,t).    We're assuming homogeneity and symmetry for our conductor.    We assume that, for small enough lengths delta x, the average voltage (current) across the interval from x to x+delta x is the average of the voltage (current) at x and at x+delta x.

Writing down voltage change as a function of the dimensional parameters and variables that directly affect voltage, we have:

Writing down current change as a function of the dimensional parameters and variables that directly affect current, we have:

We may equally well rewrite (4a) and (4b) going from the point x - delta x/2 to the point x + delta x/2, so that the interval is centered at x.
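For reference, the coupled relations being manipulated here are the standard finite-difference telegrapher's relations. A minimal sketch (ours, with the crossterms under discussion deliberately neglected, which is exactly the truncation at issue) is:

```python
def delta_v(R, L, i_mid, didt_mid, dx):
    """Voltage change v(x + dx/2) - v(x - dx/2) over an interval
    centered at x, from series resistance and inductance only
    (all crossterms neglected)."""
    return -(R * i_mid + L * didt_mid) * dx

def delta_i(G, C, v_mid, dvdt_mid, dx):
    """Current change over the same centered interval, from shunt
    conductance and capacitance only (crossterms again neglected)."""
    return -(G * v_mid + C * dvdt_mid) * dx
```

With L = 0 and a steady current, delta_v reduces to the Ohmic drop -R * i * dx, the first and uncontested term of the expansions discussed in the text.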

Note that equation (5a) includes i(x+delta x/2) and its time derivative; di(x+delta x/2)/dt is defined by equation (5b).    Equation (5b) includes v(x+delta x/2) and its derivative; dv(x+delta x/2)/dt is defined by equation (5a).
Each of these equations requires the other for full specification:  each contains the other.
If the cross-substitutions specified implicitly above are explicitly made, each of the resulting equations will again contain the other.    So will the next generation of equations, and the next, and so on.   This is an endless regress.
Each substitution introduces new functions with the argument (x+delta x/2), and so there is a continuing need for more substitutions.   To achieve closure, one needs a truncating approximation at the position x for current, voltage, and their time derivatives.
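The endless regress can be made concrete by counting the positions whose values each round of midpoint cross-substitution demands (a sketch of ours, not an equation from the source): endpoint values call for midpoint values, which call for values at new midpoints, and so on.

```python
from fractions import Fraction

def sample_points(depth):
    """Positions (as fractions of the interval) whose values are needed
    after `depth` rounds of midpoint cross-substitution."""
    pts = {Fraction(0), Fraction(1)}
    for _ in range(depth):
        s = sorted(pts)
        # each substitution round introduces the midpoint of every
        # adjacent pair of already-required positions
        pts |= {(a + b) / 2 for a, b in zip(s, s[1:])}
    return pts

# Each round doubles the resolution: 2**depth + 1 points are required,
# so closure is only reached by truncating at some chosen depth.
counts = [len(sample_points(d)) for d in range(4)]
print(counts)  # -> [2, 3, 5, 9]
```

The count grows without bound, which is the "continuing need for more substitutions" in numerical form.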

       We cannot assume that terms in this kind of coupled expansion are unimportant until we know what the symbols in them mean, and what the arithmetical rules that apply to those symbols are.  

We MUST assume that these terms have a dimensional interpretation consistent with the other terms in the same equation.    We must assume that all the terms, properly determined, have the same dimensional unit exponents.    We must assume that all the terms, properly interpreted, are meaningful (have specific numerical and dimensional interpretations) at all spatial scales in the equation's domain of definition.

! ! ! ! ! ! ! ! ! ! ! ! ! !

We can proceed with substitutions like the following, associating symbols without interpreting them numerically or physically. For example


which expands algebraically to

These terms would be simpler if voltages and derivatives of voltages were taken at the interval midpoint, x. But even so simplified, it is terms of this kind that are "difficult to interpret" in Maxwell's sense. (Maxwell was an industrious analyst living in an analytically competitive world, and when he wrote "difficult to interpret" we believe that he must have meant "operationally impossible to interpret".)

Consider crossterms like these:

If one wishes to speak of expressions like those of (9), what do they mean for finite delta x when the symbols are considered to stand for fully physical things, or complete models of physical things, subject to the detailed physical rules that stand behind the model?    How does one interpret them by means of a sketch tied to a measurement procedure?    In discussions with mathematicians, engineers, and scientists, the first author (MRS) was not (for three years) able to find anyone who was confident of the meaning of these kinds of expressions at finite scales (or, as a matter of logic, when length was reduced to an arbitrarily small value in a limiting argument).    The equations below show voltage change over an interval of length delta x, centered about the point of position x, for three stages of cross substitution.    Symbols are grouped together and algebraically simplified up to the point where questions arise about the meaning of further algebraic simplification of relations in the dimensional parameters R, L, G, and C.    According to Maxwell's first method, the expressions in curly brackets are "difficult" to interpret.

The equation for i(x+delta x/2,t) - i(x-delta x/2,t) is isomorphic to (10) (or (11) below), with the variables v and i interchanged, and likewise the dimensional parameters R and G, and L and C.

           Let's rewrite the neural conduction equation (10), dividing each term by the length increment, and assuming that the length increment is so small that we can approximate the gradient of voltage per unit length as a derivative.    In (11) we substitute the word "length" for delta x.    This substitution is a useful notational step, perhaps most useful because it helps us to think, without our reflexes engaged, about what the notion of length (or length^2, length^3, etc.) may mean in the terms of this equation.

(10) may be "simplified", according to common arguments, to (12)

In current usage, Maxwell's second assumption-method seems to "define" all these "difficult" terms below the first line in (10), (11) or (12) (and then seems to define them out of existence).    The definition is never tested, because these terms are dismissed by a limiting argument that all trained analysts now trust but do not examine.    (Group consensus and truth are not the same.)    All the terms in the curly brackets are thought to be "infinitesimal in the limit as delta x goes to 0" and are discarded (usually they are never written).    These terms are dismissed on the basis of a limiting argument that ASSUMES that they are DEFINED at finite scales according to Maxwell's second method.   They are not consistently defined according to Maxwell's second method, as has been assumed: the assumption that they are well defined leads to contradictions.

          When these cross terms are consistently defined, they become easy to interpret. To show this, we'll start by showing how the assumptions of Maxwell's second method fail logically and numerically.

          Difficulties with dismissal of these terms on the basis of meaning are set out elsewhere(9).    However, even setting these aside, the dismissal fails several closure tests.
(Closure tests are standard tests all through measurement theory.    One goes around a sequence that should be a cycle.    If one is not exactly where one began after a "cycle," that "cycle" is defective.    It has failed a closure test.    Surveying offers a common example.    If a surveyor measures around a mountain (or a lot) and comes back to a particular stake, the measured position of that stake "around the loop" must match, within measurement error, the position measured for that stake before going around the loop.    Kirchhoff's laws in electricity are closure definitions useful as laws.)

           Here is a cycle that should close, but that does not.    At a finite scale, before taking the limit, the terms below the first line of equations like (10) or (11) are supposed to represent finite quantities.    Yet when we as a culture take the limit as delta x goes to zero, these crossterms are infinitesimal (0), and are discarded in the conventionally derived differential equation.   Now, let's take that differential equation, which has the "infinitesimal" terms stripped away.   Let's integrate it back up to a specific scale delta x.     We get an equation that lacks the crossterms that we know existed at scale delta x in the first place.    We should have closure, and do not.    When one or more of the dimensional parameters R, L, G, and C are large enough, the numerical value of this contradiction can be large.

           In addition, the expressions in curly brackets in (10) and (12) fail as representations of a physical circumstance.    First, the terms fail the test of homogeneity.    Different descriptions of the same thing, which should have the same total size, have different sizes.    For instance, consider a curly bracketed expression in (delta x)^2.   If delta x is divided into ten pieces, and the expression is computed for each of the ten subintervals and the results summed, that sum is only 1/10 of the expression computed for the same interval delta x taken in one step.    Depending on how many subintervals of delta x we choose, we can vary the numerical size of the curly bracketed expression over a wide range for one single and unchanged interval scale.    This does not describe physical behavior.
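The subdivision failure just described is easy to check numerically (our sketch; R, C, and delta x are arbitrary illustrative numbers, and the crossterm is read literally as "just numbers"):

```python
def crossterm(R, C, dx):
    """A curly-bracketed crossterm proportional to (delta x)**2,
    evaluated literally, with no type restrictions on the symbols."""
    return R * C * dx ** 2

R, C, dx = 2.0, 3.0, 1.0
whole = crossterm(R, C, dx)                               # one step over dx
parts = sum(crossterm(R, C, dx / 10) for _ in range(10))  # ten sub-steps

# parts is whole / 10 (up to float rounding): the "same" quantity
# shrinks as the interval is subdivided, so the literal reading does
# not describe a fixed physical amount.
```

Choosing n subintervals instead of ten makes the sum whole / n, so the literal value can be pushed arbitrarily close to zero for one unchanged interval.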

            These expressions are also meaningless because they are constructed on the basis of a type-invalid arithmetic.    The loop test below showed us that so clearly that we SAW the inherent error in our culture's arithmetical assumptions, an error that we had been looking at, or looking near, for a long time before.    The loop test then served as a "testbed" that told us what the arithmetical restrictions on the dimensional parameters were.

            We should all know that a "number" or "expression" that can be manipulated by "proper arithmetic" and permissible unit changes so that it has any value at all is meaningless.    Let's look at a simple loop test, analogous to many closure tests in physical logic.    In Fig. 2, an algebraically unsimplified dimensional group that includes products or ratios of dimensional numbers, such as

is set out in meter length units at A.   This quantity is algebraically simplified directly in meter units to produce "expression 1," dealing with the dimensional parameters and increments as "just numbers."   The same physical quantity is also translated from A into a "per meter" basis at C.    The translated quantity at C is then algebraically simplified to D in the same conventional fashion.    The expression at D, expressed in meter length units, is converted to a "per meter" basis to produce "expression 2."    Expression 1 and Expression 2 must be the same, or the calculation is not consistent with itself.    
Quantities like those in the curly brackets of 10-11 consistently fail the loop test.   By choice of unit changes, and enough changes, such "quantities" can be made to have any numerical value at all.    Expressions such as these are meaningless as usually interpreted.
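The idea behind the Fig. 2 loop test can be sketched numerically (our own bookkeeping, with arbitrary values; the "wrong exponent" step below stands in for a notation that treats a crossterm group as just another per-unit-length quantity):

```python
def rescale(value, length_exp, factor):
    """Re-express a dimensional number when lengths are remeasured in a
    unit 1/factor times as large (meter -> centimeter: factor = 100).
    A quantity of net length exponent e scales by factor**e."""
    return value * factor ** length_exp

R, C, dx = 5.0, 3.0, 2.0      # arbitrary per-meter values; dx in meters
e_group = (-1) + (-1) + 2     # net length exponent of R*C*dx**2 is 0

expr1 = R * C * dx ** 2       # Path 1: simplify directly in meter units

# Path 2: convert every factor to centimeters, simplify, convert back
# with the CORRECT net exponent -- the loop closes.
Rc, Cc, dxc = rescale(R, -1, 100), rescale(C, -1, 100), rescale(dx, 1, 100)
expr2 = rescale(Rc * Cc * dxc ** 2, e_group, 0.01)

# Mislabel the group "per unit length" (exponent -1) on the way back,
# and the loop fails by a factor of 100:
expr_bad = rescale(Rc * Cc * dxc ** 2, -1, 0.01)
```

With the exponents tracked honestly, Expression 1 and Expression 2 agree; with the group's net exponent mis-assumed, repeated unit changes can push the "same quantity" to any numerical value at all.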

Here is an example of how our symbolic-numerical conventions "go around the loop."

The loop test fails!

The loop test fails because a standard procedure is flawed.

The result is an irreversible, numerically absurd, but now standard mathematical operation. The difficulty has been deeply buried and hidden in our culture's notation.

           Note that the loop test of Fig. 2 only fails for terms that are now routinely discarded as "infinitesimal" or "infinite" without detailed examination.    Multiplication and division of groups of dimensional numbers (and dimensional parameters) without spatial or temporal increments passes the loop test without difficulty, with numerical values handled arithmetically, and dimensional exponents added and subtracted in the usual way.    Dimensional parameters associated with increments according to the exact pattern that defined the dimensional parameters in the first place also work.

            Problems now arise when we as a culture associate several dimensional parameters together with several increments, where the increments correspond to different physical effects evaluated over the same interval.    These problems arise in notations that we as a culture have never understood at finite scales - we as a culture have no reason for confidence in the notational procedures we've been using.    The procedure we as a culture have used for evaluating such circumstances, as we've notated them, involves a kind of multiple counting that yields perverse results, as the loop test shows.    We want to incorporate a rule that avoids mistakes like this.    The rule needed restricts multiplication and division of dimensional parameters and increments to intensive (point) form, except for circumstances where the dimensional parameters are associated with increments in the exact way that defined them.   It requires us to clarify our concept of increments (of length, area, volume, or time) defined at a point.

! ! ! IMPORTANT ! ! !

           Most of the time, engineers and scientists have been dealing with differential equations (defined at points) rather than finite increment equations (defined over intervals).    When differential equations have been manipulated, the difficulties shown by the loop test have not occurred, because every term in the differential equations has already been in point form.

           However, DERIVATIONS of differential equations from coupled physical circumstances HAVE involved finite interval models, and need to be reexamined to see if terms have been lost.    "Rigorous proofs" of some of our trusted differential equations can no longer be relied on.
                                   ! ! ! ! ! ! ! ! !

           Notated as we as a culture have notated them, and interpreted as we as a culture have interpreted them, entities that represent coupled relations are physically unclear and numerically undefined.    Although we as a culture may notate the interaction of resistance, resistance, and capacitance together in interaction with the measurable di/dt over the interval delta x as

where delta x is a number times a length unit, the loop test shows that this notation, literally interpreted, does not correspond to any consistent numerical value when unit systems are changed, and then changed back. (14) shows a common and long trusted notation that does not reflect limitations on the arithmetic permissible with the dimensional parameters.

            Let's rewrite (14), setting out a notation that makes explicit the problems we need to solve concerning our notion of "length" in this expression:

What well defined notion can we as a culture have of "length" in the limit as scale shrinks to a point?    Surely, no number of length units can suffice as such a well defined notion of length "at a point", because whatever number of length units we choose, we can always pick a smaller number still.    We as a culture can't be correct if we mean that (but in our culture's limiting arguments, that assume an invalid arithmetic, we do mean that.)   We as a culture need a notion of "length" that makes focused sense at a point, because differential equations are defined at points.

If we assume, incorrectly, that the symbols in

and similar expressions are indistinguishable from the abstract numbers we are used to manipulating in our differential equations, then we are faced with a contradiction.

           On the other hand, if we remember that these symbols represent concrete, measurable circumstances, then we face no contradiction.    We do face a conceptual challenge.    We must face the need to reassess some of our assumptions.    We need to recall that more complicated systems often have rules that less complicated systems do not have.    For this reason, we are not OBLIGATED to assume that the rules of arithmetic that apply to these more complicated entities, in their more complicated concrete context, are exactly the same as the arithmetical rules that apply to the simpler system of abstract numbers.    We can determine what the arithmetical rules in the concrete domain are, rather than assume these rules on the basis of some plausible analogy to abstract mathematics.    For a long time, this conceptual challenge was too much for us.    It was not obvious to us that we could question arithmetical usages in the concrete domain.    The loop test above showed us that we had to.    The dimensional parameters are subject to a type restriction that does not exist in the abstract domain.

           Once we realize that we may determine the arithmetical rules of concrete domain mathematics, we face the question: how do we get these symbols and symbol groups to make fully consistent sense in terms of measurement?    We need to consider what consistent sense in terms of measurement means.

Here is James Clerk Maxwell, starting from the first page of the first volume of his A TREATISE ON ELECTRICITY & MAGNETISM: (Dover)


"1.]            Every expression of a Quantity consists of two factors or components.    One of these is the name of a certain known quantity of the same kind as the quantity expressed, which is taken as a standard of reference.    The other component is the number of times the standard is to be taken in order to make up the required quantity. The standard quantity is technically called the Unit, and the number is called the Numerical Value of the quantity.

            "There must be as many different units as there are different kinds of quantities to be measured, but in all dynamical sciences it is possible to define these units in terms of the three fundamental units of LENGTH, TIME, and MASS.

. . .
"2.]            In framing a mathematical system we suppose the fundamental units of length, mass, and time to be given, and deduce all the derivative units from these by the simplest obtainable definitions. . . . The formulae at which we arrive must be such that a person of any nation, by substituting for the different symbols the numerical values of the quantities as measured in his own national units, would arrive at a true result.
   .    .   .   Hence, in all scientific studies it is of the greatest importance to employ units belonging to a properly defined system, and to know the relations of these units to the fundamental units, so that we may be able at once to transform our results from one system to another.

 "          This is most conveniently done by ascertaining the dimensions (Maxwell means the exponent here) of every unit in terms of the three fundamental units. . . . . .
For instance, the scientific unit of volume is always the cube whose side is the unit of length. If the unit of length varies, the unit of volume will vary as its third power . . . . .

"         A knowledge of the dimensions (Maxwell means exponents) of units furnishes a test which ought to be applied to the equations resulting from any lengthened investigation.    The (unit exponents) of every term of such an equation, with respect to each of the three fundamental units, must be the same.    If not, the equation is absurd, and contains some error, as its interpretation would be different according to the arbitrary system of units which we adopt. "

P.W. Bridgman made Maxwell's position more realistic and definite by clarifying the notion of measurement.    Measurement involves operational procedures in a specific, step-by-step context.    The operational procedures are implicit, and logically necessary, but are not expressed in the abstract algebraic symbols themselves, which show only units raised to exponents.

An important point, for Maxwell, for Bridgman, and for anybody else who wants valid equations, is as follows:

ALL the operational procedures must refer only to steps where, for specific units of LENGTH, MASS, and TIME, (call them L, M, and T) the measured quantity (or partial operational procedure involved in the derivation of the quantity) could be reduced to

L^n1 M^n2 T^n3

where n1, n2, and n3 are exponents that may be integers or fractions.   If and only if every measurable in THE ENTIRE SYSTEM OF UNITS is in this form, then the choice of particular length scale, mass unit, and time unit is arbitrary.
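Maxwell's homogeneity test, quoted above, is mechanical enough to automate. A minimal sketch (ours) represents each term of an equation by its (n1, n2, n3) exponent tuple over LENGTH, MASS, and TIME:

```python
def homogeneous(term_exponents):
    """Maxwell's test: an equation is dimensionally sound only if every
    term carries the same (LENGTH, MASS, TIME) exponent tuple."""
    return len(set(term_exponents)) <= 1

# Example: F = m*a.  Both sides reduce to L**1 M**1 T**-2.
force = (1, 1, -2)
mass_times_accel = (1, 1, -2)
print(homogeneous([force, mass_times_accel]))  # -> True

# Adding a velocity-like term (L**1 M**0 T**-1) makes the equation
# "absurd" in Maxwell's sense:
print(homogeneous([force, (1, 0, -1)]))  # -> False
```

An equation that fails this check would take different numerical values under different arbitrary unit systems, which is exactly Maxwell's criterion for rejecting it.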

            The requirement that the ENTIRE SYSTEM be consistent is a severe one in the case of units that combine mechanical and electrical (and even more, mechanical, electrical, and magnetic) quantities.    There is ONE such system now in common use.    The CONSISTENT SYSTEM now in use is the MKS-Giorgi unit system(10).    (An infinite number of other unit systems would be possible besides the MKS-Giorgi unit system, but it would be a lot of work to construct even one other, because of the coupling between the definitions of length, mass, time, the electrical units, and the magnetic units, and because of the hard work required to reexpress a large number of dimensional parameters in the new system.)

            For the finite increment equations that we derive to make sense, they must make dimensional sense.    EVERY term must be a valid term.    From the loop test, we know that every one of the terms including curly bracketed crossterms below is arithmetically undefined when the symbols in it are interpreted by abstract arithmetic.    What interpretation can we apply to the cross effect terms?    Our interpretation must be well defined and consistent with measure theory.

Let's refer to equation (17).    Let's ask a standard question:   "how do we, in a meaningful, fully defined way, take the limit of BOTH SIDES of this equation?"    

           On the left side of equation (17), the limit as delta x approaches 0 is dv/dx.    (That limit is the definition of dv/dx). dv/dx is defined at a point - we may say that it is in point form.

On the right side of equation (17), let's proceed term for term, from left to right

We now have 11 terms, each with a collection of symbols within curly brackets that is UNDEFINED in the limit, without some additional convention or notation.

We want to proceed in the closest POSSIBLE analogy to our standard limiting argument.

            If the curly bracketed terms are to make sense in differential and finite difference form (and we assume they MUST make sense) then the curly bracketed expressions must all simplify to COMPOUND DIMENSIONAL PARAMETERS (that do not change in numerical value or dimensionality from point scale to any finite scale for a homogeneous system.)

            So we need to be able to interpret the curly brackets in terms of a Bridgman operational procedure.    We can do this, but our operational definition requires a length scale of 1 length unit.    We are constrained to the 1 length unit scale by requirements we've touched before, with respect to consistent unit systems.

To interpret any of the curly bracketed terms in equation (8) according to a Bridgman operational measurement procedure, note first that, since the curly brackets involve undefined arithmetic, we can't interpret them directly.   For any of the curly bracketed expressions, we can, however, proceed as follows:

We can proceed, term for term, in this way. Doing so, we come as close as possible to taking a limit, term for term.    In the end, every one of our terms consists of a dimensional parameter (that may be a properly simplified compound dimensional parameter) times a measurable quantity.    Every one of these dimensional parameters, and every one of these measurables, exists in the same compatible system of units.

             There is another interpretation that gets us to the same result in an operationally simpler way.    We don't have to "conceptually remove our groups for a measurement procedure," and we don't have to "take a limit."    Suppose that, rather than working through "an operational measurement procedure" for each term, as described above, and then putting our term at point scale, we substitute a "point form of length" with a value of (1 meter)p for every instance of length.    Suppose we then algebraically simplify, term for term, using the usual arithmetic for dimensional numbers.    Our end result would be at point scale, and would be dimensionally and numerically the same as before.    WE'D HAVE CONCRETE EQUATIONS THAT COULD BE MAPPED INTO DIFFERENTIAL EQUATIONS IN THE ABSTRACT DOMAIN OF THE CALCULUS ON A SYMBOL-FOR-SYMBOL BASIS.

              The logic is as follows.    We are trying to express "length" at a point (or over a differentially small length).    We are trying to express length in intensive (point) form.    R and C are already in intensive (point) form, both numerically and dimensionally.    That is, R and C, both in units that are per unit length, are in a form that is numerically independent of length.   These values of R and C work for any interval, no matter how short, and work at point scale, without numerical change.    di/dt is also in intensive form (defined at a point).    We as a culture need an intensive form for length.    Here is what is needed, expressed in words.    We need to interpret "length" as "the property of length in per unit length units."

              Let's think of what we as a culture already do when we reason from measurement.   Our measurement procedures define things in terms of spatial variables (length, area, volume, time) and other dimensions (voltage, charge, and many others).    The measurements are inherently finite in nature.    Still, we all speak of properties in INTENSIVE FORM, defined for points.    An intensive property or intensive variable has a well-defined value for each point in the system it refers to.    For instance, we speak of "resistance per unit length defined at a point" even though a point has 0 length.    The numerical-scaling argument we use to arrive at intensive properties and variables is simple and nearly reflexive.    To intensify our properties and variables, we say that the property (or variable) at a point is the property (or variable) that we get from a logic of interpolation from a finite scale to finer and finer scales.    The interpolation assumes homogeneity of the model down to a vanishing spatial (and/or temporal) scale.   For example, consider the notion of resistance per unit length.    Let's idealize the wire as a line.    The resistance R expresses voltage drop per unit length, per unit current.    For any interval that includes length, the basic notion of resistance can be directly defined "per unit length."    Resistance per unit length at a point has the same numerical value and the same units as resistance per unit length for some finite increment.    We do not use a limiting process.

For example, we find the value of internal energy per unit mass at a point for a homogeneous system by dividing the internal energy of the system by the mass of the system.    We do not use a limiting process.    Other properties can be defined in similar ways "per unit area" or "per unit length" over finite areas, or finite volumes.    But
the notion of "length (or area, or volume) at a point" is an abstraction. This extremely useful abstraction is much older(11) than some of our rigorous calculus formality(12).    In thermodynamics and elsewhere, we as a culture don't intensify our extensive variables by a calculus argument of any kind.    We just assume that the property we're considering is homogeneous.    Then we write our intensive variables directly.

The abstract notion of length or area or volume "at a point" is already embedded in many of the intensive properties in common use.  Using meter and second units, the intensive forms (point forms) of length, area, volume and time are:

Length at a point: { 1 (length/length) (length unit) }          in meter units: (1 meter)p
Area at a point: { 1 (area/area) ((length unit)^2) }              in meter units: (1 meter^2)p
Volume at a point: { 1 (volume/volume) ((length unit)^3) }     in meter units: (1 meter^3)p

An instant in time:   { 1 (time/time) (time unit) }    in second time units: (1 second)p

The dimensions of length, area, and volume are length to the first, second, and third power respectively.    The coefficients are the identity operator 1 because, for even the smallest imaginable numerical values of length l, area a, volume v, or time t:

     l/l = 1      a/a = 1      v/v = 1  and      t/t = 1

We can rewrite (10) as

Substituting the intensive form of length into (10) or (12) in place of delta x, we may algebraically simplify the bracketed expressions in the equation(s).    This separates R, L, G, and C into numerical parts (Rn, Ln, Cn, and Gn) that are algebraically simplified together, and unit groups that are algebraically simplified together.   We'll choose MKS-Giorgi units, with v as volts, Q as coulombs, and t as time in seconds.    We get:

We could have continued the expansion process that produced (10) and (11) and gotten more terms if we had wished to do so.

The di/dx equation analogous to (19) is

            Each term consists of one (compound) dimensional parameter times a measurable.    These differential equations, when integrated over a length delta x, reconstruct the values that apply to that interval, with no lost terms.    Every term in these differential equations passes the loop test.    We may map these differential equations symbol-for-symbol into corresponding partial differential equations.    We may map these differential (or corresponding partial differential) equations symbol-for-symbol into the domain of the algebra.    These equations are different equations from the Kelvin-Rall equations now used in neurophysiology.    They contain the Kelvin-Rall terms, but have other terms as well.

             An important difference is the effective inductance term.    For unmyelinated axons and dendrites in the neural range of sizes, the numerical magnitude of R^2C/4 is between 10^12 and 10^21 times larger than L, depending on dendrite diameter and other variables.    This term, which is much too small to measure in large scale electrical engineering, is a dominant and practically important term at neural scales in neural tissue.
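The size claim can be sketched with order-of-magnitude textbook values for an unmyelinated fiber. All numbers below are illustrative assumptions of ours (axoplasm resistivity about 1 ohm*m, specific membrane capacitance about 0.01 F/m^2, a 1 micron diameter, and a generously large conventional inductance of about 1 microhenry per meter); the comparison follows the text's intensive-form convention, under which the leftover length factor in R^2C/4 is set to (1 meter)p so the two numbers can be compared directly.

```python
import math

rho = 1.0        # axoplasm resistivity, ohm*m (assumed)
c_m = 0.01       # specific membrane capacitance, F/m**2 (assumed)
a = 0.5e-6       # fiber radius in meters (1 micron diameter, assumed)
L = 1e-6         # conventional inductance per length, H/m (assumed, generous)

R = rho / (math.pi * a ** 2)   # axial resistance per length, ohm/m
C = c_m * 2 * math.pi * a      # membrane capacitance per length, F/m

effective = R ** 2 * C / 4     # the term claimed to act as an inductance
ratio = effective / L          # enormous at neural scale: here about 1e22
```

With these particular assumptions the ratio lands at, and slightly above, the top of the 10^12 to 10^21 range quoted in the text; different assumed diameters and constants move it through that range.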

            Many terms now thought to be "infinities" are also finite terms when they are correctly interpreted in intensive form.

            Physical domains that include dimensional parameters that represent measurable circumstances, differ from the domain of the algebra.    Unless we know this, we can discard important terms, and delude ourselves, or form false infinities, and delude ourselves.

            Here again is the procedure by which we can map a defined circumstance from the real, dimensional world into a concrete model including equations; map that concrete model into abstract equations; and, after abstract manipulations map the results of our abstract analysis back into a concrete model that may be tested against results in the world.


              A concrete physical model directly connected to the details of measurement must be a finite scale model.    The finite scale requirement occurs for two related reasons.    First, geometrical details of specification involve lines and space-filling geometries that become degenerate at the point scale of a differential equation.    Second, ALL our measurement procedures are finite scale procedures.    No real measurements occur instantaneously, or at a true geometrical point.    For these reasons, equations representing a concrete physical model in step-by-step detail must be finite increment equations.   The measurements must be expressed in a system of consistent units(4).

A (necessarily finite) concrete physical model must be completely specified before concrete finite increment equations can be derived from that model.

From this physical model specification, one or more concrete finite increment equations can be derived setting out the quantitative relations of the model.

             Interpretation of finite increment equations has had the following hidden problem, which has resulted in mismapping and has produced invalid differential equations.    In order to interpret ALL the terms in concrete finite increment model equations in a manner consistent with measurement theory, we must know the following NEW information:

NEW INFORMATION.    Dimensional parameters are the entities that express concrete physical modeling laws in measurable form. The dimensional parameters are not just numbers.

Arithmetic with a dimensional parameter in the measurable domain is only defined when it corresponds EXACTLY with the definition of that dimensional parameter, or when the dimensional parameter is interpreted as part of a group operational measurement procedure performed at unit scale for the spatial variables in the group.

(An alternative statement is that, for equations in gradient form that are otherwise ready to be reduced to derivatives, groups of dimensional parameters and spatial variables must be algebraically simplified at POINT SCALE.    The point forms(6) of the spatial variables, in MKS units, are

length at a point: (1 meter)_p             area at a point: (1 meter^2)_p

volume at a point: (1 meter^3)_p           a point in time: (1 second)_p )

           After this group interpretation is done, one has concrete finite increment model equations that are reduced into measure-theory consistent form.    THESE CONCRETE EQUATIONS CAN NOW BE MAPPED INTO DIFFERENTIAL EQUATIONS THAT EXIST IN THE ABSTRACT DOMAIN OF THE CALCULUS USING THE CONVENTIONAL LIMITING PROCESS.

Calculation according to the rules of analysis can then be validly done, on these valid differential equations.

If a result of analysis is to be applied to the measurable model from which it traces, groups in terms that act as if they are dimensional parameters in the domain of calculus can be interpreted as constructively derived dimensional parameters in the measurable model system. Once this interpretation is made, the calculus model may be mapped back into the measurable model on a term for term basis, for use or checking.

We have faced the following question repeatedly:

"If this new procedure, with its new interpretation of crossterms, is right, wouldn't that invalidate many results that everyone has good reason to trust?"

The answer is no.

             We've been dealing with the conduction line equations.    We've derived a new conduction equation, with new terms.    In the cases where the old derivation works well empirically, the new procedure works just as well.    The new terms are too small to matter.    In the case of neural function, where the old derivation fails on many counts, the new derivation has the properties needed to describe behavior(13).    The same equation, with the same terms, has enormously different properties depending on what the numerical values of the dimensional parameters in it happen to be.    Here is a partial expansion of the dv/dx line conduction equation:

R is resistance per length, L is inductance per length, G is membrane conductance per length, and C is capacitance per length.

           For many values of R-L-G-C, all the terms in (1) that include products of R, L, G, and C will be very small.    However, some of these crossproduct terms can be enormous for other values of R, L, G, and C.    Comparison of a wire case and a neural dendrite case shows the contrast in values that can occur.

For a 1 mm copper wire with ordinary insulation and placement, typical values of the dimensional parameters would be:

R = 1.4 x 10^-3 ohm/meter        C = 3.14 x 10^-7 farads/meter

G = 3.14 x 10^-8 mho/meter       L = 5 x 10^-7 henries/meter

Here is (19) with the numerical value of terms set out below the symbolic expression of the terms:

Here are the corresponding dimensional parameter values for a 1 micron diameter neural dendrite, assuming accepted values of axolemma conductivity, capacitance per membrane area, and inductance per unit length (volume resistivity 1.1 ohm-meter, g = 0.318 mho/meter^2, c = 10^-2 farads/meter^2).

R = 1.4 x 10^12 ohm/meter        L = 5 x 10^-7 henries/meter

C = 3.14 x 10^-8 farads/meter    G = 3.14 x 10^-6 mho/meter

Note that R is 10^15 times larger than in the previous case of the wire.

For a neuron situated as real neurons are, surrounded by a glial cell and a fluid cleft, C would be about two orders of magnitude lower.   Even taking this into account, for the same equation, cross product terms that are trivial in the wire case are dominant in the neural dendrite case.

             For the wire case, the numerical values of the primary terms (the numerical values of the dimensional parameters R, L, G, and C) are compared with the numerical values of the numerically most important cross product terms in the voltage gradient transmission equation below.

For this copper wire case, none of the cross product terms are large enough to attend to in a practical modeling equation. The sensible formula to use for the wire values of R, L, G, and C is the same one the limiting procedure would produce:
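For reference, the formula that the conventional limiting procedure produces for a uniform line in this notation is the standard telegrapher's equation. Whether this matches the display lost from the text at this point is our assumption, but it is the conventional result for a line with these four dimensional parameters:

```latex
\frac{\partial^2 v}{\partial x^2}
  = LC\,\frac{\partial^2 v}{\partial t^2}
  + (RC + LG)\,\frac{\partial v}{\partial t}
  + RG\,v
```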

In the neural dendrite case, some of the crossterms that were trivial in the wire case become DOMINANT terms.    Magnetic inductance, L, has itself become too small to include, because for these values of R, L, G, and C the R^2 C cross product term is 3 x 10^22 times bigger than L, and an important modeling term.
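The magnitudes quoted here can be spot-checked by direct arithmetic. In the sketch below, the 1/4 factor attached to the R^2 C cross product is our assumption, made because it reproduces the quoted ratio of 3 x 10^22 (it matches the 1/4 in the R^2 G/4 damping term discussed later); the parameter values are the ones given in the text.

```python
# Spot check of crossterm magnitudes for the wire and dendrite parameter
# sets quoted in the text (MKS units throughout). The 1/4 factor on the
# R^2*C cross product is our assumption, chosen to reproduce the quoted
# 3 x 10^22 ratio; it is not stated explicitly in the text.

wire     = dict(R=1.4e-3, L=5e-7, G=3.14e-8, C=3.14e-7)
dendrite = dict(R=1.4e12, L=5e-7, G=3.14e-6, C=3.14e-8)

def crossterm_vs_L(p):
    """Ratio of the (assumed) R^2*C/4 cross product to the bare inductance L."""
    return (p["R"] ** 2) * p["C"] / 4 / p["L"]

print("wire    :", crossterm_vs_L(wire))       # about 3 x 10^-7: negligible
print("dendrite:", crossterm_vs_L(dendrite))   # about 3 x 10^22: dominant
print("R ratio :", dendrite["R"] / wire["R"])  # the 10^15 quoted in the text
```

The same formula gives a ratio that is utterly negligible for the wire values and enormous for the dendrite values, which is the contrast the text describes.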

           These finite crossterms make the difference between the brain as a high Q system with sharply switched resonant communication and information storage, and the brain as an overdamped system without resonance, which appears incapable of any significant information processing at all because phase distortion in the Kelvin-Rall model is prohibitively large.    This makes the difference between a role for inductance in ventricular fibrillation and no role for inductance in ventricular fibrillation.    This makes the difference between epilepsy as a resonant phenomenon, and no role for resonant coupling and resonant neural destruction in epilepsy.    This makes the difference between no plausible memory models that can handle complex information, and switched resonance memory models that appear to be able to handle the information brains do handle.    That is to say, the difference between the limiting procedure for deriving differential equations and the intensification procedure involves matters of real neuroscientific interest, including some plain matters of life and death.

(This is worth checking, worth fighting about, worth getting right.)

           Current neurological modeling of dendrites uses the same equation that models the wire, radically understating effective inductance.    The sensible modeling formula for the dendrite involves the following quite different choice of terms.    (Note that this equation is a selection of the important terms among others; it is an approximation that fits reality well enough in a particular regime of R-L-G-C.)

The effective inductance is 3 x 10^22 times larger than would be predicted in the Kelvin-Rall equation.    The effective inductance term goes as the inverse cube of line diameter; for a 0.1 micron diameter dendritic spine, effective inductance would be 1000 times larger still.    Lines that have been thought devoid of inductance, and incapable of inductive effects such as resonance, have very large effective inductance.
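The inverse-cube scaling follows from simple geometry, under the assumptions (ours, but consistent with the parameter values quoted above) that R per length scales as the inverse cross-sectional area, 1/d^2, and that C per length, being a membrane property, scales as the circumference, d, so that R^2 C scales as 1/d^3. A sketch:

```python
# The effective inductance term goes as R^2 * C. Assuming R ~ 1/d^2 (axial
# resistance per length, inverse cross-sectional area) and C ~ d (membrane
# capacitance per length, proportional to circumference), R^2 * C scales as
# d / d^4 = 1/d^3. The volume resistivity of about 1.1 ohm-meter is our
# assumption; it reproduces the quoted R of 1.4 x 10^12 ohm/meter.

import math

def r_per_length(d, rho=1.1):
    """Axial resistance per meter of a cylinder of diameter d (ohm/m)."""
    return rho / (math.pi * (d / 2) ** 2)

def c_per_length(d, c_membrane=1e-2):
    """Membrane capacitance per meter (farad/m): circumference times c."""
    return c_membrane * math.pi * d

def effective_inductance_term(d):
    return r_per_length(d) ** 2 * c_per_length(d)

ratio = effective_inductance_term(0.1e-6) / effective_inductance_term(1e-6)
print(ratio)  # approximately 1000: the thousandfold increase quoted above
```

Going from a 1 micron dendrite to a 0.1 micron spine multiplies the term by 10^3, the thousandfold factor quoted in the text.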

        Note also that the damping effect normally produced by R is now mostly produced by an R^2 G/4 term.    Changing G (by opening or closing membrane channels) could switch such a neural system from a strongly damped state to a highly resonant state.

           The notion that solutions are parameter dependent is well established in some fields, for example viscous fluid flow.    That notion applies to these new terms.    In the neural transmission case and elsewhere, crossterms now dismissed as infinitesimals are finite, and some are large.    Electrical measurements testing the conduction equations have been carefully and accurately done, over a limited range of parametric values.    They have been "experimentally perfect" over that range.    Nevertheless, too large an extrapolation beyond that tested range of parametric values can be treacherous.    We say that this kind of extrapolation has been treacherous in neurophysiology.    The dimensional parameters that we as a culture must all use in our physical representations operate with type-restricted arithmetic rules.    Terms that we have thought to be zeros are finite.    We must learn to take this into account when we extrapolate an equation that may be "a perfect match to experiments" in one range of parameters into some very different range of parameters.    "Perfect fit" to experiments in one range of parameters does not ensure even a good fit in some other parametric range.

M. Robert Showalter           Madison, Wi. USA
Stephen Jay Kline                 Stanford, Ca. USA


        Here is the letter from NATURE in response to a very long and unconventional submission that we sent them.    We sent that submission to the most elite academic journal we knew, hoping to get their help in securing checking of the intensification process.    NATURE did not give us the help we asked for.    Perhaps they were right not to do so.    They did send the following letter, which was a kindness.



Here is the text:

Letter from Karl Ziemelis,

Physical Science Editor

NATURE dated 11 April 1997

Dr. M.R. Showalter

Department of Curriculum and Instruction

School of Education

7205B Old Sauk Road

Madison, Wi 53717

Dear Dr. Showalter,

           Thank you for your letter of 20 February and for your seven linked submissions.    I apologize for the unusual length of time that it has taken us to get back to you, but please understand that the sheer volume of interrelated material that you submitted took us rather longer than we had hoped to read and digest.    This delay is all the more regrettable as the work is not in a form that we can offer to consider for publication.

           As you already clearly realize, the space available in the journal clearly poses a fundamental problem.    In my letter of 31 October 1994, I had hoped to explain what we look for in a NATURE paper - in essence, we do not see ourselves as an outlet for exhaustive presentations of scientific problems (regardless of their potential importance), but as a forum for presenting brief and focused reports of outstanding scientific and/or technological merit.     An additional, but equally important, characteristic of NATURE papers is that they should be self-contained: sub-dividing an extensive body of work into numerous (but intimately linked) "Nature-length" contributions is simply not a realistic option.

            You are clearly appreciative of this fact, in that your stated intention is not so much to dominate our pages with your submissions, but to seek peer review on the work in its entirety, and defer until later the decision about what form any published form might take.    This is not, however, a service that we could realistically offer - quite aside from the fact that it would be placing an unrealistic burden on our referees, we aim to send out for peer review only those papers that we judge to be appropriate (at least in principle) for publication in NATURE, in accordance with the criteria outlined above.     

            This is not to deny that within your seven manuscripts there may be the essence of a NATURE paper; but given the time constraints under which we work, the onus must be on the authors, rather than on the referees and editors, to construct such a paper from the vast amount of material supplied.   But I have to say that the need for extensive cross-referencing apparent in the present manuscripts suggests to us that the likelihood that such a paper would be readily forthcoming is not too high.    It is therefore our feeling that your interests would be better served by pursuing publication of the work in a more specialized journal having the space to accommodate the lengthy exposition that your work so clearly requires.

           Although it is sadly the case that some studies simply do not lend themselves to the NATURE format, this need not mean that our readers are left in the dark about the latest developments.    As you know, we frequently discuss such work in the context of our News and Views section, and if you were to send us preprints of your present papers when they are finally accepted elsewhere for publication, we could explore the possibility of doing likewise with your work.

          Once again, I am very sorry that our decision must be negative on this occasion, but I hope and trust that you will rapidly receive a more favorable response elsewhere.

Yours sincerely,

Karl Ziemelis

Physical Science Editor

This is a rejection from NATURE, not an acceptance.   The favorable language in it stands for much less than a conventional peer review.    Even so, we believe the letter does tend to support the view that our work is plausible enough, and important enough, to be worth checking.


Some background on Stephen J. Kline and M. Robert Showalter

           We have each been thinking about the interface between measurement and modeling for much of our professional lives, and have been thinking about this interface, together, for more than a decade.    Showalter has been more focused on the math-physics interface set out here, and issues related to it.    Steve Kline is the more physical of the two of us, the more keyed to images.    Showalter is the more concerned with formality and logic.    We share an interest in the history of ideas of modeling and analysis, and a sense of modeling and analysis as a body of human patterns and assumptions that exist, and may be improved, in a historical context. The point of departure of our work together is Steve's SIMILITUDE AND APPROXIMATION THEORY (McGraw-Hill 1964, Springer-Verlag 1986).    Kline has invested several thousand hours in our work together, much of it supervising and disciplining Showalter, and making sure that the work stayed physical, and stayed connected with known problems.    Showalter has put in much more time than Kline.

Stephen Jay Kline is Clarence J. and Patricia R. Woodard Professor, Emeritus, of Mechanical Engineering at Stanford University, and also Professor of Values, Technology, Science, and Society at Stanford.    He is author of more than 170 papers and two books, and editor of six other volumes.    Kline was the founding Chairman of the Thermosciences Division in Stanford's Department of Mechanical Engineering.    He is also one of the four founding faculty members in Stanford's program in Science, Technology, and Society.    Most of his technical publications are in fluid mechanics, and report results of experiments, new forms of analysis, computation, and the description of new instrumental methods.    In 1996 the Japanese Society of Mechanical Engineers (JSME) decided to ask the outstanding contributors to various fields how they had pursued their work.    They designated Kline as the most important contributor to fluid mechanics in the 20th century.    While some other worker might have been reasonably chosen, Kline's distinction in experimental and computational fluid mechanics is undoubted.

Stephen Jay Kline has received the following honors and awards:

ASME Melville Medal for best paper of the year 1959.

ASME Centennial Award for career contributions to fluid mechanics

George Stephenson Prize of the British Institution of Mechanical Engineers

Golden Eagle award from U.S. CINE and Bucraino Medal (Italian) for the film "Flow Visualization" (30 minute educational film).

Gold Medal of the Chinese Aero/Astro Society

Member, National Academy of Engineering

Honorary Member, ASME (the highest honor ASME gives)

Fellow, American Association for the Advancement of Science

Books by Stephen J. Kline

SIMILITUDE AND APPROXIMATION THEORY (McGraw-Hill, 1964; Springer-Verlag, 1986)


For more on S.J. Kline, see klinelink

M. Robert Showalter is a graduate of Cornell University.    Rather than undertake an academic career, Showalter set out to make "analytical invention" a possibility and to make "analytical engineering" more efficient.    He was interested in questions like "How do you define and design an optimal structure in a fully specified, complicated, fully commercial circumstance?"    For instance, suppose an airplane design needs a wing, to mount an engine and connect to a specific fuselage.   How do you arrive at a FULLY optimized design for that wing, in a case of real technical complexity, with "optimal" a defensible word in terms of all the technical considerations that pertain at all the levels that matter in the case (aerodynamics, structure, fabrication, maintenance, cost)?    How do you even approach such a job?    To make sense of such a job would require superposition, coupling together, and repeated solution of sometimes complicated systems of equations.    It would require new techniques of specification, so that system problems could be packaged in ways that could be isolated for analysis, and then reinserted into context.    It would require sharp identification of "linchpin" problems that, if solved, would enlarge the realm of the possible.    It would require solution of some "linchpin" problems concerning mathematical technique.    Showalter found himself forced to attend to such issues of mathematical technique.    For his adult life, he has believed, with Kline, that the "miracle of modeling and analysis" is not how good mathematical modeling sometimes is, but how seldom useful, and how mysteriously treacherous, analytical modeling is when it is applied to problems complicated enough to be of commercial interest.    Although it may be "a poor craftsman who blames his tools" Showalter has long had a focused interest in finding and fixing the defects in our culture's analytical tools.

           In the course of this work, Showalter learned technical fear, a sense of the limitations of his own mind compared to the complexities of the world, a strong sense of the necessity for specification and clear definition, immunity to the notion that any person or group had to be right about anything, and a belief that the most sophisticated and usable patterns for technical description and technical thinking had been evolved, by tens of thousands of smart and motivated people over many years, in patent practice.    He came to believe, and still believes, that specification for optimization, and specification for clear theorizing under defined circumstances, is best done building on these patent usages.    These patent usages involve pictures, step-by-step specifications related to the pictures, and definitions, called claims in patent practice, that define the subject being treated in sharply delineated natural language.    Mathematics that fits in a physical context, Showalter believes, should be clearly connected to that context as set out in a picture-specification-definition format.    He believes that the conveniences of abstraction must link clearly to this sort of specification and concreteness if a mathematical result is to be applied, and applied consistently, and applied by human beings.

           Showalter spent some time, with the support of investors, as an "analytical inventor," working to optimally redesign internal combustion engines.    Kline worked with him on this project for several years, about half time.    Kline and Showalter became friends as well as coworkers during this period.    During this same period, a former Vice President of Engineering of Ford Motor Company worked about half time on Showalter's project.    Showalter has 23 U.S. patents.    He achieved some "choreographed" control of mixing and combustion related flows, and got much reduced emissions, and significantly improved fuel economy, by doing so.    The work was intended to meet the most rigorous EPA emission standard, including the .4 gram/mile NOx standard.    That standard was rescinded and never enforced.    After the emissions work, in an effort to salvage his enterprise, Showalter worked out ways that appeared to have the practical potential to reduce engine friction about tenfold.    However, he could not achieve commercial ring oil control on the friction reduction package, and that oil control was key to all the rest.    It was not a hit or miss problem: he had to model coupled elastic and hydrodynamic equations on the ring design to keep track of ring oil flows.    The coupling terms that had to be accounted for were all mismodeled as "infinitesimal" by limiting arguments that, Showalter now knows, did not accommodate the arithmetical limitations of the dimensional parameters.    Perturbation approaches could not do this job.    Showalter's project failed, and he failed physically for a time.

           The project might not have failed, and would not have failed as it did, if Showalter had not had epileptic difficulties that incapacitated him for several years.    One inconvenience during this period was loss of the ability to read.    Some other skills were lost, too.    Showalter found the work required to regain these skills hard but interesting.    Showalter earned a Professional Engineer's designation toward the end of this period.    During this period, Showalter gained a steady interest in brain function, in health and disease, and became convinced that current brain models, such as they were, were in gross error.

           In 1988, Showalter enrolled in the Department of Education, at U.W. Madison, because of an interest in how people learn to read, and an interest in other issues in learning as a developmental process.    He was interested, from the first, in neural sciences as well as education, and in the interfaces between these fields.    Although some neural scientists took an interest in him, he could not, as a practical matter, enroll in the neural science program because he believed that the Kelvin-Rall transmission equation was grossly wrong, and Kelvin-Rall is a well-enforced article of faith in the neural sciences.    By about 1990, Showalter, with Kline's help, had identified the correct neural transmission equation, with effective inductances 10^10-10^19 greater than that of Kelvin-Rall, and fit it to David Regan's data, and to other data.    Showalter became convinced that nothing realistic or useful in brain modelling could be done, by himself or by anyone else, until the error in the neural transmission equation was found and fixed.    Therefore, as a modeler, he had nothing he could do but keep at the problem, with Kline's help, until it cracked.

           Showalter has long believed that many questions about development and learning that are important in education hinge on the same neural transmission issues that matter in neural science and medicine.

           Especially since 1992, Showalter has been working nearly full time, with Kline's help, to find the defect at the interface between measurement and analysis that he and Kline knew must be there.    Together, we have found that defect, that traces to the arithmetical restrictions on the dimensional parameters.    We report the fix to this modeling difficulty in this transmission.

           In Showalter's view, the most serious mathematical impediment to analytical engineering, analytical invention, and analytical modeling applied to science that has existed in the past is this interface defect that has caused misspecification of differential equations.    That defect, pending checking, has now been found and fixed.    He hopes to put the new tool to use in neural modelling, the study of efficient and comfortable teaching, and elsewhere.


1. DEFINITION: A dimensional parameter is a "dimensional transform ratio number" that relates two measurable functions numerically and dimensionally. The dimensional parameter is defined by measurements (or "hypothetical measurements") of two related measurable functions A and B. The dimensional parameter is the algebraically simplified expression of {A/B} as defined in A = {A/B} B. The dimensional parameter is a transform relation from one dimensional system to another. The dimensional parameter is also a numerical constant of proportionality between A and B (a parameter of the system). The dimensional parameter is valid within specific domains of definition of A and B that are operationally defined by measurement procedures.

2. See the section devoted to the dimensional parameters in the main text.


3. Feynman, R.P., Leighton, R.B., & Sands, M. THE FEYNMAN LECTURES ON PHYSICS, V. 2, Table 18-1. Addison-Wesley, 1964.

4. An equation is in consistent units if it can be expressed, term for term, in any unit system consisting of the same system of operational measurement procedures and different units of length, mass, and time raised to exponents n, m, p: L^n M^m t^p.

5. The analytical foundations of measurement theory related to the requirement here were set out by J.C. Maxwell in sections 1] - 6] of A TREATISE ON ELECTRICITY & MAGNETISM v.1 Dover. p 1-6. Maxwell's algebraic examples must be interpreted in light of Bridgman's emphasis that the units of length, mass, and time in the algebra exist in the tightly specified context of detailed operational measurement procedures. These procedures are typically complicated enough to require specification by a sketch and a short verbal instruction. ALL members of a set of consistent unit systems have EXACTLY the same operational procedures, and apply different units of length, mass, and time to these consistent operational procedures.
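The consistency requirement in footnote 4 can be checked mechanically: give every term its exponent triple (n, m, p) of L^n M^m t^p and demand that all terms in an equation carry the same triple. The sketch below is ours, and the kinematic example equation is an illustration, not drawn from the papers.

```python
# Minimal dimensional bookkeeping: a dimension is a triple of exponents
# (n, m, p) standing for L^n M^m t^p. An equation is in consistent units
# only if every term carries the same triple.

def dim_mul(a, b):
    """Dimensions of a product: exponents add."""
    return tuple(x + y for x, y in zip(a, b))

def consistent(terms):
    """True if all terms share one (n, m, p) exponent triple."""
    return len(set(terms)) == 1

LENGTH   = (1, 0, 0)
TIME     = (0, 0, 1)
VELOCITY = (1, 0, -1)
ACCEL    = (1, 0, -2)

# Illustration: x = v0*t + (1/2)*a*t^2. Both right-hand terms must
# reduce to LENGTH for the equation to be consistent.
term1 = dim_mul(VELOCITY, TIME)               # v0 * t
term2 = dim_mul(ACCEL, dim_mul(TIME, TIME))   # a * t^2
print(consistent([LENGTH, term1, term2]))     # True: consistent equation
print(consistent([LENGTH, VELOCITY]))         # False: mismatched dimensions
```

A term-for-term check of this kind survives any rescaling of the base units, which is exactly the property footnote 4 demands of a consistent unit system.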

6. The p subscript is for labeling, and does not affect the arithmetical function of the numerical and dimensional parts of these point forms.

7. Maxwell, J.C. A TREATISE ON ELECTRICITY & MAGNETISM, V. 2, Dover Press, modification from the 3rd edition, 1891, twelve years after Maxwell's death, pp. 199-200.

8. See Kline's Stanford Web page, and its discussion of the work on streak formation, and the change in paradigm that it involved.

9. Submission A. Showalter, M.R., Kline, S.J. Modelling of physical systems according to Maxwell's First Method.

10. (For our purposes, we can call this the MKS-volt-coulomb system.) If you look in Rojansky's ELECTROMAGNETIC FIELDS AND WAVES (Dover) you'll see tables inside the front and back covers that give a nice sense of how constrained the choice of dimensional system is when mechanical, electrical, and magnetic units are combined.

11. In PRINCIPIA MATHEMATICA (1687) Book 2, following prop XL, Isaac Newton discusses the propagation of sound. He employs two numbers that moderns would call "dimensional parameters" in his treatment. The first is mass of air per unit volume at a point. The second is compressibility of air per unit volume at a point. These dimensional entities are only experimentally definable in finite terms, but they are set out in intensive (point) form. Numerically and dimensionally, the intensive and extensive form of these numbers is the same.

12. Compare Newton in the 1680's versus the work of Weierstrass and his school in the 1870's, set out in H. Poincare, "L'oeuvre mathematique de Weierstrass," Acta Mathematica, XXII, 1898-1899, pp. 1-18.

13. Showalter, M.R. submissions E, and H.