## An error has been made at the interface between the measurable world and our culture's equation-representations. Our culture's limiting arguments have been applied to invalid terms, and as a result terms have been mislabeled as 0's or as infinities.

### M. Robert Showalter          Stephen J. Kline

U. of Wisconsin, Madison                       Stanford University

BACKGROUND TO THIS DISCLOSURE:

#### IN FORMAL TERMS:

We show, in symbols and by numerical example, that our culture's conventional limiting arguments, which are essential parts of our mathematical modeling procedures, are now degenerate because they apply to terms that are not defined.   Terms are not defined because they include groups (of dimensional parameters(1) and spatial increments) that are not defined.  Terms that include these groups must be properly interpreted and defined in terms of dimensional theory before the conventional limiting process can be validly applied.    When this is done, these terms, which are now dismissed as infinitesimals or labeled as infinities, are finite.

#### More specifically:

When we derive DIFFERENTIAL EQUATIONS from PHYSICAL models, we start by setting up FINITE INCREMENT EQUATIONS that represent the model completely (or, if the equations involve some recursive relation, we represent the model to some specified degree of completeness.)

Before we can use these FINITE INCREMENT EQUATIONS we must know what they mean, term for term.   We must be able to represent EVERY TERM by a specific measurement procedure. (EVERY TERM must make sense in terms of measurement theory.)

ONLY THEN can we take the limit of the FINITE INCREMENT EQUATION and reduce it to a VALID DIFFERENTIAL EQUATION.

### "Scientists must constantly remind themselves that the map is not the territory, that the models might not be capturing the essence of the problem, and that the assumptions built into a simulation might be wrong. "

#### We have been working on more detailed and complete answers to the "BIG QUESTIONS OF SIMULATION" for a long time.    We've been building on simulation procedures that engineers have used for a long time.    We've been working to clarify and extend these procedures for the following practical reasons:

##### We need to study problems that seem "perfectly simulated" at the level of the "faith-based" solution patterns, where simulations have not been working.    Fluid mechanics offers a wealth of such examples.    A very important body of examples occurs in neural modeling, where we as a culture can make almost no concrete sense of information processing in the brain.

We have come to feel that our core simulation problems involve the DERIVATION of differential equations.

### Here is a MUCH MORE DETAILED step-by-step answer to the BIG QUESTIONS OF SIMULATION.    We explain how it is possible to map a concrete model of some aspect of the world, directly interpretable in terms of measurement, into the abstract world of mathematical analysis, where models are disconnected from direct definition by measurement.    The application of abstract results to concrete cases is also described. We show how MISMAPPINGS can now occur, and show how these mismappings may be avoided.

#### 2. Complete physical model specification requires a sketch showing the relationships of geometry, variables, and physical law being modeled.

3. Complete physical model specification requires a full set of operational measurement procedures that define the variables and dimensional parameters in the model.

### We have found major answers in neural science based on this procedure.    We are asking that our answers be checked.    We are also asking that the procedure above be discussed seriously enough, and widely enough, so that it may be established or shown wrong.

#### Our new procedure produces some new terms in some COUPLED physical models.    In most technical calculations, judged by measurable standards, the new results are numerically indistinguishable from the old procedure's results.    (In these cases, our procedure generates finite terms that would be strictly 0 according to the old limiting procedure, but these finite terms are VERY small.)    But sometimes these finite terms are large.    In the neurosciences, these terms lead to enormously different conclusions from the ones the conventional process leads to.

The differences between the new procedure and the old one arise from the discovery that the dimensional parameters are not just numbers, but are subject to arithmetic restrictions (or, and this is the same thing, must be interpreted in terms of measurement theory(5)).    The requirements of measurement theory make the mathematics that applies directly to concrete circumstances different from abstract mathematics.

### Can there be PHYSICAL interactions between several kinds of physical laws, that occur over a length, or over an area, or over a volume, or over a period of time?    If there can be, can we correctly represent these interactions in differential equations, which are defined at points?     (Lengths, areas, volumes, and finite times vanish at points in space-time.)    If these inherently spatial interactions are real, we must be able to represent them in differential equations, since differential equations are our culture's fundamental medium of scientific representation.

#### "           If we adopt the first method we shall often have difficulty in interpreting terms which make their appearance during our calculations.    We shall therefore consider all the written symbols as mere numerical quantities, and therefore subject to all the operations of arithmetic during the process of calculation.    But in the original equations and the final equations, in which every term has to be interpreted in its physical sense, we must convert every numerical expression into a concrete quantity by multiplying it by the unit of that kind of quantity."

According to the first, more literal method Maxwell cites, we have "difficulty" interpreting some (cross effect) terms that interested Maxwell particularly.    Indeed, with no more information than Maxwell had, we cannot interpret them at all.    We are stopped.

THEREFORE we make an assumption, based on faith and experience.    We make that assumption along with Maxwell, giants before him (Newton, Laplace, Laguerre, and Fourier), and workers since.    As a culture, we decide to act AS IF our symbols representing physical quantities may be abstracted into simple numbers in our intermediate calculations.    This ASSUMPTION has produced equations that fit experiment innumerable times.    But it remains a pragmatic ASSUMPTION with no logically rigorous basis at all.

The assumption CANNOT be justified by claiming universal success for our culture's analysis.   Our culture's analysis has generated more successes than a person could reasonably understand and count, but our culture's analysis has also failed (often inexplicably) more often than any person could review or count.    The reporting career of George Johnson, and the text of REALITY BYTES, provide many examples.    Computational fluid mechanics, most fields of engineering, and most fields of physical and biological science offer many examples where analysis fails or cannot be brought to bear, for reasons that seem unclear to all concerned.

The strangeness of the assumption that the symbols that we use to represent measurable things are "just numbers" may help explain why some smart, careful students, who wish to carefully and redundantly trace decisive stages of logic as they imagine them and learn them, can distrust mathematical modeling procedures, can refuse to learn and use mathematical modeling, and can come to fear mathematics.    These students can see how measurements (or measurable quantities) can be set out as numbers.    They can see, or think they can see, how quantities can interact according to patterns, and how these patterns can be symbolized by terms in equations.    But at the start of their reasonings and visualizations, quantities are more than stark numbers - they exist in contextual detail, that detail can be expanded to measurement procedures and apparatus, and the terms correspond to pictures.    In the course of calculation, in an untraceable jump made without any detailed explanation or justification, dimensional quantities previously linked to measurement are stripped into simple, decontextualized numbers.    These decontextualized numbers appear in terms that cannot be pictured by any physically connected step-by-step process, although image analogies corresponding to these terms can sometimes be constructed.    These issues bothered Maxwell, and have bothered us, for conceptual reasons and for computational reasons as well. In Maxwell's electromagnetic studies, and in neurophysiology today, the standard mathematical modelling assumptions yield very wrong answers.

To review:

On Maxwell's first assumption, which deals with quantities that are traceably connected to a physical model in measurable physical context, we as a culture have terms that are difficult (indeed impossible) to interpret.

On Maxwell's second assumption, that strips the measurable quantities to "mere numerical quantities," these terms fit readily into our culture's calculus apparatus, and can quite often be "shown" by a limiting argument to be infinitesimal or infinite.

Maxwell was unhappy with his second assumption, and searched for ways to make his first assumption computationally workable until his death.    Maxwell was NOT convinced that cross-effect terms that mathematicians were calling infinitesimal really were infinitesimal.    But he did not have a mathematically coherent reason to doubt the standard mathematics.    Maxwell had not seen that some of the symbols he was using, just as if they were simple dimensional numbers, were not just dimensional numbers.

Here is a crossterm, one of many considered later.    This crossterm is encountered in the modeling of a conductive line (a wire or a neuron).    R is resistance per unit length.    C is capacitance per unit length.    R and C are defined as ratios of measurables, in (fairly complex) measurement procedures invisibly encoded when the letters R and C are written.    R and C are dimensional parameters.    x is a length along the line.    v is voltage.    t is time.    James Clerk Maxwell called crossterms like this "difficult to interpret" in 1879.
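A representative crossterm of this kind, sketched from the definitions just given (with $\Delta x$ a length increment along the line; this is a reconstruction from the surrounding text, not the authors' own figure), is

$$R\,C\,(\Delta x)^2\,\frac{\partial v}{\partial t},$$

a product of two dimensional parameters and two spatial increments multiplying a time derivative of the voltage.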

Crossterms like this make no sense in terms of any measurement procedure.   We as a culture now dispose of them by a limiting argument that assumes they are validly written, when they are not.    As a culture, we now say that
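For a representative crossterm of the form $R\,C\,(\Delta x)^2\,\partial v/\partial t$ (a sketch, assuming the standard finite-increment derivation), the conventional limiting argument runs

$$\lim_{\Delta x \to 0}\; \frac{R\,C\,(\Delta x)^2}{\Delta x}\,\frac{\partial v}{\partial t} \;=\; \lim_{\Delta x \to 0}\; R\,C\,\Delta x\,\frac{\partial v}{\partial t} \;=\; 0 .$$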

And so terms like this are discarded as "infinitesimal."    We discard these terms without clearly defining what they mean first.

With knowledge that the dimensional parameters have arithmetical restrictions, we proceed differently.    It happens (and this is new knowledge) that mixed products of R, C, and length are only defined in point form (or when interpreted by an operational measurement procedure, which produces the same thing).    The arithmetic of the limiting argument above, which looks so compelling, is undefined and actively misleading.    The correct interpretation of the cross-product, writing out R and C so that their numerical and unit parts are visible, proceeds as follows.    We put the length in point form, which is (1 meter)p for a meter length unit.

We then do arithmetic to evaluate the product of R, C and the point form of length as we usually do dimensional arithmetic: we multiply numerical parts, and add dimensional exponents.    We get a finite term.   That term may be too small to detect (for most values of R and C).   Still, the term is finite rather than zero.   In neurophysiology there are terms like this, that we as a culture have discarded, that are large and important.
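One way to set out this arithmetic explicitly (our notation; $N_R$ and $N_C$ are placeholder numerical parts, not values from the text) is to write $R = N_R\;\Omega\,\mathrm{m}^{-1}$ and $C = N_C\;\mathrm{F}\,\mathrm{m}^{-1}$. Then

$$R \cdot C \cdot (1\,\mathrm{m})_p \;=\; \big(N_R\;\Omega\,\mathrm{m}^{-1}\big)\big(N_C\;\mathrm{F}\,\mathrm{m}^{-1}\big)\,(1\,\mathrm{m})_p \;=\; N_R\,N_C\;\Omega\cdot\mathrm{F}\cdot\mathrm{m}^{-1} \;=\; N_R\,N_C\;\mathrm{s}\,\mathrm{m}^{-1},$$

using the unit identity $\Omega\cdot\mathrm{F} = \mathrm{s}$: numerical parts multiply, dimensional exponents add, and the product is finite (typically very small) rather than zero.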

(The idea that (1 meter)p is a point form of length (and NOT the same as a length of 1 meter) is nonintuitive, and we'll justify it later in two logically independent ways.  We'll justify it by a logic of interpolation.    We'll justify it independently from measurement theory, which takes into account what dimensional parameters are, and how they are computed. Before these justifications, we show how we may interpret such terms.)

We've been applying limiting arguments to undefined terms, and have been getting wrong answers.    We have to set out terms like this in a way that makes measurable sense before applying limiting arguments to them.    It turns out that, for these terms, we "get to point scale" by substituting point forms of our spatial variables directly.    (Or by doing an operational measurement procedure that amounts to this.)    At point scale, our arithmetic is defined.    This would seem natural and logical to any fresh student, and no more arbitrary than many other things she has to learn.    But people are not accustomed to it.    We are asking that people consider a new rule that restricts their arithmetic.    Discussion of this can quickly come to resemble an "argument about religion."    Perhaps this isn't surprising.

People often learn their mathematics as a matter of faith, based on patterns of seeing that may be incomplete or wrong, but that may seem compelling to the people who accept them.    Mistakes based on "mathematically perfect" derivations can be hard to see, and can last a long time.    For example, an important mistake in fluid mechanics lasted 150 years, defeated some famous analysts, and still persists in some elementary physics texts.    The inviscid equations of Leonhard Euler were used to predict flows.    Those equations "show" that wings have neither drag nor lift.    The analysis was corrected by Ludwig Prandtl in 1904, and his change made modern fluid mechanics possible.    The change took new insights (the idea of the boundary layer) and new techniques (asymptotic perturbation theory).    The Euler analysis made a deeply buried (and plausible) assumption (no viscous effects, even at the walls) that wrecked predictions.    It was a long time, even after Prandtl pointed out the right answer, before everyone could see the buried mistake, see Prandtl's better analysis, and learn to use Prandtl's breakthrough.    Even though everybody knew that the Euler equations weren't working, there was still resistance.    A later modification of flow theory, by Kline(8), took fifteen years and the work of several coworkers and many graduate students to become established.    Again, most of the fluid mechanikers who resisted the new way really knew, at some level, that the old way was wrong.    Even so, there was much resistance.    Established ways of thought and reflex patterns of work are hard to change.

The resistance to special treatment of the dimensional parameters has been far beyond anything in our experience.    We have difficulty because people do not know that the old analysis has a problem.    Many regard the possibility of the problem we speak of as unbelievable.    We question limiting arguments based on ill-defined terms, and limiting arguments have a special and high status in mathematics for historical reasons.    We are asking for modification of patterns that are very deeply habituated.    If one looks at a trained analyst at work, in either the "pure" or engineering sciences, one will see limiting arguments and the arithmetical usages around them applied with unconscious-reflexive facility, as spontaneously as breathing.    These processes are "done from the wrist" as if they were spinal reflexes.    One would not be looking at a trained analyst otherwise.    Analysts judge each other by this sort of reflexive facility.

We would all be lost without our reflexes, which may derive from culture but which become a part of us.    Once a person is committed to a reflex, it is awkward to consider it, or to doubt it, or to try to change it.    In our experience, if one asks a trained analyst to change her limiting procedures, or the arithmetical usages around them, one encounters some of the same difficulties one might have if one asked that analyst to stop breathing.    For one thing, the analyst will perceive the request as an attack on her competence.    For another, the analyst will perceive the request as a crazy, literally unthinkable request.    The fact that the old processes have been involved in much successful work will be taken as reason to trust the old procedures beyond doubting.    The notion that our limiting processes are associated with flawed arithmetic in some cases will be dismissed.    If one persists, and one is taken more seriously, one is likely to encounter fear responses.    If one is taken very seriously, one may encounter panic responses combined with disorientation responses, closely followed by various defensive and hostile responses.    We have seen these responses from individuals committed to the current analytical usages.    We have seen these responses from groups committed to these current analytical usages.    These are not responses that fit people for detached judgement.

We are dealing with a matter that is a large scale issue of life and death.    It is an issue of great scientific interest.    Our conclusions need to be checked.    For mathematicians, physicists, and many engineers these logical issues are not easily addressed because they are linked to contextual, reflexive, institutional issues that are intense but that are not simple.    For such people, we are saying something that is not only "unbelievable," but also "unseeable" in the paradigm conflict sense Kuhn and others have described.

People who are not so reflexively engaged, and who are not so indoctrinated, are needed to look at the work, and check it step-by-step.    People not blinded by their reflexes and their indoctrinations will be better able to judge the argument and evidence here than the "experts" so blinded.   People interested in truth, who see the stakes, but do not feel they have a lot to lose if the argument goes one way rather than another will be better able to judge the argument and evidence here than the "experts" who are also parties-in-interest.   We care about our results too much to be fully trustable judges of them.   We want our results to be right. Some expert physicists and mathematicians will care about our results too much to be fully trustable judges of them, because if we are right, we will invalidate much of the work they have done, that they rely on, and that they revere.   Such experts will want our results to be wrong.

No one is to blame for these entirely human motivations.   Even so, we need umpires, who can look at the work from a certain distance, and care for the truth.   The medical implications of this work are so large that right answers are what should matter here.

We have been in correspondence with George Johnson since December of last year. We've decided that the work requires checking from a broadly based audience that includes mathematicians and physicists, but does not include them exclusively.    We have concluded that an extensive disclosure-submission in a New York Times forum under George Johnson is a proper way to address that audience.    We have found that we cannot reasonably rely on peer review, unassisted, in its usual form for this particular case.    Peer review is valuable but not perfect.    We strongly support peer review as the standard pattern of scientific and professional engineering evaluation.    One of us (Kline) has more than 170 peer reviewed articles, and has reviewed at least as many.    (Kline, no stranger to peer review, is one of the leaders in fluid mechanics, especially computational fluid mechanics, this century.)    But in our particular and unusual case we are, reluctantly but definitely, attacking the "invisible colleges" of physics and mathematics on decisive ground.    We are doing so as engineers, that is, as outsiders.    Stakes are high and, in our experience, emotions related to the work also run high. Peer review was never meant for this.    We need, in addition to checking from mathematicians and physicists, checking by quantitatively competent people who are NOT mathematicians and are NOT physicists.    There are many such people in the United States.    The mathematics that we disclose is not intrinsically difficult once it is pointed out.    It may be checked in a matter-of-fact fashion by professionals of all kinds.    But the new mathematics may be most difficult for mathematicians and physicists, because it requires that they change deeply established patterns of thought and reflex.

Practical implications of our work include a reinterpretation of important aspects of neurophysiology, some plain matters of life and death, some as interesting as memory.    We say that the effective inductance of small neural lines is now understated by factors of 10^10 to 10^19 (that is, effective neural inductance is understated by 10,000,000,000:1 to 10,000,000,000,000,000,000:1).    We believe that serious mistakes in neural science and medicine have been made, and are being made, because of this mistake, and because of some other mistakes that follow from the same mathematical misunderstanding.    For scientific, medical, and moral reasons, it is important that we be checked in this.

THE DIMENSIONAL PARAMETERS:

Readers of George Johnson and REALITY BYTES may not be surprised when we say that there is an error at the interface between the measurable world and our culture's mathematical modeling.  They may be surprised when they see how basic the error is, and how old it is. We were surprised that the error has existed since Isaac Newton's time.    Since that time it has been assumed that the "mapping patterns" built into analysis somehow exactly fit the "territories" of the measurable world.    The reason for the fit has been thought vague, or magical.

The fit has been better grounded than many have thought.    Where the map-territory fit exists, the fit is there, in the cases we've examined, because we as a culture have always had sharp, mechanistic "mapping tools" that we have reason to trust from measurement experience. However, we as a culture have not recognized these ubiquitous "mapping tools" as the logically special entities that they are.    We as a culture have sometimes used these tools with facility. As a culture, we have never used them with understanding.    Nor have we as a culture always used these mapping tools perfectly.

The linkage between the measurable world and our culture's more-or-less abstracted equation representations of that world occurs via dimensional parameters, common number-like entities, used for centuries, that encode and abstract experimental information in our culture's physical laws.    These dimensional parameters ARE the link between the physical laws that can be measured, and (more or less) abstracted equations.    The dimensional parameters have seemed to be so trouble-free that no one has looked very hard at them.    We found that we had to do so.

Here are some directly measurable dimensional parameters:

Mass, density, viscosity, bulk modulus, thermal conductivity, thermal diffusivity, resistance (lumped), resistance (per unit length), inductance (lumped), inductance (per unit length), membrane current leakage (per length), capacitance (lumped), capacitance (per unit length), magnetic susceptibility, emittance, ionization potential, reluctance, resistivity, coefficient of restitution, and many more.

These dimensional parameters are not abstractions: each of them corresponds to the detailed context of a measurement procedure. (Usually, you'd need a sketch, and some instructions, to describe that measurement procedure.)    The dimensional parameters are each expressible in units, and the unit system can be reduced to units of length, mass, and time.    But the units notated in a dimensional parameter are only defined in the context of a SPECIFIC, and sometimes very detailed, measurement procedure.

In addition to the simple dimensional parameters, there are also compound dimensional parameters, made up of products or ratios of dimensional parameters and other dimensional variables taken together.    RC in equations 2-3 above is a compound dimensional parameter.

A famous class of compound dimensional parameters is the dimensionless numbers, such as the Reynolds number used in fluid mechanics.    These "dimensionless numbers" exist in a strongly specified dimensional context: they exist in consistent dimensional systems (systems that may be reduced to length, mass, and time, and a tightly specified list of measurement procedures).    The dimensionless parameters are defined in terms of context-complete, context-specific measurement procedures applied to a particular circumstance.    For instance, a particular Reynolds number will apply to a particular airplane model at a particular angle of attack at a particular velocity for a particular fluid, and to other geometrically similar systems where the ratios of inertial to viscous forces are the same.    A Reynolds number or other dimensionless number has an exponent of 0 for all of the dimensions in the consistent dimensional system in which it is defined.    The numerical value of the Reynolds number or other dimensionless number is exactly the same for any other consistent dimensional system subject to EXACTLY the same measurement procedures for EXACTLY the same physical circumstances.    The "dimensionless" numbers exist in a dimensional world that must be specified in much detail.    They are not "just numbers" in the sense of "just abstract numbers."    Again for emphasis: the dimensionless numbers and dimensional parameters relate to concrete, measurable things.    They are not abstract.    They are connected to context and specific embodiment.    All the dimensional parameters can be defined according to the following pattern.
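The unit-system invariance claimed here can be checked numerically. The sketch below (illustrative values for an air flow, not taken from the text) computes a Reynolds number Re = ρVL/μ for the same physical situation in two consistent unit systems, SI and CGS, and confirms the numerical value is the same:

```python
import math

def reynolds(rho, v, L, mu):
    """Reynolds number: ratio of inertial to viscous forces (dimensionless)."""
    return rho * v * L / mu

# The same physical situation, measured in two consistent unit systems.
# (Illustrative values for air over a 2 m chord; placeholders, not from the text.)
re_si  = reynolds(rho=1.225,    v=10.0,   L=2.0,   mu=1.81e-5)  # SI:  kg/m^3, m/s, m, Pa*s
re_cgs = reynolds(rho=1.225e-3, v=1000.0, L=200.0, mu=1.81e-4)  # CGS: g/cm^3, cm/s, cm, poise

print(re_si, re_cgs)
assert math.isclose(re_si, re_cgs, rel_tol=1e-12)
```

The assertion holds because every unit conversion factor cancels: the exponent of each base dimension in Re is zero, exactly as the paragraph above states.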

DEFINITION:    A dimensional parameter is a "dimensional transform ratio number" that relates two measurable functions numerically and dimensionally.    The dimensional parameter is defined by measurements (or "hypothetical measurements") of two related measurable functions A and B.    The dimensional parameter is the algebraically simplified expression of {A/B} as defined in A = {A/B} B.    The dimensional parameter is a transform relation from one dimensional system to another.    The dimensional parameter is also a numerical constant of proportionality between A and B (a parameter of the system).    The dimensional parameter is valid within specific domains of definition of A and B that are operationally defined by measurement procedures.

Example:    Resistance per unit length of a wire, R, is the ratio of voltage drop per length of that wire to current flow over that length of wire.    A resistance per unit length determined for a specific wire for ONE specific length increment and ONE specific current works for an INFINITE SET of other length increments and currents on that wire (holding temperature the same, and assuming wire homogeneity.)    R is the dimensional parameter in Ohm's law for a wire.    Other physical laws have other dimensional parameters.
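A minimal numerical sketch of this example, with invented measurement values (they are not from the text), shows R determined from ONE increment and ONE current, then reused for other increments and currents:

```python
import math

# Hypothetical measurements on a homogeneous wire (illustrative numbers):
# ONE length increment, ONE current.
delta_v = 0.50   # volts dropped across the increment
length  = 2.0    # metres of wire in the increment
current = 5.0    # amperes through the wire

# The dimensional parameter R: voltage drop per unit length, per ampere.
R = (delta_v / length) / current   # ohms per metre

# R reproduces its own defining measurement ...
assert math.isclose(R * length * current, delta_v)

# ... and predicts the drop for OTHER increments and currents on the same
# wire (same temperature, homogeneous wire), per Ohm's law for the line:
print("drop over 7.5 m at 2 A:", R * 7.5 * 2.0, "volts")
```

The point of the sketch is the one the text makes: a single determination of R stands in for an infinite set of possible measurements, within the stated domain of definition.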

The dimensional parameters encode experimental information into a compact, linear form.    This form may be denoted by a symbol exactly like the symbols used to denote simple dimensional numbers, and that is now done.    Such symbols, representing encoding of context-bound measurement information, are now used exactly as dimensional numbers inside our culture's physical equations.    The distinction between the dimensional parameters and ordinary dimensional numbers exists functionally and has done so from the first equation definitions of physical law. However, to our knowledge that distinction has not been clearly defined before.    So far as we can tell, the distinction has not been thought important.    Standard notation has always ignored the distinction, so that the distinction has been out of sight and out of mind.

If one asks "how do our culture's equations connect to the measurable world," a major question in REALITY BYTES, the simple (but not too simple) answer is "via our dimensional parameters, and the specifications and contextual limitations built into them."

It is HERE that the measurable world interfaces with our culture's symbolic mathematical representations.

#### Papers E-G are in the wrong units, but are provided essentially as submitted to NATURE.    The data and graphs show curves and values that are near to correct, but do not account for two opposing and balancing issues: the need to calculate values in the MKS unit system, and the need to account for the capacitance of the neural membrane-fluid cleft-glial membrane assembly, and not just the capacitance of a single neural membrane.    These papers are in CGS-coulomb-volt units, the customary units for biological calculation.    A checker asked a question about dimensional consistency.    We found that, for dimensional consistency of our crossterms, the only consistent unit system available was the Meter-Kilogram-Second-Coulomb-Volt (MKS-Giorgi) system.    When the change to MKS-Giorgi units was made, the magnitude of our crossterms changed, and our calculated conduction velocities fell to 1/100th of those we had calculated in CGS units, velocities much lower than measured velocities.    Our product of the numerical values of line resistance and line capacitance had to be wrong by 100-fold, or our theory had to be wrong.    Our line resistance couldn't be very wrong - that left us looking for a two order of magnitude reduction in capacitance.    Showalter has found that reduction, and in doing so has recognized the reason for the ubiquity of the glial cells surrounding unmyelinated neurons, a longstanding mystery in the neurosciences.    The requirement for MKS units is set out below.    The function of glia, and the fluid cleft, are treated in:

I.    The Glial membrane-fluid cleft-neural membrane arrangement cuts effective neural capacitance, greatly increasing signal conduction velocity and greatly reducing the energy requirement per action potential.                    M.R. Showalter

#### That question hinges, in turn, on other questions.    Do we have to model concrete circumstances concretely before mapping them into abstract mathematics?    If we do, are there special rules that apply to concrete dimensional equations that do not apply to abstract equations?    We answer yes.    We are asking to be checked.

Checking so far:

This posting is not our first effort to get this mathematics checked.    We don't believe that George Johnson would permit this posting if he were not satisfied that we'd worked to get checking elsewhere.    Nor is our math unchecked.    Nor have uncorrected mistakes in the mathematics been found by anyone.    (People have expressed aversion to our work, but that is a different thing.)    Nonetheless, the mathematics, which is at the core of physical modeling, has not been accepted or seriously discussed, by mathematicians or physicists.    We do not think it right to review our efforts to get this math checked in detail here.    We can say that, for six years, we've worked hard to get this math, and the work leading up to it, critically reviewed and discussed.    We've attempted to get checking of the work in every effective way we've been able to think of.    For the last year particularly, after the arithmetical limitations of the dimensional parameters were clearly identified, we've tried to get the work checked.    We've not been called wrong for explicit, traceable reasons that could stand up to examination (except for a much appreciated question about scale choice, which has led us to use MKS-Giorgi units).    Nonetheless, the work has been dismissed, and seems to have been treated as unthinkable.    Mathematicians and physicists of distinct good will have had this reaction, and sometimes their reactions have involved visible signs of personal stress.    Our work has been "undiscussable."    Late last year, one of us (MRS) sent a series of e-mail messages, describing the math in some detail and soliciting checking of it, to every member of the Department of Mathematics at the University of Wisconsin, Madison and to most of the neuroscientists at U.W.    These e-mail messages, and a background essay on checking efforts sent to George Johnson in January 1997, are available to interested parties (FTP:angus.macc.wisc.edu/pub2/showalt).
There were no responses to these requests from the mathematicians or physicists solicited, nor was there substantial help from the neuroscientists.    People watching, including a mathematically gifted Dean, felt that the mathematicians had some obligation to respond if they had found a mistake.    Those who examine the transmissions may agree.    Nonetheless, the mathematics, which is at the core of physical modeling, and which has immediate and large medical and scientific implications, was not discussed.

#### On February 20, we submitted to NATURE papers closely similar to seven of the eight papers set out above.    This was a highly unconventional, even outrageous, thing to do.    NATURE submissions are expected to be single-paper submissions.    We knew that.    Although we did, of course, hope for publication of some of the work we submitted, publication of our submissions, as submitted, was not our first priority.    More than anything else, we were hoping to get the core mathematics CHECKED.    In the transmittal letter, one of us (MRS) wrote:

"            We need to establish one mathematical point.    Once established, it need not take up much paper, either. Suppose that a short piece of less than a page, including a few definitions fore and aft and the following language, were published in NATURE.

(the modified differential equation derivation procedure, with intensification for currently undefined terms, was here.)

#### Let's derive voltage and current equations that include crossterms.    We'll see why the crossterms cause us Maxwell's "difficulty in interpretation."    We'll write our voltage and current functions as v(x,t) and i(x,t).    We're assuming homogeneity and symmetry for our conductor.    We assume that, for small enough lengths delta x, the average voltage (current) across the interval from x to x+delta x is the average of the voltage (current) at x and at x+delta x.

Writing down voltage change as a function of the dimensional parameters and variables that directly affect voltage, we have

v(x+delta x,t) - v(x,t) = - R delta x [i(x,t) + i(x+delta x,t)]/2 - L delta x d{[i(x,t) + i(x+delta x,t)]/2}/dt        (4a)

Writing down current change as a function of the dimensional parameters and variables that directly affect current, we have

i(x+delta x,t) - i(x,t) = - G delta x [v(x,t) + v(x+delta x,t)]/2 - C delta x d{[v(x,t) + v(x+delta x,t)]/2}/dt        (4b)

We may equally well rewrite (4a) and (4b) going from the point x - delta x/2 to the point x + delta x/2, so that the interval, still of length delta x, is centered at x.

Note that equation (5a) includes i(x+delta x/2) and its time derivative.    di(x+delta x/2)/dt is defined by equation (5b).    Equation (5b) includes v(x+delta x/2) and its derivative.    dv(x+delta x/2)/dt is defined by equation (5a).
Each of these equations requires the other for full specification:  each contains the other.
If the cross-substitutions specified implicitly above are explicitly made, each of the resulting equations will again contain the other.    So will the next generation of equations, and the next, and so on.    This is an endless regress.
Each substitution introduces new functions with the argument (x + delta x/2), and so there is a continuing need for more substitutions.    To achieve closure, one needs a truncating approximation at the position x, for current, voltage, and their time derivatives.
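The mutual containment can be seen mechanically even at point scale, where (as discussed later in this paper) every term is already in well-defined point form. The following is a minimal symbolic sketch, assuming the standard coupled first-order relations dv/dx = -R i - L di/dt and di/dx = -G v - C dv/dt; it carries out one round of cross-substitution and shows products of the dimensional parameters appearing:

```python
import sympy as sp

x, t = sp.symbols('x t')
R, L, G, C = sp.symbols('R L G C', positive=True)
v = sp.Function('v')(x, t)
i = sp.Function('i')(x, t)

# Point-form current relation: di/dx = -G*v - C*dv/dt
di_dx = -G*v - C*sp.Derivative(v, t)

# Differentiate the voltage relation dv/dx = -R*i - L*di/dt once in x,
# then substitute the current relation for di/dx (one cross-substitution):
d2v_dx2 = sp.expand(-R*di_dx - L*sp.diff(di_dx, t))

# The result carries products of the dimensional parameters:
#   d2v/dx2 = R*G*v + (R*C + L*G)*dv/dt + L*C*d2v/dt2
print(d2v_dx2)
```

This is the familiar route to a second-order equation in v alone; the point at issue in this paper is the analogous finite-increment substitution, which generates the curly-bracketed groups in delta x discussed in what follows.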

! ! ! ! !   IMPORTANT    ! ! ! ! !

We cannot assume that terms in this kind of coupled expansion are unimportant until we know what the symbols in them mean, and what the arithmetical rules that apply to those symbols are.

We MUST assume that these terms have a dimensional interpretation consistent with the other terms in the same equation.    We must assume that all the terms, properly determined, have the same dimensional unit exponents.    We must assume that all the terms, properly interpreted, are meaningful (have specific numerical and dimensional interpretations) at all spatial scales in the equation's domain of definition.

! ! ! ! ! ! ! ! ! ! ! ! ! !

We can proceed with substitutions like the following, associating symbols without interpreting them numerically or physically. For example

is

which expands algebraically to

These terms would be simpler if voltages and derivatives of voltages were taken at the interval midpoint, x. But even so simplified, it is terms of this kind that are "difficult to interpret" in Maxwell's sense. (Maxwell was an industrious analyst living in an analytically competitive world, and when he wrote "difficult to interpret" we believe that he must have meant "operationally impossible to interpret".)

Consider crossterms like these:

If one wishes to speak of expressions like those of (9), what do they mean for finite delta x when the symbols are considered to stand for fully physical things, or complete models of physical things, subject to the detailed physical rules that stand behind the model?    How does one interpret them with a sketch and an associated measurement procedure?    In discussions with mathematicians, engineers, and scientists, the first author (MRS) was not (for three years) able to find anyone who was confident of the meaning of these kinds of expressions at finite scales (or, as a matter of logic, when length was reduced to an arbitrarily small value in a limiting argument).    The equations below show voltage change over an interval of length delta x, centered about the point x, for three stages of cross substitution.    Symbols are grouped together and algebraically simplified up to the point where questions arise about the meaning of further algebraic simplification of relations in the dimensional parameters R, L, G, and C.    According to Maxwell's first method, the expressions in curly brackets are "difficult" to interpret.

The equation for i(x+delta x/2,t) - i(x-delta x/2,t) is isomorphic to (10) (or (11) below), with the variables v and i interchanged, and likewise the dimensional parameters R and G, and L and C.

Let's rewrite the neural conduction equation (10), dividing each term by the length increment, and assuming that the length increment is so small that we can approximate the gradient of voltage per unit length as a derivative.    In (11) we substitute the word "length" for delta x.    This substitution is a useful notational step, perhaps most useful because it helps us think, with our reflexes disengaged, about what the notion of length (or length^2, length^3, etc.) may mean in the terms of this equation.

(10) may be "simplified", according to common arguments, to (12)

In current usage, Maxwell's second assumption-method seems to "define" all these "difficult" terms below the first line in (10), (11), or (12) (and then seems to define them out of existence).    The definition is never tested, because these terms are dismissed by a limiting argument that all trained analysts now trust but do not examine.    (Group consensus and truth are not the same.)    All the terms in the curly brackets are thought to be "infinitesimal in the limit as delta x goes to 0" and are discarded (usually they are never even written).    These terms are dismissed on the basis of a limiting argument that ASSUMES that these terms are DEFINED at finite scales according to Maxwell's second method.    They are not consistently defined according to Maxwell's second method, as has been assumed: the assumption that they are well defined leads to contradictions.

When these cross terms are consistently defined, they become easy to interpret. To show this, we'll start by showing how the assumptions of Maxwell's second method fail logically and numerically.

Difficulties with dismissal of these terms on the basis of meaning are set out elsewhere(9).    However, even setting these aside, the dismissal fails several closure tests.
(Closure tests are standard tests all through measurement theory.    One goes around a sequence that should be a cycle.    If one is not exactly where one began after a "cycle," that "cycle" is defective.    It has failed a closure test.    Surveying offers a common example.    If a surveyor measures around a mountain (or a lot) and comes back to a particular stake, the measured position of that stake "around the loop" must match, within measurement error, the position measured for that stake before going around the loop.    Kirchhoff's laws in electricity are closure definitions useful as laws.)

Here is a cycle that should close, but that does not.    At a finite scale, before taking the limit, the terms below the first line of equations like (10) or (11) are supposed to represent finite quantities.    Yet when we as a culture take the limit as delta x goes to zero, these crossterms are infinitesimal (0), and are discarded in the conventionally derived differential equation.    Now, let's take that differential equation, which has the "infinitesimal" terms stripped away.    Let's integrate it back up to a specific finite scale delta x.    We get an equation that lacks the crossterms that we know existed at scale delta x in the first place.    We should have closure, and do not.    When one or more of the dimensional parameters R, L, G, and C are large enough, the numerical value of this contradiction can be large.
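A toy numeric version of this cycle can be written down directly. All the numbers below are placeholders (the slope stands in for the surviving point-form terms, K for a discarded crossterm group; none of the values come from a real cable):

```python
# Toy closure test with placeholder numbers (assumed, not measured).
K = 4.0e2     # stands in for a product of dimensional parameters
slope = -3.0  # stands in for the surviving point-form terms
dx = 0.05     # the finite interval, in arbitrary length units

# Finite-increment change over dx, crossterm group included:
finite_increment_change = slope * dx + K * dx**2

# Conventional route: discard the crossterm as "infinitesimal," then
# integrate the truncated differential equation back up over dx:
integrated_truncated = slope * dx

gap = finite_increment_change - integrated_truncated
print(gap)  # ~K*dx**2: the cycle fails to close by the discarded amount
```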

In addition, the expressions in curly brackets in (10) and (12) fail as representations of a physical circumstance.    First, the terms fail the test of homogeneity.    Different descriptions of the same thing, which should have the same total size, have different sizes.    For instance, consider a curly bracketed expression in (delta x)^2.    If delta x is divided into ten pieces, and the expression is computed for each of the ten subintervals and the results summed, that sum, 10 (delta x/10)^2 = (delta x)^2/10, is only 1/10 of the expression computed for the same interval delta x taken in one step.    Depending on how many subintervals of delta x we choose, we can vary the numerical size of the curly bracketed expression over a wide range for one single and unchanged interval scale.    This does not describe physical behavior.
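The subdivision claim is easy to check numerically; the sketch below uses an arbitrary placeholder constant for the parameter product:

```python
# Homogeneity check for a group proportional to (delta x)**2.
# K is an arbitrary placeholder for a product such as R*C.
K = 3.7

def group_value(dx):
    return K * dx**2

dx = 0.1
one_step = group_value(dx)                           # whole interval at once
n = 10
summed = sum(group_value(dx / n) for _ in range(n))  # ten subintervals, summed

print(one_step / summed)  # ratio is n (= 10): the pieces total only 1/n
```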

These expressions are also meaningless because they are constructed on the basis of a type-invalid arithmetic.    The loop test below showed us that so clearly that we SAW the inherent error in our culture's arithmetical assumptions, an error that we had been looking at, or looking near, for a long time before.    The loop test then served as a "testbed" that told us what the arithmetical restrictions on the dimensional parameters were.

We should all know that a "number" or "expression" that can be manipulated by "proper arithmetic" and permissible unit changes so that it has any value at all is meaningless.    Let's look at a simple loop test, analogous to many closure tests in physical logic.    In Fig. 2, an algebraically unsimplified dimensional group that includes products or ratios of dimensional numbers, such as

is set out in meter length units at A.   This quantity is algebraically simplified directly in meter units to produce "expression 1," dealing with the dimensional parameters and increments as "just numbers."   The same physical quantity is also translated from A into a "per meter" basis at C.    The translated quantity at C is then algebraically simplified to D in the same conventional fashion.    The expression at D, expressed in meter length units, is converted to a "per meter" basis to produce "expression 2."    Expression 1 and Expression 2 must be the same, or the calculation is not consistent with itself.
Quantities like those in the curly brackets of (10)-(11) consistently fail the loop test.    By choice of unit changes, and enough changes, such "quantities" can be made to have any numerical value at all.    Expressions such as these are meaningless as usually interpreted.
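Here is a numeric sketch of the Fig. 2 loop as we read it, for a representative group {R C (delta x)^2}. The parameter values are placeholders, and the meter-to-centimeter change stands in for any permissible unit change:

```python
# Placeholder values (assumed): R in ohm/meter, C in farad/meter,
# dx in meters.  We convert meter-based values to centimeter-based ones.
R = 2.0e3
C = 5.0e-8
dx = 0.01
CM_PER_M = 100.0

def convert_then_simplify(R, C, dx):
    """Pre-simplification route: every factor, including the increment,
    is converted to centimeter units, then the group is simplified."""
    return (R / CM_PER_M) * (C / CM_PER_M) * (CM_PER_M * dx) ** 2

def simplify_then_convert(R, C, dx):
    """Post-simplification route described in the text: the compound is
    rescaled by the dimensional parameters' per-length exponents alone
    (one per-length power from R, one from C); the absorbed increment
    receives no corresponding adjustment."""
    return (R * C * dx ** 2) / CM_PER_M ** 2

a = convert_then_simplify(R, C, dx)
b = simplify_then_convert(R, C, dx)
print(a / b)  # ~10000: the two routes disagree, and the loop fails
```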

Here is an example of how our symbolic-numerical conventions "go around the loop."

The loop test fails!

The loop test fails because a standard procedure is flawed.

Before algebraic simplification, going from one unit system to another adjusts not just the numerical value of dimensional properties in the different unit systems, but numerical values corresponding to the spatial variable (length), as well.

After algebraic simplification, one has a compound dimensional property - adjusting it to a new unit system corresponds to adjusting numerical values that correspond to the unit change for the dimensional properties only, with no corresponding adjustment for the absorbed spatial variable.

The result is an irreversible, numerically absurd, but now standard mathematical operation. The difficulty has been deeply buried and hidden in our culture's notation.

Note that the loop test of Fig. 2 only fails for terms that are now routinely discarded as "infinitesimal" or "infinite" without detailed examination.    Multiplication and division of groups of dimensional numbers (and dimensional parameters) without spatial or temporal increments passes the loop test without difficulty, with numerical values handled arithmetically, and dimensional exponents added and subtracted in the usual way.    Dimensional parameters associated with increments according to the exact pattern that defined the dimensional parameters in the first place also pass.

Problems now arise when we as a culture associate several dimensional parameters together with several increments, where the increments correspond to different physical effects evaluated over the same interval.    These problems arise in notations that we as a culture have never understood at finite scales - we as a culture have no reason for confidence in the notational procedures we've been using.    The procedure we as a culture have used for evaluating such circumstances, as we've notated them, involves a kind of multiple counting that yields perverse results, as the loop test shows.    We want to incorporate a rule that avoids mistakes like this.    The rule needed restricts multiplication and division of dimensional parameters and increments to intensive (point) form, except for circumstances where the dimensional parameters are associated with increments in the exact way that defined them.    It requires us to clarify our concept of increments (of length, area, volume, or time) defined at a point.

! ! ! IMPORTANT ! ! !

Most of the time, engineers and scientists have been dealing with differential equations (defined at points) rather than finite increment equations (defined over intervals).    When differential equations have been manipulated, the difficulties shown by the loop test have not occurred, because every term in those differential equations has already been in point form.

However, DERIVATIONS of differential equations from coupled physical circumstances HAVE involved finite interval models, and need to be reexamined to see if terms have been lost.    "Rigorous proofs" of some of our trusted differential equations can no longer be relied on.
! ! ! ! ! ! ! ! !

Notated as we as a culture have notated them, and interpreted as we as a culture have interpreted them, entities that represent coupled relations are physically unclear and numerically undefined.    Although we as a culture may notate the interaction of resistance, resistance, and capacitance together with the measurable di/dt over the interval delta x as

where delta x is a number times a length unit, the loop test shows that this notation, literally interpreted, does not correspond to any consistent numerical value when unit systems are changed, and then changed back. (14) shows a common and long trusted notation that does not reflect limitations on the arithmetic permissible with the dimensional parameters.

Let's rewrite (14) setting out a notation that makes explicit problems we need to solve concerning our notation of "length" in this expression:

What well defined notion can we as a culture have of "length" in the limit as scale shrinks to a point?    Surely, no number of length units can suffice as such a well defined notion of length "at a point", because whatever number of length units we choose, we can always pick a smaller number still.    We as a culture can't be correct if we mean that (but in our culture's limiting arguments, that assume an invalid arithmetic, we do mean that.)   We as a culture need a notion of "length" that makes focused sense at a point, because differential equations are defined at points.

If we assume, incorrectly, that the symbols in

and similar expressions are indistinguishable from the abstract numbers we are used to manipulating in our differential equations, then we are faced with a contradiction.

On the other hand, if we remember that these symbols represent concrete, measurable circumstances, then we face no contradiction.    We do face a conceptual challenge.    We must face the need to reassess some of our assumptions.    We need to recall that more complicated systems often have rules that less complicated systems do not have.    For this reason, we are not OBLIGATED to assume that the rules of arithmetic that apply to these more complicated entities, in their more complicated concrete context, are exactly the same as the arithmetical rules that apply to the simpler system of abstract numbers.    We can determine what the arithmetical rules in the concrete domain are, rather than assume these rules on the basis of some plausible analogy to abstract mathematics.    For a long time, this conceptual challenge was too much for us.    It was not obvious to us that we could question arithmetical usages in the concrete domain.    The loop test above showed us that we had to.    The dimensional parameters are subject to a type restriction that does not exist in the abstract domain.

Once we realize that we may determine the arithmetical rules of concrete domain mathematics, we face the question: how do we get these symbols and symbol groups to make fully consistent sense in terms of measurement?    We need to consider what consistent sense in terms of measurement means.

Here is James Clerk Maxwell, starting from the first page of the first volume of his A TREATISE ON ELECTRICITY & MAGNETISM: (Dover)

"PRELIMINARY.  ON THE MEASUREMENT OF QUANTITIES

"1.]            Every expression of a Quantity consists of two factors or components.    One of these is the name of a certain known quantity of the same kind as the quantity expressed, which is taken as a standard of reference.    The other component is the number of times the standard is to be taken in order to make up the required quantity. The standard quantity is technically called the Unit, and the number is called the Numerical Value of the quantity.

"There must be as many different units as there are different kinds of quantities to be measured, but in all dynamical sciences it is possible to define these units in terms of the three fundamental units of LENGTH, TIME, and MASS.

. . .
"2.]            In framing a mathematical system we suppose the fundamental units of length, mass, and time to be given, and deduce all the derivative units from these by the simplest obtainable definitions. . . . The formulae at which we arrive must be such that a person of any nation, by substituting for the different symbols the numerical values of the quantities as measured in his own national units, would arrive at a true result.
.    .   .   Hence, in all scientific studies it is of the greatest importance to employ units belonging to a properly defined system, and to know the relations of these units to the fundamental units, so that we may be able at once to transform our results from one system to another.

"          This is most conveniently done by ascertaining the dimensions (Maxwell means the exponent here) of every unit in terms of the three fundamental units. . . . . .
For instance, the scientific unit of volume is always the cube whose side is the unit of length. If the unit of length varies, the unit of volume will vary as its third power . . . . .

"         A knowledge of the dimensions (Maxwell means exponents) of units furnishes a test which ought to be applied to the equations resulting from any lengthened investigation.    The (unit exponents) of every term of such an equation, with respect to each of the three fundamental units, must be the same.    If not, the equation is absurd, and contains some error, as its interpretation would be different according to the arbitrary system of units which we adopt. "

P.W. Bridgman made Maxwell's position more realistic and definite by clarifying the notion of measurement.    Measurement involves operational procedures in a specific, step-by-step context.    The operational procedures are implicit, and logically necessary, but are not expressed in the abstract algebraic symbols themselves, which show only units raised to exponents.

An important point, for Maxwell, for Bridgman, and for anybody else who wants valid equations, is as follows:

ALL the operational procedures must refer only to steps where, for specific units of LENGTH, MASS, and TIME, (call them L, M, and T) the measured quantity (or partial operational procedure involved in the derivation of the quantity) could be reduced to

L^n1 M^n2 T^n3

where n1, n2, and n3 are exponents that may be integers or fractions.    The choice of particular length, mass, and time units is arbitrary if and only if every measurable in THE ENTIRE SYSTEM OF UNITS is in this form.
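Maxwell's exponent test can be mechanized. The sketch below is our construction, not taken from the text: each quantity is represented by its exponent tuple on (length, mass, time, charge), and the three point-form terms of the voltage relation dv/dx = -R i - L di/dt are checked for identical exponents. The tuples assume MKS units with the coulomb as the charge unit:

```python
# Exponents on (L, M, T, Q); e.g. volt = kg*m^2*s^-2*C^-1.
VOLT_PER_M  = (1, 1, -2, -1)   # dv/dx
OHM_PER_M   = (1, 1, -1, -2)   # R  (ohm/m = kg*m*s^-1*C^-2)
AMPERE      = (0, 0, -1,  1)   # i  (C/s)
HENRY_PER_M = (1, 1,  0, -2)   # L  (henry/m = kg*m*C^-2)
AMP_PER_S   = (0, 0, -2,  1)   # di/dt

def times(u, w):
    """Unit exponents of a product: exponents add."""
    return tuple(a + b for a, b in zip(u, w))

def homogeneous(terms):
    """Maxwell's test: every term must carry identical exponents."""
    return all(term == terms[0] for term in terms)

terms = [VOLT_PER_M, times(OHM_PER_M, AMPERE), times(HENRY_PER_M, AMP_PER_S)]
print(homogeneous(terms))  # True: the relation is dimensionally homogeneous
```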

The requirement that the ENTIRE SYSTEM be consistent is a severe one in the case of units that combine mechanical and electrical (and even more, mechanical, electrical, and magnetic) quantities.    There is ONE such system now in common use.    The CONSISTENT SYSTEM now in use is the MKS-Giorgi unit system(10).    (An infinite number of other consistent unit systems would be possible besides the MKS-Giorgi system, but it would be a lot of work to construct even one other, because of the coupling among the definitions of length, mass, time, the electrical units, and the magnetic units, and because of the hard work required to re-express a large number of dimensional parameters in the new system.)

For the finite increment equations that we derive to make sense, they must make dimensional sense.    EVERY term must be a valid term.    From the loop test, we know that every one of the terms including curly bracketed crossterms below is arithmetically undefined when the symbols in it are interpreted by abstract arithmetic.    What interpretation can we apply to the cross effect terms?    Our interpretation must be well defined and consistent with measurement theory.

Let's refer to equation (17).    Let's ask a standard question:   "how do we, in a meaningful, fully defined way, take the limit of BOTH SIDES of this equation?"

On the left side of equation (17), the limit as delta x approaches 0 is dv/dx.    (That limit is the definition of dv/dx). dv/dx is defined at a point - we may say that it is in point form.

On the right side of equation (17), let's proceed term for term, from left to right

There is nothing to do for the -Ri term - it is already in point form

There is nothing to do for the - L di/dt term - it is already in point form

(R and L are numerically and dimensionally the same, applied to any length of the homogeneous wire, or applied to any point on that wire.    Other dimensional parameters also stay the same, numerically and dimensionally, for any interval.)

We now have 11 terms, each with a collection of symbols within curly brackets that is UNDEFINED in the limit, without some additional convention or notation.

Within these curly brackets, in addition to products of the dimensional parameters R, L, G, and C, are associated (multiplied?) spatial variables.    The association of the dimensional parameters and the spatial variables DOES NOT CORRESPOND EXACTLY TO THE CONDITIONS FOR WHICH THE DIMENSIONAL PARAMETERS ARE DEFINED. THE PRODUCT (OR RATIO) OF THE DIMENSIONAL PARAMETERS AND SPATIAL (OR TEMPORAL) VARIABLES IS NOT DEFINED FOR GENERAL VALUES OF THE SPATIAL OR TEMPORAL VARIABLES.    The loop test shows this.    Some other cyclic closure tests also show this.
So our culture's standard limiting argument for these curly bracketed expressions, taking delta x smaller and smaller, till these curly bracketed expressions can be dismissed, isn't valid because that limiting argument uses arithmetic that isn't valid.

We want to proceed in the closest POSSIBLE analogy to our standard limiting argument.

If the curly bracketed terms are to make sense in differential and finite difference form (and we assume they MUST make sense) then the curly bracketed expressions must all simplify to COMPOUND DIMENSIONAL PARAMETERS (that do not change in numerical value or dimensionality from point scale to any finite scale for a homogeneous system.)

So we need to be able to interpret the curly brackets in terms of a Bridgman operational procedure.    We can do this, but our operational definition requires a length scale of 1 length unit.    We are constrained to the 1 length unit scale by requirements we've touched on before, with respect to consistent unit systems.

To interpret any of the curly bracketed terms in equation (8) according to a Bridgman operational measurement procedure, we note first that, since the curly brackets involve undefined arithmetic, we can't interpret them directly.    For any of the curly bracketed expressions, we can, however, proceed as follows:

1. We can take each curly bracketed term out of the equation, to evaluate it by measurement theory

2. We can set the length scales at 1 length unit, and after doing that, algebraically simplify the term into a compound dimensional parameter. (This is the Bridgman operational, dimensionally correct procedure.)

3. We can then reinsert the curly bracketed term, which is now a (compound) dimensional parameter, where it was before. This dimensional parameter will be numerically and dimensionally the same for WHATEVER length scale it is evaluated for, including point scale.

If we choose point scale, we've come as close as we can, considering the arithmetical limitations we've been dealing with, to taking a valid limit of that term.

We can proceed, term for term, in this way. Doing so, we come as close as possible to taking a limit, term for term.    In the end, every one of our terms consists of a dimensional parameter (that may be a properly simplified compound dimensional parameter) times a measurable quantity.    Every one of these dimensional parameters, and every one of these measurables, exists in the same compatible system of units.

There is another interpretation that gets us to the same result in an operationally simpler way.    We don't have to "conceptually remove our groups for a measurement procedure" and we don't have to "take a limit."    Suppose, rather than working through "an operational measurement procedure" for each term, as described above, and then putting our term at point scale, we substitute a "point form of length" with a value of (1 meter)_p for every instance of length.    Suppose we then algebraically simplify, term for term, using the usual arithmetic for dimensional numbers.    Our end result would be at point scale, and would be dimensionally and numerically the same as before.    WE'D HAVE CONCRETE EQUATIONS THAT COULD BE MAPPED INTO DIFFERENTIAL EQUATIONS IN THE ABSTRACT DOMAIN OF THE CALCULUS ON A SYMBOL-FOR-SYMBOL BASIS.
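A symbolic sketch of this substitution route, for one representative group: the form (R^2 C/4)(delta x)^2 times di/dt is an assumed example, patterned on the effective inductance compound R^2 C/4 discussed later in the text.

```python
import sympy as sp

R, C, dx = sp.symbols('R C delta_x', positive=True)
di_dt = sp.Symbol('di_dt')  # stands for the measurable d i / d t

# Assumed representative curly-bracketed group times a measurable:
term = (R**2 * C / 4) * dx**2 * di_dt

# Substitute the intensive (point) form of length, (1 meter)_p, for
# every instance of the length increment, then simplify:
point_form = term.subs(dx, 1)

print(point_form)  # (R**2 * C / 4) * di_dt: a compound dimensional
                   # parameter times a measurable, valid at point scale
```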

The logic is as follows.    We are trying to express "length" at a point (or over a differentially small length).    We are trying to express length in intensive (point) form.    R and C are already in intensive (point) form, both numerically and dimensionally.    That is, R and C, both in units that are per unit length, are in a form that is numerically independent of length.    These R and C work for any interval, no matter how short, and work at point scale, without numerical change.    di/dt is also in intensive form (defined at a point).    We as a culture need an intensive form for length.    Here is what is needed, expressed in words.    We need to interpret "length" as "the property of length in per unit length units."

Let's think of what we as a culture already do when we reason from measurement.    Our measurement procedures define things in terms of spatial variables (length, area, volume, time) and other dimensions (voltage, charge, and many others).    The measurements are inherently finite in nature.    Still, we all speak of properties in INTENSIVE FORM, defined for points.    An intensive property or intensive variable has a well-defined value for each point in the system it refers to.    For instance, we speak of "resistance per unit length defined at a point" even though a point has 0 length.    The numerical-scaling argument we use to arrive at intensive properties and variables is simple and nearly reflexive.    To intensify our properties and variables, we say that the property (or variable) at a point is the property (or variable) that we get from a logic of interpolation from a finite scale to finer and finer scales.    The interpolation assumes homogeneity of the model down to a vanishing spatial (and/or temporal) scale.

For example, consider the notion of resistance per unit length.    Let's idealize the wire as a line.    The resistance R expresses voltage drop per unit length, per unit current.    For any interval that includes length, the basic notion of resistance can be directly defined "per unit length."    Resistance per unit length at a point has the same numerical value and the same units as resistance per unit length for some finite increment.    We do not use a limiting process.

For example, we find the value of internal energy per unit mass at a point for a homogeneous system by dividing the internal energy of the system by the mass of the system.    We do not use a limiting process.    Other properties can be defined in similar ways "per unit area" or "per unit length" over finite areas, or finite volumes.    But
the notion of "length (or area, or volume) at a point" is an abstraction. This extremely useful abstraction is much older(11) than some of our rigorous calculus formality(12).    In thermodynamics and elsewhere, we as a culture don't intensify our extensive variables by a calculus argument of any kind.    We just assume that the property we're considering is homogeneous.    Then we write our intensive variables directly.

The abstract notion of length or area or volume "at a point" is already embedded in many of the intensive properties in common use.  Using meter and second units, the intensive forms (point forms) of length, area, volume and time are:

Length at a point:   { 1 (length/length) (length unit) }         in meter units: (1 meter)_p
Area at a point:     { 1 (area/area) (length unit)^2 }           in meter units: (1 meter^2)_p
Volume at a point:   { 1 (volume/volume) (length unit)^3 }       in meter units: (1 meter^3)_p

An instant in time:  { 1 (time/time) (time unit) }               in second time units: (1 second)

The dimensions of length, area, and volume are length to the first, second, and third power respectively.    The coefficients are the identity, 1, because, for even the smallest imaginable numerical values of length l, area a, volume v, or time t

l/l = 1      a/a = 1      v/v = 1  and      t/t = 1

We can rewrite (10) as

Substituting the intensive form of length into (10) or (12) in place of delta x, we may algebraically simplify the bracketed expressions in the equation(s).    This separates R, L, G, and C into numerical parts (Rn, Ln, Gn, and Cn) that are algebraically simplified together, and unit groups that are algebraically simplified together.    We'll choose MKS-Giorgi units, with v in volts, Q in coulombs, and t in seconds.    We get:

We could have continued the expansion process that produced (10) and (11) and gotten more terms if we had wished to do so.

The di/dx equation analogous to (19) is

Each term consists of one (compound) dimensional parameter times a measurable.    These differential equations, when integrated over a length delta x, reconstruct the values that apply to the interval from x to x + delta x, with no lost terms.    Every term in these differential equations passes the loop test.    We may map these differential equations symbol-for-symbol into corresponding partial differential equations.    We may map these differential (or corresponding partial differential) equations symbol-for-symbol into the domain of the algebra.    These equations are different from the Kelvin-Rall equations now used in neurophysiology.    They contain the Kelvin-Rall terms, but have other terms as well.

An important difference is the effective inductance term.    For unmyelinated axons and dendrites in the neural range of sizes, the numerical magnitude of R^2C/4 is between 10^12 and 10^21 times larger than L, depending on dendrite diameter and other variables.    This term, which is much too small to measure in large scale electrical engineering, is a dominant and practically important term at neural scales in neural tissue.

Many terms now thought to be "infinities" are also finite terms when they are correctly interpreted in intensive form.

Physical domains that include dimensional parameters representing measurable circumstances differ from the domain of the algebra.    Unless we know this, we can discard important terms, or form false infinities, and delude ourselves either way.

Here again is the procedure by which we can map a defined circumstance from the real, dimensional world into a concrete model including equations; map that concrete model into abstract equations; and, after abstract manipulations map the results of our abstract analysis back into a concrete model that may be tested against results in the world.

PROCEDURAL RELATIONS BETWEEN CONCRETE PHYSICAL MODELS AND ABSTRACT MATHEMATICAL REPRESENTATIONS OF THEM

A concrete physical model directly connected to the details of measurement must be a finite scale model.    The finite scale requirement occurs for two related reasons.    First, geometrical details of specification involve lines and space-filling geometries that become degenerate at the point scale of a differential equation.    Second, ALL our measurement procedures are finite scale procedures.    No real measurements occur instantaneously, or at a true geometrical point.    For these reasons, equations representing a concrete physical model in step-by-step detail must be finite increment equations.   The measurements must be expressed in a system of consistent units(4).

A (necessarily finite) concrete physical model must be completely specified before concrete finite increment equations can be derived from that model.

1. Complete physical model specification requires an input specification list setting out the variables of the model, and setting out the physical laws at play in the model with their corresponding dimensional parameters (3).

2. Complete physical model specification requires a sketch showing the relationships of the geometry, variables, and physical law being modeled.

3. Complete physical model specification requires a full set of operational measurement procedures that define the variables and dimensional parameters in the model.

From this physical model specification, one or more concrete finite increment equations can be derived setting out the quantitative relations of the model.

ALL the terms in these concrete finite increment model equations MUST make sense by the standards of measurement theory(5) before the concrete equations can be validly transformed, via a limiting argument, into the abstract domain in which differential equations exist.

This means that ALL terms must, when properly interpreted, be defined according to a step-by-step measurement or calculational procedure consistent with the unit system being used. Such a step-by-step procedure is an operational measurement procedure. When this is done for every term, the model equation is in consistent units and is in "measure theory consistent form."

If and only if all the terms in the concrete finite increment model equations are reduced into measure-theory consistent form can the model equations be mapped, using a conventional limiting process, into valid differential equations in the abstract domain.
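As a loose illustration of one necessary condition of "measure-theory consistent form," here is a small bookkeeping sketch (ours, not part of the formal argument). It represents each quantity's dimensions as exponents of mass, length, time, and charge in a consistent MKS-type system, and checks that every term of a finite increment voltage relation carries identical dimensions. The two-term line relation used is a standard transmission line fragment, chosen only for illustration.

```python
# A minimal dimensional bookkeeping sketch (ours): dimensions are tuples of
# exponents of (mass, length, time, charge) in a consistent MKS-type system.

def dmul(a, b):
    """Dimensions of a product: exponents add."""
    return tuple(x + y for x, y in zip(a, b))

def dinv(a):
    """Dimensions of a reciprocal: exponents negate."""
    return tuple(-x for x in a)

VOLT      = (1, 2, -2, -1)   # kg·m^2·s^-2·C^-1
AMPERE    = (0, 0, -1, 1)    # C·s^-1
SECOND    = (0, 0, 1, 0)
PER_METER = (0, -1, 0, 0)

# Dimensional parameters of the line relation, per unit length:
R_DIMS = dmul(dmul(VOLT, dinv(AMPERE)), PER_METER)   # ohm/meter
L_DIMS = dmul(R_DIMS, SECOND)                        # henry/meter

# Terms of the finite increment relation  delta_v/delta_x = R*i + L*(di/dt):
lhs   = dmul(VOLT, PER_METER)
term1 = dmul(R_DIMS, AMPERE)
term2 = dmul(L_DIMS, dmul(AMPERE, dinv(SECOND)))

# Every term must carry identical dimensions before any limit is taken.
assert lhs == term1 == term2
print("all terms share dimensions:", lhs)
```

Identical dimensions term for term is, of course, only one of the requirements the text sets out; the group interpretation of dimensional parameters with spatial increments is the further, and novel, requirement.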

Interpretation of finite increment equations has had a hidden problem that has resulted in mismapping and produced invalid differential equations.    In order to interpret ALL the terms in concrete finite increment model equations in a manner consistent with measurement theory, we must know the following NEW information:

NEW INFORMATION.    Dimensional parameters are the entities that express concrete physical modeling laws in measurable form. The dimensional parameters are not just numbers.

Arithmetic with a dimensional parameter in the measurable domain is only defined when it corresponds EXACTLY with the definition of that dimensional parameter, or when the dimensional parameter is interpreted as part of a group operational measurement procedure performed at unit scale for the spatial variables in the group.

(An alternative statement is that, for equations in gradient form that are otherwise ready to be reduced to derivatives, groups of dimensional parameters and spatial variables must be algebraically simplified at POINT SCALE.      The point forms(6) of the spatial variables, in MKS units, are

length at a point: (1 meter)_p             area at a point: (1 meter^2)_p

volume at a point: (1 meter^3)_p           a point in time: (1 second)_p )

After this group interpretation is done, one has concrete finite increment model equations that are reduced into measure-theory consistent form.    THESE CONCRETE EQUATIONS CAN NOW BE MAPPED INTO DIFFERENTIAL EQUATIONS THAT EXIST IN THE ABSTRACT DOMAIN OF THE CALCULUS USING THE CONVENTIONAL LIMITING PROCESS.

WORK IN THE ABSTRACT DOMAIN:
Calculation according to the rules of analysis can then be validly done, on these valid differential equations.

REMAPPING OF ABSTRACT RESULTS INTO A CONCRETE CONTEXT:
If a result of analysis is to be applied to the measurable model from which it traces, groups in terms that act as if they are dimensional parameters in the domain of calculus can be interpreted as constructively derived dimensional parameters in the measurable model system. Once this interpretation is made, the calculus model may be mapped back into the measurable model on a term for term basis, for use or checking.

We have faced the following question repeatedly:

"If this new procedure, with its new interpretation of crossterms, is right, wouldn't that invalidate many results that everyone has good reason to trust?"

We've been dealing with the conduction line equations.    We've derived a new conduction equation, with new terms.    In the cases where the old derivation works well empirically, the new procedure works just as well.    The new terms are too small to matter.    In the case of neural function, where the old derivation fails on many counts, the new derivation has the properties needed to describe behavior(13).    The same equation, with the same terms, has enormously different properties depending on what the numerical values of the dimensional parameters in it happen to be.    Here is a partial expansion of the dv/dx line conduction equation:

R is resistance per length, L is inductance per length, G is membrane conductance per length, and C is capacitance per length.

For many values of R-L-G-C, all the terms in (1) that include products of R, L, G, and C will be very small.    However, some of these crossproduct terms can be enormous for different values of R, L, G, and C.    Comparison of a wire case and a neural dendrite case shows the contrast in values that can occur.

For a 1 mm copper wire with ordinary insulation and placement, typical values of the dimensional parameters would be:

R = 1.4 x 10^-3 ohm/meter     C = 3.14 x 10^-7 farads/meter

G = 3.14 x 10^-8 mho/meter    L = 5 x 10^-7 henries/meter
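Using the wire values just listed, the negligibility of the crossproduct terms can be checked arithmetically. The sketch below (ours) compares R^2C/4 against L, and R^2G/4 against R; following the text's intensive-form interpretation, the numerical coefficients are compared directly at unit scale.

```python
# Dimensional parameter values for the 1 mm copper wire case (from the text).
R = 1.4e-3    # ohm/meter
C = 3.14e-7   # farads/meter
G = 3.14e-8   # mho/meter
L = 5e-7      # henries/meter

# Crossproduct terms discussed in the text, compared (at unit scale, per the
# intensive-form interpretation) against the primary terms they compete with.
effective_inductance_term = R**2 * C / 4   # competes with L
effective_damping_term    = R**2 * G / 4   # competes with R

print(effective_inductance_term / L)   # ~3e-7: negligible next to L
print(effective_damping_term / R)      # ~1e-11: negligible next to R
```

For these wire values, dropping the crossterms (as the conventional limiting procedure does) costs nothing measurable, which is why the conventional equation has tested so well in this regime.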

Here is (19) with the numerical value of terms set out below the symbolic expression of the terms:

Here are the corresponding dimensional parameter values for a 1 micron diameter neural dendrite, assuming accepted values of axolemma conductivity, capacitance per membrane area, and inductance per unit length (volume resistivity 1.1 ohm-meter, g = 0.318 mho/meter^2, c = 10^-2 farads/meter^2):

R = 1.4 x 10^12 ohm/meter      L = 5 x 10^-7 henries/meter

C = 3.14 x 10^-8 farads/meter   G = 3.14 x 10^-6 mho/meter

Note that R is 10^15 times larger than in the previous case of the wire.

For a neuron situated as real neurons are, surrounded by a glial cell and a fluid cleft, C would be about two orders of magnitude lower.   Even taking this into account, for the same equation, cross product terms that are trivial in the wire case are dominant in the neural dendrite case.

For the wire case, the numerical values of the primary terms (the numerical values of the dimensional parameters R, L, G, and C) are compared with the numerical values of the numerically most important cross product terms in the voltage gradient transmission equation below.

For this copper wire case, none of the cross product terms are large enough to attend to in a practical modeling equation. The sensible formula to use for the wire values of R, L, G, and C is the same one the limiting procedure would produce:

In the neural dendrite case, some of the crossterms that were trivial in the wire case become DOMINANT terms. For these neural values of R, L, G, and C, two cross product terms that were too small to attend to in the wire case have become dominant. Magnetic inductance, L, has itself become too small to include, because for these values the R^2C/4 cross product term is 3 x 10^22 times bigger than L, and is an important modeling term.
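The 3 x 10^22 figure follows directly from the dendrite parameter values listed above; here is a quick check (ours, again comparing numerical coefficients at unit scale per the intensive-form interpretation):

```python
# Dimensional parameter values for the 1 micron neural dendrite (from the text).
R = 1.4e12    # ohm/meter
C = 3.14e-8   # farads/meter
L = 5e-7      # henries/meter

# The effective inductance crossterm dwarfs the magnetic inductance L.
ratio = (R**2 * C / 4) / L
print(f"{ratio:.1e}")   # ~3.1e22, matching the factor quoted in the text
```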

These finite crossterms make the difference between the brain as a high Q system with sharply switched resonant communication and information storage, and the brain as an overdamped system without resonance, that appears incapable of any significant information processing at all because phase distortion in Kelvin-Rall is prohibitively large.    This makes the difference between a role for inductance in ventricular fibrillation and no role for inductance in ventricular fibrillation.    This makes the difference between epilepsy as a resonant phenomenon, and no role for resonant coupling and resonant neural destruction in epilepsy.    This makes the difference between having no plausible memory models that can handle complex information, and having switched resonance memory models that appear able to handle the information brains do handle.    That is to say, the difference between the limiting procedure for deriving differential equations, and the intensification procedure, involves matters of real neuroscientific interest, including some plain matters of life and death.

(This is worth checking, worth fighting about, worth getting right.)

Current neurological modeling of dendrites uses the same equation that models the wire, radically understating effective inductance.    A sensible modeling formula for the dendrite involves the following quite different choice of terms.    (Note that this equation is a selection of important terms among others; it is an approximation that fits reality well enough in a particular regime of R-L-G-C.)

The effective inductance is 3 x 10^22 times larger than would be predicted by the Kelvin-Rall equation. The effective inductance term goes as the inverse cube of line diameter; for a .1 micron diameter dendritic spine, effective inductance would be 1000 times larger still.    Lines that have been thought devoid of inductance, and incapable of inductive effects such as resonance, have very large effective inductance.
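The inverse-cube claim can be checked from the geometry alone: for a cylindrical line, R scales as 1/diameter^2 (cross-sectional area) and C scales with diameter (circumference), so R^2C scales as 1/diameter^3. The sketch below (ours) backs the axoplasm resistivity and membrane capacitance per area out of the values quoted for the 1 micron dendrite, then recomputes the effective inductance term at 0.1 micron; the cylindrical geometry is our assumption.

```python
import math

# Values quoted in the text for the 1 micron dendrite.
R_1um = 1.4e12   # ohm/meter
C_1um = 3.14e-8  # farads/meter
d1 = 1e-6        # meters

# Back out resistivity and capacitance-per-area from the quoted values.
rho = R_1um * math.pi * (d1 / 2) ** 2   # ~1.1 ohm-meter
c_area = C_1um / (math.pi * d1)         # ~1e-2 farads/meter^2

def effective_inductance_term(d):
    """R^2*C/4 for a cylindrical line of diameter d (numerical coefficient)."""
    R = rho / (math.pi * (d / 2) ** 2)  # resistance per length ~ 1/d^2
    C = c_area * math.pi * d            # capacitance per length ~ d
    return R ** 2 * C / 4               # hence R^2*C ~ 1/d^3

ratio = effective_inductance_term(0.1e-6) / effective_inductance_term(1e-6)
print(round(ratio))   # 1000: tenfold thinner line, 1000x the effective inductance
```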

Note also that the damping effect normally produced by R is now mostly produced by an R^2G/4 term.    Changing G (by opening or closing membrane channels) could switch such a neural system from a strongly damped state to a highly resonant state.
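The switching suggestion can be illustrated numerically. The damping crossterm R^2G/4 tracks G linearly, so a change in membrane conductance changes the damping term in direct proportion. The sketch below (ours) uses the dendrite values from the text and a hypothetical hundredfold conductance change; the factor of 100 is an illustrative assumption, not a measured one.

```python
R = 1.4e12           # ohm/meter (dendrite value from the text)
G_closed = 3.14e-6   # mho/meter (quoted membrane conductance)
G_open = 100 * G_closed  # hypothetical: channels opened (assumed factor)

def damping_term(G):
    """The R^2*G/4 crossterm said to dominate damping at neural scales."""
    return R ** 2 * G / 4

# The damping term scales linearly with G, so channel state directly
# modulates how strongly the line is damped.
print(damping_term(G_open) / damping_term(G_closed))   # ~100
```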

The notion that solutions are parameter dependent is well established in some fields, for example viscous fluid flow.    That notion applies to these new terms.    In the neural transmission case and elsewhere, crossterms now dismissed as infinitesimals are finite, and some are large.    Electrical measurements testing the conduction equations have been carefully and accurately done, over a limited range of parametric values.   They have been "experimentally perfect" over that range.    Nevertheless, too large an extrapolation beyond that tested range of parametric values can be treacherous.    We say that this kind of extrapolation has been treacherous in neurophysiology.    The dimensional parameters that we as a culture must all use in our physical representations operate with type-restricted arithmetic rules.    Terms that we have thought to be zeros are finite.    We must learn to take this into account when we extrapolate an equation that may be "a perfect match to experiments" in one range of parameters into some very different range of parameters.    This provides some reason for caution when physical situations with enormously different values of physical parameters are modeled with the same equations.    "Perfect fit" to experiments in one range of parameters does not ensure even a good fit in some other parametric range.

M. Robert Showalter           Madison, WI, USA
Stephen Jay Kline                 Stanford, CA, USA

APPENDIX 1

Here is the letter from NATURE in response to a very long and unconventional submission that we sent them.   We sent that submission to the most elite academic journal we knew, hoping to get their help in securing checking of the intensification process.   NATURE did not give us the help we asked for.   Perhaps they were right not to do so. They did send the following letter, which was a kindness.


Here is the text:

Letter from Karl Ziemelis, Physical Science Editor, NATURE, dated 11 April 1997

Dr. M.R. Showalter

Department of Curriculum and Instruction

School of Education

Dear Dr. Showalter,

Thank you for your letter of 20 February and for your seven linked submissions.    I apologize for the unusual length of time that it has taken us to get back to you, but please understand that the sheer volume of interrelated material that you submitted took us rather longer than we had hoped to read and digest.    This delay is all the more regrettable as the work is not in a form that we can offer to consider for publication.

As you already clearly realize, the space available in the journal clearly poses a fundamental problem.    In my letter of 31 October 1994, I had hoped to explain what we look for in a NATURE paper - in essence, we do not see ourselves as an outlet for exhaustive presentations of scientific problems (regardless of their potential importance), but as a forum for presenting brief and focused reports of outstanding scientific and/or technological merit.     An additional, but equally important, characteristic of NATURE papers is that they should be self-contained: sub-dividing an extensive body of work into numerous (but intimately linked) "Nature-length" contributions is simply not a realistic option.

You are clearly appreciative of this fact, in that your stated intention is not so much to dominate our pages with your submissions, but to seek peer review on the work in its entirety, and defer until later the decision about what form any published form might take.    This is not, however, a service that we could realistically offer - quite aside from the fact that it would be placing an unrealistic burden on our referees, we aim to send out for peer review only those papers that we judge to be appropriate (at least in principle) for publication in NATURE, in accordance with the criteria outlined above.

This is not to deny that within your seven manuscripts there may be the essence of a NATURE paper; but given the time constraints under which we work, the onus must be on the authors, rather than on the referees and editors, to construct such a paper from the vast amount of material supplied.   But I have to say that the need for extensive cross-referencing apparent in the present manuscripts suggests to us that the likelihood that such a paper would be readily forthcoming is not too high.    It is therefore our feeling that your interests would be better served by pursuing publication of the work in a more specialized journal having the space to accommodate the lengthy exposition that your work so clearly requires.

Although it is sadly the case that some studies simply do not lend themselves to the NATURE format, this need not mean that our readers are left in the dark about the latest developments.    As you know, we frequently discuss such work in the context of our News and Views section, and if you were to send us preprints of your present papers when they are finally accepted elsewhere for publication, we could explore the possibility of doing likewise with your work.

Once again, I am very sorry that our decision must be negative on this occasion, but I hope and trust that you will rapidly receive a more favorable response elsewhere.

Yours sincerely,

Karl Ziemelis

Physical Science Editor

This is a rejection from NATURE, not an acceptance.   The favorable language in it stands for much less than a conventional peer review.    Even so, we believe the letter does tend to support the view that our work is plausible enough, and important enough, to be worth checking.

APPENDIX 2:

Some background on Stephen J. Kline and M. Robert Showalter

We have each been thinking about the interface between measurement and modeling for much of our professional lives, and have been thinking about this interface, together, for more than a decade.    Showalter has been more focused on the math-physics interface set out here, and issues related to it.    Steve Kline is the more physical of the two of us, the more keyed to images.    Showalter is the more concerned with formality and logic.    We share an interest in the history of ideas of modeling and analysis, and a sense of modeling and analysis as a body of human patterns and assumptions that exist, and may be improved, in a historical context. The point of departure of our work together is Steve's SIMILITUDE AND APPROXIMATION THEORY (McGraw-Hill 1964, Springer-Verlag 1986).    Kline has invested several thousand hours in our work together, much of it supervising and disciplining Showalter, and making sure that the work stayed physical, and stayed connected with known problems.    Showalter has put in much more time than Kline.

Stephen Jay Kline is Clarence J. and Patricia R. Woodard Professor, Emeritus, of Mechanical Engineering at Stanford University, and also Professor of Values, Technology, Science, and Society at Stanford.    He is author of more than 170 papers and two books, and editor of six other volumes.    Kline was the founding Chairman of the Thermosciences Division in Stanford's Department of Mechanical Engineering. He is also one of the four founding faculty members in Stanford's program in Science, Technology, and Society.    The largest number of his technical publications are in fluid mechanics, and report results of experiments, new forms of analysis, computation, and the description of new instrumental methods.    In 1996 the Japanese Society of Mechanical Engineers (JSME) decided to ask the outstanding contributors to various fields how they had pursued their work.    They designated Kline as the most important contributor to fluid mechanics in the 20th century.    While some other worker might have been reasonably chosen, Kline's distinction in experimental and computational fluid mechanics is undoubted.

Stephen Jay Kline has received the following honors and awards:

ASME Melville Medal for best paper of the year 1959.

ASME Centennial Award for career contributions to fluid mechanics

George Stephenson Prize of the British Institution of Mechanical Engineers

Golden Eagle award from U.S. CINE and Bucraino Medal (Italian) for the film "Flow Visualization" (30 minute educational film).

Gold Medal of the Chinese Aero/Astro Society

Honorary Member, ASME (the highest honor ASME gives)

Fellow, American Association for the Advancement of Science

Books by Stephen J. Kline

SIMILITUDE AND APPROXIMATION THEORY (McGraw-Hill, 1964; Springer-Verlag, 1986)

CONCEPTUAL FOUNDATIONS FOR MULTIDISCIPLINARY THINKING (Stanford University Press, 1995)

for more on S.J. Kline, see http://www-tsd.stanford.edu/tsd/ressjk.html

M. Robert Showalter is a graduate of Cornell University.    Rather than undertake an academic career, Showalter set out to make "analytical invention" a possibility and to make "analytical engineering" more efficient.    He was interested in questions like "How do you define and design an optimal structure in a fully specified, complicated, fully commercial circumstance?"    For instance, suppose an airplane design needs a wing, to mount an engine and connect to a specific fuselage.   How do you arrive at a FULLY optimized design for that wing, in a case of real technical complexity, with "optimal" a defensible word in terms of all the technical considerations that pertain at all the levels that matter in the case (aerodynamics, structure, fabrication, maintenance, cost)?    How do you even approach such a job?    To make sense of such a job would require superposition, coupling together, and repeated solution of sometimes complicated systems of equations.    It would require new techniques of specification, so that system problems could be packaged in ways that could be isolated for analysis, and then reinserted into context.    It would require sharp identification of "linchpin" problems that, if solved, would enlarge the realm of the possible.    It would require solution of some "linchpin" problems concerning mathematical technique.    Showalter found himself forced to attend to such issues of mathematical technique.    For his adult life, he has believed, with Kline, that the "miracle of modeling and analysis" is not how good mathematical modeling sometimes is, but how seldom useful, and how mysteriously treacherous, analytical modeling is when it is applied to problems complicated enough to be of commercial interest.    Although it may be "a poor craftsman who blames his tools" Showalter has long had a focused interest in finding and fixing the defects in our culture's analytical tools.

In the course of this work, Showalter learned technical fear, a sense of the limitations of his own mind compared to the complexities of the world, a strong sense of the necessity for specification and clear definition, immunity to the notion that any person or group had to be right about anything, and a belief that the most sophisticated and usable patterns for technical description and technical thinking had been evolved, by tens of thousands of smart and motivated people over many years, in patent practice.    He came to believe, and still believes, that specification for optimization, and specification for clear theorizing under defined circumstances, is best done building on these patent usages.    These patent usages involve pictures, step-by-step specifications related to the pictures, and definitions, called claims in patent practice, that define the subject being treated in sharply delineated natural language.    Mathematics that fits in a physical context, Showalter believes, should be clearly connected to that context as set out in a picture-specification-definition format.    He believes that the conveniences of abstraction must link clearly to this sort of specification and concreteness if a mathematical result is to be applied, and applied consistently, and applied by human beings.

Showalter spent some time, with the support of investors, as an "analytical inventor," working to optimally redesign internal combustion engines.    Kline worked with him on this project for several years, about half time. Kline and Showalter became friends as well as coworkers during this period.    During this same period, a former Vice President of Engineering of Ford Motor Company worked about half time on Showalter's project. Showalter has 23 U.S. patents.    He achieved some "choreographed" control of mixing and combustion related flows, and got much reduced emissions, and significantly improved fuel economy, by doing so.    The work was intended to meet the most rigorous EPA emission standard, including the .4 gram/mile NOx standard.   That standard was rescinded and never enforced.    After the emissions work, in an effort to salvage his enterprise, Showalter worked out ways that appeared to have the practical potential to reduce engine friction about tenfold.    However, he could not achieve commercial ring oil control on the friction reduction package, and that oil control was key to all the rest.    It was not a hit or miss problem: he had to model coupled elastic and hydrodynamic equations on the ring design to keep track of ring oil flows.    The coupling terms that had to be accounted for were all mismodeled as "infinitesimal" by limiting arguments that, Showalter now knows, did not accommodate the arithmetical limitations of the dimensional parameters.    Perturbation approaches could not do this job.    Showalter's project failed, and he failed physically for a time.

The project may not have failed, and would not have failed as it did, if Showalter had not had epileptic difficulties that incapacitated him for several years.    One inconvenience during this period was loss of the ability to read. Some other skills were lost, too.    Showalter found the work required to regain these skills hard but interesting.    Showalter earned a Professional Engineer's designation toward the end of this period.    During this period, Showalter gained a steady interest in brain function, in health and disease, and became convinced that current brain models, such as they were, were in gross error.

In 1988, Showalter enrolled in the Department of Education, at U.W. Madison, because of an interest in how people learn to read, and an interest in other issues in learning as a developmental process.    He was interested, from the first, in neural sciences as well as education, and in the interfaces between these fields. Although some neural scientists took an interest in him, he could not, as a practical matter, enroll in the neural science program because he believed that the Kelvin-Rall transmission equation was grossly wrong, and Kelvin-Rall is a well-enforced article of faith in the neural sciences.    By about 1990, Showalter, with Kline's help, had identified the correct neural transmission equation, with effective inductances 10^10-10^19 greater than that of Kelvin-Rall, and fit it to David Regan's data, and to other data.    Showalter became convinced that nothing realistic or useful in brain modeling could be done, by himself or by anyone else, until the error in the neural transmission equation was found and fixed.    Therefore, as a modeler, he had nothing he could do but keep at the problem, with Kline's help, until it cracked.

Showalter has long believed that many questions about development and learning that are important in education hinged on the same neural transmission issues that mattered in neural science and medicine.

Especially since 1992, Showalter has been working nearly full time, with Kline's help, to find the defect at the interface between measurement and analysis that he and Kline knew must be there.    Together, we have found that defect, that traces to the arithmetical restrictions on the dimensional parameters.    We report the fix to this modeling difficulty in this transmission.

In Showalter's view, the most serious mathematical impediment to analytical engineering, analytical invention, and analytical modeling applied to science that has existed in the past is this interface defect that has caused misspecification of differential equations.    That defect, pending checking, has now been found and fixed.    He hopes to put the new tool to use in neural modeling, the study of efficient and comfortable teaching, and elsewhere.

NOTES:

1. DEFINITION: A dimensional parameter is a "dimensional transform ratio number" that relates two measurable functions numerically and dimensionally. The dimensional parameter is defined by measurements (or "hypothetical measurements") of two related measurable functions A and B. The dimensional parameter is the algebraically simplified expression of {A/B}, as defined in A = {A/B} B. The dimensional parameter is a transform relation from one dimensional system to another. The dimensional parameter is also a numerical constant of proportionality between A and B (a parameter of the system). The dimensional parameter is valid within specific domains of definition of A and B that are operationally defined by measurement procedures.

Here are some dimensional parameters: mass, density, viscosity, bulk modulus, thermal conductivity, thermal diffusivity, resistance (lumped), resistance (per unit length), inductance (lumped), inductance (per unit length), membrane current leakage (per length), capacitance (lumped), capacitance (per unit length), magnetic susceptibility, emittance, ionization potential, reluctance, resistivity, coefficient of restitution, and many more.

See section devoted to the dimensional parameters in the main text.
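The definition in note 1 can be paraphrased in code. The sketch below (ours, purely illustrative) derives a dimensional parameter from paired measurements of two measurable functions A and B: its numerical part is the ratio of the measured values, and its dimensional part is the quotient of their dimensions, so that A = {A/B} B holds both numerically and dimensionally.

```python
# Dimensions as (mass, length, time) exponents; values in consistent MKS units.

def derive_dimensional_parameter(a_value, a_dims, b_value, b_dims):
    """Build {A/B}, the dimensional transform ratio defined by A = {A/B} B."""
    value = a_value / b_value
    dims = tuple(x - y for x, y in zip(a_dims, b_dims))
    return value, dims

# Example: density as the dimensional parameter relating mass to volume.
MASS_DIMS   = (1, 0, 0)   # kg
VOLUME_DIMS = (0, 3, 0)   # m^3
density, density_dims = derive_dimensional_parameter(
    2.0, MASS_DIMS,    # measured mass: 2 kg
    0.5, VOLUME_DIMS)  # measured volume: 0.5 m^3
print(density, density_dims)   # 4.0 (1, -3, 0): kg/m^3, as expected

# Reconstruction check: A = {A/B} B, numerically and dimensionally.
assert density * 0.5 == 2.0
assert tuple(x + y for x, y in zip(density_dims, VOLUME_DIMS)) == MASS_DIMS
```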

2. Showalter, M.R. REASONS TO DOUBT THE CURRENT NEURAL CONDUCTION MODEL, submission E.

3. Feynman, R.P., Leighton, R.B., & Sands, M. THE FEYNMAN LECTURES ON PHYSICS, V. 2, Table 18-1, Addison-Wesley, 1964.

4. An equation is in consistent units if it can be expressed, term for term, in any unit system consisting of the same system of operational measurement procedures and different units of length, mass, and time raised to exponents n, m, p: L^n M^m t^p.

5. The analytical foundations of measurement theory related to the requirement here were set out by J.C. Maxwell in sections 1] - 6] of A TREATISE ON ELECTRICITY & MAGNETISM, v. 1, Dover, pp. 1-6. Maxwell's algebraic examples must be interpreted in light of Bridgman's emphasis that the units of length, mass, and time in the algebra exist in the tightly specified context of detailed operational measurement procedures. These procedures are typically complicated enough to require specification by a sketch and a short verbal instruction. ALL members of a set of consistent unit systems have EXACTLY the same operational procedures, and apply different units of length, mass, and time to these consistent operational procedures.

6. The p subscript is for labeling, and does not affect the arithmetical function of the numerical and dimensional parts of these point forms.

7. Maxwell, J.C. A TREATISE ON ELECTRICITY & MAGNETISM, V. 2, Dover Press, modified from the 3rd edition, 1891, twelve years after Maxwell's death, pp. 199-200.

8. See Kline's Stanford Web page, and its discussion of the work on streak formation, and the change in paradigm that it involved. http://www-tsd.stanford.edu/tsd/ressjk/html.

9. Submission A. Showalter, M.R., Kline, S.J. Modelling of physical systems according to Maxwell's First Method.

10. (For our purposes, we can call this the MKS-volt-coulomb system.) If you look in Rojansky's ELECTROMAGNETIC FIELDS AND WAVES (Dover) you'll see tables inside the front and back covers that give a nice sense of how constrained the choice of dimensional system is when mechanical, electrical, and magnetic units are combined.

11. In PRINCIPIA MATHEMATICA (1687) Book 2, following prop XL, Isaac Newton discusses the propagation of sound. He employs two numbers that moderns would call "dimensional parameters" in his treatment. The first is mass of air per unit volume at a point. The second is compressibility of air per unit volume at a point. These dimensional entities are only experimentally definable in finite terms, but they are set out in intensive (point) form. Numerically and dimensionally, the intensive and extensive form of these numbers is the same.

12. Compare Newton in the 1680's versus the work of Weierstrass and his school in the 1870's, set out in H. Poincare, "L'oeuvre mathematique de Weierstrass," Acta Mathematica, XXII, 1898-1899, pp. 1-18.

13. Showalter, M.R. submissions E, and H.