Appendix 1: Derivation of a finite increment equation from a coupled physical model, showing combined effect terms.

Most mathematical models of physical circumstances start by matching one or more differential equations from a handbook with a physical case. Thereafter, analysis is carried out according to the rules of abstract mathematics. In this usual procedure, the differential equation is not derived from a physical model, but is applied to that model and taken as an operationally perfect or good enough match.

However, the differential equations themselves are sometimes inferred step by step from physical models, starting from a sketch-model. This process is called "derivation," and is taught in engineering schools.

To derive a differential equation from a physical model:

       we construct a finite scale model that sets out the laws and geometry to be represented;

       then we derive (one or more) finite increment equation(s) that map that finite model.

After a finite increment equation that represents our model has been defined, we may pass that finite equation to the infinitesimal limit to yield a differential equation.
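The passage to the limit can be illustrated numerically (an editorial sketch with a hypothetical smooth voltage profile, not part of the original derivation): the centered finite increment ratio approaches the derivative as delta x shrinks.

```python
# Editorial sketch: passing a finite increment ratio to the infinitesimal limit
# numerically, for a hypothetical smooth voltage profile v(x) = exp(-x).
# The finite ratio (delta v)/(delta x) approaches dv/dx = -exp(-x).
import math

def v(x):
    return math.exp(-x)

def finite_ratio(x, dx):
    # centered finite increment ratio over the interval [x - dx/2, x + dx/2]
    return (v(x + dx / 2) - v(x - dx / 2)) / dx

x = 1.0
exact = -math.exp(-1.0)  # dv/dx at x = 1
for dx in (0.5, 0.05, 0.005):
    # the error shrinks as the finite scale delta x shrinks
    print(dx, finite_ratio(x, dx) - exact)
```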

Equation definition can require special attention if our finite model includes coupled effects.  In such a case, two equations are implicitly defined in terms of each other.

One of the simplest and most important examples is current change and voltage change along a length of conductive line (such as a wire or a length of neural tube).  Current drops are coupled to voltage and voltage drops according to logic like the following:

i over the interval is a function of v at x and x+delta x,
     which is a function of i at x and x+delta x,
          which is a function of v at x and x+delta x,
               which is a function of i at x and x+delta x,
                    and so on.

Voltage drops are coupled to current and current drops in the same nested way.

We must be able to represent the coupled effects correctly at finite scales before we can take the limit of those terms, at successively smaller finite scales, to the infinitesimal limit.   (Current procedure does not ask for meaningful finite coupled effect terms, and applies "limiting logic" to these undefined terms, which are invariably dismissed as 0's or labelled as infinities.)

Fig. 1 above shows a conducting line that could be a neural conductor.

     v = voltage                                 i = current
     x = position along the line                 alpha = arbitrary length interval
     R = resistance/length                       L = electromagnetic inductance/length
     G = membrane leakage conductance/length     C = capacitance/length

Fig. 1 above shows an arbitrarily chosen length interval alpha, which we will call delta x.

R, L, G, and C are natural law operators (Appendix 2).   They represent physical laws, and are defined as the ratio of one measurable to another under particular measurement circumstances. The natural law operators, which implicitly represent much measurement detail, are our interface between the detailed measurement procedures of physical reality and abstract equation representations of physical circumstances.  The arithmetical properties of the natural law operators are justified by inductive generalization, not axiomatic proof.  We have no provable reason to think R, L, G, and C have exactly the arithmetical properties and restrictions of numbers.   In the derivation below, we'll operate on terms including the symbols R, L, G, and C in the usual algebraic way, stopping short of algebraically simplifying the terms. We'll not interpret these terms numerically or physically here in Appendix 1, leaving that for Appendix 2.

To derive line differential equations in dv/dx and di/dx, we first need finite difference equations in (delta v)/(delta x) and (delta i)/(delta x).   For the finite equations, we'll be writing out terms that have usually been understood to exist, but that have been called infinitesimal and neglected.   Let's consider the coupled effects physically.

The idea that di/dx depends ONLY on G and C neglects crosseffects that act over delta x; in an exactly symmetric way, the idea that dv/dx depends ONLY on R and L neglects crosseffects over delta x.    (If the effects are finite over finite lengths, they MUST be represented in the differential equations that are integrated to represent these finite lengths.  See Appendix 2.)

Let's derive voltage and current equations that include crosseffects. We'll write our voltage and current functions as v(x,t) and i(x,t). We assume homogeneity and symmetry for our conductor. We assume that, for the small lengths of interest, the average voltage (average current) across the interval delta x is the average value of voltage (current) at the endpoints of the interval delta x.

Writing down voltage change as a function of the natural law operators and variables that directly affect voltage, and centering our interval at x, so that our interval goes from x-(delta x)/2 to x+(delta x)/2, we have:
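The display for equation (1) is missing from this copy. A plausible reconstruction (an editorial sketch, assuming the standard series voltage balance, with R and L acting on the interval's average current per the endpoint-averaging assumption above) is:

```latex
v\!\left(x+\tfrac{\Delta x}{2},t\right) - v\!\left(x-\tfrac{\Delta x}{2},t\right)
  = -R\,\Delta x\,\bar{i} \;-\; L\,\Delta x\,\frac{\partial \bar{i}}{\partial t},
\qquad
\bar{i} = \tfrac{1}{2}\!\left[\,i\!\left(x-\tfrac{\Delta x}{2},t\right)
          + i\!\left(x+\tfrac{\Delta x}{2},t\right)\right]
\tag{1}
```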

The current change equation is isomorphic:
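The display for equation (2) is missing from this copy. A plausible reconstruction (an editorial sketch, assuming the shunt current balance, with G and C acting on the interval's average voltage) is:

```latex
i\!\left(x+\tfrac{\Delta x}{2},t\right) - i\!\left(x-\tfrac{\Delta x}{2},t\right)
  = -G\,\Delta x\,\bar{v} \;-\; C\,\Delta x\,\frac{\partial \bar{v}}{\partial t},
\qquad
\bar{v} = \tfrac{1}{2}\!\left[\,v\!\left(x-\tfrac{\Delta x}{2},t\right)
          + v\!\left(x+\tfrac{\Delta x}{2},t\right)\right]
\tag{2}
```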

Note that:

Equation (1) includes i(x+(delta x)/2,t) and its time derivative.    
i(x+(delta x)/2,t)   is defined by equation (2).

Equation (2) includes v(x+(delta x)/2,t) and its time derivative.    
v(x+(delta x)/2,t)   is defined by equation (1).  

Each of these equations requires the other for full specification: each contains the other.

If the cross-substitutions implicit in these equations are explicitly made, each of the resulting equations will also contain the other. So will the next generation of substituted equations, and the next, and so on. This is an endless regress. Each substitution introduces new functions with the argument (x+(delta x)/2), and so there is a continuing need for more substitutions.  To achieve closure, one needs a truncating approximation.  

Expression of current, voltage, and their time derivatives at x, the midpoint of the interval, truncates the series.
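The regress and its closure can be made concrete with a small numerical sketch (editorial, with hypothetical values; steady state is assumed, so the L and C time-derivative terms vanish). The "endless regress" of cross-substitution is a fixed-point iteration, and the truncating closure is the direct solution of the two coupled endpoint equations:

```python
# Editorial sketch (hypothetical values): the mutual definition of the endpoint
# values v(x+dx/2) and i(x+dx/2) in the coupled finite increment equations,
# in steady state, using the endpoint-averaging assumption from the text.
R, G = 0.1, 0.02     # hypothetical resistance/length and leakage conductance/length
dx = 0.5             # hypothetical interval length
vL, iL = 1.0, 0.3    # known values at the left endpoint x - dx/2

a, b = R * dx / 2, G * dx / 2

# Closure: solve the two coupled equations directly as a 2x2 linear system:
#   vR + a*iR = vL - a*iL      (voltage drop from averaged current)
#   b*vR + iR = iL - b*vL      (current drop from averaged voltage)
det = 1 - a * b
vR_direct = ((vL - a * iL) - a * (iL - b * vL)) / det
iR_direct = ((iL - b * vL) - b * (vL - a * iL)) / det

# Regress: repeated cross-substitution, each pass using the other equation's
# latest estimate of the unknown endpoint value.
vR, iR = vL, iL
for _ in range(60):
    vR = vL - a * (iL + iR)
    iR = iL - b * (vL + vR)

# the regress converges to the closed-form solution
print(abs(vR - vR_direct), abs(iR - iR_direct))
```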

A key point of this paper is that we have not been sure how the arithmetic (and the dimensional or scale limitations) of these symbols works.   If we ask how many times the length increment delta x should be counted when several natural law operators each act over that same interval, the conventional answer is "N times."    But we are not actually sure of this, and the answer that consistency requires is "once."   (See Appendix 2.)    Let's proceed with these substitutions, associating symbols without interpreting them numerically or physically.

For example


which expands algebraically to
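The displays for this substitution are missing from this copy. A plausible reconstruction (an editorial sketch): substituting equation (2)'s expression for i(x+(delta x)/2,t) into equation (1) and expanding gives

```latex
\Delta v = -R\,\Delta x\; i\!\left(x-\tfrac{\Delta x}{2},t\right)
           - L\,\Delta x\,\frac{\partial}{\partial t}\, i\!\left(x-\tfrac{\Delta x}{2},t\right)
           + \tfrac{1}{2}\{RG(\Delta x)^2\}\,\bar{v}
           + \tfrac{1}{2}\{RC(\Delta x)^2\}\,\frac{\partial \bar{v}}{\partial t}
           + \tfrac{1}{2}\{LG(\Delta x)^2\}\,\frac{\partial \bar{v}}{\partial t}
           + \tfrac{1}{2}\{LC(\Delta x)^2\}\,\frac{\partial^2 \bar{v}}{\partial t^2}
```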

These terms would be simpler if voltage averages and derivative averages were taken at the interval midpoint, x, as follows:
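The display for (6) is missing from this copy. A plausible reconstruction (an editorial sketch, consistent with the midpoint truncation described above) is:

```latex
\Delta v \approx -R\,\Delta x\; i(x,t) - L\,\Delta x\,\frac{\partial i(x,t)}{\partial t}
           + \tfrac{1}{2}\{RG(\Delta x)^2\}\,v(x,t)
           + \tfrac{1}{2}\{RC(\Delta x)^2\}\,\frac{\partial v(x,t)}{\partial t}
           + \tfrac{1}{2}\{LG(\Delta x)^2\}\,\frac{\partial v(x,t)}{\partial t}
           + \tfrac{1}{2}\{LC(\Delta x)^2\}\,\frac{\partial^2 v(x,t)}{\partial t^2}
\tag{6}
```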

How may terms like those of (6) be interpreted, physically and numerically, at finite scale?   In these expressions, two natural law operators are EACH associated with the SAME interval of length.  Do the lengths multiply?  On what authority do we say that the lengths multiply?  Are there restrictions on the scale at which the multiplication can be done?  If the lengths do multiply, what does this represent physically?   Does the multiplication make numerical sense, and is that multiplication consistent with tests the expression must pass?   It turns out that if we apply standard arithmetical rules to these crossterms, we are led to mathematical and physical inconsistencies. (Appendix 2)    We have no axiomatic reason to be surprised by this.

The equation below shows voltage change over an interval of length delta x, divided by the length delta x to produce a gradient form analogous to a derivative.   Terms derived from three stages of cross substitution are shown.   Symbols are grouped together and algebraically simplified up to the point where the meaning of further algebraic simplification of relations in the dimensional parameters R, L, G, C, and delta x becomes unclear.   Expressions in curly brackets are NOT YET DEFINED.
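The display for equation (7) is missing from this copy. A plausible reconstruction of its first substitution stage (an editorial sketch, assuming the endpoint-averaging model above; the second and third substitution stages would contribute further curly-bracketed groups in (delta x)^2 and (delta x)^3, indicated by the ellipsis) is:

```latex
\frac{\Delta v}{\Delta x} = -R\, i(x,t) - L\,\frac{\partial i(x,t)}{\partial t}
   + \left\{\tfrac{1}{2}RG\,\Delta x\right\} v(x,t)
   + \left\{\tfrac{1}{2}RC\,\Delta x\right\} \frac{\partial v(x,t)}{\partial t}
   + \left\{\tfrac{1}{2}LG\,\Delta x\right\} \frac{\partial v(x,t)}{\partial t}
   + \left\{\tfrac{1}{2}LC\,\Delta x\right\} \frac{\partial^2 v(x,t)}{\partial t^2}
   + \cdots
\tag{7}
```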

The current gradient equation over the same interval is isomorphic to 7 with swapping of v for i, R for G, and L for C.

Whenever coupled physical effects act over an interval of space, combined effect terms are to be expected. Rules for their interpretation must be found. Those rules are beyond the authority of the axioms of pure mathematics, but consistent rules for interpreting these expressions can be inferred from mathematical experiments. (Appendix 2)

Combined effect terms such as those shown here are seldom derived, because they are thought to always vanish in the limit.    That dismissal embodies an assumption about scale or numerical limits on the arithmetic involved.

However, expressions such as those in the curly brackets of (7), interpreted in a consistent way, are finite, and yield finite terms in differential equations. Often such combined effect terms are negligible, but sometimes they are important.



Procedures for representing physical models in equation form cannot be determined from our axioms because our axioms are limited to abstract domains. But representation procedures can be examined by means of experimental mathematics. Valid representation procedures must be consistent with computational consistency tests. Current techniques for calculating the interaction of several natural laws over a spatial increment fail tests that valid representation requires, and are ruled out. A consistent technique is proposed. According to the proposed technique, terms in some equations that have been thought to be infinitesimal are finite. Implications in neural medicine and other fields that deal with the brain appear to be large (see appendix 4).

Some seem to feel that mathematics is axiomatic construction and nothing else, but that, nevertheless, axiomatic construction can sometimes be mapped to useful work. The jump from the abstract to the concrete is held to occur by some discontinuous and unexplained process. A smoother, better explained transition between the abstract and the concrete seems desirable. Mathematics already interfaces with experimental usages, and has long been pushed toward experimental approaches by the computer(1).

G.C. Chaitin has shown that many things in pure math are "true for no (axiomatically provable) reason at all"(2). Chaitin suggests that where existing axioms don't apply, new organizing assumptions may be considered, and may be useful. K. Gödel advocated experimental approaches in mathematics on similar grounds(3). Even the interior of mathematics has experimental aspects. Some degree of experimental math seems justified and useful even in number theory.

The interface between abstract mathematics and the representation of physical circumstances can be investigated experimentally, as well.

There may be many reasons to investigate this interface between abstraction and concrete representation. My main one is concern about the correct form of the neural transmission equation. Medically important differences in neural line inductance, which can be 10^18:1 or larger, hinge on a question that is beyond the jurisdiction of the axioms of pure mathematics. That question can be clarified, and perhaps entirely resolved, with experimental mathematics.

Conclusions based on mathematical experiments always lack the certainty of an axiomatic basis. Even so, some much-tested conclusions may be useful, and using them as new assumptions can permit useful logical work that would not be possible otherwise. Experiment-based inferences (assumptions) are now widely used in cryptography and other computer-based fields.

Results of mathematical experiments cannot prove with axiomatic certainty, but they can disprove. When mathematical experiments show counterexamples to an assumption, that assumption has been ruled out.

Even within pure math, where axioms reign, there are good reasons to use experimental approaches to test and organize ideas that we may wish to use, where our axioms cannot be brought to bear. This supplements axiomatic usages without violating them.

In mathematical representation of PHYSICAL circumstances, set out in terms of experimentally derived physical laws, we are using mathematical techniques beyond where the axioms of pure math apply. If we are to proceed at all, we must use experimental mathematics.

Here is the logic of experimental work:

          E1. In experimental work, candidate assumptions are somehow recognized or guessed.
          (No testing can happen before we focus on something to test.)

          E2. Candidate assumptions are tested against evidence. So long as an assumption
          survives all tests, it is used (with some wariness) as a provisional assumption.

          E3. Assumptions that evidence contradicts are rejected, or the assumptions are modified
          so that they do fit evidence.

If we use these experimental approaches we may sometimes usefully organize, extend, and focus our knowledge beyond the realm of our axioms.     If we do not use these approaches, we cannot go beyond our axioms at all.

When we mathematically represent a physical circumstance, we are beyond our axioms. Let's call that representation process "p-m representation" for "representation from physical model to mathematical model."

(We'll assume that a workable p-m representation can be reversed in a m-p representation so that we can start with a physical model, convert it into a statement in abstract mathematics, operate on the abstract mathematical statement, and then relate that statement in abstract mathematics back to the physical model without misinterpreting or losing information of interest to us.)

We have NO axioms for p-m representation or m-p representation. We must determine the representation procedures of valid p-m representation and m-p representation on EXPERIMENTAL grounds.

Here is the p-m representation problem in more detail. When we derive an equation representing a physical model, reasoning from a sketch and other physical information, we write down symbols and terms representing physical effects. We may write down several stages of symbolic representation before we settle on our "finished" abstract equation.   As we write our symbols, we implicitly face the following question:

       Question: WHEN can we logically forget that the symbols we write represent a physical model?
       WHEN can we treat the equation we've derived from a physical model as a context-free abstract
       entity, subject only to the exact rules of pure mathematics?

We can never do so on the basis of rigorous, certain, clearly applicable axioms. There are no such axioms. We cannot avoid making an implicit assumption that says

     "THIS equation can be treated as a valid abstract equation, without further concern about its
     context or origin, because it seems right to do so, or because it is traditional to do so.
     We have made the jump from concrete representation to valid abstraction HERE."

This assumption may happen to be right in the case at hand. But the assumption about p-m representation is not provably true from the axioms and procedures of pure mathematics.    People go ahead and make these sorts of assumptions as they work. They cannot avoid doing so. Right or wrong, they are making "experimentally based" assumptions in their representation-derivations. People have made these implicit assumptions without recognizing the essentially experimental nature of their proceedings. It is better that this experimental nature be recognized, so that consistency checks can be applied to the unprovable steps. Any inconsistencies involved with these implicit steps may then be identified.

For any particular case of p-m representation, decisions are being made in a context of EXPERIMENTAL MATH at the interface between abstract math and physical circumstances. If a counterexample or inconsistency pertaining to a p-m representation usage is found, that is an extra-axiomatic circumstance. The extra-axiomatic usages that are failing as p-m representative tools should be modified so that they pass the consistency tests that right p-m representation requires. Such modifications may disturb habits, but they need not, and commonly cannot, disturb the axioms of pure mathematics.

The Kelvin-Rall neural transmission equation derivation is based on an implicit, unprovable assumption about p-m representation:

     USUAL P-M REPRESENTATION ASSUMPTION: Abstract mathematical usages and p-m representative
     usages are the SAME. When we are representing a physical circumstance with mathematical
     symbols, those symbols are NUMBERS, and nothing more, the instant they are written down.
     All our rules of abstract mathematics apply immediately to our symbolic constructions.

On the basis of this USUAL P-M REPRESENTATION ASSUMPTION, all of the crossterms in equations 7, 8, and 9 are ill defined. Here is equation 7, derived in detail in Appendix 1.   At a finite scale delta x each of these crossterms (terms below the first line) must correspond to finite physical effects. We have NO axiomatic guidance for computing these compound expressions.
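The display for equation 7 is missing from this copy. A plausible reconstruction of its first substitution stage (an editorial sketch, consistent with the derivation sketched in Appendix 1; higher substitution stages would add further curly-bracketed groups) is:

```latex
\frac{\Delta v}{\Delta x} = -R\, i(x,t) - L\,\frac{\partial i(x,t)}{\partial t}
   + \left\{\tfrac{1}{2}RG\,\Delta x\right\} v(x,t)
   + \left\{\tfrac{1}{2}RC\,\Delta x\right\} \frac{\partial v(x,t)}{\partial t}
   + \left\{\tfrac{1}{2}LG\,\Delta x\right\} \frac{\partial v(x,t)}{\partial t}
   + \left\{\tfrac{1}{2}LC\,\Delta x\right\} \frac{\partial^2 v(x,t)}{\partial t^2}
   + \cdots
\tag{7}
```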

We are referring to products of p-m representation procedures, not to axiom-based entities.    We must judge the procedures we use to compute these compound expressions by experimental standards. Do these representations map the territories we expect when we check them?    We may if necessary modify those procedures for consistency without violation of any axiom.

We must know what these representations mean numerically. If our computation is valid, the magnitude of a term at a set value of delta x and a set value of the independent variable must be unique. After all, our limiting argument is an argument that deals with a decreasing sequence of finite terms.   Before we can validly take the limit of equation 7 (set out in the main paper, and derived in more detail in Appendix 1), and derive a differential equation from it, we must know the magnitude of the crossterms for any finite delta x we choose.   If we proceed according to the USUAL P-M REPRESENTATION ASSUMPTION, we find that our crossterms are not well defined.   Equation 7b is the finite increment form of equation 7, which is in (delta v)/(delta x) form.

The indeterminacy of these crossterms according to the USUAL P-M REPRESENTATION ASSUMPTION can be shown in the following ways.   The difficulties set out below also apply to other crossterms that represent the combination of physical laws over an increment of length, area, volume, or time.

Numerical indeterminacy under "permitted" algebraic manipulations:

We have been taught to assume that the crosseffect-containing terms, such as the curly bracketed terms in (7), consist of symbols that are "just numbers." We should be able to algebraically simplify each of these crossterms in many different sequences that involve dimensional unit changes, so long as the end of each of the sequences is in the same dimensional units. The numerical values of all such paths should be the same. They are not.  See Fig. 1 below.  An algebraically unsimplified dimensional group that includes products or ratios of dimensional numbers, such as one of those in curly brackets in (7), is set out in cm length units at A. This quantity is algebraically simplified directly in cm units to produce "expression 1." The same physical quantity may be translated from A into a "per meter" basis at C. The translated quantity at C can then be algebraically simplified to D. The expression at D, expressed in meter length units, is converted to a "per cm" basis to produce "expression 2." Expression 1 and Expression 2 must be the same, but they are not. The calculation is not consistent with itself.

By repeating different "valid" computational loops in this way, any of the crossterms in curly brackets in (7) or (7b) can be changed to any value at all, large or small.   This is not the valid arithmetical behavior that we conventionally and thoughtlessly expect!   The loop test of Fig. 1 above shows that these crossterms are meaningless as usually calculated, and the reason is as follows:

          Before algebraic simplification, going from one unit system to another adjusts not just
          the numerical values of the dimensional properties in the different unit systems, but the
          numerical value corresponding to the spatial variable as well.

          After algebraic simplification, adjusting the expression to a new unit system adjusts the
          numerical values corresponding to the unit change for the dimensional properties only,
          with no corresponding adjustment for the absorbed spatial variable.

The result is an irreversible, numerically absurd, but now standard mathematical operation.


Contradiction between differential equations and the models they came from.

Suppose we assume that the symbols in the crossterms are all "just numbers." When we take the limit as delta x goes to zero on that assumption, these crossterms are all infinitesimal.   So the differential equation we derive on this basis lacks these crossterms.

We take our differential equation, and integrate it back up to a specific scale delta x.   We get an equation that lacks the crossterms that we know existed at scale delta x in the first place.    The values at the same point, derived by two "correct calculations," are inconsistent, and can be very different.


Crossterms also fail a standard test that map-representations should pass, the test that the whole should equal the sum of its parts:

In physical representations, wholes should equal the sums of which they consist.   Consider any of the terms below the first line of 7 or 7b. Suppose any term, evaluated at interval delta x, is instead set out as the sum over a number of subintervals adding up to delta x. If delta x is divided into n pieces, and those n subintervals are computed and summed, that sum will be only 1/nth (or 1/n^2) of the value for the same expression computed over interval delta x in one step. We can make the value of the term on the interval delta x vary widely, depending on how many subintervals we choose to divide delta x into. This cannot represent PHYSICAL behavior.   These terms are supposed to represent physical behavior.
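The subdivision test just described is easy to run numerically (an editorial sketch with hypothetical values): a crossterm quadratic in the increment fails the whole-equals-sum-of-parts test under ordinary arithmetic, while a term linear in the increment passes it.

```python
# Editorial sketch (hypothetical values): the subdivision test applied to a
# crossterm of the form R*G*(dx**2)*v, computed by ordinary arithmetic, and
# to a term linear in the increment, such as R*dx*i.
R, G = 0.1, 0.02     # hypothetical per-length operators
v, i = 1.0, 0.3      # hypothetical variable values, taken uniform over the interval
dx = 1.0

def crossterm(length):          # quadratic in the increment
    return R * G * length**2 * v

def linear_term(length):        # linear in the increment
    return R * length * i

whole_cross = crossterm(dx)
whole_linear = linear_term(dx)
for n in (2, 10, 100):
    parts_cross = n * crossterm(dx / n)     # sum over n equal subintervals
    parts_linear = n * linear_term(dx / n)
    # the quadratic crossterm sums to 1/n of the one-step value;
    # the linear term is independent of the subdivision
    print(n, parts_cross / whole_cross, parts_linear / whole_linear)
```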


The USUAL P-M REPRESENTATION ASSUMPTION is that the symbols we write down are "just numbers" the instant we write them down. In the case of these crossterms that represent multiple physical effects over the same spatial increment, the usual assumption fails. So we need to look more closely at the details of what we are representing, the symbols we use to do that representing, and the procedures that apply to those symbols.   We need representative procedures that interface with our physical model on one side and with abstract mathematical usages on the other, and that avoid the representative contradictions shown above.

When we look at how physical models are represented by mathematics, we have NO axioms to rely on, and we have NO valid intuition to guide us. We must rely on the ordinary patterns of experimental investigation.

According to E3 below, we are seeking a modification of current p-m representation procedure that maps these crossterms validly into axiomatic, abstract mathematics without changing other p-m representation procedures, now established, that we have no reason to doubt.

We are violating no valid axiomatic principles when we use experimental approaches to find a p-m representation that passes all operational tests needed to validly map to abstract equations.   If a valid p-m representation procedure is found, that empowers axiomatic mathematics, and in no way diminishes it.

If we use the patterns of experimental logic and investigation with the same care that other people have applied to many other technical problems, operationally valid experimental rules for representation can be found, tested, and verified.  

       Here again are the experimental patterns:

E1. In experimental work, candidate assumptions are somehow recognized or guessed. (No testing can happen before we focus on something to test.)

E2. Candidate assumptions are tested against evidence. So long as an assumption survives all tests, it is used (with some wariness) as a provisional assumption.

E3. Assumptions that evidence contradicts are rejected, or the assumptions are modified so that they do fit evidence.

This is not axiomatics, but we are beyond the axioms of pure mathematics, and have no other axioms. Experimental logic and investigation are all we have.

Operational definition of representative entities and inference of arithmetical rules that apply to them in p-m representation.

The jump between a physical system model, defined in terms of drawings, measurement procedures and other detail, and the abstract mathematical representation of it is taken for granted, but not usually set out clearly.   S.J. Kline and I have tried to understand at a defined, procedural level how measurable circumstances are mapped to mathematical equations.   Kline had written a respected book tightly connected with the subject(4). A first task was to identify the natural law operators, sometimes called dimensional parameters, in procedural detail.

The natural law operators are the entities that interface between our experimental measurements and the formalities of abstract, symbolic mathematics. Some directly measurable natural law operators (often referred to as properties) include the resistance/length R, inductance/length L, leakage conductance/length G, and capacitance/length C used above.

There are many, many more.

All are defined according to the same pattern:

DEFINITION: A natural law operator is a "dimensional transform ratio number" that relates two measurable functions numerically and dimensionally. The natural law operator is defined by measurements (or "hypothetical measurements") of two related measurable functions A and B. The natural law operator is the algebraically simplified expression of {A/B}, as defined in A = {A/B} B. The natural law operator is a transform relation from one dimensional system to another. The natural law operator is also a numerical constant of proportionality between A and B (a parameter of the system). The natural law operator is valid within specific domains of definition of A and B that are operationally defined by measurement procedures.

Example: A resistance per unit length determined for a specific wire for ONE specific length increment and ONE specific current works for an INFINITE SET of other length increments and currents on that wire (holding temperature the same.)   (Unrelated measurables could also be expressed as ratios, but such ratios would describe only one point, not an infinite set of points.)
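A minimal numerical sketch of this pattern (editorial, with hypothetical measurement values): one measurement defines the operator, and the operator then predicts voltage drops for other lengths and currents on the same wire.

```python
# Editorial sketch (hypothetical numbers): a resistance-per-unit-length operator
# defined by ONE measurement on a wire predicts the voltage drop for any other
# length increment and current on that wire (temperature held constant).
v_drop_measured = 0.05        # volts, measured across one 2.0 cm increment at 1.0 A
dx_measured, i_measured = 2.0, 1.0

# the natural law operator, in ohms per cm: the ratio of one measurable to another
R = v_drop_measured / (dx_measured * i_measured)

def predicted_drop(length_cm, current_a):
    # Ohm's law for the line, using the operator defined by the single measurement
    return R * length_cm * current_a

# a different length and current on the same wire: about 0.0875 V
print(predicted_drop(7.0, 0.5))
```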

The natural law operators are not axiomatic constructs. They are context-based linear constructs that encode experimental information.

We are concerned with the arithmetical properties of the natural law operators because of the inconsistencies related to crossproducts including spatial entities that have been discussed above.

Let's review the arithmetical properties relating to the natural law operators that we have no reason to doubt, and much reason to be sure of:

Natural law operators work just like dimensional numbers when they are used in exact correspondence with the equation that defines them.

For example, resistance per unit length is the numerical and dimensional transform that expresses Ohm's law, and acts "just like a number" in expressions of Ohm's law:
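The display is missing from this copy; a plausible form (an editorial sketch, assuming the per-unit-length statement of Ohm's law for the series voltage drop along the line) is:

```latex
v(x,t) - v(x+\Delta x,t) = (R\,\Delta x)\; i(x,t)
```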

Natural law operators may be combined to form compound natural law operators.  

DEFINITION: A compound natural law operator is a "dimensional transform ratio number" that relates two measurable functions numerically and dimensionally. The compound natural law operator is a transform relation from one dimensional system to another.  The compound natural law operator is also a numerical constant of proportionality between one measurable value and another.  The compound natural law operator is the product or ratio of two natural law operators, sometimes in association with a spatial increment. The compound natural law operator is valid within specific domains of definition of the natural law operators that define it.

Natural law operators act "just like numbers" when they multiply or divide to form a compound natural law operator that does not include an increment of space (length, area, volume, or time.)    The Heaviside equations, the conductance equations that apply to a line conductor, such as a wire, are examples. Here is the Heaviside equation for voltage, and the constructed natural law operators that apply to it, operationally defined. The products LC, RC, and LG are compound natural law operators that relate the derivatives and variables shown. They are calculated, numerically and dimensionally, just like other products of dimensional numbers:

LC, RC, LG, and RG act as compound natural law operators as follows:
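The displays are missing from this copy. The standard Heaviside voltage equation, consistent with the compound operators named in the surrounding text, is:

```latex
\frac{\partial^2 v}{\partial x^2}
  = LC\,\frac{\partial^2 v}{\partial t^2}
  + (RC + LG)\,\frac{\partial v}{\partial t}
  + RG\, v
```

Here LC multiplies the second time derivative, RC and LG multiply the first time derivative, and RG multiplies the variable itself; no spatial increment appears inside any compound operator.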

Mathematical and engineering practice has long depended on our ability to multiply and divide natural law operators in this (scale independent) way.   There is NO axiomatic reason why we can treat natural law operators as ordinary dimensional numbers when we calculate compound natural law operators that do not include spatial increments.   But we have solid experimental support for the fact that we can do so. That evidence goes back to celestial mechanics calculations now nearly three hundred years old, and has been essential all through the history of mathematical physics.

We have practically no experience with compound natural law operators that contain spatial increments, however.    J.C. Maxwell and others worked with such constructs, and were often frustrated in calculational sequences.   Indeed, for reasons reviewed above, we have solid calculational experimental support for the fact that we CANNOT treat compound natural law operators including spatial increments, such as those in the curly brackets below, as "just numbers."

However one may wish to describe or think about our difficulties with these constructs, what is numerically essential is that we infer a rule that is a valid p-m representation.    In physical representations, wholes should equal the sums of which they consist. This is an essential test in cartography, the literal mapping of physical spaces that is the type case of our representations. If the sum of a term over an interval is to be independent of the number of (evenly divided) subintervals into which that interval is divided, that term must be proportional to the following relation:
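The display is missing from this copy. The condition it must express (an editorial sketch, assuming the subdivision test described above) is first-order homogeneity in the increment:

```latex
T(\Delta x) = n\,T\!\left(\frac{\Delta x}{n}\right)\ \text{for all integers } n
\quad\Longleftrightarrow\quad
T(\Delta x) \propto (\Delta x)^{1}
```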

Every term on the right side of 7b is already linearly related to length (m=1) by the delta x outside the curly bracketed compound natural law operator expressions. The compound natural law operator terms cannot have any length dependence at all.  Otherwise, the terms cannot describe physical behavior. The argument for other compound natural law operator terms (with area or volume increments) will be the same.

For numerical consistency, compound natural law operator expressions in terms such as those shown in 7b must be valid numerical coefficients numerically independent of increment scale, just as other natural law operators are independent of increment scale.   

Even so, for DIMENSIONAL consistency, the dimensional exponents of the increments in the compound natural law terms must be ADDED in the usual way.  We know that in a valid equation, every term must have the same net dimensions. (Suppose not: with an algebraic rearrangement, one side of the equation would have different dimensions from the other.) In appendix 1, equation 7 is derived by valid dimensional number algebra - every term is dimensionally correct.   In every term where an increment occurs, its dimensionality is added in computation of the dimensionality of the term.   We have found reason to change (restrict) the numerical arithmetic procedures used to simplify (define) some of these terms, but the changes must preserve the calculation of dimensionality, which is correct.

We infer the following P-M REPRESENTATION RULE:

ASSUMPTION: When the symbols that represent natural laws are combined to form a new natural law, there are special rules for putting them together. Only AFTER combination according to these rules can a symbolic construction be formed that can be dealt with according to ordinary rules of algebra.

Specifically: Combined effect terms will include constructed natural law operators, each comprising several natural law operators and (perhaps) increments of space or time variables. Constructed natural law operators are computed (would be algebraically simplified) as follows:

        numerical part:   The numerical parts of the natural law operators making up the constructed natural law operator are multiplied (divided).  The numerical parts of any increments in the constructed natural law operator are not part of the multiplication or division (i.e. they are set at a numerical value of 1.0).  The numerical value of the constructed natural law operator is therefore numerically independent of the increment scale at which it is evaluated.  This requirement is satisfied if we restrict the algebraic simplification to an increment scale with a numerical coefficient of unity in the dimensional system in which the algebraic simplification is done.

        dimensional part:   The dimensional exponents of all natural law operators and any associated increments in the constructed natural law operator are added (subtracted).  This requirement is also satisfied if we restrict the algebraic simplification to an increment scale with a numerical coefficient of unity in the dimensional system in which the algebraic simplification is done.

This rule produces constructed natural law operators that are increment scale insensitive.    Once the constructed natural law operators are algebraically simplified (that is, defined in an arithmetically workable way), these operators apply at any scale.
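A minimal numerical sketch of the rule (the names and values below are illustrative assumptions, not from the paper) makes the scale insensitivity concrete:

```python
# Sketch of the proposed P-M representation rule (illustrative values).
# A constructed natural law operator is built from two natural law
# operators a and b together with a spatial increment dx.

a, b = 2.0, 5.0   # numerical parts of two natural law operators

def conventional(dx):
    # ordinary algebra: the increment's numerical value multiplies in,
    # so the "coefficient" depends on the scale chosen for simplification
    return a * b * dx

def pm_rule(dx):
    # proposed rule: the increment's numerical part is set to 1.0;
    # only its dimensional exponent is carried (not modeled here)
    return a * b * 1.0

for dx in (1.0, 0.1, 0.001):
    print(dx, conventional(dx), pm_rule(dx))
# conventional() changes with dx; pm_rule() returns 10.0 at every scale,
# i.e. the constructed operator is increment scale insensitive.
```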

For compound natural law operators without increments, this rule reduces to the procedure we've used for centuries.  The rule differs for compound natural law operators that include increments, and it avoids the self-contradictory behavior those entities have shown.

According to this rule, crossterms in equations derived from coupled physical circumstances are numerically determinate under the permitted algebraic manipulation. There is no longer any contradiction between differential equations and the models they came from. Wholes equal sums of parts.

The rule may be rephrased in a way some may find easier to understand; it was expressed as follows in the main paper:

When we derive a finite increment equation from a coupled finite increment physical model, that equation will include terms that represent crosseffects including several physical law operators and several increments in interaction together.

           We have no axiomatic basis for deciding what the proper scale or unit system for algebraic simplification of these terms should be.

           We know that choice of simplification scale and unit system matters numerically. Therefore, consistency requires us to specify the scale and unit system conditions for valid algebraic simplification.

Self-consistent results are obtained if we insist that algebraic simplification be done at a physical scale (of length, area, volume, etc.) with a numerical value of 1.0 in the unit system in which algebraic simplification is done.    This physical scale can be as large or small as we choose, since we can also choose any consistent unit system for expressing our measurements. After algebraic simplification (at a numerical scale of unity), we can convert our calculation to whatever consistent unit system we choose.

For example, the expressions within the curly brackets of equations (7), (8) and (9) are physical interpretations of natural laws that happen to have been "effectively measured" at scale delta x. To compute a natural law coefficient that corresponds to the expression in the curly brackets, and that is valid at any scale, including differential scale, we convert to a consistent measurement unit system where length delta x is 1 length unit.    (Or we evaluate a "delta x" of 1 length unit in the measurement unit system we are using.)

With our unit system (or measurement) chosen so that the numerical value of delta x=1, we algebraically simplify the expressions in the curly brackets.  That done, we convert back to the unit system of our overall calculation, if we have departed from it.  We then have an equation that is arithmetically isomorphic to ordinary algebra, and that will not generate false infinitesimals or infinities. The equation we had before was not arithmetically isomorphic to ordinary algebra, and our false infinities and infinitesimals trace from that.
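The "simplify at unity scale, then convert back" recipe can be checked numerically. In the sketch below (all names and values are illustrative assumptions), r and c stand for two per-length natural law operators, and the crossterm coefficient carries dimensions of one over length squared:

```python
# Sketch of "simplify at a scale whose numerical value is 1.0 in the
# working unit system, then convert back."  Illustrative values only.

r_cm, c_cm = 2.0, 5.0   # operator values per cm
                        # (think resistance and capacitance per cm)

def simplify_at_unity(dx_cm):
    """Pick a length unit equal to dx_cm, so the increment is numerically
    1 in that unit system; simplify there; convert the (1/length**2)
    crossterm coefficient back to per-cm units."""
    r_u = r_cm * dx_cm          # per-new-unit value (dx_cm cm per unit)
    c_u = c_cm * dx_cm
    cross_u = r_u * c_u * 1.0   # increment is numerically 1 in this system
    return cross_u / dx_cm**2   # convert 1/unit**2 back to 1/cm**2

for dx in (2.0, 0.5, 37.0):
    print(dx, simplify_at_unity(dx))
# every choice of simplification scale returns r_cm * c_cm = 10.0:
# the recipe is independent of the scale at which simplification is done.
```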

We may also say:

How many times should the numerical value of the increment be counted in a crossterm: once, or n times over? The old, conventional answer is "n times." The difference between "once" and "n times" is usually insignificant, but in the case of neural transmission it makes a very large difference.
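A numerical sketch of "once" versus "n times" (illustrative numbers, not from the paper; the inner power p selects how often the increment's numerical value multiplies into the crossterm):

```python
# "Once" versus "n times": each of n subintervals contributes a crossterm
#     a * b * dx**p * dx
# where p = 0 counts the increment once (only the outer dx) and p = 1
# counts it a second time inside the constructed operator.

span, a, b = 1.0, 2.0, 5.0   # interval length and two operator parts

def total(n, p):
    dx = span / n
    return n * (a * b * dx**p * dx)

for n in (10, 1000, 100000):
    print(n, total(n, 0), total(n, 1))
# p = 0: the total stays a*b*span = 10 at every subdivision ("once").
# p = 1: the total is a*b*span**2/n, vanishing as n grows -- the
# conventional "n times" treatment produces a false infinitesimal.
```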

We may also infer a consistent notation for evaluation of equations like (7), (8), and (9), in another way, at a differential (point) scale.

The notion of a dimensional parameter at a point, or of spatial increments at a point, has long been ill-defined.  What does "resistance per unit length" mean at a point?  Doesn't that require a notion of "length at a point"?  What might "length at a point" be?  In the evaluation and interpretation of compound natural law operators including spatial increments, the algebraic simplification is an "effective measurement."  We need notations for the spatial increments at a point that make measurement sense, and that yield results that work consistently when they are integrated.  The following convention passes consistency tests.

When we derive a differential equation (defined at a point) from a coupled finite increment physical model, we must put ALL the variables and increments in our model equation into POINT FORM prior to algebraic simplification. The point forms of spatial quantities and time (in cm and second units) are:

Differential equations so derived, and integrated to a finite scale, correspond to equations evaluated at that finite scale by the rule above. Again, false infinitesimals and false infinities are avoided.

The S-K equation follows from application of this rule to constructed natural law operators that include spatial increments, and the results are the same ones that follow from algebraic simplification of crossterms at a unity spatial scale, followed by passing to the infinitesimal limit.

We can represent combined physical effects that act over spatial increments as finite terms in differential equations.

Summary: Experimental Math at the edge of axiomatics:

This appendix has treated calculations at the INTERFACE between abstract mathematics and the measurable world.   

In mathematical representation of PHYSICAL circumstances, set out in terms of experimentally derived physical laws, we are using mathematical techniques beyond where the axioms of pure math apply.    If we are to proceed at all, we must use experimental mathematics.   This paper has done so.

     The results are not so sure as axiomatic results can be, and the negative results are more sure than the positive ones.

     We can rule out current interpretations of crossterms that call them infinitesimal in the limit. That is a strong result.

We can suggest a P-M REPRESENTATION RULE that is a simple change to a currently accepted rule. The P-M REPRESENTATION RULE is consistent with all the physical and mathematical issues that have been considered.   It is a suggestion that we can hold to be probable, and that we can compare to further calculations and to physical data.  The rule assumes that the natural law operators that multiply or divide numerically in compound natural law operators with increments interact arithmetically in the same way that natural law operators in compound natural law operators without increments do, but that spatial increments must be evaluated at a numerical value of unity.   That seems reasonable, and the arguments for the arithmetical restriction seem strong.  Still, this arithmetical procedure is an unprovable assumption applied to extra-axiomatic circumstances.   We have gone beyond the range where axioms determine results.   There is no trick that can conjure up axioms for us here: we must work on an experimental basis, as we have done.

However, the results so far are useful.   The Kelvin-Rall neural conduction equation, which lacks inductance, is strongly ruled out.    The Showalter-Kline neural conduction equation follows from a consistent, reasonable procedure that can be tested further.   It is reasonable that we should be left with a conclusion of experimental math that must be subject to further experimental verification or disproof.


Dedication: Professor Stephen J. Kline, of Stanford University, author of SIMILITUDE AND APPROXIMATION THEORY [4] and one of the great mathematical and experimental fluid mechanicians of this century, was my partner in the work leading up to this paper.   We worked together on this for almost ten years, up to his death in November of 1997. Steve's contributions were many and indispensable. Steve thought hard about the problems of physical representation, and was completely clear about the need to find and fix an error at the interface between the representation of coupled physical models at the level of a sketch, and representation by a differential equation.    The notion that measurables, and constructions of measurables, were ENTIRELY outside the axioms was hard for both of us.   Steve kept thinking about it, and kept me thinking about it, till his life ended.



1. G. C. Chaitin "Randomness in arithmetic and the decline and fall of reductionism in pure mathematics" p. 25 in G.C. Chaitin THE LIMITS OF MATHEMATICS Springer-Verlag, Singapore 1998.

2. G.C. Chaitin "An Invitation to Algorithmic Information Theory" in Chaitin, op. cit. p. 80

3. K. Gödel, COLLECTED WORKS, V. 3, manuscript "*1951," cited in Chaitin, op. cit. p. 85.

4. S.J. Kline SIMILITUDE AND APPROXIMATION THEORY McGraw-Hill, 1967, Springer-Verlag, 1984.