
Rational Representations

This is a rough copy of the handout LeRoy Eide produced for his April 4th, 2008 talk at the Utah Logic Group meeting. This copy incorporates a few changes he made through 21-Apr-2008.

It seems that a more current copy tends to reside at: http://www.dyalog.com/dfnsdws/n_ratrep.htm



Author:         LeRoy N. Eide
Created:        03-Apr-2008
Modified:       21-Apr-2008
ReModified:     25-Apr-2008

The following is a somewhat brief description of a representation scheme for rational numbers -- followed by some discussion about addition (and subtraction) of items so represented. Most of this is derived from personal thoughts and notes developed over the past few years.


Preliminaries

Consider the bi-directionally infinite polynomial in b

        ... + d[2]*b^(2) + d[1]*b^(1) + d[0]*b^(0) + d[-1]*b^(-1) + ...

where:  +*^ mean respectively addition, multiplication, and exponentiation;
        d[i] means "d sub i for some integer i" -- the coefficient of b^(i) --
        and each d[i] is a member of an ordered set of "digits" of
        cardinality b, one of which is chosen to represent the value zero,
        e.g. {...,0,...}, implying that b>0 (a non-empty set) but in practice
        we assume that b>1.
        [Later we will see that we will only need |b|>1]

The polynomial above is generally given as the interpretation of a
"number" as typically written, e.g.
        123.95
where b is assumed to be ten and d's are chosen from the ordered set
        {0,1,2,3,4,5,6,7,8,9}, i.e. d[2]=1, d[1]=2, d[0]=3, d[-1]=9, d[-2]=5.
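
As a small illustration (a Python sketch of my own, not part of the handout), the conventional interpretation above can be computed directly; the helper below evaluates a finite digit string against an ordered digit set with a designated zero digit:

from fractions import Fraction

# Evaluate a conventionally written number as the finite polynomial
# d[k]*b^(k) + ... + d[0]*b^(0) + d[-1]*b^(-1) + ..., reading each digit's
# value as its offset from the designated zero digit in the ordered set.
def poly_value(number, base, digits, zero="0"):
    z = digits.index(zero)
    int_part, _, frac_part = number.partition(".")
    coeffs = int_part + frac_part
    top = len(int_part) - 1                    # power of b for the leading digit
    return sum((digits.index(d) - z) * Fraction(base) ** (top - i)
               for i, d in enumerate(coeffs))

print(poly_value("123.95", 10, "0123456789"))  # 2479/20, i.e. 123.95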

The zero digit is conventionally represented by the character "0"; other
digits in this ordered set are also assumed to represent integer values
descending (to the left) or ascending (to the right) in steps of one.

Also, the (radix) point is conventionally placed between the d[0] and d[-1]
digits (although it should arguably be placed beneath the d[0] digit).  And
zeroes are conventionally thought of as extending infinitely on both sides
of the written number ...

So much for convention.

Euler's conjecture/claim

  S(x) = ... + x^(3) + x^(2) + x^(1) + x^(0) + x^(-1) + x^(-2) + x^(-3) + ...
or
  SUM(x^i) over all integers i
is identically zero for all x.

At first blush this seems preposterous, but there are some plausible
reasons to suspect the (computational) truth of this claim.  Without
expounding much, simply consider the implications of:

  x*S(x) - S(x) = 0

I will accept Euler's claim in the discussion that follows.
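
One way to see why this is computationally plausible (a finite-window gloss of my own, sketched in Python, not Euler's argument): over any finite symmetric window the difference x*S(x) - S(x) telescopes down to just the two boundary terms, which are the only thing keeping it from being zero.

from fractions import Fraction

# S_N(x) = sum of x^i for -N <= i <= N; multiplying by x shifts every term to
# the adjacent power, so x*S_N(x) - S_N(x) collapses to the boundary terms
# x^(N+1) - x^(-N).  Formally letting the window grow without bound, nothing
# is left over -- which is the computational content of Euler's claim.
def window_sum(x, N):
    return sum(x ** i for i in range(-N, N + 1))

x, N = Fraction(3), 5
print(x * window_sum(x, N) - window_sum(x, N))   # 177146/243
print(x ** (N + 1) - x ** (-N))                  # the same two boundary terms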


Notation

The Notation that I use is described below.  Be patient -- it starts out
looking rather ugly but becomes increasingly beautiful (and useful) after
a little development ...

-------------------------------------------------------------------------------

The Digits:

An ordered set of digits (symbols) will be separated by commas and
enclosed in curly braces, e.g.

        {0,1,2,3,4,5,6,7,8,9}
or
        {-,0,+}

but if all of the digits are single characters, the commas may be omitted,
e.g.

        {0123456789}
or
        {-0+}

If it is unclear which digit represents zero, that digit will be enclosed
in parentheses, e.g.

        {M(Z)P}

The primary consequence of designating some digit to be the zero is that
the location of the zero digit entirely determines the characteristics of
the addition table.

The set of digits, if specified, is always mentioned external to (and
usually following) the number being represented, e.g.

        123.95 {0123456789}

If the set of digits is conventional (or obvious from context), the set
of digits won't be mentioned at all.

Note that digits may be represented by "non-digit" symbols but the use of
any of the following characters as digits should (and will) be avoided:

        { } [ ] ( ) < > \ | / , : .

Of special note is the use of "=" for double-minus and "#" for double-plus
(except where pressed into service for "equality" and "inequality").

-------------------------------------------------------------------------------

The Base:

If needed, the base b may be noted in square brackets, e.g. [10] or [3],
where the base itself (possibly signed) is written in conventional base
ten; otherwise, the base may be inferred from the cardinality of the
specified set of digits (if present) or from context.

The base, if specified, is always mentioned external to (and usually
following) the number being represented, e.g.

        123.95 [10]

-------------------------------------------------------------------------------

The Number and Its Infinite Extensions:

Two bars delimit what is generally considered the number proper.

If powers of the base increase to the left,  \'s are used:  \123.95\
If powers of the base increase to the right, /'s are used:  /59.321/

In either case, the infinite strings of zeroes to both the left and right
are also always indicated with angle brackets as

        <0\123.95\0>  or <0/59.321/0>

and where the point appears between the b^(0) and b^(-1) digits.

If the digits of the number are multi-character, commas are used to
separate the digits one from another (the point, if present, is also a
separator in this case), e.g.

        <_\I,II,III.IX,V\_> [10] {(_),I,II,III,IV,V,VI,VII,VIII,IX}

-------------------------------------------------------------------------------

The Point and Scale Factor:

Note that the (radix) point cannot appear outside of the enclosing bars!
If the point is omitted, it is assumed to be nestled in the (inside) acute
corner of one of the bars, e.g.

        <0\42\0> means <0\42.\0>  and  <0/24/0> means <0/.24/0>

both of which represent the answer to life, the universe, and everything!

As previously mentioned, I would prefer to designate the location of the
point by, well, just "point"ing at the b^(0) digit -- something like

        <0\12395\0>
             ^          # Note: This displays well only for a mono-spaced font!
possibly with a "scale factor" (in conventional base ten) placed below,
e.g.

        <0\12395\0>
               ^        # Note: This displays well only for a mono-spaced font!
              -2

All of which gets rather notationally messy.

As a compromise (and to avoid multi-line notation), the point may be
replaced by a (possibly signed) scale factor wrapped in parentheses, e.g.

        <0\123(0)95\0>  or  <0\12395(-2)\0>

Note that "." is simply shorthand for "(0)".  Think of () as a "really big
dot" with a number written inside of it.  This scale factor may be moved
either right or left as needed, its contained number increasing if moving
"upward" (toward higher powers of the base) or decreasing if moving
"downward" (toward lower powers of the base) -- think of "upward" as
"left" for \\ or "right" for //, "downward" otherwise.  All of this is, of
course, nothing more than a sort of embedded "scientific notation".

In any case, the point (or the really big dot with scale factor) must
always remain between the bars.
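
A small parsing sketch (my own helper, in Python, with names of my choosing) of the embedded scale factor: the digit to the immediate left of "(k)" sits at power k, so the whole string evaluates to its digits read as one integer times b raised to k less the count of digits to the right of the marker.  Both spellings of the example above come out equal.

import re
from fractions import Fraction

# Evaluate a digit string containing one embedded scale factor "(k)".
def scaled_value(s, base=10, digits="0123456789"):
    head, k, tail = re.fullmatch(r"(\w*)\(([+-]?\d+)\)(\w*)", s).groups()
    n = 0
    for d in head + tail:                 # all the digits as one big integer
        n = n * base + digits.index(d)
    return Fraction(n) * Fraction(base) ** (int(k) - len(tail))

print(scaled_value("123(0)95"))           # 2479/20
print(scaled_value("12395(-2)"))          # 2479/20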

-------------------------------------------------------------------------------

The Bars:

The \\ and // bar notations are themselves without much grace.

If one conventionally (and always) writes numbers only in one direction or
the other, then it's cleaner (and clearer) to use vertical bars instead.
I will adopt this usage from here on, using || for \\ throughout unless
some non-obvious distinction needs to be drawn.

        <0|123.95|0> means <0\123.95\0> unless otherwise noted.

Note that Arabic actually writes its numbers low-order digit first -- but
since their numbers are written right-to-left, they appear to be the
"same" as ours.  Actually, when the West adopted the Arabic way of writing
numbers, it didn't correct for the order in which they were written ...  I
suppose it was a sufficient accomplishment that the concept of zero was
accepted (perhaps for no other purpose than as a placeholder for a
"missing digit").  We have by now generally accepted zero as an actual
digit, but I am going to ask for a lot more acceptance in some of what
follows!

-------------------------------------------------------------------------------

The Replication Units:

The Right Replication (or Repetition) Unit (hereinafter the RRU) indicates
the repeating string of digits that go ever onward for decreasing powers
of the base.  Examples:

        <0|0|0>  means 0.0000...        or more simply  |0>  means 0000...
        <0|0|3>  means 0.3333...                        |3>  means 3333...
        <0|0|14> means 0.1414...                        |14> means 1414...
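
The repeating tail is just a geometric series, so such numbers are exact rationals.  A quick Python sketch (mine, not from the handout), assuming a zero LRU and conventional digit values:

from fractions import Fraction

# Value of <0|int_part.frac_part|rru>: the RRU contributes its one-period
# value divided by (base^period - 1), scaled to sit just below frac_part.
def rru_value(int_part, frac_part, rru, base, digits, zero="0"):
    dv = lambda d: digits.index(d) - digits.index(zero)
    as_int = lambda s: sum(dv(d) * base ** (len(s) - 1 - i) for i, d in enumerate(s))
    whole = as_int(int_part)
    frac  = Fraction(as_int(frac_part), base ** len(frac_part))
    tail  = Fraction(as_int(rru), base ** len(rru) - 1) * Fraction(1, base ** len(frac_part))
    return whole + frac + tail

print(rru_value("0", "", "3",  10, "0123456789"))   # <0|0|3>  ->  1/3
print(rru_value("0", "", "14", 10, "0123456789"))   # <0|0|14> -> 14/99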

The following are allowed for an RRU:

a) Internal replication -- |14> may be rewritten as |1414> or |141414> or ...
b) Internal contraction -- |829829> may be rewritten as |829>
c) External emission    -- |567> may be rewritten as 5|675> or 56|756> or ...
d) External absorption  -- 12|312> may be rewritten as 1|231> or |123>

In (c) and (d) above, care must be taken to explicitly fix the point
preceding the bar before un-rolling and/or re-rolling the infinite spool
of repeating digits across the bar; the point cannot simply be assumed to
immediately precede the bar nor can the point be absorbed into the spool
of digits.  If necessary, the point can always be converted into a scale
factor and then moved as needed (while appropriately adjusting the scale
factor, of course).

In (d) above, a digit can be absorbed only if it could have been
previously emitted.  Also note that both emission and absorption
appropriately rotate the RRU contents (but do not otherwise permute them).
All of this is, of course, obvious after some brief thought ...

The Left Replication (or Repetition) Unit (hereinafter the LRU) indicates
the repeating string of digits that go ever onward for increasing powers
of the base; items (a) through (d) apply to an LRU as well and with
similar comments and caveats.

But, you say, "Isn't just <0| the only LRU?"
I say, "Well, no ... but wait until we cover a few other matters!"

-------------------------------------------------------------------------------

Complements:

The complement of a number is simply that number with each of its digits
replaced by its complementary digit (the point is unaffected).  Each digit
in the ordered set of digits has an index into that set (which is not
necessarily the same as its value); its complement is the digit with the
same index but counting from the other end of that same ordered set.  For
an odd base, one of the digits (the central one) will be its own
complement (self-complementary).

Examples:       Base ten        {0123456789}
                complements     {9876543210}

                Balanced ternary    {-0+}
                complements         {+0-}

                Skewed four         {=-0+}
                complements         {+0-=}

Note that for any base the sum of a digit and its complement is a constant
whose value can be represented by some single digit (i.e. the sum will
never produce a non-zero carry).  It is this characteristic that
ultimately permits us to identify the complement of a number with the
negative of that number.
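
A minimal Python sketch of digit-wise complementation (my own helper): each digit maps to the digit with the mirrored index in the ordered set, and non-digits such as the point pass through untouched.

# Complement a number by reflecting each digit's index in the ordered digit set.
def complement(number, digits):
    flip = {d: digits[len(digits) - 1 - i] for i, d in enumerate(digits)}
    return "".join(flip.get(ch, ch) for ch in number)

print(complement("123.95", "0123456789"))   # 876.04
print(complement("+-0",    "-0+"))          # -+0   (balanced ternary)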

Complements provide intrinsically-signed numbers.  Consider a number N of
the form

        <l|n(p)m|r>
where l is the LRU contents,
      n is the so-called integer part of the number,
      p is the point (or scale factor),
      m is the so-called fractional part (mantissa) of the number,
  and r is the RRU contents.

If n is absorbed by l and m by r, then <l|n(p)m|r> may be written as <l|r>
provided that the scale factor is zero, i.e.
        <0|1> is simply a convenient shorthand for <0|.|1>

The determination as to whether N<0 or N=0 or N>0 can be made without
resorting to any extrinsic sign convention; signs are intrinsic for any
base -- stay tuned!

If N and its complement were added, the result would be

        <k|k(p)k|k>
for some constant k (where each k here may actually be a string of k's).
We will see that numbers of this form are (computationally) merely
variants of zero -- but first we must learn how to add and subtract!

-------------------------------------------------------------------------------

Addition and Subtraction:

As we all learned in elementary school, adding two unsigned (decimal)
numbers requires that

        a) the (decimal) points "line up"

and then addition proceeds, starting at the rightmost column of digits,
by adding each column and propagating any carry (either zero or one) to
the column immediately to the left; continue until done.

The more general procedure requires that

        a) the points "line up" (meaning that the scale factors are equal
           and are positioned one above the other);
        b) the bars "line up";
        c) the angle brackets of the RUs "line up" (meaning that the LRUs
           are of equal length and the RRUs are of equal length)

and then addition proceeds, starting at the rightmost column of digits,
by adding each column and propagating any carry (negative one, zero, or
positive one) to the column immediately to the left; continue until done,
but with the caveat that what is carried into an RU must also be carried
out (like the Wilderness Rule "Carry out what you carried in!").

That the magnitude of each carry is either zero or one, regardless of base
(balanced, skewed, or otherwise), is left as an exercise for the reader --
but the proof is not difficult.
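
Here is a hedged Python sketch (not the author's procedure verbatim) of the column-by-column addition for two aligned, equal-length digit strings, taking both RUs to be the all-zero background; note that the carry never strays outside {-1, 0, +1}, whatever the digit set.

# Column addition over an ordered digit set with a designated zero digit.
# Both operands are written high-order first, aligned, and padded with the
# zero digit; the function returns the digit string and the final carry.
def add_digits(a, b, digits, zero):
    base = len(digits)
    z = digits.index(zero)
    val = {d: i - z for i, d in enumerate(digits)}     # intrinsic digit values
    lo, hi = -z, base - 1 - z                          # representable values
    out, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):       # low-order column first
        s = val[da] + val[db] + carry
        carry = 0
        if s > hi:                                     # carry +1 upward
            s -= base
            carry = 1
        elif s < lo:                                   # carry -1 upward
            s += base
            carry = -1
        out.append(digits[s + z])
    return "".join(reversed(out)), carry

print(add_digits("123", "004", "0123456789", "0"))     # ('127', 0)
print(add_digits("+-+", "0+-", "-0+", "0"))            # ('+00', 0)  i.e. 7 + 2 = 9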

Here's an example of a base ten addition problem:

        <0|123.95|0>    rewrite <0|123.95|0>
        <0|4|3>                 <0|004.33|3>
                                ------------
                                =0   +   =0     -- carries
                                <0|128.28|3>

Note that + and - are used to indicate non-zero carries and that all zero
carries are omitted except for the carry-in for each RU.  Additionally, if
the carry-out for an RU differs from its carry-in, then a "#" is used to
mark this difference (here meaning inequality); otherwise an "=" is used
(both "#" and "=" are notational devices and are only placed above
non-digits).

If the carry-out for the RRU is non-zero, then the initially assumed
carry-in of zero was incorrect; in this case, the carry-out is added back
into the RRU (correcting our initial assumption or one might simply
consider this as restarting the addition but this time assuming the
correct initial carry-in for the RRU).

But what if the carry-out of the LRU differs from its carry-in?  We can't
just "correct" the carry-in (that was computationally generated) but we
can extend the LRU and have another go at it.  Here's an example:

        <0|92|34>       rewrite <0|92.3|43>
        <0|8.6|1>               <0|08.6|11>
                                -----------
                              =0#+ +   = 0      -- carries
                              <0:1:00.9|54>     rewrite <0|100.9|54>

Here a broken bar ":" is used as a tentative bar for the LRU of the sum
until the LRU carry-out equals its carry-in.  The leftmost broken bar is
then retained as a full bar, the other broken bars being removed.
Generally, all summed LRUs should be computed in this manner.

Subtraction is merely the addition of a complement.  Here's an example:

        <0|92|34>       rewrite <0|92.3|43>
        <0|8.6|1>               <9|91.3|88>
                                -----------
                                      +#+0      -- carries
                                       |31>     abort (wrong carry-out)!

                                =+    +=++      -- carries
                                <0:83.7|32>     rewrite <0|83.7|32>

The astute reader will notice that the complementation of a number
involves ALL of its digits -- not merely the digits between the bars!

-------------------------------------------------------------------------------

Eventual (or Equivalent) Representations:

Some numbers have more than one conventional form but they are
computationally (and mathematically) equivalent.  For example:

        1.000...        or      <0|1|0>
        0.999...        or      <0|0|9>

Such forms are characterized by the fact that their RRUs will accept more
than one assumed initial carry-in which they will then generate as the
carry-out.  This is equivalent to adding zero (<0|0|0>) to a number but
assuming either + or - as the initial carry-in instead of 0.

        <0|1|0> will accept either 0 or - (the latter generating <0|0|9>)
        <0|0|9> will accept either 0 or + (the latter generating <0|1|0>)

Such equivalent forms are not problematic.

-------------------------------------------------------------------------------

A Plethora of Zeroes:

It is obvious that <0|0|0> is computationally zero -- it yields itself
when added to itself.  But so does <9|9|9> -- a simple exercise.  In fact
any number consisting of a single digit throughout is computationally
zero.

Recall Euler's claim mentioned at the beginning -- this simply states that

        <1|1|1> is zero for any base whatsoever!

This is one of the consequences of the statement (also aforementioned) of

        b*S(b) - S(b) = 0       for any base b.

Since all numbers consisting solely of a single digit are merely some
multiple of <1|1|1> then all of these must be computationally zero as
well.  And so are numbers that consist solely of some repeating string of
digits, such as <824|824|824>, since

        (b^3)*S(b) - S(b) = 0   for any base b.
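
A finite-window illustration of this shift argument (a Python check of my own, under the same formal convention): multiplying the repeating pattern by b^3 slides it three places onto itself, so everything in the interior cancels and only boundary terms remain.

from fractions import Fraction

b, unit = Fraction(10), [8, 2, 4]
digit = lambda p: unit[p % 3]                 # digit at position p of <824|824|824>

def window(N):                                # positions -N .. N-1 only
    return sum(digit(p) * b ** p for p in range(-N, N))

N = 6
diff = b ** 3 * window(N) - window(N)
boundary = (sum(digit(p) * b ** p for p in range(N, N + 3))
            - sum(digit(p) * b ** p for p in range(-N, -N + 3)))
print(diff == boundary)                       # True: the interior cancels completely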

As a notational convenience, a number like
        <0|0.0|0>
which can be written as
        <0|.|0>         by absorption
may be written as
        <0|0>
or even just as
        <0>

The same may be done for any zero, e.g.
        <824>
which may also be written as
        <248>
or even
        <482>

But how can there be so many zeroes?  Isn't there really just one zero?
This is something like the particle zoo of 20th-century physics which
settled down after the appropriate viewpoint was finally arrived at ...

-------------------------------------------------------------------------------

A Plethora of Numbers:

Given any "simple" number like <0|1|0> (namely one), we can create a vast
quantity of computationally equivalent forms by adding appropriate zeroes
to it, yielding things like
        <1|2|1>
        <8|9|8>
        <423|424|423>   or <234|24|423> or <342|4|423> by absorption.

Now wait a minute!  What's really going on here?

-------------------------------------------------------------------------------

Philosophy:

First, think of what we really do when we write a (conventional) number.
We assume a bi-directional infinite blank slate (really filled with "0"s)
and a "point" (to indicate a known "units" position) upon which we will
lay some "digits" in appropriate positions, making these positions differ
from the blank background.  We compute the value of our work (our number)
by measuring the difference each position has from the original blank
background.  We are constrained in the amount of variation we can make at
any position by the base of our number system -- the value of each
position must be representable by one (and only one) of the digits from
our ordered set of digits (this insures that the number that we want to
write on our slate has a unique representation -- except for the possible
eventual but equivalent representations described above).

But suppose that the background on our slate consists of "1"s rather than
"0"s -- it's still flat but everything is elevated a bit.  In order to
make a difference of one we need to write a "2", for two we write a "3",
..., for eight a "9" -- but (assuming base ten) how do we make a
difference of nine?  We must write a "2" in the next digit position on the
slate (making a difference of one there, that is to say a difference of
ten here) and then write a "0" in the current position (making its value
be one less than the background of "1"); combined, the resulting
difference from the background is nine.
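
The arithmetic of that paragraph, in a small Python sketch of my own: the value of a written patch of digits is just its positional deviation from the assumed flat background.

# Value of a patch of digits measured against a constant background digit.
def patch_value(patch, background, base, digits):
    dv = lambda d: digits.index(d)
    return sum((dv(d) - dv(background)) * base ** (len(patch) - 1 - i)
               for i, d in enumerate(patch))

print(patch_value("20", "1", 10, "0123456789"))   # 9: "20" over a background of 1s
print(patch_value("9",  "0", 10, "0123456789"))   # 9: the conventional way, over 0s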

All of this remains true for background terrain that is hilly rather than
flat -- it's the variance from the expected background, not the actual
elevation, that determines the value of the number represented.  The only
real constraint is that the elevation (i.e. the value) at each location be
representable (and represented) by one of the digits from our ordered set
of digits.

There are an infinite number of backgrounds -- that's why there are an
infinite number of zeroes (they each just follow and do not deviate from
their respective backgrounds).  That's the view from the outside.  From
the inside, there is just one zero -- the current background.

So what determines the background?  Well, it's just the contents of the
LRU.  The LRU specifies the expected terrain (the local zero); the rest of
the number specifies the deviations from the expectations.

And it's the first deviation (going from left to right in our conventional
representation) that determines the sign of the entire number -- if the
elevation is higher than expected, the number is positive; if lower than
expected, the number is negative.  This is the intrinsic sign.

That's why <824|6|248> [10] represents negative two -- the expected digit
is not "6" but "8".

If the function of being zero is assigned to the first digit in the
ordered set of digits, it is not possible to represent a negative number
with a background (LRU) entirely of zero digits since a descent from the
background is necessary in order to form a negative number.  Similarly, if
the function of being zero is assigned to the last digit in the ordered
set of digits, it is not possible to represent a positive number with a
background entirely of zero digits since an ascent from the background is
necessary in order to form a positive number.  Such numbers can be
represented, however, by choosing another background -- and there is an
infinite collection of backgrounds to choose from.

Assigning the function of being zero to an intermediate digit in the
ordered set of digits permits the representation of both positive and
negative numbers all with a background consisting entirely of zero digits
(although another background could be chosen if desired).

One way of thinking about a background in a number representation is by
analogy to a carrier wave with amplitude modulation -- but with the
restriction that the amplitude is constrained by both upper and lower
bounds.

-------------------------------------------------------------------------------

Just-In-Time-Subtraction Revisited:

Halving a (balanced) ternary number N by doing a single subtraction --
conceptually this is computing M = 3*M - 2*M where 2*M = N, i.e. M is
obtained by subtracting N from M shifted up one position.

If N is actually divisible by two (i.e. even), the procedure yields the
correct result.  Does it also yield the correct result if N is odd?

Let N = 7, i.e. <0|+-+|0> {-0+}; 
the JITS procedure yields <-|-0-|0> which can be rewritten as <-|0-|0>.

By adding a convenient zero, say <+>, we can readjust the background of
this result:

        <-|0-|0>
        <+|++|+>
        --------
        =0   =0         -- carries
        <0|+0|+>        which is recognizable as three and a half.

So, yes, the JITS procedure also produces the correct result if N is odd
-- or negative or with a non-zero background -- one merely needs to adjust
the background of the result to get it to zero elevation in order to more
clearly see the result.
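
As a quick cross-check (my own arithmetic in Python, using exact fractions rather than the carry notation), <0|+0|+> {-0+} does evaluate to three and a half:

from fractions import Fraction

val = {"-": -1, "0": 0, "+": 1}
whole = sum(val[d] * 3 ** (1 - i) for i, d in enumerate("+0"))   # digits '+0' -> 3
tail  = Fraction(val["+"], 3 - 1)                                # repeating '+' -> 1/2
print(whole + tail)                                              # 7/2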

-------------------------------------------------------------------------------

Base Zero:

Here the ordered set of digits is empty, namely {} -- there's nothing to
designate as the zero digit.  Forget I even mentioned it ...

-------------------------------------------------------------------------------

Base One:

Again, uninteresting.  At least there is a digit to designate as the zero
digit, namely {0} -- but there are no numbers that we can represent other
than zero itself <0> (i.e., there is no way to make a difference from the
one-and-only background).

-------------------------------------------------------------------------------

Negative Bases:

If the base b is negative, i.e. b<-1, all even powers of the base are
positive so we can consider this as a "normal" number in base b^(2).

Consider base negative two {01} as an example.  In this so-called
nega-binary system, each even power of the base makes a positive (or zero)
contribution to the value of the number while each odd power of the base
makes a negative (or zero) contribution.  A brief examination will show
that nega-binary numbers have unique (but allowing for eventual)
representations.  The digits here can be considered in pairs as the base
four system {=-0+}:

         Two    Four
         ---    ----
 0:     00000   0000
 1:     00001   000+
 2:     00110   00+=
 3:     00111   00+-
 4:     00100   00+0
 5:     00101   00++
 6:     11010   0+==
 7:     11011   0+=-
 8:     11000   0+=0
 9:     11001   0+=+
10:     11110   0+-=
11:     11111   0+--
12:     11100   0+-0
13:     11101   0+-+
14:     10010   0+0=
15:     10011   0+0-
16:     10000   0+00
         ...     ...    and so on.
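
The left-hand column of the table can be generated with the usual base-conversion loop, adjusted so the remainder stays in {0,1}; a brief Python sketch (mine, not from the handout):

# Convert an integer to its nega-binary (base -2, digits {01}) string.
def negabinary(n):
    if n == 0:
        return "0"
    out = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:                 # keep the remainder in {0,1}
            n, r = n + 1, r + 2
        out.append(str(r))
    return "".join(reversed(out))

print([negabinary(k) for k in range(7)])
# ['0', '1', '110', '111', '100', '101', '11010']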

Since a negative base b<-1 can be "normally" represented as base b^(2), it
is only required that |b|>1 -- as aforementioned.  Consequently, negative
bases may be disposed of as a mere curiosity.

-------------------------------------------------------------------------------

Reciprocal Bases:

A number represented in a base that is a reciprocal of an integer b>1
seems only to be the base b number reversed (with due attention being paid
to negating the scale factor and positioning the dot).  Lacking this
interpretation, it is difficult to make sense of the cardinality of any
ordered set of digits as relating to the base.  For example:

        <0\+(2)\-> [1/3] {-0+}

should likely be interpreted as

        <0/(-2)+/-> [3] {-0+}

or just
        <-|+(-2)|0> [3] {-0+}

in the more conventional "upright bar" notation.

If one could make some other coherent sense of reciprocal bases, it might
then be possible to make coherent sense of rational bases in general.
Short of that, the Author is inclined to disregard all non-integral bases
as lacking even a modicum of curiosity.

On the other hand, just thinking about reciprocal bases makes clearer some
things that might otherwise remain rather opaque.  In particular, ponder
the power series generator:

        1/(1-x) = P(x) = 1 + x^(1) + x^(2) + x^(3) + x^(4) + ...

This is generally considered only when 0<x<1.  For x=1/3, say, the tail
x^(1) + x^(2) + x^(3) + ... can be written (reversed) in base [1/3]  or  as
<0\1> [3], which is just one half.

If x>1 then P(x) diverges, say for x=3 -- or does it?

Clearly the generator has a well-defined (negative) value, so squinting
again in just the right way, one can see that

        P(3) = <0/1> [3]  or  <1\0> [3]

which is just -0.5 -- exactly what the generator evaluates to.
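
The same squint in ordinary arithmetic (a gloss of my own, in Python): the partial sums of 1 + 3 + 9 + ... are (3^k - 1)/(3 - 1), and discarding the divergent 3^k piece, Euler-style, leaves -1/(3 - 1) = -1/2, which is exactly 1/(1 - 3).

from fractions import Fraction

k = 6
partial = sum(3 ** i for i in range(k))        # 1 + 3 + ... + 3^(k-1)
print(partial, (3 ** k - 1) // (3 - 1))        # 364 364
print(Fraction(1, 1 - 3))                      # -1/2, the value P(3) "squints" to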

-------------------------------------------------------------------------------

There's Only One Base b (for any b>1):

Consider the three possible base three systems:

  {012}       -- standard ternary (sometimes {0+#} if using plus and double-plus);
  {-0+} -- balanced ternary; and
  {=-0} -- inverted ternary, using double-minus and minus

and also a generic base three system:

  {xyz} -- could be any of the three systems above ...

What can one say about a number represented in this generic system?

         <x|yzz|y> {xyz}

First, the number is positive since the first digit to deviate from the
background <x> is y, which is ordered after x in {xyz} and thus x<y.

If {xyz} is really {(x)yz}, meaning {012}, then the number reads directly as

         <0|122|1>    or <0|+##|+> {0+#} if using plus and double-plus.

Now suppose that {xyz} is really {x(y)z}, meaning {-0+}.  Then we have:

           <-|0++|0>
           <+|+++|+>    -- adjust background by adding the complement of <->
         -----------
         =0#+ ++ =0     -- carries
         <0:+:-0-|+>    rewrite <0|+-0-|+> {-0+}
                        which is recognizably seventeen and a half in {-0+}.

But suppose that {xyz} is really {xy(z)}, meaning {=-0}.  Then we have:

           <=|-00|->
           <0|000|0>    -- adjust background by adding the complement of <=>
           ---------
           =0    =0     -- carries
           <=:-00|->    rewrite <=|-00|-> ... no change (but not surprising!)

Perhaps one could try adding <-> instead ...

           <=|-00|->
           <-|---|->
         -----------
         =-#0    =0     -- carries
         <-:0:=--|=>    rewrite <-|0=--|=>

           <=|-00|->    or with a different initial carry (- rather than 0) ...
           <-|---|->
         -----------
         =-#0    =-     -- carries
         <-:0:=-=|0>    rewrite <-|0=-=|0>
                        Compare this pattern with <0|+-0-|+> {-0+} from above.

Maybe adding <=> will help ...

           <=|-00|->
           <=|===|=>
           ---------
                -#0     -- carries
                 |0>    abort (wrong carry-out)!

           =- ---=-     -- carries
           <=:-00|->    rewrite <=|-00|-> ... no change!
                        We can't make the background <0> since in this case
                        "0" is the topmost digit and there is no "up" from "0"
                        to encode this positive number.
                        Nonetheless, <=|-00|-> {=-0} is seventeen and a half.

This behavior is not restricted to representations having a single-digit
background.  Being somewhat more adventurous, consider the following:

         <xy|zz|y> {xyz}

If {xyz} is really {(x)yz}, meaning <01|22|1> {012}, then:

           <01|22|11>   -- by replication within the RRU
           <21|21|21>   -- adjust background by adding the complement of <01>
           ----------
                 # 0    -- carries
                 |02>   abort (wrong carry-out)!

            =1 11=11    -- carries
            <0:21|10>   rewrite <0|21|10>
                        which is recognizably seven and three eighths in {012}.

If {xyz} is really {x(y)z}, meaning <-0|++|0> {-0+}, then:

           <-0|++|00>   -- by replication within the RRU
           <+0|+0|+0>   -- adjust background by adding the complement of <-0>
        -------------
        = 0# +   = 0    -- carries
        <00:0+:-+|+0>   rewrite <00|0+-+|+0>
                             or  <0|0+-+|+0> by contraction
                             or   <0|+-+|+0> by absorption
                        which is recognizably seven and three eighths in {-0+}.

If {xyz} is really {xy(z)}, meaning <=-|00|-> {=-0}, then:

           <=-|00|-->   -- by replication within the RRU
           <0-|0-|0->   -- adjust background by adding the complement of <=->
           ----------
           = 0   = 0    -- carries
           <==:0-|-=>   rewrite <==|0-|-=>
                             or  <=|0-|-=> by contraction
                        which is still seven and three eighths in {=-0} even
                        though the best that can be accomplished, again, is
                        to reduce the background to <=> or <0>'s complement.

I suspect, however, that "<=|0-|-=>" would not be the proper answer to the
haberdasher's question, "Sir, what is your hat size?".
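
As a cross-check on the three renderings above, here is a hedged Python evaluator of my own: it assigns an exact rational to <LRU|digits|RRU> (point at the right bar) by summing the RRU as an ordinary geometric series and the LRU with the Euler-style value 1/(1 - b^period) for its divergent one.

from fractions import Fraction

def value(lru, mid, rru, base, digits, zero):
    z  = digits.index(zero)
    dv = lambda d: digits.index(d) - z
    b  = Fraction(base)
    period = lambda s: sum(dv(d) * b ** (len(s) - 1 - i) for i, d in enumerate(s))
    mid_val = period(mid)                                         # positions len(mid)-1 .. 0
    lru_val = period(lru) * b ** len(mid) / (1 - b ** len(lru))   # Euler-style sum upward
    rru_val = period(rru) / (b ** len(rru) - 1)                   # ordinary sum downward
    return lru_val + mid_val + rru_val

print(value("0",  "21", "10", 3, "012", "0"))   # <0|21|10>  {012}  -> 59/8
print(value("0", "+-+", "+0", 3, "-0+", "0"))   # <0|+-+|+0> {-0+}  -> 59/8
print(value("=",  "0-", "-=", 3, "=-0", "0"))   # <=|0-|-=>  {=-0}  -> 59/8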

-------------------------------------------------------------------------------

What's the REAL Difference Between 1's- and 2's-Complement Arithmetic?

Modern computing hardware generally represents integers as binary with
intrinsic sign, abandoning attempts to represent integers as (extrinsic)
sign and magnitude.

So positive integers are represented as <0|n|0> [2] where the LRU is
explicitly represented as a high-order bit, the RRU is implicitly assumed,
and n (the number) is represented by a fixed-length string of bits
(typically of length word-size less one).

Negative integers are simply the complements of corresponding positive
integers, complementing the background (the LRU, which includes the
so-called "sign" bit) without altering the bit-length of n.  But what is
done with the RRU?

For 1's-complement hardware, the RRU is simply left unchanged (i.e.
background is assumed to extend both left and right).

But 2's-complement hardware always assumes that the RRU is |0>,
independent of the background; consequently, an RRU carry-in is
arbitrarily assumed, which is now guaranteed to convert the |1> to |0>
and then carry out across the bar.  Unfortunately, if all the bits between
the bars are also one, the carry continues across the other bar and then
carries out of the LRU as well.  No carry-in/carry-out rules are violated,
but <1|1|1> will always get converted to <0|0|0> (this is either good or
bad, depending on one's computational point of view).
[Such hardware usually implements "logical" complements in addition to
 "arithmetic" complements as well as a number of other "logical" versus
 "arithmetic" operations -- the point being made here is "arithmetic".]

The Author remains computationally agnostic on this point.
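
A small 8-bit illustration in Python (my own, not tied to any particular hardware): the ones'-complement negation of zero is the all-ones word, while the extra assumed carry of two's complement folds that all-ones word straight back to zero, which is exactly the <1|1|1> to <0|0|0> conversion described above.

MASK = 0xFF                                   # an 8-bit word

def ones_complement(n):
    return ~n & MASK                          # flip every bit, RRU left alone

def twos_complement(n):
    return (~n + 1) & MASK                    # flip, then add the assumed RRU carry

print(format(ones_complement(0), "08b"))      # 11111111  (a "variant of zero")
print(format(twos_complement(0), "08b"))      # 00000000  (folded back to zero)
print(format(twos_complement(5), "08b"))      # 11111011  (negative five)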

Commentary

The "Just-in-Time Subtraction" algorithm divides a base-b number by b-1.
The name "Just-in-Time Subtraction" was coined by Dylan Pocock, one of the
Author's co-workers.

The JITS algorithm actually generates a complete and accurate division by
b-1 without remainder -- provided that one properly handles bi-directionally
infinite strings of digits (in some base b) as representations of rational
numbers.  In such a world there are multiple (but equivalent)
representations of every rational number (both positive and negative), all
of which are amenable to even simple pencil-and-paper calculation.  The
Just-in-Time-Subtraction algorithm always generates a rational result
without remainder but possibly in one of those equivalent representations.

It has been pointed out that the phrase "Replication (or Repetition) Unit"
is somewhat ponderous; the terminology "Repetition Sequence" has been
suggested as a possibly more appropriate replacement.  The two terms
"Replication" and "Unit" got into this fine mess during some of the Author's
original doodling regarding representations of rational numbers; these terms
have simply survived, but are mostly replaced by the somewhat semantically
empty moniker "RU".  The Author remains open to further suggestions.

Concerning balanced ternary, one will undoubtedly notice that a simple test
for evenness (and therefore integral divisibility by two) is just to sum the
non-zero digits of the number (assuming RUs of zero), iteratively as needed
("casting out signs" is like "casting out nines" in base ten).
Pre-rounding/truncating the number to induce evenness would eliminate the
need for a post-adjustment of any non-integral division by two.
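
A one-liner Python sketch of that test (my own): since the base three is odd, a balanced ternary number is even exactly when its digit values sum to an even number.

# Evenness test for a balanced ternary digit string (RUs assumed zero).
def is_even(bt):
    val = {"-": -1, "0": 0, "+": 1}
    return sum(val[d] for d in bt) % 2 == 0

print(is_even("+-+"))   # False: +-+ is seven
print(is_even("+0-"))   # True:  +0- is eight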

The idea for the RU notation was unashamedly swiped from quantum mechanics
(the "bra" and "ket" notation) where it is used for an entirely different
purpose.

The word "xyzzy" comes from an early interactive fantasy role playing game
called Adventure (written in Fortran as I recall); it was (and I suppose
still is) carved in stone and effected magical transport when uttered, er,
typed.  The fact that <xy|zz|y> is also my hat size I chalk up to the
general perversity of Life, the Universe, and Everything -- but for that we
really need <0|42|0>.

I originally thought of the continual carry to the left as somehow
disappearing over the horizon ("To Infinity and Beyond!") so as to be
out-of-sight and out-of-mind.  And I always wanted to avoid invoking any
notion of "end around" carries in ALL contexts -- a notion that seemed to be
expedient in many cases but which I felt obscured a more fundamental
understanding of what was going on.  All of this eventually got reduced to
the "carry out what you carry in" maxim.  An end-around carry, in any case,
is merely a convenient fiction -- RU carries always originate at the
low-order end, propagate through the RU, and then carry out of the
high-order end (there's no "around" around here).  If the carry-out differs
from the carry-in, then the initially assumed carry-in is incorrect and the
computation must be redone using a different initial carry (which one can't
actually know until the carry-out is known).

Additionally, I have come to think of the background and its reduction to a
sea of "0"s as being akin to the particle physicist's trick of
renormalization to get rid of those "damned infinities" (as Richard Feynman
referred to them).  And if we cannot reduce the background to zero but only
to the complement of zero, then we are no worse off than computer scientists
who do this all the time with their "high-order sign bit(s)".

As an historical note, I originally came to the problem of the continual
carries (actually borrows) while working on the JITS procedure.  But this
eventually seemed no more bizarre than the continual carries (of zero) that
fundamentally happen for every addition/subtraction operation -- carries
that as school children we were taught to ignore (if they were ever
mentioned at all), but they are there nonetheless.  The JITS procedure was
trying to tell me something -- it certainly wasn't just random noise heading
over the far horizon!  And if it meant anything at all, it had to be a
representation of the remainder, or something like that ...

I toyed with the idea of placing the initial (low-order) carry over the
final delimiter of an RU but decided in the end (pun intended) that it
really belongs in line with the (low-order) digit to which it applies.
Similarly, the carry-out skips the leading non-digit punctuation and lines
up with the digit to which _it_ applies (the '=' or '#' being placed over
the leading punctuation to indicate whether the carry-out equaled the
carry-in or not).



This page is http://web.utah.edu/utahlogic/misc/rational-rep.html
This page was last modified on April 24th, 2008.