
No mantissa floating numbers

easy as falling off a log.

As I understand it, most computer representations of floating point numbers have three parts: sign, mantissa and exponent.
The mantissa gives a certain number of the most significant bits of the number, and the exponent is the number of places by which to shift the binimal point (left or right).

number = (-1)^sign * mantissa * 2^exponent
(Just forget about the sign for now...)

So for example, 12 = 3*2^2 = %11<<2 (%11 being binary for 3)

I propose that we throw away the mantissa and just use the number 1 for all numbers. Since it is always 1, it doesn't need to be stored at all.

number=2^exponent

The exponent would be represented by a fixed point number, so numbers other than powers of two can be obtained.

The number format is then very simple.
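A minimal sketch of the encode/decode in Python (the 8-bit fractional split is assumed purely for illustration; the idea doesn't fix a particular one):

import math

FRAC_BITS = 8  # assumed: fraction bits in the fixed point exponent

def encode(x):
    # store a positive number as the nearest representable power of two:
    # round log2(x) to a multiple of 1/2^FRAC_BITS and keep that integer
    return round(math.log2(x) * (1 << FRAC_BITS))

def decode(e):
    # the stored integer is the entire number: number = 2^exponent
    return 2.0 ** (e / (1 << FRAC_BITS))

print(decode(encode(12.0)))  # ~12.0 (within about 0.14% at this precision)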

Loris, Oct 11 2002

       Depending on your point of view, computers do only use the number 1 (and zero), with a power.
Jinbish, Oct 11 2002
  

       How do you add 2^0 and 2^0.5? Explain how a computer should do the same.
supercat, Oct 11 2002
  

       Choose your number system according to the operations you want to perform. A log number system, such as this, will be nice for multiplying things together and poor, as supercat points out, for adding numbers together.   
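For illustration, a Python sketch of why multiplication is the easy case in such a system:

import math

# multiply without a multiplier: each value is stored only as its
# base-2 logarithm, so multiplication becomes addition of exponents
la, lb = math.log2(3.0), math.log2(5.0)
print(2.0 ** (la + lb))  # ~15.0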

       Imagine the fun your bank would have with the rounding errors on your account. Mmmm, more money.   

       Not a universal system but might have a use somewhere.
st3f, Oct 11 2002
  

       // Not a universal system but might have a use somewhere. //   

       Enron ? Worldcom ? Andersen ?   

       Tharg, him make mark on cave wall with Burnt Stick ...
8th of 7, Oct 11 2002
  

       Looks like the jury is still out on this one. As for me, I fail to see the benefit of this representation over the usual ones while, as [supercat] points out, the cost is quite apparent. But maybe there's an application somewhere that justifies it. Until somebody identifies that application, I'll stay neutral on this idea.
BigBrother, Oct 11 2002
  

       Actually, I think I have a good argument in its favour.   

       First, let me cover points already raised (I'll paraphrase):   

       Adding is hard.
Adding in the conventional mantissa-and-exponent system is only easy when the numbers are about the same size (within range of the mantissa). Otherwise it cannot be done without precision loss. Consider, for example, trying to add 1 to 2^25. It can't be done unless the mantissa is at least 25 bits.
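(For a concrete illustration in Python, round-tripping through IEEE single precision, which stores 23 mantissa bits plus an implied 1:)

import struct

def f32(x):
    # round-trip through IEEE single precision (23 stored mantissa bits)
    return struct.unpack('f', struct.pack('f', x))[0]

print(f32(2.0 ** 25 + 1) == 2.0 ** 25)  # True: the added 1 is silently lost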
  

       Supercat, I'd need to work through the maths on paper in more detail before saying how it's done, but it does involve converting the numbers to other formats. This isn't such a shock, since some standard floating point operations also require this. On the other hand, it is easier to do other operations, as st3f points out, so this is swings and roundabouts.
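(One route, for illustration and not necessarily what Loris has in mind, is the so-called Gaussian logarithm used by log number system hardware: factor out the larger term and apply a correction that hardware usually keeps in a lookup table. A Python sketch:)

import math

def log_add(la, lb):
    # log2(2^la + 2^lb): factor out the larger term, then add a
    # correction term that LNS hardware typically keeps in a table
    if la < lb:
        la, lb = lb, la
    return la + math.log2(1.0 + 2.0 ** (lb - la))

print(2.0 ** log_add(math.log2(3.0), math.log2(5.0)))  # ~8.0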

       Regarding rounding errors, I think this is actually a point in this idea's favour. Of course banks are not likely to use this format; that would be a foolish effort.
Do you think banks use floating point numbers to represent cash now? They'd know very precisely the fractions of a penny that a schoolchild held, and if you had a large enough account you could pay out small amounts of money for free. :-)
  

       So...   

       Having a mantissa actually wastes some of the information capacity of the bits used.
This is because there are several ways of representing the same number.
For example, 5*(2^2)=10*(2^1).
Mantissa-free numbers don't have this problem, so I propose that for a given number of bits and number range they can represent numbers more precisely.
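(An illustrative count in Python of those duplicate encodings, assuming an unnormalised 3-bit mantissa and a small exponent range:)

# count bit patterns that collide on the same value when the
# mantissa is not normalised (no implied leading 1)
values = {}
for m in range(1, 8):          # 3-bit mantissa, zero excluded
    for e in range(-2, 3):     # small exponent range
        values.setdefault(m * 2.0 ** e, []).append((m, e))

for v, reps in sorted(values.items()):
    if len(reps) > 1:
        print(v, reps)   # e.g. 4.0 appears as (1,2), (2,1) and (4,0)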
  

       However, they do have a practical weakness: they may not be able to hold exactly the integers which mantissa-containing representations can. But then, even floating point numbers cannot represent integers whose digits (or rather bigits) are further apart than the number of bits in their mantissa.

       I may henceforth refer to mantissa-free floating point numbers as "floating one numbers".
Loris, Oct 11 2002
  

       But manatees are supposed to be floating, no? I mean, I know they're endangered and all but eliminating them seems harsh.
bristolz, Oct 11 2002
  

       Aw nuts. I just can't leave this one alone. I gotta try to find an application for fractional exponent (f-e) numbers. I just gotta.   

       I think we can rule out high precision arithmetic on desktop computers. That seems to be better handled by the mantissa+exponent (m+e) format already in use.   

       We can rule out applications that want to mix fractions with integers, as very few integers would have an exact representation in the f-e format.   

       We might find an application in the realm of digital signal processing. A lot of DSP modules use fractional mantissa (no exponent) representations in their computations to improve speed and to simplify hardware (at the cost of reduced numeric range and precision). Others use m+e representations for better numeric range and precision, but sacrifice simplicity and speed to get it. Perhaps f-e could find a niche somewhere between these extremes.
BigBrother, Oct 11 2002
  

       By the way, I'm certain that financial institutions don't do anything in base two. All the info I have indicates that base ten is used, with a fixed number of fractional digits (often 6 or 8). Accounting software actually goes out of its way to store fractional base ten numbers as globs of 1s and 0s that maintain the base-ten-ness of the numbers.
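(For flavour: Python's standard decimal module provides exactly this kind of fixed-point base ten arithmetic; the six fractional digits here are just to match the annotation:)

from decimal import Decimal

# exact fixed point base ten: no binary rounding surprises
balance = Decimal('100.000000')            # six fractional digits
balance += Decimal('0.015000')
print(balance)                             # 100.015000, exactly
print(Decimal('0.1') + Decimal('0.2'))     # 0.3, unlike binary floats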
BigBrother, Oct 11 2002
  

       Floating-point formats generally use an "implied 1" at the start of the mantissa. For numbers that are within the allowable exponent range, there is exactly one representation of any given value. The only weakness of using the "implied 1" is that it requires special handling of extremely small numbers. This is typically done by saying that the most negative allowable exponent represents a special case for storing "denormalized" numbers.   

       With the requirement that all numbers be stored in normalized format except for those numbers that are too small to be so stored, there is exactly one representation for every value except zero (which has two).
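(A Python sketch of that decoding rule for IEEE single precision; infinities and NaNs, exponent field 255, are omitted for brevity:)

import struct

def decode_float32(bits):
    sign = (bits >> 31) & 1
    exp = (bits >> 23) & 0xFF
    frac = bits & 0x7FFFFF
    if exp == 0:
        # denormalized: no implied 1, exponent pinned at the minimum (-126)
        return (-1.0) ** sign * (frac / 2.0 ** 23) * 2.0 ** -126
    # normalized: the implied leading 1 supplies a free mantissa bit
    return (-1.0) ** sign * (1 + frac / 2.0 ** 23) * 2.0 ** (exp - 127)

bits = struct.unpack('>I', struct.pack('>f', 3.0))[0]
print(decode_float32(bits))  # 3.0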
supercat, Oct 12 2002
  

       BigBrother, I appreciate your interest. Regarding bank account credit storage, you may be correct. Perhaps they use packed decimal format in their legacy Cobol systems. The number system doesn't really matter, so long as they don't use any floating point. :-)   

       Supercat, I was hoping to rebut this but thinking it through I think you are correct. Although it does occur to me that there is a bit of a jump in representable numbers where the implied 1 kicks in. I therefore retract this claim. (I'll leave my annotation as it is so this discussion makes sense)   

       However, this makes me think that although the number of numbers represented is the same, perhaps floating one numbers are more evenly distributed.   

       Every number in a number storage system has a precision value: the accuracy to which the actual number is stored. For example, each integer covers a range of plus or minus 0.5.
Thus, for a given format, I propose that one could calculate the 'average proportional error value'.
One would calculate, for each value, the distance to the two bordering values, as a proportion of that value. I think it important to express it as a proportion rather than an absolute amount. One would sum the (possibly squared) 'potential error' values and divide by the number of values analysed.
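(A sketch of that calculation in Python; the two 8-bit formats compared are made up for illustration and not taken from the idea:)

def apev(values):
    # mean proportional gap to the two neighbouring representable values
    values = sorted(values)
    gaps = [(nxt - prev) / (2 * cur)
            for prev, cur, nxt in zip(values, values[1:], values[2:])]
    return sum(gaps) / len(gaps)

# floating-one: a 1-byte fixed point exponent with 4 fraction bits
fone = {2.0 ** (e / 16.0) for e in range(256)}
# mantissa+exponent: implied 1, 4 stored mantissa bits, 4-bit exponent
mexp = {(16 + m) / 16.0 * 2.0 ** e for m in range(16) for e in range(16)}
print(apev(fone), apev(mexp))  # compare the two for yourself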
  

       I've not done this, but I suspect floating one numbers might have a smaller, and therefore better, APEV for a given bit-length than mantissa-carrying f-p numbers. This would make them preferable for many f-p applications.
Loris, Oct 13 2002
  

       We've been down this road before, sort of. Anyone else remember Microsoft Binary Format? MBF was a floating-point number standard built into such powerhouse programming languages as QuickBasic. An MBF float had greater precision and smaller range than an IEEE float. The MBF and IEEE standards differed from one another in the number of bits used for the mantissa and exponent.   

       IEEE won. Starting in 32-bit Visual C++, Microsoft even dropped the mbf-to-ieee runtime library functions.
spammityspam, Jan 02 2004
  

       Loris: If math routines support denormalized numbers, there is no "funny jump" in representable values as numbers get small. Suppose, for example, that numbers were stored in a 4.4 format [no sign, for simplicity], with an offset of eight on the exponent.   

       A value of 1.0 would be stored as 8:0, 2.0 would be stored as 9:0; 3.0 would be stored as 9:8. A value of 0.5 would be stored as 7:0; and a value of (1/128) would be stored as 1:0. When the exponent field contained 1, the mantissa step size would be 1/16 of the exponent, i.e. 1/2048; the range of numbers with an exponent of 1 would thus be 16/2048 to 31/2048. An exponent value of zero would represent the same exponent value (2^-7) but without the implied '1', allowing the numbers 0/2048 to 15/2048 to be represented.
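(A Python sketch of that 4.4 decoding rule, checked against the worked values above:)

def decode44(e, m):
    # 4-bit exponent field (offset 8), 4-bit mantissa field, no sign
    if e == 0:
        # denormalized: no implied 1, same scale as e == 1 (2^-7)
        return m / 2048.0                     # m/16 * 2^-7
    return (16 + m) / 16.0 * 2.0 ** (e - 8)   # implied 1 before 4 fraction bits

assert decode44(8, 0) == 1.0
assert decode44(9, 0) == 2.0
assert decode44(9, 8) == 3.0
assert decode44(7, 0) == 0.5
assert decode44(1, 0) == 1 / 128
assert decode44(1, 15) == 31 / 2048   # top of the exponent-1 range
assert decode44(0, 15) == 15 / 2048   # largest denormalized value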
supercat, Jan 04 2004
  
      