I got an interesting task: interpreting a binary file. The problem was that the file did not use the standard floating-point format.

The most widely used standard for binary floating-point numbers is IEEE 754-1985. For single precision (32 bits), bits 0-22 are the *fraction*, bits 23-30 are the *exponent*, and bit 31 is the sign. The floating-point value is then: (-1)^sign * 2^(*exponent*-127) * 1.*fraction*. Here 1.*fraction* is a binary fractional number (e.g. binary 1.11 is 1 + 2^(-1) + 2^(-2) = 1.75). It was confusing since I had never used that before.
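To make sure I understood the IEEE layout, here is a minimal sketch that decodes a normalized single-precision pattern by hand from its raw 32-bit integer and cross-checks it against Python's native `struct` decoding (the function name and test pattern are my own, not from any spec):

```python
import struct

def decode_ieee754(bits: int) -> float:
    """Decode a 32-bit IEEE 754 single-precision pattern by hand.
    Handles normalized numbers only (exponent field not 0 or 255)."""
    sign = (bits >> 31) & 0x1          # bit 31
    exponent = (bits >> 23) & 0xFF     # bits 23-30
    fraction = bits & 0x7FFFFF         # bits 0-22
    # (-1)^sign * 2^(exponent - 127) * 1.fraction
    return (-1) ** sign * 2.0 ** (exponent - 127) * (1 + fraction / 2 ** 23)

# Cross-check against the platform's IEEE decoding (0x40490FDB is pi as a single):
bits = 0x40490FDB
native = struct.unpack('>f', struct.pack('>I', bits))[0]
print(decode_ieee754(bits), native)
```

Both decodings agree, which at least confirms the bit positions above.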

Anyway, this did not match the floating-point numbers in my file. However, the fraction (might it also be called the mantissa?) was always the same. After a little guessing and fiddling around with bits, I came to the conclusion that:

Bits 0-23 are the fraction, bit 24 is the sign bit, and bits 25-31 are the exponent. The floating-point value is then (-1)^sign * 2^(*exponent*-129) * 1.*fraction*. So the *exponent* offset is different, and the sign and exponent fields have swapped places. This has worked for all the numbers I have tried so far.
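For reference, a sketch of my current guess as code (this is only my reverse-engineered interpretation, not an official spec, and the test bit patterns are made up by me):

```python
def decode_custom(bits: int) -> float:
    """Decode the layout I observed: bits 0-23 fraction, bit 24 sign,
    bits 25-31 exponent (7 bits), bias 129. My guess, not a documented format."""
    fraction = bits & 0xFFFFFF         # bits 0-23
    sign = (bits >> 24) & 0x1          # bit 24
    exponent = (bits >> 25) & 0x7F     # bits 25-31
    # (-1)^sign * 2^(exponent - 129) * 1.fraction
    return (-1) ** sign * 2.0 ** (exponent - 129) * (1 + fraction / 2 ** 24)

# Example: exponent field 127, fraction's top bit set -> 2^-2 * 1.5 = 0.375
print(decode_custom((127 << 25) | (1 << 23)))
```

One oddity of this reading: with only 7 exponent bits and a bias of 129, the largest representable magnitude is below 1, so either my data happens to be small or I still have a detail wrong.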

What is so annoying is that I can't find this layout documented anywhere! I don't know where to look; Google gives me nothing! According to a document I got with the file, the numbers are in the CPM format. What is that?? Anybody?