> The area of study is called "Algorithmic Information Theory". It requires an extensive understanding of discrete mathematics.

Basically, is it possible to read the binary data as one incredibly large decimal number and then find a relatively simple way to represent that number as an equation?
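The first half of this is directly doable: a file's bytes already are one big number. A minimal Python sketch (the `data` value here is just a stand-in for a file's contents):

```python
# Interpret raw file bytes as one big non-negative integer, and back again.
# `data` is a stand-in for a file's contents, not an actual file from the post.
data = b"\x2a\xf3\x00\x10"

as_number = int.from_bytes(data, byteorder="big")
print(as_number)          # 720568336, i.e. 0x2AF30010 read as one number

# The reverse direction: the integer alone recreates the original bytes,
# as long as you also remember the byte length (to keep leading zeros).
restored = as_number.to_bytes(len(data), byteorder="big")
print(restored == data)   # True
```

The hard half is the second one: finding a short equation for that integer.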

I'm just kind of rambling, but it was an idea I had in my CS class, and I was wondering whether such a concept already exists and, if so, what it is called.

I'm thinking it would be dramatically easier to transfer these "magic numbers" than the entire binary file, and then simply use them to recreate the decimal number and write it back out as binary.

Also, I realize that memory would be an issue here. However, Mathematica seems to handle incredibly large numbers on machines with only a gig or two of memory, so I am led to believe I should worry less about memory and more about whether these magic numbers would actually take less data to represent than the final result.

Kind of like how you can max out a calculator's memory relatively quickly just by using powers and multiplication.

About base-16 to base-10 conversion: my basic understanding of base conversion is that, for each digit position D (ones place D = 1, tens place D = 2, etc.), you take the digit's value M and multiply it by the base B raised to the power D − 1, then sum all of those products.

For example, the hexadecimal number 2AF3 is equal, in decimal, to (2 × 16^3) + (10 × 16^2) + (15 × 16^1) + (3 × 16^0), or 10,995.
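That positional rule is a few lines of Python to check (a sketch; `hex_to_decimal` is just an illustrative name, not a standard function):

```python
def hex_to_decimal(digits: str) -> int:
    """Positional conversion: sum of digit_value * 16^position."""
    value = 0
    for ch in digits:
        # Multiplying the running total by 16 shifts every earlier digit
        # up one position, which is equivalent to the positional sum.
        value = value * 16 + int(ch, 16)
    return value

print(hex_to_decimal("2AF3"))   # 10995, matching the worked example
print(int("2AF3", 16))          # Python's built-in conversion agrees: 10995
```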

Assuming 2 hexadecimal digits per byte (2^8 = 256 = 16 × 16), there would be 2 × P hexadecimal digits, with P equal to the number of bytes in the binary file. The value itself grows exponentially with the number of digits, though the decimal digit count only grows linearly (about 1.2 decimal digits per hexadecimal digit).

Of course, all of this means the decimal number will take up much more room than the original. But that is okay, because the idea is data portability, to the point where you could print out a list of numbers, type them into your computer, and run a program which recreates the absurdly large decimal number and saves it as binary.
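The portability loop described here (print the digits, retype them, rebuild the binary) is a few lines in Python; `payload` is a hypothetical stand-in for a file's contents:

```python
# Round trip: bytes -> printable decimal digits -> number -> bytes.
payload = b"hello world"   # stand-in for a file's contents

digits = str(int.from_bytes(payload, "big"))   # this is what you would print out

# ...later, after typing the digits back in:
number = int(digits)
rebuilt = number.to_bytes((number.bit_length() + 7) // 8, "big")
print(rebuilt)   # b'hello world'
# Caveat: leading zero bytes don't survive bit_length(), so the original
# byte length would also need to be written on the printout.
```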

Justification: with just a few numbers you can create absurdly large ones, and all a large amount of binary data really is, in the end, is one specific absurdly large binary number. See the attachment for the number of digits in 2^(2^3141592): nearly 10^(10^6) digits.
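That digit count can be sanity-checked without ever materializing the number: 2^(2^n) has floor(2^n × log10 2) + 1 digits, and the order of magnitude of that digit count can itself be estimated with logarithms (a sketch, taking n = 3141592 from the post):

```python
import math

n = 3141592
# digits(2^(2^n)) = floor(2^n * log10(2)) + 1, so
# log10(digit count) ~= n*log10(2) + log10(log10(2)).
log10_digit_count = n * math.log10(2) + math.log10(math.log10(2))
print(log10_digit_count)   # ~945713, so the digit count is ~10^945713,
                           # i.e. just under 10^(10^6) digits
```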

```python
# Greedy sketch: express the file's value as a sum of terms of the form 2^(2^e).
# (The original inner loop never advanced the exponent, and the outer loop
# never terminates once the residual drops below 2 == 2^(2^0), the smallest
# term, so the leftover is kept as a separate remainder here.)
value = readBinaryAsDecimal(myFile)   # the file's bytes as one big integer
generated_total = 0
recipe = []
while value - generated_total >= 2:
    e = 0
    while generated_total + 2**(2**(e + 1)) <= value:
        e += 1                        # grow e until the next term would overshoot
    generated_total += 2**(2**e)      # take the largest term that still fits
    recipe.append(e)
remainder = value - generated_total   # 0 or 1; must be stored alongside the recipe
```
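Going the other way, rebuilding the number from a recipe of exponents is the easy half (a sketch; the recipe values below are a hypothetical example, not from the post):

```python
# Each exponent e in the recipe contributes 2^(2^e); any leftover below the
# smallest term is carried as a plain remainder.
recipe = [3, 1, 0]    # hypothetical example
remainder = 1
value = sum(2**(2**e) for e in recipe) + remainder
print(value)          # 256 + 4 + 2 + 1 = 263
```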

It also seems logical that there would be "hotspots", where files can be represented by values relatively close to an easily generated number, and "deadspots", where there are no nearby easily generated numbers. The key here is that each generated number (GN) added to the total generated number (TGN) must increase the number of digits in which the TGN matches the target number (TN).

#### Attached image(s)

This post has been edited by **DillonSalsman**: 14 January 2011 - 01:46 PM