I'm just wondering: what factors determine that data cannot be compressed any further when it is compressed recursively?
how far can compression go
Page 1 of 14 Replies - 1950 Views - Last Post: 21 February 2011 - 02:55 AM
Replies To: how far can compression go
#2
Re: how far can compression go
Posted 23 February 2009 - 01:54 PM
I think it's heavily dependent on what you're trying to compress.
#3
Re: how far can compression go
Posted 25 February 2009 - 11:23 PM
It depends on the type of compression, namely lossy or lossless. Examples of lossy compression algorithms are for images, videos, and music where some loss of information is acceptable (i.e. loss of quality), and the complete set of information can never be recovered from the compressed data. I would say that the single factor that dictates how far these types of information can be compressed is simply the perception of the end user, and what they find acceptable.
Lossless algorithms, on the other hand, compress data in a manner such that there is no loss of the original data, for example, compressing text documents into a zip file. Statistical redundancy (patterns and repetition) is what decides how far you can go with this. The specific algorithm, e.g. Huffman coding, determines which patterns you exploit for the compression, and that's what decides how far you can go. You can use a combination of algorithms to compress files further than just one, and you can use algorithms that pick the optimal compression method for your specific set of data. But put simply, the single factor that decides the lower limit on compression is statistical redundancy, however you choose to go about exploiting it.
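What "statistical redundancy" means in practice is easy to see with a general-purpose DEFLATE compressor. This is just an illustrative sketch using Python's standard `zlib` module, not something anyone in the thread posted: highly repetitive input shrinks a lot, while random input (which has no patterns to exploit) comes out slightly larger than it went in.

```python
import os
import zlib

# Highly redundant input: the same 9-byte pattern repeated 1000 times.
redundant = b"abcabcabc" * 1000
packed = zlib.compress(redundant, 9)
print(len(redundant), "->", len(packed))   # shrinks dramatically

# Random bytes have no statistical redundancy to exploit, so the
# "compressed" output is actually a bit larger than the input
# (DEFLATE falls back to stored blocks plus container overhead).
noise = os.urandom(9000)
print(len(noise), "->", len(zlib.compress(noise, 9)))
```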
#4 Guest_Gerald Hume*
Re: how far can compression go
Posted 21 February 2011 - 01:24 AM
Lossless compression keeps on compressing even if it is a dumb one, like statistical-mode compression. Lossless compression doesn't stop compressing because the quality starts becoming poor.
#5 Guest_Gerald Hume*
Re: how far can compression go
Posted 21 February 2011 - 02:55 AM
Dr. Fox, on 25 February 2009 - 11:23 PM, said:
It depends on the type of compression, namely lossy or lossless. Examples of lossy compression algorithms are for images, videos, and music where some loss of information is acceptable (i.e. loss of quality), and the complete set of information can never be recovered from the compressed data. I would say that the single factor that dictates how far these types of information can be compressed is simply the perception of the end user, and what they find acceptable.
Lossless algorithms, on the other hand, compress data in a manner such that there is no loss of the original data, for example, compressing text documents into a zip file. Statistical redundancy (patterns and repetition) is what decides how far you can go with this. The specific algorithm, e.g. Huffman coding, determines which patterns you exploit for the compression, and that's what decides how far you can go. You can use a combination of algorithms to compress files further than just one, and you can use algorithms that pick the optimal compression method for your specific set of data. But put simply, the single factor that decides the lower limit on compression is statistical redundancy, however you choose to go about exploiting it.
Huffman died because he was wrong. You should be able to use a procedure like his repeatedly on a file to make it smaller and smaller instead of only a single time. Prime number compression is probably the best. That is what they wanted to use with Mpeg instead of the lossy method. Those 64 golden juicy delicious Mpeg primes with getting even or something with a lossless compression ratio of about 48:1.