0 Replies - 2092 Views - Last Post: 24 September 2009 - 05:02 PM

#1 sam.adams61

Encoding/Decoding in .Net

Posted 24 September 2009 - 05:02 PM

Hi guys, I'm working my way through a textbook and at the moment I'm reading about encoding and decoding. By way of example the author gives the following concerning ASCII: it stores code points 0 through 127 in 1 byte, code points 128 through 2047 in 2 bytes (e.g., Latin, Greek, Cyrillic, Hebrew), and code points 2048 through 65535 in 3 bytes (e.g., Chinese, Japanese). Therefore, a file containing 1000 ASCII characters, but which also has 2 Greek characters and 3 Chinese characters, would take up 1008 bytes of disk space.
My question is, how can this be? I cannot see the logic behind that answer. Could someone please clear the fog for me? Many thanks in advance for any helpful advice.
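For what it's worth, the arithmetic can be checked directly. The variable-width scheme the author describes (1 byte up to code point 127, 2 bytes up to 2047, 3 bytes up to 65535) is UTF-8 rather than plain ASCII (which is a fixed 7-bit, 1-byte-per-character code covering only 0 through 127), and the 1008 figure only works out if the 1000 is read as the file's *total* character count: 995 one-byte ASCII characters plus 2 two-byte Greek and 3 three-byte Chinese gives 995 + 4 + 9 = 1008. A minimal Python sketch under that assumption (in .NET the equivalent check is `System.Text.Encoding.UTF8.GetByteCount`):

```python
# UTF-8 byte widths: U+0000..U+007F -> 1 byte, U+0080..U+07FF -> 2 bytes,
# U+0800..U+FFFF -> 3 bytes.
text = "a" * 995 + "\u03b1\u03b2" + "\u4e2d\u6587\u5b57"  # 995 ASCII + 2 Greek + 3 Chinese

print(len(text))                  # 1000 characters in the file
print(len(text.encode("utf-8")))  # 1008 bytes on disk: 995*1 + 2*2 + 3*3
```

Reading the book's example as "1000 ASCII characters *plus* 5 others" would instead give 1000 + 4 + 9 = 1013 bytes, which is presumably the source of the confusion.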
