Sometimes the best bugs are the ones you unintentionally put in yourself.


> If debugging is the process of removing bugs, then programming must be the process of putting them in.
>
> - Edsger W. Dijkstra

Consider the following problem. You have a buffer of arbitrary binary data that is interpreted as various pieces of data. In this particular scenario, you are placing integers of various sizes into the buffer, and the buffer is defined to be in network byte order.

Let's say you have a number all ready to go:

```cpp
uint16_t num = htons(36); // hex 24; htons() converts from host ordering to network ordering (more on this below)
string buf((char*)(&num), 2);
for (int i = 0; i < buf.length(); i++)
    cout << hex << (unsigned int)buf[i] << " ";
```

What is the output?

Spoiler: on a little-endian host, `0 24` — htons() has already swapped the bytes into network order.

But wait, we had to do some math after we got our original number!

```cpp
uint16_t num = htons(36); // hex 24
num += 5;
string buf((char*)(&num), 2);
for (int i = 0; i < buf.length(); i++)
    cout << hex << (unsigned int)buf[i] << " ";
```

What is the output now?

Spoiler: on a little-endian host, `5 24` — not the `0 29` (network-order 41) you wanted, because the `+ 5` was applied to the already byte-swapped value.

What is this sorcery? Recall that there are two ways to talk about byte order: little and big endian (LE/BE), or host and network order. A quick sidebar for readers with more architecture experience: since x86 is little endian, I will use "host" to mean LE for the remainder of this post, even though some host architectures are BE (Motorola 68k, older SPARC versions, etc.). Big-endian systems store the most-significant byte of a word at the smallest memory address and the least-significant byte at the largest. A little-endian system stores the least-significant byte at the smallest address.

A byte is represented by 8 bits:

0 0 0 0 0 0 0 0

Each bit from right to left is an increasing power of two starting with 0:

2^7  2^6  2^5  2^4  2^3  2^2  2^1  2^0

For each bit that is set to one, we sum its place value to get the integer value of that byte:

00000001 = 1

00000010 = 2

...

10000000 = 128

Since binary gets unwieldy quickly, let's move into hex:

00 = 0

01 = 1

...

FF = 255

Single bytes are the smallest unit that is ordered by LE/BE. There is no nibble-level ordering. If there were, there would be weeping and gnashing of teeth. Dogs would lie down with cats. Madness! What was I talking about? Oh right: consider the following 16-bit integer:

BE: 00 24 => 36 base 10

LE: 24 00

BE: 0000 0000 0010 0100 => 36 base 2

LE: 0010 0100 0000 0000

If I have BE 00 24 and add LE 05 00 to it:

```
  0000 0000 0010 0100
+ 0000 0101 0000 0000
---------------------
  0000 0101 0010 0100
```

Bad!

BE + BE

```
  0000 0000 0010 0100
+ 0000 0000 0000 0101
---------------------
  0000 0000 0010 1001
```

Good!

LE + LE

```
  0010 0100 0000 0000
+ 0000 0101 0000 0000
---------------------
  0010 1001 0000 0000
```

Good!

Remember to be aware of word ordering in your code and avoid sadness!

### 4 Comments On This Entry


#### GWatt

30 June 2015 - 04:57 PM
Out of curiosity, what were you doing that needed you to care about endianness?

While this may not apply to you, in this instance, I highly recommend reading this before you think about endianness:

http://commandcenter...er-fallacy.html


#### WolfCoder

01 July 2015 - 08:23 AM
I haven't had to worry about this *even in my networked applications*. It all runs fine. Putting hton-whatever usually makes sure there's lots of bugs for me to find later, so I omit it until it actually becomes a problem.
