The assignment instructions are to implement the BCD character encoding of an IBM 704 (BCD table here) via bit packing, masking, and shifting:

- Include methods for encoding and decoding individual BCD characters
- Include methods for converting between BCD and ASCII characters
- Your Bcd class should allow allocation of a dynamic array of unsigned integers. Add methods for storing and retrieving 5 BCD characters in each unsigned int in that array. The first character should be in bits 0-5, the second in bits 6-11, the third in bits 12-17, the fourth in bits 18-23, and the fifth in bits 24-29. Bits 30 and 31 are unused and should be set to zero.
- Add methods to pack and unpack entire strings of ASCII characters into an array of integers of length n, with 5 BCD characters per 32-bit integer.

Before diving into this assignment, the class had only had experience with Java and C#. I am not familiar with the idea of bit packing or encoding at all, and have only the vaguest idea of how basic bit shifting works, e.g. the binary value 010 << 1 = 100.

Here is the only example we were given to dissect and work with:

```cpp
#include <iostream>
using namespace std;

typedef struct RGB {
    unsigned int s, b, g, r;
} RGB;

// Pack four 8-bit fields into one 32-bit word:
// s in bits 24-31, b in bits 16-23, g in bits 8-15, r in bits 0-7.
unsigned int packRGB(const RGB *rgb) {
    return rgb->s << 24 | rgb->b << 16 | rgb->g << 8 | rgb->r;
}

// Reverse: shift each field down to bit 0, then mask off everything else with 0xFF.
RGB unpackRGB(unsigned int rgb) {
    RGB unpacked = {rgb >> 24 & 0xFF, rgb >> 16 & 0xFF,
                    rgb >> 8 & 0xFF, rgb & 0xFF};
    return unpacked;
}
```

I just need to know what to even start doing, and possibly a layman's explanation of what that example is doing. My biggest problem is that I don't know how to even begin understanding the supplied BCD table.