5 Replies - 524 Views - Last Post: 24 October 2017 - 02:44 PM

#1 Piranha91

Bitmap Pixel Format not what I expect it to be

Posted 10 October 2017 - 12:54 PM

Hello,

I'm trying to do some image manipulation and I'm running into problems with the very basics. I'd like my code to be able to distinguish between greyscale images and RGB images, and to process them accordingly. However, I'm having trouble even opening them with the expected format. I have two representative examples:

test.tif should be a 16 bit greyscale image: https://drive.google...iew?usp=sharing

input.tif should be a 16 bit RGB image: https://drive.google...iew?usp=sharing


I have code which should tell me the total bit depth:

System.Drawing.Bitmap bitmap = new Bitmap("input.tif"); // change to test.tif when necessary
int bit_depth = GetBitDepth(bitmap);

// The PixelFormat enum value encodes the total bits per pixel in bits 8-15.
public static int GetBitDepth(Bitmap image)
{
    return ((int)image.PixelFormat >> 8) & 0xFF;
}
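(Side note: if I'm reading the docs correctly, the built-in Image.GetPixelFormatSize should return the same bits-per-pixel number, so my helper may be redundant; bit_depth_builtin below is just a throwaway name.)

int bit_depth_builtin = System.Drawing.Image.GetPixelFormatSize(bitmap.PixelFormat);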




When I run the code on input.tif (which opens as 16-bit RGB in ImageJ) I get the expected total bit depth of 48, and it loads with a PixelFormat of Format48bppRgb.

However, when I run the code on test.tif (which opens as 16-bit greyscale in ImageJ) I get a total bit depth of 32, and it loads with a PixelFormat of Format32bppArgb.

I have some questions:

1: Is test.tif opening as 32-bit greyscale, or as 16-bit greyscale combined with some second channel?

2: How do I get test.tif to open as 16-bit greyscale, the way it does in ImageJ?

3: What would be a reliable way to determine the number of bits per channel (i.e. to distinguish whether I'm opening a 48-bit single-channel image or a 16-bit-per-channel RGB image)?

4: I eventually want to have code that allows me to manipulate the pixels. It currently looks as follows:


public static Bitmap RescaleRGB(string path, double[] scalefactors, int max_possible_pixel_value)
{
    System.Drawing.Bitmap bitmap = new Bitmap(path);
    System.Drawing.Imaging.BitmapData data = bitmap.LockBits(
        new Rectangle(0, 0, bitmap.Width, bitmap.Height),
        System.Drawing.Imaging.ImageLockMode.ReadWrite,
        System.Drawing.Imaging.PixelFormat.Format24bppRgb);

    IntPtr ptr = data.Scan0;                            // address of the first scan line
    int bytes = Math.Abs(data.Stride) * bitmap.Height;  // total size of the locked buffer
    byte[] rgbvalues = new byte[bytes];                 // array to hold the raw pixel bytes
    // copy the pixel data into the array
    System.Runtime.InteropServices.Marshal.Copy(ptr, rgbvalues, 0, bytes);

    // scale pixel values
    // (note: Format24bppRgb stores each pixel as B, G, R in memory, so
    // scalefactors[0] is applied to the blue byte, [1] to green, [2] to red)
    byte max_pixel_value = (byte)max_possible_pixel_value;

    for (int counter = 0; counter + 2 < rgbvalues.Length; counter += 3)
    {
        // clamp before converting so Convert.ToByte can't throw an OverflowException
        double b = rgbvalues[counter] * scalefactors[0];
        rgbvalues[counter] = Convert.ToByte(Math.Min(b, max_pixel_value));

        double g = rgbvalues[counter + 1] * scalefactors[1];
        rgbvalues[counter + 1] = Convert.ToByte(Math.Min(g, max_pixel_value));

        double r = rgbvalues[counter + 2] * scalefactors[2];
        rgbvalues[counter + 2] = Convert.ToByte(Math.Min(r, max_pixel_value));
    }

    // copy the scaled values back into the bitmap
    System.Runtime.InteropServices.Marshal.Copy(rgbvalues, 0, ptr, bytes); // source, start index, destination, length
    bitmap.UnlockBits(data);

    return bitmap;
}



I basically parroted the code from a previous topic and I only semi-understand it. If someone could clarify some things for me I'd appreciate it:

1. In the line "int bytes = Math.Abs(data.Stride) * bitmap.Height;", I don't really understand what the code is doing. What is data.Stride? If I'm understanding correctly it's the amount of bits, including "padding space", used to hold a single element of an array? If that's the case, would bytes be the number of bits per one pixel column in the image?

2. The above code seems to only work if each channel value fits in one byte (e.g. an 8-bit-per-channel image). Indeed, I have to convert the image to Format24bppRgb before doing the scaling. How would I go about supporting 16-bit-per-channel images? I can no longer use Convert.ToByte because each value is no longer stored in a single byte. What would be an appropriate alternative?

Thanks in advance for any help!


Replies To: Bitmap Pixel Format not what I expect it to be

#2 Skydiver

Re: Bitmap Pixel Format not what I expect it to be

Posted 10 October 2017 - 05:57 PM

Why are you shifting the PixelFormat of the image to the right 8 bits and then just taking the lower byte? I would think that you simply need to check to see if it is equal to PixelFormat.Format16bppGrayScale.
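Roughly something like this, untested against your files (bitmap here stands in for whatever Bitmap variable you already have):

if (bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.Format16bppGrayScale)
{
    // 16 bits in a single greyscale channel
}
else if (bitmap.PixelFormat == System.Drawing.Imaging.PixelFormat.Format48bppRgb)
{
    // 16 bits per channel, three RGB channels
}
else
{
    // handle the 8-bit-per-channel formats here
}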

#3 GazinAtCode

Re: Bitmap Pixel Format not what I expect it to be

Posted 18 October 2017 - 10:36 AM

Your file "test.tif" is identified as Format32bppArgb on my laptop. Apparently, System.Drawing.Bitmap isn't good at reading 16bpp images (more info and a possible workaround here: https://www.codeproj...ull-bit-Support ).

As for the stride, it's the number of bytes in each row of pixels (scan line). In other words, it's the "step" you have to take (measured in bytes) to reach the next line of pixels in the image. Each row is padded out to a multiple of four bytes, so there may be up to three extra bytes per row that contain no actual pixel information. The padding was originally introduced for alignment reasons.
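To make that concrete, here's a rough, untested sketch of how the stride is typically used to walk the locked buffer row by row. It reuses the data and rgbvalues variables from your RescaleRGB method and assumes Format24bppRgb (3 bytes per pixel) with a positive Stride; keep in mind that GDI+ stores those 3 bytes in B, G, R order:

// step through the buffer one scan line at a time so the padding bytes
// at the end of each row are never treated as pixel data
const int bytesPerPixel = 3;                 // Format24bppRgb
for (int y = 0; y < data.Height; y++)
{
    int rowStart = y * data.Stride;          // Stride = bytes per scan line, padding included
    for (int x = 0; x < data.Width; x++)
    {
        int i = rowStart + x * bytesPerPixel;
        byte b = rgbvalues[i];               // blue
        byte g = rgbvalues[i + 1];           // green
        byte r = rgbvalues[i + 2];           // red
        // ... scale/clamp the values here and write them back ...
    }
}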

#4 Piranha91

Re: Bitmap Pixel Format not what I expect it to be

Posted 19 October 2017 - 11:13 AM

GazinAtCode, on 18 October 2017 - 10:36 AM, said:

Your file "test.tif" is identified as Format32bppArgb on my laptop. Apparently, System.Drawing.Bitmap isn't good at reading 16bpp images (more info and a possible workaround here: https://www.codeproj...ull-bit-Support ).

As for the stride, it's the number of bytes in each row of pixels (scan line). In other words, it's the "step" you have to take (measured in bytes) to reach the next line of pixels in the image. Each row is padded out to a multiple of four bytes, so there may be up to three extra bytes per row that contain no actual pixel information. The padding was originally introduced for alignment reasons.


Thank you, especially for the CodeProject link. I will try to implement this over the weekend (I'm not a professional coder; just a researcher who likes to automate routine tasks).

Given that your laptop also identifies the greyscale image as ARGB, and that the code in the link expects you to tell it the bit depth so it knows what to cast the pixels to, do you have any recommendations for how to determine the bit depth algorithmically? If not, my laptop does correctly identify 16-bit RGB as I mentioned in the OP, so I could just check for that format and assume everything else is 16-bit greyscale. However, I'm planning to leave my current job in about a year, and it would be great if whoever comes after me could drop in 8-bit or 16-bit images interchangeably.

Thanks again!

#5 GazinAtCode

Re: Bitmap Pixel Format not what I expect it to be

Posted 24 October 2017 - 10:50 AM

You're welcome.

Piranha91, on 19 October 2017 - 11:13 AM, said:

Thank you, especially for the CodeProject link. I will try to implement this over the weekend (I'm not a professional coder; just a researcher who likes to automate routine tasks).

Given that your laptop also identifies the greyscale image as ARGB, and that the code in the link expects you to tell it the bit depth so it knows what to cast the pixels to, do you have any recommendations for how to determine the bit depth algorithmically?


As a last resort, you could try to plow through the file header and extract the relevant information from it. I suppose it must be in there somewhere. Unfortunately, I'm not really familiar with the structure of a TIFF file, and it can seem pretty confusing: https://www.itu.int/.../docs/tiff6.pdf
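From a quick skim of that spec, the bit depth appears to live in the BitsPerSample tag (258) of the first image file directory (IFD), with the channel count in SamplesPerPixel (tag 277). Below is a very rough, untested sketch of reading those two tags. It only handles little-endian ("II") baseline TIFFs, ignores BigTIFF and multi-page files, and the TiffInfo class and ReadBitDepth method are names I made up for the example:

using System;
using System.IO;

public static class TiffInfo
{
    // Untested sketch: pull BitsPerSample (tag 258) and SamplesPerPixel (tag 277)
    // out of the first IFD of a little-endian ("II") baseline TIFF.
    public static void ReadBitDepth(string path, out int bitsPerSample, out int samplesPerPixel)
    {
        bitsPerSample = 0;
        samplesPerPixel = 1;                          // TIFF default when tag 277 is absent

        using (var br = new BinaryReader(File.OpenRead(path)))
        {
            string byteOrder = new string(br.ReadChars(2));
            if (byteOrder != "II")
                throw new NotSupportedException("Only little-endian TIFFs are handled in this sketch.");

            br.ReadUInt16();                          // magic number, should be 42
            uint ifdOffset = br.ReadUInt32();         // offset of the first IFD
            br.BaseStream.Seek(ifdOffset, SeekOrigin.Begin);

            ushort entryCount = br.ReadUInt16();      // number of 12-byte directory entries
            for (int i = 0; i < entryCount; i++)
            {
                ushort tag = br.ReadUInt16();
                br.ReadUInt16();                      // field type (3 = SHORT), ignored here
                uint count = br.ReadUInt32();
                long valuePos = br.BaseStream.Position;   // start of the 4-byte value/offset field

                if (tag == 277)                       // SamplesPerPixel
                {
                    samplesPerPixel = br.ReadUInt16();
                }
                else if (tag == 258)                  // BitsPerSample
                {
                    if (count <= 2)                   // one or two SHORTs fit in the field itself
                    {
                        bitsPerSample = br.ReadUInt16();
                    }
                    else                              // otherwise the field holds an offset to the values
                    {
                        uint offset = br.ReadUInt32();
                        br.BaseStream.Seek(offset, SeekOrigin.Begin);
                        bitsPerSample = br.ReadUInt16();   // bit depth of the first channel
                    }
                }

                br.BaseStream.Seek(valuePos + 4, SeekOrigin.Begin); // move to the next entry
            }
        }
    }
}

If it works, I'd expect it to report 16 bits per sample with 1 sample per pixel for your test.tif, and 16 with 3 for input.tif, assuming the files really are what ImageJ says they are.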



#6 Skydiver

Re: Bitmap Pixel Format not what I expect it to be

Posted 24 October 2017 - 02:44 PM

The TIFF format is very confusing because it tries to be a container for everything. It's almost like it was designed by committee.
