Koasati, let's make it simple and concentrate on the PC and BMP bitmaps. The list at post #19 is the way it is. As a game engine programmer, I can tell you those are the actual color depths one works with. In short: 24-bit uses the same number of colors as 32-bit. The 8-bit difference is used for the Alpha channel. Some of the numbers you have shown are wrong. Sorry about that. It is, like Frogboy said, a matter of powers of 2. You can calculate these values yourself, as in the sketch below. But his error was that he assumed that all 32 bits are used to store color information. That is nonsense.
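To make the powers-of-2 point concrete, here is a minimal C sketch. The colors_for_bits helper is just an illustrative name of mine, not something from any SDK:

```c
#include <stdio.h>

/* Number of distinct colors for a given number of color bits.
 * It is just powers of 2, as Frogboy said. */
static unsigned long colors_for_bits(unsigned bits)
{
    return 1UL << bits;   /* 2^bits */
}

int main(void)
{
    /* 24-bit: 8 bits each for red, green and blue. */
    printf("24-bit: %lu colors\n", colors_for_bits(24));  /* 16,777,216 */

    /* 32-bit: still only 24 bits of color; the extra 8 bits are
     * padding or an Alpha channel, not extra colors. */
    printf("32-bit: %lu colors\n", colors_for_bits(24));  /* same 16,777,216 */

    return 0;
}
```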
What happens with those 8 bits when they are not used? They just take up an extra byte (containing 0). WOM is right about that. In TGA or PNG bitmaps that extra byte is used to store Alpha information. BTW, in W2K/WXP there is support for 32-bit BMP bitmaps that actually use the Alpha channel. That allows programs like DesktopX to do per-pixel alpha blending.
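For illustration, this is roughly how a single 32-bit pixel is laid out in a Windows DIB. The struct name is mine; the BGRA byte order is the usual Windows convention:

```c
#include <stdint.h>

/* Typical layout of one 32-bit pixel in a Windows DIB (BGRA order).
 * In a plain 32-bit BMP the fourth byte is usually just 0 (padding);
 * formats like TGA/PNG, and alpha-aware 32-bit bitmaps on W2K/WXP,
 * actually use it as an Alpha channel. */
typedef struct {
    uint8_t blue;
    uint8_t green;
    uint8_t red;
    uint8_t alpha;   /* padding in a plain 32-bit BMP, real Alpha otherwise */
} Pixel32;
```

In a 24-bit BMP you simply drop that fourth byte, so each pixel takes 3 bytes instead of 4 (plus the usual row padding), while the amount of color information stays exactly the same.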
If you read somewhere that 32-bit contains more color information than 24-bit, then that information is likely to be incorrect. Maybe some odd hardware or odd graphics format uses the extra 8 bits for color information, but I don't know of any.
Anyway, like I said, the numbers of colors you gave for the 24 and 32-bit color depths are incorrect. Even the 256-color format needs explanation. In 8-bit color formats there are 256 colors, but these colors are actually 24-bit values stored in a table (the palette). The pixel value is just an index into that table.
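As a rough sketch of that indexing (PaletteEntry and lookup_color are just illustrative names, modelled on Windows' RGBQUAD), an 8-bit pixel is resolved like this:

```c
#include <stdint.h>

/* A palette entry is a full 24-bit color (stored here in 4 bytes,
 * like Windows' RGBQUAD). The 8-bit pixel itself is just an index. */
typedef struct {
    uint8_t blue;
    uint8_t green;
    uint8_t red;
    uint8_t reserved;
} PaletteEntry;

/* Turn an 8-bit pixel value into its real 24-bit color by looking it
 * up in the 256-entry palette stored with the bitmap. */
static PaletteEntry lookup_color(const PaletteEntry palette[256], uint8_t pixel)
{
    return palette[pixel];
}
```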
If you want more information about this, then the only reliable sources are the Platform SDK from Microsoft, the reference manuals of the various graphics cards and the programmers' reference manuals of the various file formats.
BTW: Don't believe everything Google comes up with.