If you "really" wanted to get to that level you need to study digital electronics and learn to build digital circuits. You still won't be programming them in binary though.
The computer's memory is binary because it's made up of switches. The switches only have two positions: on and off. On and off are represented by 1 and 0. Voila! Binary.
A group of 8 switches is called a "byte" of memory, and that's the smallest unit you actually work with. The 8 switches can be set to 256 unique combinations of positions. So, the way you actually "set" the switches is to electronically pass a number between 0 and 255 to the byte, and the machine sets the eight binary switches according to the number you sent. 0, for example, means all the switches are off and 255 means all 8 switches are on. Likewise, you can read the switches, which will cause a number between 0 and 255 to be sent back to you so that you can determine how the switches are currently set.
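You can see this byte arithmetic for yourself. Here's a quick Python sketch (Python is just for illustration here; the point is the numbers, not the language):

```python
# One byte = 8 on/off switches, holding values 0 through 255.
value = 255                # highest value one byte can hold
print(bin(value))          # 0b11111111 -- all eight switches on
print(bin(0))              # 0b0 -- all switches off
print(2 ** 8)              # 256 -- unique combinations of 8 switches
```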
So, no one does binary really. We do what's called "hexadecimal". Hexadecimal is base 16 math. Our normal math is base 10. In other words, there are 10 digits that we use, from 0 to 9, and then each increment after that uses another digit. Hexadecimal has 16 digits, from 0 to F. After 9 it uses the alphabet from A to F. (Binary is actually base 2 math.)
The reason we use hexadecimal is because it fits the size of an 8 bit byte very well. Using two hexadecimal digits we can perfectly represent all 256 possible numeric combinations of a byte. 00 hex equals 0. FF hex equals 255, the highest possible value. 0F hex equals 15, and 10 hex is the number right after it, representing 16.
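If you want to check those conversions yourself, Python's built-in `hex` and `int` functions do exactly this:

```python
# Two hex digits cover a byte exactly: 00 through FF is 0 through 255.
print(hex(255))        # 0xff -- the whole byte in two hex digits
print(int("FF", 16))   # 255
print(int("0F", 16))   # 15
print(int("10", 16))   # 16 -- hex 10 comes right after hex 0F
```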
In one of the examples above you can see hexadecimal numbers being used.
If you program in "machine language" you won't program in binary, you'll program in hexadecimal. By passing in a hexadecimal number you will set the 8 switches. You can "think" of them in binary; there's a binary number that equals any hexadecimal number you come up with. In fact, if you convert the hexadecimal number to a binary number using math you can actually see which switches will be set and which won't within the byte.
But people who actually do this don't use binary, they use hexadecimal.
The way that machine language works is that one or more byte values may give the processor a command. Let's say we want the computer to add two numbers in memory together and then store the value.
If we were writing that in "normal" numbers, it might be that the binary value "1111 1111" means add together the values of the next two bytes and store them in a register. But instead of writing it in binary you could write it as the normal value of 255. 255 and "11111111" in binary are the same value. FF hexadecimal also equals both numbers.
So, then the "command" FF means add the next two byte values together and store them in such and such register. (A register is like a very super fast but incredibly tiny piece of memory where you store values.)
So, in machine language it might look like "FF 01 0F" and the FF means "add the next two byte values together". 01 + 0F in hex equals 10 hex. So this machine language command would cause the hex value 10 to be stored in such and such register since that's the answer to the addition problem.
So, if you were to write machine language code (the language we "imagine" the computer "speaking") you would do it in hex like that, not binary. You could write the same command in binary as "1111 1111 0000 0001 0000 1111", which would store the result "0001 0000". But the hex numbers are just much easier for humans to read. So, any program that allows you to write machine code is probably going to want you to do it in hex rather than binary. It's the exact same thing, but the hex version is just easier for humans to understand.
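To make the idea concrete, here's a toy Python sketch of a processor decoding "FF 01 0F". Keep in mind the FF opcode and the single register are invented for this example; real instruction sets differ:

```python
# A made-up one-instruction machine: opcode FF adds the next two bytes
# and stores the result in the register.
def run(program):
    register = 0
    pc = 0  # program counter: index of the next byte to decode
    while pc < len(program):
        opcode = program[pc]
        if opcode == 0xFF:  # our invented "add the next two bytes" command
            register = (program[pc + 1] + program[pc + 2]) & 0xFF  # keep it one byte
            pc += 3
        else:
            raise ValueError(f"unknown opcode {opcode:02X}")
    return register

result = run([0xFF, 0x01, 0x0F])
print(f"{result:02X}")  # prints 10 -- hex 10, i.e. decimal 16
```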
If you want to get technical, the machine doesn't actually understand binary either. Even binary is an "imagination" of what's actually going on. What's actually going on is electrical impulses are setting switches in the circuit which gets into digital circuits and voltages. The 1's and 0's of binary are just the way we typically think of the switches either being on or off. And hex just makes it easier to deal with a byte (smallest actual unit of memory that can be worked with consisting of 8 switches).
Assembly language takes this concept of "readability" up one level. Instead of the statement "FF 01 0F" that we had before, Assembler would make the addition command of "1111 1111" or "FF" into the mnemonic "Add". So in Assembler, the exact same code is written something like "Add 01, 0F". That still just means "add the next two byte values together and store the answer in such and such register".
So assembly language is the exact same thing as machine language except that you substitute human readable mnemonics for command codes and such.
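That substitution really is all an assembler does at its core. Here's a toy Python sketch that turns "Add 01, 0F" back into the byte values (again, the "Add"-equals-FF table is invented for the example):

```python
# A toy "assembler": look up the mnemonic's opcode, parse the hex operands.
MNEMONICS = {"Add": 0xFF}  # invented opcode table, just for illustration

def assemble(line):
    mnemonic, operands = line.split(maxsplit=1)
    opcode = MNEMONICS[mnemonic]
    byte_values = [int(op, 16) for op in operands.split(",")]
    return [opcode] + byte_values

machine_code = assemble("Add 01, 0F")
print(" ".join(f"{b:02X}" for b in machine_code))  # prints FF 01 0F
```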
If you get reasonably good at assembler, you should be able to write code in machine language using all hexadecimal (or theoretically even binary if you could find an assembler that can convert the binary code into hex that the machine can work with). But the bottom line is that at that point you will likely understand there's no point in writing code in machine language or binary because Assembler is the exact same thing, just much easier to actually work with.
So, unless you want to learn the electronics of constructing digital circuits (how to build a computer with a soldering iron), the next closest thing is Assembly language. That will definitely teach you how the machine actually "thinks".
If you really want to know how the computer "thinks", I recommend this book. It is extremely dated, having been written in 1992. However, it's one of the most approachable books on the subject of assembler (how the computer thinks) I've ever read.
He has a more modern book, but I've never read it. I can't imagine he's gotten worse at writing over the past 20 years, but since I've never read it, I can't recommend it. Still, it might be a more up to date choice, considering things have changed a lot in the computer world in the past 20 years.
Nonetheless, his 1992 book may not technically apply to today's computers, but it's probably going to give you just as good an understanding of how assembler works from an academic perspective. (If you just want to read about it rather than actually do it.)
I can highly recommend Kip Irvine's book. It's not quite as approachable as Duntemann's books, but it's still extremely good and applies to modern Windows computers, except for "maybe" not Windows 8. I'm certain it's good for Windows 7 at least. And Duntemann's 1992 book was written for DOS, while his more recent book is for Linux.
If you really want to know "how the computer thinks". Learn assembly language. Just be prepared to have the "curtain thrown back and the wizard revealed". You will learn kind of the computer equivilent of "photographs are just chemicals that are light sensitive and capture an image by reflecting tiny fragments of color in the exact shape of an image cast on the negative and not a way of capturing peoples' souls". And that may be greatly disappointing for some who hope to see the world in a romantic sort of way. The world's a much less exciting and mysterious place when you actually understand how it works.
This post has been edited by BBeck: 05 May 2014 - 01:00 PM