One of the first concepts covered in computer science class is the bit. We've come to know that it represents either a one or a zero, but what does this really mean? How do our computers process these ones and zeros to give us the functionalities we have today?

No worries, that's what part 1 of this post will help with!

Alright, so we know that bits work like switches: on or off. The on state is represented by a one; in real life, this corresponds to the presence of a voltage above a certain threshold. The off state is, similarly, represented by a zero.
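To make that threshold idea concrete, here's a toy sketch in Python. The 1.5 V cutoff is just an assumption for illustration; real logic families define their own voltage thresholds.

```python
# Toy sketch: reading a "bit" from a voltage level.
# The 1.5 V threshold is an illustrative assumption, not a real spec.
def voltage_to_bit(voltage: float, threshold: float = 1.5) -> int:
    """Return 1 if the voltage is above the threshold, else 0."""
    return 1 if voltage > threshold else 0

print(voltage_to_bit(3.3))  # a high voltage reads as 1
print(voltage_to_bit(0.2))  # a low voltage reads as 0
```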

OK, great: this means we can represent two states with one bit. That would be fine if we spoke in binary or only used two letters to communicate with one another.

What if we add more bits together? Well, now we have quite a few combinations of ones and zeros. If you start to draw out the possible combinations, you'll notice that we can represent many states: 2^(number of bits), to be exact.
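You can let Python draw out those combinations for you. This short sketch enumerates every pattern of n bits and confirms the count is 2^n:

```python
from itertools import product

# Enumerate every combination of n bits; there should be 2**n of them.
for n in (1, 2, 3):
    combos = ["".join(c) for c in product("01", repeat=n)]
    print(n, "bit(s) ->", len(combos), "states:", combos)
```

With 3 bits you get the 8 patterns 000 through 111, and each extra bit doubles the count.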

A **byte** is a group of eight bits. Of course, you can also say you have a 4-bit number or an 8-bit number, and so on and so forth (for terminology's sake, in case you were curious).

Anyway, with enough bits we can represent the alphabet and make sentences, or strings as they're commonly called in the field. This goes the other way as well: imagine writing some code or a program. Our text gets manipulated and processed into groups of bits so that parts of the CPU (the brain of the computer) can execute the various logic commands.
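You can see this text-to-bits mapping directly in Python. This sketch uses the ASCII encoding (one byte per character) to turn a short string into its bit pattern and back:

```python
# Round-trip a short string through its bit representation (ASCII).
text = "Hi"
bits = "".join(format(byte, "08b") for byte in text.encode("ascii"))
print(bits)  # 'H' is 72 -> 01001000, 'i' is 105 -> 01101001

# And back: regroup the bits into bytes and decode.
restored = bytes(
    int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)
).decode("ascii")
print(restored)  # Hi
```

Every character maps to a number, and every number maps to a fixed group of bits; that's all a string is under the hood.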

That's the basics of bits and bytes for now. Leave me a comment, was this helpful? What would you like to learn about next?

Yes, we all have to do it: get on a device to look up a term that gets thrown across the room in a tech conversation. It's impossible to avoid. Every day, there's a new framework, language, utility, device... the list goes on and on.