Our digital civilisation, if you can call it that, runs on just two numbers – 0 and 1. The devices we call computers run on vast strings of ones and zeros. How? By having electrical currents that are either flowing or not. The tiny electronic switches that are either on (1) or off (0) are called transistors.
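To make that concrete, here is a minimal sketch (mine, not the column’s) of how a piece of text is reduced to the ones and zeros those on/off switches hold; Python is just a convenient way to show it:

```python
# Turn a short piece of text into the binary digits a chip actually stores.
text = "chip"
bits = " ".join(format(byte, "08b") for byte in text.encode("utf-8"))
print(bits)  # 01100011 01101000 01101001 01110000 - each 1 or 0 is one switch, on or off
```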
Once upon a time, these were tangible objects: I remember buying one with my pocket money in the 1950s for a radio receiver I was building. But rapidly they were reduced in size, to the point where electrical circuits using them could be etched on thin wafers of silicon. Which I guess is how they came to be called silicon “chips”.
Nowadays, a chip is a grid of millions, or even billions, of these tiny switches, flipping on and off to process and store those ones and zeros – converting images, characters, sounds, whatever, into billions of binary digits. In the 1960s, Gordon Moore, a co-founder of Intel, an early chip manufacturer, noticed that the number of transistors that could be packed on to a given area of silicon was doubling every year. And since computing power seemed to be correlated with chip density, he formulated Moore’s law, later revised to predict that computing power would double every two years – a compound annual growth rate of about 41%. Which kind of explains why the A15 processor in my Apple iPhone (which has 15bn transistors) has vastly more computing power than the room-size IBM computer I used as a student.
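Where that 41% comes from (my arithmetic, added for illustration, not Moore’s): if computing power doubles over two years, the growth in any single year is the square root of two.

$$
(1 + r)^2 = 2 \quad\Rightarrow\quad 1 + r = \sqrt{2} \approx 1.414 \quad\Rightarrow\quad r \approx 41\% \text{ a year}
$$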
Inescapably, then, computers need chips. But increasingly that means nearly everything needs chips. How come? Because computers are embedded in almost every device we use. And not just in things that we regard as electronic. One of the things we learned during
Read more on theguardian.com