Moore's law, one of the foundations of the information revolution, cannot last forever, writes Michio Kaku in Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 (www.penguin.com). He adds that the future of the world economy and the destiny of nations may ultimately hinge on which nation develops a suitable replacement for silicon.

Double power

For starters, the law — ‘a rule of thumb that has driven the computer industry for fifty or more years, setting the pace for modern civilisation like clockwork' — says that computer power doubles about every eighteen months. First stated in 1965 by Gordon Moore, one of the founders of Intel Corporation, this law has helped to revolutionise the world economy, generated fabulous new wealth, and irreversibly altered our way of life, narrates Kaku. He notes that when you plot, on a logarithmic scale, the plunging price of computer chips and their rapid advancements in speed, processing power, and memory, you find a remarkably straight line going back fifty years.

And that if you extend the graph, so that it includes vacuum tube technology and even mechanical hand-crank adding machines, the line can be extended more than 100 years into the past! Which explains why, “today, your cell phone has more computer power than all of NASA back in 1969, when it placed two astronauts on the moon… The Sony PlayStation of today, which costs $300, has the power of a military supercomputer of 1997, which cost millions of dollars.”
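As a rough check of that doubling claim, here is a small Python sketch; the 18-month doubling period is the book's figure, while the start and end years (the 1969 moon landing to the book's 2011 publication) are illustrative assumptions:

    # Moore's law as arithmetic: computer power doubles every 18 months.
    def moores_law_factor(years, doubling_months=18):
        """Growth factor after `years` of doubling every `doubling_months`."""
        doublings = years * 12 / doubling_months
        return 2 ** doublings

    # Illustrative span: the 1969 moon landing to the book's 2011 publication.
    years = 2011 - 1969
    print(f"{years * 12 / 18:.0f} doublings, factor {moores_law_factor(years):.2e}")
    # About 28 doublings, a factor of roughly 2.7e8 -- hundreds of millions of
    # times more power, consistent with a phone dwarfing 1969-era NASA computers.

On the logarithmic scale mentioned above, each of those doublings is one equal step, which is why the data fall on a straight line.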

Too much heat

The current revolution in silicon-based computers, as the author elaborates, has been driven by one overriding fact – the ability of UV (ultraviolet) light to etch smaller and smaller transistors onto a wafer of silicon. “Today, a Pentium chip may have several hundred million transistors on a wafer the size of your thumbnail. Because the wavelength of UV light can be as small as 10 nanometres, it is possible to use etching techniques to carve out components that are only thirty atoms across.”

The bad news, however, is that this process cannot continue forever, for several reasons discussed in the book. The foremost is that the heat generated by powerful chips will eventually melt them, cautions Kaku. “One naïve solution is to stack the wafers on top of one another, creating a cubical chip. This would increase the processing power of the chip but at the expense of creating more heat. The heat from these cubical chips is so intense you could fry an egg on top of them.”

Cubical and un-cool

If you wonder why a cubical chip generates more heat, here is an example from the book: when you double the size of a cubical chip, the heat it generates goes up by a factor of eight, since the cube contains eight times more electrical components, but its surface area increases only by a factor of four. And the simple rule, as the author reminds us, is that if you pass cool water or air across a hot chip, the cooling effect is greater if you have more surface contact with the chip.
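The geometry behind that is easy to verify; here is a minimal Python sketch (the factor of two is the book's example, and heat is assumed proportional to the number of components, that is, to volume):

    # Doubling a cube's linear size: volume (components, hence heat) scales as
    # the cube of the factor; surface area (available for cooling) only as its
    # square.
    def cube_scaling(scale):
        heat_factor = scale ** 3       # eight times the components and heat
        surface_factor = scale ** 2    # only four times the cooling surface
        return heat_factor, surface_factor

    heat, surface = cube_scaling(2)
    print(f"heat x{heat}, cooling surface x{surface}, heat per unit surface x{heat / surface}")
    # heat x8, cooling surface x4, heat per unit surface x2.0 -- the chip runs hotter.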

The second hurdle facing Moore's law, as Kaku outlines, is a problem posed by quantum theory – the uncertainty principle – which says that you cannot know for certain both the location and the velocity of any atom or particle. “Today's Pentium chip may have a layer about thirty atoms thick. By 2020, that layer could be five atoms across, so that the electron's position is uncertain, and it begins to leak through the layer, causing a short circuit. Thus, there is a quantum limit to how small a silicon transistor can be.”
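How fast the atoms run out can be estimated with the same doubling rule; a hedged sketch (the 30-atom and 5-atom figures are the book's, while reading Moore's law as density doubling, so that linear size shrinks by root two every 18 months, is an illustrative assumption):

    import math

    # If transistor density doubles every 18 months, a feature's linear size
    # shrinks by sqrt(2) per 18 months. Time for a 30-atom layer to thin to 5:
    start_atoms, end_atoms = 30, 5
    periods = math.log(start_atoms / end_atoms, math.sqrt(2))
    print(f"about {periods * 1.5:.1f} years")
    # about 7.8 years -- roughly the book's span from "today" (2011) to "by 2020".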

Parallel promise

One answer to the problem is parallel processing, whereby you string a series of chips together in parallel, so that a computer problem is broken down into pieces, processed simultaneously, and then reassembled at the end, the author observes. He sees an analogue of parallel processing in the way our own brain works. “If you do an MRI scan of the brain as it thinks, you find that various regions of the brain light up simultaneously, meaning that the brain breaks up a task into small pieces and processes each piece simultaneously.”
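The split-process-reassemble pattern the author describes can be shown in a few lines of Python, using the standard multiprocessing module; the task here (summing squares) is an arbitrary stand-in for a real workload:

    from multiprocessing import Pool

    def sum_of_squares(chunk):
        # Each worker (standing in for a chip) handles one piece of the problem.
        return sum(n * n for n in chunk)

    if __name__ == "__main__":
        numbers = list(range(1_000_000))
        chunks = [numbers[i::4] for i in range(4)]       # break into pieces
        with Pool(processes=4) as pool:
            partials = pool.map(sum_of_squares, chunks)  # process simultaneously
        print(sum(partials))                             # reassemble at the end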

More importantly, as Kaku highlights, this explains why neurons (which carry electrical messages at the excruciatingly slow pace of 200 miles per hour) can outperform a supercomputer, in which messages travel at nearly the speed of light. “What our brain lacks in speed, it more than makes up for by doing billions of small calculations simultaneously and then adding them all up.”

The critical factor in parallel processing, though, is breaking up each problem into pieces that different chips can process. The author alerts that coordinating this breakup can be exceedingly complicated, and that it depends on the specifics of each problem, making a general procedure very difficult to find. “The human brain does this effortlessly, but Mother Nature has had millions of years to solve this problem. Software engineers have had only a decade or so.”

Molecular transistors

A possible replacement for silicon chips is transistors made of individual atoms, the author mentions. “If silicon transistors fail because wires and layers in a chip are going down in size to the atomic scale, then why not start all over again and compute on atoms?” And molecular transistors can be a way of realising the atomic route. “A transistor is a switch that allows you to control the flow of electricity down a wire. It's possible to replace a silicon transistor with a single molecule, made of chemicals like rotaxane and benzenethiol.”

A molecule of benzenethiol, Kaku describes, looks like a long tube, with a ‘knob,' or valve, made of atoms in the middle. Normally, electricity is free to flow down the tube, making it conductive, but it is also possible to twist the ‘knob,' which shuts off the flow of electricity, he instructs. “In this way, the entire molecule acts like a switch that can control the flow of electricity. In one position, the knob allows electricity to flow, which can represent the number ‘1.' If the knob is turned, then the electric flow is stopped, which represents the number ‘0.' Thus, digital messages can be sent by using molecules.”
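Stripped of the chemistry, the knob behaves like any two-state switch; a toy Python model (purely illustrative, not a simulation of the molecule):

    # Toy model of the molecular switch: a knob that either lets current flow
    # (bit 1) or blocks it (bit 0). The chemistry is reduced to a boolean flag.
    class MolecularSwitch:
        def __init__(self):
            self.knob_open = True            # electricity flows by default

        def twist_knob(self, open_knob):
            self.knob_open = open_knob

        def read_bit(self):
            return 1 if self.knob_open else 0

    # Encode the digital message 1, 0, 1 on three switches.
    switches = [MolecularSwitch() for _ in range(3)]
    for switch, bit in zip(switches, [1, 0, 1]):
        switch.twist_knob(bit == 1)
    print([s.read_bit() for s in switches])  # [1, 0, 1]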

Shape-shifting technology

While that is simplified physics for the lay reader, a down-to-screen example comes your way in a section titled ‘Midcentury (2030-2070).' It opens with ‘shape-shifting,' making a reference to the T-1000, the advanced robot that attacks Arnold Schwarzenegger in the movie ‘Terminator 2: Judgment Day.' Lest you dismiss the T-1000 as science fiction, the author alerts that by midcentury a form of shape-shifting technology may become commonplace, and that one of the main companies driving this technology is Intel.

What are the scientists at Intel working on? Creating computer chips in the shape of tiny grains of sand, whose static electric surface charge can be changed so that the grains line up to form a certain array. “These grains are called ‘catoms' (short for claytronic atoms) since they can form a wide range of objects by simply changing their charges, much like atoms.”

The author quotes Jason Campbell, a senior researcher at Intel, on catoms in mobile devices: “My cell phone is too big to fit comfortably in my pocket and too small for my fingers. It's worse if I try to watch movies or do my email. But if I had 200 to 300 millilitres of catoms, I could have it take on the shape of the device that I need at that moment.” That way, a gadget that is a cell phone one moment can morph into something else the next, rather than burdening you with too many electronic devices.

From refrigerator to oven

The book paints scenarios for tomorrow's child and mother, too. For instance, children might celebrate Christmas not by opening presents under the tree but by downloading software for their favourite toy that Santa has emailed them, Kaku foresees. “Renovating homes and apartments won't be such a chore with programmable matter. In your kitchen, replacing the tiles, tabletops, appliances, and cabinets might simply involve pushing a button.”

Visiting Seth Goldstein at Carnegie Mellon University, Kaku wonders whether it would be a programming nightmare to give detailed instructions to billions of catoms so that a refrigerator might suddenly transform into an oven. The response he gets is that detailed instructions are not necessary, because each catom has to know only which neighbours it must attach to. “If each catom is instructed to bind with only a tiny set of neighbouring catoms, then the catoms would magically rearrange themselves into complex structures (much like the neurons of a baby's brain need to know only how to attach themselves to neighbouring neurons as the brain develops).”
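Goldstein's neighbours-only idea can be caricatured in a few lines of Python; the 3x3 shape and the neighbour table are illustrative assumptions, standing in for whatever structure the catoms are meant to form:

    # No catom holds the global blueprint; each knows only which neighbours to
    # bind to. Starting from a seed, the structure grows by local attachment.
    neighbours = {
        0: {1, 3}, 1: {0, 2, 4}, 2: {1, 5},
        3: {0, 4, 6}, 4: {1, 3, 5, 7}, 5: {2, 4, 8},
        6: {3, 7}, 7: {4, 6, 8}, 8: {5, 7},
    }

    assembled, frontier = {0}, [0]           # catom 0 is the seed
    while frontier:                          # each step is purely local
        for n in neighbours[frontier.pop()]:
            if n not in assembled:
                assembled.add(n)
                frontier.append(n)

    print(f"all {len(assembled)} catoms attached: {sorted(assembled)}")
    # all 9 catoms attached: [0, 1, 2, 3, 4, 5, 6, 7, 8]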

A book that can stretch your sights to many years ahead.

dmurali@thehindu.co.in

Tailpiece

“All was well with our futurologist till...”

“Uh?”

“The accounts department noticed that his appointment letter was post-dated by 20 years!”