Question

  1. What do you think of the use of Moore’s Law to hypothesize a timeframe for the origin of life?
  2. How does Moore’s Law apply to the efficiency of algorithms as we discussed in the last class?
  3. Where do you think computers will be in 10 years? 100?
Answer #1

Moore's law and the origin of life:

As life has evolved, its complexity has increased exponentially, just like Moore’s law. Now geneticists have extrapolated this trend backward and found that by this measure, life is older than the Earth itself.

[Figure: log10 genome size (base pairs) of prokaryotes, eukaryotes, worms, fish and mammals plotted against time, showing total and functional non-redundant genome size extrapolated back past the origin of Earth.]

Here’s an interesting idea. Moore’s Law states that the number of transistors on an integrated circuit doubles every two years or so. That has produced an exponential increase in the number of transistors on microchips and continues to do so.

But if an observer today were to measure this rate of increase, it would be straightforward to extrapolate backward and work out when the number of transistors on a chip was zero. In other words, you would recover the date when microchips were first developed, in the 1960s.

A similar process works with scientific publications. Between 1960 and 1990, they doubled in number every 15 years or so. Extrapolating this backward gives the origin of scientific publication as 1710, about the time of Isaac Newton.

Today, Alexei Sharov at the National Institute on Aging in Baltimore and his colleague Richard Gordon at the Gulf Specimen Marine Laboratory in Florida have taken a similar approach to complexity and life.

These guys argue that it’s possible to measure the complexity of life and the rate at which it has increased from prokaryotes to eukaryotes to more complex creatures such as worms, fish and finally mammals. That produces a clear exponential increase analogous to the one behind Moore’s Law, although in this case the doubling time is 376 million years rather than two years.

That raises an interesting question. What happens if you extrapolate back to the point of no complexity–the origin of life?

Sharov and Gordon say that the evidence by this measure is clear. “Linear regression of genetic complexity (on a log scale) extrapolated back to just one base pair suggests the time of the origin of life = 9.7 ± 2.5 billion years ago,” they say.
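
To make the arithmetic concrete, here is a rough back-of-the-envelope version of that extrapolation in Python. The 376-million-year doubling time is taken from the description above; the present-day functional genome complexity values are my own ballpark assumptions, used only to show how the calculation works, not figures from their paper.

```python
# Back-of-the-envelope extrapolation of genetic complexity back to one base pair.
# The doubling time (376 million years) comes from the text; the present-day
# functional complexity values below are illustrative assumptions.
import math

DOUBLING_TIME_GYR = 0.376  # billions of years per doubling of genetic complexity

for complexity_bp in (1e7, 1e8):  # assumed functional genome complexity today, in base pairs
    doublings = math.log2(complexity_bp / 1.0)        # doublings since a single base pair
    origin_gyr_ago = doublings * DOUBLING_TIME_GYR    # extrapolated age of life
    print(f"{complexity_bp:.0e} bp today -> origin roughly {origin_gyr_ago:.1f} billion years ago")
```

For complexities in the 10^7 to 10^8 base-pair range, the extrapolated origin lands roughly 8.7 to 10 billion years ago, consistent with the 9.7 ± 2.5 figure quoted above.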

And since the Earth is only 4.5 billion years old, that raises a whole series of other questions. Not least of these is how and where life began.

Of course, there are many points to debate in this analysis. The nature of evolution is filled with subtleties that most biologists would agree we do not yet fully understand.

For example, is it reasonable to think that the complexity of life has increased at the same rate throughout Earth’s history? Perhaps the early steps in the origin of life created complexity much more quickly than evolution does now, which would allow the timescale to be squeezed into the lifespan of the Earth.

Sharov and Gordon reject this argument, saying that it is suspiciously similar to arguments that squeeze the origin of life into the timespan outlined in the biblical Book of Genesis.

Let’s suppose for a minute that these guys are correct and ask about the implications of the idea. They say there is good evidence that bacterial spores can be rejuvenated after many millions of years, perhaps stored in ice.

They also point out that astronomers believe that the Sun formed from the remnants of an earlier star, so it would be no surprise that life from this period might be preserved in the gas, dust and ice clouds that remained. By this way of thinking, life on Earth is a continuation of a process that began many billions of years earlier around our star’s forerunner.

Sharov and Gordon say their interpretation also explains the Fermi paradox, which asks why, if the universe is filled with intelligent life, we see no evidence of it.

However, if life takes 10 billion years to evolve to the level of complexity associated with humans, then we may be among the first, if not the first, intelligent civilizations in our galaxy. And that would explain why, when we gaze into space, we do not yet see signs of other intelligent species.

There’s no question that this is a controversial idea that will ruffle more than a few feathers amongst evolutionary theorists.

Is Moore's law applicable to the efficiency of algorithms?

Not really, no. We don't discover a 2x faster algorithm for a given problem every two years. (I'm assuming you meant some restatement of Moore's law about hardware capability, not number of transistors on a chip.)

Algorithm performance in the literature is usually measured in terms of asymptotic complexity anyway, so saying an algorithm is 2x faster doesn't make sense on its own, because you have to specify the input size at which it is 2x faster. If a new algorithm has a better asymptotic running time than a previous one, it may be slower on some inputs, 2x faster on others, and 100x faster on others still. Moore's law would have to be defined in some other way to even make sense for algorithms.
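
As a quick illustration of why a single speedup factor is meaningless without an input size, the sketch below (my own example, not something from the original discussion) times an O(n) linear search against an O(log n) binary search: the measured ratio between them grows with n rather than being a fixed constant.

```python
# Compare an O(n) and an O(log n) algorithm at several input sizes.
# The speedup ratio of the asymptotically better one depends on n.
import bisect
import random
import time

def linear_search(sorted_list, target):
    # O(n): scan every element until the target is found
    for i, x in enumerate(sorted_list):
        if x == target:
            return i
    return -1

def binary_search(sorted_list, target):
    # O(log n): repeatedly halve the search range
    i = bisect.bisect_left(sorted_list, target)
    return i if i < len(sorted_list) and sorted_list[i] == target else -1

for n in (10, 1_000, 1_000_000):
    data = list(range(n))
    targets = [random.randrange(n) for _ in range(1_000)]

    start = time.perf_counter()
    for t in targets:
        linear_search(data, t)
    t_linear = time.perf_counter() - start

    start = time.perf_counter()
    for t in targets:
        binary_search(data, t)
    t_binary = time.perf_counter() - start

    print(f"n = {n:>9}: time(linear) / time(binary) = {t_linear / t_binary:.1f}")
```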

Moore's Law makes sense for hardware only because the set of operations that hardware performs in O(1) doesn’t change.

Also, in terms of practical software development, there is what's known as “the Great Moore's Law Counterbalancer,” which roughly states “software gets slower as hardware gets faster.” Remember when you could run an operating system on 64 MB of memory? Or when you could fit one on a floppy disk? As hardware capabilities evolve, people build more complex software that requires more resources. To build that more complex software, people use more high-level abstractions and tools. These tools are usually fairly general, which means their performance might not be optimized for the specific use case. Machine performance is traded away for software engineer performance (productivity) and also engineering best practices like modularity and testability, which are necessary for ensuring that this ever-more-complex software actually works and can be competently maintained and extended.

Computers after 100 years:

To call the evolution of the computer meteoric seems like an understatement. Consider Moore's Law, the observation made by Intel co-founder Gordon Moore in 1965. He noticed that the number of transistors engineers could cram onto a silicon chip doubled every year or so. That manic pace slowed over the years to a slightly more modest 24-month cycle.

Assuming microprocessor manufacturers can continue to live up to Moore's Law, the processing power of our computers should double every two years. That would mean computers 100 years from now would be 1,125,899,906,842,624 times more powerful than the current models. That's hard to imagine.
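
As a quick check on that figure (assuming, as above, a clean doubling every two years with no slowdown), 100 years corresponds to 50 doublings:

```python
# 100 years at one doubling every two years = 50 doublings
doublings = 100 // 2
print(2 ** doublings)  # 1125899906842624, about 1.1 quadrillion
```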

But even Gordon Moore would caution against assuming Moore's Law will hold out that long. In 2005, Moore said that as transistors reach the atomic scale, we may encounter fundamental barriers we can't cross. At that point, we won't be able to cram more transistors in the same amount of space.

We may get around that barrier by building larger processor chips with more transistors. But transistors generate heat, and a hot processor can cause a computer to shut down. Computers with fast processors need efficient cooling systems to avoid overheating. The larger the processor chip, the more heat the computer will generate when working at full speed.

Another tactic is to switch to a multi-core architecture. A multi-core processor divides its processing power among several cores. Multi-core chips are good at handling calculations that can be broken down into smaller, independent pieces; however, they aren't as good at handling large computational problems that can't be broken down.
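
A minimal sketch of that distinction (my own illustration, not part of the original answer): a sum over a large range splits cleanly into independent chunks that separate cores can handle at once, while a step-by-step recurrence cannot be split because every step depends on the previous result.

```python
# Parallelizable vs. inherently sequential work on a multi-core machine.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000

    # Parallelizable: the total is just the sum of independent chunk sums,
    # so each core can work on its own chunk at the same time.
    chunk = n // 4
    chunks = [(i, min(i + chunk, n)) for i in range(0, n, chunk)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same result as sum(range(n)), computed on 4 cores

    # Not parallelizable: each step needs the previous result, so extra
    # cores cannot speed this loop up no matter how many there are.
    x = 1.0
    for _ in range(n):
        x = 0.5 * x + 1.0
    print(x)
```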

Future computers may rely on a completely different model than traditional machines.
