Brett King

Posts Tagged ‘Moore’s Law’

Telcos need to think about different mobile pricing

In Mobile Banking, Retail Banking, Technology Innovation on June 15, 2010 at 11:24

The greatest competitive differentiation a mobile operator could give me today is an always-on data plan that works across devices. Right now I have an iPhone, a Blackberry, an iPad and a Mac, and I effectively have to manage a different data plan for each device. This sucks. I also maintain a broadband connection at home, although I would gladly abandon that if my wireless data deal were better.

Not only does multi-device connectivity cost me more than I believe it should, but I actually have different plans with different providers for different devices. Some are monthly WiFi deals, others are mobile data deals that actually limit my downloads on a monthly basis, and others are pre-paid deals that I pick up when I am visiting other countries.

My best deal is a great 3.5G solution through CSL in Hong Kong, where I pay around US$50 a month for 21Mbps access speeds and unlimited downloads. Unfortunately, when I am working in the United States, UK and Australia on my iPhone or iPad, I can’t get a deal even remotely close to this sort of value for money. Firstly, 21Mbps isn’t available on AT&T, Telstra or many of the UK providers. Secondly, unless you are on Sprint 4G in the US, there’s not one provider that offers an unlimited download deal.

In the US, UK and Australia on my mobile plans I am restricted to downloading between 6 Gb and 10 Gb per month. You might think that sounds like a lot, but I’ve recently been conducting webinars and Skype teleconferences frequently, and I can chew through 1 Gb of data in a single day. If you exceed the monthly download limit, that’s where you start to singlehandedly make a sizeable direct contribution to the profits of the telco itself. Normally this manifests itself as overage charges that resemble the budget of a mid-size multinational.

Plans need to be for access, not data

I understand the need and the right of an operator to make margin from their business. To some extent, with a fixed-line business, I understand the cost of running cable and the fact that as a user of the infrastructure I must pay a premium. But let’s face it: when it’s wireless data on the 3G or 4G network, the operator is essentially providing this over cell tower infrastructure that was in most cases installed over 10 years ago and has simply undergone successive upgrades of antenna and firmware to operate at the new frequencies. Unless you are a VNO (Virtual Network Operator), the data itself is costing you next to nothing.

In any case, the cost of the infrastructure is a sunk cost, and regardless of how much data I suck down the pipe, I should be paying for the size of the pipe, not for the data because the operator most certainly isn’t paying for the data.

To illustrate the great digital divide, let’s compare the more progressive countries with the US, UK and Australia based on 12-month contracts.

The great digital divide
[Image: DataPlans35G.png – comparison of 3.5G data plan pricing across countries]

Pricing plans should get cheaper a lot faster than they do

You’ve heard of Moore’s Law, right? Well, there’s a law for the telecoms sector in respect of bandwidth too. It’s called Gilder’s Law, and it states that the total carrying capacity of communications systems grows at least three times faster than computing power. Moore’s Law says that computing power doubles roughly every 2 years, so that means bandwidth should increase roughly sixfold in carrying capacity every 2 years.

So if the cost of data over a 7Mbps Next G modem is $50 today, the same deal should cost about $8 in 2 years’ time. From my experience, this is extremely unlikely.
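The projection above is simple arithmetic, and it can be sketched in a few lines of Python (the 6x-every-2-years factor and the $50 starting price are the figures from this post; the function name is just for illustration):

```python
# Rough projection of what the *same* data deal "should" cost if Gilder's
# law held: carrying capacity multiplies ~6x every 2 years, so the price
# of a fixed amount of capacity should fall by the same factor.

def projected_price(price_today, years, capacity_growth=6.0, period=2.0):
    """Price of the same deal after `years`, assuming capacity grows
    `capacity_growth`-fold every `period` years."""
    return price_today / (capacity_growth ** (years / period))

print(round(projected_price(50, 2), 2))   # ~8.33 – the ~$8 figure above
print(round(projected_price(50, 4), 2))   # ~1.39 two years further out
```

Of course, as the post argues, retail pricing has never tracked this curve.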

So what is happening is operators are getting increasingly cheaper pipes, and are maximizing the profit of those pipes over more years than they need to. If South Korea can provide 1 Gbps broadband in the home for the same price as Australia charges for a 2.5Mbps connection, you know something has to give eventually. So what is the great equalizer?

4G – Herein lies the problem

The next generation of mobile standards (4G) allows for much faster download speeds; in fact, when 4G taps out, the upper end will allow 1 Gbps download speeds. The problem is that when Australian, UK and US providers move to the next generation of technology, capping downloads with limits just won’t make any sense whatsoever. What would you cap it at? 100 Gb?

It gets a little ridiculous. I could download a DVD quality movie every day and still not exceed my download limit. But more importantly, once in place, the whole benefit in 4G is the fact that I become permanently unwired as a consumer.

Where we are going is a world in which we move from one device to another seamlessly. This is already happening with the iPad, the iPhone and your HD TV. I am looking for a data provider that sells me connectivity as a bundle, not by the Mb.

Conclusion

In a Wired article back in 1993, George Gilder predicted that bandwidth would eventually be free. I believe that bandwidth will eventually be so cheap that it is effectively free, but right now operators need to understand that charging for the pipe, not the data, is how they can enable both business and future revenue.

After Moore’s Law

In Technology Innovation on October 26, 2009 at 04:46

Excerpt from Chapter 9 – Deep Impact – Technology and Disruptive Innovation

Looking further into the future, there are really only two promising solutions to replace the silicon paradigm that has underpinned the flawless performance of Moore’s Law to date. Those two solutions are quantum computing and DNA, or biological, computing.

Quantum computing essentially utilizes the quantum state of the qubit (the quantum-level equivalent of the classical bit). Like a traditional bit, a qubit has an on and off state, but whereas a bit can ONLY be 1 or 0, a qubit can also exist in a superposition of both states. Thus, through the quantum mechanical phenomena of entanglement and superposition, a register of just three qubits can represent all 8 possible three-bit strings at once, and because of the nature of quantum mechanics a quantum computer can effectively explore almost any combination of results simultaneously.

This means a completely different type of programming would be required, but it results in massive computing power. Programs, calculations or simulations that would take weeks, months or even years to complete on today’s platforms could be executed in real-time almost instantly. Chips the size of a grain of rice would be more powerful than today’s supercomputers, and use almost no power at all.
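To see why the numbers get so big so quickly: each qubit added to a register doubles the number of classical bit-strings it can hold in superposition. A plain Python sketch of that counting argument (this is just classical enumeration, not a quantum simulation):

```python
from itertools import product

# All classical bit-strings an n-qubit register can superpose:
# each extra qubit doubles the state space (2**n states in total).
def basis_states(n):
    return [''.join(bits) for bits in product('01', repeat=n)]

print(len(basis_states(3)))       # 8 three-bit strings, as noted above
print(basis_states(3)[:4])        # ['000', '001', '010', '011']

# The exponential blow-up a classical machine has to pay for explicitly:
for n in (3, 10, 30):
    print(n, 2 ** n)              # 8, 1024, ~1 billion states
```

This is also why simulating even modest quantum machines on classical hardware becomes intractable: the state vector doubles with every qubit.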

Recently some progress has been made in this field with Resonant Tunnelling Diodes (RTDs) and with software modelling that simulates quantum processing. Needless to say, this is all very hi-tech and the applications are mind-blowing. Computers will be everywhere, some of them as small as dust or embedded within our blood cells to keep check on our vitals. Near-instantaneous transfer of information will exist around the planet. The applications are endless.

So when will this all happen? Estimates of when quantum hardware of this type will be in commercial production range from 10 to 30 years. But MRAM (Magnetoresistive Random Access Memory), RTDs and other quantum applications are already in the market or in development. So it seems just a matter of time.

The other promising replacement for silicon technology is DNA computing, which uses DNA, biochemistry and molecular biology. It was first demonstrated as a concept by Leonard Adleman of the University of Southern California (USC) in 1994. Adleman demonstrated a proof-of-concept use of DNA as a form of computation which solved the seven-point Hamiltonian path problem. He used oligonucleotides, which is just a really fancy name for short strands of DNA. If you’ve ever watched an episode of CSI where they take a piece of evidence with a suspect’s DNA and put it in a solution to identify who it belongs to, you’ve watched one typical use of oligonucleotides, as they are often used to amplify DNA in what is called a polymerase chain reaction. Ok, ok, enough of the technobabble… well, almost.

What does it all mean? Well, DNA computers would operate as molecular computers, or in other words, be very, very small. In terms of capability, a typical desktop computer can execute around 10^8 operations per second, whereas the supercomputers available today can execute around 10^14 operations per second. A single strand of DNA could execute around 10^20 operations per second; to put that in perspective, a DNA computer would be about a million times faster than current supercomputers, while being about a million times more efficient in energy terms. Impressive! Oh, and it could store 1 Terabyte of data in the space we currently take to store about 1 Kb.
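The speed-up factors are just ratios of the illustrative powers of ten quoted above, which a couple of lines of Python make concrete:

```python
# Illustrative operations-per-second figures from the text above.
desktop = 10 ** 8    # typical desktop computer
super_c = 10 ** 14   # present-day supercomputer
dna     = 10 ** 20   # single strand of DNA (theoretical)

print(dna // super_c)   # 1,000,000 – a million times faster than a supercomputer
print(dna // desktop)   # a trillion (10^12) times faster than a desktop
```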

So in theory, inside a cell inside your body, you could carry a DNA computer capable of more computational power than the world’s most powerful supercomputer. This might be useful combined with nano-technology to enhance our natural immune system response, or even more exotic solutions such as augmenting our natural abilities, improving longevity by correcting cellular reproduction error at the molecular level, etc. Pretty wild…

Moore’s Law – Why computers are increasingly disruptive to industry

In Technology Innovation on October 25, 2009 at 06:57

Excerpt from Chapter 9 – Deep Impact: Technology and Disruptive Innovation

You’ve undoubtedly heard of “Silicon Valley”, right? Do you know why it is called Silicon Valley? You might think it is because of all the dot-com and Web 2.0 companies that inhabit this region of California. But you’d be wrong. We have to go much further back, to the 1950s, to find the origin of the term. It must have something to do with computer chips, because microchips are made of silicon…

Well, in 1947 a gentleman by the name of William Shockley, along with John Bardeen and Walter Brattain, invented the transistor. For this, the three were awarded the Nobel Prize in Physics in 1956. Shockley’s attempts to commercialize the transistor are what led to the formation of a bunch of companies in California specializing in the manufacture of these components. During the 50s and 60s there was a great deal of speculation in the markets about ‘tronics’, or the ability to capitalize on these ‘new’ technologies and advances.

On April 19th, 1965, Gordon Moore, who would later co-found Intel Corporation, published an article in Electronics Magazine entitled “Cramming more components onto integrated circuits”. In that article he stated a law on computing power that has remained consistent for more than 40 years, a law that drives technology development today and for the near future.

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year … Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer – Gordon Moore’s prediction in 1965.


The term “Moore’s Law” was reportedly coined around 1970 by the Caltech professor and VLSI pioneer Carver Mead. Essentially, Moore predicted that computing power would double every two years. Since 1965 that law has held true, and it remains the backbone of classical computing platform development. What this all means is that since 1965 we have been able to predict both the reduction in cost and the improvement in computing capability of microchips, and those predictions have held true.

What does this mean in reality? Let’s put it in perspective. In 1965 the number of transistors that fit on an integrated circuit could be counted in tens. In 1971 Intel introduced the 4004 microprocessor with 2,300 transistors. In 1978, when Intel introduced the 8086 microprocessor, the IBM PC was effectively born (the first IBM PC used the 8088 chip) – this chip had 29,000 transistors. In 2006 Intel’s Itanium 2 processor carried 1,700,000,000 transistors. What does that mean? Transistors are now so small that more than a million of them could fit on the head of a pin. While all this was happening, the cost of these transistors was also falling exponentially, as per Moore’s prediction.
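You can sanity-check those chip numbers against a strict two-year doubling, starting from the 4004. A short sketch (the function is illustrative; real progress ran slightly ahead of a clean 2-year doubling in the later decades, so treat this as an order-of-magnitude check):

```python
# Project a transistor count forward assuming a doubling every 2 years.
def projected_transistors(start_count, start_year, year, doubling_period=2.0):
    return start_count * 2 ** ((year - start_year) / doubling_period)

# 1971: Intel 4004, 2,300 transistors (figures from the text above)
print(f"{projected_transistors(2300, 1971, 1978):,.0f}")  # ~26,000 vs the 8086's 29,000
print(f"{projected_transistors(2300, 1971, 2006):,.0f}")  # ~426 million vs Itanium 2's 1.7 billion
```

The 1978 projection lands within ~10% of the real 8086; by 2006 the simple model is within a factor of four of the Itanium 2, which is remarkable consistency over 35 years of exponential growth.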

In real terms this means that a mainframe computer of the 1970s that cost over $1 million has less computing power than your iPhone does today. It means that the USB memory stick you carry around in your pocket would have taken a room full of hard disk platters to match in the 70s. Have you ever watched the movie Apollo 13? Remember how they were trying to work out how to fire up the Apollo Guidance Computer without exceeding their remaining power allowance? Well that computer, which was at the height of computing technology in the 70s, had around 32k of memory and ran at a clock speed of 1.024 MHz. When the original IBM PC launched in 1981 it was already several times faster than the Apollo computer. The next generation of smartphones we will be using in the next 2-3 years will have 1 GHz processor chips. That is roughly a thousand times the clock speed of the Apollo Guidance Computer…

These numbers are so mind-blowing that if we apply them to the world outside computing, things get a little bizarre. For example, if a house had shrunk at the same pace transistors have, you would not be able to see it without a microscope. In 1978 a commercial flight between New York and Paris cost around US$900 and took 7 hours to complete. If Moore’s Law applied to aviation in the same way as computing, then that flight today would cost a cent or two and would take less than a second.
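The flight analogy works out like this: count the two-year doublings from 1978 to the time of writing and divide (the 2009 end-year and the doubling period are my assumptions for the sketch):

```python
# Apply Moore's-law-style doubling to the 1978 New York–Paris flight.
doublings = (2009 - 1978) / 2          # ~15.5 doublings since 1978
factor = 2 ** doublings                # ~46,000x improvement

print(round(900 / factor, 4))          # fare in dollars: about 2 cents
print(round(7 * 3600 / factor, 3))     # flight time: ~0.5 seconds
```

Depending on exactly how many doublings you count, you land somewhere between one and two cents and a fraction of a second, which is the point of the analogy.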

Now you know why your technology budget is the way it is…