Two's Company

Deep down in the murky depths of Intel, something is stirring. And that something is dual core processor chips. Intel is way behind most of its rivals in producing dual core chips, but presumably expects to be able to catch up by using its marketing muscle. I expect that means that we will be hearing a lot about dual core processors and computers just as soon as Intel has ramped up production to commercial levels.

So what is a dual core chip, how will it affect the ordinary computer user, and how much of the razz is just marketing hype?

Dual core chips are, in effect, two processors on the same chip. As anyone who has looked at the 'Processes' tab of the Windows Task Manager will know, modern desktop computers run dozens of tasks simultaneously (I count 60+ running on my laptop as I write this piece). Server machines, of course, run far more tasks than that.

Single-core, single-processor machines handle this by splitting their time up into extremely small chunks (known as time slices) and running each program for a couple of slices before moving on to the next one. Given the speed at which processors work these days, and the relative sluggishness of the human nervous system, it looks to an outside observer as though all the programs are running simultaneously.
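The round-robin idea can be sketched in a few lines of Python. This is a toy model - real schedulers juggle priorities, pre-emption and much more - but it shows how tasks get interleaved, slice by slice:

```python
from collections import deque

def round_robin(tasks, slice_ms=10):
    """Toy round-robin scheduler: each task is (name, remaining_ms)."""
    queue = deque(tasks)
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        run = min(slice_ms, remaining)       # run for at most one time slice
        timeline.append((name, run))
        if remaining - run > 0:              # not finished: back of the queue
            queue.append((name, remaining - run))
    return timeline

schedule = round_robin([("editor", 25), ("browser", 15)])
# The two tasks come out interleaved: editor, browser, editor, browser, editor
```

Run the slices fast enough and, to a human, "interleaved" is indistinguishable from "simultaneous".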

Until, that is, there are too many programs to run...

If there are too many programs then two things happen. First, the programs get time slices too infrequently and start to run slowly. Second, since switching between programs itself takes time, the cost of switching programs starts to take an ever larger proportion of the processor's time, making things run even slower. Traditionally, the way to handle this has been to buy a faster processor, but there are limits to how far you can go down that path.

Enter dual- and multi-processor machines. In theory it's very simple. If you have (say) two processors, then the operating system can allocate half the work to each processor. Very neat - but unfortunately you don't actually get twice the power. There are many reasons for this, the key one being that the processors share the same memory sub-system. Even though they are probably not using the same area of memory, both processors use the same address and data busses to access it. Unfortunately, only one processor at a time can use the busses, so the processors have to queue up when they want to fetch more data or program instructions from main memory.

The net result is that you get less and less additional processing power for each processor added. For instance, you might get something like this: one processor = 100%, two processors = 190%, three processors = 275%, four processors = 350%, and so on. As you can see, you will eventually get to a stage where adding another processor gets you no more power.
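One simple way of modelling this is Amdahl's law: treat the time spent queuing for the shared busses as a 'serial' fraction that no amount of extra processors can help with. A serial fraction of just 5% gives figures close to (though not exactly) the ones quoted above:

```python
def speedup(n, serial=0.05):
    """Amdahl's law: with a fixed serial fraction (here, bus contention),
    n processors never deliver n times the power."""
    return 1.0 / (serial + (1.0 - serial) / n)

for n in (1, 2, 3, 4):
    print(n, round(speedup(n) * 100))   # 100, 190, 273, 348 - diminishing returns
```

And the ceiling is built in: with a 5% serial fraction, even an infinite number of processors can never get you past 1/0.05 = 20 times the power of one.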

So far dual- and multi-processor machines have been the province of server class machines and high-end workstations. This is mainly because they are expensive - the processor chip and the Windows operating system are the two most expensive components in the average desktop machine. Add another processor and you significantly increase the cost of the machine.

There is another process at work here too. Over the last 20 years the number of transistors that manufacturers can cram onto a chip has doubled about every 18 months - this is known as 'Moore's Law' - and that is expected to continue, at least for the next ten years. This has resulted in bigger and faster chips with the chip companies, like Intel and AMD, falling over one another to announce more and better processors.

The problem is, processor speed isn't turning into faster performance. I've just got a new laptop. The processor on the last one was 1.2 GHz, the new one is 2.1 GHz. My new laptop certainly doesn't run at nearly twice the speed of the old one. (Not that I'm complaining, though, it's a gorgeous machine!) Think about it. When was the last time you upgraded a computer and got a really massive increase in speed? I bet it was a long time ago.

Partly the problem is that the other parts of the computer haven't experienced a similar increase in speed, so the processor spends a lot of time waiting around. Partly it's because programmers are writing larger and more complex programs that absorb more of the available processing power. (Programmers haven't really had to consider performance for most desktop computer software since Windows 3.1 came in - if customers complained about speed they were advised to upgrade their computer.)

So what does a competitive chip manufacturer do when faced with twice the chip real estate that there was 18 months ago, but nothing like a corresponding increase in performance? Twice as much space - why not put two processors on the same chip! This is what's known as a dual-core processor. It won't necessarily allow you to run any given program faster, but you will be able to play Doom at full speed on one of the cores while the other one crunches numbers on your Excel spreadsheet. :)

It -could- also speed up individual programs if they are multi-threaded. Threading is a technique already in use by some programs; it allows different bits of the same program to run simultaneously. If you use a modern version of Microsoft Word then you will have used its threading capabilities whenever you do background printing. One thread is formatting the text and sending it to the printer queue while another is accepting typed input from you.
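The pattern is easy to sketch in Python. Here a background 'printer' thread drains a queue of pages while the main thread carries on regardless - a toy stand-in for Word's background printing, not how Word actually does it:

```python
import threading
import queue

print_queue = queue.Queue()
printed = []

def printer():
    """Background thread: 'prints' pages from the queue until it
    receives the None sentinel telling it to shut down."""
    while True:
        page = print_queue.get()
        if page is None:
            break
        printed.append(page)

worker = threading.Thread(target=printer)
worker.start()

# The main thread queues pages and carries straight on - it never
# waits for the 'printing' to finish.
for page in ("page 1", "page 2", "page 3"):
    print_queue.put(page)

print_queue.put(None)   # tell the worker to stop
worker.join()           # wait for it to drain the queue
```

On a single core the two threads merely take turns; on a dual-core chip the operating system can genuinely run them at the same time.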

The problem with threading is that you have to make sure the different threads are not simultaneously manipulating the same bit of memory. If two threads try to write to the same bit of memory at the same time, the results can be extremely unpleasant. There are ways to prevent that happening, of course, but they increase both the size and the complexity of the program, and make it much more difficult to debug.
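The standard defence is a lock: a thread must hold the lock before touching the shared data, so the read-modify-write can never be interrupted halfway through. A minimal sketch:

```python
import threading

counter = 0
lock = threading.Lock()

def bump(times):
    """Increment the shared counter. Without the lock, two threads can
    read the same old value and one of the updates is silently lost."""
    global counter
    for _ in range(times):
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# counter is 200_000 every time. Remove the lock and, on a bad day,
# it can come up short - and only on a bad day, which is the problem.
```

That last comment is the nub of it: an unlocked version usually works, which is exactly why these bugs are so hard to find.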

The advantage of multi-threading is that if you have more than one processor (either multiple processors or a multi-core processor) the operating system can run different threads on different processors or cores, effectively speeding the program up.

Unfortunately, writing multi-threaded programs properly is an order of magnitude more difficult than writing single-threaded programs, and if you make a mistake it's much more difficult to find it, because it often only shows up with unusual combinations of things happening. Most of the multi-threaded programs around at the moment have only ever run on one, single-core, processor. People whose programming ability I have a great deal of respect for tell me that there is a real possibility that problems will start to show up when the threads are actually running on different processors.

And there's another problem clouding the horizon. This time it's the commercial software vendors. The shrink-wrapped software you pay for is licensed on the assumption that it's going to run on a single-processor system. What happens when you run it on a dual-core system?

Quite a lot of server software is licensed on a per-processor basis. I have no doubt that the commercial software companies would like to extend that to desktop machines, charging extra for running on multi-core chips. That debate is already underway, and the big companies are divided on which route to go down - charging by the processor chip, or charging by the number of processor cores.

So - you might get something out of dual core processors. Problem is, that something might just be more buggy programs. And you could well have to pay more for the privilege! Interesting times, as the Chinese say, interesting times.

http://www.theregister.co.uk/2005/07/11/intel_dualcore_server/


Alan Lenton
17 July, 2005


Update

Last week I discussed dual core processors. It seems it was a more timely piece than I realised. This week Intel announced that there will only be one more single core upgrade of its Itanium 2 processor family. After that it will be multi-core versions only.

I mentioned in the analysis piece that software vendors are having difficulty sorting out licensing prices. This week two software companies announced how they will deal with multi-core processors. VMware are being sensible and are charging as though the chips were single core - i.e. they are charging per processor chip. Oracle, being Oracle, have announced a complex formula. I won't bore you with the maths, but to give you some idea, if you had a chip with 11 cores, you would pay the same price as if you had 9 single core processors! Oracle claim this will reduce costs for their customers, but I don't think they have factored in the cost of hiring a maths Ph.D. to do the sums!
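For those who do want the sums: a formula consistent with the 11-cores-equals-9-processors figure is a per-core factor of 0.75, rounded up to a whole number of licences. (That's my reconstruction from the figure above, not necessarily Oracle's exact published rule.)

```python
import math

def licences(cores, per_core_factor=0.75):
    """Reconstructed per-core licensing: multiply the core count by
    the factor, then round up to a whole number of licences."""
    return math.ceil(cores * per_core_factor)

licences(11)   # 11 * 0.75 = 8.25, which rounds up to 9
licences(2)    # a dual-core chip still costs 2 licences under this rule
```

Note that under this scheme a humble dual-core chip gets no discount at all - 2 × 0.75 rounds straight back up to 2.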

Alan Lenton
24 July 2005

