I've only skimmed the article, but from what I see it is all about how bad CISC CPUs are. Perhaps you got the terms mixed up? Remember, Intel = CISC!
Here's what the article says.
RISC vs. CISC, get over it
This seems to have touched a nerve with a number of people, since I called the whole CISC vs. RISC argument bullshit. It is. RISC is just a simpler way of designing a processor, but you pay a price in other ways. By placing the constraint that the instruction set of a processor be fixed width, i.e. all instructions be 2 bytes, or 4 bytes, or 16 bytes in size (as with the Itanium), it allows the engineers to design a simpler decoder and to decode multiple instructions per clock cycle. But it also means that the typical RISC instruction wastes bits, because even the simplest operation now requires, in the case of the PowerPC, 4 bytes. This in turn causes the code on the RISC processor to be larger than code on a CISC processor. Larger code means that for the same size code cache, the RISC processor will achieve a lower hit rate and pay the penalty of more memory accesses. Or alternately, the RISC processor requires a larger code cache, which means more transistors, and this merely shifts transistors from one area of the chip to another.
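To make the code-density point concrete, here's a minimal C sketch comparing encoded sizes for the same short sequence under a fixed 4-byte-per-instruction encoding versus typical x86 encodings. The x86 byte counts are the common single-register forms from the instruction set manual; treat the comparison as illustrative, not a benchmark.

    #include <stdio.h>

    /* Code-density sketch: the same four operations (register move,
     * register add, push, return) encoded as typical x86 forms vs. a
     * fixed-width 4-byte RISC encoding such as PowerPC's. */
    int main(void)
    {
        const char *ops[] = { "reg move", "reg add", "push", "return" };
        const int x86[]   = { 2, 2, 1, 1 };  /* e.g. mov/add r32,r32; push r32; ret */
        const int ppc[]   = { 4, 4, 4, 4 };  /* every PowerPC instruction is 4 bytes */
        int i, tx = 0, tp = 0;

        for (i = 0; i < 4; i++) {
            printf("%-8s  x86: %d byte(s)  PowerPC: %d bytes\n",
                   ops[i], x86[i], ppc[i]);
            tx += x86[i];
            tp += ppc[i];
        }
        printf("total: x86 %d bytes vs. PowerPC %d bytes\n", tx, tp);
        return 0;
    }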
The people who declared x86 and CISC processors dead 10 years ago were dead wrong. CISC processors merely used the same kind of tricks as RISC processors - larger caches, multiple decoders, out-of-order execution, more registers (via register renaming), etc. In some cases, such as during a task switch when the entire register set of the processor needs to be written to memory and then a different register set read in, the larger number of "visible" registers causes more memory traffic. This in turn puts a load on the data cache, so as with the code cache, you either make it larger and use more transistors, or you pay a slight penalty.
My point is, these idiots from 10 years ago were wrong that RISC is somehow clearly superior to CISC and that CISC would die off. It's merely shifting transistors from one part of the chip to another. On the PowerPC, all instructions are 32 bits (4 bytes) long. Even a simple register move, an addition of 2 registers, a function return, pushing a value to the stack - all of these operations require 4 bytes each. Saving the 32 integer registers alone requires 128 bytes of code, 4 bytes per instruction times 32 instructions. Another 128 bytes to reload them. Ditto for the floating point registers. So who cares that it simplifies the decoder and removes a few transistors there? It causes more memory traffic and requires more transistors in the cache.
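The arithmetic above is easy to check. Here's a small C sketch of it, assuming one 4-byte store instruction per register (it deliberately ignores the multi-register store/load instructions some chips offer):

    #include <stdio.h>

    /* Worked arithmetic for the register save/restore claim: with
     * fixed 4-byte instructions and one store per register, saving
     * the integer register file costs 32 * 4 = 128 bytes of code,
     * and reloading costs the same again.  Ditto for FP. */
    int main(void)
    {
        const int insn_size = 4;   /* bytes per PowerPC instruction */
        const int int_regs  = 32;  /* architected integer registers */
        const int fp_regs   = 32;  /* architected FP registers */

        int save_int = int_regs * insn_size;
        printf("save integer regs:  %d bytes of code\n", save_int);         /* 128 */
        printf("save + reload int:  %d bytes\n", 2 * save_int);             /* 256 */
        printf("save + reload FP:   %d bytes\n", 2 * fp_regs * insn_size);  /* 256 */
        return 0;
    }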
And the decoding problem is not that big of a problem, for two reasons. I'll use the example of the 68040, the PowerPC, and the x86. A PowerPC chip can decode multiple instructions at once since it knows that each instruction is 4 bytes long. A 68040 processor has instructions that are a minimum of 2 bytes long and can go up to 16 bytes in size (I think; I can't think of an example off the top of my head that's longer than 16). Let's say 16. The bits required to uniquely decode the instruction are usually found in the first 2 bytes of the instruction, 4 bytes for floating point. That's all the decoder needs to figure out what the instruction is. It needs to decode the additional bytes only in cases of complex addressing modes. This is one area where Motorola screwed up (and it likely decided the fate of the 68K): they truly made a complex instruction set that requires decoding of almost every byte.
In the case of x86, Intel either lucked out or thought ahead and made sure that all the necessary bits to decode the instruction are as close to the beginning of the instruction as possible. In fact, you can usually decode an x86 instruction based on at most the first 3 bytes. The remaining bytes are constant numbers and addresses (which are also constant). You don't need to decode, say, the full 15 bytes of an instruction, when the last 10 bytes are data that gets passed on down into the core. So as one reader pointed out in email, Intel stuck with the old 8-bit processor techniques (such as the 6502) where you place all your instruction bytes first, then your data bytes. In the case of the 6502, only the first byte needed decoding. Any additional bytes in the instruction were 8-bit or 16-bit numeric constants.
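As an illustration of that "opcode bytes first, data bytes after" scheme, here's a toy C decoder in the 6502 style: the first byte alone identifies the instruction and therefore its total length, and the trailing bytes are constants that need no decoding at all. The three opcodes are real 6502 encodings; the decoder itself is just a hypothetical sketch.

    #include <stdio.h>

    /* Toy 6502-style length decoder: one byte of opcode is enough to
     * know the instruction and its total length; operand bytes are
     * raw constants handed down to the core undecoded. */
    static int insn_length(unsigned char opcode)
    {
        switch (opcode) {
        case 0x60: return 1;   /* RTS: return, no operand bytes  */
        case 0xA9: return 2;   /* LDA #imm: one 8-bit constant   */
        case 0xAD: return 3;   /* LDA abs: one 16-bit address    */
        default:   return -1;  /* not handled in this sketch     */
        }
    }

    int main(void)
    {
        /* LDA #$2A; LDA $2000; RTS */
        unsigned char code[] = { 0xA9, 0x2A, 0xAD, 0x00, 0x20, 0x60 };
        int pc = 0;

        while (pc < (int)sizeof code) {
            int len = insn_length(code[pc]);
            if (len < 0) break;
            printf("opcode %02X decodes from byte 1 alone, length %d\n",
                   code[pc], len);
            pc += len;  /* skip operand bytes: no decoding needed */
        }
        return 0;
    }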
So decoding x86 is quite trivial, almost as easy as decoding RISC instructions. AMD seems to have figured out how to do it. Intel almost had it figured out in the P6 family (with only the 4-1-1 rule to hold them back), and then for the Pentium 4 they decided to cut features and just gave up on the whole decoding thing. That's Mistake #3 on my list of course, but this in no way demonstrates how superior fixed instruction sizes are over variable-sized instructions.
Over the years, CISC and RISC have kept pace with each other. Sure, one technology may leap ahead a bit, then the other catches up a few months later. Neither technology has taken a huge lead over the other, since the decision whether to use fixed or variable-sized instructions and whether to have 8, 16, 32, or 64 registers in the chip are just two factors in the overall design of the processor. Much of the rest of the design between RISC and CISC chips is very similar. And over time, ideas get borrowed both ways.
True. Haven't read the article, but from what I've learnt, at fast clock speeds RISC is far better than CISC.
For those who aren't sure about the difference: RISC processors complete an instruction very fast (maybe in one clock cycle) but only have very simple instructions. CISC processors can take a while to do an instruction - up to 10 clock cycles or so - but have lots of instructions which can do slightly more complex things.
So if you have a 500MHz CPU and it's CISC, it might only be executing 50M instructions per second, if every instruction took 10 cycles. Of course, other things, like waiting for disk access or other devices, slow it down further.
In contrast, a RISC processor could execute 50M instructions per second with a clock rate of only 100MHz, or maybe 200MHz - because each instruction executes so quickly.
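To put numbers on it: instructions per second is just clock rate divided by average cycles per instruction (CPI). A quick C sketch using the rough cycle counts above (estimates, not measurements of any real chip):

    #include <stdio.h>

    /* clock rate needed for a target instruction throughput:
     * clock = target instructions/sec * average CPI.
     * CPI figures below are the rough estimates from the text. */
    int main(void)
    {
        double target = 50e6;  /* 50M instructions per second */

        printf("CISC, CPI 10: needs %.0f MHz\n", target * 10 / 1e6);  /* 500 MHz */
        printf("RISC, CPI 2:  needs %.0f MHz\n", target * 2  / 1e6);  /* 100 MHz */
        printf("RISC, CPI 1:  needs %.0f MHz\n", target * 1  / 1e6);  /*  50 MHz */
        return 0;
    }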
You might say "So what? Surely it takes more instructions to do everything with RISC because each instruction only does one simple thing". Yes - in theory. In practice, nobody codes in assembler, so no program ever uses the full potential of the hundreds of different commands on a CISC CPU. Compilers aren't always 100% correct about the absolute most efficient CPU instructions to generate.
Whereas with a RISC, because there aren't so many instructions, it's quite easy for compilers to generate code which does use the most efficient instructions - there aren't many possibilities to choose from!
So on modern PCs, RISC generally outperforms CISC unless you handcode everything in assembler, and no one does that. It takes too long.
Obviously both of you have to read the article more thoroughly. It's quite long and a good off-line read.
No, MS haven't declared the price of the XBox. They haven't declared release dates, either (and before you say the end of this year or something, Gates was quoted as saying he wouldn't release it until he got 3x the graphics performance of his rivals). All the estimates say that at $300 MS would be selling at a big loss. While they probably will sell at a loss, there is a limit. So more than $300 looks likely, and European prices always go through the roof compared to the US.
Even at US$300, Sony is selling the PS2 at a loss too (covered by licensing fees). So I think Microsoft will do the same thing.
Add to that the fact that specs haven't even been finalised yet - so games can't be relied on to come out at a particularly fast rate - and the XBox does not look like a good bet. I walked into a high street shop a few weeks ago, and they had PS2s for sale over the counter, and a pretty respectable range of titles for it. Now, MS aren't releasing the XBox in Europe to start off with, so even if they DO make a release date by the end of the year (I suppose it *could* happen) it'll never make it to the UK before mid-late 2002 - so it'll be 2003 before you can buy it over the counter. By Moore's Law, in 2003 the average PC will be running well over 1GHz, and a top-end model will be at least 2GHz. Hmmm. Not much competition, I feel. [Oh yes: specs. Microsoft is well known for sticking to "predicted" specs, isn't it? Remember the required specs for Win95? A 386 with 4MB RAM. Yes, MS, I think that'll work. All the graphics we've seen so far are based on "predictions" of what the XBox can produce. Nobody actually knows what it'll do in reality.]
I think the X-Box spec is pretty much finalised already, given that nVidia is already producing the GPU and the MCPx chips for the X-Box. It's just that the specs aren't publicised as much as I thought they would be. In general, the information available today should be enough to gauge how well the X-Box will perform.