Author Topic: PSX Emulator (FAO Jari, Dag & Friends)  (Read 109112 times)

Skillster/RedSarg99

  • *
  • Posts: 2286
  • Loving every Final Fantasy
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #125 on: 2001-03-05 16:52:00 »
hmm was it 1200MHz?
nope, here it was: http://www.xbitlabs.com/cgi-bin/archives.cgi?category=1&view=2-01
(page takes a while to load) Scroll down to the Taiwan memory sales table, see the 128MB PC2100? So that's 1050MHz DDR RAM for you?
I reckon that by the time the Xbox is mainstream the next GeForce will be ready to ship, don't you?

Skillster/RedSarg99

  • *
  • Posts: 2286
  • Loving every Final Fantasy
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #126 on: 2001-03-05 16:56:00 »
oh, and I am referring to AMD and Intel roadmaps, thank you.
AMD are set to release their new Athlon core in about Q2 this year, followed by a newer core in Q3 that makes it a 4th-generation Athlon? At about 1.6 to 1.8GHz?
And the P4 is set to come out with its new core very soon. A new P3 core will debut too, along with a final P4 revision in Q3 leading up to 2GHz late in the year.
true?

cHiBiMaRuKo

  • *
  • Posts: 179
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #127 on: 2001-03-05 22:39:00 »
Remember that X-Box DDR-RAM is 128-bit, not 64-bit as in your article. It's so much faster than DDR-RAM for the PC. The next GeForce4 ready by the time the X-Box launches? With no competition, do you think nVidia will do that? That move would hurt them. Probably a GeForce3 MX is the more viable product that will come from nVidia at that time.

Both AMD and Intel will have 2GHz CPUs by the end of this year, but seriously I don't think those 2GHz machines can match the X-Box in terms of performance when all you have is the NV20 as the best VGA card you can get. Another thing you have to remember is that a 2GHz CPU+NV20 combo will be VERY EXPENSIVE, so anyone is better off buying an X-Box anyway.


dagsverre

  • *
  • Posts: 323
    • View Profile
    • http://ffsf.cjb.net
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #128 on: 2001-03-06 00:19:00 »
You forget:

a) 2GHz + GeForce 2 is still cheaper than 2GHz + GeForce 2 + X-Box. You *need* the PC anyway, so while graphics cards and such can be upgraded for a not-so-high cost, the X-Box comes *in addition* to your PC equipment. The X-Box is great for people who don't need a PC, but we don't belong to those.

b) Multiple processors. 2GHz processors will be expensive, but why buy that when you can get two 1GHz processors on the same board for a much smaller cost? More and more games support it and there are no drawbacks in applications.

In some years we won't be running 10GHz processors, we will be emulating Windows on many small RISC processors. The Pentium platform (CISC) is a step in the wrong direction; RISC processors (PowerPC ones for instance) rule...


Skillster/RedSarg99

  • *
  • Posts: 2286
  • Loving every Final Fantasy
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #129 on: 2001-03-06 02:21:00 »
chill man
what I mean is that nVidia WILL release the GeForce4 by the middle of next year, and by then the Xbox will be mainstream.
why? coz ATI are chasing them hard.
BUT it is still cheaper to own an Xbox than a PC.
but what the hell can you do on an Xbox that you can't do on a PC?
surf, email, games, etc.


cHiBiMaRuKo

  • *
  • Posts: 179
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #130 on: 2001-03-06 02:29:00 »
   
Quote
a) 2GHz + GeForce 2 is still cheaper than 2GHz + GeForce 2 + X-Box. You *need* the PC anyway, so while graphics cards and such can be upgraded for a not-so-high cost, the X-Box comes *in addition* to your PC equipment. The X-Box is great for people who don't need a PC, but we don't belong to those.

Why do you need a 2GHz PC when you already have an X-Box? My computer is perfectly fine for the moment, and I'll only upgrade when I can't play games with it anymore. If it's only for Internet surfing or typing, even my computer is overkill. Chances are, if I were to buy an X-Box, I wouldn't buy one of those overpriced 2GHz CPUs, at least not before their price dropped. If I must buy a 2GHz computer just to use an X-Box emulator, I'd rather buy the console itself, which is surely cheaper.

 

Quote
b) Multiple processors. 2GHz processors will be expensive, but why buy that when you can get two 1GHz processors on the same board for a much smaller cost? More and more games support it and there are no drawbacks in applications.

Now which 1GHz processors do you want to buy to make an SMP system? AMD? Intel? Duron/TBird CPUs technically support SMP, but where's the chipset? An Intel 1GHz CPU? Do you think it supports SMP? Depending on the stepping and batch, some support SMP and others don't, or none of them support SMP at all. You'll have to check the Intel database to know if your CPU supports SMP or not.

How many games support SMP? Too few; I can count all the SMP-enabled games on my hands. And a 2GHz system will be faster than a system with two 1GHz CPUs in SMP mode. Point of diminishing returns, anyone?
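To put a rough number on that diminishing-returns point: a 2GHz chip is faster on *any* code, while a second CPU only helps the portion of a game that can actually run in parallel. A minimal sketch using Amdahl's law (the parallel fractions are made-up illustration values, not measurements of any real game):

```python
def amdahl_speedup(parallel_fraction, n_cpus):
    # Ideal overall speedup when only `parallel_fraction` of the
    # work can be split across `n_cpus` (Amdahl's law).
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cpus)

# Two 1GHz CPUs only match a single 2GHz CPU if the game is 100% parallel:
print(amdahl_speedup(1.0, 2))  # 2.0
# A game that is half serial barely gains from the second CPU:
print(amdahl_speedup(0.5, 2))  # ~1.33
```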


   

Quote
In some years we won't be running 10GHz processors, we will be emulating Windows on many small RISC processors. The Pentium platform (CISC) is a step in the wrong direction; RISC processors (PowerPC ones for instance) rule...

Duh! Who says RISC CPUs are superior to CISC CPUs? That was proven wrong a long time ago. This is a good read for you: http://www.emulators.com/pentium4.htm

Quote
chill man
what I mean is that nVidia WILL release the GeForce4 by the middle of next year, and by then the Xbox will be mainstream.
why? coz ATI are chasing them hard.
BUT it is still cheaper to own an Xbox than a PC.
but what the hell can you do on an Xbox that you can't do on a PC?
surf, email, games, etc.

The Dreamcast with Windows CE can already be used to surf the Internet, write e-mail, etc. So it's not too ambitious to say the X-Box could do it too, though I don't remember Microsoft saying that.

[This message has been edited by cHiBiMaRuKo (edited March 05, 2001).]


PSX Emulator (FAO Jari, Dag & Friends)
« Reply #131 on: 2001-03-06 03:51:00 »
cHiBiMaRuKo: "Define pretty often. In the past, a new release of DirectX came out roughly once a year. But for now, there's still no news about DirectX 9. I've been beta-testing DirectX for Microsoft since the DirectX 6 days, and up to today there's still no inside news on DirectX 9. I think it won't come out this year at least, because there's no news of it as of yet (as opposed to last time)."

Point taken. What about DirectX 8.1?

"If the X-Box truly comes out this US fall, the PC highly likely won't have time to catch up. A new graphics card every 6 months? How long did the NV20 take to come out? More than one full year since the NV15 was released. The Radeon2 still won't match NV25 speed even if ATI releases it in time (which should be now). Note the word if. The graphics chip market seems to have been slowing down. No competition. And I don't see NV20 prices coming down before the X-Box launch, at least."

Release dates:

GeForce 256: September 1999
GeForce 2: Spring 2000
GeForce 2 Ultra: Late September 2000
NV20/GeForce 3: February 27, 2001

 

Quote
NVIDIA Introduces GeForce3 for the PC
Industry's Most Advanced Processor Ever is the Core of Both DirectX 8.0 and Xbox Game Console
INTEL DEVELOPER FORUM - SAN JOSE, CA - February 27, 2001 - Just 5 days after the original announcement for the Macintosh platform, NVIDIA Corporation (NASDAQ: NVDA) announced today the GeForce3™ GPU (graphics processing unit) for the PC. The GeForce3 GPU is based on a radical new graphics architecture that will offer breakthrough graphics to multiple markets, including the desktop PC and the Macintosh® platforms. This groundbreaking GPU includes the same core technology as NVIDIA's highly anticipated Xbox GPU (XGPU) for Microsoft's Xbox™ game console.
>>Read on: http://www.nvidia.com/Pages.nsf/pages/pr_022701

Like I said, every 6 months. The earliest the X-box could possibly be out is 6-7 months from now.

"I do want to see how those emulator programmers try to emulate the way X-Box handles data streaming, Dolby Digital AC-3 processing and large texture handling routine in PC; that 3 are just for examples."

I'm no programmer. Explain briefly what the problems are and I'll try to answer.

"Remember that X-Box DDR-RAM is 128-bit, not 64-bit as in your article. It's so much faster than DDR-RAM for the PC. The next GeForce4 ready by the time the X-Box launches? With no competition, do you think nVidia will do that? That move would hurt them. Probably a GeForce3 MX is the more viable product that will come from nVidia at that time."

Where'd you get that from? My understanding is that:
1. The X-box only has 64MB RAM
2. It's Unified RAM, not DDR

"Both AMD and Intel will have 2GHz CPUs by the end of this year, but seriously I don't think those 2GHz machines can match the X-Box in terms of performance when all you have is the NV20 as the best VGA card you can get. Another thing you have to remember is that a 2GHz CPU+NV20 combo will be VERY EXPENSIVE, so anyone is better off buying an X-Box anyway."

Even if the market does slow a bit, nVidia has 10 months till the end of the year. More than enough time to develop and release a post NV20 card. If they don't, someone else will. Otherwise, PC Gamers in the US (very hardcore bunch) will kill.
I don't see how a P3 733MHz with an NV20 based card will be able to match a 1-2 GHz processor with a second or third generation NV20 card in terms of FPS. I'll be happy to compare framerates with you when the time comes, provided I have one of those PCs.

"Why do you need a 2Ghz PC when you already have a X-Box? My computer is perfectly fine for the moment, and I'll only upgrade when I can't play games with this anymore. If only for Internet surfing or typing, even my computer is an overkill. Chances are, if I were to buy an X-Box, I won't buy those overpriced 2Ghz, at least before their price dropped. If I must buy a 2Ghz computer just to use a X-Box emulator, I'll rather buy the console itself, which is surely cheaper."

Lots of reasons:
1. A P3 733MHz, no matter *how* optimized, will never be as fast as a 1466+ MHz processor.
2. Console and PC games are in most cases very different. I'd prefer it stayed that way. If I ever want to play console, I go to a friend's or get the emulator.

I still don't think a 2GHz processor will be necessary for X-Box emulation. Probably 1GHz with a GeForce 3 will be enough.

[This message has been edited by Srethron Askvelhtnod (edited March 05, 2001).]


cHiBiMaRuKo

  • *
  • Posts: 179
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #132 on: 2001-03-06 08:06:00 »
 
Quote
Point taken. What about DirectX 8.1?

It wasn't a major release. Even the folks on the Microsoft beta newsgroups were caught off-guard by its release.

 

Quote
Release dates:

GeForce 256: September 1999
GeForce 2: Spring 2000
GeForce 2 Ultra: Late September 2000
NV20/GeForce 3: February 27, 2001

The GeForce2 (NV15) was released in March 2000 and its successor, the NV20, came out this February. Clearly they've missed their 6-month release cycle. I don't consider the GF2 Ultra a new card because it uses the same chip as the GF2 GTS.

 

Quote
I do want to see how those emulator programmers try to emulate the way the X-Box handles data streaming, Dolby Digital AC-3 processing and large-texture handling routines on a PC; those 3 are just examples.

I'm no programmer. Explain briefly what the problems are and I'll try to answer.

How do the programmers of any X-Box emulator want to simulate the way the X-Box transfers high-res textures on high-speed data buses (we're talking gigabytes of data per second) over an AGP 4x port which doesn't even reach 1 gigabyte per second of bandwidth?

How will the programmers try to emulate Dolby Digital AC-3 processing (which could take a lot of CPU time) on today's soundcards, and also dodge the possible legal issues from Dolby Laboratories?

Those 2 are only examples. More issues, like the DVD data transfer mechanism etc., must also be taken into account.
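The bus gap can be put in numbers with the usual peak-bandwidth formula (clock × transfers per clock × bus width). These are theoretical paper figures only, with the AGP base clock rounded to 66MHz; real sustained throughput is lower:

```python
def peak_gb_per_s(clock_mhz, transfers_per_clock, bus_width_bits):
    # Theoretical peak bandwidth in decimal GB/s.
    return clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8) / 1e9

agp_4x   = peak_gb_per_s(66, 4, 32)     # ~1.06 GB/s on paper, less in practice
xbox_mem = peak_gb_per_s(200, 2, 128)   # 6.4 GB/s, the figure quoted for X-Box DDR
print(agp_4x, xbox_mem)
```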

 

Quote
Where'd you get that from? My understanding is that:
1. The X-box only has 64MB RAM
2. It's Unified RAM, not DDR

Which RAM do you think the X-Box will use? The Dreamcast uses SDRAM, the PS2 uses RDRAM. As far as I know, Micron will supply the DDR for Microsoft. The "unified RAM" term means that all 64MB of RAM are used by all parts of the console, i.e. textures, sound buffer, game data, etc. But the X-Box itself uses 128-bit DDR-RAM of the kind that was commonly used only in video cards.

 

Quote
Even if the market does slow a bit, nVidia has 10 months till the end of the year. More than enough time to develop and release a post NV20 card. If they don't, someone else will. Otherwise, PC Gamers in the US (very hardcore bunch) will kill.
I don't see how a P3 733MHz with an NV20 based card will be able to match a 1-2 GHz processor with a second or third generation NV20 card in terms of FPS. I'll be happy to compare framerates with you when the time comes, provided I have one of those PCs.

I believe there will be no new nVidia video chip, judging by their competitors' current condition. Unless ATI or Matrox or others like NEC can come out with worthy competition to the NV15, there's no reason for nVidia to continue their policy of one product release every 6 months, as that policy is only effective when there's competition. Furthermore, nVidia is already making moves to enter the soundcard market, making them busier.

The hardcore bunch make up only a very tiny market for nVidia products, so it's highly unlikely they will make nVidia suffer. It's those OEM deals from Dell, Compaq and others that nVidia is interested in, and those companies don't give a damn whether nVidia releases a new chipset every 6 months or every year, as they themselves are resellers. I'm confident that if ATI and Matrox can field worthy competitors to the NV20, nVidia will resume the 6-month policy again. That's why competition is good.


 

Quote
Lots of reasons:
1. A P3 733MHz, no matter *how* optimized, will never be as fast as a 1466+ MHz processor.
2. Console and PC games are in most cases very different. I'd prefer it stayed that way. If I ever want to play console, I go to a friend's or get the emulator.

I still don't think a 2GHz processor will be necessary for X-Box emulation. Probably 1GHz with a GeForce 3 will be enough.

Why need 1433MHz when 733MHz (667MHz actually; Microsoft has slowed the X-Box down) will get the job done? The X-Box is fast NOT only because of CPU processing power, but also because of a kick-ass video processor operating with bandwidth the NV20 can only dream of, and because the whole system runs on data buses much faster (200MHz) than the limits of the PCI (33MHz) and AGP (66MHz) buses inside PCs. Not just the CPU: the whole console is optimized for gaming (and other things?). The sound processor, the graphics, etc.

I don't know if a 1GHz with an NV20 can emulate the X-Box, but AC-3 alone will already take a lot of CPU time.

[This message has been edited by cHiBiMaRuKo (edited March 06, 2001).]


The SaiNt

  • *
  • Posts: 1300
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #133 on: 2001-03-06 15:02:00 »
Go here: http://www.xbox.com/xbox/FLASH/specs.asp

The main graphics chipset for the X-Box is jointly developed by Nvidia and Microsoft. What's stopping Nvidia from doing the same thing for the PC?

Go to nvidia.com and read up; the GeForce 3 is supposed to surpass or equal the Xbox in every way. If you're talking about the RAM it uses, the GeForce 3 will not be using DDR RAM or RDRAM, it will be using something new.


ficedula

  • *
  • Posts: 2178
    • View Profile
    • http://www.ficedula.co.uk
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #134 on: 2001-03-06 15:24:00 »
The XBox has 64MB of RAM for *everything*? Jesus, that's crap. The PC doesn't *need* an incredibly fast data bus because it has more RAM, generally speaking. With only 64MB of RAM I can see the XBox frantically swapping data in and out of memory to make room for textures and sound effects... the PC won't have that problem; once it's loaded into RAM, that's it. Same with the PS2; it has an overfast data bus to compensate for the fact that it has (compared to a modern PC) little RAM to store textures etc. in.

The fact remains most people already *have* a PC, so I doubt the XBox would be cheaper than upgrading. Say I wanted to fully upgrade my PC to a 1GHz+ monster. What'd it cost?

New M/Board: <100UKP
New CPU: 100UKP
New GFX card: 100UKP
Oh, that's it.

Monitor? Got a nice one already. Case? That too. Hard disk? UDMA66 already, good enough for me. Sound card? Live! CD? DVD drive already. RAM? 192MB of PC100 is plenty good enough, but I could upgrade it to 133 pretty cheaply.

The XBox, for anyone with a half-decent PC already, is more expensive than upgrading and won't give you much better performance. I'm not planning on buying a PS2, but at least it costs no more than a major upgrade to my PC, and the price *will* fall.


cHiBiMaRuKo

  • *
  • Posts: 179
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #135 on: 2001-03-06 18:52:00 »

 
Quote
The main graphics chipset for the X-Box is jointly developed by Nvidia and Microsoft. What's stopping Nvidia from doing the same thing for the PC?

Without any competition, nVidia won't make the technology available to PC users. Do you think nVidia is mad enough to release a new chip (let's say the NV25) that will only compete with their own product, the NV20? It's just business strategy from nVidia. If ATI or others can't release a worthy product that can compete with the NV20, nVidia likely won't release a new chip, because if they do, nVidia will find themselves competing with their own product (the NV20), which is bad in a business sense. nVidia can do it of course, but the timing of release must also be taken into account.


 

Quote
Go to nvidia.com and read up; the GeForce 3 is supposed to surpass or equal the Xbox in every way. If you're talking about the RAM it uses, the GeForce 3 will not be using DDR RAM or RDRAM, it will be using something new.

The GeForce 3 will use DDR-SDRAM just like the GeForce2 Ultra (4ns 128-bit DDR-SDRAM). It's just that the NV20 will have a much better memory controller than what the NV15 offered, and thus make more efficient use of the available bandwidth.


 

Quote
The XBox has 64MB of RAM for *everything*? Jesus, that's crap. The PC doesn't *need* an incredibly fast data bus because it has more RAM, generally speaking. With only 64MB of RAM I can see the XBox frantically swapping data in and out of memory to make room for textures and sound effects... the PC won't have that problem; once it's loaded into RAM, that's it. Same with the PS2; it has an overfast data bus to compensate for the fact that it has (compared to a modern PC) little RAM to store textures etc. in.

Micron's DDR-SDRAM for the X-Box has a total of 6.4GB per second of bandwidth, so I'm sure that flushing textures in and out is easy.


 

Quote
The fact remains most people already *have* a PC, so I doubt the XBox would be cheaper than upgrading. Say I wanted to fully upgrade my PC to a 1GHz+ monster. What'd it cost?

New M/Board: <100UKP
New CPU: 100UKP
New GFX card: 100UKP
Oh, that's it.

You can get a GeForce3 (which now has an MSRP of 600 pounds sterling) for 100 pounds sterling at the end of the year? Or did you want to buy only a GF2 MX? Don't think the NV20 price will drop that fast, unless ATI and others give nVidia very intense heat.


 

Quote
The XBox, for anyone with a half-decent PC already, is more expensive than upgrading and won't give you much better performance. I'm not planning on buying a PS2, but at least it costs no more than a major upgrade to my PC, and the price *will* fall.

Did Microsoft release the price for the X-Box already? No, I don't think so. So how do you know the X-Box will cost more than 300 pounds sterling? I'll hold my opinion on the X-Box price till Microsoft announces it; only then will I say whether upgrading a PC is cheaper than buying an X-Box.



Skillster/RedSarg99

  • *
  • Posts: 2286
  • Loving every Final Fantasy
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #136 on: 2001-03-06 19:24:00 »
oh guys, about PSX emulators:
good news for the next release of ePSXe (due in about 1-2 weeks): it's gonna feature Dual Shock / force feedback support and, AND.....
Wait for it....
SAVE STATES / FREEZE SAVES!!!!
so say goodbye to sodding crappy PSX memory card problems!! and say hello to ZSNES-style saving!!

ficedula

  • *
  • Posts: 2178
    • View Profile
    • http://www.ficedula.co.uk
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #137 on: 2001-03-06 20:10:00 »
No, MS haven't declared the price of the XBox. They haven't declared release dates, either (and before you say the end of this year or something, Gates was quoted as saying he wouldn't release it until he got 3x the graphics performance of his rivals). All the estimates say that at $300 MS would be selling at a big loss. While they probably will sell at a loss, there is a limit. So more than $300 looks likely, and European prices always go through the roof compared to the US.

As for memory bandwidth ... yes, that's the whole point. It *has* to have fast bandwidth 'cos it has a truly pitiful amount of RAM compared to the PC. The fast bandwidth lets it move textures etc in and out very fast ... on the PC, you wouldn't *need* to move them at all, since you could store them all in RAM permanently.

Add to that the fact that the specs haven't even been finalised yet - so games can't be relied on to come out at a particularly fast rate - and the XBox does not look like a good bet. I walked into a high street shop a few weeks ago, and they had PS2s for sale over the counter, and a pretty respectable range of titles for it. Now, MS aren't releasing the XBox in Europe to start off with, so even if they DO make a release date by the end of the year (I suppose it *could* happen) it'll never make it to the UK before mid-late 2002 - so it'll be 2003 before you can buy it over-the-counter. By Moore's Law, in 2003 the average PC will be running well over 1GHz, and a top-end model will be at least 2GHz. Hmmm. Not much competition, I feel.

Oh yes: specs. Microsoft is well known for sticking to "predicted" specs, isn't it? Remember the required specs for Win95? A 386 with 4MB RAM. Yes, MS, I think that'll work. All the graphics we've seen so far are based on "predictions" of what the XBox can produce. Nobody actually knows what it'll do in reality.

[This message has been edited by ficedula (edited March 06, 2001).]


dagsverre

  • *
  • Posts: 323
    • View Profile
    • http://ffsf.cjb.net
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #138 on: 2001-03-06 21:18:00 »
 
Quote
Duh! Who say RISC CPU is more superior than of CISC CPU? It's been proven wrong long time ago. This is a good read for you. Click here

I've only skimmed the article, but from what I see it is all about how bad CISC CPUs are. Perhaps you got the terms mixed up? Remember, Intel = CISC!


ficedula

  • *
  • Posts: 2178
    • View Profile
    • http://www.ficedula.co.uk
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #139 on: 2001-03-07 02:29:00 »
True. Haven't read the article, but from what I've learnt, at fast clock speeds RISC is far better than CISC.

For those who aren't sure about the difference: RISC processors complete an instruction very fast (maybe in one clock cycle) but only have very simple instructions.

CISC processors can take a while to do an instruction - up to 10 clock cycles or so - but have lots of instructions which can do slightly more complex things.

So if you have a 500MHz CPU and it's CISC, it might only be executing 50M instructions per second, if those instructions all took 10 cycles. Of course, other things like waiting for disk access/other devices to do things slow it down further.

In contrast, a RISC processor could execute 50M instructions per second with a clock rate of only 100MHz, or maybe 200MHz - because each instruction executes so quickly.

You might say "So what? Surely it takes more instructions to do everything with RISC because each instruction only does one simple thing". Yes - in theory. In practice, nobody codes in assembler, so no program ever uses the full potential of the hundreds of different commands on a CISC CPU. Compilers aren't always 100% correct about the absolute most efficient CPU instructions to generate.

Whereas with a RISC, because there aren't so many instructions, it's quite easy for compilers to generate code which does use the most efficient instructions - there aren't many possibilities to choose from!

So on modern PCs, RISC generally outperforms CISC unless you handcode everything in assembler, and no one does that. It takes too long.
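The clock-versus-cycles arithmetic above can be written out directly; the 10-cycles-per-instruction CISC figure is the worst-case example from the post, not a measured number:

```python
def instructions_per_second(clock_hz, cycles_per_instruction):
    # Sustained instruction throughput for a fixed cycles-per-instruction (CPI).
    return clock_hz / cycles_per_instruction

cisc = instructions_per_second(500e6, 10)  # 500MHz CISC at 10 CPI -> 50M instr/s
risc = instructions_per_second(50e6, 1)    # 50MHz RISC at 1 CPI  -> 50M instr/s
print(cisc == risc)  # True: a tenth of the clock, same throughput
```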


cHiBiMaRuKo

  • *
  • Posts: 179
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #140 on: 2001-03-07 08:54:00 »

 
Quote
I've only skimmed the article, but from what I see it is all about how bad CISC CPUs are. Perhaps you got the terms mixed up? Remember, Intel = CISC!

Here's what the article says.

RISC vs. CISC, get over it

This seems to have touched a nerve with a number of people, since I called the whole CISC vs. RISC argument bullshit. It is. RISC is just a simpler way of designing a processor, but you pay a price in other ways. By placing the constraint that the instruction set of a processor be fixed width, i.e. all instructions be 2 bytes, or 4 bytes, or 16 bytes in size (as with the Itanium), it allows the engineers to design a simpler decoder and to decode multiple instructions per clock cycle. But it also means that the typical RISC instruction wastes bits, because even the simplest operation now requires, in the case of the PowerPC, 4 bytes. This in turn causes the code on the RISC processor to be larger than code on a CISC processor. Larger code means that for the same size code cache, the RISC processor will achieve a lower hit rate and pay the penalty of more memory accesses. Or alternately, the RISC processor requires a larger code cache, which means more transistors, and this merely shifts transistors from one area of the chip to another.

The people who declared x86 and CISC processors dead 10 years ago were dead wrong. CISC processors merely used the same kind of tricks as RISC processors - larger cache, multiple decoders, out-of-order execution, more registers (via register renaming), etc. In some cases, such as during task switching when the entire register set of the processor needs to be written to memory and then a different register set read in, the larger number of "visible" registers causes more memory traffic. This in turn puts a load on the data cache, so as with the code cache, you either make it larger and use more transistors, or you pay a slight penalty.

My point is, these idiots from 10 years ago were wrong that RISC is somehow clearly superior to CISC and that CISC would die off. It's merely shifting transistors from one part of the chip to another. On the PowerPC, all instructions are 32 bits (4 bytes) long. Even a simple register move, an addition of 2 registers, a function return, pushing a value to the stack - all of these operations require 4 bytes each. Saving the 32 integer registers alone requires 128 bytes of code, 4 bytes per instruction times 32 instructions. Another 128 bytes to reload them. Ditto for the floating point registers. So who cares that it simplifies the decoder and removes a few transistors there. It causes more memory traffic and requires more transistors in the cache.

And the decoding problem is not that big of a problem, for two reasons. I'll use the example of the 68040, the PowerPC, and the x86. A PowerPC chip can decode multiple instructions at once since it knows that each instruction is 4 bytes long. A 68040 processor has instructions that are a minimum of 2 bytes long and can go up to 16 bytes in size (I think; I can't think of an example off the top of my head that's longer than 16). Let's say 16. The necessary bits required to uniquely decode the instruction are usually found in the first 2 bytes of the instruction, 4 bytes for floating point. That's all the decoder needs to figure out what this instruction is. It needs to decode the additional bytes only in cases of complex addressing modes. This is one area where Motorola screwed up (and it likely decided the fate of the 68K): they truly made a complex instruction set that requires decoding of almost every byte.

In the case of x86, Intel either lucked out or thought ahead and made sure that all the necessary bits to decode the instruction are as close to the beginning of the instruction as possible. In fact, you can usually decode an x86 instruction based on at most the first 3 bytes. The remaining bytes are constant numbers and addresses (which are also constant). You don't need to decode, say, the full 15 bytes of an instruction, when the last 10 bytes are data that gets passed on down into the core. So as one reader pointed out in email, Intel stuck with the old 8-bit processor techniques (such as the 6502) where you place all your instruction bytes first, then your data bytes. In the case of the 6502, only the first byte needed decoding. Any additional bytes in the instruction were 8-bit or 16-bit numeric constants.

So decoding x86 is quite trivial, almost as easy as decoding RISC instructions. AMD seems to have figured out how to do it. Intel almost had it figured out in the P6 family (with only the 4-1-1 rule to hold them back), and then for the Pentium 4 they just decided to cut features and gave up on the whole decoding thing. That's Mistake #3 on my list of course, but this in no way demonstrates how superior fixed instruction sizes are over variable sized instructions.

Over the years, CISC and RISC have kept pace with each other. Sure, one technology may leap ahead a bit, then the other catches up a few months later. Neither technology has taken a huge lead over the other, since the decision whether to use fixed or variable sized instructions and whether to have 8, 16, 32 or 64 registers in the chip are just two factors in the overall design of the processor. Much of the rest of the design between RISC and CISC chips is very similar. And over time, ideas get borrowed both ways.
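The excerpt's register-save arithmetic and its code-density point fit in a toy model; the variable-length byte counts below are made-up illustrative values, not real x86 encodings:

```python
def code_bytes_fixed(n_instructions, width=4):
    # Fixed-width (PowerPC-style) encoding: every instruction costs `width` bytes.
    return n_instructions * width

def code_bytes_variable(lengths):
    # Variable-width (x86-style) encoding: each instruction has its own length.
    return sum(lengths)

# Saving 32 integer registers on PowerPC: 32 stores x 4 bytes = 128 bytes.
print(code_bytes_fixed(32))  # 128
# The same count of simple ops under a hypothetical 1-3 byte mix is denser,
# which is the article's cache-footprint argument:
print(code_bytes_variable([1, 2, 3] * 10 + [1, 2]))  # 63
```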


 

Quote
True. Haven't read the article, but from what I've learnt, at fast clock speeds RISC is far better than CISC.
For those who aren't sure about the difference: RISC processors complete an instruction very fast (maybe in one clock cycle) but only have very simple instructions.

CISC processors can take a while to do an instruction - up to 10 clock cycles or so - but have lots of instructions which can do slightly more complex things.

So if you have a 500MHz CPU and it's CISC, it might only be executing 50M instructions per second, if those instructions all took 10 cycles. Of course, other things like waiting for disk access/other devices to do things slow it down further.

In contrast, a RISC processor could execute 50M instructions per second with a clock rate of only 100MHz, or maybe 200MHz - because each instruction executes so quickly.

You might say "So what? Surely it takes more instructions to do everything with RISC because each instruction only does one simple thing". Yes - in theory. In practice, nobody codes in assembler, so no program ever uses the full potential of the hundreds of different commands on a CISC CPU. Compilers aren't always 100% correct about the absolute most efficient CPU instructions to generate.

Whereas with a RISC, because there aren't so many instructions, it's quite easy for compilers to generate code which does use the most efficient instructions - there aren't many possibilities to choose from!

So on modern PCs, RISC generally outperforms CISC unless you hand-code everything in assembler, and no one does that. It takes too long.
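The throughput arithmetic in the quoted explanation works out like this (a quick sketch; the clock rates and cycles-per-instruction figures are the post's illustrative numbers, not measurements of any real chip):

```python
# Effective instruction throughput = clock rate / average cycles per
# instruction (CPI). The CPI values here are the hypothetical ones from
# the quote: ~10 for a CISC, ~2 for a RISC.

def mips(clock_hz, cpi):
    """Millions of instructions retired per second."""
    return clock_hz / cpi / 1e6

cisc = mips(500e6, 10)   # 500 MHz CISC at 10 cycles per instruction
risc = mips(100e6, 2)    # 100 MHz RISC at 2 cycles per instruction

print(f"CISC: {cisc:.0f} MIPS")   # 50 MIPS
print(f"RISC: {risc:.0f} MIPS")   # 50 MIPS
```

So by these (made-up) numbers, a 100MHz RISC and a 500MHz CISC retire the same number of instructions per second; raw clock speed alone says very little.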

Obviously both of you have to read the article more thoroughly. It's quite long and a good off-line read.

Quote
No, MS haven't declared the price of the XBox. They haven't declared release dates, either (and before you say the end of this year or something, Gates was quoted as saying he wouldn't release it until he got 3x the graphics performance of his rivals). All the estimates say that at $300 MS would be selling at a big loss. While they probably will sell at a loss, there is a limit. So more than $300 looks likely, and European prices always go through the roof compared to the US.

Even at US$300, Sony is selling the PS2 at a loss too (covered by licensing fees). So I think Microsoft will do the same thing.

Quote
Add to that the fact that specs haven't even been finalised yet - so games can't be relied on to come out at a particularly fast rate - and the XBox does not look like a good bet. I walked into a high street shop a few weeks ago, and they had PS2's for sale over the counter, and a pretty respectable range of titles for it. Now, MS aren't releasing XBox in Europe to start off with, so even if they DO make a release date by the end of the year (I suppose it *could* happen) it'll never make it to the UK before mid-late 2002 - so it'll be 2003 before you can buy it over-the-counter. By Moore's law, in 2003 the average PC will be running well over 1GHz, and a top-end model will be at least 2GHz. Hmmm. Not much competition, I feel.

[Oh yes: Specs. Microsoft is well known for sticking to "predicted" specs, isn't it? Remember the required specs for Win95? A 386 with 4MB RAM. Yes, MS, I think that'll work. All the graphics we've seen so far are based on "predictions" of what the XBox can produce. Nobody actually knows what it'll do in reality.]

I think the X-Box spec is pretty much finalised already, given that nVidia are already producing the GPU and the MCPx chips for X-Box. It's just that the specs aren't publicised as much as I thought they would be. In general, the information available today should be enough to gauge how well the X-Box will perform.



cHiBiMaRuKo

  • *
  • Posts: 179
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #141 on: 2001-03-07 09:01:00 »

 
Quote
oh guys about psx emulators:
good news for the next release of epsxe (due in about 1-2weeks) its gonna feature dual shock / force feedback support and, AND.....
Wait for it....
SAVE STATES/ FREEZE SAVES!!!!
so say good bye to sodding psx crappy memory card problems!! and so hello to Zsnes style saving!!

Oh man... save states. A nifty feature indeed. Dual shock support? Will it support PS analogue controllers via DirectPad Pro?


Skillster/RedSarg99

  • *
  • Posts: 2286
  • Loving every Final Fantasy
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #142 on: 2001-03-07 14:26:00 »
YES!!! EPSXE WILL SUPPORT ALL things that support force feedback (PC gamepads) and all PSX dual shock controllers connected through a PSX-to-PC adaptor THAT are designed with dual shock support!!
I love save states man! I hate mem' cards!
EPSXE is finally moving in the right direction (well, faster than before!)
People said it couldn't be possible, that they would have to save ALL the PC and video RAM in use; all I say is AHAHAHAHAH!!  :D

Ant

  • *
  • Posts: 402
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #143 on: 2001-03-07 15:20:00 »
Nice....

What's even better for me is that now, finally, ePSXe 1.20 supports FF9 PAL!!!!!

Yeah Baby!!!

So now it's time for Qhimm to roll out his FF9 editor.  Sweet!


Skillster/RedSarg99

  • *
  • Posts: 2286
  • Loving every Final Fantasy
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #144 on: 2001-03-07 17:19:00 »
check the info on EPSXE v1.2.0 at www.psxemu.com
see ya

ficedula

  • *
  • Posts: 2178
    • View Profile
    • http://www.ficedula.co.uk
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #145 on: 2001-03-07 18:25:00 »
OK, I've read that article now.

Oh, THAT was a well constructed argument! I mean, you can tell they spent hours and hours researching that, seeking out all the technical data, so they could make the informed and persuasive argument: "It's crap". I mean, that convinced me! Before, I always thought you needed *reasons* and *proof* when you were saying an argument was wrong, but no! All you have to do is say "that's crap" and immediately everybody has to be convinced! Well, that's going to work SO well in my university work; "I could give proof, but no; it's just crap." Obviously all those scientists who try to PROVE theories are wasting their time, all they need to do is swear at anybody who disagrees and that *makes* them correct!

As you might have realised, that article somehow failed to convince me on the RISC/CISC issue.


cHiBiMaRuKo

  • *
  • Posts: 179
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #146 on: 2001-03-07 21:59:00 »

 
Quote
Oh, THAT was a well constructed argument! I mean, you can tell they spent hours and hours researching that, seeking out all the technical data, so they could make the informed and persuasive argument: "It's crap". I mean, that convinced me! Before, I always thought you needed *reasons* and *proof* when you were saying an argument was wrong, but no! all you have to do it say "that's crap" and immediately everybody has to be convinced! Well, that's going to work SO well in my university work; "I could give proof, but no; it's just crap." Obviously all those scientists who try to PROVE theories are wasting their time, all they need to do is swear at anybody who disagrees and that *makes* them correct!

I think the author already gives proofs, backed by examples in second- and third-level programming languages. He doesn't just say the "RISC is superior to CISC" theory is crap (even I learnt that theory in one of my 8088 subjects at uni), he also gives proof of what he's saying. Why don't you send him an e-mail and try to convince him that he is wrong?


dagsverre

  • *
  • Posts: 323
    • View Profile
    • http://ffsf.cjb.net
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #147 on: 2001-03-08 00:00:00 »
Well, for starters he didn't even bother to address what Fic explained so well: a few fast instructions make faster programs than many slow ones, as long as the compiler only uses a few of them anyway.

Also I don't think there's anything wrong with having variable-length instructions under the RISC architecture, though I have no real knowledge in this area. I just know that the general opinion (on Slashdot etc.) is that Intel is crap, for the reasons Fic mentioned.

It is a bit like the X-Box vs. PSX2 discussion in a way, RISC being PSX2 and CISC being X-Box...

And at any rate, the clock speed race is wrong. What, no games support dual processors? Well, apart from that being wrong (Unreal Tournament supports it, which is enough for me...), once the world starts using them, games supporting them will be written!

Ever heard of the Transmeta processor? It emulates the x86 architecture on a RISC-style core. It consumes very little power, and because of that it doesn't need a fan. It's of course not as fast, but still usable for everything but games. I believe that is where the world is heading... there's simply no point in going faster and faster with bigger and bigger fans when you can have many small processors running cold.


ficedula

  • *
  • Posts: 2178
    • View Profile
    • http://www.ficedula.co.uk
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #148 on: 2001-03-08 00:11:00 »
OK, as for his arguments:

"This in turn causes the code on the RISC processor to be larger than code on a CISC processor. Larger code means that for the same size code cache, the RISC processor will achieve a lower hit rate and pay the penalty of more memory accesses. Or alternately, the RISC processor requires a larger code cache, which means more transistors, and this merely shifts transistors from one area of the chip to another."

Nope, got it wrong. By making every instruction exactly 4 bytes in length, you speed up fetch: you know that 32 bytes of code contain 8 instructions, and when you're executing instruction #7, say, it's time to go and get the next block of code. It also means instructions never get split up over cache-line boundaries - for example, if your cache line was 32 bytes, with variable-length instructions, what happens when an instruction starts on the last byte and carries on elsewhere? As an example, on Win98 there's a code optimizer tool that pads executable code so the instructions all end on 4K boundaries. THIS INCREASES THE SIZE OF THE FILE BUT IMPROVES SPEED OF EXECUTION.
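The boundary-straddling point above is easy to demonstrate by walking a stream of instruction lengths across 32-byte cache lines. The variable lengths below are invented for illustration (they're not real x86 encodings), but both streams total 64 bytes:

```python
# Count instructions that straddle a cache-line boundary. With fixed
# 4-byte instructions none ever straddle; with variable lengths, some do,
# and a straddling instruction needs bytes from two different lines.

LINE = 32  # cache line size in bytes

def straddles(lengths):
    """Number of instructions that cross a cache-line boundary."""
    addr, crossings = 0, 0
    for n in lengths:
        if addr // LINE != (addr + n - 1) // LINE:
            crossings += 1
        addr += n
    return crossings

fixed = [4] * 16                                              # 64 bytes
variable = [1, 3, 6, 2, 7, 5, 1, 4, 6, 3, 2, 8, 5, 4, 3, 4]  # also 64 bytes

print(straddles(fixed))     # 0 crossings
print(straddles(variable))  # 1 crossing
```

With fixed-size instructions the decoder also knows where every instruction starts without decoding its predecessor first, which is the other half of the argument.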

"The necessary bits required to uniquely decode the instruction are usually found in the first 2 bytes of the instruction, 4 bytes for floating point. That's all the decoder needs to figure what this instruction is. It needs to decode the additional bytes only in cases of complex addressing modes."

First of all he says they're "usually" found in the first two bytes. Then he says "that's all it needs". Then he changes his mind again and says that sometimes it DOES need the rest of the instruction! Either
a) you fetch the whole instruction, leading to increased memory traffic, exactly what he was accusing the RISC of, or
b) you don't, and when it turns out you DID need the rest of it, you're f***ed. (Well, actually it means another memory access - but you've just lost out speed-wise).

"...could give the appearance of executing 1 instruction per cycle. Any given instruction still takes multiple clock cycles to execute, but by overlapping several instructions at once at different stages of execution, you get the appearance of one instruction per cycle."

You do; on this he's quite right, if you've got enough simultaneous fetch/decode/execute subunits. He also points out this fails if the CPU is wrong about which instruction is going to be executed next (in the case of a Jump (GOTO, if you like) command). So, everybody, which would we all prefer:

a) A CPU which appears to execute 1 instruction per cycle except when it jumps, when the next few instructions will take up to 10 cycles.
b) A CPU which appears to execute 1 instruction per cycle except when it jumps, when the next few instructions will take up to 3 cycles.

Hmmm .... I'd want (b). That's the RISC.

Incidentally, JUMP instructions are fairly common in code. I've done some assembler programming as part of my course, and taking the example assembler I wrote for one of the exercises, out of 17 instructions, 3 were conditional jumps. Lots of things generate jumps: calling a function, IF statements, loops, etc.
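Plugging the 3-jumps-out-of-17 figure above into the two penalty choices (a) and (b) gives a rough average cost per instruction. This is deliberately pessimistic - it assumes every jump pays the full penalty, i.e. every branch is mispredicted - but it shows how much the penalty size matters:

```python
# Back-of-envelope average cycles per instruction: non-jumps cost 1 cycle,
# jumps cost `penalty` cycles. Numbers are the post's own: 3 jumps out of
# 17 instructions, penalty 10 (CPU a) vs penalty 3 (CPU b).

def avg_cpi(total, jumps, penalty, base=1):
    """Average cycles per instruction if every jump pays the full penalty."""
    return (base * (total - jumps) + penalty * jumps) / total

a = avg_cpi(17, 3, 10)   # CPU (a): 10-cycle jump penalty
b = avg_cpi(17, 3, 3)    # CPU (b): 3-cycle jump penalty

print(f"(a) {a:.2f} cycles/instruction")   # ~2.59
print(f"(b) {b:.2f} cycles/instruction")   # ~1.35
```

So even with only 3 jumps in 17 instructions, the 10-cycle penalty nearly doubles the average instruction cost relative to the 3-cycle one, in this worst case.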

Also, he mentions exactly *how* modern CISC processors gain their speed: multiple decoders, like you need for the above trick. What about the PIII? "Decoders 2 and 3 can decode only simple, RISC-like instructions that break down into 1 micro-op." OK: so it's fast because the extra decoders in it are small RISC processors... that's exactly the case here: modern CISCs *do* use a lot of RISC-type architecture to run as fast as possible. Does this suggest, perhaps, that RISC has speed advantages over CISC? No, of course it doesn't!

The guy who wrote that article seems to have a good grounding in CPU history, and a fairly good grasp of theory, but a less-than-100% idea of how the basic electronics in the CPU work. Given the choice between believing this guy, who seems to have a lot of x86 experience but not a lot else, or believing my CompSci lecturers - who tell us to go and check everything out in the hardware labs, which we do - I'll go with the lecturers.


Skillster/RedSarg99

  • *
  • Posts: 2286
  • Loving every Final Fantasy
    • View Profile
PSX Emulator (FAO Jari, Dag & Friends)
« Reply #149 on: 2001-03-08 16:14:00 »
HEY ITS OUT EPSXE1.2.0
GO GRAB IT FROM www.psxemu.com !!!