Yes - this is FF related, as FF9 has come out on the PlayStation, and to play it on the PC you'll need to use an emulator! (Of which, personally speaking, ePSXe is the best!)
I'm currently busy with a project I consider cooler than PSX emulators: www.legacy-ovwp.org
So all my programming time goes to that project. I don't have any experience in emulation anyway; I'd need half a year to catch up with the people already working on it, and probably couldn't help all that much.
Most of the programmers who helped enhance the PC version of FF8 (you remember those lovely MIDI problems, people!) are in this forum. As FF9, FF10, etc. will ONLY be out on the PSX and PSX2, don't you think it was a good idea for me to ask if they could help out the people who are making it possible to play these games on our PCs? Sorry - but the PSX looks too crap to play these games on, so the emulators help no end!!! (And yes - I have a PSX, and I still play the games on my PC!)
Yes - There are people out there doing plug-ins already, but only a few people. Quite frankly, the more the merrier! :)
I'm just going to have a look at your site Dag!
Oh well. It was worth a shot!... I'm going back to playing FF9... Happy hunting!!! :)
That's supposed to be my line!
:D
Ha! I'll bring this post back to its subject. As for emulators, I prefer Bleem!
*I know how some of you guys here hate someone who uses pirated programs*, but I got my full version of Bleem! without paying even a penny! MWAHAHA... the wonders of hacking, I would say... :D Let's not get too hasty on the subject, shall we??
LIVE PC!!
[This message has been edited by Srethron Askvelhtnod (edited January 17, 2001).]
Nothing but a numbers game....I bet you that the Gamecube specs will go up, too....It's just paper, right?
C-net has one too.
http://www.gamecenter.com/Features/Exclusives/Ces2001/?st.gc.fd.fe.i
According to them....it will.
And FFIX will be out on PC. When I get round to doing my first web page campaigning for it. Gee, that should be in about 30 years, then. I think FFIX will be the last one you see on PC. By the look of it, that's the tradition.
I hope they do, as it's so much like FF7 - It's a good game.
However - I'm content to use ePSXe to play it!...
(Hopefully this will be WRONG, and they might see sense and release the game on other platforms too...)
The PSX2 has a sickeningly fast bus (the thing that transfers stuff between memory, CPU, DVD, video card etc.). You can't really compare the speed of the PSX2 bus with the PC bus. On the other hand, the PSX2 has only 4 MB of graphics memory in total!
On a PC, you usually download all the textures to the memory of your graphics card when the game starts, and use them from there the rest of the time. On the PSX2, the textures get pumped over the main system bus each time they're needed. There is no way the PC is going to be able to do that. So basically, any emulation would involve rewriting the entire game automatically on the fly...
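To make that concrete, here's a toy C sketch of the two approaches (the sizes and names are invented for illustration; real console or driver code looks nothing like this). The PC pays the bus cost once at load time; the PSX2-style approach pays it again every frame:

    #include <stdio.h>
    #include <string.h>

    #define FRAMES 3
    #define TEXTURE_BYTES (64 * 1024)       /* one made-up 64KB texture */

    static unsigned char main_ram[TEXTURE_BYTES];   /* where the texture lives  */
    static unsigned char video_ram[TEXTURE_BYTES];  /* card-local texture store */

    /* PC-style: one copy across the (slow) bus at load time, then draw
       from video memory with no further bus traffic. */
    static void pc_load(void)   { memcpy(video_ram, main_ram, TEXTURE_BYTES); }

    /* PSX2-style: not enough video memory to keep textures resident, so
       every frame re-streams them over the (very fast) main bus. */
    static void ps2_frame(void) { memcpy(video_ram, main_ram, TEXTURE_BYTES); }

    int main(void) {
        long pc_traffic = TEXTURE_BYTES;    /* the single load-time upload */
        long ps2_traffic = 0;
        int i;

        pc_load();
        for (i = 0; i < FRAMES; i++) {
            /* the PC draws straight from video_ram here - no copy needed */
            ps2_frame();
            ps2_traffic += TEXTURE_BYTES;
        }
        printf("Bus traffic after %d frames: PC %ld bytes, PSX2-style %ld bytes\n",
               FRAMES, pc_traffic, ps2_traffic);
        return 0;
    }

That per-frame traffic is why a straight emulator can't just replay the transfers over AGP; it would have to notice what the game is doing and restructure it, hence the rewriting on the fly.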
Ummm... PS2, again. The code for programming PS2 games is called SonyPS2x. It is a VERY tough programming language that I think not even Qhimm can master. Even the person in Square who knows the most about it is STILL debugging the game. As for X-Box, it supports most programming languages, including XGame++ (the X-Box game language), C++, Delphi, Visual Basic?!, SonyPSx (the PSX programming language), SaturnV (Sega Saturn), Dream128 (Dreamcast) and Cube128 (Nintendo's Cube). I'm not sure about this, but one of my colleagues at Square received this news. FFX will be on X-Box and on PS2.
Get your facts right. PS games are coded in C/C++ with customized headers on a Unix platform. PS2 games are done the same way; they use C/C++ (again with customized headers) on a Unix-based PS2 development kit. The problem is that Sony doesn't give the header files' source code to developers, instead opting to integrate everything into that development kit, so the developers have to read the documentation to see what a particular header file does and make decisions on a trial-and-error and experimentation basis. So it'll be quite some time before developers really master PS2 console programming (probably 2 or 3 years).
And there's no such thing as a SonyPSx or SonyPS2x language (unless you want to call that modified C/C++ by that name). That's bullshit. As a side note, X-Box programming is easy, as it's based on DirectX 8 and uses standard C++. About Dreamcast or Gamecube, I don't know. I know there are Malaysians who work for Square, but I don't think they (Square) would hire a 15-year-old kid, no matter how good he/she is at programming. A 15-year-old should be sitting for their PMR, not debugging games for Square. That's too early.
Joey: Does someone feed you this stuff all day, or do you just think it all up?
Jari Huttunen: re: the Truth in Advertising. I agree. Everyone thinks Nintendo's machine is inferior because they seem to be providing honest specs.
Ficedula: True. FF8PSX was a graphical improvement over FF7PSX, and FF9PSX was a graphical improvement over FF8PSX. And I think that all of the Tomb Raiders after the first sucked.
cHiBiMaRuKo: Thanks for saving someone else the trouble. :wink:
The 64MB of the XBox means nothing! With a good programmer, the PSX2 will be able to produce much better and faster graphics with the 4MB of memory in it. I would predict that in the beginning the XBox will have better GFX; then, as the programmers begin to master the PSX2, it will catch up. At least there's still a way out of Micro$oft-infested products...
And I'm not gonna get Dreamweaver any time soon, so could someone give me a good place to start out a web page?
Sir Canealot: As in HTML help?
http://www.dencity.com/html/ helped me out when I was getting started.
[This message has been edited by Srethron Askvelhtnod (edited January 19, 2001).]
Well, I guess everyone needs to at least learn SOMETHING on tech subjects..
Anyway, about the consoles: I've looked at the tech stuff and features and whatever, and it looks to me like (technically speaking) X-Box comes out on top, Gamecube slightly under, and PS2 below that. The main question, though, is which console will end up with the best games and best developers. Maybe price is also a question... I hear Gamecube will be out for $200? PS2 was $300, and there are rumors about the X-Box being as high as $500... You can even pre-order it from someplace now for $499.95.
All of these features are present in the other consoles (Gamecube/XBox), but the PS2 is here already. The PSX dropped in price pretty fast, so by the time the XBox sees the light of day it'll probably be 3X the price of a PS2, if not more, and there'll be lots of games out for the PS2. Same goes for the Gamecube. The N64 crashed and burned, as far as I'm concerned, because it came out at exactly the wrong time - lots of PSX games available, and PS2 on the horizon. Could happen again.
Just my 2c.
Dag Sverre: True. However, about the PS2 catching up: won't the games for the X-Box similarly improve as the programmers begin to master the X-Box?
Nope, because the programmers will master the X-Box from the very first day; they already know how to program PC games.
I agree with Ficedula. My bet is on the PS2.
If you haven't been bothered to read what I just wrote: DEVELOPERS ARE LAZY and probably WON'T GO WITH THE PS2 once the EASIER systems are out.
Also, in regard to specs, the PS2 has to repeat-render images 8 times to get the same effect as the XBox and Gamecube, so unless Sony gets a kit to the developers, they are not gonna be too motivated to develop for it. (Like at the beginning of this thread: no one would make an emulator or plugins because it was a bit hard and would take too much time.)
Speedwise, at the moment the Gamecube ranks the best, because:
developers have reported 20,000,000 polygons per second in a game environment;
PS2 developers are a bit lazy and have only made games that use 3,000,000 polygons per second;
Microsoft haven't sent any final development kits out because they are still finalising them, so there are no games to report pps rates for.
I seem to have been sat here for a while writing this, so I'm gonna have to go and play FF9, because I want to see how much better it gets.
The PS2 is also released "now", and that makes a difference. Besides, when Sony actually manages to provide enough PS2s for the market, the "latent" buyers will flood in, and with such a vast market the PS2 will surely beat the X-Box.
I remember someone saying that the programmers are lazy, right? Although the PC platform has a wide range of games and fantastic graphics, a lot of people still stick to the pixelated PS platform. Why? Ease of use, and no such thing as big bugs. When programming the X-Box, the same problem and attitude will apply, thus causing the X-Box to be a failure. You might say the only reason PC programs have big bugs is because PCs have many different hardware configs. True, but when Microsoft is in charge they will surely bring in all their stupid stuff like "DirectX 9.0 for X-Box!" updates, and programmers will have to update their software and knowledge while X-Box users have to update their BIOS or firmware. If that is the case, will the programmers ever master anything???
Look at it this way....
Even though Microsoft is a giant in the PC software industry, Sony itself is a giant in the gaming and electronics industry. So let's say money goes into marketing: who do you think will win? The X-Box is made of components from multiple companies, so not much of the profit goes to Microsoft. For Sony, most but not all of the components are made by Sony themselves, so most of the revenue goes to Sony. This also makes them less dependent on suppliers.
So, what's the final verdict?
The PS2 will be a sure winner but only time will tell the fate of the X-Box. The X-Box has a chance of winning over the PS2 only if Microsoft plays its cards properly. But looking at Microsoft's performance lately, it really looks unlikely.
I read that post just before lunch ... thought I'd eat ... then I come back to find out you've said everything I was going to.
Wait a minute - just saw the beginning of the Saint's post (yes, sometimes I read backwards). What's the point of buying a next-gen console if you are gonna play all the games that you have already played and probably completed on the PSX? Also, in about 6 months' time there will hardly be any PSX games left, so that market will be gone. Is it me, or did I go into a shop and see only 10 PS2 games? If that's true, then there's a danger of what happened to the N64 in Japan happening to the PS2 (they took the consoles back because they'd completed all the launch titles). Although the games are bigger, so they'd take longer to complete, meaning more time to get games out.
I will always use my PC more than a console, because it is kept sufficiently updated that I never get big bugs.
I don't think the PS2 is overpriced - new consoles always cost a fortune, and it isn't that much above what the PSX cost on release.
Re: Virus: A virus is only possible where executable code is distributed, generally speaking. So if some company made a game which updated itself via the net (quite possible with nextgen consoles) in theory the same technique could be used to send a virus to the console. Of course, all it could really do would be mess up the current game and any memory cards. Unless you had a hard disk installed, when you *are* screwed.
That's also the basic problem with XBox/DirectX. If each game uses the version of DX on its own CD, then you guarantee the game runs OK - but what happens if somebody fixes a bug in DX that makes effects look smoother, etc.? Either you can't use it - or you're back to the update-a-rama again.
PS2 may have backward compatibility, but unfortunately it isn't perfect. Can anyone say FF Anthology?
The X-Box is also easy to program for: you just need the DirectX 8 SDK, unlike the PS2, which requires a dev kit and comes with a steep learning curve. There will be NO DirectX 9 for X-Box or anything like that whatsoever, as everything is handled with DirectX 8, and with the hardware streamlined, software companies don't have to deal with multiple hardware configurations anymore. Not to mention it'll come with a HDD and a network adapter right out of the box. The rumor of the X-Box being much more expensive than the PS2 is bull. No one knows the price yet.
Well, the only thing I don't like about X-Box is their controller....
Sure, DVD playback and backwards compatibility are nice things to have... But what if you already have a DVD player and an old system? (Also, most new computers can play DVDs.) They're just making the system more expensive so that you can buy stuff that you already have.
They also are already planning some cool looking games - the only problem is Square won't be on that system, more than likely :(
If you take one particular effect, which we've chosen specifically because the Gamecube supports it in hardware and the PS2 doesn't, we can do in one render what takes the PS2 eight renders. Also, we aren't saying how long a render takes, so it could mean anything.
See what I mean? Technical specs can always be distorted to mean absolutely anything.
As for DVD playback - I think it's gonna be an important feature in consoles. Sure, most PC's (including mine) have DVD playback, but unlike a console, most PC's aren't connected to the main TV. You can't deny it's a nice bonus.
PS2's hardware can render one effect at a time. Say, to make a polygon (1), apply a texture (2), and make it look "shiny" (3), it has to be rendered 3 times on the PS2 and once on the Gamecube. The Gamecube can do up to 8 things like that to a polygon in one pass (as can the X-Box, I think), but the PS2 can only do one thing per pass.
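If the pass-counting is confusing, here's a toy C model of it (the numbers are invented; this is nothing like real console code):

    #include <stdio.h>

    #define NUM_EFFECTS 3   /* polygon + texture + shine, as above */

    /* How many passes does it take if the hardware can apply
       effects_per_pass effects in one trip through the pipeline? */
    static int passes_needed(int effects_per_pass) {
        int passes = 0, applied = 0;
        while (applied < NUM_EFFECTS) {
            applied += effects_per_pass;
            passes++;
        }
        return passes;
    }

    int main(void) {
        printf("PS2-style (1 effect/pass):       %d passes\n", passes_needed(1));
        printf("Gamecube-style (8 effects/pass): %d passes\n", passes_needed(8));
        return 0;
    }

So the same three-effect polygon costs three trips through the PS2 pipeline and one through the Gamecube's.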
Think about it this way: the X-Box needs lots of extra hardware functionality to be able to do that eight-effects thing. And all that functionality doesn't matter one bit if the PSX2's renders are simply eight times faster.
In addition, what if you want brute polygon force? The best graphics out there, like the pre-rendered videos, look the way they do because they use so insanely many polygons. They don't necessarily even texture them; instead they make out the detail from the polygons... So, if you don't want to draw any effects, the PSX2 has the capacity for eight times more polygons (well, the X-Box probably isn't eight times slower per render, so it isn't completely true).
I like the PSX2 bandwidth approach much better. The PC bus was designed for running office applications, not running 3D graphics.
From what I read, there isn't supposed to be much speed loss from rendering multiple effects at once... anyway, if the speed loss were too great, the programmers COULD program the games to render 8 times instead of rendering one polygon with 8 effects in a single pass, couldn't they?
Anyway, the Gamecube demos I've seen look awesome, and I think the system will be great once it comes out; I don't think it's gonna be priced terribly high. Nintendo may not win, but I think they might come out ahead of Microsoft in the new set of consoles......
This is all pure speculation of course - I'm just saying the only proper way to compare consoles is to actually *compare* them. Since the gamecube and XBox aren't out yet, we can't do that. Any discussion, even based on technical specs, is just guesswork until then.
If you feel that there is anything I should have added, please tell; anyone who tells me I should not have included certain things can go shove it up their arse.
Microsoft shouldn't be considered the "underdog", because it has more money than both Sony and Nintendo combined...
My friend is sitting here reading this, and he says "You better buy an X-Box before its forcefully implanted into your spinal cord."
That's why there seems very little point in buying an X-Box, since I reckon pretty much all X-Box games will be ported over to PC so they reach a wider audience.
Plus the gamecube is gonna rock and kick PS2's ass.
No is my answer. Which PC graphics card supports DirectX 8 functions in hardware? Only the ultra-expensive NV20. Which soundcard supports DLS2? None as of yet!
An X-Box game will have to be massively re-engineered for porting to PC (reducing graphical elements or changing the sound engine). That's my point here. There's a big difference between DirectX 8-supporting hardware and DirectX 8-compliant hardware. Due to its price, the NV20 won't sell unless the price drops. And while Live! and Vortex2 cards technically support DLS2 in hardware, their current drivers don't have that feature.
[This message has been edited by Srethron Askvelhtnod (edited March 04, 2001).]
You've still missed the point. Yes, there'll be a lot of changes needed to port XBox games to the PC. But it will *still* be less than for games written for the PS2/Gamecube, which will in all likelihood use a COMPLETELY different system to DirectX. And so what if nothing supports DX8 100% in hardware yet? Do you need 100% hardware acceleration? Nope. Do you see XBoxes for sale? No, so by the time they come out, I suspect a lot of PC hardware *will* have DX8 support.
Who says the X-Box is just another PC? The X-Box is MUCH faster than an equivalently clocked PC. It doesn't have to deal with the super-slow AGP and PCI connectors. It has an almost dedicated high-speed bus to every component of the console, i.e. RAM, HDD etc. Developers will surely take this into consideration and will probably put in more texture processing than even AGP 8x could handle.
I see that you agree with my point that an X-Box game has to be massively rewritten for porting to PC. But I don't think that porting PS2/Gamecube games MUST be harder than porting X-Box games. For example, the Summoner demo for PC. Still looks very good to me. After all, PS2 games still use a large portion of C/C++.
Also, I don't agree with the statement that when the X-Box comes out, more hardware will have 'DirectX 8 support'. For the second time, I must stress that DirectX 8-supporting hardware IS DIFFERENT from DirectX 8-compliant hardware. If a DirectX 8 driver for the Voodoo5 comes out, will it automatically give the Voodoo5 the same features as the NV20? No. The Radeon2 (which should support DirectX 8) doesn't have a fixed release date yet. I don't think there will be a lot of DirectX 8-compliant hardware out there when the X-Box comes out.
Anyway, I sure think that an emulator for the X-Box will be easier to write than one for the PS2/Gamecube.
To save in ePSXe, make sure the game is forced to run in PAL mode; then you can save. Failing that, press F4 when you are about to save. It's much easier, though, to just force the game into PAL mode.
*Any* console will produce better effects than the PC at the moment. Remember the PSX? It *far* outperformed the PC on release, but for the past 4+ years it's been lagging *way* behind. Happens with every console.
Probably in a few years' time, the X-Box2 will already be out, using an NV40 and a Pentium 6, and will be further ahead of whatever the PC technology of that time has to offer.
It sure will be worth watching whether a decent X-Box emulator comes out within a few years or not.
If I had a PC that was optimized for gaming, I'm sure that I could get results not much lower than the X-box's "stats". The X-box and its routines are optimized for gaming, so yeah, there probably would be a difference, but nothing as large as you're supposing. Also, as Ficedula has been saying, we won't know the *real* stats/performance of Gamecube/X-box until they're released. Until then, the hype is all fluff--nothing more.
"I've see that you agree with my point of an X-Box game have to be massively rewritten for porting to PC. But I don't think that porting PS2/Gamecube games MUST be harder than porting X-Box games. For example the Summoner Demo for PC. Still looking very good for me here. After all, PS2 games still use large potion of C/c++ language."
Summoner was designed from the ground up for *both* the PS2 and the PC. IIRC, same deal with Oni.
"Also I don't agree with the statement that say when X-Box come out, more hardware will have 'DirectX-8 support". For the second time, I must stress that DirectX8-supporting hardware IS DIFFERENT with DirectX8-compliant hardware. If a DirectX8 driver for Voodoo5 have come out, will it automatically made Voodoo5 to have the same features as NV20? No. Radeon2 ( which should support Directx 8 ), don't have a fixed release date yet. I don't think that there will be a lot of DirectX-compliant hardware out there when the X-Box come out."
So? The X-box isn't supposed to be released until fall sometime. More than enough time for PCs. How often do new versions of DirectX come out? Pretty often. I wouldn't be surprised if DirectX 9 is being used on PCs when the X-box comes. New graphics chips come out roughly every 6 months. We're due for some new ones that are a step above anytime now. When new stuff comes out, the price of the now "obsolete" cards (i.e. Radeon) comes down to reasonable levels.
"Yes, in a few years time, PC will have better graphic effects than of X-Box. So what? In few years time, the X-Box will drop in price ( as the case with PS ) and having it is much more viable as you can play with X-Box games ASAP and don't have to wait for any porting process to happen!"
I think by fall, when 1GHz systems are approaching the mid-range standard (as in, the slowest machines stores will sell), someone will be hard at work on an X-box emulator as soon as it is released. I mean, what's hard to emulate? The P4 has a 400MHz bus, compared to the X-box's 200. I'm sure the new Athlon machines will have something similar to the P4's so that they can compete effectively. Same with other features. RAM? Not a problem. Sure, it operates differently, but nothing that can't be emulated. Hard drive? 8GB is pretty small these days. Etc., etc.
Ficedula: ..."except in a few years time, when the PC produces better effects than the XBox."
I'm guessing it'll actually only be 1 1/2 years or less before the PC is once again firmly in the lead. Why? Well, the Athlon 1.7GHz should be out sometime soon. From there, it's just a matter of Moore's Law.
[This message has been edited by Srethron Askvelhtnod (edited March 04, 2001).]
There's a 90% chance X-Box software houses will see the profitability in the easier porting for a PC release.
On the other hand, the guys who are making that Abe's Oddysee thing did convert it to X-Box inside a week....
"So? The X-box isn't supposed to be released until fall sometime. More than enough time for PCs. How often do new versions of DirectX come out? Pretty often. I wouldn't be surprised if DirectX 9 is being used on PCs when the X-box comes. New graphics chips come out roughly every 6 months. We're due for some new ones that are a step above anytime now. When new stuff comes out, the price of the now 'obsolete' cards (i.e. Radeon) comes down to reasonable levels."
Define pretty often. In the past, a new release of DirectX came out roughly once a year. But for now, there's still no news about DirectX 9. I've been beta-testing DirectX for Microsoft since the DirectX 6 days, and up to today there's still no inside news of DirectX 9. I don't think it will come out this year at least, because there's been no news of it yet (as opposed to the last time).
If the X-Box truly comes out this US fall, the PC most likely won't have time to catch up. A new graphics card every 6 months? How long did the NV20 take to come out? More than one full year since the NV15 was released. The Radeon2 still won't match NV25 speed even if ATI releases it in time (which should be now). Note the word if. The graphics chip market seems to have been slowing down. No competition. And I don't see NV20 prices coming down before the X-Box launch, at least.
"I think by fall, when 1GHz systems are approaching the mid-range standard (as in, the slowest machines stores will sell), someone will be hard at work on an X-box emulator as soon as it is released. I mean, what's hard to emulate? The P4 has a 400MHz bus, compared to the X-box's 200. I'm sure the new Athlon machines will have something similar to the P4's so that they can compete effectively. Same with other features. RAM? Not a problem. Sure, it operates differently, but nothing that can't be emulated. Hard drive? 8GB is pretty small these days. Etc., etc."
I do want to see how those emulator programmers will try to emulate the way the X-Box handles data streaming, Dolby Digital AC-3 processing and its large-texture handling routines on a PC; those 3 are just examples.
The P4 may have a 400MHz FSB, but its PCI and AGP buses still operate at 33MHz and 66MHz respectively, a hindrance for some of the X-Box features to be emulated effectively on a PC. AMD's P4 contenders, the Palomino, Thoroughbred and Claw/SledgeHammer CPUs, won't come out before 2002, as AMD doesn't regard the P4 as a threat to them.
Can you lot shut the hell up already!!!!
The X-Box is gonna be an M$ console, and a PC by then will be so much faster - like 3rd-generation Athlons at 1.9GHz, or 3rd-generation Pentium 4s at 2GHz. Not enough for you?
Yes, consoles are high-bus-bandwidth machines, hence the RDRAM in the PS2 and N64, but that's only 400MHz of memory bus speed, where DDR is reaching 2100MHz!!!
But that's not in a console's price range... yet!
"There's a 90% chance X-Box software houses will see the profitability in the easier porting for a PC release.
On the other hand, the guys who are making that Abe's Oddysee thing did convert it to X-Box inside a week...."
How will a PC be so much faster when the X-Box comes out? There will be no peer for the NV25 on the PC at that time. Clock speed isn't everything. A 1GHz P3 doesn't give 2x the performance of a 500MHz P3 in real-world applications.
Consoles like the X-Box and PS2 are high-bandwidth machines, and that's what sets them apart from PCs, which are low-bandwidth machines. DDR at 2100MHz? Where do you get that figure? RDRAM is not 400MHz, but a lot higher than that.
The Intel and AMD roadmaps are a good indication of what's in store from them when the X-Box comes out.
Both AMD and Intel will have 2GHz CPUs by the end of this year, but I seriously doubt those 2GHz machines can match the X-Box in performance when all you have is the NV20 as the best VGA card you can get. Another thing you have to remember is that a 2GHz CPU + NV20 combo will be VERY EXPENSIVE, so anyone is better off buying an X-Box anyway.
a) 2GHz + GeForce 2 is still cheaper than 2GHz + GeForce 2 + X-Box. You *need* the PC anyway, so while graphics cards and such can be upgraded at a not-so-high cost, the X-Box comes *in addition* to your PC equipment. The X-Box is great for people who don't need a PC, but we don't belong to those.
b) Multiple processors. 2GHz processors will be expensive, but why buy that when you can get two 1GHz processors on the same board for a much smaller cost? More and more games support it, and there are no drawbacks in applications.
In some years we won't be running 10GHz processors; we will be emulating Windows on many small RISC processors. The Pentium platform (CISC) is a step in the wrong direction; RISC processors (PowerPC ones, for instance) rule...
"a) 2GHz + GeForce 2 is still cheaper than 2GHz + GeForce 2 + X-Box. You *need* the PC anyway, so while graphics cards and such can be upgraded at a not-so-high cost, the X-Box comes *in addition* to your PC equipment. The X-Box is great for people who don't need a PC, but we don't belong to those."
Why do you need a 2GHz PC when you already have an X-Box? My computer is perfectly fine for the moment, and I'll only upgrade when I can't play games on it anymore. For just Internet surfing or typing, even my computer is overkill. Chances are, if I were to buy an X-Box, I wouldn't buy one of those overpriced 2GHz systems, at least not before their price dropped. And if I must buy a 2GHz computer just to use an X-Box emulator, I'd rather buy the console itself, which is surely cheaper.
"b) Multiple processors. 2GHz processors will be expensive, but why buy that when you can get two 1GHz processors on the same board for a much smaller cost? More and more games support it, and there are no drawbacks in applications."
Now, which 1GHz processors do you want to buy to make an SMP system? AMD? Intel? While Duron/T-Bird CPUs technically support SMP, where's the chipset? An Intel 1GHz CPU? Do you think it supports SMP? Depending on the stepping and batch, some support SMP and others don't, or maybe none of them support SMP at all. You'll have to check the Intel database to know whether your CPU supports SMP or not.
How many games support SMP? Too few; I can count all the SMP-enabled games on my hands. And a 2GHz system will be faster than a system with two 1GHz CPUs in SMP mode. Point of diminishing returns, anyone?
"In some years we won't be running 10GHz processors; we will be emulating Windows on many small RISC processors. The Pentium platform (CISC) is a step in the wrong direction; RISC processors (PowerPC ones, for instance) rule..."
Duh! Who says a RISC CPU is superior to a CISC CPU? That was proven wrong a long time ago. This is a good read for you. Click here
Chill, man.
What I mean is that nVidia WILL release the GeForce4 by the middle of next year, and by then the X-Box will be mainstream.
Why? 'Cos ATI are chasing them down the throat.
BUT it is still cheaper to own an X-Box than a PC.
But what the hell can you do on an X-Box that you can't do on a PC?
Surf, email, games, etc.
The Dreamcast with Windows CE can already be used to surf the Internet, write e-mail etc. So it's not too ambitious to say the X-Box could do it too, even if I don't remember Microsoft saying that.
[This message has been edited by cHiBiMaRuKo (edited March 05, 2001).]
Point taken. What about DirectX 8.1?
"If it truly X-Box will come out this US fall, PC is highly likely won't have time to catch-up. New graphic card every 6 months? How long NV20 takes to come out? More than 1 full year since NV15 released. Radeon2 will still don't match NV25 speed even if ATI released them in time ( which is hould be now ). Note the word if. The graphic chip market now seems to have beeen slowing down it seems. No competition. And I don't see that NV20 prices will come down before X-Box launch at least."
Release dates:
GeForce 256: September 1999
GeForce 2: Spring 2000
GeForce 2 Ultra: Late September 2000
NV20/GeForce 3: February 27, 2001
NVIDIA Introduces GeForce3 for the PC
Industry's Most Advanced Processor Ever is the Core of Both DirectX 8.0 and Xbox Game Console
INTEL DEVELOPER FORUM - SAN JOSE, CA - February 27, 2001 - Just 5 days after the original announcement for the Macintosh platform, NVIDIA Corporation (NASDAQ: NVDA) announced today the GeForce3™ GPU (graphics processing unit) for the PC. The GeForce3 GPU is based on a radical new graphics architecture that will offer breakthrough graphics to multiple markets, including the desktop PC and the Macintosh® platforms. This groundbreaking GPU includes the same core technology as NVIDIA's highly anticipated Xbox GPU (XGPU) for Microsoft's Xbox™ game console.
Like I said, every 6 months. The earliest the X-box could possibly be out is 6-7 months from now.
"I do want to see how those emulator programmers try to emulate the way X-Box handles data streaming, Dolby Digital AC-3 processing and large texture handling routine in PC; that 3 are just for examples."
I'm no programmer. Explain briefly what the problems are and I'll try to answer.
"Remember that X-bOX DDR-RAM is 128-bit not 64 bit as in your article. It's so much faster than DDR-RAM for PC. Next GeForce4 is ready by the time X-Box launch? With no competition, will you think nVidia will do that? That move will hurt themselves. Probably GeForce3 MX is more viable product that will come from nVidia at that time."
Where'd you get that from? My understanding is that:
1. The X-box only has 64MB RAM
2. It's Unified RAM, not DDR
"Both AMD ands Intel will have 2Ghz CPU by the end of this year, but seriously I don't think if those 2Ghz machines can match X-Box in terms of performance when all you have is NV20 as the best VGA card you can get. Another thing you have to remember is that a 2Ghz CPU+NV20 combo will be VERY EXPENSIVE, so anyone is better off buying an X-Box anyway."
Even if the market does slow a bit, nVidia has 10 months till the end of the year. More than enough time to develop and release a post-NV20 card. If they don't, someone else will. Otherwise, PC gamers in the US (a very hardcore bunch) will kill.
I don't see how a P3 733MHz with an NV20 based card will be able to match a 1-2 GHz processor with a second or third generation NV20 card in terms of FPS. I'll be happy to compare framerates with you when the time comes, provided I have one of those PCs.
"Why do you need a 2Ghz PC when you already have a X-Box? My computer is perfectly fine for the moment, and I'll only upgrade when I can't play games with this anymore. If only for Internet surfing or typing, even my computer is an overkill. Chances are, if I were to buy an X-Box, I won't buy those overpriced 2Ghz, at least before their price dropped. If I must buy a 2Ghz computer just to use a X-Box emulator, I'll rather buy the console itself, which is surely cheaper."
Lots of reasons:
1. A 733MHz P3, no matter *how* optimized, will never be as fast as a 1466+ MHz processor.
2. Console and PC Games are in most cases very different. I'd prefer it stayed that way. If I ever want to play console, I go to a friend's or get the emulator.
I still don't think a 2GHz processor will be necessary for X-Box emulation. Probably 1GHz with a GeForce 3 will be enough.
[This message has been edited by Srethron Askvelhtnod (edited March 05, 2001).]
"Point taken. What about DirectX 8.1?"
It wasn't a major release. Even the folks on the Microsoft beta newsgroups were caught off-guard by its release.
"Release dates:
GeForce 256: September 1999
GeForce 2: Spring 2000
GeForce 2 Ultra: Late September 2000
NV20/GeForce 3: February 27, 2001"
The GeForce2 NV15 was released in March 2000, and its successor, the NV20, came out this February. Clearly they've missed their 6-month release cycle. I don't consider the GF2 Ultra a new card, because it uses the same chip as the GF2 GTS.
"I do want to see how those emulator programmers will try to emulate the way the X-Box handles data streaming, Dolby Digital AC-3 processing and its large-texture handling routines on a PC; those 3 are just examples."
"I'm no programmer. Explain briefly what the problems are and I'll try to answer."
How will the programmers of any X-Box emulator simulate the way the X-Box transfers high-res textures over high-speed data buses (we're talking about gigabytes of data per second) through an AGP 4x port which doesn't even reach 1 gigabyte per second in bandwidth?
How will the programmers emulate Dolby Digital AC-3 processing (which could take a lot of CPU time) on today's soundcards, and also dodge the possible legal issues from Dolby Laboratories?
Those 2 are only examples. More issues, like the DVD data transfer mechanism etc., must also be taken into account.
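To put rough numbers on the texture problem (6.4GB/s or so for the X-Box memory bus, AGP 4x not even reaching 1GB/s, and an invented 32MB per-frame texture load), here's a trivial C calculation:

    #include <stdio.h>

    int main(void) {
        const double textures_gb = 32.0 / 1024.0;  /* hypothetical 32MB of textures */
        const double xbox_gbs    = 6.4;            /* X-Box bus figure quoted in this thread */
        const double agp4x_gbs   = 1.0;            /* AGP 4x, roughly, per this thread */

        printf("Streaming 32MB of textures:\n");
        printf("  X-Box-style bus: %.1f ms\n", textures_gb / xbox_gbs  * 1000.0);
        printf("  AGP 4x:          %.1f ms\n", textures_gb / agp4x_gbs * 1000.0);
        return 0;
    }

At 60fps a frame is only about 16.7ms, so the AGP figure alone would eat double a frame's budget.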
"Where'd you get that from? My understanding is that:
1. The X-box only has 64MB RAM
2. It's Unified RAM, not DDR"
Which RAM do you think the X-Box will use? The Dreamcast uses SDRAM; the PS2 uses RDRAM. As far as I know, Micron will supply the DDR for Microsoft. The "unified RAM" term means that all 64MB of RAM is used by all parts of the console, i.e. textures, sound buffer, game data, etc. But the X-Box itself uses 128-bit DDR-RAM of the kind commonly used only in video cards.
"Even if the market does slow a bit, nVidia has 10 months till the end of the year. More than enough time to develop and release a post-NV20 card. If they don't, someone else will. Otherwise, PC gamers in the US (a very hardcore bunch) will kill.
I don't see how a P3 733MHz with an NV20 based card will be able to match a 1-2 GHz processor with a second or third generation NV20 card in terms of FPS. I'll be happy to compare framerates with you when the time comes, provided I have one of those PCs."
I believe there will be no new nVidia video chip, judging by the state of their current competitors. Unless ATI or Matrox or others like NEC can come out with worthy competition to the NV15, there's no reason for nVidia to continue their policy of one product release every 6 months, as that policy is only effective when there's competition. Furthermore, nVidia has already been making moves to enter the soundcard market, which will keep them busier.
The hardcore bunch make up only a very tiny market for nVidia products, so it's highly unlikely they could make nVidia suffer. It's those OEM deals from Dell, Compaq and others that nVidia is interested in, and those companies don't give a damn whether nVidia releases a new chipset every 6 months or every year, as they themselves are resellers. I'm confident that if ATI and Matrox can field worthy competitors to the NV20, nVidia will resume the 6-month policy again. That's why competition is good.
"Lots of reasons:
1. A 733MHz P3, no matter *how* optimized, will never be as fast as a 1466+ MHz processor.
2. Console and PC Games are in most cases very different. I'd prefer it stayed that way. If I ever want to play console, I go to a friend's or get the emulator.
I still don't think a 2GHz processor will be necessary for X-Box emulation. Probably 1GHz with a GeForce 3 will be enough."
Why do you need 1433MHz when 733MHz (667MHz actually; Microsoft has slowed the X-Box down) will get the job done? The X-Box is fast NOT only because of CPU processing power, but also because of its kick-ass video processor, operating with far more bandwidth than the NV20 can even dream of, and because the whole system operates on data buses much faster (200MHz) than the limitations of the PCI (33MHz) and AGP (66MHz) buses inside PCs. Not only the CPU but the whole console is optimized for gaming (and other things?): the sound processor, the graphics, etc.
I don't know if a 1GHz machine with an NV20 can emulate the X-Box, but AC-3 alone will already take a lot of CPU time.
[This message has been edited by cHiBiMaRuKo (edited March 06, 2001).]
The main graphics chipset for the X-Box is jointly developed by nVidia and Microsoft. What's stopping nVidia from doing the same thing for the PC?
Go to nvidia.com and read up; the GeForce 3 is supposed to surpass or equal the X-Box in every way. If you're talking about the RAM it uses, the GeForce 3 will not be using DDR RAM or RDRAM; it will be using something new.
The fact remains most people already *have* a PC, so I doubt the XBox would be cheaper than upgrading. Say I wanted to fully upgrade my PC to a 1GHz+ monster. What'd it cost?
New M/Board. < 100UKP
New CPU 100UKP
New GFX card. 100UKP
Oh, that's it.
Monitor? Got a nice one already. Case? That too. Hard disk? UDMA66 already, good enough for me. Sound card? Live! CD? DVD drive already. RAM? 192MB of PC100 is plenty good enough, but I could upgrade it to 133 pretty cheaply.
The XBox, for anyone with a half-decent PC already, is more expensive than upgrading and won't give you much better performance. I'm not planning on buying a PS2, but at least it's not more than majorly upgrading my PC, and the price *will* fall.
"The main graphics chipset for the X-Box is jointly developed by nVidia and Microsoft. What's stopping nVidia from doing the same thing for the PC?"
Without any competition, nVidia won't make the technology available to PC users. Do you think nVidia is mad enough to release a new chip (let's say the NV25) that would only compete with their own product, the NV20? It's just business strategy from nVidia: if ATI or others can't release a worthy product to compete with the NV20, nVidia likely won't release a new chip, because then they'd find themselves competing with their own product, which is bad in a business sense. nVidia can do it, of course, but the timing of the release must also be taken into account.
"Go to nvidia.com and read up; the GeForce 3 is supposed to surpass or equal the X-Box in every way. If you're talking about the RAM it uses, the GeForce 3 will not be using DDR RAM or RDRAM; it will be using something new."
The GeForce 3 will use DDR-SDRAM, just like the GeForce2 Ultra (4ns 128-bit DDR-SDRAM). It's just that the NV20 will have a much better memory controller than what the NV15 offered, and thus make more efficient use of the available bandwidth.
XBox has 64MB of RAM for *everything*? Jesus, that's crap. The PC doesn't *need* an incredibly fast data bus because it has more RAM, generally speaking. With only 64MB RAM I can see the XBox will be frantically swapping data in and out of memory to make room for textures and sound effects .... PC won't have that problem, once it's loaded into RAM once, that's it. Same with the PS2; it has an overfast data bus to compensate for the fact that it has (compared to a modern PC) little RAM to store textures etc in.
The Micron DDR-SDRAM for the X-Box has a total of 6.4GB per second of bandwidth, so I'm sure that flushing textures in and out is easy.
"The fact remains most people already *have* a PC, so I doubt the XBox would be cheaper than upgrading. Say I wanted to fully upgrade my PC to a 1GHz+ monster. What'd it cost?
New M/Board. < 100UKP
New CPU 100UKP
New GFX card. 100UKP
Oh, that's it."
You can get a GeForce3 (which now has an MSRP of 600 pounds sterling) for 100 pounds sterling at the end of the year? Or did you mean to buy only a GF2 MX? Don't think the NV20's price will drop that fast, unless ATI and others give nVidia very intense heat.
"The XBox, for anyone with a half-decent PC already, is more expensive than upgrading and won't give you much better performance. I'm not planning on buying a PS2, but at least it's not more than majorly upgrading my PC, and the price *will* fall."
Did Microsoft release the price for the X-Box already? No, I don't think so. So how do you know that the X-Box will cost more than 300 pounds sterling? I'll hold my opinion on the X-Box price until Microsoft announces it; only then will I say whether upgrading a PC is cheaper than buying an X-Box.
As for memory bandwidth ... yes, that's the whole point. It *has* to have fast bandwidth 'cos it has a truly pitiful amount of RAM compared to the PC. The fast bandwidth lets it move textures etc in and out very fast ... on the PC, you wouldn't *need* to move them at all, since you could store them all in RAM permanently.
Add to that the fact that specs haven't even been finalised yet - so games can't be relied on to come out at a particularly fast rate - and the XBox does not look like a good bet. I walked into a high street shop a few weeks ago, and they had PS2's for sale over the counter, and a pretty respectable range of titles for it. Now, MS aren't releasing the XBox in Europe to start off with, so even if they DO make a release date by the end of the year (I suppose it *could* happen), it'll never make it to the UK before mid-late 2002 - so it'll be 2003 before you can buy it over-the-counter. By Moore's Law, in 2003 the average PC will be running well over 1GHz, and a top-end model will be at least 2GHz. Hmmm. Not much competition, I feel.
Oh yes: specs. Microsoft is well known for sticking to "predicted" specs, isn't it? Remember the required specs for Win95? A 386 with 4MB RAM. Yes, MS, I think that'll work. All the graphics we've seen so far are based on "predictions" of what the XBox can produce. Nobody actually knows what it'll do in reality.
[This message has been edited by ficedula (edited March 06, 2001).]
"Duh! Who says a RISC CPU is superior to a CISC CPU? That was proven wrong a long time ago. This is a good read for you. Click here"
I've only skimmed the article, but from what I see, it's all about how bad CISC CPUs are. Perhaps you got the terms mixed up? Remember, Intel = CISC!
For those who aren't sure about the difference: RISC processors complete an instruction very fast (maybe in one clock cycle) but only have very simple instructions.
CISC processors can take a while to do an instruction - up to 10 clock cycles or so - but have lots of instructions which can do slightly more complex things.
So if you have a 500MHz CPU and it's CISC, it might only be executing 50M instructions per second, if those instructions all took 10 cycles. Of course, other things like waiting for disk access/other devices to do things slow it down further.
In contrast, a RISC processor could execute 50M instructions per second with a clock rate of only 100MHz, or maybe 200MHz - because each instruction executes so quickly.
You might say "So what? Surely it takes more instructions to do everything with RISC because each instruction only does one simple thing". Yes - in theory. In practice, nobody codes in assembler, so no program ever uses the full potential of the hundreds of different commands on a CISC CPU. Compilers aren't always 100% correct about the absolute most efficient CPU instructions to generate.
Whereas with a RISC, because there aren't so many instructions, it's quite easy for compilers to generate code which does use the most efficient instructions - there aren't many possibilities to choose from!
So on modern PCs, RISC generally outperforms CISC unless you handcode everything in assembler, and no one does that. It takes too long.
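If you want to play with the arithmetic, here's a trivial C toy (the clock rates and cycle counts are just my examples above, not real CPU data):

    #include <stdio.h>

    /* Effective throughput = clock rate / average cycles per instruction. */
    static double mips(double clock_mhz, double cycles_per_instr) {
        return clock_mhz / cycles_per_instr;   /* millions of instructions/sec */
    }

    int main(void) {
        printf("CISC, 500MHz, 10 cycles/instr: %.0fM instructions/sec\n", mips(500.0, 10.0));
        printf("RISC, 100MHz, 1 cycle/instr:   %.0fM instructions/sec\n", mips(100.0, 1.0));
        return 0;
    }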
"I've only skimmed the article, but from what I see, it's all about how bad CISC CPUs are. Perhaps you got the terms mixed up? Remember, Intel = CISC!"
Here's what the article says.
RISC vs. CISC, get over it
This seems to have touched a nerve with a number of people, since I called the whole CISC vs. RISC argument bullshit. It is. RISC is just a simpler way of designing a processor, but you pay a price in other ways. By placing the constraint that the instruction set of a processor be fixed width, i.e. all instructions be 2 bytes, or 4 bytes, or 16 bytes in size (as with the Itanium), it allows the engineers to design a simpler decoder and to decode multiple instructions per clock cycle. But it also means that the typical RISC instruction wastes bits, because even the simplest operation now requires, in the case of the PowerPC, 4 bytes. This in turn causes the code on the RISC processor to be larger than code on a CISC processor. Larger code means that for the same size code cache, the RISC processor will achieve a lower hit rate and pay the penalty of more memory accesses. Or alternately, the RISC processor requires a larger code cache, which means more transistors, and this merely shifts transistors from one area of the chip to another.
The people who declared x86 and CISC processors dead 10 years ago were dead wrong. CISC processors merely used the same kind of tricks as RISC processors - larger cache, multiple decoders, out-of-order execution, more registers (via register renaming), etc. In some cases, such as during task switching, when the entire register set of the processor needs to be written to memory and then a different register set read in, the larger number of "visible" registers causes more memory traffic. This in turn puts a load on the data cache, so as with the code cache, you either make it larger and use more transistors, or you pay a slight penalty.
My point is, these idiots from 10 years ago were wrong that RISC is somehow clearly superior to CISC and that CISC would die off. It's merely shifting transistors from one part of the chip to another. On the PowerPC, all instructions are 32 bits (4 bytes) long. Even a simple register move, an addition of 2 registers, a function return, pushing a value to the stack - all of these operations require 4 bytes each. Saving the 32 integer registers alone requires 128 bytes of code, 4 bytes per instruction times 32 instructions. Another 128 bytes to reload them. Ditto for the floating point registers. So who cares that it simplifies the decoder and removes a few transistors there. It causes more memory traffic and requires more transistors in the cache.
And the decoding problem is not that big of a problem, for two reasons. I'll use the example of the 68040, the PowerPC, and the x86. A PowerPC chip can decode multiple instructions at once since it knows that each instruction is 4 bytes long. A 68040 processor has instructions that are a minimum of 2 bytes long and can go up to 16 bytes in size (I think, I can't think of an example off the top of my head that's longer than 16). Let's say 16. The necessary bits required to uniquely decode the instruction are usually found in the first 2 bytes of the instruction, 4 bytes for floating point. That's all the decoder needs to figure out what this instruction is. It needs to decode the additional bytes only in cases of complex addressing modes. One area where Motorola screwed up (and it likely decided the fate of the 68K) is that they truly made a complex instruction set that requires decoding of almost every byte.
In the case of x86, Intel either lucked out or thought ahead and made sure that all the necessary bits to decode the instruction are as close to the beginning of the instruction as possible. In fact, you can usually decode an x86 instruction based on at most the first 3 bytes. The remaining bytes are constant numbers and addresses (which are also constant). You don't need to decode, say, the full 15 bytes of an instruction, when the last 10 bytes are data that gets passed on down into the core. So as one reader pointed out in email, Intel stuck with the old 8-bit processor techniques (such as the 6502) where you place all your instruction bytes first, then your data bytes. In the case of the 6502, only the first byte needed decoding. Any additional bytes in the instruction were 8-bit or 16-bit numeric constants.
So decoding x86 is quite trivial, almost as easy as decoding RISC instructions. AMD seems to have figured out how to do it. Intel almost had it figured out in the P6 family (with only the 4-1-1 rule to hold them back), and then for the Pentium 4 they just decided to cut features and gave up on the whole decoding thing. That's Mistake #3 on my list of course, but this in no way demonstrates how superior fixed instruction sizes are over variable sized instructions.
Over the years, CISC and RISC have kept pace with each other. Sure, one technology may leap ahead a bit, then the other catches up a few months later. Neither technology has taken a huge lead over the other, since the decision whether to use fixed or variable sized instructions and whether to have 8, 16, 32 or 64 registers in the chip are just two factors in the overall design of the processor. Much of the rest of the design between RISC and CISC chips is very similar. And over time, ideas get borrowed both ways.
True. I haven't read the article, but from what I've learnt, at fast clock speeds RISC is far better than CISC.
"For those who aren't sure about the difference: RISC processors complete an instruction very fast (maybe in one clock cycle) but only have very simple instructions.
CISC processors can take a while to do an instruction - up to 10 clock cycles or so - but have lots of instructions which can do slightly more complex things.
So if you have a 500MHz CPU and it's CISC, it might only be executing 50M instructions per second, if those instructions all took 10 cycles. Of course, other things like waiting for disk access/other devices to do things slow it down further.
In contrast, a RISC processor could execute 50M instructions per second with a clock rate of only 100MHz, or maybe 200MHz - because each instruction executes so quickly.
You might say 'So what? Surely it takes more instructions to do everything with RISC because each instruction only does one simple thing'. Yes - in theory. In practice, nobody codes in assembler, so no program ever uses the full potential of the hundreds of different commands on a CISC CPU. Compilers aren't always 100% correct about the absolute most efficient CPU instructions to generate.
Whereas with a RISC, because there aren't so many instructions, it's quite easy for compilers to generate code which does use the most efficient instructions - there aren't many possibilities to choose from!
So on modern PCs, RISC generally outperforms CISC unless you handcode everything in assembler, and no one does that. It takes too long."
Obviously both of you need to read the article more thoroughly. It's quite long and a good offline read.
No, MS haven't declared the price of the XBox. They haven't declared release dates, either (and before you say the end of this year or something, Gates was quoted as saying he wouldn't release it until he got 3x the graphics performance of his rivals). All the estimates say that at $300 MS would be selling at a big loss. While they probably will sell at a loss, there is a limit. So more than $300 looks likely, and European prices always go through the roof compared to the US.
Even at US$300, Sony is selling the PS2 at a loss too (covered by licensing fees). So I think Microsoft will do the same thing.
"Add to that the fact that specs haven't even been finalised yet - so games can't be relied on to come out at a particularly fast rate - and the XBox does not look like a good bet. I walked into a high street shop a few weeks ago, and they had PS2's for sale over the counter, and a pretty respectable range of titles for it. Now, MS aren't releasing the XBox in Europe to start off with, so even if they DO make a release date by the end of the year (I suppose it *could* happen), it'll never make it to the UK before mid-late 2002 - so it'll be 2003 before you can buy it over-the-counter. By Moore's Law, in 2003 the average PC will be running well over 1GHz, and a top-end model will be at least 2GHz. Hmmm. Not much competition, I feel.
Oh yes: specs. Microsoft is well known for sticking to 'predicted' specs, isn't it? Remember the required specs for Win95? A 386 with 4MB RAM. Yes, MS, I think that'll work. All the graphics we've seen so far are based on 'predictions' of what the XBox can produce. Nobody actually knows what it'll do in reality."
I think the X-Box spec is pretty much finalised already, given that nVidia is already producing the GPU and the MCPX chips for the X-Box. It's just that the specs aren't being publicised as much as I thought they would be. In general, the information available today should be enough to gauge how well the X-Box will perform.
Oh, guys, about PSX emulators:
Good news for the next release of ePSXe (due in about 1-2 weeks): it's gonna feature Dual Shock/force feedback support and, AND.....
Wait for it....
SAVE STATES/ FREEZE SAVES!!!!
So say goodbye to those sodding crappy PSX memory card problems!! And say hello to ZSNES-style saving!!
Oh man... save states. A nifty feature indeed. Dual Shock support? Will it support PS analogue controllers via DirectPad Pro?
What's even better for me is that now, finally, ePSXe 1.20 supports FF9 PAL!!!!!
Yeah Baby!!!
So now it's time for Qhimm to roll out his FF9 editor. Sweet!
As you might have realised, that article somehow failed to convince me on the RISC/CISC issue.
Oh, THAT was a well constructed argument! I mean, you can tell they spent hours and hours researching that, seeking out all the technical data, so they could make the informed and persuasive argument: "It's crap". I mean, that convinced me! Before, I always thought you needed *reasons* and *proof* when you were saying an argument was wrong, but no! All you have to do is say "that's crap" and immediately everybody has to be convinced! Well, that's going to work SO well in my university work: "I could give proof, but no; it's just crap." Obviously all those scientists who try to PROVE theories are wasting their time; all they need to do is swear at anybody who disagrees and that *makes* them correct!
I think the author already gives proof, backed by second- or third-level programming language examples. He doesn't just say that the "RISC is superior to CISC" theory is crap (even I learned that in one of my 8088 subjects at uni); he also gives proof of what he's saying. Why don't you send him an e-mail and try to convince him that he is wrong?
Also, I don't think there's anything wrong with passing variable-length instructions under the RISC architecture, though I have no real knowledge in this area. I just know that the general opinion (on Slashdot etc.) is that Intel is crap for the reasons Fic mentioned.
It is a bit like the X-Box vs. PSX2 discussion in a way, RISC being PSX2 and CISC being X-Box...
And at any rate, the clock speed race is wrong. What, no games support dual processors? Well, apart from that being wrong (Unreal Tournament supports it, enough for me...), once the world starts using them, games supporting them will be written!
Ever heard of the Transmeta processor? It emulates the 386 architecture on a RISC chip. It consumes very little power, and because of that it doesn't need a fan and so consumes even less power. It's of course not as effective, but still usable for everything but games. I believe that is where the world is heading... there's simply no point in going faster and faster with bigger and bigger fans when you can have many small ones running cold.
"This in turn causes the code on the RISC processor to be larger than code on a CISC processor. Larger code means that for the same size code cache, the RISC processor will achieve a lower hit rate and pay the penalty of more memory accesses. Or alternately, the RISC processor requires a larger code cache, which means more transistors, and this merely shifts transistors from one area of the chip to another."
Nope, got it wrong. Having every instruction exactly 4 bytes in length speeds up code, because you know that 32 bytes of code contain exactly 8 instructions, and when you're executing instruction #7, say, it's time to go and get the next block of code. It also means instructions never get split over cache line boundaries - for example, if your cache line is 32 bytes, with variable-length instructions, what happens when an instruction starts on the last byte and carries on elsewhere? As an example, on Win98 there's a code optimizer tool that pads executable code so instructions don't straddle 4K boundaries. THIS INCREASES THE SIZE OF THE FILE BUT IMPROVES SPEED OF EXECUTION.
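To illustrate what fixed-length instructions buy you, here's a toy sketch in C - nothing to do with any real decoder, and insn_length() is completely invented:

#include <stdint.h>
#include <stddef.h>

/* Fixed-width ISA: instruction n always starts at byte 4*n,
   so finding it is pure arithmetic - no decoding needed. */
uint32_t fetch_fixed(const uint8_t *code, size_t n)
{
    uint32_t insn = 0;
    for (size_t i = 0; i < 4; i++)   /* byte-wise copy avoids alignment trouble */
        insn |= (uint32_t)code[4 * n + i] << (8 * i);
    return insn;
}

/* Variable-width ISA: you can't know where instruction n starts
   until you've decoded instructions 0..n-1. */
static size_t insn_length(const uint8_t *p)
{
    return (size_t)(*p % 15) + 1;    /* pretend: 1..15 bytes */
}

const uint8_t *fetch_variable(const uint8_t *code, size_t n)
{
    const uint8_t *p = code;
    for (size_t i = 0; i < n; i++)
        p += insn_length(p);         /* sequential dependency */
    return p;
}

The first one is a single indexed load; the second has to walk the whole stream, which is exactly the serial decode problem an x86-style front end has to brute-force its way around.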
"The necessary bits required to uniquely decode the instruction are usually found in the first 2 bytes of the instruction, 4 bytes for floating point. That's all the decoder needs to figure what this instruction is. It needs to decode the additional bytes only in cases of complex addressing modes."
First of all he says they're "usually" found in the first two bytes. Then he says "that's all it needs". Then he changes his mind again and says that sometimes it DOES need the rest of the instruction! Either
a) You get all of the instruction anyway, leading to increased memory traffic - exactly what he was accusing RISC of - or
b) You don't, and when it turns out you DID need the rest of it, you're f***ed. (Well, actually it means another memory access - but you've just lost out speed-wise).
"...could give the appearance of executing 1 instruction per cycle. Any given instruction still takes multiple clock cycles to execute, but by overlapping several instructions at once at different stages of execution, you get the appearance of one instruction per cycle."
You do; on this he's quite right, provided you've got enough simultaneous fetch/decode/execute subunits. He also points out this fails if the CPU is wrong about which instruction is going to be executed next (in the case of a Jump (GOTO, if you like) instruction). So, everybody, which would we all prefer:
a) A CPU which appears to execute 1 instruction per cycle except when it jumps, when the next few instructions will take up to 10 cycles.
b) A CPU which appears to execute 1 instruction per cycle except when it jumps, when the next few instructions will take up to 3 cycles.
Hmmm .... I'd want (b). That's the RISC.
Incidentally, JUMP instructions are fairly common in code. I've done some assembler programming as part of my course, and in the example assembler I wrote for one of the exercises, 3 out of 17 instructions were conditional jumps. Lots of things generate jumps: calling a function, IF statements, loops, etc.
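Put numbers on that if you like - rough worst-case sums in C, using my 3-in-17 figure and the penalties from (a) and (b) above, and assuming (pessimistically) that every jump pays the full penalty:

#include <stdio.h>

int main(void)
{
    double branch_frac = 3.0 / 17.0;  /* my conditional-jump ratio */
    double penalty_a   = 10.0;        /* cycles lost per jump, option (a) */
    double penalty_b   = 3.0;         /* cycles lost per jump, option (b) */

    printf("avg CPI, option (a): %.2f\n", 1.0 + branch_frac * penalty_a);
    printf("avg CPI, option (b): %.2f\n", 1.0 + branch_frac * penalty_b);
    return 0;
}

That comes out at roughly 2.76 cycles per instruction for (a) against 1.53 for (b) - nearly a factor of two, from the jump penalty alone.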
Also, he mentions exactly *how* modern CISC processors gain their speed: multiple decoders, like you need for the above trick. What about the PIII? "Decoders 2 and 3 can decode only simple, RISC-like instructions that break down into 1 micro-op." OK: so it's fast because the extra decoders in it are small RISC processors... that's exactly the case here: modern CISCs *do* use a lot of RISC-type architecture to run as fast as possible. Does this suggest, perhaps, that RISC has speed advantages over CISC?
The guy who wrote that article seems to have a good grounding in CPU history, and a fairly good grasp of theory, but a less-than-100% idea of how the basic electronics in the CPU actually work. Given the choice between believing this guy, who seems to have a lot of x86 experience but not much else, or believing my CompSci lecturers - who tell us to go and check everything out in the hardware labs, which we do - I'll go with the lecturers.
quote from www.g256.com today
_________________________________________
MS to lose $2bn on Xbox
--------------------------------------------------------------------------------
Icenum Thursday March 8, 2001 3:04 AM Your Time
The Register has some darn interesting insight on MS and the XBox, good for consumers and bad for MS..Awwwww:
Merrill Lynch has come to the shock conclusion that Microsoft is going to lose a lot of money on Xbox. The console could drain up to $2 billion from the Beast of Redmond's coffers before break-even.
As the world+dog knows, consoles are sold at a loss in the early days, as manufacturers subsidise the low price points needed to drive sales of the machine.
Microsoft has yet to announce Xbox pricing, but most observers put it at around $250. Merrill Lynch reckons each console will cost around $375 to produce - simple maths tells you Microsoft is flushing $125 down the pan. The figure might be higher or lower depending on what Microsoft ultimately asks punters to pay. The Xbox price tag isn't expected to be much higher than the $299 Sony currently charges for the PlayStation 2.
_________________________________________
[This message has been edited by The SaiNt (edited March 08, 2001).]
Nope, got it wrong. Having every instruction exactly 4 bytes in length speeds up code, because you know that 32 bytes of code contain exactly 8 instructions, and when you're executing instruction #7, say, it's time to go and get the next block of code. It also means instructions never get split over cache line boundaries - for example, if your cache line is 32 bytes, with variable-length instructions, what happens when an instruction starts on the last byte and carries on elsewhere? As an example, on Win98 there's a code optimizer tool that pads executable code so instructions don't straddle 4K boundaries. THIS INCREASES THE SIZE OF THE FILE BUT IMPROVES SPEED OF EXECUTION.
Ahaha, good, but as you said yourself (and as the author wants to point out): THIS INCREASES THE SIZE OF THE FILE BUT IMPROVES SPEED OF EXECUTION. With a larger file, more memory is needed. Chances are high that the L1 and L2 caches won't be able to hold all the code needed for execution, so the cache miss rate will be higher. Cache misses cost CPU cycles, as I think you know, and the speed advantage RISC should have is gone anyway.
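To show what I mean, here's the standard cache formula - average access time = hit time + miss rate x miss penalty - with completely made-up numbers, in C:

#include <stdio.h>

int main(void)
{
    double hit_time     = 1.0;   /* cycles on an L1 hit (assumed) */
    double miss_penalty = 20.0;  /* cycles out to main memory (assumed) */
    double miss_dense   = 0.02;  /* denser (CISC-style) code */
    double miss_sparse  = 0.05;  /* bulkier (RISC-style) code */

    printf("dense code:  %.2f cycles/access\n", hit_time + miss_dense  * miss_penalty);
    printf("sparse code: %.2f cycles/access\n", hit_time + miss_sparse * miss_penalty);
    return 0;
}

With those (invented) miss rates, the bulkier code pays 2.00 cycles per access against 1.40 - the cache eats the speed advantage, as I said.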
"The necessary bits required to uniquely decode the instruction are usually found in the first 2 bytes of the instruction, 4 bytes for floating point. That's all the decoder needs to figure what this instruction is. It needs to decode the additional bytes only in cases of complex addressing modes."First of all he says they're "usually" found in the first two bytes. Then he says "that's all it needs". Then he changes mind again and says that sometimes it DOES need the rest of the instruction! Either
a) You get all the instruction, leading to increasing memory traffic, exactly what he was accusing the RISC of or
b) You don't, and when it turns out you DID need the rest of it, you're f***ed. (Well, actually it means another memory access - but you've just lost out speed-wise).
Didn't the author already say that it only needs to decode the whole instruction for complex addressing modes? He also says that only SOMETIMES does the 68040 chip have to do that. The fact that all the data needed for execution usually lies in the first 2 (or 4) bytes is quite right, especially in today's processors, barring those exceptions. If you get the whole instruction in the first 2 (or 4) bytes, I don't see why it would increase memory traffic, because it usually doesn't take as much physical RAM as RISC code would.
"...could give the appearance of executing 1 instruction per cycle. Any given instruction still takes multiple clock cycles to execute, but by overlapping several instructions at once at different stages of execution, you get the appearance of one instruction per cycle."You do; on this he's quite right, if you've got enough simultaneous fetch/decode/execute subunits. He also points out this fails if the CPU is wrong about which instruction is going to be executed next (in the case of a Jump (GOTO, if you like) command). So, everybody, would would we all prefer:
a) A CPU which appears to execute 1 instruction per cycle except when it jumps, when the next few instructions will take up to 10 cycles.
b) A CPU which appears to execute 1 instruction per cycle except when it jumps, when the next few instructions will take up to 3 cycles.
Hmmm .... I'd want (b). That's the RISC.
Incidentally, JUMP instructions are fairly common in code. I've done some assembler programming as part of my course, and in the example assembler I wrote for one of the exercises, 3 out of 17 instructions were conditional jumps. Lots of things generate jumps: calling a function, IF statements, loops, etc.
Also, he mentions exactly *how* modern CISC processors gain their speed: multiple decoders, like you need for the above trick. What about the PIII? "Decoders 2 and 3 can decode only simple, RISC-like instructions that break down into 1 micro-op." OK: so it's fast because the extra decoders in it are small RISC processors... that's exactly the case here: modern CISCs *do* use a lot of RISC-type architecture to run as fast as possible. Does this suggest, perhaps, that RISC has speed advantages over CISC?
No, of course it doesn't!
I think you miss the point of what the author wants to make here. He only wants to point out that the "RISC is here to stay and CISC will die" theory isn't true. Maybe RISC is fast, but CISC chips have already found their own ways of matching RISC speed and efficiency. After all, the author already pointed out that all engineers come from the same unis/colleges and learn from the same books, maybe the same ones you've learned from. I'm a computer engineering grad from UPM in Malaysia, and my lecturers say much the same thing to me. But in the IT world everything changes quickly, and not everything you learn in university today will still apply in the outside world in one or two years.
Aaaah!!
The posts in this topic are honestly getting longer and longer. But honestly speaking, I wonder if it's worth actually posting anything in reply to cHiBiMaRuKo, because he doesn't seem to read what we say.
quote from www.g256.com today
_________________________________________
MS to lose $2bn on Xbox
--------------------------------------------------------------------------------
Icenum Thursday March 8, 2001 3:04 AM Your Time
The Register has some darn interesting insight on MS and the XBox, good for consumers and bad for MS..Awwwww:
Merrill Lynch has come to the shock conclusion that Microsoft is going to lose a lot of money on Xbox. The console could drain up to $2 billion from the Beast of Redmond's coffers before break-even.
As the world+dog knows, consoles are sold at a loss in the early days, as manufacturers subsidise the low price points needed to drive sales of the machine.
Microsoft has yet to announce Xbox pricing, but most observers put it at around $250. Merrill Lynch reckons each console will cost around $375 to produce - simple maths tells you Microsoft is flushing $125 down the pan. The figure might be higher or lower depending on what Microsoft ultimately asks punters to pay. The Xbox price tag isn't expected to be much higher than the $299 Sony currently charges for the PlayStation 2.
_________________________________________
You obviously didn't get my point up there. Did I say that Microsoft will make money on X-Box sales? No! Microsoft will very likely adopt the same strategy employed by Sony (if you know how Sony operates in the console market). Do you think the PS2 really costs ONLY US$300 to make? Maybe I should make my point more clearly next time.
[This message has been edited by cHiBiMaRuKo (edited March 08, 2001).]
Ahaha, good, but as you said yourself (and as the author wants to point out): THIS INCREASES THE SIZE OF THE FILE BUT IMPROVES SPEED OF EXECUTION. With a larger file, more memory is needed. Chances are high that the L1 and L2 caches won't be able to hold all the code needed for execution, so the cache miss rate will be higher. Cache misses cost CPU cycles, as I think you know, and the speed advantage RISC should have is gone anyway.
Note that the author stated "THIS INCREASES THE SIZE OF THE FILE BUT IMPROVES SPEED OF EXECUTION". Execution describes the whole process, so it already includes factors such as the L1 and L2 caches. If he had said COMPUTATION instead of EXECUTION, the L1 and L2 factors would still have had to be considered on top.
Here's some info you guys should read before actually arguing any further. All the info below was taken from www.gamespot.com
What kind of CPU will it have?
Early rumors indicated that the X-Box would be powered by an AMD Athlon running at either 600 or 650MHz. However, in an eleventh-hour decision made by Microsoft, AMD was dropped in favor of Intel due to pricing and availability concerns. Microsoft has committed to an Intel Pentium III processor, but hasn't decided on a clock speed yet. At the very least, the X-Box will have a P3-600 at its heart.
What kinds of graphics chip will it have?
In another last minute decision, Microsoft dropped start-up GigaPixel in favor of Nvidia, which is a much more established graphics-chip manufacturer. The prototype unit showed by Microsoft during Gates' presentation was running an Nvidia NV15 chipset, but the final design will feature an even more powerful NV25.
How much system memory will the X-Box have?
The X-Box will have a unified memory architecture wherein the console will share 64 megabytes of DDR RAM for the video and main system bus. While this might seem a bit unorthodox, the X-Box's unified architecture will let developers fill nearly all 64MB with memory-hungry textures, eliminating the need for texture caching, which can tax the hard drive and system bus.
What about the OS?
The X-Box will run off a Windows 2000 variant in conjunction with DirectX 8. Microsoft's Seamus Blackley told GameSpot that the current Win2K kernel "is the size of a fly burp," and it will top out no larger than 500kb.
Will PC games be able to run on the X-Box?
Unfortunately, no. The X-Box is a standalone console and not a PC-in-a-box. As such, Microsoft will be setting up a licensing and quality-assurance business model like the ones currently in place at Sega, Nintendo, and Sony. However, because of the X-Box's x86 architecture and familiar OS and API, developers should be able to port over PC games to the X-Box (or vice versa) in a matter of weeks.
On the RISC and CISC topic, you might want to read this:-
RISC vs. CISC
I'm too lazy, or rather too busy, to type any more, so that's all for now.
[This message has been edited by The SaiNt (edited March 09, 2001).]
BTW, that link you posted is empty.
And yes, I know the author of that article I was commenting on isn't trying to say RISCs are useless. He's trying to say they hold no advantage over CISC, and I say that's wrong.
Note that the author stated "THIS INCREASES THE SIZE OF THE FILE BUT IMPROVES SPEED OF EXECUTION". Execution describes the whole process, so it already includes factors such as the L1 and L2 caches. If he had said COMPUTATION instead of EXECUTION, the L1 and L2 factors would still have had to be considered on top.
And then what's your point? You seem to be saying that execution and computation speed both depend on the L1 and L2 caches. Of course they do. And then?
I've read all the X-Box news; notes taken:
The CPU has been upgraded to 733MHz ( FiringSquad article ), but I've already heard it's been slowed down to 667MHz. There's a possibility that AMD chips could also be used ( AMD in X-Box ).
I've already said that the NV25 (not exactly the NV25, but a beefier version of the NV20) will be used in the X-Box. Afraid that textures will fill up the 64MB of DDR RAM and that it can't flush them out as fast as the X-Box needs (even with the high bandwidth it has)? The X-Box ships with an 8GB HDD right out of the box (not your normal PC HDD, mind you) for storing data about the actual games etc., just like a swap file ( HDD in X-Box ). I'd better make it clear that the HDD in the X-Box is faster than what you've seen in current PCs, particularly because it's not hindered by the 33MHz PCI bus speed.
Fill ALL 64MB with textures? Unless I've missed something, doesn't this mean there'd be no memory left for the main game - like, say, sound effects, the polygon data, the actual program itself...
If the 64MB of DDR RAM isn't enough and the X-Box can't flush the textures fast enough, there's always the HDD to hold the data you mentioned. That kind of data shouldn't take much bandwidth compared to the textures.
I doubt the textures will take all 64MB though, because the X-Box will render at 800x600 at most (as that's the maximum resolution of most current TVs). HDTV isn't popular as of yet, and by the time it is, the X-Box 2 will already be out (well, if the X-Box is successful).
And yes, I know the author of that article I was commenting on isn't trying to say RISCs are useless. He's trying to say they hold no advantage over CISC, and I say that's wrong.
Today, that argument surely is true. Name me one true RISC or CISC CPU out there now. There aren't any. RISC has its own weaknesses that hinder performance, and CISC has its own too. If RISC had a complete advantage over CISC, as you claim, CISC would be dead already.
As for swap files... exactly how is a 6-point-whatever GB/sec bandwidth helping you if you're constantly swapping to/from hard disk? A *ridiculously* fast hard disk could manage 100MB/sec in *theory*, and that would mean a half-second lock-up every time the XBox flushed large portions of RAM... plus no drive *ever* gets its maximum speed rating.
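Do the sums yourself - a few lines of C, with assumptions absurdly generous to the XBox:

#include <stdio.h>

int main(void)
{
    double mb       = 50.0;    /* hypothetical: flush 50MB of textures */
    double mb_per_s = 100.0;   /* a sustained rate no real drive manages */
    double secs     = mb / mb_per_s;

    printf("lock-up: %.2f s = %.0f dropped frames at 30fps\n",
           secs, secs * 30.0);
    return 0;
}

Half a second, fifteen dropped frames - and at a realistic 20-40MB/s it's more like one to two and a half seconds.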
Check your logic. I never said RISC had every advantage over CISC. I said it's wrong that they have *no* advantage.
So then it's safe to say that CISC has its own advantages over RISC too?
As for swap files... exactly how is a 6-point-whatever GB/sec bandwidth helping you if you're constantly swapping to/from hard disk? A *ridiculously* fast hard disk could manage 100MB/sec in *theory*, and that would mean a half-second lock-up every time the XBox flushed large portions of RAM... plus no drive *ever* gets its maximum speed rating.
1. Only data like the textures and the sound streams needs to change quickly most of the time. Other data, such as the game engine and the Windows 2000 kernel itself, doesn't change much over the course of a game. So the X-Box will probably keep that static data in the swap file, and only the textures/sound streams in memory.
2. As I've said before, X-Box games usually won't render at more than 800x600, even at release at the end of the year, because conventional TVs don't support resolutions higher than that. I believe 800x600x32bit with 2048x2048 textures won't even take half the RAM in the X-Box, so there's plenty of room for other processes/data etc. And if most developers use a lower resolution than that, which I expect from the early games anyway, the memory requirement would be even less.
3. I've also said that the HDD in the X-Box isn't tied to the PCI speed limitation of the PC, due to the architectural differences between PC and console. That PCI problem is what stops current (PC) HDDs from attaining their rated speeds. Maybe the HDD in the X-Box can reach its maximum speed more easily than a PC's can. At the least, it will be faster than what the PC has to offer.
Aaaah!!!
You still don't get it, do you?
When the author actually said
"THIS INCREASES THE SIZE OF THE FILE BUT IMPROVES SPEED OF EXECUTION"
the part "THIS INCREASES THE SIZE OF THE FILE" means just that: the file gets bigger.
The other part, "BUT IMPROVES SPEED OF EXECUTION", means the overall speed - which already includes factors such as the L1 and L2 caches - improves. In other words, the padded file is faster ON THE WHOLE.
If the second part had said "BUT IMPROVES SPEED OF COMPUTATION", it would only mean the raw processing got faster; the L1 and L2 cache factors would not have been included, so it would merely be *possible* that the whole thing was faster.
cHiBiMaRuKo, what is it with you and this bandwidth problem? Don't you see? With only 64MB of DDR RAM shared between the system and the video, the system would not be efficient at all. Firstly, you definitely cannot flush the main system's data onto the hard disk, or it will just make the whole system SUPER SLOW. The idea of unified memory is exactly what the whole AGP idea revolves around. The AGP bus has sufficient bandwidth to access the system RAM, so why doesn't it do that directly and alleviate the need to put RAM chips onto the video card itself? Because the RAM chips on the video card are usually faster and owned SOLELY by the video card. Let's assume the GeForce 3 will have 64MB of onboard RAM minimum - wouldn't that already mean an advantage over the X-Box?
Now let's look at the 2nd part, where you say swapping to the hard disk works fast because of the advantage it has on the system bus. Please note that PC hard drives nowadays have access to what is called UDMA100 technology. The only reason you would require such a transfer speed is if your hard drive spins at at least 7200rpm. Note also that the bus speed is not the limiting factor here: hard drives only manage to reach the 100MB/s transfer limit when they achieve a BURST speed. Therefore, even if you had a 2 million MB/s bus for the hard drives in the X-Box, it would not be of any use. Should you say Microsoft could easily put a 10000rpm hard drive into the X-Box, think about it: if Microsoft were to do that, how much do you think the X-Box would cost? If Microsoft were to do that and still make it cost less than $300, a lot of people would go out and buy it straight away - not for the console, but just to salvage the parts inside. Please also remember that a hard drive is a physical device that requires a motor, and it will never, ever match RAM. Swapping between RAM and the hard disk is a BAD IDEA, because RAM is at least 10 times faster than your hard disk. Swapping to the hard disk defeats the whole purpose of the fast processing speed, doesn't it?
I'm running out of time again so I'll talk about the textures tomorrow if I can find the time.
About texture differences between the PC and the PSX
And yes, hard disks aren't limited by bus speed. They're limited by the fact that they're a physical medium that requires movement to change data, unlike RAM, where data is changed purely electronically.
Aaaah!!!
You still don't get it, do you?
When the author actually said
"THIS INCREASES THE SIZE OF THE FILE BUT IMPROVES SPEED OF EXECUTION"
the part "THIS INCREASES THE SIZE OF THE FILE" means just that: the file gets bigger.
The other part, "BUT IMPROVES SPEED OF EXECUTION", means the overall speed - which already includes factors such as the L1 and L2 caches - improves. In other words, the padded file is faster ON THE WHOLE.
If the second part had said "BUT IMPROVES SPEED OF COMPUTATION", it would only mean the raw processing got faster; the L1 and L2 cache factors would not have been included, so it would merely be *possible* that the whole thing was faster.
That doesn't change the fact that RISC CPUs (post-G4 anyway - which have ridiculously large caches) don't have a better execution rate than CISC. The author (of the Pentium 4 article) pointed out that while RISC code execution is faster, the cache problem reduces its rate anyway.
cHiBiMaRuKo, what is it with you and this bandwidth problem? Don't you see? With only 64MB of DDR RAM shared between the system and the video, the system would not be efficient at all. Firstly, you definitely cannot flush the main system's data onto the hard disk, or it will just make the whole system SUPER SLOW. The idea of unified memory is exactly what the whole AGP idea revolves around. The AGP bus has sufficient bandwidth to access the system RAM, so why doesn't it do that directly and alleviate the need to put RAM chips onto the video card itself? Because the RAM chips on the video card are usually faster and owned SOLELY by the video card. Let's assume the GeForce 3 will have 64MB of onboard RAM minimum - wouldn't that already mean an advantage over the X-Box?
How do you know that putting data that isn't memory-intensive on the HDD will make it super slow? You seem to think the X-Box architecture is the same as a current PC's, when it completely isn't. First you say that AGP has sufficient bandwidth to system memory. Woohoo, who taught you that? AGP is slower than you think. Swapping textures out of system memory will make even the fastest x86 computer on earth stutter. The X-Box doesn't have that problem. The X-Box is built like one big graphics card, with high bandwidth all to itself. No AGP or PCI, no IDE etc. Swapping textures on the X-Box will be just as easy as it is on current graphics cards.
64MB of DDR RAM is ENOUGH for all the data (textures, sound, game engine etc.). Say an X-Box game is coded at 800x600x32bit with 2048x2048 textures (the most current TVs could support), triple buffering, and medium texture usage and detail (this is TV, anyway). It won't take more than 15MB of buffer and texture memory, and that's the part that matters most for stutterless gameplay.
And did you forget that Direct3D also supports S3TC and DXTC versions 3 and 5 (the NV10 and later support both)? Applying those two extensions reduces the memory requirement for textures and buffers further. The Win2K kernel for the X-Box will top out at 500KB. Still plenty of space for other data to fill in.
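Here are rough compression sums in C - the block sizes come from the DXTC spec, everything else is my assumption:

#include <stdio.h>

int main(void)
{
    long pixels = 2048L * 2048L;   /* one max-size Direct3D texture */

    /* uncompressed 32bit: 4 bytes per pixel */
    printf("32bit raw: %ld MB\n", pixels * 4 / (1024 * 1024));   /* 16 MB */
    /* DXT1/S3TC: 8 bytes per 4x4 block = half a byte per pixel */
    printf("DXT1:      %ld MB\n", pixels / 2 / (1024 * 1024));   /*  2 MB */
    /* DXT3/DXT5: 16 bytes per 4x4 block = 1 byte per pixel */
    printf("DXT5:      %ld MB\n", pixels / (1024 * 1024));       /*  4 MB */
    return 0;
}

Even the biggest texture Direct3D allows drops from 16MB to 2-4MB once compressed, so the 64MB goes a lot further than you think.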
Now let's look at the 2nd part, where you say swapping to the hard disk works fast because of the advantage it has on the system bus. Please note that PC hard drives nowadays have access to what is called UDMA100 technology. The only reason you would require such a transfer speed is if your hard drive spins at at least 7200rpm. Note also that the bus speed is not the limiting factor here: hard drives only manage to reach the 100MB/s transfer limit when they achieve a BURST speed. Therefore, even if you had a 2 million MB/s bus for the hard drives in the X-Box, it would not be of any use. Should you say Microsoft could easily put a 10000rpm hard drive into the X-Box, think about it: if Microsoft were to do that, how much do you think the X-Box would cost? If Microsoft were to do that and still make it cost less than $300, a lot of people would go out and buy it straight away - not for the console, but just to salvage the parts inside. Please also remember that a hard drive is a physical device that requires a motor, and it will never, ever match RAM. Swapping between RAM and the hard disk is a BAD IDEA, because RAM is at least 10 times faster than your hard disk. Swapping to the hard disk defeats the whole purpose of the fast processing speed, doesn't it?
Again, why do you think the HDD in the X-Box will only be on a par with the PC's? You always seem to assume the X-Box architecture is the same as the PC's. Of course the HDD in the X-Box will never match memory speed, but it can attain a higher speed than HDDs in PCs can. How high does a UDMA100 transfer rate go in a PC? 40MB/s tops. Why? Is it an HDD limitation, or the PC system bus? The IDE controller may handle 100MB/s, but what about the 33MHz PCI bus? UDMA100 HDDs can achieve more than 60MB/s sustained transfer rate in the labs, so why do they never reach that speed in a consumer PC? Is it still an HDD limitation or not? Up to you to think about. How do I know? I work at the IBM HDD factory in Sungei Besi, near Kuala Lumpur, Malaysia.
I bet the HDD in the X-Box could achieve at least 50MB/s transfer speed, as it connects directly to the X-Box's faster system bus. And yes, there's NO HDD in a PC yet that I've seen reach 66MB/s or 100MB/s, at least outside the labs.
Yes, CISC *does* have some advantages over RISC and vice versa. I objected to the way you (and the author you quoted) dismissed RISC processors as of no use at all nowadays.
Did I say anything like that? I was just replying to DagSverre's post saying that CISC is stepping in the wrong direction, which it certainly isn't. And I don't remember the author I mentioned ever dissing RISC. He just said that the "RISC is better than CISC" argument is wrong, which is certainly true, especially nowadays.
Incidentally, 4X AGP at 32-bit has a data bandwidth of well over 400MB/sec. I don't think data bandwidth is a problem on the modern PC. The slowdown you see when data is transferred between oncard/main memory is because it's using the data bus AT ALL. ANY use of the data bus causes a slowdown, even if you're using DMA. Using it to contact a hard disk is even worse - ever heard of seek time? So if the XBox swaps data to/from hard disk, that's a further reason to avoid it.
BTW - on the subject of textures - a 512x512 texture takes up 1MB of RAM if you're using 32bit colour, and we all do. Given that a 512x512 texture could be standard for ANY enemy, modern games could easily use 32MB of RAM - multiple enemies on screen, you, the backdrops, etc. On a GeForce/Radeon/whatever with 32MB of RAM you're laughing! On an XBox, you've just blown half your RAM on textures before any game data's been loaded. Sounds? We're talking a few MB at least, and music's got to be taken into account. Remember, the 8MB DLS that came with FF8 was considered to be, well, "shit", so whether or not you're using synthesised music you want another 16MB for music. Oh dear, 3/4 of all RAM gone and we STILL haven't loaded anything other than textures and sounds. Basically, by the time the XBox comes out, 64MB of RAM is a REAL limitation. Even with texture compression (and, BTW, my graphics card supports S3TC in hardware, and it doesn't help that much) you still have REAL problems.
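Check the arithmetic yourself - a little C, with the texture count obviously a guess:

#include <stdio.h>

int main(void)
{
    long one_tex = 512L * 512L * 4L;   /* 32bit colour = 4 bytes/pixel */

    printf("one texture:  %ld KB\n", one_tex / 1024);                 /* 1024KB = 1MB */
    printf("32 textures:  %ld MB\n", 32L * one_tex / (1024 * 1024));  /* 32MB */
    /* mipmapping adds roughly another third on top */
    printf("with mipmaps: ~%ld MB\n", 32L * one_tex * 4 / 3 / (1024 * 1024));
    return 0;
}

32 textures at 1MB each is 32MB, and with mipmaps it's more like 42MB - two thirds of the XBox's entire memory, on textures alone.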
I'm not debating that the XBox will be a reasonable console; I'm saying that by the time it comes out, the standard PC will be pretty damn good, and within a year or two it'll far outclass the XBox
XBox: US - October
Japan - ????
Europe - spring 2002
If the PS2 can come out over here only a month behind the US date, then why can't the rest? It's really stupid if you think about it: people spend the most money at Christmas, so you bring out an expensive item after Christmas, when people have spent £1000s already, and no-one is gonna buy it.
A 33MHz 32-bit bus has a data transfer rate of over 100MB/sec. So why can't a hard disk rated at 100MB/s attain 100MB/s in a PC? If ALL you're doing is copying files, shouldn't it be able to attain its MAXIMUM data rate? Yes, it should - and you'll find it's not 100MB/s. Data bus limitations in a PC account for SOME slowdowns, but not all - and they don't affect hard disk speed much. A hard disk in the XBox will attain a similar speed to the same drive in a PC, purely because any speed limitations are due to the physical construction of the drive.
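The bus figure isn't hand-waving, either; it's one multiplication (peak theoretical rate, ignoring arbitration overhead):

#include <stdio.h>

int main(void)
{
    double clock_hz  = 33.0e6;   /* 33MHz PCI clock */
    double bus_bytes = 4.0;      /* 32-bit wide = 4 bytes per transfer */

    printf("PCI peak: %.0f MB/s\n", clock_hz * bus_bytes / 1.0e6);  /* ~132 */
    return 0;
}

~132MB/s peak - comfortably more than any IDE drive of today can sustain.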
You're forgetting that apart from the HDD, there's other hardware - soundcard, internal modem, NIC etc. - that also uses the PCI bus. Those suckers take bandwidth too. And the figure of 40MB/s only applies to the Intel i815 and i850 chipsets (or whatever chipsets use the MTH), where the northbridge and the southbridge aren't connected via the PCI bus. All Athlon chipsets, and also older Intel chipsets such as the BX and LX, get a much lower transfer rate than 40MB/s, usually around 25 to 30MB/s at best. The X-Box HDD will likely connect directly to the faster 200MHz system bus, and should be able to clock a higher speed than HDDs in PCs. It's also why, at the last minute, Intel was selected to provide the CPU for the X-Box, even though it's expensive compared to the Athlon.
Incidentally, 4X AGP at 32-bit has a data bandwidth of well over 400MB/sec. I don't think data bandwidth is a problem on the modern PC. The slowdown you see when data is transferred between oncard/main memory is because it's using the data bus AT ALL. ANY use of the data bus causes a slowdown, even if you're using DMA. Using it to contact a hard disk is even worse - ever heard of seek time? So if the XBox swaps data to/from hard disk, that's a further reason to avoid it.
No, the slowdown happens because 1) main memory IS slower than video memory, and 2) the AGP bus, even at the 400MB/s you quoted, is slow. Wanna know why? AGP is just a derivative of the PCI bus. Hmm... shared bandwidth again. Actually, AGP 4x bandwidth is more than 1GB/s theoretically, and compared with the X-Box's bandwidth, AGP 4x still sucks. That 1GB/s of bandwidth is hardly ever used.
BTW - on the subject of textures - a 512x512 texture takes up 1MB of RAM if you're using 32bit colour, and we all do. Given that a 512x512 texture could be standard for ANY enemy, modern games could easily use 32MB of RAM - multiple enemies on screen, you, the backdrops, etc. On a GeForce/Radeon/whatever with 32MB of RAM you're laughing! On an XBox, you've just blown half your RAM on textures before any game data's been loaded. Sounds? We're talking a few MB at least, and music's got to be taken into account. Remember, the 8MB DLS that came with FF8 was considered to be, well, "shit", so whether or not you're using synthesised music you want another 16MB for music. Oh dear, 3/4 of all RAM gone and we STILL haven't loaded anything other than textures and sounds. Basically, by the time the XBox comes out, 64MB of RAM is a REAL limitation. Even with texture compression (and, BTW, my graphics card supports S3TC in hardware, and it doesn't help that much) you still have REAL problems.
Here's a link on how to calculate the video memory taken by textures etc.: PlanetQuake
512x512 textures are small, as the Direct3D spec supports up to 2048x2048. Do the calculation with medium textures (high textures make no difference on TV sets - especially large ones), pull everything else up to the high settings (32bit, mipmapping enabled, triple buffering etc.) and you still won't reach 20MB of video memory.
Music? One of the new features in DirectX 8 is that DirectMusic and DirectSound have been combined into one. So no more sucky sound like in FF VII. DLS is history now. You can create loopable pre-recorded music with DirectX 8, and taking into account all the TV sets around the world, 44kHz vs 22kHz won't make a difference, nor will mono vs stereo - unless of course the TV set is hooked up to a home theatre sound system. Well, it's up to the developers to choose how the music is played in their games.
About S3TC? Well, what card do you use? Radeon or nVidia? FYI, nVidia have disabled S3TC support in all their drivers (especially the Det3 ones), mostly to boost sales of their 64MB cards. That's probably why you don't see any improvement in S3TC-supporting games like Unreal Tournament and Soldier of Fortune. On other cards that support S3TC, like the Savage2000, the performance difference with S3TC enabled is noticeable, at least to me.
I'm not debating that the XBox will be a reasonable console; I'm saying that by the time it comes out, the standard PC will be pretty damn good, and within a year or two it'll far outclass the XBox
The PC will of course overtake the X-Box a year or more after the X-Box is released. But to say a PC, even then, will be able to fully emulate the X-Box is debatable. Texture handling and AC-3 support are just two cases where it will be interesting to see whether any emulator programmer out there can cope.
Damn, if Aureal were still alive now, maybe we'd already have a soundcard that decodes AC-3 in hardware, making X-Box (and PS2) emulation easier.
Take the PSX - it's been out for what, five years? When it came out, do you think people thought emulating it was possible? It was the most state-of-the-art thing then.
I managed to get 60fps with ePSXe on a 400MHz machine using SOFTWARE ONLY - zero hardware acceleration - on FF9.
Maybe the same will be true with the X-Box eventually.... Nothing's impossible to emulate, we just have to wait for the PCs to get enough memory and have fast enough data transfer.
And I doubt the speed will be faster on the X-Box vs. a 1.2GHz Athlon or the 1.5GHz P4 (who wants to spend 3000 dollars on one of those when you can get a 1.2GHz Athlon at 1/3 the cost and the same performance level?).
The reason (leaving the P4 question aside)?
Current video cards are the bottleneck in these systems... Maybe the GeForce 3 will solve that, but from what I've read on it, I don't think it will. It's just slightly faster, with a whole lot of extra effects added - effects that no games will use until maybe next year.
You'll notice that when a faster video card is put in these systems there's a big step up, but the framerates between the processors are almost the same. What this tells me is that the video card cannot keep up with the CPUs... Even on the lower 900MHz Athlon, the video card is the bottleneck. You can really see it in the 1600x1200 benchmarks.
In short... today's PC processors are way ahead of their 3D video cards. Besides, 30FPS is really all you need for smooth graphics. The GeForce MX manages that even at the highest resolution. I doubt the PS2 is doing 30FPS - almost every review I see on Tech-TV that features the PS2 looks choppy to me.
[This message has been edited by Threesixty (edited March 11, 2001).]
Uhm, Threesixty, the only reason you see choppiness on Tech-TV is that the videocams recording it run at a different frequency from the TV displaying the PS2 graphics. In more detail: two different light sources do not share the same wavelength cycle start and thus have no coherence, which makes the picture appear choppy. Simply put, the two devices (the TV and the vidcam) display and capture pictures at different rates, making the result look choppy. Human eyes are different, because pictures are retained by the brain, so our eyes are more easily tricked into seeing an animation. The vidcam, on the other hand, isn't tricked and only shows what it actually records. Just as a sidenote, PAL games on the PSX use a frame rate of 50 FPS while NTSC games use a frame rate of 60 FPS. The PSX2 DOES NOT run at less than 30 FPS! 30FPS is only used for video display, I believe.
You're missing the point again. Yes, the PCI bus is shared between the modem, soundcard, etc., etc... that's why I said when I'm doing *nothing else* but copying hard disk data. On my recent-ish drive (1 year old), copying absolutely nothing but data (nothing using the sound card, nothing using the modem, etc.) from the hard disk, I get a transfer rate of slightly over 20MB/sec. The latest 100MB/sec hard disks don't get above 50-60MB/sec (which, incidentally, proves you're wrong about the 40MB/sec limitation on the bus; how could they get 50-60 otherwise?). Hard disks are limited *purely* by physical construction. The bus bandwidth is *not* going to slow them down unless (A) you're using a 486 or (B) you're also running a game using the network card, modem, soundcard, video capture card and a second video card. *I* don't run any games like that, at any rate.
Now, what PC chipset do you have? And how did you copy the files? From one HDD to another on the same IDE channel, or on different channels? Or between different partitions on the same drive? Give me the details of how you ran your copying test, and I'll explain where the bottleneck is.
And how did I get 60MB/s? Do you think I used a computer? At my workplace (I already told you where I work, didn't I?), we use a special device to measure how much bandwidth a particular HDD can deliver. We don't use a computer, because there are a lot of bottlenecks in one.
You'll notice that when a faster video card is put in these systems there's a big step up, but the framerates between the processors are almost the same. What this tells me is that the video card cannot keep up with the CPUs... Even on the lower 900MHz Athlon, the video card is the bottleneck. You can really see it in the 1600x1200 benchmarks.
But on the X-Box, games will be developed at 800x600 at most. In that case the CPU (a PIII at 667MHz?) is the bottleneck, not the video card (hey, this is the NV25 anyway). And the X-Box doesn't need >100fps framerates either, just 25-30fps depending on the TV standard (PAL or NTSC).
Just as a sidenote, PAL games on the PSX use a frame rate of 50 FPS while NTSC games use a frame rate of 60 FPS. The PSX2 DOES NOT run at less than 30 FPS! 30FPS is only used for video display, I believe.
The figure you're talking about is fields per second. Have you ever done video capture on a computer? NTSC programmes/games play at 30 frames per second, and PAL ones at 25 frames per second.
miroTest. It came with my video capture card. You're supposed to use it to work out how much compression to apply to the video you capture, by seeing how quickly the hard disk can write data.
Is your HDD an ATA100 drive on an ATA100 controller with an 80-wire IDE cable?
Or are you running Windows 2000 with SP1, or Linux with a kernel no later than 2.3, or the original Windows 95/98 (not SE)? If yes to the first question, you've got a slow HDD there, because my UDMA100 HDD consistently scores 40MB/s in the SiSoft Sandra HDD benchmark (25k drive index points). You should consider the IBM 75GXP ATA100 7200rpm, which is fast. I can testify to that :)
If yes to the second question, then the OSes I mentioned DON'T support UDMA100 even if your hardware does. FYI, I use Windows 2000 with Service Pack 2 RC 2.5, which does support UDMA100. That may explain the low mark you (and everyone else) get there. 20MB/s for an ATA100 HDD (from any manufacturer) is slow, as it can go much higher than that. And no nonsense about the physical limitations of the HDD - HDD technology has improved vastly in recent years.
[This message has been edited by The Skillster (edited March 14, 2001).]
I'll ask again: Why does your UDMA100 only get 40MB/s (faster than mine, yes, but still nowhere near 100MB/s) if it's not due to the physical construction? The bus can transfer over 100MB/s. The UDMA standard can transfer 100MB/s. Yet you've just said you're only getting 40. Could it be, perhaps, that the drive can only attain 40MB/s?
And yes, HDD technology has improved vastly. I submit as evidence my first ever hard disk (260MB), which could only manage 500KB/sec. A quad-speed CD-ROM could outperform that.
Oh, and in case you want to check up about hard disks:
http://www.seagate.com/cda/products/discsales/personal/family/0,1128,234,00.html
http://www.storage.ibm.com/hardsoft/diskdrdl/desk/ds75gxp.htm
The first link is the datasheet for a Seagate drive supporting UDMA66. It states the average data transfer rate is 20MB/s, though the interface supports the full 66MB/s.
The second link is the datasheet for an IBM UDMA100 drive. It supports the standard properly, but the maximum sustainable data transfer is given as 40MB/s.
I didn't pick these drives out on purpose; they were the first two UDMA drives I found. Now, since the manufacturers themselves seem to say that it's the hard disk that can't use the full bandwidth, how are you claiming it's not?
[This message has been edited by ficedula (edited March 14, 2001).]
This drive is pretty fast and the specs do suggest that it is the physical speed of the drive that is the hold up, not the bus.
I'll ask again: Why does your UDMA100 only get 40MB/s (faster than mine, yes, but still nowhere near 100MB/s) if it's not due to the physical construction? The bus can transfer over 100MB/s. The UDMA standard can transfer 100MB/s. Yet you've just said you're only getting 40. Could it be, perhaps, that the drive can only attain 40MB/s?
The bus can transfer 100MB/s, but as I said earlier, the PCI bus is also used by peripherals like internal modems, NICs, soundcards etc., so not all of the 100MB/s can be used for HDD transfers. Thus 40MB/s at best.
A UDMA100 HDD won't ever do 100MB/s either, but it can do a lot more than 20MB/s.
Oh, and in case you want to check up about hard disks:
http://www.seagate.com/cda/products/discsales/personal/family/0,1128,234,00.html
http://www.storage.ibm.com/hardsoft/diskdrdl/desk/ds75gxp.htm
The first link is the datasheet for a Seagate drive supporting UDMA66. It states the average data transfer rate is 20MB/s, though the interface supports the full 66MB/s.
The second link is the datasheet for an IBM UDMA100 drive. It supports the standard properly, but the maximum sustainable data transfer is given as 40MB/s.
The IBM figures were measured on a standard computer (with all its bottlenecks). Do you think IBM (or any other HDD company, for that matter) would take the numbers from their own measurement device and publish those as marketing/tech spec data? No! Any company that did such a thing would get its ass hauled to court for misleading customers.
I didn't pick these drives out on purpose; they were the first two UDMA drives I found. Now, since the manufacturers seem to say that it's the hard disk that can't use the full bandwidth, how are you claiming its not?
You take it wrong again. The HDD, at least from IBM could use all the bandwidth left, up to the HDD own limitations ( IBM GXP series could do 60-70MB/s in sustained transfer rate - lab report ). But current PC architecture don't permit such events from happening. As all the HDD will be used on PCs anyway, the manufacturers ( or at least IBM ) WON'T CLAIM that their HDD can attain theoritical UDMA33/66/100 speeds because of possible legal issues.
Also, you'll notice that hard drive rotation speeds are increasing. Most are (or used to be) 5400rpm; now we're seeing 7200rpm or so, even 10000rpm on some. The increased rotation speed gives faster data transfer. If the hard drives were already being throttled by the bus, why on earth would they bother to spin them faster? After all, the bus would already be maxed out, so they couldn't send any more data! Conclusion: the bus *isn't* maxed out; it's still the hard drive that's the limitation.
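You can even put numbers on the rotation argument with a back-of-envelope model in C - the bytes-per-track figure is completely invented, and real drives vary by zone:

#include <stdio.h>

/* sustained media rate = revolutions per second * data per track */
static double media_mb_per_s(double rpm, double kb_per_track)
{
    return (rpm / 60.0) * kb_per_track / 1024.0;
}

int main(void)
{
    double track_kb = 350.0;   /* assumed track capacity */
    printf(" 5400rpm: %.1f MB/s\n", media_mb_per_s(5400,  track_kb));   /* ~30.8 */
    printf(" 7200rpm: %.1f MB/s\n", media_mb_per_s(7200,  track_kb));   /* ~41.0 */
    printf("10000rpm: %.1f MB/s\n", media_mb_per_s(10000, track_kb));   /* ~57.0 */
    return 0;
}

Same (made-up) track density, three rotation speeds, three different sustained rates - which is exactly why manufacturers bother spinning them faster.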
As for not quoting theoretical specs - why not? Everybody else does. I could quote CD-ROM drives for a start: technically a 50X drive can transfer 7.5MB/sec, but it won't. The only time it's ever come near that is in the manufacturer's labs.
BTW, if you look at SCSI drives they bear out the same story. Ultra2 SCSI offers an 80MB/sec transfer rate. The Ultra2 SCSI hard drive I looked at transfers data at 40MB/sec. Now, SCSI has its own bus - you can copy data from one SCSI device to another without the data ever touching the PCI bus or the CPU. So if hard drives are throttled by the bus, why on earth don't SCSI drives (which in basic physical construction are the same as their IDE equivalents) max out at the bus maximum?
But I'm not using a NIC, soundcard, modem, video capture card or in fact *anything* else while doing these tests. Neither will IBM be. So what's soaking up the remaining bandwidth?
The AGP card is one instance. It will still take bandwidth off the system bus, as AGP itself is a derivative of the PCI specification. Ever wonder why an AGP 4x card can never reach its theoretical speed of 1.05GB/s? Because the other suckers take up bandwidth too. As for the other PCI devices, just being there takes up bandwidth, if you must know. PS/2 and USB devices (mouse etc.) take up bandwidth as well, and ISA too. Older computers (pre-i8xx) have that limitation.
Also, you'll notice that hard drive rotation speeds are increasing. Most are (or used to be) 5400rpm; now we're seeing 7200rpm or so, even 10000rpm on some. The increased rotation speed gives faster data transfer. If the hard drives were already being throttled by the bus, why on earth would they bother to spin them faster? After all, the bus would already be maxed out, so they couldn't send any more data! Conclusion: the bus *isn't* maxed out; it's still the hard drive that's the limitation.
Faster rotation will mostly just reduce latency in most computers; it usually WON'T increase the available bandwidth. When transferring the data it's all the same: faster rotation only cuts the time the HDD needs to reach the data, it doesn't give the HDD more bandwidth.
As for not quoting theoretical specs - why not? Everybody else does. I could quote CD-ROM drives for a start: technically a 50X drive can transfer 7.5MB/sec, but it won't. The only time it's ever come near that is in the manufacturer's labs.
Name me a company that quotes theoretical numbers/specs. Seagate and IBM certainly don't do such things. Both companies take numbers from a standard computer and publish those, instead of using devices only available in the lab. Why does no company ever quote a 50x CD-ROM as able to transfer 7.5MB/s of data? Because that's only possible in the lab with a special device, not in an actual computer.
BTW, if you look at SCSI drives they bear out the same story. Ultra2 SCSI offers an 80MB/sec transfer rate. The Ultra2 SCSI hard drive I looked at transfers data at 40MB/sec. Now, SCSI has its own bus - you can copy data from one SCSI device to another without the data ever touching the PCI bus or the CPU. So if hard drives are throttled by the bus, why on earth don't SCSI drives (which in basic physical construction are the same as their IDE equivalents) max out at the bus maximum?
Wrong. SCSI will still touch the CPU, for CRC checks for instance. The data won't touch the system bus? When copying files, the data still has to go through main memory and the SCSI controller (especially if RAID is in action as well). And if you use a SCSI add-on card, which is usually a PCI card, well, I don't know how that data is supposed to reach memory while skipping the PCI bus.
SCSI HDDs are usually faster than IDE ones simply because the SCSI HDD architecture is different from (and more expensive than) IDE.
Plus, if it's the bus bandwidth limiting hard disks, why are they getting faster without changing the bus they're running on?