Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - cHiBiMaRuKo

Pages: 1 2 3 [4] 5 6
76
General Discussion / FF9 ePSXe
« on: 2001-02-16 23:36:00 »
Ant: You can run ISO files with Daemon Tools. ePSXe also has its own ISO plug-in, but Daemon Tools is more compatible.

Sir Canealot: ePSXe sure has crap sound. Having to press F4 before saving sucks too. The best thing is the graphics, though; they're very good.


77
General Discussion / FF9 ePSXe
« on: 2001-02-15 01:48:00 »
ePSXe can play PAL games, no problem.

78
General Discussion / FF9 ePSXe
« on: 2001-02-13 21:14:00 »
Either Pete's MIDAS SPU or the ePSXe SPU core works. Well, I still prefer playing with VGS because of its speed; ePSXe is either too fast or too slow and never gets the speed right.

79
General Discussion / FF9 ePSXe
« on: 2001-02-12 21:22:00 »
Perfect, except for saving, where I must press F4 each time I want to save. But then, the ePSXe 1.01 saving mechanism is still buggy.

80
General Discussion / FF9 and all of its complaints
« on: 2001-02-23 17:14:00 »
Your VGS isn't correctly patched.

81
General Discussion / FF9 and all of its complaints
« on: 2001-02-13 01:18:00 »
Of course C++ was already around when the SNES came out  :)

82
General Discussion / FFIX PC petition
« on: 2001-02-03 00:36:00 »
The full set is 12. Some of them can be found at Junon, but you'll have to fight for the rest in the Gold Saucer roller-coaster shooting-game side quest. It took me HOURS to get them all.

83
General Discussion / FFIX PC petition
« on: 2001-02-02 12:00:00 »
If you have the full set of 1/35 soldiers, you can enter the battle at Fort Condor without spending any Gil.

Oh man, my 100th post!

[This message has been edited by cHiBiMaRuKo (edited February 02, 2001).]


84
General Discussion / FFIX PC petition
« on: 2001-02-01 11:43:00 »
I only agree with points 2 and 4 ( and also the point about releasing the remake on PC ). The others are completely outrageous, would ruin the game, or are just plain impossible.

Point #1.
For me, Aeris's death at the end of Disc 1 is one of the events that makes FF7 unique. What other game kills one of its top characters in the middle of the game? Even very few Hollywood movies do that. Aeris's death also gives more purpose to the game's story. Making Aeris revivable, even as an optional, non-compulsory sidequest, would make the FF7 storyline lose some of its punch and direction.

Point #2:
I agree with it.

Point #3:
More story is probably good, but multiple endings are hard to agree with, because they're impossible to do unless a massive storyline overhaul is done. A massive storyline overhaul = a new game, at least for me.

Point #4
Agree with this.

Point #5
The list of materia/accessories/weapons/enemies/enemy skills/armour in the original FF7 is more than enough to beat the hell out of Sephiroth, or to do anything you want in the game. Why do you need more? Making ridiculous points ( sorry, I really have to stress this ) will only piss off the folks at Square ( if they do read the petition ), and then they probably won't do the other points that are worth doing.

Point #6
Sephiroth being uncontrollable by the player is part of his personality ( a man with pride and a big ego ). Making him controllable ( it's actually possible if you know how to do it - but only with a GameShark ) would reduce him to just a normal character instead of the best villain in the whole FF series, rubbing the shine off FF7's solid story.

Point #7
How long have you played FF7? I've logged more than 2000 hours playing FF7 ( on PC and PSX ), and I think everyone here knows what happens to all the characters mentioned in point #7. Take Reeve ( the first name ), for example. How does he die? You can actually watch an FMV showing his last seconds on Earth. What happened to the others by the time Cloud and co. go to fight Sephiroth can be explained.

Point #8
More playable characters that were never seen before? That's impossible. Making playable characters from existing ones such as Reeve ( as in point #7 ) is probably possible, but a massive overhaul would be needed.

Point #9
All the items we get in FF7 in normal play are usable, IINM. What seemingly useless items?


My main point is not to make impossible or ridiculous requests to Square, but to limit the petition to the ones that could really give the game a boost, like your points #2 and #4. For example, your point #3 could be changed to ask Square to renovate their stupid ending instead of making new ones.

[This message has been edited by cHiBiMaRuKo (edited February 01, 2001).]


85
General Discussion / FFIX PC petition
« on: 2001-01-29 17:17:00 »
About the FF7 remake petition: I find it hard to agree with some of the points the petition maker proposed, so I didn't sign it.

86

 
Quote
But I'm not using a NIC, soundcard, modem, video capture card or in fact *anything* else while doing these tests. Neither will IBM be. So what's soaking up the remaining bandwidth?

The AGP card is one instance. It still takes bandwidth off the system bus, as AGP itself is a derivative of the PCI specification. Ever wonder why an AGP 4x card can never reach its theoretical speed of 1.05GB/s? Because the other suckers take up bandwidth too. And as for the other PCI devices, just being there takes up bandwidth, if you have to know. PS/2 and USB devices ( mouse etc. ) take up bandwidth as well, and ISA too. Older ( pre-i8xx ) computers have those limitations.
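To put rough numbers on the shared-bus point, here's a quick back-of-the-envelope sketch ( the 25% figure for overhead/other traffic is only an assumption for illustration, not a measurement ):

Code:
# Classic PCI: 33 MHz clock, 32-bit wide, shared between all devices on the bus.
pci_clock_hz = 33_000_000
pci_width_bytes = 4

pci_peak = pci_clock_hz * pci_width_bytes            # ~133 MB/s theoretical peak
print(f"PCI theoretical peak: {pci_peak / 1e6:.0f} MB/s")

# Other bus masters (NIC, soundcard, capture card...) plus arbitration/protocol
# overhead eat into that peak, so a single device never sees the whole thing.
assumed_overhead = 0.25                               # illustrative assumption only
print(f"Left over for one device: ~{pci_peak * (1 - assumed_overhead) / 1e6:.0f} MB/s")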


 

Quote
Also, you'll notice that hard drive rotation speeds are increasing. Most are (or used to be) 5400rpm, now we're seeing 7000rpm or so, even 10000rpm on some. The increased rotation speed gives faster data transfer. If the hard drives were already being throttled by the bus, why on earth would they bother to rotate them faster? After all, the bus is already maxed out, so they can't send any more data! Conclusion: The bus *isn't* maxed out, it's still the hard drive that's the limitation.

Faster rotation will only reduce latency in most computers; it usually WON'T increase the available bandwidth. When transferring data, it's all the same. Faster rotation only reduces the time the HDD needs to reach the data, it doesn't give the HDD more bandwidth.
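To show what I mean by "reduce latency", here's the standard half-revolution average-latency sketch for a few spindle speeds ( nothing drive-specific assumed ):

Code:
# Average rotational latency = time for half a revolution.
def avg_rotational_latency_ms(rpm: int) -> float:
    seconds_per_revolution = 60.0 / rpm
    return seconds_per_revolution / 2 * 1000.0

for rpm in (5400, 7200, 10000):
    print(f"{rpm:>5} rpm -> {avg_rotational_latency_ms(rpm):.2f} ms average rotational latency")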


 

Quote
As for not quoting theoretical specs - why not, everybody else does. I could quote CDROM drives for a start. Technically, a 50X drive can transfer 7.5MB/sec, but it won't. The only time it's ever come near to that is in the manufacturers labs.

Name me a company that will quote theoretical numbers/specs. Seagate and IBM certainly don't do such things. Both companies take numbers from a standard computer and publish those, instead of using devices that are only available in labs. Why does no company ever quote 50x CD-ROMs as being able to transfer 7MB/s of data? Because it's only possible in a lab with special equipment, not in current computers.
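For the record, the 7.5MB/s figure for a 50x drive is just the 1x Red Book rate multiplied up, nothing more ( quick sketch ):

Code:
# 1x CD-ROM = 150 KB/s; an "N x" drive is just N times that, at best.
base_rate_kb_s = 150
speed = 50

peak_kb_s = base_rate_kb_s * speed
print(f"{speed}x theoretical peak: {peak_kb_s} KB/s (~{peak_kb_s / 1000:.1f} MB/s)")
# Real drives only approach this on the outer edge of the disc under ideal conditions.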

Quote
BTW, if you look at SCSI drives they bear out the same story. Ultra2 SCSI offers 80MB/sec transfer rate. The Ultra2 SCSI hard drive I looked at transfers data at 40MB/sec. Now, SCSI has its own bus - you can copy data from one SCSI device to another without the data ever touching the PCI bus or the CPU. So if hard drives are throttled by the bus, why on earth can't the SCSI drives (which in basic physical construction are the same as their IDE equivalents) max out at the bus maximum?

Wrong, SCSI will still touch the CPU, for CRC checks for instance. Data won't touch the system bus? When copying files, the data still has to go through main memory and the SCSI controller ( especially if RAID is in action as well ). And if you use a SCSI add-on card, which is usually a PCI card, well, I don't know how that data gets to memory without crossing the PCI bus.
A SCSI HDD is usually faster than an IDE one simply because the SCSI HDD architecture is different ( and more expensive ) than IDE.


87
Damn.. I was sent on emergency outstation work to Singapore, hence this late reply ( all the way from Singapore too ). So if I can't reply fast enough, well, I've got a lot of work to do.


 

Quote
I'll ask again: Why does your UDMA100 only get 40MB/s (faster than mine, yes, but still nowhere near 100MB/s) if it's not due to the physical construction? The bus can transfer over 100MB/s. The UDMA standard can transfer 100MB/s. Yet you've just said you're only getting 40. Could it be, perhaps, that the drive can only attain 40MB/s?

The bus can transfer over 100MB/s, but as I said earlier, the PCI bus is shared with peripherals like internal modems, NICs, soundcards etc., so not all of that 100MB/s can be used for HDD transfers. Thus 40MB/s at best.
A UDMA100 HDD won't ever do 100MB/s either, but it can do a lot better than 20MB/s.


 

Quote
Oh, and in case you want to check up about hard disks: http://www.seagate.com/cda/products/discsales/personal/family/0,1128,234,00.html
http://www.storage.ibm.com/hardsoft/diskdrdl/desk/ds75gxp.htm

The first link is the datasheet for a Seagate drive supporting UDMA66. It states the average data transfer rate is 20MB/s, the interface supports the full 66MB/s though.

The second link is the datasheet for an IBM UDMA100 drive. It supports the standard properly, but the maximum sustainable data transfer is given as 40MB/s.

The IBM figures were tested on a standard computer ( with all the bottlenecks in it ). Did you think that IBM ( or any other HDD company for that matter ) would use their own measurement device, take the numbers from it and use them as marketing/tech spec data? No! If any company did such a thing, they'd get their ass hauled to court for misleading customers.


 

Quote
I didn't pick these drives out on purpose; they were the first two UDMA drives I found. Now, since the manufacturers seem to say that it's the hard disk that can't use the full bandwidth, how are you claiming its not?

You've taken it wrong again. The HDD, at least the ones from IBM, can use all the bandwidth that's left, up to the HDD's own limitations ( the IBM GXP series can do 60-70MB/s sustained transfer rate - lab report ). But current PC architecture doesn't permit that from happening. As these HDDs will be used in PCs anyway, the manufacturers ( or at least IBM ) WON'T CLAIM that their HDDs can attain theoretical UDMA33/66/100 speeds, because of possible legal issues.


88
 
Quote
miroTest. Came with my video capture card. You're supposed to use to it to work out how much compression to use on the video you capture, by seeing how quickly the hard disk can write information.

Is your HDD an ATA100 drive on an ATA100 controller with an 80-conductor IDE cable?
Or are you using Windows 2000 with SP1 installed, or Linux with a kernel no later than 2.3, or the original Windows 95/98 ( not SE )? If yes to the first question, you've got a slow HDD there, because my UDMA100 HDD scores a consistent 40MB/s in the SiSoft Sandra HDD benchmark ( 25k drive index points ). You should consider the IBM 75GXP ATA100 7200rpm, which is fast. I can testify to that  :)

If yes to the second question, then the OSes I've mentioned DON'T support UDMA100 even if your hardware does. FYI, I use Windows 2000 with Service Pack 2 RC 2.5, which supports UDMA100. That may explain the low mark you ( and everyone else ) get there. 20MB/s for an ATA100 HDD ( from any manufacturer ) is slow, as it can go higher than that. And don't give me that about the physical limitations of the HDD, because HDD technology has vastly improved in recent years.


89
What program(s) did you use for benchmarking? I sure want to see the program myself!

90
 
Quote
You're missing the point again. Yes, PCI bus is shared between modem, soundcard, etc, etc... that's why I said when I'm doing *nothing else* but copy hard disk data. On my recent ish drive (1 year old) when copying absolutely nothing but data (nothing using sound card, nothing using modem, etc, etc) from hard disk I get a transfer rate of slightly over 20MB/sec. The latest 100MB/sec hard disks don't get above 50-60MB/sec (which, incidentally, proves you're wrong about the 40MB/sec limitation on the bus; how can they get 50-60 otherwise?). Hard disks are limited *purely* by physical construction. The bus bandwidth is *not* going to slow them down unless (A) you're using a 486 or (B) you're also running a game using the network card, modem, soundcard, video capture card and second video card. *I* don't run any games like that, at any rate.

Now, what PC chipset do you have? And how did you copy the files? From one HDD to another HDD on the same IDE channel, or a different channel? Or copying files between different partitions on the same drive? Give me the details of how you did your copying test, and I'll explain where the bottleneck is.

And how did I get 60MB/s? Do you think I used a computer? At my workplace ( I already told you where I work, didn't I? ), we use a special device to measure how much bandwidth a particular HDD can deliver. We don't use a computer because there are a lot of bottlenecks in one.

Quote
You'll notice that when the faster video card is put in these Systems it get a big step, but the framerate between the processors are almost the same. What this tells me is that the Video Card cannot keep up with the CPU's....Even on the lower 900mhz Athlon, the Video Card is the bottleneck. You can really see it on the 1600X1200 benchmarks.

But on the X-Box, games will be developed at 800x600 resolution at most. In that case, the CPU ( a PIII 667MHz? ) is the bottleneck and not the video card ( hey, this is an NV25 anyway ). And the X-Box doesn't need >100fps framerates either, just 25-30fps depending on the TV standard ( PAL or NTSC ).

Quote
Just as a sidenote, PAL games on the PSX use a frame rate of 50 FPS while NTSC games uss a frame rate of 60 FPS. The PSX2 DOES NOT run at less than 30 FPS! 30FPS is only used for video display I believe.

The figures you're talking about are fields per second. Have you ever done video capture on a computer? NTSC programmes/games play at 30 frames per second and PAL ones at 25 frames per second.
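The field/frame relationship is simple enough to write down ( nominal figures; NTSC is strictly 59.94 fields/s ):

Code:
# Interlaced TV draws two fields (odd lines, then even lines) per full frame.
tv_standards = {"NTSC": 60, "PAL": 50}    # nominal fields per second
for name, fields_per_second in tv_standards.items():
    print(f"{name}: {fields_per_second} fields/s -> {fields_per_second // 2} frames/s")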



91
 
Quote
A 33Mhz 32-bit bus has a data transfer rate of over 100MB/sec. So why can't a hard disk rated at 100MB/s attain 100MB/s in a PC? If ALL you're doing is copying files, shouldn't it be able to attain it's MAXIMUM data rate? Yes, it will - and you'll find it's not 100MB/s. Data bus limitations in PC account for SOME slowdowns, but not all - and they don't affect hard disk speed much. A hard disk in the XBox will attain a similar speed to the same speed in the PC, purely because any speed limitations are due to the physical construction of the drive.

You're forgetting that apart from the HDD, there's other hardware such as the soundcard, internal modem, NIC etc. that also uses the PCI bus. Those suckers take bandwidth too. And the figure of 40MB/s only applies to the Intel i815 and i850 chipsets ( or whatever chipsets use the MTH ), as their northbridge and southbridge aren't connected via the PCI bus. All Athlon chipsets, and also older Intel chipsets such as BX and LX, get much lower transfer rates than 40MB/s, usually around 25 to 30MB/s at best. The X-Box HDD will likely connect directly to the faster 200MHz system bus and should be able to reach higher speeds than HDDs in PCs. It's also why, at the last minute, Intel was selected to provide the CPU for the X-Box, even though it's expensive compared to the Athlon.


 

Quote
Incidentally, 4X AGP at 32-bit has a data bandwith of well over 400MB/sec. I don't think data bandwidth is a problem on the modern PC. The slowdown you see when data is transferred between oncard/main memory is because it's using the data bus AT ALL. ANY use of the data bus causes a slowdown, even if you're using DMA. Using it to contact a hard disk is even worse - ever heard of data seek time? So if the XBox swaps data to/from hard disk, that's a further reason to avoid it.

No, the slowdown happens because 1) main memory IS slower than video memory, and 2) the AGP bus, even at 400MB/s as you've said, is slow. Wanna know why? The AGP bus is just a derivative of the PCI bus. Hmm.. shared bandwidth again. And actually AGP 4x bandwidth is more than 1GB/s, theoretically. Compared with the X-Box bandwidth, AGP 4x still sucks. That theoretical 1GB/s is hardly ever used.
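For reference, the theoretical AGP numbers I keep mentioning work out like this ( nominal 66MHz base clock, ignoring protocol overhead ):

Code:
# Theoretical AGP bandwidth: 66 MHz base clock, 32-bit wide, N transfers per clock.
agp_clock_hz = 66_000_000
bus_width_bytes = 4

for mode, transfers_per_clock in (("1x", 1), ("2x", 2), ("4x", 4)):
    bandwidth = agp_clock_hz * bus_width_bytes * transfers_per_clock
    print(f"AGP {mode}: ~{bandwidth / 1e6:.0f} MB/s theoretical")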

Quote
BTW - on the subject of textures - a 512x512 texture takes up 1MB of RAM, if you're using 32bit colour, and we all do. Given that a 512x512 texture could be standard for ANY enemy, modern games could well use 32MB of ram - multiple enemies on the screen, you, the backdrops, etc. On a GeForce/Radeon/whatever with 32MB of RAM you're laughing! On an XBox, you've just blown half your RAM on textures before any game data's been loaded. Sounds? Talking a few MB at least, and music's got to be taken into account. Remember, the 8MB DLS that came with FF8 was considered to be, well, "shit", so whether or not you're using synthesised music you want another 16MB for music. Oh dear, 3/4 of all RAM gone and we STILL haven't loaded anything other than textures or sounds. Basically, by the time the XBox comes out, 64MB of RAM is a REAL limitation. Even with texture compression (and, BTW, my graphics card supports S3TC in hardware, and it doesn't help that much) you still have REAL problems.

Here's a link on how to calculate the video memory taken by textures etc.:  http://www.planetquake.com/rocketland/haqsau/vidmemcalc.shtml
512x512 textures are small, as the Direct3D spec supports up to 2048x2048. Do the maths with medium textures ( high-detail textures don't make a difference on TV sets - especially the large ones ), set everything else to high ( 32-bit, mipmapping enabled, triple buffering etc. ) and you still don't reach 20MB of video memory.
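If you want the gist of what that calculator does, here's a minimal sketch of the arithmetic; the texture mix is just an assumption I've picked for illustration, the real figure depends on the game:

Code:
# Rough video-memory estimate: frame/depth buffers plus an assumed texture budget.
def to_mb(nbytes: float) -> float:
    return nbytes / (1024 * 1024)

width, height, bytes_per_pixel = 800, 600, 4      # 800x600, 32-bit colour
colour_buffers = 3                                 # triple buffering
depth_buffers = 1                                  # one 32-bit Z-buffer
buffer_bytes = width * height * bytes_per_pixel * (colour_buffers + depth_buffers)

def texture_bytes(size: int, bpp: int = 4, mipmapped: bool = True) -> float:
    base = size * size * bpp
    return base * 4 / 3 if mipmapped else base     # full mip chain adds about one third

texture_budget = 8 * texture_bytes(512)            # assumed: eight 512x512 32-bit textures

print(f"Buffers:  {to_mb(buffer_bytes):.1f} MB")
print(f"Textures: {to_mb(texture_budget):.1f} MB")
print(f"Total:    {to_mb(buffer_bytes + texture_budget):.1f} MB")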

Music? One of the new features in DirectX 8 is that DirectMusic and DirectSound are combined into one. So no more sucky sound like in FF VII. DLS is history now. You can create loopable pre-recorded music with DirectX 8, and taking into account all the TV sets around the world, 44kHz vs 22kHz won't make a difference, nor will mono vs stereo, unless of course the TV set is also hooked up to a home theatre sound system. Well, it's up to the developers to choose how the music should be played in their games.

About S3TC? Well, what card do you use, Radeon or nVidia? FYI, nVidia have disabled S3TC support in all their drivers ( especially Det3 ), mostly to boost sales of their 64MB cards. That's probably why you don't see any improvement in S3TC-supporting games like Unreal Tournament and Soldier of Fortune. On other cards like the Savage2000, which also supports S3TC, the performance difference after enabling S3TC is noticeable, at least for me.

Quote
I'm not debating that the XBox will be a reasonable console; I'm saying that by the time it comes out, the standard PC will be pretty damn good, and within a year or two it'll far outclass the XBox

The PC will of course overtake the X-Box a year or more after the X-Box is released. But to say a computer, even at that time, will be able to fully emulate the X-Box is debatable. Texture handling and AC-3 support are just two cases where it will be interesting to see whether any emulator programmers out there can handle them.
Damn, if Aureal were still alive today, maybe we'd already have a soundcard that can decode AC-3 in hardware, making X-Box ( and also PS2 ) emulation easier.


92
 
Quote
Aaaah!!!
You still don't get it do you?
When the author actually said
"THIS INCREASES THE SIZE OF THE FILE BUT IMPROVES SPEED OF EXECUTION"

The part "THIS INCREASES THE SIZE OF THE FILE " means that the size of the file does increase.

The other part "BUT IMPROVES SPEED OF EXECUTION" means that the overall speed already includes the factors such as L1 and L2 cache. In other words, the latter is faster ON THE WHOLE.

If the 2nd part said "BUT IMPROVES SPEED OF COMPUTATION", it would mean that it only increases the speed of processing but the L1 and L2 cache factors have not been included thus it WILL be possible that the it may be faster.

That doesn't change the fact that RISC CPUs ( post-G4 anyway, which have ridiculously large caches ) don't have a superior execution rate to what CISC has. The author ( of the Pentium 4 article ) has pointed out that while RISC code execution is faster, the cache problem reduces its rate anyway.

Quote
cHiBiMaRuKo, what is the problem with you and the bandwidth problem? Don't you see? With only 64MB of DDR RAM for both the system and the video and main system bus, the system would not be efficient at all. Firstly, you will definately not flush all of the main systems bus info into the hard disk or it will just make the whole system SUPER SLOW. The idea of unified memory is exactly what the whole AGP idea revolves on. The AGP bus has sufficient bandwidth to access the system RAM but why doesn't it do it straight away and alleviate the problem of putting RAM chips onto the video card itself? This is because the RAM chips on the video card are usually faster and owned SOLELY by the video card. Let's assume that the Geforce 3 will have 64MB of onboard RAM minimum so wouldn't that already mean an advantage over the X-box?

How do you know that putting data that isn't memory-intensive on the HDD will make it super slow? You seem to think the X-Box architecture is the same as a current PC, while it completely isn't. First you say that AGP has sufficient bandwidth to system memory. Woohoo, who taught you that? AGP is slower than you think. Swapping textures from system memory will make even the fastest x86 computer on earth stutter. The X-Box doesn't have that problem. The X-Box is actually just like a big graphics card, with high bandwidth all to itself. No AGP or PCI, no IDE etc. Swapping textures on the X-Box will be just as easy as it is for current graphics cards.

64MB of DDR-RAM is ENOUGH for all the data ( textures, sound, game engine etc ). Let's say an X-Box game is coded at 800x600x32bit using 2048x2048 textures ( the most a current TV could support ), with triple buffering and medium texture usage and detail ( this is TV anyway ). It won't take more than 15MB of buffer and texture memory, which is the most important thing in ensuring stutterless gameplay.
And have you already forgotten that Direct3D also supports S3TC and DXT3/DXT5 compression ( NV10 and later support both )? Applying those two extensions will reduce the memory requirement for textures and buffers further. The Win2k kernel for the X-Box will top out at 500KB. Still plenty of space for other data to fill in.

Quote
Now lets look at the 2nd part, where you say swapping on the hard-disk works fast because of the advantage it has on the system bus. Please note that PC hard drives nowadays have access to what is called UDMA100 technology. The only reason you would require such technolgy or speed transfer is If your hard drive spins with at least 7200 rpm. Note also that the bus speed is not the limiting factor here. The hard drives only manage to reach the 100MB/s transfer limit when the acheive a BURST speed. Therefore even if you have a 2 million MB/s bus speed for hard-drives on the X-Box, it will not be of any use. Should you say Microsoft could easily put a 10000rpm hard drive into the X-box, think about it, if Microsoft were to that, how much do you think the X-box would cost? If Microsoft were to do that and still be able to make it cost less that $300, a lot of people would go out and buy it straight away but not for the console but just to salvage all the parts in the X-box. Please also remember that a hard drive is a physical read device that requires a motor and will never ever manage to match RAM. Swapping between RAM and the Hard Disk is a BAD IDEA cause RAM is at least 10 times faster than your hard disk. Swapping to the hard disk defeats the complete purpose of the fast processing speed doesn't it?

Again, why do you think the HDD in the X-Box will only be on par with a PC's? You always seem to think the X-Box architecture is the same as a PC's. Of course the HDD in the X-Box will never match the memory speed, but it can attain higher speeds than HDDs in PCs can. How high can a UDMA100 transfer rate in a PC go? 40MB/s tops. Why? Is it because of the HDD's limitations or the PC system bus? The IDE controller maybe can handle 100MB/s, but how about PCI at 33MHz? UDMA100 HDDs can achieve more than 60MB/s sustained transfer rate in labs, so why don't they even reach that speed in consumer PCs? Is it still the HDD's limitations or not? Up to you to think about it. How do I know? I work at the IBM HDD factory in Sungei Besi near Kuala Lumpur, Malaysia.
I bet the HDD in the X-Box can achieve at least 50MB/s transfer speed, as it connects directly to the X-Box's faster system bus.  And yes, there's NO HDD for PC yet that I've seen reach 66MB/s or 100MB/s, at least outside the labs.

Quote
Yes, CISC *does* have some advantages over RISC and vice versa. I objected to the way you (and the author you quoted) dismissed RISC processors as of no use at all nowadays.

Did I say anything like that? I'm just replying to DagSverre's post saying that CISC is a step in the wrong direction, which it certainly isn't. And I don't remember the author I mentioned ever dissing RISC. He just said the "RISC is better than CISC" argument is wrong, which is certainly true, especially nowadays.


93
 
Quote
Check your logic. I never said RISC had every advantage over CISC. I said it's wrong that they have *no* advantage.

Then it's safe to say that CISC has its own advantages over RISC too?

Quote
As for swap files ... exactly how is a 6 point whatever GB/sec bandwidth helping you if you're constantly swapping to/from hard disk? A *ridiculously* fast hard disk could manage 100MB/sec in *theory* and that would mean half a second lock-up every time the XBox flushed large portions of RAM... plus no drive *ever* gets it's maximum speed rating.

1. Only data like the textures and the sound streams needs to be changed quickly most of the time. Other data, such as the game engine, the Windows 2000 kernel itself etc., doesn't change much over the course of a game. So the X-Box will probably store that static data in the swap file, and only the textures/sound streams in memory.

2. As I've said before, X-Box games usually won't render at more than 800x600 resolution even at release time at the end of the year, because conventional TVs don't support resolutions higher than that. I believe 800x600x32bit with 2048x2048 textures won't even take half the RAM on the X-Box. So there's plenty of room for other processes/data etc. And if most developers use a lower resolution than that, which I expect from early games anyway, the memory requirement would be even less.

3. I've also said that the HDD in the X-Box isn't tied to the PCI speed limitation in PCs, due to the architectural differences between a PC and a console. That PCI problem is the one that stops current ( PC ) HDDs from attaining their rated speed. Maybe the HDD in the X-Box can reach its maximum speed more easily than it could in a PC. At the very least it would be faster than what the PC has to offer.


94

 
Quote
Note that the author stated "THIS INCREASES THE SIZE OF THE FILE BUT IMPROVES SPEED OF EXECUTION". Execution describes the whole process thus it includes factors such as L1 and L2 cache. If he said COMPUTATION instead of EXECUTION, the factors of L1 and L2 should be considered.

And then what's the point? You seem to want to say that execution and computation speed both depend on the L1 and L2 cache. Of course they do. And then?

Quote
Read all the X-Box and notes taken

The CPU has been upgraded to 733MHz ( FiringSquad article: http://firingsquad.gamers.com/news/newsarticle.asp?searchid=1666 ), but I've already heard it's been slowed down to 667MHz. There's a possibility that AMD chips could also be used ( AMD in X-Box: http://www.msxbox.com/php/full_post.php3?id=700 ).


I've already said that the NV25 ( not exactly the NV25, but a beefier version of the NV20 ) will be used by the X-Box. Afraid that textures will fill up the 64MB of DDR-RAM and it won't be able to flush them out as fast as the X-Box would hope ( even with the high bandwidth it has )? The X-Box ships with an 8GB HDD right out of the box ( not your normal PC HDD, mind you ) for storing information about the actual games etc., just like a swap file ( HDD in X-Box: http://www.msxbox.com/hardware_php/hd.php3 ). I'd better make it clear that the HDD in the X-Box is faster than what you've seen in current PCs, particularly because it's not hindered by the 33MHz PCI speed.

Quote
Fill ALL 64MB with textures? Unless I've missed something, doesn't this mean there'd be no memory left for the main game - like, say, sound effects, the polygon data, the actual program itself...

If the 64MB of DDR-RAM isn't enough and the X-Box can't flush the textures fast enough, there's always the HDD to hold the data you've mentioned. That kind of data shouldn't take much bandwidth compared to the textures.

I doubt the textures will take up all 64MB though, because at most the X-Box will only render at 800x600 ( as that's the maximum resolution of most current TVs ). HDTV isn't popular as of yet, and by the time it is, the X-Box2 will already be out ( well, if the X-Box is successful ).


 

Quote
And yes, I know the author of that article I was commenting on isn't trying to say RISC's are useless. He's trying to say they hold no advantage over CISC, and I say that's wrong.

Today, that argument sure is true. Name me any true RISC or CISC CPUs out there now. There aren't any. RISC has its own weaknesses that hinder performance, and CISC has its own too. If RISC had a complete advantage over CISC as you've claimed, CISC would already be dead.


95
 
Quote
Nope, got it wrong. By having every instruction 4 bytes in length exactly, it speeds up code. Because you know that 32-bytes of code contain 8 instructions, and when you're executing instruction #7, say, it's time to go and get the next block of code. It also means instructions never get split up over code boundaries - for example, if your cache line was 32-bytes, with variable length instructions, what happens when an instruction starts on the last byte and carries on elsewhere? As an example, on Win98 there's a code optimizer tool that pads executable code so the instructions all end on 4K boundaries. THIS INCREASES THE SIZE OF THE FILE BUT IMPROVES SPEED OF EXECUTION.


Ahaha, good, but as you said above, and as the author wants to point out, THIS INCREASES THE SIZE OF THE FILE BUT IMPROVES SPEED OF EXECUTION. With a larger file size, more memory is needed for operation. Chances are high that the L1 and L2 caches won't be able to hold all the data needed for execution, so the cache miss rate will be higher. Cache misses cost CPU cycles, I think you know that, and the speed advantage RISC should have is gone anyway.
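To make the cache argument concrete, here's the kind of back-of-the-envelope comparison I mean; the instruction count, the 2.5-byte CISC average and the 16KB cache size are all assumptions for illustration only:

Code:
# Same work, different code size -> different pressure on the same-size code cache.
risc_instruction_bytes = 4          # fixed-width encoding (e.g. PowerPC)
cisc_instruction_bytes = 2.5        # assumed average for a variable-width encoding

working_set_instructions = 10_000   # assumed hot code path
l1_code_cache_bytes = 16 * 1024     # assumed 16 KB L1 instruction cache

risc_code = working_set_instructions * risc_instruction_bytes
cisc_code = working_set_instructions * cisc_instruction_bytes

print(f"RISC code size: {risc_code / 1024:.1f} KB ({risc_code / l1_code_cache_bytes:.1f}x the I-cache)")
print(f"CISC code size: {cisc_code / 1024:.1f} KB ({cisc_code / l1_code_cache_bytes:.1f}x the I-cache)")
# The bigger code image misses more often in the same cache, and every miss costs cycles.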


 

Quote
"The necessary bits required to uniquely decode the instruction are usually found in the first 2 bytes of the instruction, 4 bytes for floating point. That's all the decoder needs to figure what this instruction is. It needs to decode the additional bytes only in cases of complex addressing modes."

First of all he says they're "usually" found in the first two bytes. Then he says "that's all it needs". Then he changes mind again and says that sometimes it DOES need the rest of the instruction! Either
a) You get all the instruction, leading to increasing memory traffic, exactly what he was accusing the RISC of or
b) You don't, and when it turns out you DID need the rest of it, you're f***ed. (Well, actually it means another memory access - but you've just lost out speed-wise).

Didn't the author already say it only needs to decode the whole instruction in complex addressing modes? He also says the 68040 chip only SOMETIMES has to do that. The point that all the data needed for execution usually lies in the first 2 ( or 4 ) bytes is quite right, especially in today's processors, with the odd exception. If you get the whole instruction from the first 2 ( or 4 ) bytes, I don't see why it would increase memory traffic, because it usually doesn't take as much physical RAM as RISC code would.


 

Quote
"...could give the appearance of executing 1 instruction per cycle. Any given instruction still takes multiple clock cycles to execute, but by overlapping several instructions at once at different stages of execution, you get the appearance of one instruction per cycle."

You do; on this he's quite right, if you've got enough simultaneous fetch/decode/execute subunits. He also points out this fails if the CPU is wrong about which instruction is going to be executed next (in the case of a Jump (GOTO, if you like) command). So, everybody, would would we all prefer:

a) A CPU which appears to execute 1 instruction per cycle except when it jumps, when the next few instructions will take up to 10 cycles.
b) A CPU which appears to execute 1 instruction per cycle except when it jumps, when the next few instructions will take up to 3 cycles.

Hmmm .... I'd want (b). That's the RISC.

Incidentally, JUMP instructions are fairly common in code. I've done some assembler programming as part of my course, and taking the example assembler I wrote for of the exercises, out of 17 instructions, 3 were conditional jumps. Lots of things generate jumps; calling a function, IF statements, loops, etc.

Also, he mentions exactly *how* modern CISC processors gain their speed: Multiple decoders, like you need for the above trick. What about the PIII? "Decoders 2 and 3 can decode only simple, RISC-like instructions that break down into 1 micro-op." OK: So it's fast because the extra decoders in it are small RISC processors... that's exactly the case here: Modern CISC's *do* use a lot of RISC-type architecture to run as fast as possible. Does this suggest, perhaps, that RISC has speed advantages over CISC? No, of course it doesn't!.

I think you're missing what the author wants to point out here. He only wants to point out that the "RISC is here to stay and CISC will die" theory isn't true. Maybe RISC is fast, but CISC chips have already found their own ways of matching RISC speed and efficiency. After all, the author already pointed out that all the engineers come from the same unis/colleges and learn from the same books, maybe the same ones you've learned from. I'm a computer engineering grad from UPM in Malaysia, and my lecturers kinda said the same thing to me. But in the IT world everything changes quickly, and not everything you learn in university today will still apply in the outside world in one or two years.


 

Quote
Aaaah!!
The posts in this topic are honestly getting longer and longer. But honestly speaking, I wonder if it's worth actually posting anything to reply to cHiBiMaRuKo cause he doesn't seem to read what we say.
quote from http://www.g256.com today
_________________________________________

MS to lose $2bn on Xbox
--------------------------------------------------------------------------------
Icenum Thursday March 8, 2001 3:04 AM Your Time
The Register has some darn interesting insight on MS and the XBox, good for consumers and bad for MS..Awwwww:
Merrill Lynch has come to the shock conclusion that Microsoft is going to lose a lot of money on Xbox. The console could drain up to $2 billion from the Beast of Redmond's coffers before break-even.

As the world+dog knows, consoles are sold at a loss in the early days, as manufacturers subsidise the low price points needed to drive sales of the machine.

Microsoft has yet to announce Xbox pricing, but most observers put it at around $250. Merrill Lynch reckons each console will cost around $375 to produce - simple maths tells you Microsoft is flushing $125 down the pan. The figure might be higher or lower depending on what Microsoft ultimatly asks punters to pay. The Xbox price tag isn't expected to be much higher than the $299 Sony currently charges for the PlayStation 2.
_________________________________________

You obviously don't get my point up there. Did I say that Microsoft will make money from X-Box sales? No! Microsoft will very likely take the same strategy employed by Sony ( if you know how Sony operates in the console market ). Do you think the PS2 really costs ONLY US$300? Maybe I should make my point more clearly next time.

[This message has been edited by cHiBiMaRuKo (edited March 08, 2001).]


96

 
Quote
Oh, THAT was a well constructed argument! I mean, you can tell they spent hours and hours researching that, seeking out all the technical data, so they could make the informed and persuasive argument: "It's crap". I mean, that convinced me! Before, I always thought you needed *reasons* and *proof* when you were saying an argument was wrong, but no! all you have to do it say "that's crap" and immediately everybody has to be convinced! Well, that's going to work SO well in my university work; "I could give proof, but no; it's just crap." Obviously all those scientists who try to PROVE theories are wasting their time, all they need to do is swear at anybody who disagrees and that *makes* them correct!

I think the author already gives proof, backed with second- or third-level programming-language examples. He doesn't just say the "RISC is superior to CISC" theory is crap ( even though I learned that theory in one of my 8088 subjects at uni ), he also gives proof of what he's saying. Why don't you send him an e-mail and try to convince him that he's wrong?


97

 
Quote
oh guys about psx emulators:
good news for the next release of epsxe (due in about 1-2weeks) its gonna feature dual shock / force feedback support and, AND.....
Wait for it....
SAVE STATES/ FREEZE SAVES!!!!
so say good bye to sodding psx crappy memory card problems!! and so hello to Zsnes style saving!!

Oh man.. save states. A nifty feature indeed. Dual Shock support? Will it support PS analogue controllers via DirectPad Pro?


98

 
Quote
I've only skimmed the article, but from what I see it is all about how bad CISC cpus are. Perhaps you got the terms mixed up? Remember, Intel = CISC!

Here's what the article says.

RISC vs. CISC, get over it

This seems to have touched a nerve with a number of people, since I called the whole CISC vs. RISC argument bullshit. It is. RISC is just a simpler way of designing a processor, but you pay a price in other ways. By placing the constraint that the instruction set of a processor be fixed width, i.e. all instructions be 2 bytes, or 4 bytes, or 16 bytes in size (as with the Itanium), it allows the engineers to design a simpler decoder and to decode multiple instructions per clock cycle. But it also means that the typical RISC instruction wastes bits because even the simplest operation now requires, in the case of the PowerPC, 4 bytes. This in turn causes the code on the RISC processor to be larger than code on a CISC processor. Larger code means that for the same size code cache, the RISC processor will achieve a lower hit rate and pay the penalty of more memory accesses. Or alternately, the RISC processor requires a larger code cache, which means more transistors, and this merely shifts transistors from one area of the chip to another.

The people who declared x86 and CISC processors dead 10 years ago were dead wrong. CISC processors merely used the same kind of tricks as RISC processors - larger cache, multiple decoders, out-of-order execution, more registers (via register renaming), etc. In some cases, such as during task switching when the entire register set of the processor needs to be written to memory and then a different register set read in, the larger number of "visible" registers causes more memory traffic. This in turn puts a load on the data cache, so as with the code cache, you either make it larger and use more transistors, or you pay a slight penalty.

My point is, these idiots from 10 years ago were wrong that RISC is somehow clearly superior to CISC and that CISC would die off. It's merely shifting transistors from one part of the chip to another. On the PowerPC, all instructions are 32-bit (4 bytes) long. Even a simple register move, an addition of 2 registers, a function return, pushing a value to the stack, all of these operations require 4 bytes each. Saving the 32 integer registers alone requires 128 bytes of code, 4 bytes per instruction times 32 instructions. Another 128 bytes of reload them. Ditto for the floating point registers. So who cares that it simplifies the decoder and removes a few transistors there. It causes more memory traffic and requires more transistors in the cache.

And the decoding problem is not that big of a problem for two reasons. I'll use the example of the 68040, the PowerPC, and the x86. A PowerPC chip can decode multiple instructions at once since it knows that each instruction is 4 bytes long. A 68040 processor has instructions that are a minimum of 2 bytes long and can go up to 16 bytes in size (I think, I can't think of an example off the top of my head that's longer than 16). Let's say 16. The necessary bits required to uniquely decode the instruction are usually found in the first 2 bytes of the instruction, 4 bytes for floating point. That's all the decoder needs to figure what this instruction is. It needs to decode the additional bytes only in cases of complex addressing modes. This is one area where Motorola screwed up (and likely decided the fate of the 68K) is that they truly made a complex instruction that requires decoding of almost every byte.

In the case of x86, Intel either lucked out or thought ahead and made sure that all the necessary bits to decode the instruction are as close to the beginning of the instruction as possible. In fact, you can usually decode an x86 instruction based on at most the first 3 bytes. The remaining bytes are constant numbers and addresses (which are also constant). You don't need to decode, say, the full 15 bytes of an instruction, when the last 10 bytes are data that gets passed on down into the core. So as one reader pointed out in email, Intel stuck with the old 8-bit processor techniques (such as the 6502) where you place all your instruction bytes first, then your data bytes. In the case of the 6502, only the first byte needed decoding. Any additional bytes in the instruction were 8-bit or 16-bit numeric constants.

So decoding x86 is quite trivial, almost as easy as decoding RISC instructions. AMD seems to have figured out how to do it. Intel almost had it figured out in the P6 family (with only the 4-1-1 rule to hold them back), and then for the Pentium 4 they just decided to cut features and just gave up on the whole decoding things. That's Mistake #3 on my list of course, but this in no way demonstrates how superior fixed instruction sizes are over variable sized instructions.

Over the years, CISC and RISC have kept pace with each other. Sure, one technology may leap ahead a bit, then the other catches up a few months later. Neither technology has taken a huge lead over the other, since the decision whether to used fixed or variable sized instructions and whether to have 8 16 32 or 64 registers in the chip are just two factors in the overall design of the processor. Much of the rest of the design between RISC and CISC chips is very similar. And over time, ideas get borrowed both ways


 

Quote
True. Haven't read the article, but from what I've learnt at fast clock speeds RISC is far better than CISC.
For those who aren't sure about the difference: RISC processors complete an instruction very fast (maybe in one clock cycle) but only have very simple instructions.

CISC processors can take a while to do an instruction - up to 10 clock cycles or so - but have lots of instructions which can do slightly more complex things.

So if you have a 500MHz CPU and it's CISC, it might only be executing 50M instructions per second, if those instructions all took 10 cycles. Of course, other things like waiting for disk access/other devices to do things slow it down further.

In contrast, a RISC processor could execute 50M instructions per second with a clock rate of only 100MHz, or maybe 200MHz - because each instruction executes so quickly.

You might say "So what? Surely it takes more instructions to do everything with RISC because each instruction only does one simple thing". Yes - in theory. In practice, nobody codes in assembler, so no program ever uses the full potential of the hundreds of different commands on a CISC CPU. Compilers aren't always 100% correct about the absolute most efficient CPU instructions to generate.

Whereas with a RISC, because there aren't so many instructions, it's quite easy for compilers to generate code which does use the most efficient instructions - there aren't many possibilities to choose from!

So on modern PC's, RISC generally outperforms CISC unless you handcode everything in assembler, and noone does that. It takes too long.

Obviously both of you need to read the article more thoroughly. It's quite long and a good offline read.

Quote
No, MS haven't declared the price of the XBox. They haven't declared release dates, either (and before you say the end of this year or something, Gates was quoted as saying he wouldn't release it until he got 3x the graphics performance of his rivals). All the estimates say that at $300 MS would be selling at a big loss. While they probably will sell at a loss, there is a limit. So more than $300 looks likely, and European prices always go through the roof compared to the US.

Even at US$300, Sony is selling the PS2 at a loss too ( covered by licensing fees ). So I think Microsoft will do the same thing.

Quote
Add to that the fact that specs haven't even been finalised yet - so games can't be relied on to come out at a particularly fast rate - and the XBox does not look like a good bet. I walked into a high street shop a few weeks ago, and they had PS2's for sale over the counter, and a pretty respectable range of titles for it. Now, MS aren't releasing XBox in Europe to start off with, so even if they DO make a release date by the end of the year (I suppose it *could* happen) it'll never make it to the UK before mid-late 2002 - so it'll be 2003 before you can buy it over-the-counter. By Moores law, in 2003 the average PC will be running well over 1GHz, and a top-end model will be at least 2Ghz. Hmmm. Not much competition, I feel.

[Oh yes: Specs. Microsoft is well known for sticking to "predicted" specs, isn't it? Remember the required specs for Win95? A 386 with 4MB RAM. Yes, MS, I think that'll work. All the graphics we've seen so far are based on "predictions" of what the XBox can produce. Nobody actually knows what it'll do in reality.

I think the X-Box spec is pretty much finalised already, given that nVidia is already producing the GPU and the MCPX chips for the X-Box. It's just that the specs haven't been publicised as much as I thought they would be. In general, the information available today should be enough to gauge how good X-Box performance will be.



99

 
Quote
The main graphics chipset for the X-Box is joint developed by Nvidia and Microsoft. What's stopping Nvidia for doing the same thing for the PC?

Without any competition, nVidia won't make that technology available to PC users. Do you think nVidia is mad enough to release a new chip ( let's say the NV25 ) that would only compete with their own product, the NV20? It's just business strategy from nVidia. If ATI or the others can't release a worthy product that can compete with the NV20, nVidia won't release a new chip, because if they did, they'd find themselves competing with their own product ( the NV20 ), which is bad business sense. nVidia can do it of course, but the timing of the release has to be taken into account too.


 

Quote
Go to nvidia.com and readup, the geforce 3 is supposed to surpass or equal the xbox in everyway. If you're talking bout the RAM it uses, the GEFORCE 3 will not be using DDR RAM or RDRAM, it will be using something new.

The GeForce 3 will use DDR-SDRAM just like the GeForce2 Ultra ( 4ns 128-bit DDR-SDRAM ). It's just that the NV20 will have a much better memory controller than what the NV15 offered, and thus make more efficient use of the available bandwidth.


 

Quote
XBox has 64MB of RAM for *everything*? Jesus, that's crap. The PC doesn't *need* an incredibly fast data bus because it has more RAM, generally speaking. With only 64MB RAM I can see the XBox will be frantically swapping data in and out of memory to make room for textures and sound effects .... PC won't have that problem, once it's loaded into RAM once, that's it. Same with the PS2; it has an overfast data bus to compensate for the fact that it has (compared to a modern PC) little RAM to store textures etc in.

The Micron DDR-SDRAM for the X-Box has a total of 6.4GB per second of bandwidth, so I'm sure that flushing textures in and out is easy.


 

Quote
The fact remains most people already *have* a PC so I doubt the XBox would be cheaper than upgrading. Say I wanted to fully upgrade my PC to a 1Ghz+ monster. What'd it cost?

New M/Board. < 100UKP
New CPU 100UKP
New GFX card. 100UKP
Oh, that's it.

You can get a GeForce3 ( which now has an MSRP of 600 pounds sterling ) for 100 pounds sterling at the end of the year? Or did you mean to buy only a GF2 MX? Don't expect the NV20 price to drop that fast, unless ATI and the others give nVidia some very intense heat.


 

Quote
The XBox, for anyone with a half-decent PC already, is more expensive than upgrading and won't give you much better performance. I'm not planning on buying a PS2, but at least it's not more than majorly upgrading my PC, and the price *will* fall.

Has Microsoft released the price of the X-Box already? No, I don't think so. So how do you know the X-Box will cost more than 300 pounds sterling? I'll keep my opinion on the X-Box price until Microsoft announces it; only then will I say whether upgrading a PC is cheaper than buying an X-Box.



100
 
Quote
Point taken. What about DirectX 8.1?

It wasn't a major release. Even the folks on the Microsoft beta newsgroups were caught off-guard by its release.

 

Quote
Release dates:

Geforce 256: September 1999
Geforce 2: Spring 2000
Geforce 2 Ultra: Late september 2000
NV20/GeForce 3: February 27, 2000

The GeForce2 ( NV15 ) was released in March 2000 and its successor, the NV20, came out in February. Clearly they've missed their 6-month release cycle. I don't consider the GF2 Ultra a new card because it uses the same chip as the GF2 GTS.

 

Quote
I do want to see how those emulator programmers try to emulate the way X-Box handles data streaming, Dolby Digital AC-3 processing and large texture handling routine in PC; that 3 are just for examples."

I'm no programmer. Explain briefly what the problems are and I'll try to answer.

How will the programmers of any X-Box emulator simulate the way the X-Box transfers high-res textures over high-speed data buses ( we're talking gigabytes of data per second ) on an AGP 4x port that in practice doesn't even deliver 1 gigabyte per second of bandwidth?

How will the programmers emulate Dolby Digital AC-3 processing ( which can take a lot of CPU time ) on today's soundcards, and also dodge the possible legal issues from Dolby Laboratories?

Those two are only examples. More issues, such as the DVD data transfer mechanism etc., must also be taken into account.

 

Quote
Where'd you get that from? My understanding is that:
1. The X-box only has 64MB RAM
2. It's Unified RAM, not DDR

Which RAM do you think the X-Box will use? The Dreamcast uses SDRAM, the PS2 uses RDRAM. As far as I know, Micron will supply the DDR for Microsoft. The "unified RAM" term means that all 64MB of RAM is shared by all parts of the console, i.e. textures, sound buffers, game data, etc. But the X-Box itself uses 128-bit DDR-RAM of the kind previously found only in video cards.

 

Quote
Even if the market does slow a bit, nVidia has 10 months till the end of the year. More than enough time to develop and release a post NV20 card. If they don't, someone else will. Otherwise, PC Gamers in the US (very hardcore bunch) will kill.
I don't see how a P3 733MHz with an NV20 based card will be able to match a 1-2 GHz processor with a second or third generation NV20 card in terms of FPS. I'll be happy to compare framerates with you when the time comes, provided I have one of those PCs.

I believe there will be no new nVidia video chip, judging by the condition of their current competitors. Unless ATI or Matrox or others like NEC can come out with worthy competition for the NV15, there's no reason for nVidia to continue their policy of one product release every 6 months, as that policy is only effective when there's competition. Furthermore, nVidia is already making moves to enter the soundcard market, which will keep them busier.

The hardcore bunch makes up only a very tiny market for nVidia products, so it's highly unlikely they can make nVidia suffer. It's those OEM deals from Dell, Compaq and the others that nVidia is interested in, and those companies don't give a damn whether nVidia releases a new chipset every 6 months or every year, as they're resellers themselves. I'm confident that if ATI and Matrox can field worthy competitors to the NV20, nVidia will resume the 6-month policy again. That's why competition is good.


 

Quote
Lot's of reasons:
1. A P733 MHz, no matter *how* optimized, will never be as fast as a 1466+ MHz processor.
2. Console and PC Games are in most cases very different. I'd prefer it stayed that way. If I ever want to play console, I go to a friend's or get the emulator.

I still don't think a 2GHz processor will be necessary for X-Box emulation. Probably 1GHz with a GeForce 3 will be enough.

Why do you need 1466MHz when 733MHz ( it's 667MHz actually, Microsoft has slowed the X-Box down ) will get the job done? The X-Box is fast NOT only because of CPU processing power, but also because of its kick-ass video processor, operating with a lot more bandwidth than the NV20 can even dream of, and because the whole system runs on data buses much faster ( 200MHz ) than the limits of the PCI ( 33MHz ) and AGP ( 66MHz ) buses inside PCs. Not only the CPU, the whole console is optimized for gaming ( and other things? ): the sound processor, the graphics etc.

I don't know if a 1GHz machine with an NV20 can emulate the X-Box, but AC-3 alone will already take a lot of CPU time.

[This message has been edited by cHiBiMaRuKo (edited March 06, 2001).]


Pages: 1 2 3 [4] 5 6