FF8 Tools / Re: [FF8] Field Importer/Exporter - PuPu (v2.1)
« on: 2017-03-12 05:13:26 »
Sorry again! I uploaded the wrong version, as zwuw let me know. Here is the most recent version -- PuPu_v2.1
Quote: "Is there a general debug tool I can run with FF8 to try to get more information for you guys about this?"

PIX for Windows. Sort of hard to get a copy these days, but I'm sure they're out there.
Quote: "I've noticed it only lags depending on the resolution you set... but once I set it to 1920x1080, it's obviously there."

Now that's an interesting observation. I'm using a 1920x1080 display myself, and I noted animation lag early in Tonberry's development, but assumed it was something I wasn't understanding about FF8's DirectX. If it doesn't occur at all at resolutions below that, it is certainly a hint towards a solution.
Quote: "Never happened on earlier versions (1.61 and back). Those places were always more pixelated, though. I use this spot quite frequently to test textures while gathering codes. It could be something on my end; never know, really. I only took a screenshot, but what's really happening near the fountain is that it's switching between multiple textures where there is supposed to be animation for the fountain."

If it used to be pixelated there, this might be a good omen.
Quote: "The Steam Overlay has been a problem since the earliest versions of Tonberry. At this point our efforts are focused on making Tonberry run as efficiently as possible without the overlay enabled."

The Steam Overlay is itself a d3d injector and, to my knowledge, 'there can only be one'.
Quote: "Actually, Berrymapper already integrates this algorithm, otherwise it wouldn't read the character textures. The problem is that sometimes the top and bottom textures are switched for the same character. Berrymapper can't identify which texture (top or bottom) is useful, so it delivers both top and bottom hashcodes; you have to choose manually which one to use."

Ahh, I see. Sorry for my ignorance about your program -- I haven't had time to test it myself. Since the hash algorithm is not symmetrical across the two quarters, the hash codes for each are different. In the future this should be addressed so that a single hash code can represent each texture no matter which position it is in. For now, perhaps Tonberry can be modified to add 'top' or 'bottom' to each of those names, so it can use both codes.
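To illustrate the suffix idea, here is a minimal sketch: both the top-quarter and bottom-quarter hash codes for one texture live in the map under suffixed names, and both resolve to the same replacement file. The hash values, names, and map layout here are assumptions for illustration, not Tonberry's actual data format.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical objmap: both quarter hashes for one character texture map to
// the same base name, with a suffix recording which quarter produced the code.
std::unordered_map<unsigned long long, std::string> buildObjMap() {
    std::unordered_map<unsigned long long, std::string> objmap;
    // Example hash codes (made up) -- real codes would come from Berrymapper.
    objmap[0x1111ULL] = "squall_face_top";
    objmap[0x2222ULL] = "squall_face_bottom";
    return objmap;
}

// Strip the positional suffix so both entries resolve to one texture file.
std::string replacementFile(const std::string& entry) {
    const std::string suffixes[] = {"_top", "_bottom"};
    for (const std::string& s : suffixes) {
        if (entry.size() > s.size() &&
            entry.compare(entry.size() - s.size(), s.size(), s) == 0) {
            return entry.substr(0, entry.size() - s.size()) + ".png";
        }
    }
    return entry + ".png";
}
```

With this scheme a modder ships one `squall_face.png`, and either hash code (top or bottom placement) finds it.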
Quote: "...where the tiles are flipped to the opposite sides of the image (top/bottom). I'll have all of the codes if I use all 4 objmap codes for each enemy. I think it has something to do with whether the texture is loaded with multiple rows of enemies..."

I sent some relevant info to someone in a PM the other day regarding what it means when images fall into different dump folders. I remember encountering this problem and designing a solution for it (this is why the objmap file exists, but no one uses it and I haven't tested whether it works):
If I provide all of these codes (objmap codes, actually), then almost every instance of that enemy works unless they share a hashcode.
-unsupported: size/format is not yet supported (only 256x256 is currently supported)
Originally I programmed Tonberry to only care about 256x256 textures, which is the size of the memory image buffer. The hashing algorithm only looks at the left half (128x256) of this square, since that is where the last image loaded into memory lives. I later added the getobj() function, which checks the top-left and bottom-left quarters of memory for a single texture, since I noticed many objects like save points and many character textures use only a quarter of the memory (128x128); it searches through the hashes in objmap.csv. I'm not sure anyone uses or knows about that functionality, but maybe they do (it might explain why many character hashes fail). Anyhow, unsupported textures fall outside of those ranges, and should really only occur when the texture is larger than 256x256 (video frames, for example).
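The half/quarter layout above can be sketched like this. This is a simplified stand-in, not the real source: one byte per pixel instead of a full pixel format, and a plain FNV-1a fold instead of Tonberry's actual hash, but the regions match the description (left 128x256 half for the main hash, 128x128 left quarters for getobj()-style lookups).

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Assumed layout: a 256x256 buffer, one byte per pixel for simplicity.
// The main hash reads the left 128x256 half; getobj()-style hashing reads
// the 128x128 top-left or bottom-left quarter.
enum class Region { LeftHalf, TopLeftQuarter, BottomLeftQuarter };

uint64_t hashRegion(const std::vector<uint8_t>& buf, Region r) {
    const int W = 256;
    int x0 = 0, y0 = 0, w = 128, h = 256;           // default: left half
    if (r == Region::TopLeftQuarter)    { h = 128; }
    if (r == Region::BottomLeftQuarter) { y0 = 128; h = 128; }
    uint64_t hash = 1469598103934665603ULL;         // FNV-1a offset basis
    for (int y = y0; y < y0 + h; ++y)
        for (int x = x0; x < x0 + w; ++x) {
            hash ^= buf[y * W + x];
            hash *= 1099511628211ULL;               // FNV-1a prime
        }
    return hash;
}
```

A texture that only fills the top-left quarter would get a distinct code from one filling the bottom-left quarter, which is exactly the top/bottom ambiguity discussed above.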
-nomatch: not yet hashed, but will be in the future with modder help
In other words, there isn't any hashcode in the csv files that matches the image (hasn't been replaced yet by modders)
-nomatch2: hashed once, failed second round of hashing (shouldn't happen)
This happens when two textures are so similar that the second hash fails too.
-noreplace: hashed and intended to be replaced but the new texture failed to load or didn't exist
-replaced: successfully replaced
-error: texture not otherwise accounted for (shouldn't happen)
Last bucket category, no images should really fall into it
I only wrote code in a handful of files. For D3D9CallbackSC2 I wrote d3d9Callback.cpp/h and GlobalContext.cpp/h; ExtraCode was just unused code. For D3D9Interceptor, I wrote d3d9Wrapper.cpp/h and Globals.cpp/h. Almost everything that needs to be modified should be in GlobalContext.cpp, I think; there you will find the hash algorithms and the logic for replacing images. It took me a long time to understand how the DirectX code works, but if someone simply wants to change the way hashing works, they shouldn't need to know that anyway. I think you have a good idea for bundling hash files with their mods -- that would make things easier to work with and to update. The biggest thing that is needed, however, is new, smarter hash algorithms that don't suffer from collisions. This may fix many problems and, with more research, may also reduce the animation lag. To compile the code you'll need Microsoft Visual Studio C++ 2010 Express (which is free). I believe you can just open the vcxproj file and then build the DLL. You might have to mess with some configuration, but hopefully it is already configured with the libraries and paths. I remember the setup being a pain in the ass, but you might not have to do all that since I sent out the vcxproj file.

You may need to modify some path variables in the build config to match your local directories.
Method: Find 100 or so pixel locations that generate a unique hashcode for ~10,000 images.
Perform transforms on the images, then compare the transforms to place images into subsets. Repeat until there are maybe no more than 100 images per subset... I don't know; it might need more or less -- you'll have to find out when you do it. OpenCV is probably the best codebase to use for that.
References:
http://reference.wolfram.com/language/guide/MathematicalMorphology.html
http://www.seas.upenn.edu/~bensapp/opencvdocs/ref/opencvref_cv.htm
Extract from each subset the set of pixels that differentiate each image from the others. Then compare those pixel sets against the ones from each other subset and keep the pixels that are consistent across a large percentage of the subsets.
Boom, you have your essential pixels for the hash code.
Quote: "Could you tell me what you want to improve in the hashing algorithm? The main problem is memory consumption during worldmap exploration. I think the problem is that there are a lot of tiny textures that are loaded from the same big texture, so each time a new portion of the worldmap appears on screen, the big texture is loaded and the hashing algorithm is run several times as the camera moves. This is not the case with field or character textures because they are always on camera."

You hit the nail right on the head with this post. When I built Tonberry, I did not take any time to consider these inefficiencies regarding animations/world map. I also did not spend much time thinking about the best way to select pixels for the hashing algorithms; my lazy solution was to create the hash2map algorithm by hand-selecting pixels I thought would help distinguish textures. The second algorithm is only run when the first one fails (has a collision), of course. There must be a better way to do this, or at least a better way to select pixels -- I would be rather surprised if what I came up with so quickly was really the best possible solution.
About the collisions: it is only an assumption, but I think two images sometimes get the same hashcodes because they are almost the same texture, only slightly blurred (for depth-of-field animation). So it may happen that the comparison between pixels gives the same result for a blurred texture:
Imagine a white texture with a black circle in the center. Even if I blur the texture, the center of the black circle will still be darker than the borders, so running the hashing algorithm gives the same result.
This error won't happen if the compared pixels are close enough.
You necessarily need to run a second algorithm that reads intermediate pixels; that's the reason for the hash2map algorithm.
http://s36.photobucket.com/user/Killerx20/media/Midgar%20Renders/NightTest6.jpg.html

Holy... that guy has so many near-render-worthy scenes that could be added to the Team Avalanche project. Has anyone contacted him?
This guy's last rendering looks like he's getting near done; it must be at least 70% complete? SpooX, is this the same model you're working on with this guy? Looks awesome, btw.
Quote: "I agree, GitHub is a definite must for most programs like these."

I don't mind if someone makes one from the source and posts the link -- I'll add it to the main post.