Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Omzy

Pages: [1] 2 3 4 5 6 ... 9
1
Sorry again! I uploaded the wrong version, as zwuw let me know. Here is the most recent version -- PuPu_v2.1

2
Sorry for the delay; here is the source code if anyone is interested in improving this program: -- See link in later post

It uses Visual Studio. If I recall, it's a fairly simple program, so it shouldn't be too hard to dig into if you have some basic programming knowledge.

3
Ahh, ok, sorry for the misunderstanding. I'll try to dig up the source code and post it here after work. I recall it being a fairly simple program; if anyone wants to dig into it and improve it, it would probably be pretty easy.

4
This is the reasoning behind the facepalmer and Project Eden scripts. The entire scene must be composed before the resize is performed for each individual animation, or you will get bending wherever two blocks come together on the original texture. This is why upscaling with this method is extremely tedious without a powerful script. Look into the Project Eden script code. Can anyone else elaborate on the explanation here? I'm busy at work.

5
These are the kinds of bands you expect to see when you upscale a texture before it is reassembled into its layers. It almost looks like you used the raw textures from the archives. You must properly export the textures using PuPu to get the layers; upscaling this way won't work. This is simply an artifact of upscaling and doesn't really have anything to do with PuPu. PuPu performs the original export and assembly, you do your magic, then you reimport and it disassembles back into blocks.

6
PuPu just pulls the bitstream exactly as it is in the archives and rearranges the tiles to match their proper coordinates. It doesn't influence any pixel values at all, so it is not the cause of banding--the original textures are. Please let me know if I'm misunderstanding the question.
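
If it helps to picture it, the whole operation is basically a per-tile copy--something along these lines, where the Tile struct, the tile dimensions, and the buffer layout are hypothetical stand-ins rather than the actual PuPu code:

Code
#include <cstdint>
#include <cstring>
#include <vector>

struct Tile
{
    const uint8_t* src; // the tile's raw bytes, exactly as stored in the archive
    int destX, destY;   // where the tile belongs in the assembled image
};

// Copy each tile's rows into its proper place. No pixel is recalculated or
// filtered, so the output bytes are bit-identical to the archive data.
void assembleTiles(std::vector<uint8_t>& out, int outWidth,
                   const std::vector<Tile>& tiles,
                   int tileW, int tileH, int bytesPerPixel)
{
    for (const Tile& t : tiles)
    {
        for (int row = 0; row < tileH; ++row)
        {
            const uint8_t* srcRow = t.src + row * tileW * bytesPerPixel;
            uint8_t* dstRow = out.data() +
                ((t.destY + row) * outWidth + t.destX) * bytesPerPixel;
            std::memcpy(dstRow, srcRow, tileW * bytesPerPixel);
        }
    }
}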

7
Team Avalanche / Re: [FF7] Recreating scenes in 3D
« on: 2016-05-17 10:53:05 »
Amazing work =)

8
Is there a general debug tool I can run with FF8 to try to get more information for you guys about this?
PIX for Windows. It's sort of hard to get a copy these days, but I'm sure they're out there.

9
I've noticed it only lags depending on the resolution you set..... But once I set it to 1920x1080 it's obviously there.
Now that's an interesting observation. I'm running at 1920x1080 and I noted animation lag early in Tonberry's development, but assumed it was something I wasn't understanding about FF8's DirectX. If it doesn't occur at all at resolutions below that, it is certainly a hint towards a solution.

10
Actually, now that I remember, if I detected that there was no match or an unresolvable collision, I ignored it and let d3d keep the original texture there. So if there are now colliding textures in previously pixelated spots, it might mean that 'ignore' was turned off and it is just placing the last thing in memory there. Again, sorry if I'm overlooking things--it has been a while since I looked at the code.

11
Never happened on earlier versions (1.61 and back). Those places were always more pixelated, though. I use this spot quite frequently to test textures while gathering codes. It could be something on my end. Never know, really. I only took a screenshot. But what's really happening near the fountain is that it's switching between multiple textures where there is supposed to be animation for the fountain.
If it used to be pixelated there, this might be a good omen.

12
The Steam Overlay has been a problem since the earliest versions of Tonberry. At this point our efforts are focused on making Tonberry run as efficiently as possible without the overlay enabled.
The Steam Overlay is itself a d3d injector and, to my knowledge, 'there can only be one'.

13
Actually, Berrymapper already integrates this algorithm; otherwise it wouldn't read the character textures. The problem is that sometimes the top and bottom textures are switched for the same character. Berrymapper can't identify which texture (top or bottom) is useful. Berrymapper delivers both top and bottom hashcodes. You have to choose manually which one to use.
Ahh, I see. Sorry for my ignorance about your program--I haven't had time to test it myself. I can see that since the hash algorithm is not symmetrical across the two quarters, the hash codes for each are different. In the future, this should be considered so that we can have only a single hash code to represent each texture no matter which position it is in. For now, perhaps Tonberry can be modified to add 'top' or 'bottom' to each of those names, to allow it to use both codes.

14
Ah, ok, I didn't realize that. In that case, BerryMapper may benefit from a simple rewrite for all objects (including character textures and other 'quarter' memory objects). Here is the algorithm for hashing objects that takes these quarters of memory into account:
http://pastebin.com/N3VKFuJs

Edit: This algorithm simply takes the pixel values from Hash_Algorithm_1 from the top half, up to 32 pixels, and the same for the bottom half, up to 32 pixels. It then produces a hash value from those 32 pixels only. If these values are plugged in, Tonberry should (theoretically) recognize them regardless of whether they are top left or bottom left.
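
Roughly, the quarter handling looks like this--just a sketch of the description above, not the pastebin code; the buffer layout, the Pos struct, and the sample positions are made up for illustration:

Code
#include <cstdint>
#include <vector>

struct Pos { int x, y; };  // a sample position inside a 128x128 quarter

// buffer: the 256x256 texture memory, one packed RGB value per pixel (row-major).
// quarterTop: 0 for the top-left quarter, 128 for the bottom-left quarter.
// samples: the pixel positions (up to 32 of them) taken from Hash_Algorithm_1,
//          all restricted to fit inside a single quarter.
uint64_t hashQuarter(const std::vector<uint32_t>& buffer, int quarterTop,
                     const std::vector<Pos>& samples)
{
    const int width = 256;
    uint64_t hash = 0;
    for (size_t i = 0; i + 1 < samples.size(); ++i)
    {
        // Read both samples relative to the quarter's own top-left corner.
        uint32_t a = buffer[(quarterTop + samples[i].y) * width + samples[i].x];
        uint32_t b = buffer[(quarterTop + samples[i + 1].y) * width + samples[i + 1].x];
        hash = (hash << 1) | (b > a ? 1u : 0u);  // same comparison-bit scheme as Hash_Algorithm_1
    }
    return hash;
}

// The quarter offset is the only difference, so a texture loaded into the
// top-left quarter and the same texture loaded into the bottom-left quarter
// produce identical codes, letting one objmap entry cover both positions.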

15
where the tiles are flipped to the opposite sides of the image (top/bottom). I'll have all of the codes if I use all 4 objmap codes for each enemy. I think it has something to do with whether the texture is loaded with multiple rows of enemies...

If I provide all of these codes (objmap codes, actually), then it seems like almost every instance of that enemy works unless they share a hashcode.
I sent some relevant info to someone in a PM the other day regarding what it means when images fall into different dump folders. I remember encountering this problem and designing a solution for it (this is why the objmap file exists, but no one uses it and I haven't tested whether it works):

Quote
-unsupported: size/format is not yet supported (only 256x256 is currently supported)
Originally I programmed Tonberry to only care about 256x256 textures, which is the size of the memory image buffer. The hashing algorithm only looks at the left side of this square, since the left half (128x256) contains the last image loaded into memory. I later added the getobj() function, which checks the top/bottom left quarters of memory for a single texture, since I noticed many objects like save points and many character textures use only a quarter of the memory (128x128); it searches through the hashes in objmap.csv. I'm not sure that anyone uses or knows about that functionality, but maybe they do (it might explain why many character hashes fail). Anyhow, 'unsupported' textures fall outside of those ranges and really should only occur if the texture is larger than 256x256 (video frames, for example).
-nomatch: not yet hashed, but will be in the future with modder help
In other words, there isn't any hashcode in the csv files that matches the image (it hasn't been replaced yet by modders).
-nomatch2: hashed once, failed second round of hashing (shouldn't happen)
When textures are so similar that the second hash fails too
-noreplace: hashed and intended to be replaced but the new texture failed to load or didn't exist
-replaced: successfully replaced
-error: texture not otherwise accounted for (shouldn't happen)

The last bucket is a catch-all category; no images should really fall into it.

16
From: http://forums.qhimm.com/index.php?topic=15291.msg223843#msg223843
Quote
I only wrote code in a handful of files. For the D3D9CallbackSC2 I wrote d3d9Callback.cpp/h and GlobalContext.cpp/h; ExtraCode was just unused code. For the D3D9Interceptor, I wrote d3d9Wrapper.cpp/h and Globals.cpp/h. Most of what would need to be modified should be in GlobalContext.cpp, I think. There you will find the hash algorithms and the logic for replacing images. It did take me a long time to understand how the DirectX code works, but if someone simply wants to change the way hashing works, they shouldn't need to know that anyway. I think you have a good idea for bundling hash files with their mods--that would make things easier to work with and to update. The biggest thing that is needed, however, is to write new, smarter hash algorithms that don't suffer from collisions. This may fix many problems and, if more research is done, may reduce problems with animation lag. To compile the code you'll need Microsoft Visual Studio C++ 2010 Express (which is free). I believe you can just open the vcxproj file and then build the DLL. You might have to mess with some configuration, but hopefully it is already configured with the libraries and paths. I remember the setup being a pain in the ass, but you might not have to do all that since I sent out the vcxproj file.
You may need to modify some path variables in the build config to match your local directories.

17
Currently the hash algorithm checks a pixel's color against the next pixel in the series and records a binary value for whether the next color is a higher RGB value or not. The resulting binary string of pixel comparisons is converted to a decimal number, which is the final hash code. There is no rule anywhere that says this is how the hash code has to be built.
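
In rough C++ terms it boils down to something like this (illustrative only--the actual pixel series, the RGB ordering, and the code width in the source may differ):

Code
#include <cstdint>
#include <vector>

struct Rgb { uint8_t r, g, b; };

// Collapse a pixel's channels into one number so two pixels can be ordered.
static uint32_t rgbValue(const Rgb& p)
{
    return (uint32_t(p.r) << 16) | (uint32_t(p.g) << 8) | uint32_t(p.b);
}

// pixels: the pre-selected series of sample pixels, in the order the
// algorithm walks them.
uint64_t comparisonHash(const std::vector<Rgb>& pixels)
{
    uint64_t hash = 0;
    // One bit per comparison: 1 if the next pixel's RGB value is higher.
    for (size_t i = 0; i + 1 < pixels.size() && i < 64; ++i)
        hash = (hash << 1) | (rgbValue(pixels[i + 1]) > rgbValue(pixels[i]) ? 1u : 0u);
    // Read back as a number (decimal when written out), this bit string is the code.
    return hash;
}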

Some food for thought on the matter (from my gf who is into image processing):
Quote
Method: Find 100 or so pixel locations that generate a unique hashcode for ~10,000 images.

Perform transforms on images, then compare the transforms to place images into subsets. Repeat until there are maybe no more than 100 images per subset... I don't know, it might need more or less; you'll have to find out when you do it. OpenCV is probably the best codebase to use for that.
References:
http://reference.wolfram.com/language/guide/MathematicalMorphology.html
http://www.seas.upenn.edu/~bensapp/opencvdocs/ref/opencvref_cv.htm

Extract from each subset the set of pixels that differentiate each image from another. Then compare those sets of pixels from each other subset and extract the ones that are consistent in a large percentage of the subsets.

boom, you have your essential pixels for the hash code

So you could separate images based on the overall average color, for example, and make sure you have equal-sized subsets. Then you chart the pixels in each subset that are different amongst most images in the set, and those become your pixels.

An alternative approach I thought of would be to take the entire database and, for each pixel position in each image, record the color of the pixel so you can build a frequency chart (histogram) for each pixel position. You could then identify which pixels have the biggest spread (flattest bell curve) and use those pixels. This would be easy to write an algorithm for and may yield good results. If there were still collisions, a second step could be used.
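
A quick sketch of that frequency-chart idea, just to make it concrete (not existing code--the 8-bit reduction of each pixel and the entropy ranking are my stand-ins for "biggest spread"):

Code
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdint>
#include <vector>

struct PixelSpread { size_t position; double entropy; };

// images: each entry is one dumped texture, reduced to the same size and
// flattened to one 8-bit value per pixel position.
std::vector<PixelSpread> rankPixelsBySpread(const std::vector<std::vector<uint8_t>>& images)
{
    const size_t pixelCount = images.front().size();
    std::vector<PixelSpread> ranking;

    for (size_t p = 0; p < pixelCount; ++p)
    {
        // Build the frequency chart (histogram) for this pixel position.
        std::array<size_t, 256> histogram{};
        for (const auto& img : images)
            ++histogram[img[p]];

        // Shannon entropy as a stand-in for "flattest bell curve": the more
        // evenly the colors are spread across images, the higher the entropy.
        double entropy = 0.0;
        for (size_t count : histogram)
        {
            if (count == 0) continue;
            double prob = double(count) / images.size();
            entropy -= prob * std::log2(prob);
        }
        ranking.push_back({p, entropy});
    }

    // Highest-spread pixels first: these are the candidates for the hash.
    std::sort(ranking.begin(), ranking.end(),
              [](const PixelSpread& a, const PixelSpread& b) { return a.entropy > b.entropy; });
    return ranking;
}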

18
Could you tell me what you want to improve in the hashing algorithm? The main problem is the memory consumption during worldmap exploration. I think the problem is that there are a lot of tiny textures that are loaded from the same big texture. So each time a new portion of the worldmap appears on screen, the big texture is loaded, and the hashing algorithm is run several times as the camera moves. This is not the case with field textures or character textures because they are always on camera.

About the collisions: it is only an assumption, but I think sometimes two images have the same hashcodes because they are almost the same texture, only slightly blurred (for depth-of-field animation). So it may happen that the comparison between pixels gives the same results for a blurred texture:

Imagine a white texture with a black circle in the center. Even if I blur the texture, the center of the black circle would still be darker than the borders, so if I run the hashing algorithm it would give the same result.
This error won't happen if the compared pixels are close enough.

You necessarily need to run a second algorithm that reads intermediate pixels. That's the reason for the hash2map algorithm.
You hit the nail right on the head with this post. When I built Tonberry, I did not take any time to consider these inefficiencies regarding animations/world map. I also did not spend much time thinking about the best way to select pixels for the hashing algorithms, and my lazy solution was just to create the hash2map algorithm by hand-selecting pixels I thought would help distinguish textures. The second algorithm is only run when the first one fails (has a collision), of course. There must be a better way to do this, or at least a better way to select pixels--I would be rather surprised if what I came up with so quickly was really the best possible solution.
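
For anyone digging into the code, the fallback amounts to something like this (the names and containers here are illustrative, not the actual Tonberry structures):

Code
#include <cstdint>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// hashmap:  first-pass code -> names of replacement textures sharing that code
// hash2map: second-pass (hand-picked pixel) code -> a single name
// secondHash: computed lazily, only when the first code actually collides
std::string resolveTexture(uint64_t hash1,
                           const std::unordered_map<uint64_t, std::vector<std::string>>& hashmap,
                           const std::unordered_map<uint64_t, std::string>& hash2map,
                           const std::function<uint64_t()>& secondHash)
{
    auto it = hashmap.find(hash1);
    if (it == hashmap.end())
        return "";                  // nothing hashed for this texture yet ("nomatch")
    if (it->second.size() == 1)
        return it->second.front();  // unique first-pass code, second hash never runs
    // Collision: only now compute the second hash and consult hash2map.
    auto it2 = hash2map.find(secondHash());
    return it2 != hash2map.end() ? it2->second : ""; // "" ~ "nomatch2"
}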

Edit: A potentially 'smarter' solution would be to take every texture in the memory dumps (as complete a collection of unique textures as one can amass through an entire playthrough) and run algorithms on them to build a histogram of all of the differences between pixels. This approach could yield the perfect minimal set of pixels to check, which would guarantee a collision-free set of codes. The computation and knowledge required to do this would be moderate to advanced, I think, but not beyond the realm of these forums. The eventual technique could also be documented so that it could be extended to FF7 and FF9.
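
To sketch that out (purely illustrative--a greedy pick over the dump set rather than a guaranteed minimal set):

Code
#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

// images: every unique texture from the dumps, reduced to the same size and
// flattened to one 8-bit value per pixel position.
// Returns pixel positions that give every texture in the dump a distinct
// signature (a greedy approximation of the minimal set).
std::vector<size_t> chooseDistinguishingPixels(const std::vector<std::vector<uint8_t>>& images)
{
    const size_t pixelCount = images.front().size();
    std::vector<std::vector<uint8_t>> signatures(images.size()); // values at chosen pixels so far
    std::vector<size_t> chosen;

    // Number of textures whose current signature is shared with another texture.
    auto collisions = [&]() {
        std::map<std::vector<uint8_t>, size_t> groups;
        for (const auto& sig : signatures) ++groups[sig];
        size_t n = 0;
        for (const auto& g : groups)
            if (g.second > 1) n += g.second;
        return n;
    };

    size_t current = collisions();
    while (current > 0)
    {
        size_t bestPixel = 0, bestRemaining = current;
        for (size_t p = 0; p < pixelCount; ++p)
        {
            // Tentatively add pixel p to every signature and count what remains.
            for (size_t i = 0; i < images.size(); ++i) signatures[i].push_back(images[i][p]);
            size_t remaining = collisions();
            for (auto& sig : signatures) sig.pop_back();
            if (remaining < bestRemaining) { bestRemaining = remaining; bestPixel = p; }
        }
        if (bestRemaining == current)
            break;  // nothing separates the rest (e.g. duplicate textures in the dump)
        chosen.push_back(bestPixel);
        for (size_t i = 0; i < images.size(); ++i) signatures[i].push_back(images[i][bestPixel]);
        current = bestRemaining;
    }
    return chosen;
}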

19
You'll need this new version of Tonberry. He edited the source code to support this directory structure.

20
Completely Unrelated / Re: my birthday week
« on: 2015-02-16 23:14:43 »
Happy birthday, Zara!  ;D

21
*For future modders who are hashing textures, please keep or upload a copy of the debug output textures themselves. If the hash algorithms change, this hashing will have to be re-done and it will be much easier if we have all of the original dumps.*

22
Team Avalanche / Re: Midgar City Information
« on: 2015-02-13 02:12:51 »
http://s36.photobucket.com/user/Killerx20/media/Midgar%20Renders/NightTest6.jpg.html

This guy's last rendering looks like he's getting near done--must be at least 70% complete? SpooX, is this the same model you're working on with this guy? Looks awesome btw.
Holy...that guy has so many near-render-worthy scenes that could be added to the Team Avalanche project. Anyone contacted him?

23
Team Avalanche / Re: [HD Remake] WIP Sector 5 slums
« on: 2015-02-11 02:00:07 »
Excellent work! I check here often hoping that there are new screens  :-P

24
Hey guys, to add a bit more clarity, another message I sent JeMaCheHi might also be helpful:
Quote
I only wrote code in a handful of files. For the D3D9CallbackSC2 I wrote d3d9Callback.cpp/h and GlobalContext.cpp/h; ExtraCode was just unused code. For the D3D9Interceptor, I wrote d3d9Wrapper.cpp/h and Globals.cpp/h. Most of what would need to be modified should be in GlobalContext.cpp, I think. There you will find the hash algorithms and the logic for replacing images. It did take me a long time to understand how the DirectX code works, but if someone simply wants to change the way hashing works, they shouldn't need to know that anyway. I think you have a good idea for bundling hash files with their mods--that would make things easier to work with and to update. The biggest thing that is needed, however, is to write new, smarter hash algorithms that don't suffer from collisions. This may fix many problems and, if more research is done, may reduce problems with animation lag. To compile the code you'll need Microsoft Visual Studio C++ 2010 Express (which is free). I believe you can just open the vcxproj file and then build the DLL. You might have to mess with some configuration, but hopefully it is already configured with the libraries and paths. I remember the setup being a pain in the ass, but you might not have to do all that since I sent out the vcxproj file.

25
I agree, GitHub is a definite must for most programs like these.
I don't mind if someone makes one from the source and posts the link--I'll add it to the main post.
