Show Posts


Messages - blippyp

Pages: [1] 2 3 4
Were you unable to get the saves working? Or do you plan to just release videos and packs as you progress? Either way, I'm glad to see all this moving forward.

Honestly, I still haven't had a chance to look at those, although I'm pretty sure getting them going is a no-brainer; I've simply been rather busy the past couple of days. As for how it'll be released, it will definitely be all at once, when it's 'completed' or I quit ;-P

I'm already using AngelWing's backgrounds pack, so I guess it's just a matter of replacing files. I'm guessing this pack only includes the backgrounds from the video? Luckily I just got the Ragnarok on disc 3, so I can check out Balamb :)

Well, those entire sections are completed (the bc, bd and bg folders), so whatever they correspond to in the game is updated. Much of it isn't in the video itself, just in the slideshow at the end, so if you can get to those other screens, you'll likely find bugs I'm not even aware of yet... ;-P

This is looking phenomenal, blippyp! The masks are really good too! Do you have a version people can use to try it out?

Mega Link To Backgrounds in Video

Thanks! Here ya go

It should go without saying, but of course you need Tonberry for this to work, plus anything it requires (I don't think there's anything, but read the mod page). I would also suggest using the hashmaps from AngelWing, if memory serves. I'm sure most of the people who want to test this already know this, but please make sure you back up your original folders so you can restore them when you're done.
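If anyone wants to script the backup step instead of copying folders by hand, a minimal sketch looks like this (the function name and folder names are illustrative, not part of the mod):

```python
import shutil
from pathlib import Path

def backup_folders(game_dir, folders, backup_dir):
    """Copy each original texture folder aside before overwriting it
    with mod files, so the originals can be restored later."""
    game_dir, backup_dir = Path(game_dir), Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    for name in folders:
        src = game_dir / name
        dst = backup_dir / name
        # Skip folders that don't exist, and never clobber an existing backup
        if src.is_dir() and not dst.exists():
            shutil.copytree(src, dst)
    return backup_dir
```

Restoring is then just copying the backed-up folders back over the modded ones.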

Sorry it took so long, but here's a video with my current GAN generation. It's still not perfect; there are clearly a few areas where I need to figure out what's going on, but either way, this is what I get now when I 'push a button' to build the screens. This was approximately 2K images in total, covering Balamb, the Garden and the Cave. It took me about 2-3 hours to generate all the images with their masking and import them again with Pupu, which is pretty good considering there are about 13K images in the game. I can now rebuild the entire game, with just a handful of screens needing special attention, and hopefully I'll still find fixes for those issues.

Youtube Video

I can't resist asking for at least a couple of images, especially of the battle :)

Unfortunately I misspoke (sorry, it's been a while since I've played this game). The 'battle' scenes I was referring to are when the missiles are headed to the Garden (I always think that happens at the same time as the big Garden battle for some reason), but I'm pretty sure it happens well before (the battle doesn't happen until you're near Edea's house, iirc). So there isn't much of the battle to see (most of that is video anyway). The images I was mostly referring to are from the area under the Garden where you activate the console so you can fly the ship. At the end of the video I'm about to post, I've attached a slideshow of most of the updated images you didn't get to see in the video itself. The video will be posted soon; it's uploading now, and I'll link it here when it's done.

My own trained FF7 model finished its 500,000 iterations, but the results are lower quality than my other models.
If you can share your model once it's trained, I'd like to test it to check the results on FF7 ^^

I'm impressed you actually got to 500K iterations, I'm still under 70K myself. I have no problem with sharing the model if you want it though when it's done.

Hope so. They're at all points in the game for this very reason: field upscales. Looking forward to seeing the results; it'll be nice to use them and to compare with my own tests. And happy birthday. :)

Oh I'm sure they will be extremely helpful, you probably just gave me the best present I'll get today :) haha

At my age (and clearly I'm only getting older), I'd hate to have to 'pause' production just to play a game I haven't fully played to the end in years (and even then it was on a PlayStation), just so I can verify my images. I'm certainly looking forward to playing it again (especially now), but clearly this will help a ton. And the best part is, just as you said, they're exactly for this purpose, so that's just 'too awesome' :)

It's no problem. :) I'll send them now.

I can't tell you how thankful I am for this. Thank you so much, I'm positive they will be super useful! :)

So I passed out :( haha

But I did wake up again right about the time I said I'd be here, so not too bad, I suppose :P

Unfortunately, my script crashed (I think I forgot to delete some image variables, so as time passed it just kept chewing up more and more RAM). However, I have 'hopefully' corrected the script, and it is on its merry way again, but unfortunately it's late, so that means no video tonight.
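For anyone hitting the same slow RAM creep in a long batch loop, the usual shape of the fix is just to drop references to each image before moving to the next one. A generic sketch (the load/transform/save callables are placeholders, not the actual script):

```python
import gc

def process_all(paths, load, transform, save):
    """Run transform over each image, dropping references as we go so
    memory stays flat instead of growing with every file processed."""
    for path in paths:
        img = load(path)        # large in-memory object
        out = transform(img)
        save(out, path)
        del img, out            # release both before the next iteration
    gc.collect()                # reclaim anything still lingering
```

The `del` matters mainly when something else (a log list, a cache, a closure) would otherwise keep the objects alive between iterations.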

Before I restarted the script though, I threw the backgrounds it did finish into my game to check them out, and all looks good except for one common detail I keep noticing: the lighting effects. It appears most of them don't like the new masking I'm using and leave a 'halo' around where the lights are. This is certainly a setback, but not a deal breaker so far. It doesn't appear to affect all the lights, but they are going to have to be identified and the original masking re-applied to them, which isn't the end of the world, but still kind of sucks (manual labor). But so far that's all I've noticed, so it could be worse.
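Without seeing the renderer it's hard to say, but halos around lights are a classic symptom of resampling straight (non-premultiplied) alpha: the colour of fully transparent pixels bleeds into the edges when the image is scaled. A minimal sketch of the idea, per pixel (this is the general technique, not necessarily what Tonberry does internally):

```python
def premultiply(pixel):
    """Scale RGB by alpha (0-255) so transparent neighbours can't bleed
    colour into edges during resampling, a common cause of halos."""
    r, g, b, a = pixel
    return (r * a // 255, g * a // 255, b * a // 255, a)
```

The workflow would be: premultiply, resample, then divide the alpha back out of the result before saving.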

With it being my birthday tomorrow (gotta bake a cake and stuff), I can't promise a video, but I will try; I can probably pull it off at some point.

If you need saves, I have them for just about every part of the game. Don't mind sharing them with you if it helps you out.

Yes please, that would be super helpful! I'd appreciate that so much :)

Alright, so it finished generating the images in about 40 minutes (about 2K images), so that was pretty darn fast. Based on generating Balamb, I think the masking takes about twice as long (my GPU does the images, but my nasty i5 does the masking). So I'm gonna go watch a movie, and it should be done by the time it's over; then I just need to import the images with Pupu, which shouldn't take long. So I'm guessing I'll be seeing what it's like in-game in less than 2 hours. If I'm still awake enough, I might just spit out a video tonight, which would free up tomorrow for me since it's my birthday. Seeing no issues would put such a smile on my face and would make a great b-day gift for myself :P

I'll likely put all/many of the images I can't show in-game, due to lack of saves, into a slideshow as well, so people can see what else was produced.

I found a pretty nasty logical error in my code for layering the backgrounds, and so far fixing it seems to have solved all the issues in my previous video (minus the masking, of course). I have rebuilt all of Balamb already and it seems quite functional as is; the only issue I noticed was one where the flag still kind of 'disappears' for a split second, but I think I figured out why, so hopefully that will get solved during this next batch of screens. I am also generating all of the bg folder tonight, so if all goes well, I should be able to take 10 minutes tomorrow and make a video of at least Balamb with my current GAN generation and the new masks (THANKS TO SATSUKI). They're still not perfect, but they're much better than they were, and tbh, I think it would be very hard to get better masks without actually rebuilding them by hand, which I clearly have no intention of doing myself.

With a little luck, I'll also be able to include the entire Garden in tomorrow's video as well (that's the bg folder, which is rather large). Since I'm generating the whole folder, that would cover a LOT of gameplay: not just running around the Garden, but many battle scenes, stuff like when you go on that jaunt to make the Garden fly, etc. So basically anything Garden related. Unfortunately, I personally do not have any save games that far ahead, so all I can really show is what you can see from the start of the game, which means a lot of screens I won't be able to show.

To be honest, even as the GAN is currently creating the images, it looks pretty good imo. In fact, even the NPCs are starting to fit in with the style, so I'm really pleased with the results so far, although there are some scenes with NPCs, like the ones I was complaining about before, that simply can't be fixed imo without an artist redoing them.

So fingers crossed, and hopefully by tomorrow there will be a lot of new content to show off.

I'm making another shoutout to anyone who might read this. I'm looking for a way to teleport to any point in the game so that I can check out the screens and test them without being forced to complete the entire game. Even someone's saved games would do me a world of good for testing. I have only gotten to the point where you fight Seifer and the Sorceress for the first time (that's when I discovered FFVIII had mods, and I've had tunnel vision for this background mod ever since), so I'm still not very far in the game and can't see much myself. So if anyone has saved games I can use, PLEASE LET ME KNOW, or better yet, if someone knows some way to cheat so I can teleport around, that would be even better. I'm sure some people know how to finish this game pretty easily, but I'm not one of those people; I tend to grind it out and it takes forever, so any help would be awesome, thx.

If all goes well, I will very strongly consider pushing out the entire content (it should take a couple of days, I'd think, to generate every background image in the game, including the masking) and releasing a beta version within a week, probably. It's possible (if not likely) that I'm being overly hopeful atm, but honestly, I feel it's just about ready, and if the bg folder generates with little fuss, I think it's a pretty safe bet that the rest of the game will generate fine as well, since the bg folder covers a lot of material. I'd be an idiot to think I won't run into any more surprises, but maybe I'll get lucky. :P

Okay, so a slight update on progress. Getting Satsuki's masking working was a much bigger headache than originally anticipated. Although I had a script put together to do it, and it seemed to work great, incorporating it into a recursive script that would mass-modify more than one image at a time was a nightmare. I'm sure there is some easy fix, but in the end I gave up. Best I can tell, it has something to do with how ImageMagick works, but it kept running into all kinds of memory errors for some reason. I spent probably a few hours yesterday trying to fix it with the script I had, but it just wasn't working; every attempt just caused more memory errors, for god only knows what reason.
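One workaround worth noting for anyone else who hits ImageMagick memory errors in a mass-conversion script: run one ImageMagick process per file, and cap its resource use with `-limit`. A hedged sketch (the exact limits and extra arguments are illustrative):

```python
import subprocess

def build_cmd(src, dst, extra_args=()):
    """Build an ImageMagick command that caps its own memory use.
    Spawning one process per file also keeps the calling script's
    RAM flat, since each process releases everything on exit."""
    return ["magick", str(src),
            "-limit", "memory", "256MiB", "-limit", "map", "512MiB",
            *extra_args, str(dst)]

def convert_one(src, dst, extra_args=()):
    subprocess.run(build_cmd(src, dst, extra_args), check=True)
```

For example, `convert_one("field.png", "field_4x.png", ("-resize", "400%"))` would upscale a single file in an isolated process.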

So I woke up this morning and decided to finally sit down, look at the code, and try to reproduce what it was doing in GIMP instead. Luckily it didn't take me long to determine the 'magic' behind the script, and I spent most of the day converting it over; as far as I can tell, it is working fine. I generated images from my GAN yesterday (which are really old compared to how far along it is now), but I ran my code to attach the masking to those images, and it looks like everything worked out fine.

I have to go get my daughter soon, so I might not have time to test it in-game until later, but I'm expecting it to work (although I'm also still expecting layering issues). If all goes well, I will likely re-produce the images with my latest GAN, or possibly put some effort into marking my files for the Garden backgrounds instead (for a change of scenery). Either way, assuming everything is good, I will likely produce another video in a day or two to demonstrate the state I'm currently at (tomorrow is my birthday, so I'm not sure how much work I'll put into this), but I am excited to see it too, so who knows.

The hard part now (I think) is going thru the backgrounds to decide on the 'base' backgrounds required, and then fixing the layering issues that are sure to appear in the game. My GAN is still learning as I type this, and I will likely keep it going until I have either determined the 'base' backgrounds for each screen or until it 'over-learns'. Hopefully I don't miss the mark on that; at this point I will start saving the states every 5000 iterations just to be sure, but I think it still has a long way to go before I need to worry.
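Saving states every N iterations is easy to bolt onto a training loop. A framework-agnostic sketch (the filename pattern and `save_fn` are illustrative; with PyTorch, `save_fn` would wrap something like `torch.save(model.state_dict(), path)`):

```python
import os

def maybe_checkpoint(iteration, save_fn, out_dir, every=5000):
    """Save model state every `every` iterations, so an over-trained
    (or crashed) run can be rolled back to an earlier snapshot."""
    if iteration > 0 and iteration % every == 0:
        path = os.path.join(out_dir, f"model_iter_{iteration}.pth")
        save_fn(path)
        return path
    return None
```

Calling `maybe_checkpoint(i, ...)` once per training iteration costs nothing except on the multiples of `every`.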

That's exactly the problem: I know when I'm finished there will be lots of stupid layer issues all over the game, and the only way to find and fix them will be to play the game myself. That's basically why I intend to release a beta, so hopefully others will install it and help me out by letting me know where the issues pop up. That should speed it along nicely, assuming anyone uses it of course :P

Here's my result from a quick test on this picture with my auto tool (it uses waifu as a prefilter, then ESRGAN). If you want, I can clean up the tool and give it to you for your tests ^^

That turned out quite nice. But I'm happy with what I'm using, actually (for comparison, my GAN is learning from images like the one posted in this message).

It just clearly takes a while to learn, and it won't produce the exact image, I'm sure, but I'm hoping it will learn to get close; I suspect it will take 2-3 days to pick up on the details. The really time-consuming part of my previous process was cleaning up all the mess that the other GANs made, but my GAN isn't producing that mess, so I should be able to slap a few filters on the images to get a similar effect even if it never learns the details; it depends on how close it gets, I suppose. Considering how much faster this is, I'm loving it, and I'm still only around 12,000 iterations, so I have a long way to go if I choose. I'm taking a break atm though to finish up the masking part, so I can run another test on Balamb using whatever my GAN can offer at the time and test out the masking again. If all goes well, I'll just let it keep learning until I want my computer back, and then take a couple of days to wrap this up into a beta state. Ironically, the time-consuming part of this process is now configuring a base background for each image, whereas before that was the short part haha. Aside from that, weeding out the bugs will be the biggest hassle once I've produced everything, since I'll have to locate them and then fix them kind of by hand, I suppose, which isn't that bad but is somewhat time-consuming, as I'm sure you're well aware, since afaik that's exactly what you're doing now with yours, if I'm correct.

Looks damn good! It might be a bit too aggressive on the bottom, but the overall details are beautiful. The missile looks great, as do the speakers and the wood. What kind of images are you training it with? The FF9 model was trained on a variety of concept art with a similar art direction, but I'm not too sure it would fit FF8.

Still has a long way to go, unfortunately, but if I had to use this, I would; it's a huge time saver. I'm training it using my own images, actually. So it's literally learning to convert the original files to the style I had previously committed to (which would have taken months to complete for all the images): a mix of three separate GANs combined with the filtering I put it thru in GIMP. So in essence, it's the only step needed; it's learning to skip all of that and instead resize them like I wanted in one much faster process. And if it doesn't fully work out, it at least gives me a really close base to what I've been aiming for, even if I do have to run a couple of filters over it in the end. It's definitely missing some of the finer details my previous process was catching, but it's still learning, so I'll reserve judgement until I've stopped that process and committed to whatever training I end up with. Considering the time saved, I'm willing to go with a slightly different result. All my previous attempts at a 'faster' process were simply lacking too much. This is close, but still a little blurry for my liking; I'm hoping some further training will fix that, but if not, I'm willing to deal with it.

I tested the idea of using a GAN for masking yesterday and ultimately gave up on it after a while; it was working well, but clearly the masks weren't going to line up. I think there are methods for fixing this, but I'm giving up on that aspect for now, and I began working on my GAN to replace my own filtering process instead. The results for this were drastically better. Below is an example of the type of images it's already producing at only about 194 epochs (about 7000 iterations). This is already putting a huge smile on my face, and it's fairly comparable to what I was previously attempting to produce for this project. I certainly prefer my previous method overall; however, when you consider that this image takes about 5 seconds to produce, compared to the 2-2.5 minutes per image my previous process takes, this is a drastic increase in productivity. Instead of producing the images over months, I could now generate every image in the game at this quality in less than a day. I haven't tested how long it would take to produce masks for this yet, but I'm guessing about another day for that. After that, it would simply be a matter of playing the game and finding mistakes (likely on screens using a complex background; those images would need a closer look and rebuilding, which shouldn't take long).
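The claimed speed-up checks out on paper. A quick sanity check, using the approximate per-image times from the post (the "months" figure for the old process also included manual cleanup, so this only counts raw processing time):

```python
# Back-of-envelope numbers from the post (all approximate).
images = 13_000            # total background images in the game
gan_s = 5                  # seconds per image with the trained GAN
old_s = 150                # ~2.5 minutes per image with the old pipeline

gan_hours = images * gan_s / 3600    # ~18 hours: comfortably under a day
old_days = images * old_s / 86400    # ~22.6 days of pure processing time
```

So even before counting the manual cleanup the old pipeline needed, the GAN is roughly a 30x throughput improvement.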

Overall, I am very happy and excited. Despite getting a little too ambitious last night and trying to force my system to produce more than it could (causing the process to crash while I was sleeping and basically wasting a whole night of GAN training), this is looking very promising indeed.

Produced by my current GAN:

This is a completely unaltered image produced by the GAN, without any further filters run over it afterwards. Another point of note is that this image isn't even included in my training set, so this is what it produces for an image it has never seen.

It's been a while, but ImageMagick has its own way of dealing with multiple files (I believe they need to be sequential, i.e. image000.png, image001.png, image002.png), so if your files are named like that, they are easy to manipulate. If you're moving files around that don't have names like that, you'll have to generate the names for the files and likely keep a record as well (image000.png=somefile.png, image001.png=anotherfile.png, etc.); then, when you're done manipulating the files in sequence, rename them back.
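The rename-and-record step described above is easy to script. A minimal sketch (function name and the `imageNNN` pattern follow the example in the post; everything else is illustrative):

```python
import os
import shutil

def sequence_files(src_dir, dst_dir, ext=".png"):
    """Copy files into an image000.png, image001.png, ... sequence and
    return a mapping so the originals can be matched back up afterwards."""
    os.makedirs(dst_dir, exist_ok=True)
    mapping = {}
    for i, name in enumerate(sorted(os.listdir(src_dir))):
        seq_name = f"image{i:03d}{ext}"
        shutil.copy(os.path.join(src_dir, name),
                    os.path.join(dst_dir, seq_name))
        mapping[seq_name] = name   # record: sequential name -> original name
    return mapping
```

After the batch manipulation, iterating over the returned mapping lets you rename each processed file back to its original name.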

A little update, since I haven't posted here in a couple of days. I haven't stopped working on this, but I am taking a pause. Yes, I can (and possibly should) begin generating the images for this project; as far as I know, everything is now in place and ready to go at a satisfactory level. However, I have been digging into neural networks at a custom level, and I already see potential for this project if I customize one. I'm considering/testing a couple of possibilities. First, I think this might be a straight shot to my final image processing: currently, I'm running the images thru a few AI NN filters and then applying many different filters on top of that. I'm hoping I might be able to skip all of that and run one single AI filter on the images, with maybe a much simpler 'cleaning' filter run thru GIMP afterwards. It would take time to set this up, and even to test it, but if it works, it could cut my design time from months to only a month or so (and most of that would likely be testing/designing the new models). Second, I'm also looking into the possibility of building a model that will automatically upscale the masking as well (less likely), but I still want to test it, as I think there is great potential there too.

You can format the ImageMagick input to take (and write) a sequence of images easily enough, but among many other things, I need to figure out a script that can work through a whole directory recursively.

I didn't really look at it much, but I noticed you were using Python and batch scripting; recursion is pretty simple tbh:

Code: [Select]
@echo off
for /r %%f in (*.*) do (
    echo The File: %%f
    echo The Directory: %%~pf
    echo The Filename: %%~nf
    echo The File Extension: %%~xf
)

Code: [Select]
import os
import fnmatch

# src is the top-level directory to walk (defined elsewhere in the script)
for root, dirnames, filenames in os.walk(src + "/"):
    for filename in fnmatch.filter(filenames, '*.*'):
        print(os.path.join(root, filename))  # full path of each matched file

Okay, I'm a big dummy and should have listened to you to begin with! :P
I disabled lmdb, and off it went without a hitch, using the variables as they should be. (So much wasted time; I even installed Python 3.6.7, 3.6.2, Anaconda...) haha
definitely taking a lot longer now...

Have you tried setting the workers/batch to 1-2 or 2-4? I think the '0' workers you used is what makes it bug out.

Ya, anything above 0 makes it crash for some reason. We seem to have nearly identical systems, except that I'm using an i5, not that that should matter much considering we're using our GPUs, which are identical. However, I'm on Windows 7 and using Python 3.7.1. I'm pretty sure I ran into this issue before I even started using lmdb, so I don't think it's related, but I'm not aware of the buildtools bug you were talking about, which might be part of the problem, I suppose. My guess is that the issue has more to do with either the OS differences or the Python version differences. I also just realized that I'm using PyTorch 1.0, not 0.4.1, but every attempt so far to downgrade has failed with a 'not supported wheel on this platform' message :'(
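One known gotcha with `num_workers > 0` on Windows, for anyone else debugging this: Windows multiprocessing uses 'spawn', which re-imports the main module in every worker process, so any training code not guarded by `__main__` runs again in each worker and the DataLoader crashes or hangs. A structural sketch of the usual fix (the DataLoader line is a hypothetical example, commented out since it needs torch):

```python
def train():
    """Entry point: build dataset, DataLoader and run the training loop.
    Keeping all of this inside a function prevents it from re-executing
    when worker processes re-import this module."""
    # loader = DataLoader(dataset, batch_size=16, num_workers=2)
    # for batch in loader: ...
    return "training started"

if __name__ == "__main__":   # required when num_workers > 0 on Windows
    train()
```

Whether this is the cause here is a guess; but it's the most commonly reported reason BasicSR-style scripts work with 0 workers and crash with more on Windows.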

Okay, so if I'm understanding you correctly, I did exactly the same thing, except that I put all my files in the val folders (which I'm guessing would only make the process take longer in the end) and I didn't normalize my files, which, again, I'm guessing would only affect how my model ends up.

Aside from that, I didn't have scipy installed, which I've now installed.

So I'm pretty confused as to why this isn't working for me. As suggested in the notes on the GitHub, I modified the worker/batch arguments, and now the process seems to complete, but the model doesn't actually 'change'; instead, it just makes a copy of it from what I can tell, and the training completed super fast.

Oh, the joys of all of this... The best part is that if I figure this out, I can look forward to days of my computer doing nothing but crunching numbers, just to end up with an even worse model than what I'm already using haha ;-P

-I swiped thru just over 1.3K images in like 10 minutes
>I think you did something wrong. I own a GTX 1060; the process uses about 90% of the GPU (the GPU is the bottleneck in BasicSR), and it takes about 3 minutes per 200 iterations (and the learning process needs to do 500,000 iterations, so about 5.2 days running 24/7).

As a side note, just to confirm how I did it: I put the original hi-res images in the hqval folder and my original downsized versions in the lqval folder. I then ran the script on my hqval folder, putting those files into the hr folder, and then downsized those and put them into my lr folder, if that makes sense. I also ran that command you gave me on all the images to make sure they were all PNG-24 format.
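The hr/lr pairing step is just "downscale each high-res image by the training scale factor". In practice you'd do it with ImageMagick or PIL with proper filtering, but the core idea reduces to this (nearest-neighbour on a bare 2D pixel grid, purely to illustrate the pairing):

```python
def downscale(pixels, factor):
    """Nearest-neighbour downscale of a 2D pixel grid: keep every
    `factor`-th row and every `factor`-th pixel within each kept row.
    The result is the low-res half of an hr/lr training pair."""
    return [row[::factor] for row in pixels[::factor]]
```

So for a 4x ESRGAN-style model, each file in the hr folder gets a `downscale(..., 4)` counterpart in the lr folder under the same name.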

-I swiped thru just over 1.3K images in like 10 minutes
>I think you did something wrong. I own a GTX 1060; the process uses about 90% of the GPU (the GPU is the bottleneck in BasicSR), and it takes about 3 minutes per 200 iterations (and the learning process needs to do 500,000 iterations, so about 5.2 days running 24/7).

Ya, I have the same card. I just compared the PSNR model with the one it produced for me, and the results were actually identical, so I'm guessing it wasn't doing anything. I think this is related to the worker/batch arguments I modified to get it to 'work'. I'm guessing I'll have to try a Linux install due to this torch issue on Windows. What OS are you using?

Model training on its way...

Alright, so I installed BasicSR and ran it thru a test. Despite your urging, and considering that I do have an SSD, I still used lmdb however ;-P

Also, I couldn't use your recommended (or the default) n_workers or batch_size settings; it bugged out on me. According to the posted issues on the GitHub, this is a known issue on Windows with torch. Someone recommended that I set them to 0 and total images +1 respectively, and then it just ran thru without hassle. I also had a weird issue with the files: where it checks for fps (around line 41), it was trying to divide by zero, so I simply altered the code there to check if the elapsed variable was 0 and, if it was, to skip the division and set the fps to 0 by default.
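For reference, the divide-by-zero guard described above boils down to a one-liner (the function name is illustrative, not BasicSR's actual code):

```python
def safe_fps(frames, elapsed):
    """Guard against a zero elapsed time (e.g. a coarse timer on the
    very first measurement) instead of dividing by zero."""
    return 0.0 if elapsed == 0 else frames / elapsed
```

Dropping this in where the original script computed `frames / elapsed` avoids the crash without changing any other behaviour.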

I swiped thru just over 1.3K images in like 10 minutes; it was super fast actually (of course, my 'high res' images were very small). I basically used the original backgrounds from FFVIII as my high-res images, so they weren't exactly high res to begin with. But even still, if it only takes four times as long to swipe thru images four times bigger, that's still really fast. I'm thinking I did something wrong, but it spit out a model for me, and I used it and it worked fine (although it sucked).

I was just curious what your strategy was for creating your own model. Are you using your current process's upscaled images that you created for FFVII as your high-res images? Or are you using an entirely different set of images to generate your model? Or did you do like I did with my first test, and use the original files as the high res?

I'm curious if applying this to a bunch of masks would also be helpful...??? What do you think?

This was neat though, not sure how I'll use this, but glad I have it to play with now :)
