
Messages - satsuki

Pages: 1 ... 22 23 24 25 26 [27] 28 29 30 31 32
651
No animation bugs on those 3 screens.
No release date yet; I'll release it after my FF7 ESRGAN pack, so bug correction will be as thorough as possible.
I also need to automate a black-offset correction (it's fast, but I need to test to be sure).
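In principle, automating that black-offset correction could be as simple as snapping every near-black pixel back to pure black. A hypothetical sketch on raw RGB tuples (illustrative only, not the actual tool; the `threshold` value is an assumption):

```python
def fix_black_offset(pixels, threshold=4):
    """Snap near-black RGB pixels (e.g. #020301) back to pure black #000000.

    `pixels` is a flat list of (r, g, b) tuples; `threshold` is a guessed
    cutoff below which a pixel is considered 'meant to be black'.
    """
    return [
        (0, 0, 0) if max(r, g, b) < threshold else (r, g, b)
        for (r, g, b) in pixels
    ]
```

A real tool would run this over the decoded background layers before re-export, but the core test-and-snap logic is the same.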

652
Quote
does 4x include these optimizations
>Yes

Quote
Can you give us an idea of what has been improved on?
>Mainly hand corrections of black pixels and layer junctions
>Some of the source files weren't numbered correctly, so some in-game animations weren't right
>Light dots were showing at the edges of the lights

So yes, the newer version will really be more powerful.
So far, some screens must be optimised by hand after using the tool because of their special light layering:
-game1, game2 and blin1

653
Thanks, but waifu2x does a better cleaning job.
I'll try it on complex screens to see if it can boost the final quality.

654
Reunion R06 will allow for all versions to be used with the English 1.02 ;)  I've even added in the ability for all exe text to be pulled from external text file.  In other words, we won't have issues like incompatibility between versions anymore.
Great stuff! But what about the modified font?

655
Its a shame they did not save the original files back in the days. Such a shame.
That's what I tell myself every time I upscale any graphic from the game.

656
@CaptRobau
I tried the mod on about 30 screens at different points in the game, so here are my impressions:
-It seems you used my 4xcut tool to convert the backgrounds (it would be nice to mention this in the information). As I said in that tool's thread, I'll release a better version after I release my own FF7 pack, so every optimisation that can be built into the software will be added/corrected.
-It seems you didn't play through the game for bug hunting: almost all the light-pixel bugs (chorace for example) and black-pixel bugs (gon_i for example) which aren't corrected in my released tool show up in-game.
-The upscale tool you used seems to produce good results on high-quality backgrounds (like gon_i) but seems to add "fake texture". On "natural" textured elements (dust, wood floors) the render looks great, but if you look at the man-made elements (bottles, fabric, boxes, paintings...) it looks "muddy" or "woolly" or just "not right" (I don't know how to express this impression in English).
-The upscale tool you used seems to produce bad results on low-quality backgrounds (like the blin ones): the results are grainy and kind of blurry, as if you were drawing a picture on moving paper.
-Because of the upscale tool's inconsistency across element types, some screens (like the farm one) have some elements heavily textured (the grass) and others blurry (the house roof) at the same time.

So in my opinion, it's great that you tried this tool, which seems promising on some texture types, but if you don't optimise the filtering, the game will be really inconsistent from one screen to the next, and (in my opinion) that will ruin your work, because a "bad" screen will look even worse right after a "good" one.

Don't take it the wrong way, it's only my opinion; I hope it helps you improve your work.

@DLPB
The JPEG option would be useful for slow internet connections, but what about transparency?
I've read somewhere (if I remember correctly) that your corrected Aali driver will support the Steam and 1.02 English exes only. The French exe was modded at some point for a full retranslation (with some font modifications too, so the English exe can't be used for a full retranslation), so I really hope you can release a 1.02fr-compliant driver, or at least share your work so another developer can adapt it (not me, I don't understand this well enough T_T).
Thanks

657
As soon as possible.

As I said, I don't want to release an upscale that is buggy or inconsistent from screen to screen.
Since FF7's background quality isn't constant from one screen to another in the original game, the upscaled version needs to be tweaked as much as possible to get a great render.
So I've played through the game twice to try to find the best filtering option for each screen; I tried lots of waifu2x/ESRGAN/ImageMagick filtering combos (from no filtering at all, just ESRGAN, up to a combination of about 5 filtered renders layered together).
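As a rough illustration of what "layering several filtered renders" means (a hypothetical weighted-average sketch, not the actual per-screen combos), blending works per pixel across the renders of the same screen:

```python
def blend_renders(renders, weights=None):
    """Blend several filtered renders of the same screen into one image.

    `renders` is a list of images, each a flat list of (r, g, b) tuples in the
    same order; `weights` (optional) must sum to 1. With no weights given,
    this is a plain average -- a crude stand-in for real layered filtering.
    """
    n = len(renders)
    weights = weights or [1.0 / n] * n
    return [
        tuple(round(sum(w * px[c] for w, px in zip(weights, pixels)))
              for c in range(3))
        for pixels in zip(*renders)  # walk all renders pixel by pixel
    ]
```

In practice each render (waifu2x-cleaned, raw ESRGAN, ImageMagick-filtered...) would get a hand-chosen weight per screen; the code above only shows the blending step itself.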

There are no more big bugs at the moment, and the background pack is well optimised.

But I have a great alpha tester who's playing through the game right now and finding small fixes and optimisations that I correct while he's testing (some black pixels (I didn't miss many myself, but he found some), a mis-animated door...).
I'll take the time needed, but when a release is available, it won't be a beta.

658
Here are some more in-game shots:

659
WIP / Re: Final Fantasy VIII - Graphical Update Mod (WIP)
« on: 2019-01-26 21:41:17 »
If your GAN model is final, I'd like to test it on some FF7 fields where my current method isn't really good.

660
Thanks AlphaAtlas, take your time. I'm playing through the game again to correct the black-offset bug I have on some screens, so I won't process the videos at the moment.

661
Team Avalanche / Re: Jusete's field scenes
« on: 2019-01-22 19:18:25 »
Great work again!

662
The first release will take some time (I found that on some TVs or mis-calibrated computer screens, the black offset isn't right (#020301 or so instead of #000000)); I'm generating all the backgrounds again, then playing through the game again to test.
I also need to find better filtering for some screens.

Maybe one or two months; I don't want to rush the process, I want to make a bug-free release.

I don't need more beta testers right now, but I'll let you know if I do.

663
Game checked to the end.

To do before release:
-Need to find better filtering for 12 fields.
-Need to do some world-map and battle-field optimisation.
-Need to wait for the final alpha test from azertyuiop2 (ff7.fr) to spot bugs or optimisations I haven't seen.

After the release:
-Check whether I can come up with a good video upscale method

664
WIP / Re: Final Fantasy VIII - Graphical Update Mod (WIP)
« on: 2019-01-20 07:36:39 »
The ESRGAN process isn't really suited to videos; it needs really clean still pictures to work.

665
I know.
My process is automated.
But lots of small optimisations need to be done by hand for an optimal result.

666
Yes, it's hard work to optimise the layer cutting, but it's really worth it.
An unoptimised base layer cut can really ruin your work.

For example, with FF7, here is the base layer cut (only a 4x scale without any edge optimisation):


And my optimised version (layers done with potrace and ImageMagick):


Of course it's your work and you'll do it the way you want, but a great layer cut will make your upscale shine.

667
WIP / Re: Final Fantasy VIII - Graphical Update Mod (WIP)
« on: 2019-01-17 16:15:53 »
My own trained FF7 model finished its 500,000 iterations, but the results are lower quality than my other models.
If you can share your model once it's trained, I'd like to test it and check the results on FF7 ^^

668
WIP / Re: Final Fantasy VIII - Graphical Update Mod (WIP)
« on: 2019-01-15 20:49:22 »
Yes, the longest part is checking and correcting all the layers by hand, and doing small optimisations when ESRGAN isn't as good as you want...
I've checked all of FF7 CD1 and have a great alpha tester who can find some possible optimisations too ^^.
Above all, if you want a great result in-game, you need to bug hunt, because the best upscale can be ruined by small layer bugs here and there.

669
WIP / Re: Final Fantasy VIII - Graphical Update Mod (WIP)
« on: 2019-01-15 18:33:40 »
Here's my result from a quick test on this picture with my auto tool (it uses waifu2x as a prefilter, then ESRGAN); if you want, I can clean up the tool and give it to you for your tests ^^

670
Great, you made it work ^^

671
The ImageMagick normalization is only needed if you get a picture import error while running training.
Have you tried setting workers/batch to 1-2 or 2-4? I think the "0" workers value you used makes it bug out.

"... the best part is that if I figure this out, I can look forward to days of my computer doing nothing but crunching numbers just to end up with an even worse model than what I'm already using"
That will probably be the same for my model training, but with a lot of luck, maybe...

672
That's not exactly how it needs to be done:

1) Take your HQ pictures and run extract_subimgs_single.py on that folder
>Results in an ff8hq folder

2) Take the pictures from the ff8hq folder and make a 25% downsize
>Results in an ff8lq folder

3) Copy about 10-15% of the pictures from the ff8hq folder
>Results in an ff8hqval folder

4) Copy about 10-15% of the pictures (the same ones as the HQ set) from the ff8lq folder
>Results in an ff8lqval folder

5) "Normalize" with ImageMagick
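Step 2's 25% downsize corresponds to a box-filter resize (the same idea as ImageMagick's `-filter box -resize 25%%` used elsewhere in this thread). As a toy illustration only, a box-filter downsize on a grayscale pixel grid averages each 4x4 block into one pixel:

```python
def downsize_25(img):
    """Box-filter 25% downsize of a grayscale image (list of rows of ints).

    Each 4x4 block of source pixels is averaged into a single output pixel;
    any ragged edge smaller than 4 pixels is dropped for simplicity.
    """
    h, w = len(img), len(img[0])
    return [
        [sum(img[y + dy][x + dx] for dy in range(4) for dx in range(4)) // 16
         for x in range(0, w - w % 4, 4)]
        for y in range(0, h - h % 4, 4)
    ]
```

The real pipeline works on RGB PNGs via ImageMagick; this sketch only shows why a box filter gives clean, un-sharpened LQ tiles for training.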

Edit your train_esrgan.json file with your folders:
Code: [Select]
, "datasets": {
    "train": {
      "name": "FF8"
      , "mode": "LRHR"
      , "dataroot_HR": "E:/ESRGANModel/Dataset/ff8hq"
      , "dataroot_LR": "E:/ESRGANModel/Dataset/ff8lq"
      , "subset_file": null
      , "use_shuffle": true
      , "n_workers": 4
      , "batch_size": 8
      , "HR_size": 128
      , "use_flip": true
      , "use_rot": true
    }
   , "val": {
      "name": "val_set14_part"
      , "mode": "LRHR"
      , "dataroot_HR": "E:/ESRGANModel/Dataset/ff8hqval"
      , "dataroot_LR": "E:/ESRGANModel/Dataset/ff8lqval"
   }
  }

Launch the training ^^

As the workers setting doesn't seem to work, try the values 1-2 or 2-4 instead of the original 8-16.

My computer runs Windows 10 Pro 1809 64-bit (Core i7-8700K / 8 GB RAM / GTX 1060 6 GB).
Here's my Python setup (default values without any mods, the same from ESRGAN use through BasicSR):
1)install Python 3.6.2 64-bit
2)pip install scipy
3)pip install pillow
4)pip install numpy opencv-python
5)pip3 install http://download.pytorch.org/whl/cu90/torch-0.4.1-cp36-cp36m-win_amd64.whl
6)pip3 install torchvision
7)pip install numpy opencv-python
8)pip install tensorboardX
As I don't use lmdb, I haven't listed it here, but I installed it with a wheel file because of the well-known VC++ 14 error (even though I installed the right build tools).

673
-progress_bar.py
>I removed it from the code; since the lmdb conversion works fast, no progress bar was needed for my tests ^^ (and I don't use it in the end, so...)

-I swiped thru just over 1.3K images in like 10 minutes
>I think you're doing something wrong. I own a GTX 1060; the process uses about 90% of the GPU (the GPU is the bottleneck in BasicSR) and it takes about 3 minutes per 200 iterations (and the learning process needs to do 500,000 iterations, so about 5.2 days running 24/7).
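That 5.2-day figure follows directly from the quoted numbers:

```python
# Training-time estimate from the figures above:
# 3 minutes per 200 iterations, 500,000 iterations total, running 24/7.
iters_total = 500_000
minutes = iters_total / 200 * 3   # 7,500 minutes of GPU time
days = minutes / 60 / 24          # convert to days
print(round(days, 1))             # about 5.2 days
```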

-the original backgrounds from FFVIII as my high-res images
>I used the Team Avalanche/jmp/jusete 3D art backgrounds as HQ pictures, then downscaled them and applied a colour reduction to get the LQ pictures (the colour reduction is an experiment, as FF7 uses paletted-colour pictures).

-I'm curious if applying this to a bunch of masks would also be helpful
>I don't think it'll help anything; your best bet is to find and use HQ FF8 art that's as close as possible to the game pictures (I'm not sure my approach with 3D art will be useful, it's just an idea).

674
Figured I'd post an update on that vapoursynth video/batch image tool here.

Though I'm not doing much original work in the first place, I'm almost done. I've made picture tutorials for installing CUDA and cuDNN from Nvidia's website, I've made a .bat file that installs and checks MXNet, I've made a .bat that downloads or updates the Super Resolution Zoo repo from Github, and I've made a python file that grabs all the neural net info files.

I just need to write some code in that python file that parses the NN info files and automatically inserts the correct syntax into a Vapoursynth script. Basically, without the script, you'd have to manually enter the path and correct arguments for the super resolution model you want, like:
Code: [Select]
sr_args = dict(model_filename=r'D:\CustomPrograms\VapourSynth64Portable\Models\Super-Resolution-Zoo\MSRN\MSRN_4x', device_id=0, up_scale=4, is_rgb_model=True, pad=None, crop=None, pre_upscale=False)
Which is tedious if you're trying to decide between 30+ different (pretrained) models. Anyway, I have plenty of time to finish that this weekend.

Thanks for your work, I think I'll have some fun with this ^^

675
Yes it's with basicSR.
Some useful information:
1) Don't install or use lmdb; it's a pain to install with Python (on my Windows machine at least) and it takes lots of resources (and even on my classic non-SSD drive, there's no benefit)
2) Edit and use the extract_subimgs_single.py script to generate the HQ set from your HQ sources
3) Generate the LQ set with the ImageMagick command: "convert.exe -filter box -resize 25%% #input# #output#"
4) Make sure all your HQ and LQ pictures use the same colour depth and settings (for that I use the ImageMagick command: "convert.exe #input# -define png:format=png24 #input#"; it ensures all the PNGs are compressed the same way, and it automatically overwrites the source)
5) Make a validation set of about 10% of the total pictures (for both LQ and HQ)
6) Edit train_ESRGAN.json with your folders, and to avoid an out-of-memory error:
      , "n_workers": 4
      , "batch_size": 8
I also set "save_checkpoint_freq": 5000 to be able to stop/resume the training easily.
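For step 5, picking the validation subset can be sketched like this (an illustrative helper, not part of BasicSR; any method works as long as the HQ and LQ validation folders get the same filenames so the pairs stay matched):

```python
import random

def split_val(filenames, frac=0.10, seed=0):
    """Hold out ~10% of the tile filenames as a validation set.

    Returns (train_names, val_names). Copy the val names from BOTH the
    HQ and LQ folders so each low-res tile keeps its high-res partner.
    """
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    val = set(rng.sample(filenames, max(1, int(len(filenames) * frac))))
    train = [f for f in filenames if f not in val]
    return train, sorted(val)
```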

For reference, here are my edited Python files (all the other files are unmodified from the git repo, and I use Python 3.6, not Anaconda):

extract_subimgs_single.py
Code: [Select]
import os
import os.path
import sys
from multiprocessing import Pool
import numpy as np
import cv2
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from utils.progress_bar import ProgressBar


def main():
    """A multi-thread tool to crop sub imags."""
    input_folder = "E:/ESRGANModel/ff7sources/"
    save_folder = "E:/ESRGANModel/dataset/ff7hq/"
    n_thread = 20
    crop_sz = 480
    step = 240
    thres_sz = 48
    compression_level = 0  # 3 is the default value in cv2
    # CV_IMWRITE_PNG_COMPRESSION from 0 to 9. A higher value means a smaller size and longer
    # compression time. If read raw images during training, use 0 for faster IO speed.

    if not os.path.exists(save_folder):
        os.makedirs(save_folder)
        print('mkdir [{:s}] ...'.format(save_folder))
    else:
        print('Folder [{:s}] already exists. Exit...'.format(save_folder))
        sys.exit(1)

    img_list = []
    for root, _, file_list in sorted(os.walk(input_folder)):
        path = [os.path.join(root, x) for x in file_list]  # assume only images in the input_folder
        img_list.extend(path)

    def update(arg):
        pbar.update(arg)

    pbar = ProgressBar(len(img_list))

    pool = Pool(n_thread)
    for path in img_list:
        pool.apply_async(worker,
            args=(path, save_folder, crop_sz, step, thres_sz, compression_level),
            callback=update)
    pool.close()
    pool.join()
    print('All subprocesses done.')


def worker(path, save_folder, crop_sz, step, thres_sz, compression_level):
    img_name = os.path.basename(path)
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)

    n_channels = len(img.shape)
    if n_channels == 2:
        h, w = img.shape
    elif n_channels == 3:
        h, w, c = img.shape
    else:
        raise ValueError('Wrong image shape - {}'.format(n_channels))

    h_space = np.arange(0, h - crop_sz + 1, step)
    if h - (h_space[-1] + crop_sz) > thres_sz:
        h_space = np.append(h_space, h - crop_sz)
    w_space = np.arange(0, w - crop_sz + 1, step)
    if w - (w_space[-1] + crop_sz) > thres_sz:
        w_space = np.append(w_space, w - crop_sz)

    index = 0
    for x in h_space:
        for y in w_space:
            index += 1
            if n_channels == 2:
                crop_img = img[x:x + crop_sz, y:y + crop_sz]
            else:
                crop_img = img[x:x + crop_sz, y:y + crop_sz, :]
            crop_img = np.ascontiguousarray(crop_img)
            # var = np.var(crop_img / 255)
            # if var > 0.008:
            #     print(img_name, index_str, var)
            cv2.imwrite(
                os.path.join(save_folder, img_name.replace('.png', '_s{:03d}.png'.format(index))),
                crop_img, [cv2.IMWRITE_PNG_COMPRESSION, compression_level])
    return 'Processing {:s} ...'.format(img_name)


if __name__ == '__main__':
    main()

train_ESRGAN.json
Code: [Select]
{
  "name": "RRDB_ESRGAN_x4_FF7" //  please remove "debug_" during training
  , "use_tb_logger": true
  , "model":"srragan"
  , "scale": 4
  , "gpu_ids": [0]

  , "datasets": {
    "train": {
      "name": "FF7"
      , "mode": "LRHR"
      , "dataroot_HR": "E:/ESRGANModel/Dataset/ff7hq"
      , "dataroot_LR": "E:/ESRGANModel/Dataset/ff7lq"
      , "subset_file": null
      , "use_shuffle": true
      , "n_workers": 4
      , "batch_size": 8
      , "HR_size": 128
      , "use_flip": true
      , "use_rot": true
    }
   , "val": {
      "name": "val_set14_part"
      , "mode": "LRHR"
      , "dataroot_HR": "E:/ESRGANModel/Dataset/ff7hqval"
      , "dataroot_LR": "E:/ESRGANModel/Dataset/ff7lqval"
   }
  }

  , "path": {
    "root": "E:/ESRGANModel/InWorks"
   // , "resume_state": "E:/ESRGANModel/InWorks/experiments/RRDB_ESRGAN_x4_FF7/training_state/15000.state"  //<=edit and uncomment to resume training
    , "pretrain_model_G": "../experiments/pretrained_models/RRDB_PSNR_x4.pth"
  }

  , "network_G": {
    "which_model_G": "RRDB_net" // RRDB_net | sr_resnet
    , "norm_type": null
    , "mode": "CNA"
    , "nf": 64
    , "nb": 23
    , "in_nc": 3
    , "out_nc": 3
    , "gc": 32
    , "group": 1
  }
  , "network_D": {
    "which_model_D": "discriminator_vgg_128"
    , "norm_type": "batch"
    , "act_type": "leakyrelu"
    , "mode": "CNA"
    , "nf": 64
    , "in_nc": 3
  }

  , "train": {
    "lr_G": 1e-4
    , "weight_decay_G": 0
    , "beta1_G": 0.9
    , "lr_D": 1e-4
    , "weight_decay_D": 0
    , "beta1_D": 0.9
    , "lr_scheme": "MultiStepLR"
    , "lr_steps": [50000, 100000, 200000, 300000]
    , "lr_gamma": 0.5

    , "pixel_criterion": "l1"
    , "pixel_weight": 1e-2
    , "feature_criterion": "l1"
    , "feature_weight": 1
    , "gan_type": "vanilla"
    , "gan_weight": 5e-3

    //for wgan-gp
    // , "D_update_ratio": 1
    // , "D_init_iters": 0
    // , "gp_weigth": 10

    , "manual_seed": 0
    , "niter": 5e5
    , "val_freq": 5e3
  }

  , "logger": {
    "print_freq": 200
    , "save_checkpoint_freq": 5000 //5e3
  }
}
