That's not exactly the way it needs to be done:
1) Take your HQ pictures and run extract_subimgs_single.py on that folder
>Results in a ff8hq folder
2) Take the pictures from the ff8hq folder and make a 25% downsize
>Results in a ff8lq folder
3) Copy about 10-15% of the pictures from the ff8hq folder
>Results in a ff8hqval folder
4) Copy about 10-15% of the pictures (the same ones as in HQ) from the ff8lq folder
>Results in a ff8lqval folder
5) "Normalize" with ImageMagick
Then edit your train_esrgan.json file to point at your folders:
, "datasets": {
    "train": {
      "name": "FF8"
      , "mode": "LRHR"
      , "dataroot_HR": "E:/ESRGANModel/Dataset/ff8hq"
      , "dataroot_LR": "E:/ESRGANModel/Dataset/ff8lq"
      , "subset_file": null
      , "use_shuffle": true
      , "n_workers": 4
      , "batch_size": 8
      , "HR_size": 128
      , "use_flip": true
      , "use_rot": true
    }
    , "val": {
      "name": "val_set14_part"
      , "mode": "LRHR"
      , "dataroot_HR": "E:/ESRGANModel/Dataset/ff8hqval"
      , "dataroot_LR": "E:/ESRGANModel/Dataset/ff8lqval"
    }
  }
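Before launching, it can save a failed run to check that the folders in the edited json actually exist. A minimal sketch (the function name and return convention are mine, not part of BasicSR):

```python
# Hypothetical sanity check: load the edited options json and report any
# dataset root folders that don't exist on disk.
import json
from pathlib import Path

def check_options(opt_path):
    opt = json.loads(Path(opt_path).read_text())
    missing = []
    for phase, ds in opt["datasets"].items():
        for key in ("dataroot_HR", "dataroot_LR"):
            root = ds.get(key)
            if root and not Path(root).is_dir():
                missing.append(f"{phase}.{key}: {root}")
    return missing  # an empty list means every folder was found
```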
launch the training ^^
As the workers don't seem to work, try the values 1-2 or 2-4 (n_workers-batch_size) instead of the original 8-16.
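One possible reason the workers misbehave on Windows (a guess on my part, not confirmed by the post): Windows starts worker processes with "spawn", so any script that creates workers, including a PyTorch DataLoader with n_workers > 0, must be guarded by the main-module check shown below, or the spawned workers re-import the script and fail or hang. A stdlib illustration of the same requirement:

```python
# On Windows, multiprocessing workers re-import the main script, so the
# work must live under the __main__ guard; the same rule applies to a
# PyTorch DataLoader with n_workers > 0.
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    # Without this guard, each spawned worker would try to start
    # workers of its own on Windows.
    with mp.Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]
```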
My computer runs Windows 10 Pro 1809 64-bit (Core i7 8700K / 8 GB RAM / GTX 1060 6 GB).
Here's my Python setup (default values without any mods, as ESRGAN uses BasicSR):
1) Install Python 3.6.2 64-bit
2) pip install scipy
3) pip install pillow
4) pip install numpy opencv-python
5) pip3 install http://download.pytorch.org/whl/cu90/torch-0.4.1-cp36-cp36m-win_amd64.whl
6) pip3 install torchvision
7) pip install tensorboardX
As I don't use lmdb I haven't listed it here, but I installed it from a wheel file because of the well-known VC++ 14 error (even though I had installed the right build tools).