Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - AlphaAtlas

Pages: [1]
1
Yeah, NNEDI3 (that's the version I meant) would be used here for image doubling. At its time, it was the best scaler by far, and it yielded some very light denoising and great antialiasing as a side bonus. It was also developed with neural networks.

As for M(V)Degrain3, it's just the latest motion-compensated denoiser I know of. It's quite slow (much slower than, say, dfttest), but at its time it struck a great balance between denoising and detail preservation. Where can I read about KNLMeansCL and V-BM3D?

https://en.wikipedia.org/wiki/Block-matching_and_3D_filtering

https://en.wikipedia.org/wiki/Non-local_means

https://github.com/Khanattila/KNLMeansCL/wiki

https://github.com/HomeOfVapourSynthEvolution/VapourSynth-BM3D

This is a really good (but sloooooow) artifact remover too: https://forum.doom9.org/showthread.php?t=173470

Doom9 has some good threads on all of them, and the VapourSynth fatpack supports all those filters right out of the box. You can also enhance those filters with cutting-edge masks, like Retinex, to further preserve detail.

Someone's working on a GPU BM3D, but it doesn't support temporal denoising yet, which means it doesn't really have an advantage over the ML-based image denoisers that are out there.

https://github.com/JeffOwOSun/gpu-bm3d

Also, if you think MVDegrain3 is slow, just wait until you try BM3D or Oyster on the CPU :D

2
Sorry for being kinda AFK. I'll clean up the VapourSynth assistant more soon, but it more or less works now.

I'm a little rusty on Avisynth filtering, but I'd wager the FMVs could look alright with some MVDegrain3, Gradfunkmirror and NNEDI2. Maybe with a light AddGrain to fake extra detail.

I've tried NNEDI3 and a fast denoiser on a test video recently, and I can tell you that neural network upscaling + KNLMeansCL or V-BM3D for temporal denoising looks WAY better. And it shouldn't be too slow with a good GPU.

Also, when you say NNEDI2... are the source clips interlaced, or did you just mean NNEDI2 image doubling?

3
I didn't really look at it much, but I noticed you were using Python and batch scripting. Recursion is pretty simple, tbh:

BATCH FILE RECURSION
Code: [Select]
@echo off
for /r %%f in (*.*) do (
    echo The File: %%f
    echo The Directory: %%~pf
    echo The Filename: %%~nf
    echo The File Extension: %%~xf
)

PYTHON RECURSION
Code: [Select]
import fnmatch
import os

src = "C:/Windows"
for root, dirnames, filenames in os.walk(src):
    for filename in fnmatch.filter(filenames, '*.*'):
        infile = os.path.join(root, filename)

Thanks, I may use that. The real trick is feeding that data to the ImageMagick plugin, which I can't just stick in another for loop, IIRC.

http://www.vapoursynth.com/doc/plugins/imwri.html

I think I can physically move/rename all the images into a single directory with that script, let VapourSynth do its thing on the nicely formatted images, then move the processed images back using the saved directory data once the script is done.

4
Thanks for your work, I think I'll have some fun with this ^^


I lied about having plenty of time to finish it this weekend ;D. This is a WIP, but a functional WIP:

https://github.com/AlphaAtlas/vs_mxnet_helper/

You can format the ImageMagick input to take (and write) a sequence of images easily enough, but among many other things, I need to figure out a script that can work through a whole directory recursively.




5
Figured I'd post an update on that vapoursynth video/batch image tool here.

Though I'm not doing much original work in the first place, I'm almost done. I've made picture tutorials for installing CUDA and cuDNN from Nvidia's website, I've made a .bat file that installs and checks MXNet, I've made a .bat that downloads or updates the Super Resolution Zoo repo from Github, and I've made a python file that grabs all the neural net info files.

I just need to write some code in that python file that parses the NN info files and automatically inserts the correct syntax into a Vapoursynth script. Basically, without the script, you'd have to manually enter the path and correct arguments for the super resolution model you want, like:
Code: [Select]
sr_args = dict(model_filename=r'D:\CustomPrograms\VapourSynth64Portable\Models\Super-Resolution-Zoo\MSRN\MSRN_4x', device_id=0, up_scale=4, is_rgb_model=True, pad=None, crop=None, pre_upscale=False)

Which is tedious if you're trying to decide between 30+ different (pretrained) models. Anyway, I have plenty of time to finish that this weekend.
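One way the automatic part could work is to guess the arguments from the model path itself. This is just a sketch under assumptions: the `_4x` suffix convention and the defaults below are inferred from the example above, not from the actual Super-Resolution-Zoo info files, which the real script would parse instead:

```python
import os
import re

def build_sr_args(model_path, device_id=0):
    """Guess super-resolution arguments from a model path such as
    '.../MSRN/MSRN_4x'. The '_<n>x' suffix convention is an assumption;
    the real info files should be parsed for the authoritative values."""
    name = os.path.basename(model_path)
    match = re.search(r'_(\d+)x$', name)
    up_scale = int(match.group(1)) if match else 1
    return dict(model_filename=model_path, device_id=device_id,
                up_scale=up_scale, is_rgb_model=True,
                pad=None, crop=None, pre_upscale=False)
```

With something like that, switching between 30+ models becomes a one-line path change instead of retyping every argument.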

6
A Dutch webpage sent me to this topic (long-time member here). It's the first time I've seen them mention a mod! It's more a page for hardware and other technical news bits.
Well done.

Yeah, someone linked this thread on /r/pcgaming, which PC Gamer is notorious for watching, then other tech media saw it on PC Gamer and reported on it.

7
Cool!
I'll wait for it ^^

Do you happen to have an Nvidia GPU?

The upscalers will work on a CPU, but they're slow. AFAIK, only Waifu2X has a version that's accelerated by AMD GPUs atm.

8
I know, but I'm stuck with old AviSynth; I never took the time to move to VapourSynth.
I did lots of video filtering a long time ago (started with the first release of VirtualDub and ended with AviSynth and its great features).
But honestly, the hard part of this type of upscale is that ESRGAN (and other AIs of the same type) is not designed to upscale cleaned-up video frames but low-resolution still pictures.
With the original PSX videos of FF7 you have to deal with low resolution (320*xxx) + temporal noise + low color palette compression... not really easy to work with ^^'
Right now I don't have much time to spend on it because I'm already doing a test/correction of the fields of the game.. maybe when my pack is done ^^

Yeah. I've tested a few algos, and ESRGAN DOES NOT like being fed dirty frames. It seems to amplify halos, noise, and such even more than other AI upscalers do.

As for the difficulty of Vapoursynth... This week, I'm working on packaging the Vapoursynth Fatpack and this amazing repo in one easy package: https://github.com/WolframRhodium/Super-Resolution-Zoo

So, theoretically, you'd download a big zip file, unzip some separate Nvidia (or Intel) files I'm not allowed to redistribute, run a .bat file to install everything, then open a custom VapourSynth script in VSEDIT, and you'd be able to try dozens of different AI upscalers, temporal GPU denoisers, artifact removers and so on just by changing a number in a text file. The setup could batch process basically any image or video file under the sun, no conversion needed, and previewing/tweaking would be fast.



9
I use almost the same process as for the ESRGAN background pack of FF7 I'm making:
1) Use waifu for noise reduction only (for this video I use noise level 2 with an old waifu build: "--scale_ratio 1 -m noise_scale --noise_level 2"; I can upload you the specific exe if you can't get this result with modern waifu builds)
2) Make a 25% downscale with ImageMagick ("-filter box -resize 25%")
3) Upscale with ESRGAN (model used is an interpolation of Manga109 and RRDB_ESRGAN, ratio 80% Manga / 20% ESRGAN)

I know that the "downscale then upscale again" process is a mathematical aberration, but if you read the ESRGAN paper, they train the models with a set of high-quality pictures they downscale with the bicubic algorithm (about the same as ImageMagick's box filter).
So the best way to get the full power of ESRGAN is to provide bicubic-downscaled pictures ^^
I tried looooots of methods with FF7 backgrounds and videos; "noise reduction, downscale, upscale again" was always the best (of course for the FF7 backgrounds, as a high resolution doesn't exist, I use waifu for noise reduction + upscale before downscaling with ImageMagick, and it's working great)
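The three-step pipeline above could be driven by a small script along these lines. This is only a sketch: the executable names (`waifu2x-caffe-cui`, `magick`, `esrgan`) and the intermediate file handling are assumptions; only the flags quoted in the post are taken from the source:

```python
def build_pipeline_commands(infile, outfile, tmp1="step1.png", tmp2="step2.png"):
    """Build the command lines for the noise-reduce / downscale / upscale
    pipeline. Executable names are placeholders for whatever builds you
    actually have; the flags come from the post above."""
    return [
        # 1) waifu2x noise reduction only, no scaling
        ["waifu2x-caffe-cui", "-i", infile, "-o", tmp1,
         "--scale_ratio", "1", "-m", "noise_scale", "--noise_level", "2"],
        # 2) 25% box-filter downscale with ImageMagick
        ["magick", tmp1, "-filter", "box", "-resize", "25%", tmp2],
        # 3) ESRGAN upscale with the interpolated model
        ["esrgan", tmp2, outfile],
    ]
```

Each list could then be passed to `subprocess.run` in order, one frame at a time, which makes it easy to swap any single stage out while experimenting.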

Waifu only looks at individual frames because it was programmed as an image upscaler. What you need is a high-quality temporal denoiser, right? I highly recommend V-BM3D or KNLMeansCL, as both are way better than waifu at temporal noise/grain like that.

You can actually do everything (artifact removal, denoising, upscaling, resizing back down) in a single step with VapourSynth, and do intermediate steps like 2x or 3x... but that's a deep rabbit hole to go down.
