Author Topic: 3-Sweep: Extracting Editable Objects from a Single Photo, SIGGRAPH ASIA 2013  (Read 17359 times)

Micky

  • Insane poster
  • *
  • Posts: 300
  • Karma: 5
Well, kind of on-topic, even though an accessible implementation is still a bit off:
http://youtu.be/Oie1ZXWceqM
(Depending on whether they release the code or not.)
Still, together with the existing camera and walkmesh info, this may be a quick way to get a starting layout to work from.

Here's the abstract: http://www.faculty.idc.ac.il/arik/site/3Sweep.asp
« Last Edit: 2013-09-06 14:59:29 by Micky »

Mayo Master

  • Moderator
  • Freak
  • *
  • Posts: 651
  • Karma: 130
That looks interesting, but I don't see how this tool would be of much help. Thanks to SpooX, we can already load walkmesh and camera data into a 3D modelling software, and the basis for modelling is simply set up by using the original image as the background image from the camera view. In our case, we modellers have to replace low-res-looking objects with high-res-looking ones, and this involves making higher-poly objects and higher-res textures, and neither can be obtained from the original picture.
The only possible use I could see would be using 3-Sweep to model objects taken from other photographs, in order to import them into our scene. But then, I can't figure out what can be made with 3-Sweep that can't be made in a 3D modelling software directly. Overall it feels like "3D modelling made simple", reminding me of Sketchup.
Or am I just failing to see a use for it?
 

Micky

Think of it as a quick way to get low-res models out of a field screen. So instead of setting the background image in the camera, you would load it into the tool, sweep over the key objects, and export the result as a textured mesh.
The tool gives you both geometry and a synthesised texture. Though until they actually release the code, or somebody implements the paper, it is hard to say how useful it is.
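For illustration only (the authors' code is unreleased, so this is a hypothetical sketch, not their implementation): the geometry a single sweep produces is essentially a generalized cylinder — circular cross-sections stacked along an axis, with the radius at each step read off the object's silhouette in the photo. Building such a mesh is straightforward:

```python
import math

def sweep_mesh(radii, heights, segments=16):
    """Build a generalized cylinder: one ring of vertices per
    (radius, height) sample along the sweep axis, with adjacent
    rings joined by quads split into triangles.
    Returns (vertices, triangles) as index lists."""
    vertices = []
    for r, h in zip(radii, heights):
        for s in range(segments):
            a = 2 * math.pi * s / segments
            vertices.append((r * math.cos(a), r * math.sin(a), h))
    triangles = []
    for ring in range(len(radii) - 1):
        for s in range(segments):
            a = ring * segments + s
            b = ring * segments + (s + 1) % segments
            c = a + segments
            d = b + segments
            triangles.extend([(a, b, c), (b, d, c)])
    return vertices, triangles

# A vase-like profile; in 3-Sweep the radii would come from the
# photo's silhouette rather than being typed in by hand.
verts, tris = sweep_mesh(radii=[1.0, 1.4, 0.8, 0.6],
                         heights=[0.0, 0.5, 1.0, 1.5])
```

Texturing would then project the photo onto the camera-facing half of this mesh, which is where the synthesised texture for the hidden back side comes in.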

DLPB

  • No life
  • *
  • Posts: 9697
  • Karma: 315
  • For I realized that God's a young man, too.
This looks impossible to me.  Not only is it managing to find the edges so finely (and easily), but it's placing data where it doesn't exist. In terms of FF7 objects, this wouldn't work regardless... a lot of it requires full updates to models (as in true 3D modelling with textures) and already uses 3D modelling software.
« Last Edit: 2013-09-06 17:48:04 by DLPB »

Micky

Quote from: DLPB
This looks impossible to me. Not only is it managing to find the edges so finely (and easily), but it's placing data where it doesn't exist. In terms of FF7 objects, this wouldn't work regardless... a lot of it requires full updates to models (as in true 3D modelling with textures) and already uses 3D modelling software.
I think that is because it mostly shows you where it works best - mainly cylindrical or bent-cylindrical objects. There are a few examples at the end where it doesn't work, for instance the toothpaste tube. And I guess it makes many assumptions about connectivity. They fix the background (and, I guess, the back side of a model) with a texture-synthesis algorithm. If you look closely, for example at the picture with the telescope, you can see that the filled-in data doesn't look that realistic. (Theory and a GIMP plugin for something similar are here: http://www.logarithmic.net/pfh/resynthesizer - look at the "removing objects from images" link for an example.)

I was planning to experiment with something similar to this to add shadow casters and receivers to field files, where an exact match isn't really required.
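The Resynthesizer-style approach linked above is exemplar-based: for each missing pixel, search the known part of the image for a patch whose known neighbourhood best matches the surroundings of the hole, and copy from it. This is my own toy sketch of that idea (the actual plugin orders pixels by confidence and searches far more cleverly):

```python
import numpy as np

def fill_hole(img, mask, patch=3):
    """Greedy exemplar-based fill for a grayscale image.
    For each unknown pixel (mask == True), scan candidate patch
    centres that are fully known and copy the centre of the patch
    whose overlap with the hole's known neighbours has the lowest
    sum-of-squared-differences."""
    img = img.astype(float).copy()
    h, w = img.shape
    r = patch // 2
    known = ~mask
    # Candidate source centres: patch fully inside the image, fully known.
    sources = [(y, x) for y in range(r, h - r) for x in range(r, w - r)
               if known[y - r:y + r + 1, x - r:x + r + 1].all()]
    for y in range(r, h - r):
        for x in range(r, w - r):
            if not mask[y, x]:
                continue
            valid = known[y - r:y + r + 1, x - r:x + r + 1]
            target = img[y - r:y + r + 1, x - r:x + r + 1]
            best, best_cost = 0.0, np.inf
            for sy, sx in sources:
                cand = img[sy - r:sy + r + 1, sx - r:sx + r + 1]
                cost = ((target - cand)[valid] ** 2).sum()
                if cost < best_cost:
                    best, best_cost = img[sy, sx], cost
            img[y, x] = best
            known[y, x] = True  # freshly filled pixels guide later ones
    return img
```

On a regular pattern (e.g. horizontal stripes) this continues the pattern through a small hole, which is exactly why the filled-in areas in the telescope picture look plausible from a distance but break down on close inspection - the algorithm can only recombine texture it has already seen.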

Mayo Master

The work that is done with this software on cylindrical objects (at least in the video) is nothing that can't be done with classic 3D modelling software (besides, modelling objects with symmetry around an axis of revolution is relatively simple). Sure, it is interesting to see how they can generate textures to guess what the background looks like, or how the back of an object would look, but I believe the classic way of texturing objects outperforms these results by a long shot.