You’ve seen it in movies and shows — the hero takes a blurry still picture, and with a few keystrokes, generates a view from a different angle or sometimes even a full 3D model. Turns out, thanks to machine learning and work by several researchers, this might be possible. As you can see in the video below, using “shape-guided diffusion,” the researchers were able to take a single image of a person and recreate a plausible 3D model.
Of course, the work relies on machine learning. As you’ll see in the video, this isn’t a new idea, but previous attempts have been less than stellar. The new method predicts the 3D shape first, then estimates how the subject looks from the back. The algorithm then generates the views that fall between the original photograph and the back view, using the 3D shape estimate as a guide. Even then, some post-processing is needed to join the intermediate images into a single model.
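To make the data flow a little more concrete, here’s a rough Python sketch of that four-stage pipeline. Every function name and body below is a placeholder of our own (the real system relies on trained shape-prediction and diffusion networks), so it only shows how the stages hand results to one another, not how any stage actually works internally.

```python
# Placeholder sketch of the pipeline described above. All functions are
# hypothetical stand-ins; the real stages are learned "shape-guided
# diffusion" models, not the toy math used here.
import numpy as np

def predict_shape(front_image: np.ndarray) -> np.ndarray:
    """Stage 1: estimate a coarse 3D shape (here, a dummy voxel grid) from the photo."""
    return np.zeros((64, 64, 64))          # placeholder voxel occupancy grid

def estimate_back_view(front_image: np.ndarray, shape: np.ndarray) -> np.ndarray:
    """Stage 2: hallucinate the appearance of the unseen back side."""
    return np.flip(front_image, axis=1)    # placeholder: just mirror the front view

def generate_intermediate_views(front: np.ndarray, back: np.ndarray,
                                shape: np.ndarray, n_views: int = 8) -> list:
    """Stage 3: synthesize views between front and back, guided by the shape
    estimate so the in-between frames stay geometrically consistent."""
    return [front * (1 - t) + back * t for t in np.linspace(0.0, 1.0, n_views)]

def fuse_views_into_model(views: list, shape: np.ndarray) -> np.ndarray:
    """Stage 4: post-process the view set into a single textured 3D model."""
    return shape                            # placeholder: just return the shape estimate

if __name__ == "__main__":
    photo = np.random.rand(256, 256, 3)     # stand-in for the single input photo
    coarse_shape = predict_shape(photo)
    back = estimate_back_view(photo, coarse_shape)
    views = generate_intermediate_views(photo, back, coarse_shape)
    model = fuse_views_into_model(views, coarse_shape)
    print(f"{len(views)} intermediate views -> model grid {model.shape}")
```

The interesting part is stage 3: conditioning the in-between views on the coarse shape estimate is what keeps the generated frames consistent enough to be fused into one model afterward.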
The result looks good, although the video does point out some areas where the method still falls short. For example, unusual lighting can affect the results.
This beats spinning around a person or a camera to get many images. Scanning people in 3D is a much older dream than you might expect.
That website just crashes my browser, with over 2 GB of RAM used by a single page.
My phone overheated in about 30 seconds hahaha
Like when they started using AI in combination with facial recognition technology: yes, with tons of processing power you can generate new imagery from a single image, look at a scene from different vantage points, or turn a blurry face into a tack-sharp face, but these generated scenes don’t contain any information that wasn’t visible in the original image. You can’t look behind a person’s back and see a murder weapon. One might be generated, but it will be unrelated to reality.
Unfortunately, you show this stuff to a jury and unless you have a savvy judge, they might eat it up.
ENHANCE!
https://www.youtube.com/watch?v=I_8ZH1Ggjk0
Interested in trying this. Where can I get access to it?
That bulge in the pants on the person on the right. Come on!
After seeing many side eyes and smirks between TSA agents, I snuck a peek at their backscatter screens.
My junk is a yellow alert! Rough angle of the dangle is displayed. Left, right or center.
Why are you telling us?
I’m telling everyone.
lol same. I sometimes get a manual pat down even if I’m wearing gym shorts with no pockets and no underwear. I had to look at their screen too out of curiosity, and it’s hilarious to see that yellow crotch box alert 😂
It’s the right knee. It’s a dancer; look at the arms. The right foot is near the left knee, and the right leg is going away from us. It’s not some elephantiasis. Lol, or maybe it is. Lol
I seem to remember that some company had worked out a program that trawled the web and matched photos in order to build 3D imagery, even with interior shots for some places.
It was largely based on the general public simply posting thousands of images of common places like landmark buildings and such.
Over time, the link went dead and I lost track of the project.
I’m sort of thinking it was a Microsoft project, but not certain.
Craig, I see a dancer with the arms up, and for me the “bulge” is his right knee. We can see the foot near his left knee, so I imagine he’s in a dancing position with the right knee turned out to his right side. Lol
No elephantiasis
How can I get it and try it?