Comment Location:
http://ayden-kim.blogspot.com/2010/12/reading-21-teddy.html
Summary:
Now this is interesting. The user "draws" 3D objects by making a 2D sketch and letting the Teddy system work its algorithm. Teddy doesn't recognize individual shapes, such as a square or triangle. Instead, it takes an enclosed shape and applies a number of operations to transform the sketch into a 3D shape. Operations include bending, painting, and extrusion, with multiple variations of each depending on the shape. Only specialists in the author's general research areas tested the Teddy system, but they gave very positive reviews.
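To make the "enclosed shape becomes a 3D shape" idea concrete, here is a minimal Python sketch of that kind of inflation. It is not Teddy's actual method (the paper builds a constrained Delaunay triangulation and elevates vertices along the chordal axis); instead it just raises interior grid points in proportion to their distance from the closed outline, which gives a similar balloon-like bulge. All names here (inflate_outline, the wobbly-circle example) are illustrative only.

```python
import numpy as np
from matplotlib.path import Path

def inflate_outline(outline, resolution=64):
    """Inflate a closed 2D outline into a rough 3D height field.

    Simplified stand-in for Teddy-style inflation: interior points are
    raised by the square root of their distance to the outline, so the
    flat sketch bulges outward like a balloon.
    """
    outline = np.asarray(outline, dtype=float)

    # Sample a regular grid over the outline's bounding box.
    xmin, ymin = outline.min(axis=0)
    xmax, ymax = outline.max(axis=0)
    xs = np.linspace(xmin, xmax, resolution)
    ys = np.linspace(ymin, ymax, resolution)
    gx, gy = np.meshgrid(xs, ys)
    pts = np.column_stack([gx.ravel(), gy.ravel()])

    # Keep only points enclosed by the sketch (Teddy likewise needs a
    # closed stroke before it will inflate anything).
    inside = Path(outline).contains_points(pts)

    # Distance from one point to the nearest outline segment.
    def dist_to_segments(p):
        a = outline
        b = np.roll(outline, -1, axis=0)
        ab = b - a
        t = np.clip(np.einsum('ij,ij->i', p - a, ab) /
                    np.einsum('ij,ij->i', ab, ab), 0.0, 1.0)
        proj = a + t[:, None] * ab
        return np.min(np.linalg.norm(p - proj, axis=1))

    z = np.zeros(len(pts))
    z[inside] = [np.sqrt(dist_to_segments(p)) for p in pts[inside]]
    return pts[:, 0], pts[:, 1], z

# Example: inflate a hand-drawn-looking blob (a wobbly circle).
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
r = 1 + 0.2 * np.sin(3 * theta)
blob = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
x, y, z = inflate_outline(blob)
print("max height:", z.max())
```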
Discussion:
I noticed the light source differed between some of the sketches shown in Figure 6. This makes me wonder if the light source is decided by Teddy or if it can be customized by the user. Here is the future: combine this with Maya, so I can draw something and convert it into a rendered 3D object. The farther future: scan a sketch (or sketches) of a game character and create a 3D model based on the input. This would reduce the workload of game developers when creating characters, enemies, and levels.
I like your future ideas very much; however, I think the second one could be tough to implement. Teddy seems to rely mostly on online recognition methods, and scanning would pose different challenges, perhaps requiring multiple views of the object to be scanned. Then again, game developers might be willing to draw directly on tablets if using Teddy provides those benefits.