|Lucian I. Posted: June 04, 2012|
If anybody is still reading this post, I managed to narrow it down to using Poser and Python for this task. But while it's relatively easy (I guess) to render the image, it's more challenging to get the depth map. Does anybody have any experience with something like this?
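In case it helps anyone following along: once I can get at the renderer's z-buffer, I expect the conversion to metric depth to look roughly like this (a sketch assuming a standard OpenGL-style perspective projection with near/far clip planes; `linearize_depth` is my own helper, not part of Poser's API):

```python
import numpy as np

def linearize_depth(zbuf, near, far):
    """Convert a normalized non-linear z-buffer (values in [0, 1]) to
    metric distance from the camera, assuming a standard perspective
    projection with the given near/far clip planes."""
    z_ndc = 2.0 * zbuf - 1.0            # map [0, 1] to NDC depth [-1, 1]
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))
```

A quick sanity check for whatever buffer the renderer actually hands back: zbuf = 0 should come out as the near-plane distance and zbuf = 1 as the far plane.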
|Lucian I. Posted: May 31, 2012|
First off, let me say that I have no background in rendering, 3D modeling, or anything related. So I came here to humbly ask the advice of experts.
What I ultimately need is a huge database (100,000+ samples) of 3D models of hands from different types of people (young, old, male, female, skinny, chubby), as seen from different angles (front, top, bottom, left, right, and everything in between).
Since I couldn't possibly create all of these by hand, I would need some piece of software that can output a 3D model given these parameters: hand type, distance, camera position (or rotation of the hand), and of course bend, side movement, and twist for every joint. If the software has something that prevents unnatural hand poses (like fingers merging into one another or bending backwards), that would be even better. The data I need from it is essentially the point cloud of the hand as seen from the camera, or a depth map (each pixel representing the distance from the camera to the surface of the model). As such, texture is mostly useless to me, though it would be a nice addition. Also, if I had the model as raw 3D vertices or triangles, I think I could work out a way to extract the data I need myself.
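To make the last point concrete, this is the sort of extraction I have in mind if I can get raw triangles: cast one ray per pixel and record the hit distance. A sketch using the Möller-Trumbore intersection test (NumPy only; the function name is my own):

```python
import numpy as np

def ray_triangle_distance(origin, direction, v0, v1, v2):
    """Moller-Trumbore ray/triangle intersection.
    Returns the distance t along `direction` (assumed unit length) to
    the hit point, or None if the ray misses the triangle."""
    eps = 1e-9
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                  # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv              # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv      # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv             # distance along the ray
    return t if t > eps else None
```

Per pixel, I'd take the minimum t over all triangles (or None for background); that minimum is exactly the depth-map value I described above.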
I have already found software along these lines (libhand), but it exposes no 3D data (although I'm going to look through the source code to see if I can extract it somehow - it uses OGRE), only has one hand model (male, 25-30 years old), and allows joints to bend a full 360 degrees.
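For the joint-limit problem, even a crude per-axis clamp applied before posing would rule out the worst cases like backwards-bent fingers. A sketch (the ranges below are rough guesses on my part, not measured anatomical data):

```python
def clamp_joint(angles_deg, limits):
    """Clamp per-axis joint rotations (bend, side, twist) to allowed
    ranges. `limits` maps axis name -> (min_deg, max_deg)."""
    return {axis: max(lo, min(hi, angles_deg[axis]))
            for axis, (lo, hi) in limits.items()}

# Illustrative limits for a finger's middle (PIP) joint - rough guesses,
# not anatomical data:
PIP_LIMITS = {"bend": (0.0, 100.0), "side": (-15.0, 15.0), "twist": (-5.0, 5.0)}
```

This wouldn't catch fingers intersecting each other (that needs actual collision checks), but it's a cheap first filter when sampling random poses for the database.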
So, do you guys know of any specialized software that can do that for me? This project is for my master's thesis, so it's pretty important.
Looking forward to seeing your responses.