I'm writing a series of blog entries to help explain how mocap works. It's targeted toward NaturalPoint systems and Arena. The entries can be found at:
http://blog.fie.us/
The category is Mocap Series.
I've posted the first entry on reconstruction.
Hopefully this will help the community get a better grasp of the technology involved.
How Mocap Works Series
Re: How Mocap Works Series
A second entry on Retroreflectivity and Markers has been posted.
Re: How Mocap Works Series
Thanks, Brad! This is very informative. I'll be sure to check out your blog periodically. (Will you be taking questions?)
--Tim
Re: How Mocap Works Series
Brad - GREAT blog!
Very informative.
Re: How Mocap Works Series
Brad,
Have you used or seen RealViz's Movimento? I followed their development for years until its final release sometime last year. The site has very limited info on the product, but I recently noticed several case studies. The one I am most interested in is the Gollum reproduction by a guy in France. It's amazingly well-done facial capture. Check it out in the Gallery section: http://movimento.realviz.com/motion-cap ... llery.html
From my understanding you can use as few as two regular DV cameras. I was wondering: is there a way to use our OptiTrack cameras to pull in the point data? And secondly, do you think the Gollum is point-driven morph targets, clusters, or a weighted, bone-driven mesh? I'm trying to figure out the best way to tie the data to the mesh. What's your best guess? Then there is the tying of point data to an Actor in MotionBuilder under the use case for full bodies. Can you check that out and let me know how they tied it together... again, if you have an idea.
My thought is that Movimento could be used with the OptiTrack cameras to capture facial and body mocap at the same time and bring it into MotionBuilder. Or capture a live performance of the body with Arena and MB, then capture the face and hands with Movimento.
Thoughts, suggestions?
Will
Re: How Mocap Works Series
I don't have time right now to do the full research I would need to answer completely, so a lot of this will be conjecture; I apologize. I have seen the Gollum example you are referring to, though, so I do have a good understanding of the quality and scenario you describe.
I think the two products could probably be made to coexist with some elbow grease and perhaps a little coding. To take full advantage of Movimento, you'd probably want to write a little app for the OptiTrack camera that writes the objects (blips) to disk in a simple format and then, offline, turns them into an AVI or MOV of white dots on black at high resolution. Then you'd just import those movies into Movimento as if they were any video source from, say, a camcorder. Movimento's human-assisted tracking would eat up the virtual video like candy, and you'd be able to move forward from there. That's the kludgy way to do it, anyway, I'd think. It might also be possible to write some code on the Movimento end to skip the video step, but I'd need to see their SDK docs to know more.
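To make the "virtual video" idea concrete, here's a minimal sketch of the offline rendering step. Everything here is an assumption for illustration: the frame size, the dot radius, the PGM output format, and the idea that your little camera app has already logged 2D blip coordinates per frame. None of it comes from a real OptiTrack or Movimento SDK.

```python
def render_frame(blips, width=640, height=480, radius=3):
    """Render one frame: white discs on a black background.

    blips is a list of (x, y) pixel coordinates of detected markers.
    Returns the frame as a binary PGM (grayscale) byte string.
    """
    pixels = bytearray(width * height)  # all black
    for (cx, cy) in blips:
        # Stamp a filled disc for each blip.
        for y in range(max(0, int(cy - radius)), min(height, int(cy + radius + 1))):
            for x in range(max(0, int(cx - radius)), min(width, int(cx + radius + 1))):
                if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                    pixels[y * width + x] = 255  # white dot
    header = "P5\n%d %d\n255\n" % (width, height)
    return header.encode("ascii") + bytes(pixels)


def write_sequence(frames_of_blips, prefix="blips"):
    """Write one numbered PGM per frame; an offline step (e.g. ffmpeg)
    could then pack the sequence into the AVI/MOV Movimento imports."""
    for i, blips in enumerate(frames_of_blips):
        with open("%s_%05d.pgm" % (prefix, i), "wb") as f:
            f.write(render_frame(blips))
```

From there, an image-sequence-to-movie tool turns the PGMs into the movie file you'd feed the tracker as if it came from a camcorder.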
I would guess that the Gollum example is a combination of point-driven deformation (a skinned mesh) and shape animation. The shape animation could be driven by the facial joints using links (driven keys). If it were a VFX shot rather than a purist's tech demo, there would also be a layer of hand-animated fixes/enhancements on top.
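That guessed-at deformation stack can be sketched per vertex: the skin weights blend the joint-driven motion first, then blendshape deltas are layered on top. Joint transforms are reduced to 2D translations here purely to keep the sketch short (a real rig uses full 4x4 matrices), and all names and numbers are illustrative.

```python
def deform_vertex(rest_pos, joint_offsets, skin_weights,
                  shape_deltas, shape_weights):
    """rest_pos: (x, y) rest position of one vertex.
    joint_offsets: per-joint translation, e.g. {"jaw": (0.0, -1.0)}.
    skin_weights: per-joint influence on this vertex (should sum to 1).
    shape_deltas: per-blendshape delta for this vertex.
    shape_weights: blendshape sliders, e.g. driven-key outputs.
    """
    x, y = rest_pos
    # Linear blend skinning: weighted sum of joint-driven offsets.
    for joint, w in skin_weights.items():
        dx, dy = joint_offsets.get(joint, (0.0, 0.0))
        x += w * dx
        y += w * dy
    # Blendshapes layered on top of the skinned result.
    for shape, w in shape_weights.items():
        dx, dy = shape_deltas.get(shape, (0.0, 0.0))
        x += w * dx
        y += w * dy
    return (x, y)
```

With a half-weighted jaw pulling down and a "smile" shape fully on, `deform_vertex((1.0, 2.0), {"jaw": (0.0, -1.0)}, {"jaw": 0.5}, {"smile": (0.2, 0.1)}, {"smile": 1.0})` lands at (1.2, 1.6): half the jaw drop plus the full shape delta.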
MotionBuilder's "Actor Face" is a strange beast that doesn't make sense to me. I think it's just an old solution from long ago that they have not replaced or removed yet. It expects a fairly simple set of markers and uses them to drive a specific set of blendshapes. I would think you'd approach it like you would the human body, where you attempt to "retarget" the data geometrically and hierarchically. Instead, it tries to boil it down to parameters like "smile" and "smirk".
I think attempts at using OptiTrack cameras to do face, body, and hands all at once are a little beyond the hardware's specs in typical usage scenarios. If you're talking about setting up the cameras on tripods and capturing all at once, they simply don't have the resolution to cover the space for a full-body capture while also having the fine detail to capture the minutiae of the face and a detailed hand setup, at one time, in one volume of reasonable size. You simply need higher-res cameras, or something like 50-100 V100s. Don't get me wrong, I'd rather build a 50-camera V100 setup than a 16-camera Vicon MX40 setup, because the price difference between those two is huge. However, the fact of the matter is, Arena and NaturalPoint are not quite ready for that kind of scale yet, IMHO. I do know that scalability is high on the priority list and they want to get there, and I think they can. It will take a little time, though.
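The resolution argument is easy to check on the back of an envelope. The numbers below (640x480 sensor, roughly a 46-degree horizontal field of view) are my assumptions for illustration, not published specs; the point is only the order of magnitude.

```python
import math

def mm_per_pixel(distance_mm, fov_deg=46.0, h_res=640):
    """Width of one pixel's footprint on a subject at a given distance,
    for a camera with the given horizontal FOV and resolution."""
    view_width_mm = 2.0 * distance_mm * math.tan(math.radians(fov_deg / 2.0))
    return view_width_mm / h_res

# At ~5 m, the kind of standoff you need to see a full-body volume,
# each pixel covers several millimeters, so a small facial marker is
# around a single pixel -- too coarse to track the face cleanly.
print(mm_per_pixel(5000))  # footprint at 5 m standoff
print(mm_per_pixel(600))   # footprint at 0.6 m (e.g. head-mounted)
```

Under these assumptions the footprint is roughly 6-7 mm per pixel at 5 m versus under 1 mm at close range, which is why a single tripod volume can't serve body and face at once without far more resolution or far more cameras.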
I could see some specialized scenarios where you have a normal volume but also a camera or two head-mounted to do the face.
I could see a very complex scenario where you have a normal volume and some cameras on auto pan/tilt mounts with telephoto lenses that follow an actor's head and/or hands to get better resolution for those body parts.
Both of those uncommon setups would require some pretty hefty custom software, though. A year or so of development, I'd think.
My immediate, pragmatic approach would be to break the face and hands out as separate capture passes if I simply needed to get the job done and keep the budget low.
If I really needed facial at capture time, I'd use a couple of hi-def witness cams (Sony handycams, nothing too serious) and subcontract to Image Metrics:
http://www.image-metrics.com/
If high-res hands were particularly critical, I'd look into a data glove or shape-tape solution.
All in all, though, I don't think Movimento will gain you much here. Remember, that Gollum piece was done as a facial capture, not as a full-body capture. Facial capture is MUCH easier for the mocap system, and much harder for the rigger/animator. Full body is harder on the system and easier on the rigger/animator. Arena doesn't support it yet, but there have already been suggestions that simply setting up a rigid body with all your facial markers and a high slack factor in Arena might be enough to get some facial capture going. I haven't tried it myself; I've been focusing more on my own software.
If you have a specific project or scenario you are thinking about, feel free to contact me further, though unfortunately, my schedule just got really tight in the past few days and threatens to stay that way for the next year. Also, you may want to contact NaturalPoint directly as they are pretty good about helping productions with needs (and it sounds like you've got something specific in mind). They are actively working on features and fixes, and some of those may be applicable to your project.
Obviously, if you need full body, face, and hands right now, you need to go to House of Moves or Giant Studios. They're the vendors that have done it and can do it for you at the going market rate, assuming they're not booked solid.
Re: How Mocap Works Series
Thanks for all the info. Yeah, it's amazing how busy it's getting. I have jobs lined up too, with no end in sight. Before this one I worked on a film called "The Curious Case of Benjamin Button" for a year, and had two days off before I started this one! Anyway, looking at RealViz's web site, their case studies show that you can use standard DV cameras, handheld (an obvious use of their camera motion tracking software), and that many are capturing face and body together and then adding in hands later. I've dropped a line to the author of the Gollum piece to see if he will divulge his process.
I do indeed have a trailer that I am putting together to show a producer who is interested in seeing an all-CG program. Since I come from a make-up effects background, I have an animatronics take on the execution of mocap, in particular facial mocap. The idea of full-HD, radiosity-rendered, all-CG TV shows is almost there. I know Ron Thornton, from my old Babylon 5 days, has proven over and over that you can create a show on budget and on time.
I think there's a gap in this growing field, where a lot of art is lost in the technical aspects of creating CG. I'd love to build a pipeline that would give traditional artists the creative freedom to come in and build an all-CG project. I've waited 10 years and I am starting to see a breakthrough. In sculpting there are Mudbox and ZBrush. So hopefully the gap will continue to close and many people I know can begin to migrate over.