EMotion FX Engine - Exclusive Interview
Coming off the big E3 event, Xboxcore will be featuring new content weekly. This new feature focuses on an aspect of gaming that rarely gets discussed: game engines. Each week Xboxcore will focus on a different game engine, with information coming straight from the design teams. Whether you're a gamer interested in the inner workings of your favorite game, or you just want to know what makes a game tick, this is your inside look into the other side of gaming. Perhaps you've thought about becoming a game designer but wanted to know what you're getting into? Or maybe you're starting out and want to know which tools are best to use? Hopefully this exclusive content will give you some insight into the core of gaming. This week's Engine Focus is on Mystic Game Development's EMotion FX.
The EMotion FX engine series is used mostly for animation purposes and features a range of functions suited for next-generation platforms. The engine can be "plugged into" just about any 3D engine and supports motion blending, lipsync technology, physics controllers, and much more. The list of features for EMotion FX 2 can be viewed here. We had a Q&A session with Mystic Game Development's John van der Burg, the Development Director, regarding some of these features and how EMotion FX makes use of them.
Xboxcore: How versatile is the EMotion FX 3 engine when it comes to various development teams? In other words, does the engine cater to the veterans of the business, or is it designed for up-and-coming developers as well?
John: We deal with both. However, it has to be commercially interesting for us, of course, since we all need to make a living from this. We have clients that are startups, where we mostly work with special prototype licenses. But we also have AAA companies using it. Of course we mainly aim for the AAA companies, but we sometimes do special types of licensing for companies that don't have all the cash upfront. We only do this if we think the project looks very promising to us.
But that we aim mostly for the veterans of the business probably shows in the fact that we are adding support for next-gen consoles. Most big publishers release many of their games on multiple platforms, so it is very important for us to support platforms like the Xbox 360.
Xboxcore: Displacement mapping would change the way meshes and objects are constructed and used in games. How far down the road do you think it will be before displacement mapping becomes a standard for character development in the gaming world?
John: Currently things are moving towards more and more image-based rendering. This means that we try to create the illusion of 3D detail by using special effects like parallax mapping. However, in games things are a bit different from what you usually see in movies.
You can do displacement mapping in different ways. The first way would be to just displace each vertex of the mesh. This is what is currently done for terrain engines, which basically apply a displacement map (height map) to a grid. However, this doesn't add more detail.
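The per-vertex approach John describes can be sketched in a few lines: a flat grid of vertices is raised by the value sampled from a height map. This is an illustrative sketch only; the function and names here are made up for the example, not part of EMotion FX.

```python
def displace_grid(width, height, height_map, scale=1.0):
    """Return (x, y, z) vertices for a flat grid displaced along y.

    height_map is a 2D list of normalized height values; each grid vertex
    is raised by its sampled height times the scale factor.
    """
    vertices = []
    for z in range(height):
        for x in range(width):
            y = height_map[z][x] * scale  # sample the displacement (height) map
            vertices.append((float(x), y, float(z)))
    return vertices

# A tiny 2x2 height map: the right-hand column of the grid is raised.
height_map = [[0.0, 1.0],
              [0.0, 1.0]]
verts = displace_grid(2, 2, height_map, scale=5.0)
# Each vertex keeps its grid position; only the y value changes.
```

Note that, exactly as John says, this only moves existing vertices around; the triangle count stays the same, so no geometric detail is added.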
So another way would be to use hardware tessellation and apply displacement mapping on the newly generated vertices as well. That would add more geometry detail. This should be possible with DirectX 10 where you can use geometry shaders that are able to tessellate primitives such as triangles.
Then displacing these values in a vertex shader would create real 3D geometry detail. What you see now is that image space tricks are done to make 2D surfaces appear like they are 3D. This includes parallax mapping and other similar techniques. These techniques don't really add geometry detail.
I think adding geometry detail is the next step for more mature DirectX 10 hardware.
What we want to do is apply subdivision surfaces and displacement mapping. That would allow us to work with relatively simple geometry, which is then rendered on screen as if it were an extremely detailed 3D model. So I think first we will mostly see things such as parallax mapping, and in about two years I think game developers will be able to use real displacement mapping. The first generation of DirectX 10 hardware won't really be mega fast with geometry shaders and tessellation, so I think we have to wait a little bit longer before it will be really usable in games. And imagine this in combination with additional parallax-like effects. This should look really great.
Theoretically you could also make amazing-looking terrains this way, where you displace multiple times. Unfortunately this also has limitations, and collision detection might be one of the biggest. Since it's basically not possible to read back geometry data from the hardware (without destroying performance completely), you can input very low-detail geometry, but what you see on screen can look very different. A simple example would be using hardware displacement mapping for a terrain. The geometry your engine works with would basically be this flat grid, while on screen you would see nice mountains and hills. You also expect to be able to walk on the hills and mountains, as that is what you visually see. But the engine wouldn't know the final geometry of what you see on screen. Of course for this simple terrain example with one displacement it is not really a big issue, but once you start displacing any geometry, and not just terrain, or when you start applying multiple displacement passes, things get really tricky.
The point I'm trying to make is that even displacement mapping will have its limitations on where it can be used. For characters it isn't really an issue, as we can just place simple invisible collision meshes (hit boxes) inside the model, which we can use for collision and physics.
Also, complex shaders, including displacement mapping, require more texture data. So all this will keep growing the amount of memory required on graphics cards. Fortunately this is already growing and heading towards one gigabyte. It might sound like a lot of memory, but it is never enough, especially not when you want really high detail.
I hope I'm wrong about my time estimate, though. The sooner we get to it, the better. I'm sure that once DirectX 10 is out, or even before, using the software driver to emulate the hardware that isn't there yet, someone will make some cool displacement-mapped renderings of some models. However, this won't mean you will see it in games yet. I think the initial DX10 hardware will first have to become faster at geometry shaders before it will be used a lot.
Xboxcore: Following up on that question. How far down the road do you think it will be before the EMotion FX supports displacement or parallax mapping?
John: EMotion FX isn't a rendering engine, but mainly an animation system. What we do is provide people with all the data they need to render their characters, and perform all the animation management and playback, etc. Our exporters for 3D Studio Max and Maya can export all the information needed to add displacement or parallax mapping. So this is already possible.
Xboxcore: The EMotion FX engine has the Smart/Progressive morph. How exactly do the Smart/Progressive morph-based facial expressions work?
John: It uses a technique that has been used for a very long time in modeling and animation software like 3D Studio Max and Maya. What it does is allow you to specify a base pose of your character and a set of target poses. Imagine the base/neutral pose of your character's head: it has its mouth closed, eyes open, and basically no expression on the face.
Now you specify a set of morph targets, which can be like the face with eyes closed, another one with a smile, another one with eyebrows in a mad looking pose, etc.
The progressive morph system of EMotion FX will filter out the changed vertices (or transformations), compared to the base pose. Now you can link a weight value to each of the morph targets, which specifies how active the given morph target is.
Since the final blending of the morph targets is additive / progressive this means that you can combine them together. This allows you to close the eyes while smiling, etc.
This also allows you to mix lipsync with other facial expressions. So you could make a character smile while it is talking. The difference between EMotion FX and 3D Studio Max and Maya is that it also allows you to perform bone/transformation-based morphing. So instead of recording vertex changes, it records the changes in the actual bone transformations. This is a lot better suited to today's hardware and takes less memory. It is also possible to mix different techniques together (such as mesh morphs and bone morphs). Inside EMotion FX the morph system is technique independent, so we can easily add new ways of defining morph targets.
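The additive blending John describes above can be sketched very simply: each morph target stores only deltas from the base pose, and a per-target weight scales how much of each delta is applied. All names below are illustrative, not the actual EMotion FX API.

```python
def blend_morphs(base_pose, targets, weights):
    """Additively blend morph targets onto a base pose.

    base_pose: list of vertex values (a toy 1-D stand-in for positions)
    targets:   dict of target name -> list of deltas from the base pose
    weights:   dict of target name -> how active that target is (0..1)
    """
    result = list(base_pose)
    for name, deltas in targets.items():
        w = weights.get(name, 0.0)
        if w == 0.0:
            continue  # inactive targets cost nothing
        result = [v + w * d for v, d in zip(result, deltas)]
    return result

base = [0.0, 0.0, 0.0]  # a toy "mesh" of three vertex values
targets = {
    "smile":       [1.0, 0.0, 0.0],  # deltas relative to the base pose
    "eyes_closed": [0.0, 2.0, 0.0],
}
# Because blending is additive, a character can smile with its eyes closed.
pose = blend_morphs(base, targets, {"smile": 0.5, "eyes_closed": 1.0})
# pose -> [0.5, 2.0, 0.0]
```

Storing deltas rather than full poses is what makes the combination "progressive": any subset of targets can be layered at any weight without the targets knowing about each other.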
Xboxcore: Lipsync technology makes a lot of sense in this age of gaming, especially considering newer games are using more and more polygons to detail facial expressions and convey vocal intonation through the characters. Is this the same sort of technology that has been used in 3D animated movies to get characters to emote accurate lip movements?
John: Yes, this is the same technology as used in movies. Artists create a set of morph targets for different phonemes. Phonemes are basically the basic sounds detected inside speech, like OOO, AAA, MMM, etc. Each of those phonemes is represented by a morph target. Our lipsync system will automatically control the weight values of these morph targets, based on the sound they have to lipsync to. It is also possible to edit the result afterwards, if there are places that don't look so good during playback.
Our lipsync system uses Hidden Markov Models to detect which phonemes are active at what time, and how intense. This is a widely used technique in speech recognition. On top of this, our lipsync system is character independent. This means that you can create a lipsync motion for one character and play it on any other character, even if that character has a different morph target setup. So you could generate some lipsync motion for a human, and play it on, for example, a frog.
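The character-independence idea can be illustrated with a small sketch: the lipsync track stores phoneme intensities over time, and each character supplies its own mapping from phonemes to its morph targets. Everything here (function and map names) is an assumption made up for the example, not EMotion FX's interface.

```python
def apply_lipsync_frame(phoneme_weights, phoneme_map):
    """Translate one frame of phoneme intensities into morph target weights.

    phoneme_weights: dict of phoneme -> intensity (e.g. from speech analysis)
    phoneme_map:     dict of phoneme -> this character's morph target name
    Phonemes the character has no target for are simply skipped, which is
    why the same track can drive characters with different target setups.
    """
    morph_weights = {}
    for phoneme, weight in phoneme_weights.items():
        target = phoneme_map.get(phoneme)
        if target is not None:
            morph_weights[target] = weight
    return morph_weights

frame = {"AAA": 0.8, "MMM": 0.1}  # one analyzed frame of speech
human_map = {"AAA": "mouth_open", "MMM": "lips_closed"}
frog_map = {"AAA": "jaw_wide"}    # the frog has no MMM target at all

human = apply_lipsync_frame(frame, human_map)
frog = apply_lipsync_frame(frame, frog_map)
```

The same `frame` drives both characters; only the per-character map differs, which is the essence of playing a human's lipsync motion on a frog.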
Xboxcore: The inverse kinematics and motion tools show advances in motion fluidity, with seamless transitions. Does this mean developers can use the animation studio and the available tools to hand-animate characters that could replicate motion-captured animations?
John: Not really. EMotion Studio is mainly an application for testing characters and motions with our system. Artists can use it to inspect their exported models in detail, test all their motions on these characters, and inspect the blending and other things. They can also test attachments and lipsync, and edit facial animations.
I don't think it is a good idea to make new software to animate characters with, by which I mean to create new animations with. The reason for this is that programs like 3D Studio Max and Maya are already so good at it, and already have so many features for this, that it is simply a bad idea to even try.
Besides that, we try to let the artists work as much as possible in the environment they are used to, so they can pretty much set everything up inside 3D Studio Max and Maya, using our custom plugins. The inverse kinematics and lookat controllers are examples of features that can improve the interactivity and quality of in-game character behavior.
Xboxcore: What are some of the roles the engine's multiple Inverse Kinematics solvers carry out?
John: They can be used for all kinds of different things. We also provide different solvers, as often you will see that one solver performs better for certain tasks than another. We currently have three different IK solvers implemented. They are used in games to plant feet on the ground, to grab things such as mounted guns, to keep the hands of a character on a steering wheel while you adjust the distance from the driver to the wheel, etc.
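To make the idea of an IK solver concrete, here is a minimal analytic two-bone solver in 2D, the kind of calculation behind planting a foot or keeping a hand on a steering wheel. It is a textbook law-of-cosines sketch, not one of EMotion FX's three actual solvers.

```python
import math

def two_bone_ik(l1, l2, tx, ty):
    """Return (shoulder, elbow) angles in radians so a 2-bone chain of
    lengths l1 and l2, rooted at the origin, reaches target (tx, ty)."""
    d = math.hypot(tx, ty)
    d = min(d, l1 + l2 - 1e-9)  # clamp: unreachable target -> full stretch
    # Law of cosines gives the bend at the elbow joint.
    cos_elbow = (l1 * l1 + l2 * l2 - d * d) / (2 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle = direction to target minus the offset caused by the bend.
    cos_inner = (l1 * l1 + d * d - l2 * l2) / (2 * l1 * d)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow

# Solve for a reachable target, then verify with forward kinematics.
s, e = two_bone_ik(1.0, 1.0, 1.2, 0.5)
ex = math.cos(s) + math.cos(s + e)  # end-effector x
ey = math.sin(s) + math.sin(s + e)  # end-effector y
# (ex, ey) lands on the target (1.2, 0.5) up to floating-point error.
```

Different solvers (analytic, CCD, Jacobian-based) trade speed against chain length and constraint handling, which is presumably why an engine would ship several of them.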
Xboxcore: With the attachment points allowing developers to add or remove objects, easily and efficiently, does this mean it will allow for even more interactivity between game characters and environmental objects using the EMotion FX engine?
John: Well, attachments have been used for many years in games already. They just allow you to attach specific items to a character. An example can be a model of a gun attached to the hands of a character. Now if you move the character, the gun will move with it as well.
Other examples can be attaching characters to other characters. Think of a cowboy riding a horse. You can attach the cowboy to the horse, so that when you move the horse, the cowboy will move with it. Still you can animate both the horse and the cowboy individually while being attached.
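The attachment mechanic described above boils down to a parent-child transform relationship: the attached object stores only a local offset, and its world position is recomputed from the parent every frame. The sketch below is translation-only for brevity (a real system would propagate full rotation and scale as well), and all names are illustrative.

```python
class Node:
    """A scene node that can carry attachments (other nodes + local offsets)."""

    def __init__(self, position=(0.0, 0.0, 0.0)):
        self.position = list(position)
        self.children = []  # list of (attached node, local offset) pairs

    def attach(self, child, local_offset):
        self.children.append((child, local_offset))

    def update(self):
        # Move every attachment along with this node. Attachments can carry
        # attachments of their own, so updates recurse down the chain.
        for child, (ox, oy, oz) in self.children:
            px, py, pz = self.position
            child.position = [px + ox, py + oy, pz + oz]
            child.update()

horse = Node((10.0, 0.0, 0.0))
cowboy = Node()
gun = Node()
horse.attach(cowboy, (0.0, 2.0, 0.0))  # cowboy sits 2 units above the horse
cowboy.attach(gun, (0.5, 0.0, 0.0))    # gun rides in the cowboy's hand
horse.update()
# Moving the horse moved the cowboy, which in turn moved the gun.
```

Note the cowboy can still be animated independently while attached: his own animation would simply adjust the local offset before the parent's update is applied.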
Xboxcore: With the collision detection feature and Character Factor, given the options, is it possible to have deformable models at certain triangular points? For instance, would it be easier for developers to make a character's arm dent from being hit by an object, or their chest cave in specifically to the collision of an object?
John: The hit/collision detection system can give you very accurate information about a hit, for example a bullet. It can give you information such as the exact point of intersection, so where the bullet enters the body of the character, plus all other surface and material information. For example, you can check which pixel in a given texture (texel) it is intersecting with. That way you could define very detailed material information, not on a per-mesh basis but on a per-pixel basis. You could define areas of metal and other areas of flesh, for example. Based on this you could then take action. An example could be to render a bullet-impact sprite on top of that location in the texture, to get a flesh wound on the character.
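The step from a triangle intersection to a texel is standard barycentric interpolation: the hit point's barycentric weights blend the triangle's per-vertex UVs, and the resulting UV is mapped into texture space. This is a generic sketch of that technique, not EMotion FX's actual interface.

```python
def hit_texel(uv0, uv1, uv2, bary, tex_width, tex_height):
    """Return the (x, y) texel hit inside a triangle.

    uv0..uv2: per-vertex UV coordinates of the hit triangle
    bary:     (w0, w1, w2) barycentric weights of the intersection point
    """
    w0, w1, w2 = bary
    u = w0 * uv0[0] + w1 * uv1[0] + w2 * uv2[0]
    v = w0 * uv0[1] + w1 * uv1[1] + w2 * uv2[1]
    # Map the normalized UV into integer texel coordinates.
    return int(u * (tex_width - 1)), int(v * (tex_height - 1))

# A hit on the midpoint of the edge between the first two vertices of a
# triangle whose UVs span the whole 256x256 texture.
texel = hit_texel((0.0, 0.0), (1.0, 0.0), (0.0, 1.0),
                  (0.5, 0.5, 0.0), 256, 256)
# A per-texel material map could now be indexed to answer "flesh or metal?".
```

Once the texel is known, the per-pixel material lookup John mentions is just an array index into whatever material map the game defines for that texture.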
To take this a step further you can replace body parts with different body parts. So if you shoot someone in the arm with a shotgun, you can actually rip off the arm and replace it with an arm that has been half shot off and is all bloody etc. There is no direct support for what you explained. But it is possible, as you could generate the new chest of the character on the fly, by modifying the existing chest mesh data, based on the impact. So it is definitely possible, but it will require some programming work. Most likely you want to let this be handled by a physics system though. Next generation physics systems will allow such things, by using soft-body physics. This allows you to deform objects, based on their impacts.
Xboxcore: With the multi-processor support, does this affect the way the animations and characters are used in conjunction with the processing, or does this affect how the entire game is developed for a multi-processing platform?
John: It does not directly have any effect on the game itself, other than that game companies will be able to use more complex characters, and more of them, on screen at once. What it does is spread the calculations over different processors or cores. So, for example, instead of processing all characters one after the other, EMotion FX can process multiple characters at the same time. There is no limit on the number of CPUs or cores, and EMotion FX can automatically scale with the number of CPUs. This all means that you can do the same amount of work in less time, which means higher framerates. Or you can use the extra available framerate to add more detail or increase the number of characters. So this will really help improve in-game character coolness.
If you add more CPUs or cores, it can perform more and more calculations in parallel. This is something your software has to be designed for, and EMotion FX 3 has been written with massive parallel processing in mind. The nice thing about EMotion FX's design is that the programmers who write the game don't notice anything of the multi-processing, as it is completely transparent to them.
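The "process characters in parallel, transparently" idea can be sketched with a worker pool: game code calls one update function, and the distribution of characters across workers is hidden inside it. This is a toy illustration under assumed names, not EMotion FX's actual scheduler, and the per-character work here is just a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

def update_character(character, dt):
    # Placeholder for the real per-character work: motion blending,
    # morphing, skinning, and so on.
    character["time"] += dt
    return character

def update_all(characters, dt, workers=4):
    """Update every character, spreading the work over a pool of workers.

    The caller never sees the parallelism: it passes a list in and gets the
    updated list back, regardless of how many workers ran underneath.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda c: update_character(c, dt), characters))

chars = [{"name": f"npc{i}", "time": 0.0} for i in range(8)]
chars = update_all(chars, 0.016)  # advance all characters one 16 ms frame
```

In a real C++ engine the work units would be jobs on a task scheduler rather than Python threads, but the design point is the same: the number of workers can grow with the core count without the game code changing.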
Next week, stay tuned as we focus on the game engine that made Dreamfall: The Longest Journey a success: Shark 3D.
Article By: VGcore Staff