We are now all very used to seeing computer generated images and animations in games and films. Computer Generated Imagery (CGI) was very obvious in the early days of its development, but now even low budget films contain CGI that many people are unlikely to notice, let alone care about. Video games are an obvious use of digitally created interactive experiences, but the effort and talent needed to create the assets in a game environment, from characters to animation, soundtrack and scenery, on top of the challenge of making something that plays well, is not always appreciated. Tools and middleware have evolved significantly over the past 10-15 years as the complexity of what can be done with graphics and sound has increased. The two major platforms in this space are Unity and Unreal. Just over a decade ago I was primarily using Unity to create experiences such as a virtual training hospital and some game focussed projects, in part because it was really easy at the time to generate content that would run in a web browser. Recently, though, I spent a little time looking at Unreal Engine because of an interesting cloud based tool that Epic/Unreal have created, working with their development environment, called Metahuman Creator.

People are complicated

The nature of the human brain is such that we constantly see faces in naturally occurring things. Our brains work away spotting eye, nose and mouth shapes in knots of wood, clouds and random marks on a table top. Yet the more accurate a deliberate recreation of a face gets, the more we notice that it is not quite right, the effect often called the uncanny valley. This occurs in still images such as sketches and paintings, but it is even more apparent when things have to be animated. Mannerisms, micro expressions, blinking and the shape of a talking mouth all start to betray a computer animated face, on top of the actual rendering of materials such as skin and hair. Of course there is also the entire rest of the body to consider, with the same level of challenge in making something believable. Animators and 3D modellers have been creating digital people of all shapes and sizes for many years, and as with all high end labour intensive processes a tool inevitably arrives to make things a little easier. Unreal's Metahuman Creator is one part of the puzzle, helping to create more believable digital humans both in physical look and in the ability to animate and control them.

A face in the clouds

Metahuman Creator relies on cloud services to customize and alter the many parameters available when creating a new person. It also differs from many graphics applications that rely on some cloud calculations in that its entire interface is rendered in the cloud, just like a cloud game. It is not a web page with controls in HTML but a full image of a user interface rendered remotely. Starting from basic templates of existing Metahuman models, the user can merge and craft a new person, or dive into the more detailed parameters and sculpting options. Most role playing and sports gamers will have come across character editors for their specific games; the concept is not new, but it is the detail and the resulting asset that make the difference here. I started to create a version of Roisin, the lead character in my sci-fi novels Reconfigure and Cont3xt, and she began to appear on my creator interface with a few blends and clicks. The tool then provides a bit of life to the creation and starts animating the model. This instant life giving approach is really interesting, as normally the modelling to animation chain would be somewhat longer. A picture of Roisin can be seen here and a short video clip here.
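The merge-and-craft workflow can be thought of as interpolating between sets of facial parameters from template faces. Here is a minimal Python sketch of that idea; the parameter names and the function are hypothetical illustrations, not the Metahuman Creator data model or API:

```python
# Hypothetical sketch: blending facial parameter presets, in the spirit of
# template-based character blending. Parameter names are made up.

def blend_presets(presets, weights):
    """Linearly blend several parameter dictionaries.

    presets: list of dicts mapping parameter name -> value
    weights: list of floats, expected to sum to 1.0
    """
    if len(presets) != len(weights):
        raise ValueError("need one weight per preset")
    blended = {}
    for key in presets[0]:
        blended[key] = sum(p[key] * w for p, w in zip(presets, weights))
    return blended

preset_a = {"jaw_width": 0.2, "nose_length": 0.7, "brow_height": 0.5}
preset_b = {"jaw_width": 0.9, "nose_length": 0.3, "brow_height": 0.4}

# 70% of preset A, 30% of preset B
result = blend_presets([preset_a, preset_b], [0.7, 0.3])
```

A more detailed sculpting pass would then adjust individual parameters on top of a blended starting point, which matches the template-first flow described above.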

Teleporting to the desktop

I installed the Unreal Engine development environment on my gaming desktop machine along with Quixel Bridge, a tool for wrangling data formats from one place to another while dealing with each tool's quirks. It has specific connections to Metahuman Creator and allows the download and subsequent import of the model into the development space. A sample Metahuman project, when run, provides a couple of talking synthetic people showing the full range of facial gesture and movement that is possible. I swapped the sample out for my character and it worked perfectly. This should always be the case, but sometimes seemingly similar models actually have a different structure under the covers for the animation to drive. I am running a decent spec gaming machine, but nothing over the top, and the results running live, in development, are impressive and interesting.
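That "different structure under the covers" problem is essentially a contract check: an animation drives named controls that the replacement model must expose. A tiny hypothetical Python sketch of such a check; the control names are invented for illustration and are not the actual Metahuman rig:

```python
# Hypothetical sketch: verifying a replacement character exposes the rig
# controls a sample animation expects to drive. Names are illustrative only.

def missing_controls(required, model_controls):
    """Return the controls an animation needs that the model lacks."""
    return sorted(set(required) - set(model_controls))

sample_animation_controls = ["head_yaw", "jaw_open", "eye_blink_L", "eye_blink_R"]
my_model_controls = ["head_yaw", "jaw_open", "eye_blink_L", "eye_blink_R", "brow_raise"]

gaps = missing_controls(sample_animation_controls, my_model_controls)
if gaps:
    print("Animation cannot drive:", gaps)
else:
    print("Rig exposes everything the sample animation needs")
```

Because all Metahumans share a common underlying structure, the character swap "just works"; swapping in a model from a different pipeline is where gaps like these appear.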

Motion Capture for all

Motion capture (mocap) is the process of taking the movement of a real person or thing and converting it into the animation drivers for a digital puppet. There are very precise high end systems for this, but they are getting cheaper all the time. For a Metahuman, a plugin exists in Unreal to capture facial movement data in real time from an iPhone, which highlights how accessible this is becoming. With a little bit of setup effort I was able to blink, wink, talk and nod my head around and see the highly detailed Metahuman model do the same thing live. Obviously if you can do it live, you can also drive it from recordings. Phones are increasingly able to determine body position too, just as the Kinect game controller for the Xbox could a decade ago. It is not yet going to be as good as a pro mocap suit and rig, but it is enough to get going and prove how something might look. The increased use of artificial intelligence and physics simulation in game engines and development environments also starts to allow a creator to express a suggestion of what they would like done and have the software fill in the rest.
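Facial capture of this kind typically arrives as a stream of per-frame blendshape weights, and raw values can be jittery. A hypothetical Python sketch of smoothing such a stream with an exponential moving average; the blendshape name follows ARKit's naming style, but the code is illustrative and not the Unreal plugin's API:

```python
# Hypothetical sketch: smoothing a stream of facial blendshape weights
# before driving a model. An exponential moving average trades a little
# latency for less frame-to-frame jitter.

def smooth_stream(frames, alpha=0.5):
    """Exponentially smooth per-blendshape weights across frames.

    frames: list of dicts mapping blendshape name -> weight (0.0 to 1.0)
    alpha:  how strongly the latest frame pulls the smoothed value
    """
    smoothed = []
    state = {}
    for frame in frames:
        for name, value in frame.items():
            prev = state.get(name, value)  # first sighting: no smoothing
            state[name] = alpha * value + (1 - alpha) * prev
        smoothed.append(dict(state))
    return smoothed

# A noisy blink: the raw weight jumps sharply frame to frame.
raw = [{"eyeBlinkLeft": 0.0}, {"eyeBlinkLeft": 1.0}, {"eyeBlinkLeft": 0.2}]
out = smooth_stream(raw, alpha=0.5)
```

Whether smoothing live or replaying a recording, the pipeline is the same, which is why a live setup gives you recorded playback essentially for free.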

What’s it all for?

If you have a story to tell or an experience you want to create, imagine how difficult it would be to first have to invent the alphabet, then words and sentences, then a keyboard to write it all down on, on top of working out the story or idea you want to share. The more tools support the creative process, the more people get to be creative. Look at the explosion in photography and bite size video content that has come from us all having a camera with us at all times. Metahuman Creator and tools like it sit on top of a whole lot of other tooling that can help teams large and small do what they want to do. It does not mean all CGI has to be this photorealistic, but for projects that need it, it might be the difference between a nice idea that is never created and a working experience. For me, I want to explore the potential to build scenes, or even an entire film, of my books. That is not easy at all, but if I have digital actors I can animate relatively simply, combined with what I know about virtual environments, lighting, audio and so on, I could be a one person film factory in my spare time. And if I can do it, then so can you, in whatever field you are interested in.