Wednesday, 12 March 2014

Alpha Presentation and an Overall Update

Alpha Presentation


          David and I had our alpha presentation recently, and from the response and critique of our lecturer I believe we did an excellent job.

David's environment is ahead of schedule and, aside from a few possible improvements our lecturer suggested, is complete. My animations were in a 'key frame' state, i.e. they had no in-between frames but presented the strong key poses that followed the timing of the storyboard and cinematic. The first few seconds of the character coming to life were already fleshed out with in-betweens, and I had intended to do that for the whole cinematic ready for alpha; however, issues with the rig and texture outputs (detailed later in this entry) put me behind my personal schedule. That said, considering the definition of the alpha stage, my animations were alpha-level: nothing would now change except the completion of the animations, and they already communicated what the final cinematic would look like.




Overall Update

          Currently I have the textures for the character and sword finalised, with the possibility of adding an emission texture later for the eyes and perhaps some of the cracks. The animation for one of the characters (there are four statues in total) is fully blocked out with key frames in place and partially fleshed out; only in-between frames are left to do. The other three will have similar animation, but varied in the way they wake up and deliver their battle cries, so that each reads as a separate entity even though they look identical.

          Leading up to the alpha presentation I had a few issues that meant we were not ready as early as we thought.

I had completed the textures for the character using poly painting in ZBrush and was ready to export them. However, without realising it I had used the wrong base mesh, one with incorrect UVs, which meant that when I tried exporting the texture map from ZBrush parts of the UVs were missing or incorrectly placed. I had two base meshes for the character: one was the actual low poly mesh that would be used in engine with the rig, and the other was a low poly version altered to prepare it for ZBrush (triangle polygons converted to quads, plus extra edge loops to spread the poly distribution so it would subdivide without stretching). I had somehow imported the wrong one into ZBrush.
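In hindsight, a quick check in Maya would have caught the mix-up before any work was lost. Here is a minimal sketch of the kind of comparison I mean; the two base mesh names are placeholders, not the actual scene names:

import maya.cmds as cmds

GAME_MESH = "statue_lowpoly_game"      # assumption: the rig-ready, in-engine mesh
ZBRUSH_MESH = "statue_lowpoly_zbrush"  # assumption: the quad-converted, edge-looped version

def describe(mesh):
    """Return a few counts that tell the two base meshes apart at a glance."""
    return {
        "verts": cmds.polyEvaluate(mesh, vertex=True),
        "faces": cmds.polyEvaluate(mesh, face=True),
        "tris": cmds.polyEvaluate(mesh, triangle=True),
        "uvs": cmds.polyEvaluate(mesh, uvcoord=True),
    }

for mesh in (GAME_MESH, ZBRUSH_MESH):
    print(mesh, describe(mesh))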

ZBrush allows you to reimport the low poly mesh you started with, so that you can update things such as UVs while keeping all the sculpting done at the higher subdivision levels. I tried this, but because I had altered the mesh in preparation for ZBrush (extra edge loops etc.) ZBrush saw it as a completely different mesh; when it reapplied the sculpted subdivision levels it softened edges and destroyed the work I had put in. For a long time I couldn't find a solution and thought I would have to start over, meaning the days spent on the high poly sculpt and poly painting would be wasted. Thankfully, I was in a second-year lecture while trying to find a fix, and the lecturer (Leavon Archer), who is highly knowledgeable in ZBrush, knew a solution. Since I was poly painting in ZBrush, Leavon told me the initial base mesh's UVs don't matter: you can export the high poly subtools (individual meshes) as .obj files, which store the colour information on a per-vertex basis. xNormal, which I had been using for normal and AO map baking, can also bake diffuse texture maps from that per-vertex colour information. This solved my problem and saved me a huge amount of time redoing all that work in ZBrush.
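To illustrate the principle (this is only a sketch of the idea with assumed file names, not the xNormal bake itself): the high poly .obj carries its poly paint as extra values on each vertex line, and the colour for any point on the low poly can be sampled from the nearest coloured high poly vertex, which is roughly what the baker does before writing the result into the low poly's UV space.

import numpy as np
from scipy.spatial import cKDTree

def read_obj_vertices(path):
    """Return (positions Nx3, colours Nx3 or None) from an .obj file."""
    positions, colours = [], []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):
                parts = line.split()[1:]
                positions.append([float(x) for x in parts[:3]])
                if len(parts) >= 6:          # extended "v x y z r g b" vertex colour
                    colours.append([float(x) for x in parts[3:6]])
    cols = np.array(colours) if len(colours) == len(positions) else None
    return np.array(positions), cols

hi_pos, hi_col = read_obj_vertices("statue_highpoly_polypaint.obj")  # assumed file name
lo_pos, _ = read_obj_vertices("statue_lowpoly_game.obj")             # assumed file name
assert hi_col is not None, "high poly .obj has no per-vertex colours"

# For each low poly vertex, take the colour of the closest high poly vertex.
_, nearest = cKDTree(hi_pos).query(lo_pos)
lo_col = hi_col[nearest]
print("sampled", len(lo_col), "vertex colours onto the low poly")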

I took this diffuse texture and applied the ambient occlusion and cavity maps to it in Photoshop. The AO gives it shadow and light detail, and the cavity map makes the smaller details pop in the diffuse, which AO alone can struggle with. I also removed the solid black shadows visible at the bottom left of the AO bake, and used a dodge and burn layer in Photoshop to bring that area up to the same light levels as the rest of the texture.

Diffuse texture derived from high poly bake of vertex colour information to low poly, with AO and cavity maps applied
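The Photoshop layer stack boils down to a couple of standard blend operations, so a rough scripted equivalent looks something like the following. File names are placeholders, and I am assuming the usual multiply blend for the AO and an overlay blend for the cavity, which is how I layered them:

import numpy as np
from PIL import Image

def load(path):
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0

diffuse = load("statue_diffuse_1024.png")  # assumed file names
ao      = load("statue_ao_1024.png")
cavity  = load("statue_cavity_1024.png")

# Multiply blend: darkens the diffuse with the baked occlusion.
out = diffuse * ao

# Overlay blend: pushes the cavity detail into the highlights and shadows.
out = np.where(out < 0.5,
               2.0 * out * cavity,
               1.0 - 2.0 * (1.0 - out) * (1.0 - cavity))

Image.fromarray((np.clip(out, 0, 1) * 255).astype(np.uint8)).save("statue_diffuse_comp.png")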
          Running up to alpha I re-baked the ambient occlusion map for the character, having realised that when batch baking all the individual meshes at once, xNormal places them in the same 3D space, so occlusion was being picked up from meshes that sat close together. I therefore redid the bakes on an individual basis. The differences can be seen below:

AO batch bake


AO re-bake - individually baked
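For future bakes it would be easy to script the individual approach rather than clicking through each mesh by hand. As far as I know xNormal can be pointed at a saved settings file from the command line, so with one settings .xml saved per mesh from the GUI, something like this would run them back to back; the paths below are assumptions:

import subprocess

XNORMAL = r"C:\Program Files\xNormal\x64\xNormal.exe"  # assumed install path
SETTINGS = [
    r"D:\project\bakes\statue_body_ao.xml",   # one settings file per mesh,
    r"D:\project\bakes\statue_head_ao.xml",   # each saved out of the xNormal GUI
    r"D:\project\bakes\statue_sword_ao.xml",
]

for xml in SETTINGS:
    print("baking", xml)
    subprocess.run([XNORMAL, xml], check=True)  # bake one mesh at a time, in isolation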
As I was running short of time for alpha, the sword only had a diffuse map applied to it. Since then I have created normal, AO and cavity maps and applied them. I have also increased the character's texture size to 2048 x 2048; for alpha I baked at 1024 x 1024 to keep baking times down. 2048 x 2048 gives more texture space and therefore more detail. I would be quite happy leaving it at 1024, as the quality is still high at that size, but since this is for a cinematic I am going with 2048: it increases the detail without being excessive the way a 4096 texture would be. The sword texture is 512 x 512, though I recognise that for game purposes 256 x 256 could easily be acceptable.


Sword diffuse with AO and Cavity maps being applied in Photoshop, at 512 x 512 texture size.
          The final update for alpha I will mention is the set of problems I had with the rig. The rig had three main issues, two of which still persist. The first was that when I imported the rig and animations together as a single .fbx file, the skeleton and most of the mesh moved as intended, but one part, the right arm, moved erratically even though the right arm of the skeleton was moving correctly. After some trial and error I solved it by importing the skeleton and mesh as one .fbx file in the rigging t-pose, with no animation baked onto it at all, then importing the animation from inside UDK as a separate .fbx and applying it to the skeleton in the engine. This corrected the issue. As a side note, with all the problems I've been having with the rig, I've found my workflow needs to be: animate the rig using control curves, bake the animation to the bones, delete everything except the mesh and the single skeleton joint chain, clean the scene and then export the animation.
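That workflow can be scripted in Maya so I don't miss a step before each export. A minimal sketch along those lines, with hypothetical node names, paths and frame range:

import maya.cmds as cmds

ROOT_JOINT = "statue_root_jnt"   # assumption: root of the single joint chain
MESH = "statue_lowpoly_game"     # assumption: the skinned in-engine mesh
START, END = 1, 450              # assumption: cinematic frame range

# 1. Bake the control-curve animation down onto the skeleton itself.
joints = [ROOT_JOINT] + (cmds.listRelatives(ROOT_JOINT, allDescendents=True, type="joint") or [])
cmds.bakeResults(joints, simulation=True, time=(START, END))

# 2. Remove the constraints left over from the control rig so only keys drive the joints.
constraints = cmds.ls(type="constraint")
if constraints:
    cmds.delete(constraints)

# 3. Export just the mesh and skeleton as .fbx for UDK.
cmds.select(joints + [MESH], replace=True)
cmds.loadPlugin("fbxmaya", quiet=True)
cmds.file("D:/project/export/statue_anim.fbx", force=True,
          exportSelected=True, type="FBX export", options="v=0;")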

The second issue, which still persists, is the inability to use the master controller on the rig. I would normally translate this through space and then apply walking animations to the legs timed to the master controller's speed; it's a great way to block out timing and spacing before layering in the walking motion. However, because this rig was built for game use (where the character stays in one spot regardless of the animation sequence and is moved through world space by the engine), the master controller applies no translation or rotation values to the skeleton root. I therefore have to leave the master controller where it is and move each controller individually, keeping them in time with one another. It's not a big issue, but it means I cannot 'zero out' the controllers to start fresh, because that would reset them to their default positions back at the master controller. It makes this rig inefficient for cinematic animation, and in future I will build cinematic rigs whose master controllers do apply translation values to the skeleton.
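For those future rigs the fix is simple enough: let the master controller actually drive the skeleton root, for example through a parent constraint, so translating the master in world space carries the joints with it. A minimal sketch with hypothetical node names:

import maya.cmds as cmds

MASTER_CTRL = "master_ctrl"     # assumption: top-level control curve of the rig
ROOT_JOINT = "statue_root_jnt"  # assumption: skeleton root joint

# The parent constraint passes the master's translation and rotation to the root,
# keeping whatever offset the root already has, so the whole skeleton follows it.
cmds.parentConstraint(MASTER_CTRL, ROOT_JOINT, maintainOffset=True)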

The final issue is that I had to alter the animations I wanted to do because the rig wouldn't let me control the sword the way I had intended. It doesn't allow me to leave the sword in place while continuing to animate the hand that was holding it as a separate movement. I found the usual solution, 'space switching', but for an unknown reason I cannot get it to work with my rig. Space switching lets the sword's translation be controlled by two or more different points (e.g. the hand and the floor) through interchangeable parent constraints. As it isn't imperative for this rig, I weighed up my options and decided not to waste time working out why it failed; instead I changed my animation slightly so that the character no longer picks the sword up and starts off with it already in its hand.
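For reference, this is roughly what the space switch would have looked like had the rig cooperated: one parent constraint with two targets, with its weights keyed so control of the sword hands over from the hand to a world-space locator. Node names are hypothetical and the frame numbers are just an example:

import maya.cmds as cmds

SWORD_CTRL = "sword_ctrl"   # assumption: the sword's control curve
HAND_CTRL = "hand_R_ctrl"   # assumption: the right hand control
WORLD_LOC = cmds.spaceLocator(name="sword_world_space")[0]

# One constraint, two possible parents for the sword control.
con = cmds.parentConstraint(HAND_CTRL, WORLD_LOC, SWORD_CTRL, maintainOffset=True)[0]
hand_w, world_w = cmds.parentConstraint(con, query=True, weightAliasList=True)

# Up to frame 10 the hand owns the sword; from frame 11 the world does,
# so the sword stays put while the hand keeps animating.
cmds.setKeyframe(con, attribute=hand_w, value=1, time=10)
cmds.setKeyframe(con, attribute=world_w, value=0, time=10)
cmds.setKeyframe(con, attribute=hand_w, value=0, time=11)
cmds.setKeyframe(con, attribute=world_w, value=1, time=11)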
