Unity Viewmodels

We have some nasty hacks in Rust. One of the nastiest, most intrusive hacks is the viewmodel system.

The Usual Way

The common sense way to render a viewmodel would be the way games have pretty much always rendered viewmodels: right before rendering it, clear the depth buffer, then render.

You clear the depth buffer so that your viewmodel doesn’t poke through stuff that you’re too close to.

The Unity Way

You can do this in Unity by using a second camera. You set the layer of the viewmodel to a layer that only the second camera renders. You can change the FOV and make it clear the depth. You’re a happy chappy.
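
A minimal sketch of that setup (the layer name, FOV and clip planes are assumptions, not what Rust actually uses):

    using UnityEngine;

    // Sketch of the two-camera approach: a child camera that renders only the
    // "Viewmodel" layer, clears depth only, and gets its own FOV and clip planes.
    public class ViewmodelCamera : MonoBehaviour
    {
        void Start()
        {
            Camera main = Camera.main;

            GameObject go = new GameObject("Viewmodel Camera");
            go.transform.parent = main.transform;
            go.transform.localPosition = Vector3.zero;
            go.transform.localRotation = Quaternion.identity;

            Camera vm = go.AddComponent<Camera>();
            vm.clearFlags = CameraClearFlags.Depth;                   // keep the scene colour, clear only depth
            vm.cullingMask = 1 << LayerMask.NameToLayer("Viewmodel"); // render only the viewmodel layer
            vm.depth = main.depth + 1;                                // draw after the main camera
            vm.fieldOfView = 50f;                                     // the viewmodel gets its own FOV
            vm.nearClipPlane = 0.01f;
            vm.farClipPlane = 5f;

            // The main camera shouldn't render the viewmodel layer itself.
            main.cullingMask &= ~(1 << LayerMask.NameToLayer("Viewmodel"));
        }
    }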

This works perfectly apart from one thing: the viewmodel isn’t affected by the first camera’s lights, and it doesn’t receive shadows.

The Alternative Unity Way

People on the Unity forums suggest just parenting the viewmodel to the main camera, and moving it back when you get too close to walls. This works, I guess, though I’m not convinced it makes much sense. And you can’t change the FOV.
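
For reference, that suggestion boils down to something like this (the probe distance, pull-back amount and smoothing are invented for illustration):

    using UnityEngine;

    // Rough sketch of the "parent it to the camera and pull it back near walls" idea.
    // Attach to the camera; 'viewmodel' is a child transform holding the gun.
    public class ViewmodelPushback : MonoBehaviour
    {
        public Transform viewmodel;
        public float probeDistance = 0.6f;  // how far ahead to check for walls
        public float pullback = 0.3f;       // how far to slide the model back
        public LayerMask worldMask = ~0;

        Vector3 restPosition;

        void Start()
        {
            restPosition = viewmodel.localPosition;
        }

        void LateUpdate()
        {
            RaycastHit hit;
            bool nearWall = Physics.Raycast(transform.position, transform.forward, out hit, probeDistance, worldMask);

            // Slide the model back along its local z axis when the camera is close to a wall.
            Vector3 target = nearWall
                ? restPosition - Vector3.forward * pullback * (1f - hit.distance / probeDistance)
                : restPosition;

            viewmodel.localPosition = Vector3.Lerp(viewmodel.localPosition, target, Time.deltaTime * 10f);
        }
    }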

Plus there are crazy floating point inaccuracy issues with meshes in Unity if you stray far away from (0,0,0) – which we do in Rust.

The Rust Way

So here’s what Rust does.

  • Custom Shaders for all viewmodel meshes
    • Use only the projection matrix with modified znear/zfar (custom projection, no inaccuracy issues; a rough C# sketch of this follows the list)
    • Pass 1 – clear the depth buffer (renders the mesh, but only clears the depth)
    • Pass 2 – render (renders the mesh as normal, depth isn’t an issue)
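
The shader side of this is Rust’s own code, but the “use only the projection matrix” bullet presumably needs something on the C# side to build that matrix. A guess at what that could look like, with the global property name and the near/far values invented for illustration:

    using UnityEngine;

    // Guess at the C# side of the custom projection: build a projection matrix with a
    // viewmodel-specific FOV and znear/zfar each frame and hand it to the viewmodel
    // shaders. "_ViewmodelProjection" and the numbers are assumptions, not Rust's values.
    [RequireComponent(typeof(Camera))]
    public class ViewmodelProjection : MonoBehaviour
    {
        public float viewmodelFov = 50f;
        public float zNear = 0.01f;
        public float zFar = 5f;

        void OnPreRender()
        {
            Camera cam = GetComponent<Camera>();
            Matrix4x4 proj = Matrix4x4.Perspective(viewmodelFov, cam.aspect, zNear, zFar);
            // Depending on platform/API conventions the matrix may need converting
            // before a shader can use it directly.
            Shader.SetGlobalMatrix("_ViewmodelProjection", proj);
        }
    }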

This all solves the issues we were seeing. We have a viewmodel that doesn’t clip into stuff and receives shadows properly from the world, we can change the FOV, and we have no more floating point inaccuracy issues.

But this brings a new issue. Any particle effects or whatever on the viewmodel don’t really work anymore, because the viewmodel is being projected differently, and has been written to the depth buffer differently. So I guess we need to write a whole new set of shaders for them to use too.

This works, but it hurts my heart. I hate having to have custom shaders for viewmodels that do the exact same job as the normal shaders but with a couple of hacks. It’s one of those areas of Unity where you are forced to make compromises. Sometimes this turns out to be kind of a great thing, because you end up finding new solutions to these problems that enhance the gameplay and give you a USP.

Other times it turns into a huge viral mess. Everything you try to add needs more and more hacks. You get to a point where you think fuck it, let’s just use a second camera, render all the viewmodel effects in that – and manually work out the light of the viewmodel based on ray traces – like Half-Life 1 did.
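
For what it’s worth, that last part would mean doing the lighting yourself: gather whatever lights are nearby, ray trace to check they aren’t blocked, and feed the result to the viewmodel as a flat tint. A toy sketch, with every name and number in it made up:

    using UnityEngine;

    // Toy sketch of HL1-style manual viewmodel lighting: approximate the light reaching
    // the player and push it to the viewmodel material as a flat tint. The property name,
    // the falloff and the FindObjectsOfType-per-frame laziness are all just for illustration.
    public class ViewmodelLightProbe : MonoBehaviour
    {
        public Renderer viewmodelRenderer;

        void Update()
        {
            Color total = RenderSettings.ambientLight;

            foreach (Light light in Object.FindObjectsOfType<Light>())
            {
                if (light.type != LightType.Point)
                    continue;

                Vector3 toLight = light.transform.position - transform.position;
                if (toLight.magnitude > light.range)
                    continue;

                // The "ray trace" part: only count the light if nothing blocks it.
                if (Physics.Raycast(transform.position, toLight.normalized, toLight.magnitude))
                    continue;

                float falloff = 1f - toLight.magnitude / light.range;
                total += light.color * light.intensity * falloff;
            }

            viewmodelRenderer.material.SetColor("_ViewmodelLight", total);
        }
    }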

How this could work

Anyone do this any different? Anyone found an elegant solution to it?

26 thoughts on “Unity Viewmodels”

    1. I think it’s a font family called FontAwesome; it looks very similar, anyway. As for the Unity issues, Garry, I don’t have a clue in the slightest; I haven’t used Unity much.

    2. As a terrible webdesigner-to-be, I saw Font-Awesome, Open Sans and a font called Titillium, though I’m not sure that’s ever used in there.

  1. Interesting post, I’ve recently encountered the main issue you’re talking about when using the standard 2-camera setup, and wondered if there was an easy solution. Guess not.

  2. Can’t you instantiate particle effects and such and put them in the main camera’s layer? There are some helper methods in the Unity transform object that will help you adjust the position given the different fov. There is probably something I’m overlooking since it seems like you would have already tried that.

  3. Yup.
    In my game I pretty much immediately got to the “screw it, let’s use a secondary camera and do a raycast for shadows” stage. It worked in Unreal Tournament 3.

  4. At the beginning of the frame you can render the character only (with a simple replacement shader) into destination alpha or stencil, then render your scene; at the end of the opaque pass you can clear the depth buffer where you marked it in the beginning, and then render the character with your full shaders.

    1. Hm, some clarification on this would be nice.
      Isn’t he already clearing the depth buffer? What makes this different (aside from allowing you to use any shader for the first person mesh)?
      Also, how do you selectively clear the depth buffer in Unity?

    2. How does that help? I legitimately want to know, because I find all of this very confusing.
      Curious, if I render a full-screen quad with Graphics.Blit, does that have the effect of clearing the depth buffer, or does it have the effect of writing the highest value to the depth buffer (which would make anything rendered afterward invisible)?

    3. Masaaki: Ah, sorry, I’ll try to give more details:
      Blit shaders also tend to use ZTest Always (and disabled fog, and Cull Off), so they overwrite anything that’s already there.

      “Clearing the depth buffer” in this case means overwriting the depth values with the depth values of the full-screen quad. You can also set them to any value you want manually, by either changing the vertex position’s z values, or outputting a value through the DEPTH semantic from the fragment shader.

      In one case I had to clear the depth buffer to the near clip plane (0.0f) instead of the far clip plane (1.0f), which can use the same approach.

    4. Oh I think I get it.
      Unity produces a screen-space texture for the shadowmap, which the shaders use to apply the results in the lighting calculations (they sample the shadowmap in screen space).
      I think this will work to my advantage, actually. If my gun pokes through a wall, let’s say, on the dark side of the wall, the whole gun is dark (even if a bit of the gun pokes through the other side), and if it pokes through the light side of the wall, the gun isn’t shadowed (even if the gun pokes through the wall and should *technically* be receiving shadows).
      Interesting…

  5. What if you used two materials on viewmodels instead of two passes? I.e. material one has a custom shader that ONLY clears depth, and material two is the normal material that you would use for a model.
    As far as I understand, having two materials on a mesh in Unity renders it two times, so this would probably work, and you get away with only one custom shader for all your needs.

    Not sure about particles, though – I don’t think you can have multiple materials on particles…
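
    A minimal sketch of that idea (the material names are placeholders):

      using UnityEngine;

      // Sketch of the two-material suggestion: the first material only clears depth,
      // the second draws the model normally. Unity draws the extra material as an
      // additional pass over the same mesh.
      public class ViewmodelMaterials : MonoBehaviour
      {
          public Material depthClearMaterial; // custom shader that only clears depth
          public Material normalMaterial;     // whatever you'd normally use

          void Start()
          {
              GetComponent<Renderer>().materials = new Material[] { depthClearMaterial, normalMaterial };
          }
      }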

  6. The floating point errors stem from the fact that your viewmodel is a skinned mesh renderer, right? If that’s the case, and I’m understanding the problem correctly, you could do the following:
    - Place your skinned viewmodel at the origin on a layer that doesn’t get rendered
    - Parent a “dummy” mesh filter + renderer with the needed materials to your view camera (this would be like in the “Alternate Unity Way” you mentioned above)
    - Use a script to, every frame, do something like: skinnedMeshRenderer.BakeMesh(dummyMeshFilter.mesh);
    - For attachments like particles and whatnot you would just apply the offset from the origin mesh, but relative to the dummy mesh (fortunately Unity doesn’t seem to have any issues with rendering particles far from origin)

    I’m not sure how performant the BakeMesh function is… you’d of course have to test it to make sure it’s not eating up resources.

    Also, you’d have to either move the model back when the view camera gets too close to something, or parent a second camera to your main camera dedicated to just rendering the view model (like how you said in “The Unity Way”, except it would receive lighting because the second camera is parented to the view camera).
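
    A minimal sketch of that BakeMesh idea (the field wiring is assumed):

      using UnityEngine;

      // Sketch of the suggestion above: keep the skinned viewmodel near the origin on a
      // hidden layer, and every frame bake its current pose into a dummy MeshFilter
      // that lives under the view camera.
      public class ViewmodelBaker : MonoBehaviour
      {
          public SkinnedMeshRenderer sourceSkinned; // the real viewmodel, parked at the origin
          public MeshFilter dummyFilter;            // parented to the view camera

          Mesh baked;

          void Start()
          {
              baked = new Mesh();
              dummyFilter.mesh = baked;
          }

          void LateUpdate()
          {
              // BakeMesh snapshots the current skinned pose into a plain mesh.
              sourceSkinned.BakeMesh(baked);
          }
      }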

  7. The “usual way” seems like quite a hack to start with. My (probably naive) suggestion would be to prevent intersection in the first place, for example with a collider.

    And it seems to me that in such a large world, you’ll need to deal with the limitations of floating point precision anyway for other objects, by keeping the camera near the origin and moving the world around.

    1. Yeah that is pretty naive ;)
      The problem with that approach or any similar approaches is that either:
      1.) The gun shifts backwards when you approach a wall. It looks crappy.
      – OR –
      2.) You don’t let the player get that close to the wall. It also looks crappy because the player expects to be able to get much closer to the wall (based on expectations from other games)

      The “usual way” isn’t such a big hack – it’s really just a simplified version of how it’s done in other games (clear depth buffer, render view model). This would be the absolute perfect approach – if Unity didn’t clear the shadow map per camera.

    2. The gravity gun suddenly becoming tiny when you get close to a wall looked really crappy to me too. Besides, it’s really confusing in (stereoscopic) 3D.

      I guess the original problem is that first person view models are much farther forward than they would be in real life, because your vertical field of view is much smaller than in real life.

      I don’t know if Mirror’s Edge applied the same technique (I’m under the impression they didn’t), but one thing they definitely did was animating the hands based on what’s in front of you.
