
We have some nasty hacks in Rust. One of the nastiest, most intrusive hacks is the viewmodel system.

The Usual Way

The common-sense way to render a viewmodel is the way games have pretty much always done it: right before drawing it, clear the depth buffer, then render.

You clear the depth buffer so that your viewmodel doesn’t poke through stuff that you’re too close to.
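
Just for illustration, that clear-then-draw sequence maps pretty directly onto a Unity CommandBuffer these days (an API that postdates this post). A minimal sketch, assuming the viewmodel lives on a layer the camera’s culling mask excludes, so it only gets drawn by the buffer:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: the classic "clear depth, then draw the viewmodel" trick,
// expressed as a CommandBuffer hooked into the main camera.
public class ViewmodelDepthClear : MonoBehaviour
{
    public Renderer viewmodelRenderer; // assumed: assigned in the inspector

    void OnEnable()
    {
        var cb = new CommandBuffer { name = "Viewmodel" };

        // Clear depth only (keep the colour the scene just rendered).
        cb.ClearRenderTarget(true, false, Color.black);

        // Draw the viewmodel on top; nothing in the scene can occlude it now.
        cb.DrawRenderer(viewmodelRenderer, viewmodelRenderer.sharedMaterial);

        // Run after the skybox, before transparents.
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeForwardAlpha, cb);
    }
}
```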


The Unity Way

You can do this in Unity with a second camera. You put the viewmodel on a layer that only the second camera renders, give that camera its own FOV, and set it to clear only the depth buffer. You’re a happy chappy.
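
Concretely, the setup is something like this. A minimal sketch: the "Viewmodel" layer name and the FOV value are placeholders:

```csharp
using UnityEngine;

// Sketch: second camera that renders only the viewmodel layer,
// clears depth, and has its own FOV.
public class ViewmodelCameraSetup : MonoBehaviour
{
    void Start()
    {
        int viewmodelLayer = LayerMask.NameToLayer("Viewmodel");

        // The main camera renders everything except the viewmodel.
        Camera main = Camera.main;
        main.cullingMask &= ~(1 << viewmodelLayer);

        // The second camera renders only the viewmodel, on top.
        var go = new GameObject("Viewmodel Camera");
        go.transform.SetParent(main.transform, false);

        Camera vm = go.AddComponent<Camera>();
        vm.clearFlags = CameraClearFlags.Depth; // clear depth, keep colour
        vm.cullingMask = 1 << viewmodelLayer;
        vm.depth = main.depth + 1;              // draw after the main camera
        vm.fieldOfView = 55f;                   // independent viewmodel FOV
    }
}
```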

This works perfectly apart from one thing: the viewmodel isn’t affected by the lights of the first camera, and it doesn’t receive shadows.

The Alternative Unity Way

People on the Unity forums suggest just parenting the viewmodel to the main camera and moving it back when you get too close to walls. This works, I guess, though I’m not convinced it makes much sense. And you can’t change the FOV.
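
For what it’s worth, that suggestion boils down to something like this. A rough sketch; the rest distance and probe radius are made-up values:

```csharp
using UnityEngine;

// Sketch: viewmodel parented to the camera, pulled back along the
// view direction when world geometry gets too close.
public class ViewmodelPullback : MonoBehaviour
{
    public Transform viewmodel;        // assumed: a child of this camera
    public float restDistance = 0.5f;  // how far in front it normally sits
    public float probeRadius = 0.2f;

    void LateUpdate()
    {
        float distance = restDistance;

        // If something solid sits inside the viewmodel's usual space,
        // shorten the distance so the mesh doesn't poke through it.
        RaycastHit hit;
        if (Physics.SphereCast(transform.position, probeRadius,
                               transform.forward, out hit, restDistance))
        {
            distance = hit.distance;
        }

        viewmodel.localPosition = Vector3.forward * distance;
    }
}
```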

Plus there are crazy floating-point inaccuracy issues with meshes in Unity if you stray far from (0,0,0) – which we do in Rust.


The Rust Way

So here’s what Rust does.

  • Custom shaders for all viewmodel meshes (a rough sketch of the C# side follows this list)

    • Use only the projection matrix, with a modified znear/zfar (custom projection, no inaccuracy issues)
    • Pass 1 – clear the depth buffer (renders the mesh, but only clears the depth)
    • Pass 2 – render (renders the mesh as normal, depth isn’t an issue)
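
The shader code itself isn’t shown here, but the C# side of feeding a custom projection like that could look like the sketch below. The global name `_ViewmodelProjection` and the near/far values are my assumptions; presumably the precision problem goes away because the viewmodel’s vertices are expressed relative to the camera rather than in world space:

```csharp
using UnityEngine;

// Sketch: build a projection matrix with the viewmodel's own FOV and a
// tight znear/zfar each frame, and expose it to the viewmodel shaders
// as a global. The shader would use it in place of UNITY_MATRIX_P; the
// two-pass depth-clear trick lives in the shader itself.
[RequireComponent(typeof(Camera))]
public class ViewmodelProjection : MonoBehaviour
{
    public float fov = 55f;      // viewmodel FOV, independent of world FOV
    public float zNear = 0.01f;  // tight range = good depth precision
    public float zFar = 5f;

    void OnPreRender()
    {
        Camera cam = GetComponent<Camera>();
        Matrix4x4 proj = Matrix4x4.Perspective(fov, cam.aspect, zNear, zFar);

        // Convert to the platform's actual GPU depth/clip conventions.
        proj = GL.GetGPUProjectionMatrix(proj, false);

        Shader.SetGlobalMatrix("_ViewmodelProjection", proj);
    }
}
```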

This solves all the issues we were seeing. We have a viewmodel that doesn’t clip into stuff and receives shadows properly from the world, we can change the FOV, and the floating-point inaccuracy issues are gone.

But this brings a new issue. Any particle effects or whatever on the viewmodel don’t really work anymore, because the viewmodel is being projected differently and has been written to the depth buffer differently. So I guess we need to write a whole new set of shaders for those to use too.

This works, but it hurts my heart. I hate having to maintain custom shaders for viewmodels that do the exact same job as the normal shaders but with a couple of hacks. It’s one of those areas of Unity where you’re forced to make compromises. Sometimes that turns out to be kind of a great thing, because you end up finding new solutions to these problems that enhance the gameplay and give you a USP.

Other times it turns into a huge viral mess. Everything you try to add needs more and more hacks. You get to a point where you think fuck it, let’s just use a second camera, render all the viewmodel effects in that, and manually work out the viewmodel’s lighting based on ray traces – like Half-Life 1 did.

How this could work
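
Here’s a guess at how the Half-Life 1 style lighting could work in Unity: trace a ray down, read the baked lightmap under the player, and tint the viewmodel with the result. A sketch with several assumptions: readable lightmap textures, a MeshCollider on the ground (RaycastHit.lightmapCoord needs one), and lightmap encoding is ignored for brevity:

```csharp
using UnityEngine;

// Sketch: HL1-style viewmodel lighting. Sample the baked lightmap
// under the player and use it as a flat light level on the viewmodel.
public class ViewmodelLightSample : MonoBehaviour
{
    public Renderer viewmodelRenderer;

    void Update()
    {
        RaycastHit hit;
        if (!Physics.Raycast(transform.position, Vector3.down, out hit, 100f))
            return;

        Renderer ground = hit.collider.GetComponent<Renderer>();
        if (ground == null || ground.lightmapIndex < 0 ||
            ground.lightmapIndex >= LightmapSettings.lightmaps.Length)
            return;

        // Read the baked light colour at the point under the player.
        // (The lightmap texture's property name varies across Unity versions.)
        Texture2D lightmap = LightmapSettings.lightmaps[ground.lightmapIndex].lightmapColor;
        Color light = lightmap.GetPixelBilinear(hit.lightmapCoord.x, hit.lightmapCoord.y);

        viewmodelRenderer.material.SetColor("_Color", light);
    }
}
```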

Anyone do this any different? Anyone found an elegant solution to it?

