Monday, April 7, 2014

We have some nasty hacks in Rust. One of the nastiest, most intrusive hacks is the viewmodel system.
The Usual Way

The common-sense way to render a viewmodel is how games have pretty much always done it: right before drawing it, clear the depth buffer, then render. You clear the depth buffer so that your viewmodel doesn't poke through stuff you're standing too close to.
The Unity Way

You can do this in Unity by using a second camera. You set the viewmodel's layer to one that only the second camera renders, change the FOV, and make the camera clear only the depth buffer. You're a happy chappy. This works perfectly apart from one thing: the viewmodel isn't affected by the first camera's lights, and it doesn't receive shadows.
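A minimal sketch of that second-camera setup, assuming a layer named "Viewmodel" and an arbitrary FOV - neither is anything Rust actually uses:

```csharp
using UnityEngine;

// Attach to the second (viewmodel) camera.
public class ViewmodelCamera : MonoBehaviour
{
    void Start()
    {
        int viewmodelLayer = LayerMask.NameToLayer("Viewmodel");

        Camera cam = GetComponent<Camera>();
        cam.clearFlags = CameraClearFlags.Depth;   // clear depth only, keep the world's colour
        cam.cullingMask = 1 << viewmodelLayer;     // render nothing but the viewmodel layer
        cam.fieldOfView = 50f;                     // independent of the main camera's FOV
        cam.depth = Camera.main.depth + 1;         // draw after the main camera

        // Make sure the main camera skips the viewmodel layer.
        Camera.main.cullingMask &= ~(1 << viewmodelLayer);
    }
}
```

Because the second camera clears depth before drawing, the viewmodel never fails the depth test against nearby walls - but it also knows nothing about the first camera's lights and shadows, which is exactly the problem described above.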
The Alternative Unity Way

People on the Unity forums suggest just parenting the viewmodel to the main camera and moving it back when you get too close to walls. This works, I guess; I'm not convinced it makes much sense though. And you can't change the FOV. Plus there are crazy floating-point inaccuracy issues with meshes in Unity if you stray far away from (0,0,0) - which we do in Rust.
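For reference, the forum approach roughly looks like this - the field names and the 0.5m distance are illustrative assumptions, not anything from a real implementation:

```csharp
using UnityEngine;

// Attach to the main camera; viewmodel is a child transform.
public class ParentedViewmodel : MonoBehaviour
{
    public Transform viewmodel;
    public float normalDistance = 0.5f;

    void LateUpdate()
    {
        float distance = normalDistance;

        // If a wall is closer than the viewmodel would sit, tuck it in.
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit, normalDistance))
            distance = hit.distance;

        viewmodel.localPosition = new Vector3(0f, 0f, distance);
    }
}
```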
The Rust Way

So here's what Rust does:
- Custom Shaders for all viewmodel meshes
- Use only the projection matrix, with a modified znear/zfar (a custom projection, so no inaccuracy issues)
- Pass 1 - clear the depth buffer (renders the mesh, but only clears the depth)
- Pass 2 - render (renders the mesh as normal, depth isn't an issue)
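A sketch of how the two shader passes could be structured, assuming Unity-era ShaderLab with a conventional (non-reversed) depth buffer. The shader name is a placeholder, the second pass is unlit for brevity (the real viewmodel shaders would be lit), and I use the standard MVP transform here rather than the custom projection-only trick described above:

```shaderlab
Shader "Hypothetical/Viewmodel"
{
    SubShader
    {
        // Pass 1: draw the mesh but write only depth, pushing every
        // covered pixel to the far plane. This "clears" the depth
        // buffer exactly where the viewmodel is about to draw.
        Pass
        {
            ZTest Always
            ZWrite On
            ColorMask 0

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            float4 vert (float4 vertex : POSITION) : SV_POSITION
            {
                float4 pos = mul(UNITY_MATRIX_MVP, vertex);
                pos.z = pos.w; // z/w = 1: force depth to the far plane
                return pos;
            }

            fixed4 frag () : SV_Target { return 0; } // colour writes are masked anyway
            ENDCG
        }

        // Pass 2: render the mesh normally. The depth under its
        // footprint was just cleared, so it can't fail the depth
        // test against nearby world geometry.
        Pass
        {
            ZTest LEqual
            ZWrite On

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            float4 vert (float4 vertex : POSITION) : SV_POSITION
            {
                return mul(UNITY_MATRIX_MVP, vertex);
            }

            fixed4 frag () : SV_Target { return fixed4(1, 1, 1, 1); } // placeholder shading
            ENDCG
        }
    }
}
```

Because both passes live in the one shader on the one mesh, the viewmodel stays in the main camera's render pass and still receives its lights and shadows - which is the whole point over the second-camera approach.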
How this could work

Anyone do this any differently? Anyone found a more elegant solution?