In some games, even with the video quality set to high, I see lights shining through walls.
Some examples from games I've played recently: Borderlands 2 (missile explosions) and Call of Cthulhu (lanterns; that game even uses Unreal Engine 4).
Is this a bug, or is it done for performance reasons?
Borderlands 2 example: https://youtu.be/9bV83qA6_mU?t=419 (it happens several times, but quickly).
Bonus question: in the latter case, is there any way to fix this cheaply with ray tracing? I assume expensive RT would certainly solve it, but I'd like to know whether it can also be used in a "cheap" form to address this specific problem.
Answers:
Expanding on TomTsagk's correct answer, I thought it might help to describe a bit more about why games work like this.
Light in games doesn't really "travel" from the source, to the surface, to the camera, getting obstructed along the way.
To figure out how bright to draw each pixel of a surface based on a given light, we use (or approximate) a math formula that uses the facing direction of the surface and the direction from this point on the surface to the light source. That's it, just the direction it's shining from — we don't typically cast a ray to check if the light actually reaches this pixel, because doing that for every pixel on the screen and checking the ray against all the detailed geometry in the scene is usually still too expensive for realtime games.
So, by default, no game lights cast shadows. The direction to a light stays the same even if there's a shadowcaster in the way, so the math gives us the same brightness value.
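To make that concrete, here is a minimal C++ sketch of the kind of direction-only brightness calculation described above (the names, such as `lambertBrightness`, are made up for illustration, not taken from any engine). Notice that nothing in it knows whether a wall sits between the surface and the light.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Classic Lambert diffuse term: brightness depends only on the surface's
// facing direction and the direction toward the light.
// There is no visibility/occlusion check anywhere in here.
float lambertBrightness(Vec3 surfacePos, Vec3 surfaceNormal,
                        Vec3 lightPos, float lightIntensity) {
    Vec3 toLight = normalize({ lightPos.x - surfacePos.x,
                               lightPos.y - surfacePos.y,
                               lightPos.z - surfacePos.z });
    float nDotL = dot(normalize(surfaceNormal), toLight);
    return lightIntensity * std::fmax(nDotL, 0.0f);
}

int main() {
    // A wall between this point and the light would change nothing below.
    Vec3 p { 0, 0, 0 }, n { 0, 1, 0 }, light { 2, 3, 0 };
    std::printf("brightness = %f\n", lambertBrightness(p, n, light, 1.0f));
}
```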
If we want to simulate shadows, we have to do that separately. One common way is with what's called a Shadow Map. In this version, before we shade our scene, we first render the scene from the perspective of each light, as though that light were a camera, storing the depth of each pixel it sees into an off-screen texture.
Then, when we shade our scene, we can compare the mathematical distance from that pixel to the light against the depth we recorded at the corresponding pixel of the shadow map. If the shadow map's depth is smaller, it means there's another surface between this point and the light, and we draw this pixel in shadow instead.
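Here is a sketch of that depth comparison in C++ (the structure and names like `inShadow` and `bias` are assumptions for illustration, not code from any particular engine):

```cpp
#include <cstdio>
#include <vector>

struct ShadowMap {
    int width, height;
    std::vector<float> depth;  // depth of the closest surface seen from the light

    float sample(int x, int y) const { return depth[y * width + x]; }
};

// Returns true if this pixel should be drawn in shadow.
// distanceToLight: how far the surface point is from the light.
// (u, v): the shadow-map texel this point lands on when viewed from the light.
bool inShadow(const ShadowMap& map, int u, int v, float distanceToLight) {
    const float bias = 0.005f;  // small offset to avoid "shadow acne" self-shadowing
    float closestRecorded = map.sample(u, v);
    // Something nearer to the light occupies this texel, so we are behind it.
    return distanceToLight - bias > closestRecorded;
}

int main() {
    ShadowMap map { 1, 1, { 5.0f } };  // the light saw a surface 5 units away here
    std::printf("shadowed: %d\n", inShadow(map, 0, 0, 8.0f));  // our point is 8 away
}
```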
There are lots of cool techniques for making these map-based shadows look better while reducing artifacts/aliasing, but I'll gloss over those for now. Suffice it to say they generally aren't free either.
Because this means rendering (up to) the entire scene again from the perspective of each light source (as many as six times for a point light that shines in all directions: north, south, east, west, up, and down), and re-rendering those shadow maps any time anything moves, it can get very expensive.
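A rough C++ sketch of where that cost comes from, assuming the common cube-map approach for point-light shadows (the function names here, e.g. `renderSceneDepthFromLight`, are placeholders rather than a real engine API):

```cpp
#include <cstdio>

enum class CubeFace { PosX, NegX, PosY, NegY, PosZ, NegZ };

// Placeholder for a full depth-only render of the scene as seen from the
// light, facing one cube-map direction (hypothetical, not a real API call).
void renderSceneDepthFromLight(CubeFace face) {
    std::printf("depth pass for cube face %d\n", static_cast<int>(face));
}

void updatePointLightShadowMap() {
    // Up to six extra scene renders per point light, redone whenever the
    // light or any shadow caster moves.
    const CubeFace faces[] = { CubeFace::PosX, CubeFace::NegX,
                               CubeFace::PosY, CubeFace::NegY,
                               CubeFace::PosZ, CubeFace::NegZ };
    for (CubeFace face : faces) {
        renderSceneDepthFromLight(face);
    }
}

int main() { updatePointLightShadowMap(); }
```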
So games will often focus their rendering budget on the most important lights in the scene — like the directional sun light — to ensure they have good-looking shadows. Small, short-lived, minor lights like the flash of an explosion are often forgivable if they leak past occluders a little. Often this is more palatable to players than a hitch in the framerate due to a sudden increase in rendering cost from all the extra shadow map rendering and calculation. Especially if it's a busy action scene where fluidity matters more than pixel perfection.
Long story short, this happens for performance reasons.
When there's a light in the scene, by default it shines on all objects (obstructed or not), so the game would need extra calculations to work out which objects are actually reached by which lights.
This is easier to solve for static objects by using static, baked lighting, but that doesn't work for dynamic lights, like the explosions you noted.
For your bonus question, ray tracing and "cheap" don't belong in the same sentence. The only reason ray tracing hasn't been mainstream until now is performance. If all lights used ray tracing, this problem would be "solved", but at the expense of performance.