I’ve been asked how lighting is done in Arcane Worlds, so I decided to write this article.
It’s mostly technobabble, so I’m going to assume you have some coding/math skills to understand it.
There are no pictures. You’ve been warned.
When I first started with the prototype I tried computing “real” sky with multiple scattering. It had spectacular sunsets, but the day sky was rather bland and the night… um… was pitch black without moon or stars.
It was really hard to tune to get the colors I wanted, due to obscure physical parameters and issues with HDR mapping. Tying it to land lighting had issues too. And the huge sun I wanted is far from the small Earth-like sun that scattering computations expect.
So, I ditched it. Now I’m using an empirical sky model which is much easier to tune.
This model has three parts: vertical gradient, sun-axis gradient and circumsolar region.
Vertical gradient is the most important one – it’s used as a multiplier for the other two parts. It’s a zenith-horizon color gradient with exponential falloff, so it’s “thicker” at the horizon, simulating the longer light path through the atmosphere there.
where “mu” is the cosine of zenith-to-view angle, and coefficients are computed from zenith/horizon colors. Note: it’s mirrored using abs() at the horizon, and that 0.001 is to avoid division by zero.
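The original formula isn’t reproduced here, but a minimal sketch matching that description could look like this (the falloff constant k and the normalization step are my assumptions, not the actual tuned coefficients):

```python
import math

def vertical_gradient(mu, zenith, horizon, k=1.0):
    # atmospheric "thickness" grows like 1/cos(angle) toward the horizon;
    # abs() mirrors the gradient below the horizon,
    # and the 0.001 avoids division by zero at the horizon itself
    t = math.exp(-k / (abs(mu) + 0.001))
    t /= math.exp(-k / 1.001)  # normalize so t == 1 looking straight up
    return [h + (z - h) * t for z, h in zip(zenith, horizon)]

day_zenith, day_horizon = [0.2, 0.4, 0.9], [0.9, 0.7, 0.5]
overhead = vertical_gradient(1.0, day_zenith, day_horizon)    # zenith color
at_horizon = vertical_gradient(0.0, day_zenith, day_horizon)  # horizon color
```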
Sun-axis gradient simulates (among other things) the anisotropic nature of Rayleigh scattering which makes sky darker at viewing directions orthogonal to sun direction. It’s the main gradient to tune when making a sunset sky.
Formula is simple:
where “vs” is the cosine of the view-to-sun angle. As you can see, it interpolates three colors: solar, orthogonal and anti-solar.
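I don’t know the exact shader code, but a piecewise interpolation fitting that description might be sketched as (the hard split at vs = 0 is my simplification; the real formula may blend more smoothly):

```python
def sun_axis_gradient(vs, solar, ortho, antisolar):
    # vs = 1: looking at the sun; vs = 0: orthogonal; vs = -1: away from it
    if vs >= 0.0:
        return [o + (s - o) * vs for s, o in zip(solar, ortho)]
    return [o + (a - o) * -vs for a, o in zip(antisolar, ortho)]

solar, ortho, anti = [1.0, 0.8, 0.5], [0.4, 0.5, 0.7], [0.5, 0.4, 0.6]
toward_sun = sun_axis_gradient(1.0, solar, ortho, anti)  # solar color
sideways = sun_axis_gradient(0.0, solar, ortho, anti)    # orthogonal color
```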
Circumsolar region simulates Mie scattering of sunlight, producing sun disk and corona. It’s added to the sun-axis gradient, and multiplied (together with it) by vertical gradient to produce the final sky+sun color.
where “vsd” is the cosine of the view-to-sun angle, shifted by sun size and clamped to produce the sun disk, and “hf” is the “horizon factor” that cuts off the sun disk and corona below the horizon. Also, in the real shader I replaced acos() with an approximation; it’s written as acos() here for clarity.
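Here’s a hypothetical sketch of such a term (the disk/corona split, the corona_power exponent and the exact clamping are my guesses at the described behavior):

```python
import math

def circumsolar(vs, hf, sun_size, disk_color, corona_color, corona_power=64.0):
    vs = max(-1.0, min(1.0, vs))
    # shift the view-to-sun angle by the sun's angular radius and clamp:
    # vsd saturates at 1 anywhere inside the sun disk
    vsd = math.cos(max(0.0, math.acos(vs) - sun_size))
    disk = 1.0 if vsd >= 1.0 else 0.0
    corona = max(0.0, vsd) ** corona_power
    # hf is the horizon factor: 0 below the horizon kills disk and corona
    return [hf * (disk * d + corona * c)
            for d, c in zip(disk_color, corona_color)]

center = circumsolar(1.0, 1.0, 0.05, [1.0, 1.0, 1.0], [1.0, 0.8, 0.5])
below = circumsolar(1.0, 0.0, 0.05, [1.0, 1.0, 1.0], [1.0, 0.8, 0.5])
```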
The “mu” term is actually shifted and scaled a bit to lower the horizon line, to make it look like a “small planet” with noticeably curved horizon. The area below the horizon line is darkened a bit so it looks like a fogged/reflecting planet surface.
The fog color is computed like sky, but without circumsolar region (hf=0). Then it’s darkened a bit (just like the “planet” in the background below the horizon) and applied using the simple scalar fog density term.
The fog density formula is unrealistic, crafted to produce 100% fog just before the visible land surface ends. It also fades slower vertically, so the land below you is less fogged with altitude than it could be.
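As a rough illustration only (every constant and name below is made up, not taken from the game), such a density term could look like:

```python
def fog_density(distance, drop, view_range, vertical_discount=0.5):
    # drop = how far below the camera the fogged point is;
    # discounting it makes fog fade slower vertically
    effective = distance - vertical_discount * drop
    # saturates to 100% shortly before the end of the visible land
    return max(0.0, min(1.0, effective / (0.9 * view_range)))

near = fog_density(10.0, 0.0, 1000.0)           # almost no fog
far = fog_density(1000.0, 0.0, 1000.0)          # fully fogged before the edge
below_cam = fog_density(1000.0, 800.0, 1000.0)  # less fogged: seen from altitude
```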
An interesting effect: since land closer to the camera is visually closer to nadir (the opposite of zenith), the fog color effectively changes with distance, producing a nice gradient. It could be mistaken for computed scattering, but it’s actually fake.
The huge sun I wanted should cast soft shadows (large penumbra). Also, I want multiple and dynamic light sources: moons, suns, lightning strikes. Shadow maps aren’t good at any of that. So, I used aperture lighting.
I’d refer you to this article for details, but here’s the essential idea: for each point, approximate environment visibility with a circular “window” (aperture). The aperture depends on the geometry only, so it can be precomputed and then used with multiple lights. So, it’s a kind of directional ambient occlusion. The circular aperture can be specified with a vector and a scalar size, fitting it in 4 scalars, or even in 3 using tangent space. The range is low, so 8 bits per component is enough to store it.
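A sketch of the 4-scalar, 8-bit packing described above (the encoding scheme is assumed; the real format may differ, e.g. storing the vector in tangent space to drop to 3 scalars):

```python
def encode_aperture(direction, size):
    # direction components in [-1, 1], size in [0, 1] -> 4 bytes
    def to_byte(x, lo, hi):
        return max(0, min(255, round((x - lo) / (hi - lo) * 255.0)))
    dx, dy, dz = direction
    return bytes([to_byte(dx, -1.0, 1.0), to_byte(dy, -1.0, 1.0),
                  to_byte(dz, -1.0, 1.0), to_byte(size, 0.0, 1.0)])

def decode_aperture(packed):
    direction = [packed[i] / 255.0 * 2.0 - 1.0 for i in range(3)]
    return direction, packed[3] / 255.0

packed = encode_aperture([0.0, 0.6, 0.8], 0.3)
direction, size = decode_aperture(packed)  # round-trips to within ~1/255
```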
It’s easy to make soft shadows with aperture lighting – just use a large light source. A small light will produce harder but inaccurate shadows (still can be good enough). Ambient occlusion and sky light are easy to make too.
In Arcane Worlds, the apertures are computed on GPU, sampling a lot of heightmap points (actually, that heightmap is land+water now, i.e. top surface). It’s done 20 times per second at most (game logic / physics step).
The land is lit by sun (see paper link above for details and formulas) and sky.
The sun color is a constant in the shader. There’s the usual normal-to-light dot product, except I interpolate the light direction between the sun and aperture vectors, weighted by the sun/aperture areas, so that a small aperture receives light mostly from the aperture direction.
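The exact weighting isn’t specified above, so here’s one plausible sketch of that blend (the area-ratio weight w is my assumption):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def sun_diffuse(normal, sun_dir, ap_dir, sun_area, ap_area):
    # small aperture -> w near 0 -> light comes mostly from the aperture direction
    w = ap_area / (ap_area + sun_area)
    light = normalize([a + (s - a) * w for s, a in zip(sun_dir, ap_dir)])
    return max(0.0, sum(n * l for n, l in zip(normal, light)))

# fully open aperture pointing at the sun: reduces to the usual N.L
lit = sun_diffuse([0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0], 0.2, 1.0)
```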
The sky light is simple: compute sky color in the middle direction between surface normal and aperture vector (approximating diffuse lighting from environment), and scale it by aperture area.
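A sketch of that step, with the sky model passed in as a callable (all names here are mine):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def sky_light(normal, ap_dir, ap_area, sky_color):
    # sample the sky halfway between the normal and the aperture vector,
    # then scale by how much sky the aperture actually exposes
    middle = normalize([n + a for n, a in zip(normal, ap_dir)])
    return [c * ap_area for c in sky_color(middle)]

flat_sky = lambda direction: [0.2, 0.4, 0.9]  # stand-in for the real sky model
lit = sky_light([0.0, 0.0, 1.0], [0.0, 0.0, 1.0], 0.5, flat_sky)
```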
The fog is applied after multiplying the light and surface colors.
The foam is lit like land, but ignoring the normal and associated dot products, because the foam is strongly translucent and has no real surface normal.
The sky reflection is obvious. The aperture is used to occlude the sky (approximately), interpolating at its edge between the sky reflection and the approximate land lighting in the reflected direction. So the blurred land reflections you can see in the demo are actually produced by apertures.
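That edge blend could be sketched like this (the cosine-space edge test and the softness constant are my assumptions):

```python
def reflected_color(refl_dot_ap, ap_cos, sky_color, land_color, softness=0.1):
    # refl_dot_ap: cosine between the reflected ray and the aperture vector;
    # ap_cos: cosine of the aperture's angular radius.
    # Inside the aperture we see sky, outside it the approximate land lighting,
    # with a soft transition at the edge.
    t = max(0.0, min(1.0, (refl_dot_ap - ap_cos) / softness + 1.0))
    return [l + (s - l) * t for s, l in zip(sky_color, land_color)]

sky, land = [0.2, 0.4, 0.9], [0.3, 0.25, 0.2]
inside = reflected_color(1.0, 0.9, sky, land)   # sky reflection
outside = reflected_color(0.0, 0.9, sky, land)  # blurred land "reflection"
```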
No Fresnel term is used. The fog is applied as usual.
Some final notes
I aim at a painterly look with Arcane Worlds, so I choose what looks good over what is realistic or physically correct.
As you might have noticed, there are no real HDR computations or mappings; the sky is tuned in low dynamic range. But decent gamma correction is still important. The formulas above are in linear color space, but the stored and final colors are packed to gamma 2.0. Why 2.0 and not the common 2.2 monitor gamma? Because it’s faster to compute and the difference is small anyway.
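Gamma 2.0 is just a square/square-root pair, which is what makes it cheaper than a pow(x, 1/2.2):

```python
import math

def pack_gamma2(linear):
    # encode linear color for gamma 2.0 storage: a single sqrt per channel
    return [math.sqrt(max(0.0, c)) for c in linear]

def unpack_gamma2(stored):
    # decode back to linear: a single multiply per channel
    return [c * c for c in stored]

color = [0.25, 0.5, 1.0]
roundtrip = unpack_gamma2(pack_gamma2(color))  # recovers the linear color
```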
The sky is tuned using “sky.nut” Squirrel script. You can edit it and it should be reloaded automatically by the demo when you save it.