In previous posts I described the process of extracting level geometry, lightmaps, and the lightgrid from a quake3 BSP. The only other major thing left unsaid about BSP was collisions and surfaceflags; it was probably inevitable I’d get to it. So, here I want to cover that as well as the quake3 shader system, which caps off the process of making and using BSPs.
The 90s idtech engines had a complicated relationship with the editor - lots of func_ this and surfaceflags that. It wasn’t always easy to follow: surfaces could be marked to not cast shadows, not collide, both, or neither. Some brushes became entities (doors and elevators), others described a playable volume (such as water or triggers). It was, frankly, a little messy. I’ve been eager to bring in lighting and geometry from that world, but collisions? You can get by just fine with Unreal-generated collisions.
…Unless you want to do water.
If we rely on Unreal-generated collisions for our mesh, it’ll presume every surface collides and is drawn equally. Every face will block both movement and visibility. Water, in our case, is just a brush, so Unreal presumes we mean for it to be solid. More broadly, every engine needs to offer a set of flags for a given surface. It might collide or not, it might block light or not, and it might receive light or not. Unreal solves this more elegantly with its collision system (and material system, for light). Quake had reasonable solutions too, and since we’re working as closely as possible to "take a bsp and make a real level with it", we’ll have to translate.
See, water in these old games was made of brushes with flags marking it as water, such that it was transparent, wouldn’t block movement, and would give screen effects when the player was in it. The same can generally be said of things like spiderwebs or vines or bead doors - things that were easily created with brushes, but shouldn’t block the player. Unreal has no understanding of this, and an obvious solution is just not to model it in the map, and instead manually create meshes and volumes representing these non-colliding objects, sized for each environment they appear in. That’s not… awful, but it’s surprisingly tedious to get right, can quickly result in ugly clipping and geometry, and I’d rather just… use the level editor.
Before we dive into the details, let’s see the scene that inspired (read: aggravated-me-into-doing) all this. A sunken room, with hip-deep faintly glowing sludge.
| Untextured, no surflights |
|---|
In this scene, the water is solid. Sure, it’s got a nice shearing material on it, but you can walk on it. It’s also got lighting problems - there are no fixtures, the light is an ugly blue that doesn’t match the water, and it’s lit unevenly.
| Textured, no surflights |
|---|
After replacing all the placeholder textures and adding fill lights, it looks… better. It’s still obviously a first pass, but it’s at least reminiscent of a level. But all these fixtures need lights - do we really need to add light sources to every one of them by hand? That sounds really tedious.
Fortunately, idtech 3 features surface lighting (which we’ll describe more technically below), meaning we can just give a texture a light value and the map compiler will place lights along the surface to simulate the surface itself emitting light.
| Textured, with surflights |
|---|
Great, with 15min of texturing and adding fixtures, we’ve pretty convincingly lit the scene. Sure, there are some linear fill lights involved, and sure, there’s still some placeholder geometry on the walls and pillars. Sure, the platforms could be a little more interesting, and we could use a centerpiece, maybe some more interesting wall decoration - but this first pass is looking like something you’d actually recognize as a game.
Quake3 had an extensive shader system. These weren’t shaders in the way we think of them today (compiled GPU procedures); they were closer to "materials". You could define a set of surfaceflags and editor-specific effects to apply to a given texture. For our purposes, there are really only a few we care about; the rest are a little too quake3-the-game specific to be useful.
- surfaceparm nonsolid - will not affect collisions
- surfaceparm nodraw - will not be a visible face (but, unless nonsolid, will still collide)
- surfaceparm nolightmap - won’t receive lightmap shadows
- q3map_surfacelight <int> - the entire surface will emit this amount of light
- q3map_lightsubdivide <int> - the distance between generated lights for the surface

These are self-explanatory enough. We can control visibility and collision separately for a surface. This alone gives us enough flexibility to handle all those cases above, but let’s linger on surfacelight.
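To make that concrete, here’s roughly what a shader using these directives looks like - the texture path and the numbers are made up for illustration, and I’ve left the draw stages out:

```
textures/mymap/sludge
{
	qer_editorimage textures/mymap/sludge.tga
	surfaceparm nonsolid
	surfaceparm nolightmap
	q3map_surfacelight 200
	q3map_lightsubdivide 128
	// ...blend/draw stages go here...
}
```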
In Unreal (and most PBR renderers, I imagine), a light-emitting surface has a channel describing where light emits. Back in the day, that just wasn’t a thing. There were static lights, and you could create coronas to imply bloom. Instead, a surfacelight is a regular texture that the map compiler will create static lights in front of, at regular intervals, to mimic the idea that the surface itself emits light. This is handy for reducing effort when lighting a map (fixtures automatically emit light, fills and feature lights are easier to manage), and for keeping your level editor less cluttered.
It’s also extremely handy for irregular surfaces, such as a body of water punctuated by several small islands - we want the water to emit light, but also the freedom to just move islands around without worrying about shifting manually-placed lights or rearranging patches of water.
For these, we’ll need;
Collisions aren’t directly represented in the BSP the way that lightmaps or lightgrids are. Instead, they’re collections of planes, associated with brushsides/brushes, referenced by leafs. The way q3 did it was to find which leaf you were in (a quick walk down the BSP tree), then simply do collision checks against the planes of the brushsides that leaf contained. The brushes included in the BSP are a subset of the actual brushes used to make the map (they’re really just the solid ones), and so for our purposes we can walk every leaf, find each collidable brushside, derive a triangulated face for it, and store it as a UCX mesh within our gltf.
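For reference, that walk through the lumps looks roughly like this - the struct and field names are just what my parser happens to call them, so treat it as a sketch of the indirection rather than the format spec:

```go
// Leafs reference leafbrushes, which index into brushes; each brush owns a
// contiguous run of brushsides, and each brushside points at a plane.
seen := map[int32]bool{}
for _, leaf := range bsp.Leafs {
	for i := int32(0); i < leaf.NumLeafBrushes; i++ {
		brushIdx := bsp.LeafBrushes[leaf.FirstLeafBrush+i]
		if seen[brushIdx] {
			continue // many leafs share a brush; only process it once
		}
		seen[brushIdx] = true
		brush := bsp.Brushes[brushIdx]
		for s := brush.FirstSide; s < brush.FirstSide+brush.NumSides; s++ {
			side := bsp.BrushSides[s]
			plane := bsp.Planes[side.PlaneIdx]
			_ = plane // this is what we'll derive collision faces from below
		}
	}
}
```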
You’ve probably heard "brushes" and "brushwork" a lot when talking about idtech mapping and modding. You’ve probably done some of it. You "draw" rectangles into a 3d space, texture each side as desired (textures are applied with a default consistent texel density), and maybe move the edges around a little. They’re dead simple to use and modify, and create a characteristic blocky-but-expressive static geometry.
Under the hood, brushes are defined by a set of planes, forming a convex volume. For the purpose of collisions, planes are perfect, because it’s trivial to find intersections of simple shapes on a plane, and to determine if something is on one side or the other of the plane; that’s the basis of collisions, so brushes are doubly handy. However, Unreal UCX meshes are expected to be geometry, with triangles and vertices, not planes.
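As a tiny illustration of why planes are so convenient for this: checking whether a point is inside a convex brush is just checking which side of each plane it falls on. The Vec3 and Plane types here are stand-ins for whatever your parser uses:

```go
// A point is inside a convex brush iff it sits behind (or on) every one of
// the brush's planes; quake-style planes are stored as Normal·x = Distance,
// with the normal facing out of the brush.
func pointInBrush(point Vec3, planes []Plane) bool {
	for _, p := range planes {
		if point.Dot(p.Normal)-p.Distance > 0 {
			return false // in front of a plane, so outside the brush
		}
	}
	return true
}
```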
The magic sauce (which is somewhat common knowledge) is to do the following;

- for each face, intersect its plane with every other pair of the brush’s planes - each valid intersection is a candidate vertex for that face,
- skip combinations that are duplicates or (nearly) parallel,
- triangulate the resulting vertices for the face,
- and emit those triangles as the collision mesh.
That process of deriving vertices from brush planes is, I’m just going to put it out there, inspired. I can’t really rave enough about how easy brushes are to work with, and how computationally simple they are. It may not be the most obvious algorithm at first, but it’s dead easy once you’ve done it.
Here’s what all that looks like, in pseudocode;
for _, brush := range brushes {
	// let brushPlanes be the Plane of each of this brush's brushsides
	for i, side := range brushsides(brush) {
		// nonsolid? (SURF_NONSOLID means the side shouldn't collide)
		if textures[side.TextureIdx].SurfaceFlags&0x4000 != 0 {
			continue
		}
		sideVerts := calculateVertices(brushPlanes, i)
		triVerts := triangulateVertices(sideVerts)
		// add triVerts to the set of collision vertices,
		// identify an index for each vertex in the set,
		// and create a mesh from those indices
	}
}
The calculation of vertices roughly looks like;
func calculateVertices(planes []Plane, specificPlane int) [][3]float64 {
	verts := [][3]float64{}
	p3 := planes[specificPlane]
	for i, p1 := range planes {
		for j, p2 := range planes {
			// skip duplicate pairs and the face's own plane
			if j <= i || i == specificPlane || j == specificPlane {
				continue
			}
			det := p1.Normal.Dot(p2.Normal.Cross(p3.Normal))
			if math.Abs(det) < 1e-6 {
				// (nearly) parallel planes - no single intersection point
				continue
			}
			// calculate the vertex where the three planes intersect
			v := [3]float64{
				(p1.Distance*p2.Normal[1]*p3.Normal[2] - p1.Distance*p2.Normal[2]*p3.Normal[1] - p2.Distance*p1.Normal[1]*p3.Normal[2] + p2.Distance*p1.Normal[2]*p3.Normal[1] + p3.Distance*p1.Normal[1]*p2.Normal[2] - p3.Distance*p1.Normal[2]*p2.Normal[1]) / det,
				(p1.Distance*p2.Normal[2]*p3.Normal[0] - p1.Distance*p2.Normal[0]*p3.Normal[2] - p2.Distance*p1.Normal[2]*p3.Normal[0] + p2.Distance*p1.Normal[0]*p3.Normal[2] + p3.Distance*p1.Normal[2]*p2.Normal[0] - p3.Distance*p1.Normal[0]*p2.Normal[2]) / det,
				(p1.Distance*p2.Normal[0]*p3.Normal[1] - p1.Distance*p2.Normal[1]*p3.Normal[0] - p2.Distance*p1.Normal[0]*p3.Normal[1] + p2.Distance*p1.Normal[1]*p3.Normal[0] + p3.Distance*p1.Normal[0]*p2.Normal[1] - p3.Distance*p1.Normal[1]*p2.Normal[0]) / det,
			}
			verts = append(verts, v)
		}
	}
	return verts
}
That’s a lot more code than I’d normally add, but I found that there really just weren’t great sources publicly available for this stuff, and the majority of the time was just figuring out how exactly you’re supposed to do this.
Anyway, at this point you’ve turned a brushface into a triangulated surface, and you just repeat that for each bface.
There are a few tricks and assumptions built into the above. We can derive the vertices of a face from the bplanes, but how do we know which vertices should go in which face?
EDIT 2024-04-17: Previously, the following paragraph was here, but was a little inaccurate.
The short answer is it doesn’t matter, just make two triangles, and use Unreal’s "Generate Convex Hull Per UCX" import option (on by default), which will disregard the order of vertices in faces, create a convex hull, and then use that. There’s no meaningful import-time cost.
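If you want something more concrete than "make two triangles", a naive fan over the face’s vertices is enough. This triangulateVertices is just a minimal sketch of mine, not the actual implementation from the pseudocode above, and it only gets away with ignoring vertex ordering because the convex hull option rebuilds the shape at import anyway:

```go
// Naive fan triangulation: every three entries of the result form one
// triangle. Ordering and winding don't need to be exact here, because
// Unreal's "Generate Convex Hull Per UCX" rebuilds a hull from the
// vertices at import time.
func triangulateVertices(verts [][3]float64) [][3]float64 {
	tris := [][3]float64{}
	for i := 1; i+1 < len(verts); i++ {
		tris = append(tris, verts[0], verts[i], verts[i+1])
	}
	return tris
}
```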
Since we know the normal of the bface, we can project each vertex from 3d to 2d along the normal. Then, we can use Delaunay Triangulation (such as this go lib) to derive triangles from an arbitrary set of 2d points.
"Project each vertex from 3d to 2d" sounds a little daunting, but it just looks like this;
arbitraryVec := Vec3{a, b, c}.Normalized()              // any vector not parallel to the face normal
displacement := vert.Dot(plane.Normal) - plane.Distance // distance of the vertex off the plane (~0 for face verts)
u := plane.Normal.Cross(arbitraryVec).Normalized()      // first axis lying on the plane
w := plane.Normal.Cross(u).Normalized()                 // second axis lying on the plane
vert2d := [2]float64{vert.Dot(u), vert.Dot(w)}          // the 2d coordinates handed to triangulation
arbitraryVec here is any vector that isn’t parallel to the normal of the bplane we’re working with. Personally I use an absurd vector (.865, .1234, .4321) that is unlikely to be parallel with anything in a map, and switch to a different one in the (as-yet-un-hit) case that it is. You could grab a different bface’s normal, but unfortunately it’s likely to be parallel, since we’re overwhelmingly working with 3d rects where at least one plane is parallel to the one we’re working with.
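If you wanted to guard that case explicitly, a check along these lines would do (same placeholder Vec3 helpers as above):

```go
// Hypothetical guard: if the chosen vector is (anti)parallel to the face
// normal, the cross products above degenerate, so fall back to another vector.
arbitraryVec := Vec3{0.865, 0.1234, 0.4321}.Normalized()
if math.Abs(plane.Normal.Dot(arbitraryVec)) > 0.999 {
	arbitraryVec = Vec3{0.1234, 0.4321, 0.865}.Normalized()
}
```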
Back in the old days, NPCs navigated the map either without pathing (see: Doom) or with a hand-placed nav graph, where the LD had to place navnodes everywhere NPCs should be able to go, and manually link them. This was such a common thing that games clear up to MOHAA (at least) were doing it, well into the 00s.
But quake3 contained the "Area Awareness System" (AAS), one of the most thorough implementations of what we now call a navigation mesh - and, as far as I can tell, the very first implementation of a navmesh for games (a book was published in 2000, but quake3 came out in 1999). The sole author of AAS, JMP van Waveren ("MrElusive"), was an OG, having come up through the Doom modding scene before getting hired at id, where he stayed until his death in 2017. In 2001 he defended a master’s thesis on the implementation of the quake3 bot (try beating that), a large part of which covered AAS, which he seems singlehandedly responsible for.
Anyway, AAS was a navmesh constructed entirely from collision data within the BSP. Playable surfaces were derived from floor collisions, and connections between areas were automatically established by "reachabilities" - automated hull checks that decided whether a bot could swim, jump, walk, use an elevator, or rocket-jump between areas. Imagine if your Unreal navmesh could calculate game-specific links between meshes - that’d be nuts.
AAS was also clever in using BSP to subdivide a playable concave surface into several convex ones - since travelling between any two points in a convex area doesn’t require pathfinding, it’s cheap to just walk straight to where you need to go. When you travel between areas, instead of specific nodes, you have edges of potential exits, with reachabilities of different weights. And all of this is almost free to calculate, since the BSP has already done the work of subdividing space. Navmeshes are greatly preferred over nav graphs today for this reason.
I’m including all this in an aside about collision, because it’s wholly derived from collision, and technically you could also make your own Quake-like navmesh entirely from the collision data. But, there’s really no reason to, since our goal here is bringing everything into Unreal, which has a very mature and effective navmesh for free. But if you were building your own engine, let’s just say you could do a lot worse than looking into how AAS was constructed.