Two friends are talking: "When the day after tomorrow becomes 'yesterday', the day which we will then call 'today' will have the same distance to Sunday as the day which we were calling 'today' when yesterday still was 'tomorrow'." Question: what was the day of the week when the friends were talking?
Uhm, what?..
Well, it's just a convoluted way to say that "the day Today+3 has the same distance to a Sunday as the day Today-2":
It's pretty easy to find the solution by just trying all the seven possibilities until the correct one is found.
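If you did want to brute-force it, the check over the seven weekdays fits in a few lines (a Python sketch; the day encoding and the `dist_to_sunday` helper are just illustrative names, not part of the Prolog program we'll write):

```python
# Days indexed 0..6, starting from Sunday.
days = ["su", "mo", "tu", "we", "th", "fr", "sa"]

def dist_to_sunday(i):
    # distance to the nearest Sunday (index 0), counting in both directions
    return min(i % 7, 7 - i % 7)

# "Today+3 has the same distance to a Sunday as Today-2"
solutions = [d for i, d in enumerate(days)
             if dist_to_sunday(i + 3) == dist_to_sunday(i - 2)]
print(solutions)  # -> ['we']
```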
However, who needs to think when one could write a program!
Let's do exactly that, using the Prolog programming language. For example, SWI-Prolog can be used, and it also has an online frontend for quick experimenting.
Facts
In Prolog, we state the problem (plus some ground knowledge) declaratively, and then ask the runtime to "figure out" the solution for us. So let's start from the ground knowledge about the "tomorrow" relationship:
su, mo, tu, we, th, fr, sa are so-called atoms (they must start with a lowercase letter), tomorrow is a predicate, and what we did here is state: "It is a true fact that Monday is a tomorrow to Sunday. It is a true fact that Tuesday is a tomorrow to Monday... It is a true fact that Sunday is a tomorrow to Saturday. It is known."
By listing facts like these, we create a knowledge base which we can later query.
For example, a query "is Thursday the tomorrow of Wednesday?", to which Prolog answers "true":
?- tomorrow(we, th).
true.
Or a query "is Friday the tomorrow of Wednesday?", to which Prolog answers "false":
?- tomorrow(we, fr).
false.
There are more interesting types of queries we can make using so-called free variables (whose names must start with an uppercase letter):
?- tomorrow(we, X).
X = th ;
false.
What happened there is that we said: "we know that the first argument is Wednesday, but we don't know the second one (naming it X). Could you figure out for us which values X could take, so that the statement is true?". And Prolog happily infers all the possible values of X, listing them separated by semicolons. In this case it's a single possible value ("Thursday"), after which it writes "false", meaning "no more values" (this may differ in other Prolog implementations).
We can put the free variable in any position. For example, "which day is Wednesday the tomorrow of?"
?- tomorrow(X, we).
X = tu ;
false.
Or even, "what are the days such that one is the tomorrow of the other?". To which Prolog prints the whole knowledge base it has so far, saying "well, it can be any of these pairs, but nothing else that I know of":
?- tomorrow(X, Y).
X = su, Y = mo ;
X = mo, Y = tu ;
X = tu, Y = we ;
X = we, Y = th ;
X = th, Y = fr ;
X = fr, Y = sa ;
X = sa, Y = su ;
false.
Likewise, we can specify the knowledge about the "distance to Sunday" value for the different days of the week:
% "distance to Sunday":
dist_to_sunday(su, 0).
dist_to_sunday(mo, 1).
dist_to_sunday(tu, 2).
dist_to_sunday(we, 3).
dist_to_sunday(th, 3).
dist_to_sunday(fr, 2).
dist_to_sunday(sa, 1).
Predicates
So far we've specified only facts (Horn clauses without a body), which simply state that something is unconditionally true. We can also create more complex predicates, which reflect a logical implication (if the right part is true, then the left part is true). For example, here's how we specify "yesterday" based on "tomorrow":
% "yesterday":
yesterday(X, Y) :- tomorrow(Y, X).
which means "Y is yesterday of X if X is tomorrow of Y". And again, we can make queries, e.g. asking "what is the day that is yesterday of Sunday?":
?- yesterday(su, X).
X = sa ;
false.
Prolog "figures out" the answer: for some day to be the yesterday of Sunday, Sunday must be the tomorrow of that day, and Sunday is the tomorrow of Saturday.
Predicates can have several statements on the right-hand side, separated by commas, which mean and:
% "the day after tomorrow":
day_after_tomorrow(X, Y) :- tomorrow(X, Z), tomorrow(Z, Y).
Y is the day after tomorrow to X if there is some Z such that Z is tomorrow to X and Y is tomorrow to Z.
?- day_after_tomorrow(su, X).
X = tu ;
false.
Problem statement
We can now put the full problem description into a separate predicate, which is basically a mapping of the problem formulation onto a sequence of statements separated by commas.
Essentially, we state that there exist days X (Today+3) and Y (Today-2) such that there is some distance-to-Sunday value D which is the same for both X and Y:
% the problem statement
conversation(Today) :-
    % the day after tomorrow becomes "yesterday":
    day_after_tomorrow(Today, X0),
    yesterday(X, X0),
    % the same distance to Sunday
    dist_to_sunday(X, D),
    dist_to_sunday(Y, D),
    % when yesterday still was "tomorrow"
    yesterday(Today, Y0),
    tomorrow(Y, Y0).
Now we can make a query: "what should X be, so that our problem statement is true?"
?- conversation(X).
X = we ;
false.
And the answer is: Wednesday!
Which makes sense, if we try to visualize it:
Problem MkII
Consider a slight modification to the problem:
Two friends are talking: "When the day after tomorrow becomes 'yesterday', the day which we will then call 'today' will have the same distance to Sunday as the day which we were calling 'today' when the day before yesterday still was 'tomorrow'." Question: what was the day of the week when the friends were talking?
The only difference from the original problem is that instead of (Today-2) we are talking about (Today-3):
And that's when having written a program kind of starts to pay off. We just add a definition of "the day before yesterday":
% "the day before yesterday":
day_before_yesterday(X, Y) :- day_after_tomorrow(Y, X).
And the new "conversation" predicate is modified correspondingly:
conversation(Today) :-
    % the day after tomorrow becomes "yesterday":
    day_after_tomorrow(Today, X0),
    yesterday(X, X0),
    % the same distance to Sunday
    dist_to_sunday(X, D),
    dist_to_sunday(Y, D),
    % when the day before yesterday still was "tomorrow"
    day_before_yesterday(Today, Y0),
    tomorrow(Y, Y0).
Running it reveals that the new answer is Sunday, which again makes sense:
?- conversation(X).
X = su ;
false.
Problem MkIII
One more slight modification:
Two friends are talking: "When the day after tomorrow becomes 'yesterday', the day which we will then call 'today' will have the same distance to Sunday (not the current week's one) as the day which we were calling 'today' when the day before yesterday still was 'tomorrow'." Question: what was the day of the week when the friends were talking?
Which means that when we measure the distance to Sunday, that Sunday should not be on the same week as "Today" that we are looking for:
This complicates things a bit, since now our "distances to Sunday" are different depending on the week.
A hack we could do is to separate the "previous week", "current week" and "next week" days, and do something like:
% "distance to Sunday" for the week before
dist_to_sunday(mo0, 1).
dist_to_sunday(tu0, 2).
dist_to_sunday(we0, 3).
dist_to_sunday(th0, 3).
dist_to_sunday(fr0, 2).
dist_to_sunday(sa0, 1).
dist_to_sunday(su0, 0).
% "distance to Sunday" for the current week
dist_to_sunday(mo, 1).
dist_to_sunday(tu, 2).
dist_to_sunday(we, 3).
dist_to_sunday(th, 4).
dist_to_sunday(fr, 5).
dist_to_sunday(sa, 6).
dist_to_sunday(su, 7).
% "distance to Sunday" for the week after
dist_to_sunday(mo1, 1).
dist_to_sunday(tu1, 2).
dist_to_sunday(we1, 3).
dist_to_sunday(th1, 3).
dist_to_sunday(fr1, 2).
dist_to_sunday(sa1, 1).
dist_to_sunday(su1, 0).
Which is way too verbose, as we'd need to do something similar with the "tomorrow" predicate as well.
What if we specified both "days" and "distances" as lists:
% list of days for three weeks in a row:
days([mo0, tu0, we0, th0, fr0, sa0, su0,   % previous week
      mo,  tu,  we,  th,  fr,  sa,  su,    % current week
      mo1, tu1, we1, th1, fr1, sa1, su1]). % next week
% corresponding distances to Sunday (which is not on the current week)
distances([1,2,3,3,2,1,0, 1,2,3,4,5,6,7, 6,5,4,3,2,1,0]).
and then used these lists to set up the facts for both "dist_to_sunday" and "tomorrow"?.. We can do that by specifying some helper (recursive) predicates that work on lists. The first one is "lookup": it is supposed to be true when variables X and Y have the same index in the corresponding lists (the 3rd and 4th predicate arguments):
% true when X and Y are at the same position in corresponding lists
lookup(X, Y, [X|_], [Y|_]).
lookup(X, Y, [_|T1], [_|T2]) :- lookup(X, Y, T1, T2).
The first line says that "this predicate is true when both X and Y are at the beginning of their corresponding lists". The second line says "otherwise, recursively assert the same for the list tails". Then, "dist_to_sunday" can be written as:
dist_to_sunday(X, D) :- days(L1), distances(L2), lookup(X, D, L1, L2).
which can be read as "X has distance-to-Sunday D if X and D stand at the same positions in the corresponding lists L1 (which is the list of days) and L2 (which is the list of distances)".
Similarly, for "tomorrow" we create another helper predicate, "consecutive", which will be true if two given variables appear consecutively in some list:
% true when Y follows X in a list
consecutive(X, Y, [X, Y|_]).
consecutive(X, Y, [_|T]) :- consecutive(X, Y, T).

tomorrow(X, Y) :- days(L), consecutive(X, Y, L).
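For intuition, here is roughly what those two recursive helpers check, sketched in Python (unlike the Prolog predicates, this version only tests a given pair of values instead of enumerating all solutions):

```python
def lookup(x, y, l1, l2):
    # True when x and y sit at the same index of their respective lists
    if not l1 or not l2:
        return False
    if l1[0] == x and l2[0] == y:
        return True
    return lookup(x, y, l1[1:], l2[1:])  # otherwise, recurse on the tails

def consecutive(x, y, lst):
    # True when y immediately follows x somewhere in lst
    if len(lst) < 2:
        return False
    if lst[0] == x and lst[1] == y:
        return True
    return consecutive(x, y, lst[1:])
```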
The rest of the program remains the same as before. Here's the full online version.
Querying it gives the new answer, which is actually the same as before, except now "distances to Sunday" are indeed measured differently:
?- conversation(X).
X = su ;
false.
Problem MkIV
Finally, one more slight modification:
Two friends are talking: "When the day after tomorrow becomes 'yesterday', the day which we will then call 'today' will have the same distance to Sunday (not that day's week one) as the day which we were calling 'today' when the day before yesterday still was 'tomorrow'." Question: what was the day of the week when the friends were talking?
I.e. "distance to Sunday" is now measured differently still: the Sunday should not be in the same week as the day we are measuring the distance from.
Looking at how the distances to Sunday change this time, we can see that we are back to the simpler case where they repeat every week (except now with different numbers). So we could either revert to the way the code was in the very first problem, or reuse the previous one and just shorten the lists:
The rest is the same as before, and here's the online version.
Regardless of what we do there, we get the answer to the query:
?- conversation(X).
false.
Which means that there does not exist any X that would satisfy the "conversation" predicate. Simply put, there is no solution to this problem.
No solution?.. Really?..
That's what our program tells us, and it probably cannot lie. It has been correct so far, every time... right?
Well, "no solution" is also in a sense a solution, so mission accomplished!
Or is it? What about this one:
What if the friends were talking exactly during the midnight between Wednesday and Thursday?..
The fallacies of problem solving
So there actually is a solution. Of course, one could argue about its "correctness", pointing out ambiguities and whatnot.
This does not change the point: too often we paint ourselves into the corner of some particular mind frame. Step by step, getting deeper into the details of the incidental complexity of the solution itself (rather than the original problem), at some point we lose perspective and stop realizing which problem exactly we are solving, even though each step appears logical, maybe even elegant. And thus the domain of possible solutions gets artificially limited to whatever is dictated by the particular implementation. The boundaries of the box that we keep thinking inside get more and more rigid with every step, until "no solution" appears a viable and logical outcome.
It gets worse when we originally solved some other (but similar) problem successfully, and then gradually refined the solution to fit the (seemingly, slightly) modified requirements.
Which often happens when developing software systems, where requirements are usually a moving target. How many times have you heard that something is "impossible" to do, or that there is "no solution" to some problem, in the context of some existing software? Sometimes that very solution might be right there for the taking, on the surface, where no one is looking. Because people were thinking they were solving a different problem than the one at hand.
There are ways of breaking out of this rigid mind cage. There are conventional wisdoms involving golden hammers, nails, kisses and whatnot. There are methodologies that have already started to age but still have at their foundation the very idea of avoiding digging too deep in the wrong direction before it's too late.
Even being aware of all that, it's still very tempting to follow the sweet track of what appears to be a nice/logical evolution of the code, just to find out one fine day that it's not potent anymore. Like, at all.
So, in a sense, this was an "un-tutorial": a reminder that sometimes, in order to find a solution, one needs to unlearn whatever one thinks one knows about the problem.
(The non-obfuscated version is on GitHub, and the result can be seen online on ShaderToy. Be aware that that thing requires a WebGL-enabled browser and a decent video card.)
No data input is required; it's self-contained code which does both the procedural scene description and the raytracing/shading, all in a single fragment shader. The only assumptions are that we draw a fullscreen quad (AKA "two triangles") and that a couple of uniforms (time and resolution) are set externally.
If you've seen the stuff people put on ShaderToy, and especially the amazing work of its creator Inigo Quilez, then it might not amaze you too much.
Otherwise, read on for the recipe.
Ingredients
To cook a Christmas tree shader, such as this, you will need:
A so-called "distance function", which in a sense is a way to describe the scene geometry (in our case, as one big math formula)
Code that implements a "raymarching algorithm" (using the aforementioned distance function)
A way to somehow specify material properties for different parts of the scene
A shading code, which computes the color of a pixel according to some lighting model
"Camera" code, which handles the current position and ray direction for the given fragment
Spice up with a fog, reflections, shadows, ambient occlusion and other effects, to taste
Optional: serve in the obfuscated form
Distance function
Traditionally, 3D scenes are represented via polygonal (triangular) meshes, or some higher-level analytical description of the geometry.
But let's say that instead of having an explicit scene description (polygonal or whatever), we somehow managed to specify a certain "magic" function. This function, for any point in space, would return the distance to the closest point (wherever it is) on any of the scene's objects:
// returns the distance from the given point in space
// to the closest point on the scene's geometry
float distf(vec3 pos) {
float d = ... // some computations
return d;
}
We don't really need to know where exactly this closest point is located, just the distance to it.
Furthermore, we don't even need the exact distance - only a "good enough" lower bound (i.e. some value which is guaranteed to be not larger than the real distance), so that stepping by it can never take us through a surface.
An important property is that the distance function is signed, meaning that if our point pos is inside some object, the returned distance will still make sense. However, it will be negative, and its absolute value will not exceed the distance needed to get "outside".
Raymarching algorithm
What's the use of such a function for Christmas trees?.. Well, the already mentioned raymarching optimization is one of the benefits:
(sorry, could not help myself)
Seriously though, also known as "sphere tracing", it's nothing more than an optimization of the raytracing step.
The basic idea of raytracing is that for each pixel in the image plane we shoot the corresponding ray from the eye into the scene and find where this ray hits the geometry or light sources (possibly bouncing around).
Ideally, we should be able to analytically find this intersection point, given some scene representation:
This is feasible for basic environments (e.g. ones composed from spheres), but in practice it gets complicated, if not impossible, when more complex shapes are added.
So instead, we step along the ray in small increments, every time asking the question: "are we there yet?" (in other words, has the ray penetrated any object yet). That's what is called raymarching, and it obviously can be very inefficient for small step sizes:
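That naive loop can be sketched as follows (a Python sketch of the idea, with a made-up fixed step size and a single sphere standing in for the whole "scene"):

```python
import math

def distf(p):
    # the "scene": a single unit sphere at the origin
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0

def naive_march(orig, direction, step=0.01, max_t=100.0):
    t = 0.0
    while t < max_t:
        p = (orig[0] + direction[0]*t,
             orig[1] + direction[1]*t,
             orig[2] + direction[2]*t)
        if distf(p) < 0.0:  # "are we there yet?"
            return t
        t += step           # tiny fixed step: correct, but wasteful
    return max_t

# ray from z=-5 straight at the sphere: hits the surface near t = 4
t_hit = naive_march((0.0, 0.0, -5.0), (0.0, 0.0, 1.0))
```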
When we have a distance function, however, we can potentially proceed in much bigger steps, since we know that there is no way we can penetrate any object, stepping by the amount returned from the distance function:
The shader code performing the described procedure:
#define NEAR_CLIP_PLANE 1.0
#define FAR_CLIP_PLANE 100.0
#define MAX_RAYCAST_STEPS 200
#define STEP_DAMPING 0.7
#define DIST_EPSILON 0.001
float rayMarch(in vec3 rayOrig, in vec3 rayDir) {
float t = NEAR_CLIP_PLANE; // current distance along the ray
for (int i = 0; i < MAX_RAYCAST_STEPS; i++) {
float d = distf(rayOrig + rayDir*t);
if (d < DIST_EPSILON || t > FAR_CLIP_PLANE) break;
t += d*STEP_DAMPING;
}
return t;
}
It returns the distance to the point hit by the ray (or bails out when hitting the scene boundary or wandering around for too long).
Note that in this particular implementation we "damp" the step (every time stepping a bit closer than we could have), which means that it will take more steps to get to the destination. We'll see later why this may be needed.
Alright, that's all good, but how do we cram a complex scene into the distance function to start with?..
Divide et impera.
Distance function: primitive shapes
There is a bunch of papers for an in-depth insight, but the basic idea is nothing new - we start with the primitive shapes and then somehow combine them into more and more complex objects. Pretty much like with CSG, except we are doing it not on the explicit parametric object representation, but on the distances instead.
Thanks to some useful properties of the distance fields, this opens a few interesting modelling opportunities.
There is a little "playground" shader, which I will be using for the demonstration purposes, you can also tinker with it directly online on ShaderToy.
Plane
The plane is one of the simplest shapes. The distance from a point p to a plane, specified by a normal n and an offset s, is computed by the formula:
At first sight, it may not appear too useful as a building block. But it actually is useful, as we'll see later.
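The formula boils down to a single dot product. A sketch (in Python rather than GLSL; the sign convention for the offset `s` is an assumption here, matching how the playground calls it later):

```python
def plane(p, n, s):
    # signed distance from point p to the plane with unit normal n, offset s
    return p[0]*n[0] + p[1]*n[1] + p[2]*n[2] + s

# a point 2 units above the ground plane (normal pointing up, zero offset):
d = plane((0.0, 2.0, 0.0), (0.0, 1.0, 0.0), 0.0)
```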
Cone (infinite)
Similarly, here's an infinite cone with the tip at (0,0,0), going infinitely downwards. In our case the cone is specified by a 2D vector n, which is the normal to the cone's side:
There are quite a few other primitives, like boxes etc. (including the cool ones with rounded corners). Check the corresponding page by Inigo Quilez.
For our Christmas tree we only need the described ones, though.
Distance function: boolean operations
The primitives on their own are not too exciting. But here comes the kicker: boolean operations on the primitives' distance fields are expressed quite easily. Which opens a world of opportunities for combining things.
Union
Union is the most basic operation for "adding" objects to the scene. It's a min-function (which kind of makes sense intuitively: the closest distance "wins"):
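On plain float distances, all three boolean operations reduce to min/max (a Python sketch mirroring the add/diff/intersect helpers used in the snippets below):

```python
def add(d1, d2):
    # union: the closest surface wins
    return min(d1, d2)

def intersect(d1, d2):
    # intersection: a point is outside unless it is inside both shapes
    return max(d1, d2)

def diff(d1, d2):
    # difference: keep d1, carve away the inside of d2
    return max(d1, -d2)
```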
Now, we can mix the boolean operations to our liking. One nice trick is using the plane primitive to chop the unwanted parts off:
float distf(vec3 p) {
float d1 = sphere(p, 0.5);
float d2 = torus(p, 0.5, 0.3);
float d = add(d1, d2);
// cut with a plane
float d3 = plane(p, vec3(0.0, 0.0, -1.0), -0.1);
d = diff(d, d3);
return d;
}
Distance function: Affine transformations
Still, having all the primitives centered at (0,0,0) is a bit boring. We want to move, rotate, scale them around. That's what the affine transformations do.
Translation
One important thing to note: while we were applying the boolean operations to the results of other distance functions, the affine transformations work on their arguments instead. For example, to "move" a primitive sphere, we instead move the input point p, so that the sphere "believes" it is still centered at (0,0,0):
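A minimal sketch of the trick (Python here; `sphere` is the usual "length minus radius" distance):

```python
import math

def sphere(p, r):
    # distance from p to a sphere of radius r centered at the origin
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - r

def translate(p, offs):
    # move the point instead of the primitive
    return (p[0] - offs[0], p[1] - offs[1], p[2] - offs[2])

# a unit sphere "moved" to (2, 0, 0); the point (3, 0, 0) lies on its surface
d = sphere(translate((3.0, 0.0, 0.0), (2.0, 0.0, 0.0)), 1.0)
```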
A rotation around Y axis (the one looking upward) is:
Note that the formula does not affect the third vector component (and does not depend on it), so in this case we can write the code for a two-component vector:
A more general rotation, around an arbitrary axis, can be represented either as a sequence of rotations or via quaternions (to avoid the so-called gimbal lock problem), but let's not go there.
Scale
Scale is different in a sense that it modifies both the argument and the result of the distance function in question:
Intuitively, it's like Alice having a "Drink Me" potion, growing three times bigger (according to the scale value) and using her steps to measure the distance to the object. Then, having an "Eat Me" cake to grow normal again and trying to raymarch to the very object using the just measured amount of steps. Obviously, those were "big Alice" steps, so they need to be rescaled correspondingly, so that "normal Alice" could use them.
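In code the whole story is one line (a Python sketch): evaluate the distance in the scaled-down space, then scale the resulting distance back up:

```python
import math

def sphere(p, r):
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - r

def scaled(distf, p, s):
    # shrink the point, measure, then convert "big Alice" steps back to normal
    return distf((p[0]/s, p[1]/s, p[2]/s)) * s

# a unit sphere scaled 2x: a point 3 units from the center is 1 unit away
d = scaled(lambda q: sphere(q, 1.0), (3.0, 0.0, 0.0), 2.0)
```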
Order of translation/rotation/scale
It matters, so watch out.
Distance function: Extruding to infinity
There is more that can be done by transforming the distance function argument. One example is clamping, which essentially extrudes an object out to infinity along the given axis (here it's Y):
Check out this infinite pencil:
float distf(vec3 p) {
p.y = max(p.y, 0.0); // extrude the base of the cone down to infinity (use min to extrude up)
p = translate(p, vec3(0.0, 1.5, 0.0)); // move the cone 1.5 units upward
float d = cone(p, normalize(vec2(1.0, 0.1))); // the infinite cone
return d;
}
Distance function: swapping coordinates
The primitives described earlier are consistently oriented with "up" along the Y axis. Using GLSL vector component swizzling, this can be adjusted to orient them differently. Here, for example, we swap the y and z coordinates and get the torus oriented along the Z axis instead:
Another useful operation is symmetric mirroring, which is achieved by just taking absolute values from the input coordinates:
float distf(in vec3 p) {
// mirror upper part symmetrically about XoZ plane
// (p.y = -abs(p.y) for the lower part)
p.y = abs(p.y);
return torus(translate(p.xzy, vec3(0.0, 0.0, 0.25)), 0.5, 0.1);
}
Distance function: capping infinite primitives
Now that we know the simple distance primitives and the basic operations on them, we can easily derive the distance functions for things like a conventional (finite) cylinder or cone:
Cylinder
To get a capped cylinder of height h:
Take an infinite cylinder
Chop it by a plane from top
Mirror symmetrically about the ground plane
float capped_cylinder(in vec3 p, float r, float h) {
p.y = abs(p.y); // mirror upper part about XoZ plane
float cyl = cylinder(p, r);
float capPlane = plane(p, vec3(0.0, 1.0, 0.0), h*0.5);
float d = intersect(cyl, capPlane); // cap cylinder from the top
return d;
}
Note that we could avoid some extra computations if we passed not the height/radius pair, but rather the height/normal. That would make the function interface a bit more confusing, though.
Distance function: Distortions
One more type of distance field modification is adding perturbations to the shapes. We modify the result of some distance function with a distortion function d, which itself depends on p:
Note that the actual function in the example is pretty arbitrary. No doubt one could get something nicer looking with fewer instructions, after spending some time tweaking the parameters.
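A sketch of the idea (Python; the perturbation below is made up for illustration and is not the one used in the actual shader):

```python
import math

def sphere(p, r):
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - r

def distorted_sphere(p):
    # perturb the distance with a small position-dependent wobble
    wobble = 0.05*math.sin(7.0*p[0])*math.sin(7.0*p[1])*math.sin(7.0*p[2])
    return sphere(p, 1.0) + wobble
```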
Distance function: cloning the primitives
At this point you may have started wondering: "well, that's all nice and well, but are we going to add every needle in the Christmas tree, in a loop, via a boolean operation?.. in a fragment shader?.. that does not scale at all!"
That's true, it does not scale, but fortunately we have another trick up our sleeves, again thanks to the properties of the distance functions.
We can clone an object almost for free using the modulo operations. The idea is again to transform the distance function argument.
Intuitively, we split the space into cells, find which cell our point p falls into, and pretend that this cell has its own copy of the "world". Essentially, we manipulate the local coordinate systems of the objects, in a way similar to what we've been doing with the affine transforms, symmetric mirroring etc.
Positional cloning
We can get the object infinitely cloned along some axis (let's say, X) by taking modulo and offsetting the input p.x coordinate:
In practice we don't often need an infinite amount of objects, however. Fortunately, the cloning can be limited by either clamping (extruding) the input coordinates or by chopping the resulting distance with planes:
(the second version of the function also returns the number of the sector that p is in, which can be needed for adding further variations to the cloned objects)
repeatAng is a function from vec2 to vec2, and similarly to the rotation transform we can perform this operation around other axes by swizzling the components:
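The two repetition helpers can be sketched like this (a Python sketch; the shader's repeat/repeatAng work on GLSL vectors and, in one variant, also return the sector number, which is omitted here):

```python
import math

def repeat(x, spacing):
    # fold x into a cell of the given size, centered at zero
    return (x + spacing*0.5) % spacing - spacing*0.5

def repeat_ang(x, y, n):
    # radial cloning: fold the polar angle into one of n equal sectors
    sector = 2.0*math.pi/n
    a = math.atan2(y, x) % sector - sector*0.5
    r = math.hypot(x, y)
    return (r*math.cos(a), r*math.sin(a))
```

Note that the radial fold preserves the distance from the axis, so the cloned copies all look like the one canonical sector.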
Now that we've got a whole bunch of building blocks for our distance functions, we can finally try and describe something that we could put on a Christmas tree!
Materials
Before proceeding, though, we will need to make a small modification to the distance function, in order to be able to pass around not only the distance which the ray travels before hitting the scene, but also some information about the material at the point being hit. One way of doing it is to make the distance function return vec2 (a two-component vector) instead of float, with the distance in the first component and the material ID in the second:
You may notice that the boolean operations now also need to work with vec2 values instead of float, so they get modified correspondingly:
void diff(inout vec2 d1, in vec2 d2) {
if (-d2.x > d1.x) {
d1.x = -d2.x;
d1.y = d2.y;
}
}
void add(inout vec2 d1, in vec2 d2) {
if (d2.x < d1.x) d1 = d2;
}
void intersect(inout vec2 d1, in vec2 d2) {
if (d1.x < d2.x) d1 = d2;
}
The actual material ID can be further used for shading. For example, here's how we could dispatch the diffuse material color based on the material ID:
vec3 getMaterialColor(float matID) {
vec3 col = BACKGROUND_COLOR;
if (matID <= MTL_OBJ1) col = vec3(0.8, 0.8, 1.8);
else if (matID <= MTL_OBJ2) col = vec3(1.4, 1.3, 0.3);
return col;
}
An interesting side-effect of performing intersection/difference of the objects with different materials is that the resulting object now has both of the materials (on the image: union, intersection, difference - left to right):
Topper
Let's look into slightly less obvious shapes and how to make them. For example, the Christmas tree topper, which consists of a star shape and a cylindrical holder. The code for the star looks like:
float star(vec3 p) {
p.xy = repeatAng(p.xy, 5.0); // 3. Clone five corners radially about Z axis
p.xz = abs(p.xz); // 2. Symmetrical about XoY and ZoY
vec3 n = vec3(0.5, 0.25, 0.8);
float d = plane(p, normalize(n), 0.1); // 1. A plane cutting the corner of X+Y+Z+
return d;
}
Note how "1 -> 2 -> 3" goes kind of backwards in the code. That's because we do most of the transformations on the input arguments of the current distance field. So, for the star shape:
Have a plane primitive that chops away the corner of the X+Y+Z+ coordinate octant
Symmetrically mirror it about both XoY and ZoY planes (so we get a kind of infinite pyramid)
Symmetrically clone the tip of that pyramid about the Z axis
Then we can scale, offset that star and add the cylindrical base:
Similarly we compose other objects by using the primitives, affine transformations, boolean operations, distortions and other operations we've just looked into:
This one is a bit more elaborate, but it still uses the same principles. Let's start by making an (infinite) branch:
Make a single needle using a cone primitive
Clone that needle along the Z axis
Chop off half of it
Skew the infinite row of needles downwards, so that our needles will be eventually bent forward
Rotate the whole row back to align with the Z axis
Revolve/clone about Z axis
Twist the needles to have the branch appear less regular
#define NEEDLE_LENGTH 0.5
#define NEEDLE_SPACING 0.2
#define NEEDLE_THICKNESS 0.05
#define NEEDLES_RADIAL_NUM 17.0
#define NEEDLE_BEND 0.99
#define NEEDLE_TWIST 1.6
#define NEEDLE_GAIN 1.5
float needles(in vec3 p) {
p.xy = rotate(p.xy, -length(p.xz)*NEEDLE_TWIST); // 7. twist the needles
p.xy = repeatAng(p.xy, NEEDLES_RADIAL_NUM); // 6. replicate the needles radially
p.yz = rotate(p.yz, -NEEDLE_BEND); // 5. rotate the row of needles to align back with Z axis
p.y -= p.z*NEEDLE_GAIN; // 4. skew the row of needles down along Y
p.z = min(p.z, 0.0); // 3. remove the Z+ part
p.z = repeat(p.z, NEEDLE_SPACING); // 2. clone the needle along Z
return cone(p, NEEDLE_THICKNESS, NEEDLE_LENGTH); // 1. A single needle (cone)
}
There is one important problem to talk about. With all the bending and twisting going on, we are distorting the space, so distances are not preserved anymore, and the distance function is no longer guaranteed to return an equal-or-smaller distance.
To understand why that happens, recall again Alice with her "Eat Me" cake... except this time she measures her steps through a distorted looking glass, which warps the space. After drinking the "Drink Me" potion, it's not sufficient to scale the number of steps by a constant anymore: the scale will be different at every point in space.
While it may be possible to come up with such an "inverted distortion" function for certain cases, there is an easier hack that we've already mentioned before: step damping. Essentially, we are just being conservative and saying: "oh well, if the distance function can lie and actually penetrate the objects, we'll use only a fraction of what it returns".
Take a look at these two images: to the left is the result without step damping, while the right one has a damping of 0.7:
Anyway, back to our branch. Let's add a stem to it:
Now that we have a branch, we can use a very similar procedure to get the whole tree:
#define BRANCH_ANGLE 0.5
#define BRANCH_ANGLE_FADE 0.11
#define BRANCH_SPACING 1.3
#define BRANCH_NUM_MAX 9.0
#define BRANCH_NUM_FADE 2.2
#define BRANCH_CURVATURE 0.08
#define TREE_H 4.0
#define TREE_R 3.0
#define TRUNK_WIDTH 0.025
#define TREE2_ANGLE 0.4
#define TREE2_OFFSET 0.4
#define TREE2_SCALE 0.9
vec2 halfTree(vec3 p) {
float section = floor(p.y/BRANCH_SPACING);
float numBranches = max(2.0, BRANCH_NUM_MAX - section*BRANCH_NUM_FADE);
// 6. Revolve/clone around Y:
p.xz = repeatAng(p.xz, numBranches);
// 5. Offset to get the tree foundation:
p.z -= TREE_R;
// 4. Rotate to get the tree slope:
p.yz = rotate(p.yz, BRANCH_ANGLE);
// 3. Clone vertically:
p.y = repeat(p.y, BRANCH_SPACING);
// Offset to compensate for the bend:
p.y -= BRANCH_SPACING*0.2;
// 2. Bend it to a parabolic shape:
p.yz = rotate(p.yz, -length(p.yz)*BRANCH_CURVATURE + BRANCH_ANGLE + section*BRANCH_ANGLE_FADE);
// 1. A branch:
vec2 b = branch(p);
return b;
}
There are a couple of tricks going on there, with bending differently and adjusting the repeatAng factor based on the distance of the branch from the ground. Also, one can see artifacts at some intermediate stages - in this case that's OK, since we sweep that part under the rug later anyway.
To fake a bit more variation we repeat the "half-tree" two times, the second copy being slightly scaled, offset and rotated:
vec2 tree(vec3 p) {
// the first bunch of branches
vec2 res = halfTree(p);
// the second bunch of branches (to hide the regularity)
p.xz = rotate(p.xz, TREE2_ANGLE);
p.y -= BRANCH_SPACING*TREE2_OFFSET;
p /= TREE2_SCALE;
vec2 t2 = halfTree(p);
t2.x *= TREE2_SCALE;
add(res, t2);
// trunk
vec2 tr = vec2(cone(p.xyz, TRUNK_WIDTH, TREE_H*2.0), MTL_OBJ2);
add(res, tr);
res.x = intersect(res.x, sphere(p - vec3(0.0, TREE_H*0.5 + 1.0, 0.0), TREE_H + 0.8));
return res;
}
Shading
It's a big topic on its own, but let's take a quick ride, especially since the actual lighting here is as basic as possible: one global directional light, the Phong lighting model, plus reflections and shadows.
Fog
Let's start with it, since it's the most obvious shading effect that can be achieved with the distance value returned by the raymarching. The "fog" happens by mixing in the background color exponentially, depending on the distance to the point:
vec3 applyFog(vec3 col, float dist) {
return mix(col, BACKGROUND_COLOR, 1.0 - exp(-FOG_DENSITY*dist*dist));
}
vec3 render(in vec3 rayOrig, in vec3 rayDir) {
vec2 d = rayMarch(rayOrig, rayDir);
float t = d.x;
vec3 col = vec3(0.8, 0.8, 1.8); // a constant color for now
col = applyFog(col, t);
return col;
}
Materials
Seeing only the object contours is rather boring, so let's add some colors as described in the "Materials" section. Using this, the rendering function gets modified:
vec3 getMaterialColor(float matID) {
vec3 col = BACKGROUND_COLOR;
if (matID <= MTL_GROUND) col = vec3(3.3, 3.3, 4.5);
else if (matID <= MTL_NEEDLE) col = vec3(0.152,0.36,0.18);
else if (matID <= MTL_STEM) col = vec3(0.79,0.51,0.066);
else if (matID <= MTL_TOPPER) col = vec3(1.6,1.0,0.6);
else if (matID <= MTL_CAP) col = vec3(1.2,1.0,0.8);
else col = jollyColor(matID);
return col;
}
vec3 render(in vec3 rayOrig, in vec3 rayDir) {
vec2 d = rayMarch(rayOrig, rayDir);
float t = d.x, matID = d.y;
vec3 col = getMaterialColor(matID); // get the material color at the ray hit position
col = applyFog(col, t);
return col;
}
Lambertian (diffuse) component
This is the simplest lighting model; it assumes matte surfaces. The color is computed as albedo·(CA + CL·max(0, n·l)), where CA is the "ambient" term (used to fake global illumination), CL is the global directional light color, n is the surface normal at the point where the ray hit the surface, and l is the direction towards the light.
The surface normal can be computed numerically via central finite differences: n = normalize(f(p + εx) − f(p − εx), f(p + εy) − f(p − εy), f(p + εz) − f(p − εz)), where f is the distance function, ε is a small offset and x, y, z are the coordinate axes. This means that in order to compute the normal we need to evaluate the distance function six more times.
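In code this could look like the following sketch (assuming the scene distance function is called `distf` and `NORM_EPS` is a small offset; both names are assumptions):

```glsl
#define NORM_EPS 0.001

vec3 normal(vec3 p) {
    vec2 e = vec2(NORM_EPS, 0.0);
    // central differences: six extra evaluations of the distance function
    return normalize(vec3(
        distf(p + e.xyy) - distf(p - e.xyy),
        distf(p + e.yxy) - distf(p - e.yxy),
        distf(p + e.yyx) - distf(p - e.yyx)));
}
```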
The shading code then looks like this:
#define GLOBAL_LIGHT_COLOR vec3(0.8,1.0,0.9)
#define GLOBAL_LIGHT_DIR normalize(vec3(-1.2, 0.3, -1.1))
#define AMBIENT_COLOR vec3(0.03, 0.03, 0.03)
vec3 render(in vec3 rayOrig, in vec3 rayDir) {
vec2 d = rayMarch(rayOrig, rayDir);
float t = d.x, matID = d.y;
vec3 mtlDiffuse = getMaterialColor(matID);
vec3 hitPos = rayOrig + t*rayDir;
vec3 n = normal(hitPos);
float diffuse = clamp(dot(n, GLOBAL_LIGHT_DIR), 0.0, 1.0); // diffuse term
vec3 col = mtlDiffuse*(AMBIENT_COLOR + GLOBAL_LIGHT_COLOR*diffuse);
col = applyFog(col, t);
return col;
}
Looks like plastic. It's fantastic!
Phong (specular) component
Let's bring in some shiny stuff. The Phong model adds a so-called specular component on top of the diffuse Lambertian one. Essentially it's a fake reflection which assumes that the environment consists of a single blob light source: specular = CL·max(0, r·v)^α. Here v is the view direction, α is the specular "sharpness" coefficient (used to adjust the shape of the reflected fake blob), and r is the so-called "reflection vector", r = 2(n·l)n − l, which is used so commonly that GLSL has a built-in function for it (reflect).
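As a sketch in GLSL (the helper name and the way the sharpness is passed in are illustrative):

```glsl
// n: surface normal, l: direction towards the light,
// rayDir: the marching ray direction, alpha: specular sharpness
float specular(vec3 n, vec3 l, vec3 rayDir, float alpha) {
    vec3 v = -rayDir;          // direction towards the viewer
    vec3 r = reflect(-l, n);   // the reflection vector
    return pow(clamp(dot(r, v), 0.0, 1.0), alpha);
}
```

The result gets multiplied by the light color and added on top of the diffuse term.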
Still not shiny enough? Fear not, 'cause that's raytracing we are dealing with! After the ray hits an object, we can cast so-called secondary rays, including one in the direction of the reflection vector. For reflections we'll keep doing that until we reach either the bounce limit or the raymarching step limit:
#define MAX_RAY_BOUNCES 3.0
#define BAUBLE_REFLECTIVITY 0.7
vec3 render(in vec3 rayOrig, in vec3 rayDir) {
vec3 resCol = vec3(0.0);
float alpha = 1.0;
for (float i = 0.0; i < MAX_RAY_BOUNCES; i++) {
vec2 d = rayMarch(rayOrig, rayDir);
float t = d.x, mtlID = d.y;
// ... SNIP ...
col = applyFog(col, t);
resCol += col*alpha; // blend in (a possibly reflected) new color
if (mtlID <= MTL_BAUBLE ||
abs(dot(nrm, rayDir)) < 0.1) { // poor man's Fresnel
break;
}
rayOrig = pos + ref*DIST_EPSILON;
alpha *= BAUBLE_REFLECTIVITY;
rayDir = ref;
}
return vec3(clamp(resCol, 0.0, 1.0));
}
Shadows
They are important for giving depth and volume to the scene. The technique, again, takes advantage of the fact that we are dealing with distance fields: we march a secondary ray from the surface point towards the light and check whether (and how closely) it passes by other objects.
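A common way to do it (following the well-known soft-shadow trick for distance fields; the exact code of the original shader may differ, and `dist()` is an assumed name for the scene distance function):

```glsl
float softShadow(vec3 p, vec3 lightDir) {
    float res = 1.0;
    float t = 0.1;  // start a bit off the surface to avoid self-shadowing
    for (int i = 0; i < 32; i++) {
        float d = dist(p + t*lightDir);
        if (d < DIST_EPSILON) return 0.0;  // fully occluded
        res = min(res, 8.0*d/t);           // penumbra estimate: how closely we grazed
        t += d;
    }
    return clamp(res, 0.0, 1.0);
}
```

The returned factor then simply scales the diffuse and specular terms.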
Camera
There is a piece of code that finds the primary ray direction, based on the camera position/direction and the position of the current pixel (fragment) we are raytracing.
Finally, there is the main shader function, which does the camera animation/setup, then calls the shading function to get the color of the corresponding pixel and passes it outside.
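A typical shape of these two pieces, sketched with assumed names and parameter values (only `render`, `iResolution` and `iTime` come from the text and the ShaderToy environment; the rest is illustrative):

```glsl
vec3 cameraRay(vec2 fragCoord, vec3 camPos, vec3 lookAt, float zoom) {
    // normalized, aspect-corrected screen coordinates
    vec2 uv = (2.0*fragCoord - iResolution.xy)/iResolution.y;
    // orthonormal camera basis
    vec3 fwd = normalize(lookAt - camPos);
    vec3 right = normalize(cross(vec3(0.0, 1.0, 0.0), fwd));
    vec3 up = cross(fwd, right);
    return normalize(uv.x*right + uv.y*up + zoom*fwd);
}

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    // orbit the camera around the tree (parameters are illustrative)
    float a = 0.3*iTime;
    vec3 camPos = vec3(8.0*cos(a), 4.0, 8.0*sin(a));
    vec3 rayDir = cameraRay(fragCoord, camPos, vec3(0.0, 2.0, 0.0), 1.8);
    fragColor = vec4(render(camPos, rayDir), 1.0);
}
```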
If we just "randomize" the RGB color values in a naive way, chances are that there will be a bunch of non-jolly baubles, which arguably would bring a way less joy and happiness into the house:
A way of avoiding "bad" colors is to operate in some color space, where one has more control (in respect to the particular goal at hand).
For example, we can use the YIQ color space and the observation that in this space the "bad" (too bleak, too dark etc.) colors happen mostly towards the middle of the IQ plane, so we'd better avoid the middle. With some tweaking, one can achieve a "randomized" mapping from a one-dimensional value to a color scale:
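A sketch of such a mapping (the YIQ-to-RGB matrix is the standard one; the hash, the fixed luma and the ring radius are illustrative values, not the original shader's):

```glsl
vec3 jollyColor(float matID) {
    // a cheap hash to get a pseudo-random angle per bauble
    float h = fract(sin(matID*12.9898)*43758.5453);
    float ang = 6.2831853*h;
    // stay on a ring of the IQ plane, away from the bleak middle
    float y = 0.5;              // fixed luma
    float i = 0.55*cos(ang);
    float q = 0.55*sin(ang);
    // standard YIQ -> RGB conversion
    return clamp(vec3(
        y + 0.956*i + 0.621*q,
        y - 0.272*i - 0.647*q,
        y - 1.106*i + 1.703*q), 0.0, 1.0);
}
```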
Of course, a similar result could be achieved by using, for example, the HSV color space, with constrained saturation/brightness and a random hue.
The problem, in a general sense, is to come up with some custom color space transformation (possibly non-linear) and tune its parameters. The latter could be done algorithmically, based on a specified palette of desired (or undesired) colors.
It looks jolly enough as it is, though. So let's leave the improvement as an exercise to the curious reader :)
Shader code obfuscation
A few words about how obfuscation of the code into the "fir tree" shape was done.
I hacked together a little script that takes the original fragment shader and first runs it through the C preprocessor to get rid of all the "#define" statements:
cpp -P tree.frag > tree1.frag
Then, Shader Minifier was used to compact the code as much as possible.
After that, the line breaks were "creatively" inserted (manually), so that the code takes the shape of three big triangles and a little rectangle at the bottom; then a simple Python one-liner was run to center all the lines and get the actual fir tree shape.
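The centering step could be as simple as this (a hypothetical reconstruction; the original one-liner is not shown in the post):

```python
# center every line of the shader to the width of the longest one
def center_lines(text):
    lines = [l.rstrip() for l in text.splitlines()]
    width = max(len(l) for l in lines)
    return "\n".join(l.center(width).rstrip() for l in lines)

# e.g. applied to a tiny "tree":
print(center_lines("*\n***\n*****"))
```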
Shader Validator is a useful tool to automatically check that the shader still compiles while doing the "creative" inserting of the line breaks:
glslangValidator -d tree3.frag
Theoretically, it should be possible (and would be way cooler) to do the line break insertion automatically, taking, for example, a template image like this:
as the guideline, and trying to insert the line breaks such that the code still compiles while the overall code shape follows the template as closely as possible. It sounds like an interesting combinatorial problem... which is left as another exercise to the reader :)
Finally, it was syntax highlighted via Pygments, using the "friendly" theme, which is conveniently green-ish:
A couple of weeks ago, I set myself the goal of coming up with an obfuscated piece of code that does something Christmas'y (and, as a bonus, mind-boggling). Figuring out how those magic shaders on ShaderToy work, and then writing one myself, turned out to be a suitable and rewarding challenge.
Many of the techniques employed in those shaders originally come from computer graphics academia; they were then picked up by the demoscene crowd and made into something cool-looking, just for fun.
But it often happens that such ideas, albeit not widely applicable in practice at first, eventually find their way into the mainstream. So let's wait and see how soon game engines will be using raytracing for rendering.
And even if they don't, the fundamentals of these techniques are still useful.
One definitely interesting topic is procedural content creation, with all its flaws and benefits. As in the case of the Christmas tree shader, our scene is just one big math formula with a list of parameters that need to be artistically tweaked. This tweaking can be tedious, non-intuitive and frustrating (in no small part due to the model itself being potentially flawed). Here's a screenshot from Fragmentarium with just a fraction of the sliders: