Introduction: From Math to Light

In the previous lesson, you completed a deep dive into the vec3 class and gained a solid understanding of vector mathematics. You learned how vectors represent positions, directions, and colors, and you explored operations like addition, scalar multiplication, dot products, and cross products. You saw how the same vec3 class serves multiple purposes through type aliases like point3 and color. That lesson gave you the mathematical toolkit you'll need for everything that follows.

Now it's time to put that toolkit to work. In this lesson, we're making the crucial transition from mathematical foundations to actual ray tracing. We're going to create our first rays, build a simple virtual camera, and cast those rays into a scene. By the end of this lesson, you'll generate your first "traced" image — a beautiful gradient background that simulates a sky. While this might seem simple compared to rendering complex 3D objects, it represents a fundamental milestone: you'll be casting rays through pixels and determining colors based on those rays, which is the core mechanism of ray tracing.

The outcome of this lesson is concrete and visual. You'll implement a ray class that represents a ray with an origin and direction. You'll set up a virtual camera with specific parameters like aspect ratio, viewport dimensions, and focal length. You'll write code that casts a ray from the camera through each pixel of your image. And you'll implement a ray_color() function that returns a gradient color based on the ray's direction, creating a smooth transition from white at the bottom to blue at the top — just like a real sky.

This lesson is where ray tracing truly begins. Everything before this was preparation; everything after this will build on what you create today. The rays you define here will eventually intersect with spheres, planes, and other objects. The camera you build will evolve to support different viewing angles and perspectives. The color function will grow more sophisticated to handle lighting, shadows, and reflections. But it all starts with these fundamentals: rays, a camera, and a simple way to determine color. Let's begin by understanding what a ray really is.

Understanding Rays: The Math Behind P(t) = A + tb

At the heart of ray tracing is a deceptively simple mathematical concept: the ray. A ray is defined by two pieces of information — an origin point and a direction vector. Together, these two pieces let us describe an infinite line that starts at the origin and extends forever in the specified direction. In mathematical notation, we express a ray as a function of a parameter t:

P(t) = A + t·b

Let's break down what each part of this equation means. A is the origin of the ray, a point in 3D space represented as a point3 (which, as you know, is just a vec3). This is where the ray begins. b is the direction vector, also a vec3, which tells us which way the ray is pointing. The parameter t is a real number that lets us move along the ray. When t is zero, P(0) = A + 0·b = A, so we're at the origin. When t is one, P(1) = A + b, so we've moved one unit of the direction vector away from the origin. When t is two, we've moved two units, and so on.

The beauty of this formulation is that by varying t, we can reach any point along the ray. If t is positive, we move forward from the origin in the direction of b. If t is negative (though we typically don't use negative values in ray tracing), we'd move backward. This parameterization gives us a way to "walk" along the ray and check for intersections with objects in the scene. When we ask, "Does this ray hit that sphere?" we're really asking, "Is there some value of t where P(t) lies on the surface of the sphere?"

Building the Ray Class

Now that you understand what a ray is mathematically, let's implement it in code. We'll create a ray class that encapsulates the origin and direction and provides methods to work with them. This class will be simple — much simpler than the vec3 class — because a ray is conceptually simpler. It's just a container for two vectors and a way to evaluate the ray equation.

Create a new file called ray.h in your src directory. This header file will define our ray class. Let's start with the include guards and necessary includes:
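A minimal sketch of how ray.h might begin (the guard name RAY_H is one common convention — match whatever style your vec3.h already uses):

```cpp
#ifndef RAY_H
#define RAY_H

#include "vec3.h"   // provides vec3 and the point3 alias from the previous lesson

// ... the ray class definition will go here ...

#endif // RAY_H
```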

We need to include vec3.h because our ray will use vec3 objects for both the origin and direction. The include guards prevent multiple inclusion, just like in vec3.h.

Now let's define the class itself. The class has two private data members: the origin and the direction. We'll call them orig and dir:
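Here is a sketch of the class consistent with that description. The member names orig and dir match the text; the accessor names origin(), direction(), and at() follow a common convention for this kind of ray class, so treat them as one reasonable choice rather than the only one:

```cpp
class ray {
  public:
    ray() {}
    ray(const point3& origin, const vec3& direction)
        : orig(origin), dir(direction) {}

    // Read-only access to the two pieces that define the ray.
    point3 origin() const    { return orig; }
    vec3   direction() const { return dir; }

    // Evaluate P(t) = A + t*b: the point t units along the ray.
    point3 at(double t) const {
        return orig + t * dir;
    }

  private:
    point3 orig;  // A: where the ray starts
    vec3   dir;   // b: which way it points
};
```

With this in place, `r.at(2.0)` gives you the point two direction-lengths from the origin, which is exactly the "walking along the ray" idea from the previous section.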

Let's walk through this implementation piece by piece. The class has two constructors. The first is a default constructor that takes no arguments: ray() {}. This creates an uninitialized ray, which isn't particularly useful, but it's good practice to provide a default constructor so you can create arrays of rays or use rays in contexts where default construction is required.

Creating a Virtual Camera

Now that we have a way to represent rays, we need to create rays that correspond to pixels in our image. This is where the virtual camera comes in. The camera is our viewpoint into the 3D scene — it determines what we see and from what perspective. In this lesson, we'll implement a very simple camera model, sometimes called a pinhole camera, which is the most basic camera model used in computer graphics.

Before we write any code, let's understand the concept. Imagine you're looking through a window at a scene outside. Your eye is at a specific position (the camera position), and the window is the viewport — a rectangular region through which you see the world. Each point on the window corresponds to a direction you could look. If you look through the center of the window, you're looking straight ahead. If you look through the top-left corner, you're looking up and to the left. The window itself is positioned at some distance from your eye, which we call the focal length.

In our virtual camera, we'll set up a similar arrangement. The camera will be positioned at the origin of our coordinate system, at point (0, 0, 0). The viewport will be a rectangle positioned in front of the camera, perpendicular to the direction the camera is looking. We'll define the viewport's dimensions (width and height) and its distance from the camera (focal length). Then, for each pixel in our image, we'll calculate which point on the viewport that pixel corresponds to, and we'll create a ray from the camera position through that viewport point.

Let's define the parameters we need. First, we need to decide on an aspect ratio for our image. The aspect ratio is the ratio of width to height. Modern widescreen displays typically use a 16:9 aspect ratio, so let's use that:
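A minimal sketch of that constant (the name aspect_ratio is an assumption, chosen to match how the text refers to it):

```cpp
// Desired aspect ratio of the final image: width divided by height.
// 16:9 is the common widescreen ratio used throughout this lesson.
const double aspect_ratio = 16.0 / 9.0;
```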

Next, we need to choose an image width in pixels. The height will be calculated from the width and aspect ratio. Let's use 400 pixels wide, which will give us a reasonably sized image without taking too long to render:
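In code, this might look as follows (aspect_ratio is repeated here so the snippet stands alone; in your program it would be defined once, as in the previous step):

```cpp
const double aspect_ratio = 16.0 / 9.0;

// Image dimensions in pixels. The height is derived from the width,
// so the image always matches the chosen aspect ratio.
const int image_width  = 400;
const int image_height = static_cast<int>(image_width / aspect_ratio);  // 225
```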

We calculate the height by dividing the width by the aspect ratio. The static_cast<int> converts the result from a double to an integer, which is necessary because image dimensions must be whole numbers. With a width of 400 and an aspect ratio of 16/9, the height will be 225 pixels.

Now let's define the viewport dimensions. The viewport is measured in world space units, not pixels. We'll choose a viewport height of 2.0 units, which is arbitrary but convenient. The viewport width is calculated from the height and aspect ratio, just like the image dimensions:
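A sketch of those parameters, including the focal length mentioned earlier (the value 1.0 for focal_length is an assumption — a common, convenient choice — and aspect_ratio is repeated so the snippet stands alone):

```cpp
const double aspect_ratio = 16.0 / 9.0;

// Viewport dimensions in world-space units, not pixels.
const double viewport_height = 2.0;                             // arbitrary but convenient
const double viewport_width  = aspect_ratio * viewport_height;  // ~3.556 units
const double focal_length    = 1.0;                             // camera-to-viewport distance
```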

Casting Rays: From Camera Through Pixels

Now that we have our camera set up, we need to cast rays from the camera through each pixel of our image. This is where we connect the discrete world of pixels (our output image) with the continuous world of 3D space (our scene). Each pixel in the image corresponds to a small region of the viewport, and we'll cast a ray through the center of that region.

The process involves iterating through every pixel in the image using nested loops, calculating the position of that pixel on the viewport, and constructing a ray from the camera origin through that viewport position. Let's walk through this step by step.

First, we need to understand how to map pixel coordinates to viewport coordinates. Our image has discrete pixel positions: (0, 0) for the top-left pixel, (image_width-1, 0) for the top-right pixel, (0, image_height-1) for the bottom-left pixel, and so on. We need to convert these discrete positions into continuous coordinates on the viewport.

We'll use normalized coordinates called u and v. The u coordinate represents the horizontal position, ranging from 0.0 at the left edge to 1.0 at the right edge. The v coordinate represents the vertical position, ranging from 0.0 at the bottom edge to 1.0 at the top edge. For a pixel at position (i, j), we calculate:
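One way to write this mapping is as a pair of small helpers (written as functions here for clarity; in the render loop you would typically compute u and v inline). Note the flip in the v calculation: j counts downward from the top row of the image, but v grows upward, which is an assumption about how you index rows:

```cpp
// i runs 0..image_width-1, left to right.
double pixel_u(int i, int image_width) {
    return double(i) / (image_width - 1);           // 0.0 at left edge, 1.0 at right
}

// j runs 0..image_height-1, top to bottom, but v grows upward,
// so flip j before normalizing.
double pixel_v(int j, int image_height) {
    return double(image_height - 1 - j) / (image_height - 1);  // 0.0 at bottom, 1.0 at top
}
```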

We divide by image_width - 1 and image_height - 1 rather than by image_width and image_height because we want the coordinates to reach exactly 1.0 at the last pixel. If we divided by image_width, the rightmost pixel would have u = (image_width-1) / image_width, which is slightly less than 1.0. By dividing by image_width - 1, we ensure that u ranges from 0.0 to 1.0 inclusive, and likewise for v.
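Putting the pieces together, the per-pixel loop might look like the sketch below. The camera vectors (horizontal, vertical, lower_left_corner) and the write_color helper are assumptions about how the surrounding program is organized, not code the lesson has shown yet:

```cpp
// Camera geometry: the viewport is centered on the z-axis, one focal
// length in front of the origin (the camera looks down the negative z-axis).
point3 origin(0, 0, 0);
vec3 horizontal(viewport_width, 0, 0);
vec3 vertical(0, viewport_height, 0);
point3 lower_left_corner = origin - horizontal/2 - vertical/2
                         - vec3(0, 0, focal_length);

for (int j = 0; j < image_height; ++j) {              // top row first
    for (int i = 0; i < image_width; ++i) {
        double u = double(i) / (image_width - 1);
        double v = double(image_height - 1 - j) / (image_height - 1);
        // A ray from the camera through this pixel's point on the viewport.
        ray r(origin, lower_left_corner + u*horizontal + v*vertical - origin);
        color pixel_color = ray_color(r);
        write_color(std::cout, pixel_color);          // emit one output pixel
    }
}
```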

Adding Color with ray_color()

Now we need to implement the ray_color() function, which takes a ray and returns a color. Since we don't have any objects in our scene yet, we can't calculate intersections or lighting. Instead, we'll create a simple background gradient that varies based on the ray's direction. This will give us a pleasant sky-like appearance and demonstrate how ray direction can be used to determine color.

The idea is to create a gradient that transitions from white at the bottom of the image to blue at the top, simulating a simple sky. We'll base this gradient on the y-component of the ray's direction. Rays pointing downward (negative y) will be white, rays pointing upward (positive y) will be blue, and rays pointing horizontally will be somewhere in between.

Here's the implementation:
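A sketch consistent with the description below — the exact blue tint (0.5, 0.7, 1.0) is an assumption, chosen because it gives a pleasant sky color; any light blue works:

```cpp
color ray_color(const ray& r) {
    // Direction as a unit vector, so its y-component is in [-1, 1].
    vec3 unit_dir = unit_vector(r.direction());

    // Remap y from [-1, 1] to [0, 1] to use as a blend factor.
    double t = 0.5 * (unit_dir.y() + 1.0);

    // Linear blend: white at t = 0 (looking down), light blue at t = 1 (looking up).
    return (1.0 - t) * color(1.0, 1.0, 1.0) + t * color(0.5, 0.7, 1.0);
}
```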

Let's break down what this function does. First, we normalize the ray's direction to get a unit vector:
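As a single line (the name unit_dir matches how the rest of this section refers to it):

```cpp
vec3 unit_dir = unit_vector(r.direction());
```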

Normalizing the direction ensures that the y-component ranges from -1.0 to 1.0, regardless of the original direction's magnitude. This makes our gradient calculation consistent. The unit_vector() function, which you learned about in the previous lesson, divides the vector by its length to produce a vector of length 1 pointing in the same direction.

Next, we map the y-component from the range [-1, 1] to the range [0, 1]:
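In code, assuming the normalized direction is stored in unit_dir as above:

```cpp
double t = 0.5 * (unit_dir.y() + 1.0);
```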

When unit_dir.y() is -1.0 (pointing straight down), t becomes 0.5 * (-1.0 + 1.0) = 0.0. When unit_dir.y() is 1.0 (pointing straight up), t becomes 0.5 * (1.0 + 1.0) = 1.0. When unit_dir.y() is 0.0 (pointing horizontally), t becomes 0.5. This value will serve as our interpolation parameter.

Summary and Preparing for Practice

You've now built the core of your ray tracer: you learned the ray equation P(t) = A + t·b, implemented a simple ray class, and set up a virtual camera with a viewport and focal length. You mapped pixels to rays, cast those rays through the scene, and used the ray_color() function to create a smooth sky gradient based on ray direction. This process connects pixel positions to 3D space and forms the foundation of all ray tracing.

Next, you'll add objects to your scene and compute ray-object intersections, allowing you to render actual 3D shapes. The upcoming practice exercises will reinforce your understanding of rays, cameras, and color gradients, preparing you for more advanced ray tracing techniques.
