Suppose we are using an area light source. How should we combine the values obtained by casting multiple shadow rays toward the light source?
A. Take the minimum value.
B. Average the values.
C. Take the maximum value.
D. Take the most frequent value.
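A minimal sketch of the sample-averaging approach, assuming a square area light and a stubbed, hypothetical `is_occluded` visibility test (a real tracer would trace a shadow ray through the scene here):

```python
import random

def sample_area_light(center, half_size):
    """Pick a uniform random point on a square area light (2D for brevity)."""
    return (center[0] + random.uniform(-half_size, half_size),
            center[1] + random.uniform(-half_size, half_size))

def is_occluded(point, light_point):
    """Stub visibility test: pretend a blocker hides the half of the
    light with x < 0. A real tracer would trace a shadow ray here."""
    return light_point[0] < 0.0

def shadow_factor(point, light_center, half_size, n_samples=64):
    """Average the binary shadow-ray results over the light's area."""
    visible = sum(
        0 if is_occluded(point, sample_area_light(light_center, half_size)) else 1
        for _ in range(n_samples)
    )
    # The average is a fractional visibility in [0, 1]; values strictly
    # between 0 and 1 are what produce soft penumbrae.
    return visible / n_samples
```

Taking the minimum or maximum instead would snap every shading point to fully shadowed or fully lit, losing the penumbra entirely.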
If we were to define an equivalent scene in OpenGL and use the same Phong illumination model, what would doing our lighting (without shadows or reflections) in a ray-tracing framework be equivalent to in an OpenGL context?
A. Lighting in the vertex shader.
B. Lighting in the fragment shader.
C. Lighting on the host program, then sending the colors as an attribute.
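For intuition, here is a hedged sketch of the Phong model evaluated at a single shading point. A ray tracer runs this once per primary-ray hit, i.e. once per pixel, which is exactly the granularity of fragment-shader lighting; vertex-shader lighting would instead compute colors at the vertices and interpolate them across the triangle.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(c / length for c in v)

def phong(n, l, v, kd=0.8, ks=0.5, shininess=32):
    """Diffuse + specular Phong terms for one shading point.
    n: surface normal, l: direction to the light, v: direction to the viewer.
    The coefficients are illustrative placeholders."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    diffuse = max(dot(n, l), 0.0)
    # Reflect l about n: r = 2(n.l)n - l
    r = tuple(2.0 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = max(dot(r, v), 0.0) ** shininess
    return kd * diffuse + ks * specular
```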
We often specify different model matrices for different objects in our scene. When we use a ray tracer, we may want to use the absolute value of \( t \) as a distance if our ray is formulated as \( \vec{r} = \vec{r}_0 + t\vec{r}_d \). In this case, should we normalize \( \vec{r}_d \)? If so, where should the normalization happen? (Note that \( M \) is the model transformation for a specific object.)
A. Do not normalize.
B. Normalize after applying \( M^{-1} \).
C. Normalize after we find the world-space direction, and before we apply \( M^{-1} \).
D. Normalize after we find the world-space direction, and again after we apply \( M^{-1} \).
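The trade-off can be checked with a small sketch (assuming, for brevity, a diagonal, translation-free model matrix): normalize the world-space direction once, and after applying \( M^{-1} \) deliberately leave the direction unnormalized, so the same \( t \) measures world-space distance in both spaces.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(c / length for c in v)

def scale_vec(s, v):
    """Apply a diagonal (scale-only) matrix; stands in for M or M^{-1}."""
    return tuple(si * vi for si, vi in zip(s, v))

def intersect_unit_sphere(o, d):
    """Smallest positive t with |o + t d| = 1, or None on a miss."""
    a = dot(d, d)
    b = 2.0 * dot(o, d)
    c = dot(o, o) - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

# Object: unit sphere, scaled by 2 along x in world space (M = diag(2, 1, 1)).
inv_m = (0.5, 1.0, 1.0)

# World-space ray: normalize the direction ONCE here.
origin_ws = (5.0, 0.0, 0.0)
dir_ws = normalize((-2.0, 0.0, 0.0))

# Apply M^{-1}; do NOT renormalize the transformed direction.
origin_os = scale_vec(inv_m, origin_ws)
dir_os = scale_vec(inv_m, dir_ws)

t = intersect_unit_sphere(origin_os, dir_os)
# The world-space hit is at x = 2 (the scaled sphere's surface), a distance
# of 3 from the ray origin, and t comes out as exactly that distance.
```

If we renormalized `dir_os` as well, `t` would become an object-space distance (1.5 here instead of 3) and could no longer be compared across objects with different model matrices.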
In OpenGL, we cannot see objects in front of our \( z_{near} \) plane or behind our \( z_{far} \) plane. Which objects can we see, relative to the virtual screen, in ray tracing?
A. We can only see objects behind our virtual screen.
B. We can see objects on either side of our virtual screen.
C. We can only see objects in front of our virtual screen.
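A small sketch of the hit-selection step, assuming the common convention of keeping the nearest intersection with \( t \) greater than a small epsilon: the distance to the virtual screen never enters the test, so an object between the eye and the screen can still be hit.

```python
EPS = 1e-4  # small offset to avoid self-intersection at t = 0

def closest_hit(ts, eps=EPS):
    """Return the nearest intersection parameter with t > eps.
    Note that the virtual-screen distance appears nowhere in this test."""
    valid = [t for t in ts if t > eps]
    return min(valid) if valid else None

screen_dist = 1.0  # distance from the eye to the virtual screen
# One object between the eye and the screen (t = 0.5), one beyond it (t = 3).
nearest = closest_hit([0.5, 3.0])
# nearest = 0.5 < screen_dist: the object in front of the screen is visible.
```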
Can we define objects in a ray tracer without vertices?
A. Yes; for example, we can define a sphere implicitly by a center and a radius and intersect rays with it directly.
B. No, it does not make any sense to define an object without vertices.
C. No, if we cannot render the object using OpenGL, we cannot render the object in a ray tracer.
D. Yes, but we must implicitly evaluate a function and generate vertices.
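To make the vertex-free idea concrete, here is a hedged sketch of such an object: a sphere stored only as a center and a radius, intersected analytically by solving the quadratic \( |\vec{r}_0 + t\vec{r}_d - \vec{c}|^2 = R^2 \). No tessellation or vertex generation is ever needed.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect_sphere(origin, direction, center, radius):
    """Analytic ray-sphere intersection. The sphere is defined purely by
    (center, radius); there are no vertices anywhere in this object."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = dot(direction, direction)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None
```

OpenGL's rasterizer, by contrast, can only draw such a shape after it has been tessellated into triangles, which is why rasterizer limitations do not carry over to a ray tracer.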