Computer Science, asked by DanishKhan4643, 10 months ago

A significant problem with ray tracing (backwards ray tracing, or Whitted ray tracing, in particular) is that it samples only those properties of the surfaces that can be simulated by a single ray passing through a pixel and interacting with the scene. Much research has gone into improving on the basic model while keeping it computationally tractable. Compare recursive ray tracing and ray tracing, summarizing how each works, how each tries to remove aliasing effects from the Whitted model (i.e., what additional effects they allow, or what problems they can solve), how they remain computationally tractable, and their relative cost.

Answers

Answered by MortalDyanamo12

Answer:

Ray tracing makes a scene look more realistic by adding physically plausible shadows and reflections. Hardware-accelerated ray tracing is supported on Nvidia RTX cards, some 6 GB GTX cards, and AMD's Radeon Vega 56 and Vega 64. Ray tracing consumes a lot of VRAM (video RAM). It makes everything look very nice, and 4K ray tracing looks lovely, but the drawback is that you need an HDR-capable display with HDR settings turned on; otherwise the image appears very dark. Thank you.

An Overview of the Ray-Tracing Rendering Technique

Figure 1: "sliding" a raster image in front of the camera image plane.
Figure 2: a ray can hit or miss the scene geometry.

In the first lesson of this section, we already gave you a quick introduction to the concept of rendering in general, and to producing images of 3D scenes with ray-tracing in particular. We mentioned the ray-tracing technique in other lessons such as Rendering an Image of a 3D Scene and Rasterization: a Practical Implementation, but the goal of those lessons was more to highlight the differences between rasterization and ray-tracing, which are the two most common frameworks (or rendering techniques) for producing images of 3D objects.

Shall we re-introduce the ray-tracing rendering technique one more time? You should already be familiar with the concept, but this lesson is the first in a series in which we will study techniques specific to ray-tracing, so it seems like a good idea to review one more time what the method is about, how it works, the basic principles of the technique, and what the main differences between rasterization and ray-tracing are.

Ray-tracing is all about rays, and rays are one of the main topics of this lesson. We will learn what they are and how they can be defined in a program. As we know by now, a raster image is made of pixels. One way of producing an image of a 3D scene is to somehow "slide" this raster image along the image plane of our virtual camera (figure 1) and shoot rays through each pixel in the image, to find which part of the scene each pixel covers. The way we do that is simply by casting a ray originating from the eye (the camera position) and passing through the centre of each pixel. We then find which object (or objects) from the scene these rays intersect. If a pixel "sees" something, it sees the object that is right in front of it in the direction pointed to by that ray. The ray direction, as we just mentioned, can simply be constructed by tracing a line from the camera's origin to the pixel centre and then extending that line into the scene (figure 2).
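Since everything starts with these primary rays, it is worth sketching how a function such as buildCameraRay (used in the pseudo-code below) might be implemented. What follows is only a minimal sketch, assuming a pinhole camera sitting at the world origin and looking down the negative z-axis; the imageWidth, imageHeight and fov variables are assumptions introduced for the example, not part of the original pseudo-code.

#include <cmath>

struct Vec3f { float x, y, z; };
struct Ray { Vec3f origin, direction; };

// Assumed globals (hypothetical): image resolution and the camera's
// vertical field of view in degrees.
const int imageWidth = 640, imageHeight = 480;
const float fov = 60;

// Build a ray from the eye (the origin) through the centre of pixel (i, j).
Ray buildCameraRay(int i, int j)
{
    float aspect = imageWidth / float(imageHeight);
    float scale = std::tan(fov * 0.5f * 3.14159265f / 180.0f);
    // remap the pixel centre from raster space to screen space [-1, 1]
    float x = (2 * (i + 0.5f) / imageWidth - 1) * aspect * scale;
    float y = (1 - 2 * (j + 0.5f) / imageHeight) * scale;
    // the ray direction goes from the eye through the pixel centre
    Vec3f dir = { x, y, -1 };
    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    return { { 0, 0, 0 }, { dir.x / len, dir.y / len, dir.z / len } };
}

Note that the y-coordinate is flipped (1 - 2 * ...) because raster rows grow downwards while camera-space y points up.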

Now that we know "what" a pixel sees, all we need to do is repeat this process for every pixel in the image. By setting each pixel's color to the color of the object intersected by the ray passing through its centre, we can form an image of the scene as seen from a particular viewpoint. Note that this method requires looping over all the pixels in the image and casting a ray into the scene for each one of them. The second step, the intersection step, requires looping over all the objects in the scene to test whether a ray intersects any of them. Here is an implementation of this technique in pseudo-code:


// loop over all pixels
Vec3f *framebuffer = new Vec3f[imageWidth * imageHeight];
for (int j = 0; j < imageHeight; ++j) {
    for (int i = 0; i < imageWidth; ++i) {
        // build a ray from the eye through the centre of pixel (i, j)
        Ray ray = buildCameraRay(i, j);
        bool hit = false;
        // loop over all objects in the scene and test the ray against each one
        for (int k = 0; k < numObjectsInScene; ++k) {
            if (intersect(ray, objects[k])) {
                // do complex shading here, but for now keep it basic (just a constant color);
                // a real ray tracer would keep the nearest intersection, not the last one tested
                framebuffer[j * imageWidth + i] = objects[k].color;
                hit = true;
            }
        }
        if (!hit) {
            // the ray missed every object: leave the pixel black,
            // or set it to the background color
            framebuffer[j * imageWidth + i] = backgroundColor;
        }
    }
}

Note that some of the rays might not intersect any geometry at all. For example, if you look at figure 2, one of the rays does not intersect the sphere. In this particular case, we generally leave the pixel's color black or set it to any other color we want the background to be (the if (!hit) branch at the end of the code above).
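The intersect function called in the code above was left undefined. For the sphere in figure 2, here is a minimal sketch of what it could look like, reusing the Vec3f and Ray types from the earlier sketch; the Sphere type and its members are assumptions introduced for the example. The test solves |O + tD - C|^2 = r^2 for t and reports a hit only when a solution exists in front of the ray origin (t > 0).

#include <cmath>

struct Sphere { Vec3f center; float radius; Vec3f color; };

float dot(const Vec3f &a, const Vec3f &b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

bool intersect(const Ray &ray, const Sphere &sphere)
{
    // vector from the sphere centre to the ray origin
    Vec3f L = { ray.origin.x - sphere.center.x,
                ray.origin.y - sphere.center.y,
                ray.origin.z - sphere.center.z };
    // coefficients of the quadratic a*t^2 + b*t + c = 0
    float a = dot(ray.direction, ray.direction);
    float b = 2 * dot(ray.direction, L);
    float c = dot(L, L) - sphere.radius * sphere.radius;
    float disc = b * b - 4 * a * c;
    if (disc < 0) return false;                  // the ray misses the sphere
    float t = (-b - std::sqrt(disc)) / (2 * a);  // nearest root first
    if (t < 0) t = (-b + std::sqrt(disc)) / (2 * a);
    return t > 0;                                // hit only if in front of the eye
}

A production ray tracer would also return t, so that when a ray crosses several objects the shader can keep the nearest hit rather than whichever object happens to be tested last.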

