Discussion: Kirk vs. Slusallek

Kirk: You missed my point, perhaps intentionally. Rasterization-specific hardware now accounts for less than 5% of GPU core area; most of the silicon is devoted to instruction processing, memory access, and floating-point computation. Given that a GeForce 6800 has 10-20x the floating-point throughput of the Opteron system you describe, you are a poor programmer if you cannot make a ray tracer run at least twice as fast on the GPU as on the CPU.

There are no barriers to writing a ray tracer on a GPU, except perhaps in your mind. The triangle database can be kept in GPU memory as texture data (textures are simply structured arrays), and multiple triangles can be accessed through longer shader programs. Although current GPU memory is limited to 256-512 MB, the root of the geometry hierarchy can be kept resident, with the detail (leaf nodes) kept in system memory and on disk. In your example of ray tracing 30 GB of triangle data, you are clearly using hierarchy or instancing to create a 350-million-polygon database, since in your 2-3 seconds you do not have time to read that volume of data from disk.
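
To make the "textures as structured arrays" idea concrete, here is a minimal sketch in CUDA (an API that postdates this exchange; Kirk is talking about pixel shaders, but the data layout is the same idea). All names are hypothetical: the triangle list is a flat array playing the role of a texture, and each thread brute-forces one ray against every triangle where a real tracer would walk a hierarchy.

```cuda
#include <cuda_runtime.h>
#include <cfloat>

struct Ray { float3 o, d; };   // origin, direction

__device__ float3 sub3(float3 a, float3 b) { return make_float3(a.x - b.x, a.y - b.y, a.z - b.z); }
__device__ float3 cross3(float3 a, float3 b) {
    return make_float3(a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x);
}
__device__ float dot3(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Moeller-Trumbore ray/triangle intersection; returns hit distance or FLT_MAX.
__device__ float hitTriangle(Ray r, float3 v0, float3 v1, float3 v2) {
    float3 e1 = sub3(v1, v0), e2 = sub3(v2, v0);
    float3 p  = cross3(r.d, e2);
    float det = dot3(e1, p);
    if (fabsf(det) < 1e-8f) return FLT_MAX;       // ray parallel to triangle
    float inv = 1.0f / det;
    float3 t  = sub3(r.o, v0);
    float u   = dot3(t, p) * inv;
    if (u < 0.0f || u > 1.0f) return FLT_MAX;     // outside barycentric range
    float3 q  = cross3(t, e1);
    float v   = dot3(r.d, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return FLT_MAX;
    float d   = dot3(e2, q) * inv;
    return d > 0.0f ? d : FLT_MAX;                // hits behind the origin don't count
}

// 'verts' holds three vertices per triangle -- the flat "structured array"
// layout that a texture provides to a shader.
__global__ void intersectScene(const float3* verts, int numTris,
                               const Ray* rays, int numRays, float* nearest) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numRays) return;
    Ray r = rays[i];
    float best = FLT_MAX;
    for (int t = 0; t < numTris; ++t)             // the "longer shader program"
        best = fminf(best, hitTriangle(r, verts[3 * t], verts[3 * t + 1], verts[3 * t + 2]));
    nearest[i] = best;
}
```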

By the way, ray tracing is not a new idea. Turner Whitted's original ray tracing paper was published in 1980, and most of the algorithmic innovation in the technique happened in the late '80s and early '90s. The most interesting recent advance is path tracing, which casts many more rays to compute a global-illumination (light inter-reflection) result. Several universities have written path tracers for GPUs that run on extremely large databases.
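
For a sense of what path tracing adds over Whitted-style tracing, here is a sketch of the core loop, in the same hedged spirit as above: trace() and sampleDiffuse() are hypothetical scene helpers assumed to be provided elsewhere, not any real API. Instead of spawning one deterministic reflection ray per hit, the tracer follows randomized bounces and averages many such paths per pixel, which is where the inter-reflection term comes from.

```cuda
#include <cuda_runtime.h>
#include <curand_kernel.h>

struct Ray { float3 o, d; };                     // as in the previous sketch
struct Hit { float3 position, normal, albedo, emission; };

// Hypothetical helpers, assumed provided by scene code elsewhere:
__device__ bool   trace(Ray r, Hit* h);                           // nearest hit, if any
__device__ float3 sampleDiffuse(float3 normal, curandState* rng); // random bounce direction

__device__ float3 mul3(float3 a, float3 b) { return make_float3(a.x*b.x, a.y*b.y, a.z*b.z); }
__device__ float3 add3(float3 a, float3 b) { return make_float3(a.x+b.x, a.y+b.y, a.z+b.z); }

// One path: follow up to 4 bounces, accumulating emitted light scaled by
// how much the surfaces along the way attenuate it.
__device__ float3 shadePath(Ray r, curandState* rng) {
    float3 throughput = make_float3(1.f, 1.f, 1.f);
    float3 radiance   = make_float3(0.f, 0.f, 0.f);
    for (int bounce = 0; bounce < 4; ++bounce) {
        Hit h;
        if (!trace(r, &h)) break;                // ray escaped the scene
        radiance   = add3(radiance, mul3(throughput, h.emission));
        throughput = mul3(throughput, h.albedo); // attenuate by surface color
        r.o = h.position;
        r.d = sampleDiffuse(h.normal, rng);      // randomized bounce
    }
    return radiance;   // average many of these per pixel to reduce noise
}
```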

Slusallek: Well, it seems that getting a bit controversial works better than expected :-)

We fully agree that the massive parallelism in GPUs and similar hardware is a great way of achieving high raw performance. However, we also know that the specific architecture of the hardware is an important factor in determining how well that raw performance can be leveraged for particular applications and algorithms.

I have the greatest respect for the research and development that resulted in the GPUs we have today. However, from the results that we and everyone else I have talked to are getting from implementing ray tracing on GPUs, I conclude that the current hardware is not well suited to this sort of application. That might change with future implementations (and maybe better programmers :-), but I do see some general and tough architectural issues that need to be solved to make this work well.

The Boeing model does not use instancing at all! It contains roughly 350 million separately stored triangles, which we load on demand, and with some of the outside views we see working sets of several gigabytes. The key to rendering such a model is proper memory management, which is already non-trivial on a CPU. Having to deal with the added complexity of a separate GPU, with graphics memory separate from main memory and only limited means of communication with the CPU and, ultimately, the disks, makes this approach much harder.
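
To illustrate the bookkeeping Slusallek is pointing at, here is a minimal sketch of demand loading against a fixed GPU-memory budget. This is a hypothetical scheme, not anything from his actual system: hierarchy leaves live in host memory or on disk, get staged to the GPU over the bus when a ray batch needs them, and the least recently used ones are evicted when the budget runs out.

```cuda
#include <cuda_runtime.h>
#include <list>
#include <unordered_map>

struct LeafCache {
    size_t capacity, used = 0;                 // GPU-memory budget in bytes
    std::list<int> lru;                        // most recently used at front
    struct Slot { void* dev; size_t bytes; std::list<int>::iterator it; };
    std::unordered_map<int, Slot> resident;    // leafId -> GPU copy

    explicit LeafCache(size_t cap) : capacity(cap) {}

    // Return the device pointer for a leaf, uploading (and evicting) as needed.
    void* fetch(int leafId, const void* hostData, size_t bytes) {
        auto it = resident.find(leafId);
        if (it != resident.end()) {            // hit: just refresh LRU order
            lru.splice(lru.begin(), lru, it->second.it);
            return it->second.dev;
        }
        while (used + bytes > capacity && !lru.empty()) {   // evict coldest leaves
            int victim = lru.back(); lru.pop_back();
            Slot& s = resident[victim];
            cudaFree(s.dev); used -= s.bytes;
            resident.erase(victim);
        }
        void* dev = nullptr;
        cudaMalloc(&dev, bytes);
        cudaMemcpy(dev, hostData, bytes, cudaMemcpyHostToDevice);  // bus transfer
        lru.push_front(leafId);
        resident[leafId] = { dev, bytes, lru.begin() };
        used += bytes;
        return dev;
    }
};
```

Every miss in this cache costs a bus transfer before tracing can proceed, which is why a working set several times larger than graphics memory hurts a split CPU/GPU design far more than a CPU-only one.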

Due to the many advantages of ray tracing, I believe this rendering algorithm is the right choice for future interactive graphics - and if we can run it well on a GPU, I will be the first to use it.
