NVIDIA GeForce 8800 series video cards


To begin with, NVIDIA installed the G80 on 2 video cards: GeForce 8800 GTX and GeForce 8800 GTS.

Specifications of GeForce 8800 series video cards

                                              GeForce 8800 GTX         GeForce 8800 GTS
  Number of transistors                       681 million              681 million
  Core frequency (dispatcher, TMUs, ROPs)     575 MHz                  500 MHz
  Shader (stream processor) frequency         1350 MHz                 1200 MHz
  Number of stream processors                 128                      96
  Memory frequency                            900 MHz (1.8 GHz eff.)   800 MHz (1.6 GHz eff.)
  Memory interface                            384-bit                  320-bit
  Memory bandwidth                            86.4 GB/s                64 GB/s
  Number of ROP units                         24                       20
  Memory capacity                             768 MB                   640 MB
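The bandwidth figures in the table follow directly from the bus width and the effective memory clock. A quick illustrative sketch in Python, using the numbers from the table:

```python
def mem_bandwidth_gb_s(bus_bits: int, effective_mhz: int) -> float:
    """Bandwidth = (bus width in bytes) x (effective transfer rate)."""
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

print(mem_bandwidth_gb_s(384, 1800))  # GeForce 8800 GTX -> 86.4 GB/s
print(mem_bandwidth_gb_s(320, 1600))  # GeForce 8800 GTS -> 64.0 GB/s
```

The same formula reproduces the figures for any card in the series once the bus width and effective memory clock are known.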

As you can see, the GeForce 8800 GTX and 8800 GTS have the same transistor count because they use exactly the same G80 GPU. As already mentioned, the main difference between the two variants is two disabled banks of stream processors, 32 shaders in total, which reduces the number of working shader units from 128 in the GeForce 8800 GTX to 96 in the GeForce 8800 GTS. NVIDIA also disabled one ROP partition, cutting the ROP count from 24 to 20.

The core and memory frequencies of the two cards also differ slightly: the core of the GeForce 8800 GTX runs at 575 MHz versus 500 MHz for the GeForce 8800 GTS, and their shader units run at 1350 MHz and 1200 MHz respectively. The GeForce 8800 GTS also gets a narrower 320-bit memory interface and 640 MB of slower memory running at 800 MHz, while the GeForce 8800 GTX has a 384-bit interface and 768 MB of 900 MHz memory. And, of course, a completely different price.

The video cards themselves are very different:


As you can see in these photos, the GeForce 8800 reference boards are black (a first for NVIDIA). With their cooling modules attached, both the GeForce 8800 GTX and 8800 GTS occupy two slots. The GeForce 8800 GTX is somewhat longer: 267 mm versus 229 mm for the GeForce 8800 GTS, and, as noted earlier, it has 2 PCIe power connectors. Why 2? The maximum power consumption of the GeForce 8800 GTX is 177 W. NVIDIA notes, however, that this is a worst case reached only when every functional unit of the GPU is fully loaded; in normal games during testing the card consumed 116 to 120 W on average, with a maximum of 145 W.

Each external PCIe power connector on the card is rated for a maximum of 75 W, and the PCIe slot supplies at most another 75 W, so a single connector plus the slot delivers only 150 W, which is not enough for a 177 W peak. Hence the second external PCIe power connector, which raises the total budget to 225 W and gives the 8800 GTX a solid power reserve. The maximum power consumption of the 8800 GTS, by the way, is 147 W, so it gets by with one PCIe power connector.
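The connector arithmetic can be sketched in a few lines, using the 75 W per-source limits and the 177 W peak figure quoted above:

```python
SLOT_W = 75        # max power delivered through the PCIe slot itself
CONNECTOR_W = 75   # max power per external 6-pin PCIe connector

def power_budget(n_connectors: int) -> int:
    """Total power available to the card from slot plus connectors."""
    return SLOT_W + n_connectors * CONNECTOR_W

peak_gtx = 177  # worst-case draw of the GeForce 8800 GTX
print(power_budget(1) >= peak_gtx)  # False: 150 W is not enough
print(power_budget(2) >= peak_gtx)  # True: 225 W leaves a healthy reserve
```

The same check explains the 8800 GTS: with a 147 W peak, one connector plus the slot (150 W) just covers it.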

Another feature added to the design of the GeForce 8800 GTX reference board is a second SLI connector - a first for an NVIDIA GPU. NVIDIA has not officially announced anything about the purpose of the second SLI connector, but journalists were able to obtain the following information from the developers: “The second SLI connector on the GeForce 8800 GTX is designed for hardware support for possible expansion of the SLI configuration. With current drivers, only one SLI connector is used. Users can connect an SLI bridge to both the first and second contact groups.”

Based on this, and the fact that nForce 680i SLI motherboards come with three PCI Express (PEG) slots, we can conclude that NVIDIA plans to support three SLI video cards in the near future. Another option could be to increase the power for SLI physics, but this does not explain why the GeForce 8800 GTS does not have a second SLI connector.

It can be assumed that NVIDIA is reserving its GX2 “Quad SLI” technology for the less powerful GeForce 8800 GTS, while the more powerful GeForce 8800 GTX will operate in a triple SLI configuration.

If you remember, the original Quad SLI video cards from NVIDIA were closer in characteristics to the GeForce 7900 GT than to the GeForce 7900 GTX, since the 7900 GT has lower power consumption and heat output. It is quite natural to assume that NVIDIA will follow the same path with the GeForce 8800. Gamers with motherboards with three PEG slots will be able to speed up the graphics subsystem by assembling a triple SLI 8800 GTX configuration, which, judging by the 8800 GTS specifications, should in some cases outperform a Quad SLI system.

Again, this is just a guess.

The cooling unit of the GeForce 8800 GTS and 8800 GTX is a dual-slot, ducted design that exhausts hot air from the GPU out of the computer case. The cooler consists of a large aluminum heatsink, copper and aluminum heat pipes, and a copper plate pressed against the GPU. The whole assembly is blown through by a large radial fan that looks a little intimidating but is actually quite quiet. The cooling system of the 8800 GTX is similar to that of the 8800 GTS, only with a slightly longer heatsink.


Overall, the new cooler handles the GPU quite well while remaining almost silent, much like the coolers of the GeForce 7900 GTX and 7800 GTX 512MB, although the GeForce 8800 GTS and 8800 GTX are slightly more audible. Even so, in many cases you will need to listen carefully to pick out the fan noise at all.

Production

All production of the GeForce 8800 GTX and 8800 GTS is carried out under contract to NVIDIA. This means that whether you buy a graphics card from ASUS, EVGA, PNY, XFX or any other vendor, they are all made by the same company. NVIDIA does not even allow its partners to overclock the first batches of GeForce 8800 GTX and GTS cards: they all go on sale at the same clock speeds regardless of the brand. Partners are, however, allowed to install their own cooling systems.

For example, EVGA has already released its e-GeForce 8800 GTX ACS3 Edition with its unique ACS3 cooler. The ACS3 card is encased in a single large aluminum cocoon bearing the letters E-V-G-A. For additional cooling, EVGA placed an extra heatsink on the back of the board, directly opposite the G80 GPU.

In addition to cooling, manufacturers of the first GeForce 8800 video cards can only customize their products with warranties and packaging - games and accessories. For example, EVGA bundles its video cards with the Dark Messiah game, and the GeForce 8800 GTS BFG video card is sold with a BFG T-shirt and a mouse pad.

It will be interesting to see what happens next - many NVIDIA partners believe that for subsequent releases of GeForce 8800 video cards, NVIDIA restrictions will not be so strict, and they will be able to compete in overclocking.

Since all the cards come off the same assembly line, every GeForce 8800 supports two dual-link DVI outputs with HDCP. In addition, it has become known that NVIDIA does not plan to vary the memory sizes of the GeForce 8800 GTX and GTS (for example, a 256 MB GeForce 8800 GTS or a 512 MB 8800 GTX). At least for now, the standard configuration is 768 MB for the GeForce 8800 GTX and 640 MB for the GeForce 8800 GTS. NVIDIA also has no plans for an AGP version of the GeForce 8800 GTX/GTS.

Driver for 8800

NVIDIA has made several changes to the GeForce 8800 driver that deserve a mention. First of all, the traditional Coolbits overclocking utility has been removed in favor of NVIDIA nTune: if you want to overclock a GeForce 8800, you will need to download the nTune utility. This is arguably good news for owners of nForce-based motherboards, since nTune handles not only video card overclocking but system configuration as well. Everyone else, for example those who have upgraded to a Core 2 on a 975X or P965 board, will have to download a 30 MB application just to overclock the video card.

Another change we noticed in the new driver is that there is no longer an option to switch to the classic NVIDIA control panel. One would like to believe NVIDIA will bring this feature back, since many people preferred it to the new control panel interface.

NVIDIA video cards have traditionally been considered among the best on the market in terms of the combination of quality, performance and price, and this reputation was established long ago. It can be traced, in particular, through the example of the 8800 GT, which NVIDIA brought to market in 2007. Its impressive specifications and performance are among the main reasons this card remains in demand today, in Russia and abroad. So what is special about this graphics adapter?

General information about the device

The card performs best in a system where:

  • RAM modules with a capacity of 2 GB or more are installed;
  • the motherboard is comparable in characteristics to the ASUS P5B;
  • a fairly fast hard drive is present, for example a WD Caviar SE.

A PC with this configuration will also have optimal compatibility with an overclocked 8800 GT video card.

Summary

So, at the time of its release the GeForce 8800 GT was considered one of the best products in its market segment, above all for its combination of price and speed. The test results we reviewed indicate that NVIDIA's solution works more efficiently than the comparable product from AMD, the company's closest competitor on the global graphics adapter market.

The advantages of the 8800 GT largely explain its continued popularity in Russia. The card is quite capable of handling many modern games. Drivers, as noted above, are available for the most common operating systems: Windows 7, Windows 8, Linux. The card can now be had at minimal prices, though not from official dealers but from private sellers.

Again 128 strong California shooters, but with shortened spears (512 MB and a 256-bit bus)

Part 1: Theory and architecture

In the previous article, dedicated to the release of the new mid-range solution Nvidia GeForce 8800 GT based on the G92 chip, we mentioned that not all of that chip's ALU and TMU execution units were unlocked; some were waiting in the wings to be enabled in a video card at a different price level. And now that moment has come: Nvidia has announced an updated version of the GeForce 8800 GTS, which keeps the name of the earlier G80-based solution. The easiest way to tell it apart is by the amount of installed video memory: 512 megabytes, in contrast to the previous 320 MB and 640 MB options. Hence the model's name, GeForce 8800 GTS 512MB.

The new version of the GeForce 8800 GTS is based on the G92 chip already used in the GeForce 8800 GT, a card of the so-called upper mid-range, so we already know its main features and characteristics. Unlike the two GeForce 8800 GT models with recommended prices of $200 to $250 (which, by the way, do not correlate well with real prices at the moment), the new solution carries a recommended price of $349-399. The chip's distinguishing traits here are support for only a 256-bit memory bus but a larger number of unlocked universal execution units. Let's take a closer look at the new lower high-end solution from Nvidia...

Before reading this material, we recommend that you carefully read the basic theoretical materials DX Current, DX Next and Longhorn, which describe various aspects of modern hardware graphics accelerators and architectural features of Nvidia and AMD products.

These materials predicted the current situation with video chip architectures quite accurately, and many assumptions about future solutions proved justified. Detailed information about Nvidia's unified G8x/G9x architecture, using previous chips as examples, can be found in our earlier articles.

As we mentioned in the previous material, the G92 chip includes all the advantages of the G8x: a unified shader architecture, full support for DirectX 10, high-quality anisotropic filtering methods and the CSAA antialiasing algorithm with up to sixteen samples inclusive. Some chip blocks are slightly different from those in the G80, but the main change compared to the G80 is the 65 nm manufacturing technology, which has reduced production costs. Let's look at the characteristics of the GPU and new video solutions based on it:

Graphics accelerator Geforce 8800 GTS 512MB

  • Chip codename G92
  • 65 nm technology
  • 754 million transistors (more than G80)
  • Unified architecture with an array of shared processors for stream processing of vertices and pixels, as well as other types of data
  • Hardware support for DirectX 10, including shader model Shader Model 4.0, geometry generation and recording intermediate data from shaders (stream output)
  • 256-bit memory bus, four independent 64-bit wide controllers
  • Core frequency 650 MHz (Geforce 8800 GTS 512MB)
  • ALUs operate at more than double the frequency (1.625 GHz for GeForce 8800 GTS 512MB)
  • 128 scalar floating-point ALUs (integer and floating formats, IEEE 754 32-bit precision FP support, MAD+MUL without clock loss)
  • 64 texture addressing units with support for FP16 and FP32 components in textures
  • 64 bilinear filtering units (as in the G84 and G86, no free trilinear filtering or more efficient anisotropic filtering)
  • Possibility of dynamic branches in pixel and vertex shaders
  • 4 wide ROP blocks (16 pixels) with support for antialiasing modes up to 16 samples per pixel, including with FP16 or FP32 frame buffer format. Each block consists of an array of flexibly configurable ALUs and is responsible for generating and comparing Z, MSAA, and blending. Peak performance of the entire subsystem up to 64 MSAA samples (+ 64 Z) per clock, in Z only mode 128 samples per clock
  • Record results from up to 8 frame buffers simultaneously (MRT)
  • All interfaces (two RAMDAC, two Dual DVI, HDMI, HDTV) are integrated on the chip (unlike those placed on an external additional NVIO chip in the GeForce 8800)

GeForce 8800 GTS 512MB reference card specifications

  • Core frequency 650 MHz
  • Universal processor frequency 1625 MHz
  • Number of universal processors 128
  • Number of texture blocks 64, blending blocks 16
  • Effective memory frequency 1.94 GHz (2*970 MHz)
  • Memory type GDDR3
  • Memory capacity 512 megabytes
  • Memory bandwidth 62.1 gigabytes per second.
  • Theoretical maximum fill rate is 10.4 gigapixels per second.
  • Theoretical texture sampling speed up to 41.6 gigatexels per second.
  • Two DVI-I Dual Link connectors, supports output resolutions up to 2560x1600
  • SLI connector
  • PCI Express 2.0 bus
  • TV-Out, HDTV-Out, HDCP support
  • Recommended price $349-399
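The theoretical figures in the list above can be reproduced from the unit counts and clocks: 16 blending (ROP) units and 64 texture units at the 650 MHz core clock, and 256-bit GDDR3 at 2x970 MHz (note that 2x970 MHz on a 256-bit bus works out to about 62.1 GB/s). A minimal sketch:

```python
core_hz = 650e6                    # base (core) clock
rops, tmus = 16, 64                # blending units, texture sampling units
bus_bytes = 256 // 8               # 256-bit bus in bytes
eff_mem_hz = 2 * 970e6             # DDR: two transfers per memory clock

fill_gpix = core_hz * rops / 1e9       # pixel fill rate, Gpix/s
tex_gtex = core_hz * tmus / 1e9        # texel sampling rate, Gtex/s
bw_gb = bus_bytes * eff_mem_hz / 1e9   # memory bandwidth, GB/s

print(fill_gpix, tex_gtex, round(bw_gb, 1))  # 10.4 41.6 62.1
```

The same arithmetic applied to the older GTS 640MB (500 MHz, 20 ROPs, 320-bit bus at 1.6 GHz effective) shows why the two cards end up so close in fill rate and bandwidth.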

As the specifications show, the new GeForce 8800 GTS 512MB differs considerably from the old ones. The number of execution units (ALUs and TMUs) has grown, and the GPU frequency has increased significantly too, including the shader clock. Despite the narrower memory bus (256-bit versus 320-bit in the older versions), memory bandwidth stays practically the same because the memory clock was raised correspondingly. As a result, the new GTS has noticeably more shader processing power and a higher texture fetch rate, while fill rate and memory bandwidth remain about the same.

Because of the changed memory bus width, the memory capacity can no longer be 320 MB or 640 MB: only 256 MB, 512 MB or 1 GB are possible. The first value is too small and would clearly be insufficient for a card of this class, and the last is too large: the slight performance gain would hardly justify the higher price of such options (which may well appear in the future). Nvidia therefore chose the middle option, 512 MB, which, as our recent research has shown, is the golden mean for modern games, which are very demanding of video memory and use up to 500-600 megabytes. We never tire of repeating that this does not mean all game resources must reside only in the card's local memory; resource management can be left to the API, especially in Direct3D 10 with its video memory virtualization.
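The capacity steps follow from the bus layout. Assuming conventional 32-bit-wide GDDR3 chips, one per 32-bit slice of the bus (a standard arrangement, not stated explicitly in the source), a 256-bit bus needs eight chips, so total capacity moves in steps set by per-chip density:

```python
bus_bits, chip_io_bits = 256, 32
n_chips = bus_bits // chip_io_bits  # 8 chips are needed to fill the bus

for density_mbit in (256, 512, 1024):
    total_mb = n_chips * density_mbit // 8  # megabits -> megabytes
    print(f"{density_mbit} Mbit chips -> {total_mb} MB total")
# 256 Mbit chips -> 256 MB total
# 512 Mbit chips -> 512 MB total
# 1024 Mbit chips -> 1024 MB total
```

This is why the 320-bit cards came in 320 MB and 640 MB flavors (ten chips), while the 256-bit G92 boards jump straight from 256 MB to 512 MB to 1 GB.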

Architecture

As written in the previous article on the GeForce 8800 GT, the G92 is essentially the previous flagship G80 moved to a new process technology, with some changes. The new chip has 8 large shader units and 64 texture units, as well as four wide ROPs. Despite all the changes for the better, the transistor count seems rather high; most likely the chip's increased complexity is explained by the inclusion of the previously separate NVIO chip and a new-generation video processor. The transistor count has also been affected by more complex TMUs, and there are likely larger caches to keep the 256-bit memory bus efficient.

There are very few architectural changes in the G92 chip; we talked about them all in the previous article, and we won’t do it again. Everything said in the reviews of previous solutions remains valid; we will present only the main diagram of the G92 chip, now with all 128 universal processors:

Compared with the G80, the only changes in the chip are the reduced number of ROP units and some TMU modifications described in our previous material. Let us point out once again that the 64 texture units of the GeForce 8800 GTS 512MB will NOT, in most real applications, be stronger than the 32 units of the GeForce 8800 GTX. With trilinear and/or anisotropic filtering enabled their performance is roughly the same, since the number of texture-filtering units is equal. Of course, where unfiltered fetches are used, the G92-based solutions will be faster.

PureVideo HD

One of the expected changes in the G92 is the built-in second-generation video processor, known from the G84 and G86, with expanded PureVideo HD support. This version of the video processor almost completely offloads the CPU when decoding all common video formats, including the heavy H.264 and VC-1. The G92 uses a new model of programmable PureVideo HD video processor that includes the so-called BSP engine. The new processor supports decoding of H.264, VC-1 and MPEG-2 at resolutions up to 1920x1080 and bitrates up to 30-40 Mbps, performing CABAC and CAVLC decoding in hardware, which allows all existing HD DVD and Blu-ray discs to be played even on mid-powered single-core PCs. VC-1 decoding is not as fully offloaded as H.264, but it is still supported by the new processor. You can read more about the second-generation video processor in our reviews of the G84/G86 and G92, linked at the beginning of the article.

PCI Express 2.0

Among the real innovations in the G92 is support for the PCI Express 2.0 bus. The second version of PCI Express doubles the per-lane signaling rate from 2.5 GT/s to 5 GT/s, so an x16 slot can transfer data at up to 8 GB/s in each direction, as opposed to 4 GB/s for version 1.x. Importantly, PCI Express 2.0 is compatible with PCI Express 1.1: old video cards work in new motherboards, and new cards supporting the second version remain functional in boards without it, provided, of course, that sufficient external power is available; the interface simply runs at the lower bandwidth.
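The doubling can be sketched from the per-lane signaling rates. PCIe 1.x and 2.0 both use 8b/10b encoding, so only 8 of every 10 transferred bits are payload:

```python
def pcie_gb_per_s(gt_per_s: float, lanes: int) -> float:
    """Per-direction payload bandwidth of an 8b/10b-encoded PCIe link."""
    payload_gbit = gt_per_s * 8 / 10   # strip 8b/10b encoding overhead
    return payload_gbit / 8 * lanes    # bits -> bytes, times lane count

print(pcie_gb_per_s(2.5, 16))  # PCIe 1.x x16 -> 4.0 GB/s each way
print(pcie_gb_per_s(5.0, 16))  # PCIe 2.0 x16 -> 8.0 GB/s each way
```

The 2.5 and 5 figures quoted in the text are raw transfer rates per lane; the encoding overhead is what turns 5 GT/s across 16 lanes into 8 GB/s of usable bandwidth.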

The real impact of the higher PCI Express bandwidth on performance was assessed by Nvidia's main competitor in its own materials. According to those figures, a mid-range video card with 256 megabytes of local memory speeds up by about 10% when moving from PCI Express 1.0 to 2.0 in modern games such as Company of Heroes, Call of Juarez, Lost Planet and World in Conflict, with results ranging from 5% to 25% depending on the game and test conditions. Naturally, this applies at high resolutions, when the frame buffer and associated buffers occupy most of the local video memory and some resources are stored in system memory.

To ensure backward compatibility with existing PCI Express 1.0 and 1.1 solutions, the 2.0 specification supports both 2.5 Gbps and 5 Gbps transfer rates. This backward compatibility allows legacy 2.5 Gbps devices to run in 5 Gbps slots at the lower speed, while a device built to the 2.0 specification can operate at either 2.5 Gbps or 5 Gbps. Compatibility is fine in theory, but in practice problems may arise with some combinations of motherboards and expansion cards.

Support for external interfaces

Everything here is the same as on the GeForce 8800 GT; there are no differences. The separate NVIO chip found on GeForce 8800 boards, which carried the external interfaces (two 400 MHz RAMDACs, two Dual Link DVI (or LVDS) outputs, HDTV-Out), has in this case been folded into the GPU: support for all these interfaces is built into the G92 itself.

GeForce 8800 GTS 512MB video cards usually carry two Dual Link DVI outputs with HDCP support. As for HDMI, support is present in the chip and can be exposed by manufacturers on specially designed cards, although an HDMI connector on the card itself is entirely optional: a DVI-to-HDMI adapter, bundled with most modern video cards, does the job just as well.

The 8800 GTX was a landmark event in the history of 3D graphics. It was the first card to support DirectX 10 and its associated unified shader model, which greatly improved image quality over previous generations, and it remained unrivaled in terms of performance for a long time. Unfortunately, all this power came at a cost. With expected competition from ATI and the release of lower-priced mid-range models based on the same technology, the GTX was considered a card aimed only at those enthusiasts who wanted to be at the forefront of modern advances in graphics processing.

Model history

To correct this situation, nVidia released a card of the same line, the GTS 640MB, a month later, followed a couple of months later by the GTS 320MB. Both offered performance close to the GTX at a much more reasonable price. However, at around $300-$350 they were still too expensive for gamers on a budget: these were high-end, not mid-range, models. In hindsight the GTS cards were worth every penny, as what followed through the rest of 2007 was one disappointment after another.

First up were the supposed mid-range 8600 GTS and GT cards, which were heavily stripped-down versions of the 8800 series. They were smaller and quieter and had new HD video processing capabilities, but their performance was below expected levels. Purchasing them was impractical, although they were relatively inexpensive. The alternative ATI Radeon HD 2900 XT matched the GTS 640MB in terms of performance, but consumed a huge amount of power under load and was too expensive to be considered mid-range. Finally, ATI attempted to release the DX10 series in the form of the HD 2600 XT and Pro, which had even better multimedia capabilities than the nVidia 8600, but lacked the power to be worth the attention of gamers who had already purchased previous generation graphics cards such as the X1950 Pro or 7900 GS.

And then, a year after the 8800 GTX went on sale, the release of the 8800 GT brought the first real refresh of the DirectX 10 line. Although it took a long time, the nVidia GeForce 8800 GT offered roughly the specifications of the GTS model at a cost of 200-250 dollars, finally reaching the mid-range price point everyone had been waiting for. But what made the card so special?

More is not better

As technology advances and transistor counts in CPUs and GPUs grow, there is a natural need to shrink the transistors themselves. This leads to lower power consumption, which in turn means less heat. More chips fit on one silicon wafer, which lowers their cost and, in theory, the floor price of the hardware built from them. However, changing production processes carries high business risk, so it is customary to launch a completely new architecture on an existing, proven process, as was the case with the 8800 GTX and HD 2900 XT. Once the architecture matures, it is moved to a less power-hungry process, which later becomes the basis for new designs.

The 8800 series followed this path: the G80 cores of the GTX and GTS were produced on 90 nm technology, while the nVidia GeForce 8800 GT is based on the G92 chip, made on a 65 nm process. While the change does not sound like much, it equates to roughly a 34% reduction in die area, or correspondingly more chips per silicon wafer. As a result, electronic components become smaller, cheaper and more efficient, an extremely positive change. However, the G92 core is not just smaller; there is something else to it.
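As a rough cross-check on that figure, one can compare the commonly cited G80 die area (~484 mm², an assumed figure not stated in this article) with the ~330 mm² G92 die mentioned later in the text:

```python
g80_mm2 = 484.0  # commonly cited G80 die area (assumption)
g92_mm2 = 330.0  # G92 die area as quoted later in this article

reduction = 1 - g92_mm2 / g80_mm2
print(f"{reduction:.0%}")  # roughly a one-third reduction in die area
```

The result lands in the same ballpark as the quoted 34%, keeping in mind that G92 also absorbed extra logic (NVIO, the new video processor), so this is not a pure process-shrink comparison.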

First of all, the VP2 video processing engine that was used in the 8600 series has now appeared in the GeForce 8800 GT 512MB. So now you can enjoy high-definition video without system slowdown. The final display engine, which is controlled by a separate chip on the 8800 GTX, is also integrated into the G92. The result is 73 million more transistors on-chip than the 8800 GTX (754 million versus 681 million), although the number of stream processors, texture processing and ROP power is less than that of the more powerful model.

A new version of nVidia's transparency anti-aliasing algorithm, added in the GeForce 8800 GT, is designed to noticeably improve image quality while keeping performance high. Beyond that, the new processor adds no new graphics capabilities.

The manufacturing company apparently thought for a long time about which functionality of the previous 8800 series cards was not fully used and could be reduced, and which should be left. The result was a GPU design that, in terms of performance, fell somewhere between GTX and GTS, but with GTS functionality. As a result, the 8800 GTS card became completely redundant. The 8800 Ultra and GTX still provide more graphics power, but with fewer features, a much higher price, and higher power consumption. Against this background, the GeForce 8800 GT 512 MB card really took a strong position.

GPU architecture

The GeForce 8800 GT uses the same unified architecture that Nvidia introduced with the G80. The G92 consists of 754 million transistors and is manufactured on TSMC's 65 nm process. The die measures about 330 mm², and although this is noticeably smaller than the G80, it is still far from a small piece of silicon. There are 112 scalar stream-processor cores in total, running at 1500 MHz in the standard configuration. They are grouped into 7 clusters, each containing 16 stream processors that share 8 texture address units, 8 texture filtering sections and their own independent cache. This is the same cluster-level configuration Nvidia used in the G84 and G86 chips, but the G92 is a far more complex GPU than either of them.

Each shader processor can issue a MAD and a MUL instruction in one clock cycle, and the blocks, combined into a single structure, can process any shader operations and calculations in both integer and floating-point form. Interestingly, although the stream processors match the G80's in capability (apart from count and clock), Nvidia rates the chip at only up to 336 GFLOPS, whereas MAD plus MUL together would theoretically yield 504 GFLOPS. As it turns out, the company took a conservative approach to quoting compute power and did not count the MUL in the overall figure. At briefings and round tables, Nvidia representatives said that some architectural improvements should let the chip approach its theoretical maximum throughput; in particular, the task manager that distributes and balances data passing through the pipeline has been improved. Nvidia has announced double-precision support in future GPUs, but this chip only emulates it, owing to the need to follow IEEE standards.
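The two GFLOPS figures come straight from counting floating-point operations per clock per stream processor: 2 for the MAD alone, 3 if the co-issued MUL is included. A quick sketch:

```python
stream_processors = 112
shader_hz = 1.5e9  # 1500 MHz shader clock

mad_only = stream_processors * shader_hz * 2 / 1e9      # MAD = 2 flops/clock
mad_plus_mul = stream_processors * shader_hz * 3 / 1e9  # MAD + MUL = 3 flops/clock

print(mad_only, mad_plus_mul)  # 336.0 504.0
```

Nvidia's quoted 336 GFLOPS is thus the MAD-only figure; the 504 GFLOPS number only materializes when the extra MUL can actually be co-issued.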

ROP architecture

The ROP structure of the G92 is similar to that of any other GeForce 8-series GPU: each section has its own L2 cache and is assigned to a 64-bit memory channel. There are 4 ROP sections in total on a 256-bit memory interface. Each section can process 4 pixels per clock when each pixel carries four components (RGB color and Z); if only the Z component is present, a section can process 32 pixels per clock.

ROPs support all common anti-aliasing formats used in previous GeForce 8-series GPUs. Since the chip has a 256-bit GDDR interface, Nvidia decided to make some improvements to ROP compression efficiency to reduce bandwidth and graphics memory usage when anti-aliasing is enabled at 1600x1200 and 1920x1200 resolutions.

As a derivative of the original G80 architecture, the texture address and filter units, as well as the ROP sections, run at a different clock speed than the stream processors; Nvidia calls this the base clock. For the GeForce 8800 GT this is 600 MHz, which theoretically yields a fill rate of 9.6 gigapixels per second (Gp/s) and a bilinear texturing rate of 33.6 gigatexels per second (Gt/s). Users note that the clock frequency is quite low, and the increased transistor count does not guarantee that functionality was added or even fully retained: when the company moved from 110 nm to 90 nm technology, optimization cut the transistor count by 10%. So it would not be surprising if at least 16 more stream processors are present on the chip, disabled in this product.
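Those two figures follow from the base clock and unit counts: 16 ROPs, and the 56 texture filtering units implied by the 7 clusters of 8 filtering sections described above. A minimal sketch:

```python
base_hz = 600e6        # GeForce 8800 GT base clock
rops, filter_units = 16, 56  # 4 ROP sections x 4 px; 7 clusters x 8 filters

fill_gpix = base_hz * rops / 1e9         # 9.6 Gp/s pixel fill rate
tex_gtex = base_hz * filter_units / 1e9  # 33.6 Gt/s bilinear texel rate
print(fill_gpix, tex_gtex)
```

The same math at 650 MHz with 64 filtering units gives the 10.4 Gp/s and 41.6 Gt/s of the full G92 configuration in the 8800 GTS 512MB.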

Design

The reference design runs the core, shader unit and memory at 600 MHz, 1500 MHz and 1800 MHz respectively. The 8800 GT features a single-slot cooling system, and a glossy black metal casing almost completely hides the front of the board. The 50 mm fan matches the radial cooler designs of the top models and does its job very quietly in all operating modes: whether the computer is idling at the Windows desktop or running your favorite game, it is practically inaudible against the other noise sources in the PC case. It is worth noting, though, that the first time you power on a computer with the new card you may be startled: the fan spins up with a howl, but the noise subsides before the desktop even appears.

The metal front panel attracts fingerprints, but this is of little concern, since once installed it will be impossible to see them. According to user reviews, the cover helps prevent accidental damage to components such as capacitors on the front of the card. The green printed circuit board combined with the black heatsink bezel gives the 8800 GT a distinctive look. The model is marked with the GeForce logo along the top edge of the front panel. Mark Rein, the company's vice president, told reporters that this entailed additional costs, but was necessary to help users figure out which graphics card is the heart of the system at LAN parties.

Under the heatsink are eight 512-megabit graphics memory chips, giving a total of 512 MB of storage capacity. This is GDDR3 DRAM with an effective frequency of up to 2000 MHz. The GPU supports both GDDR3 and GDDR4, but this feature was never used in this series.

Heating and power consumption

The nVidia GeForce 8800 GT is a very attractive card. Its design is genuinely pleasing to the eye and, given the internal changes to the G92, it exudes a sense of refinement.

More important than aesthetics, however, according to users, is the fact that the manufacturer managed to pack all this power into a single-slot device. This is not just a welcome change; it is a pleasant surprise. Given the characteristics of the GeForce 8800 GT, one would expect a cooler two slots high. Nvidia could afford such a slim design thanks to a new manufacturing process that reduced heat output to a level a low-profile fan can handle. In fact, temperatures dropped so much that even this relatively small cooler does not have to spin very fast, leaving the card virtually silent even in demanding games. The board temperature does rise significantly, though, so decent airflow through the case is still needed to prevent overheating. Thanks to the process shrink, the GeForce 8800 GT 512 MB consumes only 105 W even under full load, so a single six-pin power connector suffices. This is another nice change.

The card was the first to support PCIe 2.0, whose slot can deliver up to 150 W of power. However, for backward compatibility the company decided it was much simpler to cap the slot draw at 75 W. This means that regardless of whether the card sits in a PCIe 1.1 or PCIe 2.0 motherboard, only 75 W come through the slot, with the rest of the power supplied through the auxiliary connector.
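A small sketch of this power budget, using the 105 W full-load figure quoted above and the 75 W limits on the slot and on a six-pin connector:

```python
# Power budget of the 8800 GT: slot capped at 75 W for backward
# compatibility, six-pin auxiliary connector rated for another 75 W.
slot_limit_w = 75
six_pin_limit_w = 75
card_max_load_w = 105   # full-load consumption quoted for the 8800 GT

# The two sources together comfortably cover the card's maximum draw.
assert card_max_load_w <= slot_limit_w + six_pin_limit_w

# Whatever exceeds the slot cap must come through the auxiliary connector.
aux_draw_w = max(0, card_max_load_w - slot_limit_w)
print(aux_draw_w)   # 30 W at full load
```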

Processor VP2

Speaking of HDCP support, it is worth touching on the new-generation video processor that nVidia built into the G92. VP2 is a single programmable SIMD processor whose flexibility allows it to be extended in the future. It enables very compute-intensive decoding of H.264 video, shifting the load from the CPU to the GPU. In addition to VP2 there is also an H.264 bitstream processor and an AES128 engine. The former is specifically designed to accelerate the CAVLC and CABAC entropy-coding schemes, tasks that are very CPU-intensive in a purely software decoder. AES128 enables faster processing of the encryption protocol required by video content security schemes such as AACS and Media Foundation, both of which require video data (compressed and uncompressed) to be encrypted when transferred over buses like PCI Express.

Improving Image Quality

Nvidia has been working hard to improve the transparency anti-aliasing technique that first appeared in the GeForce 7 series. Transparency multisampling costs little performance, but in most cases it is not very effective. Supersampling, on the other hand, delivers much better and more stable image quality, but at the cost of speed: it is an incredibly resource-intensive anti-aliasing method.

The drivers that ship with the video card contain a new multisampling algorithm. The differences are quite significant, but the final judgment is up to the user. The good news is that since this is a driver-level change, any hardware that supports transparency antialiasing can use the new algorithm, that is, every card from the GeForce 7800 GTX onward. To activate the new mode, you just need to download the latest updates from the manufacturer's website.

According to user reviews, updating the driver for the GeForce 8800 GT will not be difficult. Although the video card's web page only contains links to files for Windows Vista and XP, searching from the main page allows you to find what you need. For nVidia GeForce 8800 GT, Windows 7-10 drivers are installed using the 292 MB GeForce 342.01 Driver utility.

Connectivity

The output connectors of the nVidia GeForce 8800 GT are quite standard: two dual-link DVI-I ports with HDCP support, suitable for both analog and digital monitor and TV inputs, and a 7-pin analog video port providing conventional composite and component output. The DVI connectors can be combined with DVI-VGA and DVI-HDMI adapters, so any connection option is possible. However, Nvidia still leaves audio support over HDMI as an option for third-party manufacturers: there is no audio processor inside VP2, so audio has to be routed in through the onboard S/PDIF header. This is disappointing, since the thin and quiet card is otherwise ideal for a gaming home theater.

The GeForce 8800 GT is the first graphics card compatible with PCI Express 2.0, which means it can exchange data with the system at up to 16 GB/s, twice as fast as the previous standard. While this may be useful for workstations and compute-intensive tasks, it will not mean much to the average gamer. In any case, the standard is fully compatible with all previous versions of PCIe, so there is nothing to worry about.
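A sketch of where the 16 GB/s figure comes from, assuming the standard PCIe 2.0 link parameters (5 GT/s per lane, 8b/10b line coding, 16 lanes, counting both directions of the full-duplex link):

```python
# PCIe 2.0 x16 throughput from first principles.
lanes = 16
gt_per_s = 5.0       # gigatransfers per second per lane (PCIe 2.0)
encoding = 8 / 10    # 8b/10b coding: 8 data bits per 10 transferred bits

per_direction_gb_s = lanes * gt_per_s * encoding / 8  # bits -> bytes
total_gb_s = 2 * per_direction_gb_s                   # full duplex

print(per_direction_gb_s)  # 8.0
print(total_gb_s)          # 16.0
```

The commonly quoted 16 GB/s is thus the aggregate of both directions; each direction carries up to 8 GB/s.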

nVidia's partner companies offer overclocked versions of the GeForce 8800 GT, as well as game packages.

BioShock from 2K Games

BioShock was one of the best games that existed at the time the video card was released. It's a "genetically modified" first-person shooter set in the underwater city of Rapture, created on the floor of the Atlantic Ocean by a man named Andrew Ryan as part of the realization of his 1930s art deco dream. 2K Boston and 2K Australia have licensed and used Epic Games' Unreal Engine 3 to best effect, and also leveraged some DirectX 10 capabilities. All of this is controlled through an option in the game's graphics control panel.

The BioShock setting forced the developers to use a lot of water shaders. DirectX 10 technology helped improve the ripples when characters move through water, and pixel shaders were used en masse to create wet objects and surfaces. Additionally, the DX10 version of the game uses a depth buffer to create "soft" particle effects where they interact with their surroundings and look more realistic.

The nVidia GeForce 8800 GT, whose characteristics allow it to show its strengths in the BioShock game, is only slightly inferior to the GTX at a resolution of 1680x1050. As this parameter increases, the gap between the cards increases, but not by a large margin. The reason for this is likely due to the fact that the game did not support transparent anti-aliasing, making the 8800 GTX's massive memory bandwidth advantage moot.

According to user reviews, the 8800 GT also works quite well with SLI enabled. Although its capabilities are not close to those of the GTX, it competes with the Radeon HD 2900 XT graphics card with 512 MB of memory in the CrossFire configuration. Perhaps even more interesting is the fact that at 1920x1200 the 8800 GT is almost as fast as the 640MB GTS!

Crysis Single Player Demo from Electronic Arts

This game will literally make your video card cry! The big surprise was its graphics, which surpassed everything seen in computer games before it. Note that the built-in GPU benchmark runs noticeably faster than actual gameplay. Around 25 fps in the performance test is enough for a frame rate the user will find acceptable: unlike in other games, a low frame rate in Crysis still looks fairly smooth.

The nVidia GeForce 8800 GT achieves sufficient frame rates in Crysis at 1680x1050 with high detail under DirectX 10; it is not as fast as the GTX, but noticeably quicker than the Radeon HD 2900 XT and the 8800 GTS 640MB. The GTS 320MB struggles with Crysis and has to drop most settings to medium to reach frame rates above 25 fps even at 1280x1024.

Performance

As you'd expect, the 8800 GTX remains unbeatable, but overall the GeForce 8800 GT is ahead of the GTS in most tests. At the highest resolutions and anti-aliasing settings the GT's reduced memory bandwidth lets it down and the GTS occasionally pulls ahead. However, considering the price difference and the other advantages, the 8800 GT is the better choice in any case. Conversely, comparing the GeForce 8800 GTX and 8800 GT confirms every time why the former is so expensive: while other models slow down significantly as resolution rises and transparency anti-aliasing and anisotropic filtering are enabled, the 8800 GTX keeps delivering excellent results. In particular, Team Fortress 2 at 1920x1200 with 8xAA and 16xAF runs twice as fast on the 8800 GTX as on the GT. For the most part, though, the GeForce 8800 GT performs well. Unless, of course, you count the incredibly low frame rate in Crysis.

Conclusion

While the GeForce 8800 GT doesn't match the specs of the 8800 GTX series leader, it offers similar performance at a fraction of the price and includes many additional features. And if you add small size and quiet operation, the model will seem simply phenomenal.

In the year and more since the release of video cards based on the NVIDIA GeForce 8800 line of chips, the graphics accelerator market became extremely unfavorable for the end buyer. An overclocker ready to pay a tidy sum for a top-end video card simply had no alternative. The competitor from ATI (AMD) arrived later and ultimately could not compete with the GeForce 8800 GTX, let alone the subsequent GeForce 8800 Ultra. NVIDIA's marketers therefore understood that, with no competition, there was no need to cut prices on top-end cards at all. As a result, throughout this whole period the prices of the GeForce 8800 GTX and Ultra stayed at the same very high level, and only a few could afford such video cards.

However, the upper price segment has never been the decisive priority for makers of graphics chips and video cards. Leadership in this class is certainly prestigious for any company, but economically the mid-range is the most profitable. Yet, as recent tests of the AMD Radeon HD 3850 and 3870, which claim supremacy in the mid-range, have shown, the performance of such cards is unsatisfactory for modern games and essentially unacceptable in their high-quality modes. The NVIDIA GeForce 8800 GT is faster than this pair, but it too falls short of comfortable frame rates in DirectX 10 games. What comes next if you can afford to pay extra? Until yesterday, virtually nothing: in price terms there was simply a gap between the GT and the GTX.

But technical progress does not stand still: the new NVIDIA G92 chip, produced on a 65 nm process, allowed the company not only to win over overclockers with the quite successful GeForce 8800 GT, but also, yesterday, December 11 at 17:00 Moscow time, to announce a new product: the GeForce 8800 GTS 512 MB. Despite the unassuming name, the new graphics accelerator differs significantly from the regular GeForce 8800 GTS. In today's material we will take a look at one of the first GeForce 8800 GTS 512 MB cards to reach the Russian market, check its thermals and overclocking potential, and, of course, study the new product's performance.


1. Technical characteristics of video cards participating in testing

The technical characteristics of the new product are presented to your attention in the table below in comparison with NVIDIA video cards of the GeForce 8800 family:

Name of technical characteristic | GeForce 8800 GT | GeForce 8800 GTS | GeForce 8800 GTS 512 MB | GeForce 8800 GTX / Ultra
GPU | G92 (TSMC) | G80 (TSMC) | G92 (TSMC) | G80 (TSMC)
Process technology, nm | 65 (low-k) | 90 (low-k) | 65 (low-k) | 90 (low-k)
Core area, sq. mm | 330 | 484 | 330 | 484
Number of transistors, million | 754 | 681 | 754 | 681
GPU frequency, MHz | 600 (1512 shader) | 513 (1188 shader) | 650 (1625 shader) | 575 / 612 (1350 / 1500 shader)
Effective video memory frequency, MHz | 1800 | 1584 | 1940 | 1800 / 2160
Memory capacity, MB | 256 / 512 | 320 / 640 | 512 | 768
Supported memory type | GDDR3 | GDDR3 | GDDR3 | GDDR3
Memory bus width, bits | 256 (4 x 64) | 320 | 256 (4 x 64) | 384
Interface | PCI-Express x16 (v2.0) | PCI-Express x16 (v1.x) | PCI-Express x16 (v2.0) | PCI-Express x16 (v1.x)
Unified shader processors, pcs. | 112 | 96 | 128 | 128
Texture units, pcs. | 56 (28) | 24 | 64 (32) | 32
Rasterization units (ROPs), pcs. | 16 | 20 | 16 | 24
Pixel / vertex shader model support | 4.0 / 4.0 | 4.0 / 4.0 | 4.0 / 4.0 | 4.0 / 4.0
Video memory bandwidth, GB/s | ~57.6 | ~61.9 | ~62.1 | ~86.4 / ~103.7
Theoretical shading rate, Gpixel/s | ~9.6 | ~10.3 | ~10.4 | ~13.8 / ~14.7
Theoretical peak texture sampling rate, Gtex/s | ~33.6 | ~24.0 | ~41.6 | ~36.8 / ~39.2
Peak power consumption in 3D mode, W | ~106 | n/a | n/a | ~180
Power supply requirements, W | ~400 | ~400 | ~400 | ~450 / ~550
Reference design dimensions, mm (L x H x T) | 220 x 100 x 15 | 228 x 100 x 39 | 220 x 100 x 32 | 270 x 100 x 38
Outputs | 2 x DVI-I (dual link), TV-Out, HDTV-Out, HDCP | 2 x DVI-I (dual link), TV-Out, HDTV-Out | 2 x DVI-I (dual link), TV-Out, HDTV-Out, HDCP | 2 x DVI-I (dual link), TV-Out, HDTV-Out
Additionally | SLI support | SLI support | SLI support | SLI support
Recommended cost, US dollars | 199 / 249 | 349 ~ 399 | 299 ~ 349 | 499 ~ 599 / 699
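As a sanity check, the derived rows of the table (memory bandwidth and shading rate) can be recomputed from the raw clocks and unit counts. A minimal sketch using values taken directly from the table (the original 8800 GTS is omitted, as the source quotes a slightly lower bandwidth figure than the straightforward formula yields):

```python
# Derived table rows from first principles.
def bandwidth_gb_s(bus_bits, eff_mhz):
    """Bytes per transfer (bus_bits / 8) times effective MHz, in GB/s."""
    return bus_bits / 8 * eff_mhz / 1000

def fill_gpix_s(rops, core_mhz):
    """One pixel per ROP per base clock, in Gpixel/s."""
    return rops * core_mhz / 1000

cards = {
    # name: (bus width bits, effective memory MHz, ROPs, base clock MHz)
    "8800 GT":         (256, 1800, 16, 600),
    "8800 GTS 512 MB": (256, 1940, 16, 650),
    "8800 GTX":        (384, 1800, 24, 575),
}

for name, (bus, mem, rops, core) in cards.items():
    print(f"{name}: {bandwidth_gb_s(bus, mem):.1f} GB/s, "
          f"{fill_gpix_s(rops, core):.1f} Gpixel/s")
```

Running this reproduces the ~57.6 / ~62.1 / ~86.4 GB/s and ~9.6 / ~10.4 / ~13.8 Gpixel/s figures in the table.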

2. Review of BFG GeForce 8800 GTS 512 MB OC (BFGR88512GTSE)

The newest video card from a company well known to overclockers comes in a very compact box, decorated in dark colors.

2024 gtavrl.ru.