Last March we told you how to measure the temperature of a processor in a realistic way, and we also saw why it was advisable to do so. That guide was quite well received, so we have decided to share a version of it focused on the graphics card, a component that is also very sensitive to operating temperatures.
In this new guide we will see how to measure the temperature of a graphics card in a realistic way. We will also explain how the measurement can vary depending on the workload, and we will tell you why it is important to put the temperature value into context with the power consumption of this component.
Before getting into the subject, I remind you that each graphics card can register very different temperature and power consumption values, and that this does not have to be a problem; in fact, it is perfectly normal. A more powerful graphics card will usually run at higher temperatures and draw more power than a less powerful one.
In the end, the important thing is that the recorded measurements fall within the normal range for the model we are using. For example, a GeForce RTX 3050 can record average temperatures of 66 degrees and a power consumption of 126 watts, as we saw in our review, while an RTX 3080 Ti can reach 80 degrees and draw 349 watts.
There is a significant difference between the two graphics cards, but both figures are totally normal, as the first is a mid-range model and the second is a high-end model that offers much more performance. What would not be normal is for the RTX 3050 to run hotter than the RTX 3080 Ti.
What does it mean to measure the temperature and power consumption of a graphics card realistically?
Well, it is very simple: it means using a workload that brings that component to 100% utilization. It is the same as with a processor; if we use a workload that only takes this component to 50% utilization, we will not obtain realistic data on its temperatures or its power consumption, since it will be underutilized.
It would not be the first time I have seen a user claim that his graphics card only reaches 50 degrees, when it turns out he is quoting the value it shows while sitting on the Windows desktop with the fans off. It is also common to see users taking measurements with relatively old games whose GPU usage does not exceed 50% or 60%.
To measure the temperature and power consumption of a graphics card realistically, we must use applications or games that, as we have said, push that component to the limit. The values obtained will represent the maximums that the component can register when used at full power. We must also measure for a minimum amount of time, so that the graphics card can reach its peak temperature.
If we just open a game and run it for one minute, it will be very difficult to measure the temperature realistically, since it is at very low levels when we are on the Windows desktop, and starts to rise gradually when we run a game or a performance application. By this I mean that the graphics card does not normally go from 40 degrees at idle to 80 degrees in a matter of seconds when running a heavy workload.
Ideally, we should dedicate at least 30 minutes to the test we are using in order to be able to reliably measure the temperature. If we do relatively short tests, it is advisable to repeat them at least three times to get a true measurement. In my case, when I do performance tests, I try to avoid the predefined benchmarks that some games include, and I focus on running through the same in-game scene performing different actions and killing enemies. This generates a real workload, and allows me to measure temperature and power consumption accordingly.
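The repeat-and-average approach described above can be sketched in a few lines. This is a generic illustration, not tied to any particular monitoring tool; the temperature samples are hypothetical.

```python
# Combine several short test runs into one representative reading by
# averaging each run's peak temperature. The sample values are
# hypothetical degrees Celsius logged during the same in-game scene.

def representative_peak(runs):
    """Average the peak temperature across at least three repeated runs."""
    if len(runs) < 3:
        raise ValueError("repeat short tests at least three times")
    return sum(max(run) for run in runs) / len(runs)

runs = [
    [62, 71, 78, 80],  # run 1
    [60, 70, 77, 79],  # run 2
    [63, 72, 79, 81],  # run 3
]
print(representative_peak(runs))  # (80 + 79 + 81) / 3 = 80.0
```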
Applications to measure the temperature and power consumption of a graphics card
Both AMD and NVIDIA offer a tool, downloaded along with their drivers, that works really well and can be activated very easily. Both manufacturers integrate an overlay that we can bring up in-game, and through which we can choose which performance measurements we want to be displayed. In AMD's case this tool comes with the Radeon Adrenalin software, and in NVIDIA's case with GeForce Experience.
In the case of AMD, we can activate the full interface by pressing “Alt + R”, and if we only want to display the menu as a sidebar we have to use “Alt + Z”. For those using an NVIDIA graphics card, the commands are “Alt + Z” to open the full interface, where we can also select whether we want to display more or fewer measurements and where the overlay is placed, and “Alt + R” to activate the measurement overlay directly.
With both tools we will be able to measure the temperature and power consumption of the graphics card, but also other important values, such as operating frequencies and GPU utilization. All this will give you the context you need to make a fully realistic measurement in line with the workload you are using.
You can also use other tools, such as MSI Afterburner, but with AMD's and NVIDIA's comprehensive and lightweight overlays I do not find it necessary. On a personal note, I only continue to use it to take average performance measurements in specific situations.
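For NVIDIA users who prefer the command line, the same values the overlay shows can also be polled with the nvidia-smi tool that ships with the drivers. The query fields below are standard nvidia-smi options; the helper functions are just a sketch of ours, and on AMD the equivalent data comes from the Adrenalin overlay instead.

```python
# Poll NVIDIA's command-line tool for temperature, power draw and GPU
# utilization. read_gpu() requires nvidia-smi (installed with the
# NVIDIA drivers) and a compatible card; parse_reading() is pure parsing.
import subprocess

QUERY = [
    "nvidia-smi",
    "--query-gpu=temperature.gpu,power.draw,utilization.gpu",
    "--format=csv,noheader,nounits",
]

def parse_reading(line):
    """Parse one CSV line, e.g. '78, 348.5, 99' -> (78.0, 348.5, 99.0)."""
    temp, power, util = (float(field) for field in line.split(","))
    return temp, power, util

def read_gpu():
    """Return (temperature C, power W, utilization %) for the first GPU."""
    out = subprocess.run(QUERY, capture_output=True, text=True, check=True)
    return parse_reading(out.stdout.strip().splitlines()[0])
```

Calling `read_gpu()` once a second from a loop while the game runs gives a log you can average afterwards.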
Tips for measuring the temperature and power consumption of a graphics card realistically
In this case, I would avoid synthetic tests altogether and go for a couple of demanding games. Trying a couple of games is a good idea because we will be able to make a comparison and detect possible discrepancies, and because we will see how our graphics card behaves in two different scenarios. If, on top of that, we test titles that use different technologies and that push our graphics card to the limit, such as ray tracing, all the better.
I tell you this because in performance testing I have more than once detected relatively large differences between titles using different technologies and configurations. For example, enabling DLSS can reduce temperature but also GPU utilization, while enabling ray tracing can have the opposite effect, as it places a higher load on the graphics card.
When you set out to measure the temperature and power consumption of your graphics card, keep all this in mind, and establish a single, stable pattern that aims to push the component to its limits. To achieve this, make sure it is always between 99% and 100% utilization. If it does not reach these values in the games you are using, check that you have set them to maximum quality and the highest resolution possible, and move to areas where there is intense action.
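The 99-100% utilization rule above can be verified after a test run by checking a log of utilization samples. A minimal sketch, with hypothetical sample data:

```python
# Check that a logged test run actually kept the GPU at 99-100%
# utilization, as a realistic measurement requires. The sample data
# is hypothetical (for example, one utilization reading per second).

def fully_loaded_fraction(util_samples, threshold=99):
    """Fraction of samples at or above the utilization threshold."""
    return sum(1 for u in util_samples if u >= threshold) / len(util_samples)

samples = [99, 100, 100, 98, 99, 100, 100, 100, 99, 100]
frac = fully_loaded_fraction(samples)
print(f"{frac:.0%} of samples at full load")
if frac < 0.95:
    print("Workload too light: raise quality and resolution, or pick a busier scene")
```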
To simplify everything we have said, and so that you have a general reference, here are the key guidelines you should follow to measure temperature and power consumption correctly:
- Use demanding new generation games.
- Avoid scripted benchmarks and play real scenes with significant load.
- Configure games to the maximum, and with the highest resolution possible.
- Play for 30 minutes, so that the temperature can reach its maximum peak.
- Make sure you have the latest available drivers installed.
- The temperature should stabilize after a few minutes of play, and so should the power consumption.
- Set the fan profile to a reasonable level, so that the fans don’t sound like an airplane about to take off.
How to interpret the results and why it is good to make this type of measurement
At first glance there is no mystery: the temperature and power consumption values we obtain will reflect the maximum that our graphics card is capable of reaching. So far so good, but are these values normal, or can they indicate some kind of problem? That is where the fun of doing a measurement like this lies: finding out whether everything is going well or whether we should start worrying.
In order to understand and interpret these values we must know the normal range of temperatures and power consumption for our graphics card, although we must also keep in mind that these values may differ depending on certain factors:
- Design and quality of the cooling system of our graphics card. A model with a higher quality system can operate at a significantly lower temperature.
- Possible factory overclock and the increased power consumption needed to support it. Some models come overclocked and draw more power out of the box, which can also cause them to reach higher temperatures.
With the above in mind, it is clear that the same graphics card with a modest cooling system will run hotter than one with a superior cooling system, and this should not be a cause for concern, as long as the difference is reasonable and the values remain within a normal level. This brings us to another question: where is the normal level? It is a complicated one, since I cannot give you concrete values model by model, but I can share some data that will serve as a reference:
- Low-end and lower mid-range graphics cards: the Radeon RX 6500 XT would be one of the best examples. They typically run between 50 and 60 degrees, and their power consumption is usually very low; a model like this stays at around 100 watts.
- Mid-range graphics cards: there is a huge variety in this range, but the most common band is between 60 and 70 degrees, although getting closer to 80 degrees would not be a real problem. Power consumption typically ranges from 120 to 180 watts.
- High-end graphics cards: there is also a lot of variety, but ideally they should be in the range of 75 to 80 degrees at the most. Power consumption can vary greatly depending on the power of each graphics card, but usually ranges from 200 to 500 watts.
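The reference ranges above can be turned into a quick sanity check. These are the indicative tier figures from this guide, not per-model specifications; the exact power band for the low-end tier is an assumption built around the ~100 watt figure mentioned.

```python
# Quick sanity check against the reference ranges above. The figures
# are indicative tier values from this guide, not per-model specs;
# the low-end power band is an assumption around the ~100 W cited.

RANGES = {
    # tier: (max normal temperature in C, (min, max) typical power in W)
    "low-end": (60, (75, 110)),
    "mid-range": (80, (120, 180)),
    "high-end": (80, (200, 500)),
}

def check(tier, temp_c, power_w):
    """Return a list of notes for a measured (temperature, power) pair."""
    max_temp, (p_lo, p_hi) = RANGES[tier]
    notes = []
    if temp_c > max_temp:
        notes.append("temperature above the normal range for this tier")
    if not p_lo <= power_w <= p_hi:
        notes.append("power draw outside the typical range")
    return notes or ["within normal reference values"]

# An RTX 3080 Ti-style reading: 80 degrees, 349 watts.
print(check("high-end", 80, 349))
```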
Based on all the information we have seen, it is normal for a GeForce RTX 3080 Ti Founders Edition to register 80 degrees after an hour playing Cyberpunk 2077 at maximum quality with ray tracing, and for its average power consumption to be around 350 watts. However, it would not be normal for a GeForce RTX 3070 Founders Edition to register those values; it should sit at around 73 degrees and a power consumption of 220-240 watts.
We now come to the last question: why is it good to measure the temperature and power consumption of a graphics card realistically? Well, it is very simple: because it will allow us to check that everything is in order, that is, that our graphics card does not have overheating problems, that it is receiving the power it needs and that it does not have stability problems.
It is important to bring the graphics card to the limit, to that 100% usage, and maintain it for at least 30 minutes because that way it will be supporting a workload that will really challenge the system, that is, both the graphics card and our power supply. If there are any problems, they will come to light, we will be able to identify them and take whatever measures are necessary to solve them.
For example, a hang with normal temperatures would point directly to the power supply, while very high temperatures would point to the cooling system, or perhaps to the airflow inside the PC chassis. In the first case, it would be essential to change the power supply before it dies and takes other components with it, while in the second case we must check that the graphics card fans are working properly, that the card is clean, that the thermal paste is in good condition and that the airflow of our build is good.
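The diagnostic rule described here can be sketched as a simple decision function; the 80-degree threshold is illustrative, not universal.

```python
# Sketch of the diagnostic logic: a crash under load with normal
# temperatures points at the power supply, while abnormally high
# temperatures point at cooling or case airflow. The 80 C threshold
# is illustrative, not universal.

def diagnose(crashed_under_load, peak_temp_c, normal_max_c=80):
    if crashed_under_load and peak_temp_c <= normal_max_c:
        return "suspect the power supply"
    if peak_temp_c > normal_max_c:
        return "suspect cooling: fans, dust, thermal paste or case airflow"
    return "no obvious problem detected"

# A hang at 74 degrees: temperatures are fine, so look at the PSU.
print(diagnose(True, 74))
```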
As I told you when we talked about measuring the temperature of a CPU, do not be alarmed if the values of your graphics card are higher than those of other users you have read about on the Internet. Those measurements have not always been taken realistically, and we have no guarantee that they are true. The important thing is that your graphics card remains stably within the values we can consider normal.