There is no ACPI support for SCC, nor a way to get absolute temperatures from the experimental sensors on the SCC without special lab equipment and elaborate procedures to calibrate each sensor since its values vary by individual sensor (across dies and within a die) and voltage. It is just intended to provide an indication of relative thermal changes.
The SENSOR_GATE_PULSE_CNT_RANGE field indicates how long to sample the sensor for, expressed as a number of tile clocks. For example, to sample for 1 microsecond while running at 400 MHz (remember to include any divider value you set for the tile), enter 400. Then read the sensor registers, say, every x milliseconds. One sensor is near the mesh interface, the other is near core 1.
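As a sanity check on the arithmetic above, here is a minimal sketch. The helper name `gate_pulse_count` is my own; only the 400-ticks-for-1-µs-at-400-MHz example comes from the post.

```python
# Sketch: compute the gate pulse count (number of tile clocks) for a
# desired sample window. The function name is hypothetical; the
# 1 us @ 400 MHz -> 400 example is from the post above.

def gate_pulse_count(sample_time_us, tile_clock_mhz, divider=1):
    """Number of tile clocks in the desired sample window.

    tile_clock_mhz is the undivided tile clock; divider is any clock
    divider you have configured for the tile.
    """
    effective_mhz = tile_clock_mhz / divider
    return int(sample_time_us * effective_mhz)

print(gate_pulse_count(1, 400))      # 400, as in the example above
print(gate_pulse_count(2.56, 533))   # the 2.56 us @ 533 MHz case mentioned later
```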
Though you can't convert the value to degrees C, you can establish a calibration by putting the processor into a known state and collecting the readings. The lowest power (and hence lowest temperature, after settling) will be with the cores in HALT (with the cores in RESET it will be slightly higher). You can read the CRB register from the MCPC using sccKit to see the value without disturbing the cores, e.g., using the memory widget in sccGui. Then put the cores into a highly compute-intensive loop (in L1 cache, with code that pairs in the pipelines) for some time and read the values again.
Coming back to the question of temperature sensors. I understand that the hex value in bits 12:0 determines the time for which the sensor is sampled. But I am still not clear on what difference sampling for 1 µs versus 0.5 µs makes. Does sampling for a longer period give a better (more representative) value? I would understand that a longer sampling time is better if the sensors do some kind of statistical averaging over longer periods. Also, I understand that if I have to read the sensors every x seconds, the configuration (for example bits 12:0) should be set only once (assuming the sampling period is smaller than x), and I can then read them over and over again via the associated CRB addresses.
So, in a nutshell: how should I decide how long the sensors should be sampled?
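For what it's worth, the intuition that a longer gate window acts like statistical averaging can be illustrated with a toy simulation (purely synthetic numbers, nothing SCC-specific): if the counter effectively averages N independent sub-windows, the noise should shrink roughly as 1/sqrt(N).

```python
# Toy model: a longer window behaves like the average of more short
# windows, so independent noise shrinks roughly as 1/sqrt(N).
# All numbers are made up; this only illustrates the averaging effect.
import random
import statistics

random.seed(0)
true_value = 1000.0

def reading(n_subwindows):
    # average of n noisy sub-window readings (sigma = 10 each)
    samples = [random.gauss(true_value, 10) for _ in range(n_subwindows)]
    return sum(samples) / n_subwindows

short = [reading(1) for _ in range(2000)]    # short window
long_ = [reading(16) for _ in range(2000)]   # 16x longer window

print(statistics.stdev(short))   # around 10
print(statistics.stdev(long_))   # around 2.5, i.e. 10 / sqrt(16)
```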
Just to make my point complete, I extracted sensor values for tile (0,0): 10 readings, 1 second apart, sampling period set to 3FFF. The text is attached.
Thanks for your patience
Tile0_0_Core0.txt.zip 213 bytes
I have a question. I read the ReadSensor.pdf document and I was worried by the voltage dependency of the relation between the temperature sensor reading and the real temperature. My question is: if I do not change the voltage explicitly, does the tile voltage change anyway? Should I account for it, or can I ignore it?
I ask because when I execute sccBMC -c status with all the cores running Linux in idle, I see small fluctuations in the voltage level. Are these significant in terms of error in the temperature sensor readings?
Are you running remotely? It's very difficult, even when running locally, to get quantitative results from this calibration. You have to calibrate with the chip in a temperature-controlled environment.
The voltage changes do affect the calibration, but I think the effect is small compared to other effects. Random changes in the ambient temperature of the data center may be even more significant. When you are working remotely, the sensor registers are useful, but mostly in a qualitative sense.
Are you also using the performance counters? Are you using PAPI? Did you make a new kernel with PAPI included? If you have information to report about how to use the performance counters, that would be much appreciated. Did you go back in the PAPI archives for the last version which supported the P54C?
First of all, thanks for all the useful information and all the well done documents and suggestion you keep posting in this community.
We have access to a local SCC system. As you suggest, in order to avoid the effects of ambient temperature changes it could be useful to probe the temperature inside the case. Do you know if there are any temperature sensors already built into the Rocky Lake board that could be used for this purpose? I will run a few tests on the thermal sensors in the following weeks; our research topic is actually thermal and energy management, so using the thermal sensors in a reliable way is of primary importance for us.
Regarding the performance counters, I am now trying to use them from inside the Linux kernel. I am patching it to create a kernel module that, through a set of IO controls, can start, stop, and read them. I am still debugging it. Unfortunately I am not an expert on PAPI; I have never used it before. In the past I have always worked with self-made modifications to the Linux kernel to use the performance counters. I like to have low-level control over what I see.
There are no temperature sensors on the SCC chip or board. We do have counters on the chip that are temperature sensitive. You have to do the calibration yourself. The file "How to Read the Thermal Sensor Registers" describes how to use and read these counters. It also provides a link to an sccTherm program that we used internally. Is there any additional information we can provide?
Thank you very much for your reply. I read the counters when all tiles run at the same frequency, and I use sccTherm -initTherm to initialize the refresh rate first, so I think the settings should be the same for all tiles. I ask because I found that the difference between the readings of two counters can be very large. For example, the difference between the two counters on tile 0 can be as large as 500 when there are no programs running on tile 0 (the frequency is 533 MHz and the sampling period is 2.56 µs). On the other hand, the difference in sensor readings for a tile running at two different frequencies is not that significant. For example, the difference in sensor readings with tile 0 running at 533 MHz versus 100 MHz is less than 50. It seems strange to me that the temperature difference caused by a power change is much smaller than the difference due to sensor location. I was wondering if someone has had a similar experience.
Partly to help others who are trying to make sense of temperature sensors, and partly to get a quick sanity check on the results, I am attaching some graphs that I printed while measuring the thermal response of the system.
(1). All cores were cooled by resetting the SCC and not booting for about 20 minutes. (I know there is a better way to do it, I am working on it).
(2). Assuming that the cores had reached a stable temperature, I ran a CPU-intensive application on core 00 and core 01 (Tile 00). The application consistently consumes ~95% CPU for about 20 minutes.
(3). Measured tiles sensors (or rather, counts).
We know from the documents elsewhere on marc that count is sort of inversely proportional to temperature. So, a lower count will mean a higher temperature.
(4). Measurements were started 3 minutes before the application was activated, to capture the background noise. The measurement continued until after the application finished.
(5). We do not have any control on sensors, and each sensor will have a bias and a noise. Can someone help in how to model this noise?
I have not yet done any cleaning of the data; what I present is just a quick graphing of the sensor reads. In each graph there are two plots: the upper one is for the sensor close to the cores, and the other is for the sensor close to the network switch. The graph colors are *not* consistent. Again, before putting a lot of effort into this, I would like a sanity check on what I am doing.
In the attachment, there are several jpeg plots, with the name Tilexy.jpg, where x and y are the co-ordinates of the tile where the sensors are located. Again, the load is running only on tile 00 (cores 0, 1). The load starts executing exactly at the same time, in both cores.
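Since, as noted above, a lower count means a higher temperature, a relative temperature change can be estimated by dividing the count change by a (negative) sensitivity. A hypothetical sketch; the sensitivity `B` below is entirely made up for illustration:

```python
# Sketch: map a change in counter value to a relative temperature
# change, using the linear-model idea SO = A + B*T with B negative.
# B here is a made-up sensitivity; a real value would come from your
# own calibration.
B = -8.0  # hypothetical sensitivity, counts per degree C

def relative_temp_change(count_before, count_after, b=B):
    # lower count -> higher temperature, so a negative count delta
    # maps to a positive temperature change
    return (count_after - count_before) / b

print(relative_temp_change(4600, 4560))   # count dropped by 40 -> +5.0 degrees
```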
So, here are my questions:
(a). I see an observable difference in the counter readings only for Tile 00, and I had expected the difference to be rather large. Is that expected?
(b). How large would "large" be? The Intel document explains how to set the starting point for calibration, but I will need at least one more coordinate to determine the slope. So I have no idea how "large" large actually is.
(c). Can someone with more experience tell me if the results are sane? The readings were done by a script that I wrote, and I compared my results to the Intel-provided sccTherm. sccTherm does not provide time-stamps, nor any control over the frequency of the observations recorded; it also uses the PCIe connection, which I think is best avoided.
If you need numerical data, please drop me a line.
Thanks for any help.
Quick-n-dirty.tar.gz 365.5 K
Hi Yang and Davendra,
At the University of Bologna we have worked extensively over the last months trying to characterize and reverse-engineer the thermal sensors of the SCC.
First of all, from our tests, under uniform stress conditions and with the same sensor settings, we see large differences in the sensor outputs. The differences are not clustered, so they can be assumed to be noise. Moreover, a series of readings shows high noise in the output of each sensor.
We then executed a series of stress tests and observed a negative sensitivity to temperature.
Thus, assuming a linear dependency, the sensor output is SO = A + BT, where T is the temperature and A, B are coefficients; A is positive and B is negative.
That said, we performed a test on the sensor time window (a.k.a. integration time) of each sensor. We discovered that, when increasing the time window while keeping the core stress constant, the counter value increases, then overflows, then increases, then overflows again. Increasing the time window increases the error, but it also increases the absolute value of the reading (if you manually account for the overflows), so larger time windows improve the signal-to-noise ratio.
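Manually accounting for the overflows, as described above, amounts to unwrapping the counter values. A sketch under two assumptions: the counter width (`WIDTH_BITS = 12` is a guess here; check the register documentation for the real width), and that the true value grows by less than one modulus between consecutive readings.

```python
# Sketch: reconstruct monotonically increasing counter values from
# readings that wrap around at 2**WIDTH_BITS. The width is an
# assumption, not taken from SCC documentation.
WIDTH_BITS = 12           # assumed counter width
MODULUS = 1 << WIDTH_BITS

def unwrap(raw_readings):
    """Undo counter overflow, assuming the true value never grows by
    more than MODULUS between consecutive readings."""
    out = []
    offset = 0
    prev = None
    for r in raw_readings:
        if prev is not None and r < prev:
            offset += MODULUS   # an overflow occurred
        out.append(r + offset)
        prev = r
    return out

# counter sweeps up and wraps twice:
print(unwrap([1000, 3000, 500, 2500, 200]))
# -> [1000, 3000, 4596, 6596, 8392]
```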
We are also currently developing a strategy to characterize the thermal sensors using a combination of stress patterns and least-squares optimization. We are trying to make it consistent, and we will soon make it public to the MARC community.
The main objective of this strategy is to overcome the problem of the two-point characterization suggested in ReadingSensor.pdf: with that approach you are forced to assume that, at maximum power consumption, the temperatures of all the cores are equal to each other up to a maximum value. We cross-checked this with a HotSpot simulation and saw that the center cores should, in this case, have a higher temperature, leading to a loss of accuracy.
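The least-squares idea can be sketched on synthetic data. The A and B values below are made up; a real per-sensor characterization would use readings collected under the stress patterns described above, with known (or simulated) temperatures.

```python
# Sketch: ordinary least squares fit of the linear model
# readings = A + B * temps, per sensor. Synthetic data only; the
# coefficients A=5000, B=-8 are invented for illustration.
def fit_linear(temps, readings):
    """Return (A, B) minimizing sum((A + B*t - r)**2)."""
    n = len(temps)
    mt = sum(temps) / n
    mr = sum(readings) / n
    b = sum((t - mt) * (r - mr) for t, r in zip(temps, readings)) / \
        sum((t - mt) ** 2 for t in temps)
    a = mr - b * mt
    return a, b

# synthetic sensor with A=5000, B=-8 (negative sensitivity, as observed)
temps = [40.0, 50.0, 60.0, 70.0]
readings = [5000 - 8 * t for t in temps]
a, b = fit_linear(temps, readings)
print(a, b)   # recovers 5000.0 and -8.0
```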
Just as early results, we assembled a set of videos of the thermal response of the SCC under different stress patterns. Here is the link:
Of course it is still work in progress, and this is why we have not released it yet.
Looking at the quick-and-dirty plots you attached, the noise level looks consistent (the scale of the plots varies, so it only seems wider on some), and the magnitude and character of the noise are not surprising. I believe you'll find the noise is Gaussian white noise; you can run a statistical normality test and an FFT to confirm. You report the higher value as the one closer to the core, which is as expected.
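As a rough, stdlib-only stand-in for the normality test and FFT suggested above (for a real analysis, scipy.stats.normaltest and numpy.fft would be the tools): white noise should show near-zero lag-1 autocorrelation, and Gaussian noise near-zero skewness. Purely synthetic data here, not SCC readings.

```python
# Sketch: quick whiteness and Gaussianity checks on a noise trace.
# The trace below is simulated; substitute your detrended sensor
# readings to apply the same checks.
import random
import statistics

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(5000)]

def lag1_autocorr(x):
    """Lag-1 autocorrelation; near zero for white noise."""
    m = statistics.fmean(x)
    num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
    den = sum((a - m) ** 2 for a in x)
    return num / den

def skewness(x):
    """Sample skewness; near zero for a symmetric (e.g. Gaussian) distribution."""
    m = statistics.fmean(x)
    s = statistics.pstdev(x)
    return sum(((v - m) / s) ** 3 for v in x) / len(x)

print(lag1_autocorr(noise))   # close to 0 for white noise
print(skewness(noise))        # close to 0 for Gaussian noise
```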
What are you using as the workload? Workloads that access memory beyond the cache will impact routers on the path to the iMC and will stall the pipeline waiting for memory. Also, how are you observing the value of the sensor? Since you were not reading from the MCPC, how were you capturing the values?
The comment by Andrea from Univ Bologna shows a systematic approach to characterizing the sensors. I look forward to their result.