In Part I of this blog post on information security I investigated the performance of Oracle TDE with hardware acceleration using Intel AES-NI and saw significant performance gains. However, I subtly sidestepped the question of what happens when the data is cached. As we found in Part I, Oracle does not cache the data at all, instead choosing to read from disk and decrypt every time the same query is run. Why does Oracle not cache the data after the first run of the query?
Running a trace on the query and looking in the trace file helps answer the question: it shows that the operation used to read the data was a direct path read. This will be familiar to Oracle DBAs with parallel query experience. It means that the data is read directly into the user session's PGA (bypassing the SGA), instead of via the more familiar db file scattered read, where the data could potentially be cached but placed at the end of the LRU list and aged out more quickly if space in the buffer cache is required. Why does Oracle use a direct path read for a non-parallel query? The answer lies in Note 793845.1 on My Oracle Support, which says:
"There have been changes in 11g in the heuristics to choose between direct path reads or reads through buffer cache for serial table scans. In 10g, serial table scans for “large” tables used to go through cache (by default) which is not the case anymore. In 11g, this decision to read via direct path or through cache is based on the size of the table, buffer cache size and various other stats."
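You can reproduce this observation with a standard SQL trace. The following is a minimal sketch, assuming your session has permission to set trace events; BIG_TABLE is a hypothetical test table name, and the trace file location depends on your diagnostic_dest setting:

```sql
-- Enable SQL trace including wait events (level 8)
SQL> alter session set events '10046 trace name context forever, level 8';

-- Run the serial full table scan (BIG_TABLE is a hypothetical name)
SQL> select count(*) from big_table;

-- Disable tracing again
SQL> alter session set events '10046 trace name context off';

-- The trace file will contain wait lines such as:
--   WAIT #...: nam='direct path read' ...
-- rather than waits on 'db file scattered read'.
```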
This makes a great deal of sense. Given the massive gains in platform bandwidth and latency, a direct path read can be as fast as a db file scattered read and also improves scalability: there is no need to acquire a cache buffers chains latch to prevent data being changed while it is being scanned in memory. It is also beneficial in a RAC clustered environment, where other nodes may be interested in the contents of the local buffer cache. As the My Oracle Support note mentions, the decision whether or not to cache the data depends on a number of factors. One of the most important is whether the table exceeds a size threshold determined by the hidden parameter _small_table_threshold. Instead of using this parameter, however, I granted the user a higher privilege and then used event 10949 "Disable autotune direct path read for serial full table scan" to modify the default behaviour, so that I could observe its impact on clear text and encrypted data as follows:
SQL> alter session set events '10949 trace name context forever, level 1';
After doing this, re-running the same query on the clear text data still performed physical reads, and took slightly longer than before because the data was also being loaded into the buffer cache.
Subsequent runs showed that performance had returned almost to the previous level, although it did not outperform the direct path read.
Now the data is cached in the buffer cache in the SGA and not read from disk.
Statistics
----------------------------------------------------------
...
          0  physical reads
...
Tracing also confirmed that autotune direct path read had been disabled and that db file scattered read was being used.
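One simple way to check this for yourself is SQL*Plus autotrace, which reports the session statistics after each query. A sketch, where TEST_TABLE is a hypothetical table name:

```sql
-- Report execution statistics after each statement
SQL> set autotrace on statistics

SQL> select count(*) from test_table;   -- TEST_TABLE is a hypothetical name

-- On the first run expect a non-zero "physical reads" figure; once the
-- data is in the buffer cache, a repeat run should report:
--   0  physical reads
-- with the blocks accounted for under "consistent gets" instead.
```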
I then tried setting the same event for the encrypted data with hardware acceleration disabled. Running the query on the first occasion took slightly longer than before as the data was read from disk, decrypted, and cached in the buffer cache.
However, on subsequent runs the difference was dramatic. The data was cached in clear text in memory and therefore ran considerably quicker.
Similarly, with AES-NI enabled the initial read from disk and decryption took a similar length of time as before.
Once the data was cached no decryption was required.
To recap, the following are results when the query is cached in the SGA:
Clear Text Query cached                        = 00:00:29.55
Software only encryption cached as clear text  = 00:00:29.30
AES-NI accelerated cached as clear text        = 00:00:29.49
In other words, the results are essentially the same, which is entirely expected from the TDE FAQ for cached data. Once the data is in the buffer cache it is in clear text and should therefore take a similar time to read irrespective of whether the tablespace is encrypted.
To summarise Oracle encryption performance for queries that use full table scans: once the data is cached in memory (for tablespace as opposed to column encryption) it is held in clear text, and therefore hardware acceleration will not be used after the first read from disk. However, as we saw in Part I, it would be wrong to assume that just because the buffer cache is sized adequately a table will necessarily be cached. Additionally, at the Oracle release tested here, whether the data is encrypted has no impact on how full table scans are implemented. The alternative is manual intervention, using events or unsupported underscore parameters to modify Oracle's behaviour. In these simple tests I tested only a single user, without considering the implications of scalability or clustering. If you do modify Oracle's behaviour you will need to retest your assumptions for each and every Oracle release.
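For reference, the current value of a hidden parameter such as _small_table_threshold can be inspected from the x$ fixed tables. A sketch, assuming you are connected as SYS (hidden parameters are not exposed through v$parameter by default):

```sql
-- Join the parameter name table (x$ksppi) to the current-value
-- table (x$ksppcv) to read a hidden parameter.
select i.ksppinm  as name,
       v.ksppstvl as value,
       i.ksppdesc as description
from   x$ksppi  i,
       x$ksppcv v
where  i.indx = v.indx
and    i.ksppinm = '_small_table_threshold';
```

Querying the value is harmless, but changing underscore parameters is unsupported without guidance from Oracle Support, which is why the event-based approach above was used instead.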
What we want from Oracle database encryption, as the name TDE implies, is for it to be transparent: we should not need to modify our practices at all just because we want the data to be encrypted. What these simple tests show is that the best way to achieve this is to use Intel Xeon processors with AES-NI for Oracle database encryption acceleration.