Recently my initial post on Redo and Transaction Logs on SSDs, which used the X25-E, received a comment almost two years after it was published. As I replied there, the X25-E was launched in 2008 and is now end of life. SSD technology has moved on considerably, as have the server platform, operating system and database, so now seems like a good time for an update to see whether putting redo on the latest SSD products is still a good idea.
I’m currently working with the Intel SSD 910 Series for database environments, and as the control I will use the older Intel® RAID Controller RS2BL080 with 8 x Intel X25-E Extreme SATA SSDs in a RAID 0 configuration. Unfortunately the hard disk storage array used in the previous test has long since expired and has not been replaced. However, the X25-E configuration provided the highest performance in the previous comparison, and as an SLC drive with both the onboard disk cache and write-back cache enabled on the RAID controller, it still provides a high-performance database write configuration for our control.
Before starting, it was interesting to note that some published reviews seemed to indicate that Oracle performance would be lower: there was one review of the SSD 910 over at AnandTech, for example, that reports Oracle throughput at a third of the rate of a single X25-E! It is therefore worth covering the details of the configuration beforehand to ensure we have it correct. I will cover configuration in this part and the tests in the next.
For my test system I used a 2-socket Intel E5 server that I had to hand (so the performance data is not directly comparable with the 4-socket system from the previous tests). I installed the SSD 910 in an x8 PCIe slot as per the specification, and also installed the X25-E RAID configuration. I then installed Oracle Linux 6.3, noting that Red Hat 6 and compatible releases such as Oracle Linux have support for disks with 4KB sectors. As we move ahead you will see why this is important. I used Oracle 12c, however any version of Oracle from 11g Release 2 has support for 4KB sector disk drives. To configure the OS I ran the Oracle pre-install RPM.
[root@sandep1 ~]# yum list | grep rdbms
oracle-rdbms-server-11gR2-preinstall.x86_64
Since I used Oracle 12c, another change since the previous blog post is that raw devices are no longer supported. They still work exactly as before, but there is little to gain from testing unsupported configurations, so I focused entirely on ASM. Using the Intel® SSD Data Center Tool I could see that the SSD 910 presents 4 x 200GB drives to the OS, and ASM will configure all 4 in a single disk group.
| 0 | INTEL(R) SSD 910 200GB |
| 1 | INTEL(R) SSD 910 200GB |
| 2 | INTEL(R) SSD 910 200GB |
| 3 | INTEL(R) SSD 910 200GB |
I had previously used the disks for numerous workloads and did not perform a secure erase beforehand, to ensure that the disks were operating in a steady-state condition. I had also previously partitioned all 4 disks with fdisk -c -u to disable DOS compatibility, so each partition starts at sector 2048. Note that the physical sector size is reported as 4KB; the logical sector size is reported as 512 bytes for backward compatibility.
[root@sandep1 ~]# fdisk -c -u -l /dev/sda

Disk /dev/sda: 200.0 GB, 200049647616 bytes
14 heads, 28 sectors/track, 996739 cylinders, total 390721968 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0xbaeb87f9

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   390721967   195359960   83  Linux
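The alignment rule can be checked with a little arithmetic: a partition is aligned when its start sector multiplied by the logical sector size is a multiple of the physical sector size. A minimal sketch (the helper name is mine; the figures are taken from the fdisk output above):

```shell
# Succeeds when a partition start is aligned to the physical sector size:
# (start_sector * logical_bytes) must be a multiple of physical_bytes.
is_aligned() {
    start=$1; logical=$2; physical=$3
    [ $(( start * logical % physical )) -eq 0 ]
}

# Figures from the fdisk output above: start sector 2048, 512-byte logical
# sectors, 4096-byte physical sectors.
is_aligned 2048 512 4096 && echo "/dev/sda1 is aligned"

# The old DOS-compatible default start of sector 63 would not be aligned:
is_aligned 63 512 4096 || echo "sector 63 is not aligned"
```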
The X25-E RAID configuration, by contrast, is reported with 512-byte sectors, as noted in the first blog post.
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
As I am using ASM, I next installed ASMLIB. Because I used the UEK kernel the kmod-oracleasm RPM is not required; for Red Hat it is required as an additional install. For all systems you also need both oracleasm-support and oracleasmlib, of which only oracleasm-support is available on the install media; however, you can get both from the Oracle public Yum server.
[root@sandep1 ~]# yum list | grep oracleasm
oracleasm-support.x86_64    2.1.8-1.el6    @/oracleasm-support-2.1.8-1.el6.x86_64
oracleasmlib.x86_64         2.0.4-1.el6    @/oracleasmlib-2.0.4-1.el6.x86_64
After installing and configuring ASMLIB I then created ASM disks on the SSD 910 disks and on the single device presented by the X25-E RAID configuration.
[root@sandep1 disks]# /etc/init.d/oracleasm createdisk SSD910A /dev/sda1
Marking disk "SSD910A" as an ASM disk:                     [  OK  ]
[root@sandep1 disks]# /etc/init.d/oracleasm createdisk SSD910B /dev/sdb1
Marking disk "SSD910B" as an ASM disk:                     [  OK  ]
[root@sandep1 disks]# /etc/init.d/oracleasm createdisk SSD910C /dev/sdc1
Marking disk "SSD910C" as an ASM disk:                     [  OK  ]
[root@sandep1 disks]# /etc/init.d/oracleasm createdisk SSD910D /dev/sdd1
Marking disk "SSD910D" as an ASM disk:                     [  OK  ]
I then checked that the disks could be discovered:
[root@sandep1 media]# /usr/sbin/oracleasm-discover
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:SSD910A [48839990 blocks (200048599040 bytes), maxio 64]
Discovered disk: ORCL:SSD910B [48839990 blocks (200048599040 bytes), maxio 64]
Discovered disk: ORCL:SSD910C [48839990 blocks (200048599040 bytes), maxio 64]
Discovered disk: ORCL:SSD910D [48839990 blocks (200048599040 bytes), maxio 64]
Discovered disk: ORCL:X25E [991950848 blocks (507878834176 bytes), maxio 512]
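The discovery output reports block counts in each disk's sector-size units, so dividing bytes by blocks confirms which sector size ASMLIB is using per disk. A quick arithmetic check using the figures from the output above:

```shell
# bytes / blocks gives the sector size ASMLIB used for each discovered disk
# (figures taken from the oracleasm-discover output above)
echo "SSD910: $(( 200048599040 / 48839990 )) byte sectors"
echo "X25E:   $(( 507878834176 / 991950848 )) byte sectors"
```

As expected, the SSD 910 devices work out to 4096-byte sectors and the X25-E to 512-byte sectors.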
Note that ASMLIB has an option to configure disks using either the logical or the physical sector size. I left this at the default, which is to use the physical sector size. Firing up the asmca utility, I could see my disks using the default disk string of ORCL:*. If you cannot see your disks, do not change the disk string to /dev/oracleasm/disks*: this bypasses ASMLIB and, among other impacts, means the correct disk sector size will not be detected. I created 2 diskgroups as shown.
Checking in ASM, I could see that the diskgroups had been configured correctly. Note that the correct sector size has been detected automatically; no underscore parameters, in particular _disk_sector_size_override, have been used, as none are required.
[oracle@sandep1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.1.0.1.0 Production on Thu Jul 18 09:48:41 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Automatic Storage Management option

SQL> select name, sector_size from v$asm_diskgroup;

NAME                           SECTOR_SIZE
------------------------------ -----------
X25E                                   512
SSD910                                4096
Next I fired up the dbca utility to create a database. For fairness I used the SSD910 diskgroup for all data storage as well as redo; the X25-E diskgroup will hold redo only.
This ran through to completion; however, on starting the new database it reported the following error:
SQL> startup
ORA-01078: failure in processing system parameters
ORA-15081: failed to submit an I/O operation to a disk
ORA-27091: unable to queue I/O
ORA-17507: I/O request size 512 is not a multiple of logical block size
ORA-06512: at line 4
My Oracle Support shows an Oracle bug, “Bug 14626924 Not able to read spfile from ASM diskgroup and disk with sector size of 4096”, with the workaround “Use a pfile rather than an ASM resident spfile”.
I used ASMCMD to copy the spfile, and the strings command to extract the parameters and create a pfile (using SQL*Plus to create the pfile directly gives the error ORA-17512: Block Verification Failed).
ASMCMD> cp spfileSBDB1.ora /home/oracle/sbdb1newpfile
copying +SSD910/SBDB1/spfileSBDB1.ora -> /home/oracle/sbdb1newpfile
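The spfile stores its parameters as name=value text inside a binary container, which is why strings can recover them. A hedged sketch of the extraction (the helper name and grep pattern are mine; check the result by hand, as strings can also pick up stray fragments):

```shell
# Pull printable name=value parameter lines out of a binary spfile copy;
# anything that does not look like a parameter assignment is discarded.
extract_params() {
    strings "$1" | grep -E '^[*[:alnum:]_.]+='
}

# Usage against the copy made with ASMCMD above, writing the result back
# as the pfile used in the create spfile step that follows:
#   extract_params /home/oracle/sbdb1newpfile > /tmp/pfile.tmp
#   mv /tmp/pfile.tmp /home/oracle/sbdb1newpfile
```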
I then created an spfile from the pfile; until the Oracle bug is fixed, this can be either external to ASM or in another ASM diskgroup that does not have a sector size of 4KB.
SQL> create spfile='/u01/app/oracle/product/12.1.0/dbhome_1/dbs/spfileSBDB1.ora' from pfile='/home/oracle/sbdb1newpfile';

File created.
And created an init.ora file pointing to the spfile:
[oracle@sandep1 dbs]$ more initSBDB1.ora
SPFILE='/u01/app/oracle/product/12.1.0/dbhome_1/dbs/spfileSBDB1.ora'
The Oracle instance then started as expected:
SQL> startup
ORACLE instance started.

Total System Global Area 9.8615E+10 bytes
Fixed Size                  4661112 bytes
Variable Size            7784628360 bytes
Database Buffers         9.0194E+10 bytes
Redo Buffers              631664640 bytes
Database mounted.
Database opened.
Checking the diskgroups also showed the correct sector size, this time from the database instance:
SQL> select name, SECTOR_SIZE from v$asm_diskgroup;

NAME                           SECTOR_SIZE
------------------------------ -----------
X25E                                   512
SSD910                                4096
Since we are interested in redo, I took a look at the default blocksize used for the redo logs; by default this was 4KB:
SQL> select group#, blocksize from v$log;

    GROUP#  BLOCKSIZE
---------- ----------
         1       4096
         2       4096
         3       4096
I could also see what happens when I create new logfiles without specifying a block size.
SQL> alter database add logfile '+X25E' size 512m;

Database altered.

SQL> alter database add logfile '+SSD910' size 512m;

Database altered.
Oracle has correctly identified the underlying sector size and automatically created the logfile with the correct blocksize.
SQL> select group#, blocksize from v$log;

    GROUP#  BLOCKSIZE
---------- ----------
         1       4096
         2       4096
         3       4096
         4        512
         5       4096
I then deleted the existing redo log files and created four larger redo log files (so as not to checkpoint during a test) with 2 on X25-E and 2 on SSD 910.
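The sizing reasoning can be sketched with simple arithmetic: a log switch occurs when the current redo log fills, so to avoid a switch (and the checkpoint it triggers) mid-test, a single log file must hold all the redo one test run generates. The redo rate and test duration below are assumed figures for illustration, not measurements from this system:

```shell
# Each redo log must be at least as large as the redo written during one
# complete test run, otherwise a log switch and checkpoint occur mid-test.
REDO_MB_PER_SEC=100   # assumed redo generation rate
TEST_SECONDS=300      # assumed test duration
MIN_LOG_MB=$(( REDO_MB_PER_SEC * TEST_SECONDS ))
echo "each redo log should be at least ${MIN_LOG_MB} MB"
```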
In summary, I have configured an Oracle database with both data and redo on the SSD 910. I ensured that my disks were partitioned for correct alignment, selected an OS and database that are 4KB-sector aware, and configured ASM so that the disks were set up correctly and automatically in Oracle, without any additional underscore parameters or blocksize settings. I am now ready to run a few tests to compare performance.