
March 26, 2016

Quorum disks in Exadata

An Exadata quarter rack has two database servers and three storage cells. In a typical setup, such a system would have three ASM disk groups, say DATA, RECO and DBFS_DG. Usually the disk group DATA would be high redundancy and the other two disk groups would be normal redundancy. The high redundancy disk group guards against the simultaneous failure of two partner disks or the complete failure of two storage cells.

There is a high availability problem with this setup. The loss of two storage cells would bring down the clusterware, which would in turn bring down all databases in the cluster, even those with datafiles in the high redundancy disk group. This is because the clusterware voting disks would be in a normal redundancy disk group, and the loss of two storage cells would mean the loss of two of the three voting disks, that is, the majority. The voting disks cannot be placed in the high redundancy disk group because a high redundancy disk group needs five failgroups to hold voting files, and we cannot have a disk group with five failgroups in a quarter rack as we only have three storage cells.

Exadata software version 12.1.2.3.0 introduces the ability to create quorum disks on the database servers. Those quorum disks can then be added to the high redundancy disk group as additional failgroups, making it suitable for the voting disks. With such a setup, we get truly highly available storage in a quarter rack, or in any elastic configuration with fewer than five storage cells.

In this post, I will show how to create quorum disks, add them to a high redundancy disk group, and migrate the clusterware voting disks to that high redundancy disk group.

No high redundancy disk groups

Given the high availability limitation described above, my quarter rack does not even have a high redundancy disk group, and the voting disks are in the normal redundancy disk group DBFS_DG.

[grid@exadb01 ~]$ asmcmd lsdg
State   Type   ... Voting_files Name
MOUNTED NORMAL ... N            DATA/
MOUNTED NORMAL ... N            RECO/
MOUNTED NORMAL ... Y            DBFS_DG/
[grid@exadb01 ~]$

To achieve my goal, I will recreate DBFS_DG as a high redundancy disk group and add quorum disks to it. Let's see what else is in that disk group.

[grid@exadb01 ~]$ asmcmd find DBFS_DG "*"
+DBFS_DG/ASM/
+DBFS_DG/ASM/PASSWORD/
+DBFS_DG/ASM/PASSWORD/pwdasm.256.885726993
+DBFS_DG/ENT1/
+DBFS_DG/ENT1/DATAFILE/
+DBFS_DG/ENT1/DATAFILE/DBFS_TS.278.904476591
+DBFS_DG/EXACLUSTER/
+DBFS_DG/EXACLUSTER/ASMPARAMETERFILE/
+DBFS_DG/EXACLUSTER/ASMPARAMETERFILE/REGISTRY.253.885726993
+DBFS_DG/EXACLUSTER/OCRFILE/
+DBFS_DG/EXACLUSTER/OCRFILE/REGISTRY.255.885726995
+DBFS_DG/_MGMTDB/
...
+DBFS_DG/_MGMTDB/CONTROLFILE/
+DBFS_DG/_MGMTDB/CONTROLFILE/Current.260.885727877
+DBFS_DG/_MGMTDB/DATAFILE/
+DBFS_DG/_MGMTDB/DATAFILE/SYSAUX.257.885727813
+DBFS_DG/_MGMTDB/DATAFILE/SYSTEM.258.885727823
+DBFS_DG/_MGMTDB/DATAFILE/UNDOTBS1.259.885727839
...
+DBFS_DG/orapwasm
[grid@exadb01 ~]$

From the above, disk group DBFS_DG holds the ASM password file, the ASM spfile, the Oracle Cluster Registry (OCR) and the Grid Infrastructure management repository database (MGMTDB). As I want to recreate the disk group, I need to move those files out to a temporary location, recreate the disk group, and then put the files back. I also want to achieve all this without downtime or interruption to any of the services in the cluster.

Disk group DBFS_DG might also hold a Database File System (DBFS) tablespace, or an ADVM volume used for an ASM Cluster File System (ACFS). If that is the case in your environment and you would like to follow my example, you will need to take care of those as well.
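
A quick way to check is to list any ADVM volumes and any datafiles sitting in the disk group. This is only a sketch of the kind of checks I would run; the output depends entirely on your environment:

[grid@exadb01 ~]$ asmcmd volinfo -a                       # ADVM volumes (used by ACFS), across all disk groups
[grid@exadb01 ~]$ asmcmd find --type datafile DBFS_DG "*" # database datafiles, e.g. a DBFS tablespace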

Move the management repository database out of DBFS_DG

To move the MGMTDB database out of DBFS_DG, I will simply drop it now and recreate it later.

Stop and disable the ora.crf (Cluster Health Monitor) resource on both nodes, as it writes its data into the management repository.

[root@exadb01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl stop res ora.crf -init
CRS-2673: Attempting to stop 'ora.crf' on 'exadb01'
CRS-2677: Stop of 'ora.crf' on 'exadb01' succeeded

[grid@exadb02 ~]$ /u01/app/12.1.0.2/grid/bin/crsctl stop res ora.crf -init
CRS-2673: Attempting to stop 'ora.crf' on 'exadb02'
CRS-2677: Stop of 'ora.crf' on 'exadb02' succeeded

[root@exadb01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl modify res ora.crf -attr ENABLED=0 -init

[root@exadb02 ~]# /u01/app/12.1.0.2/grid/bin/crsctl modify res ora.crf -attr ENABLED=0 -init
[root@exadb02 ~]#

Drop the MGMTDB database.

[grid@exadb01 ~]$ $ORACLE_HOME/bin/dbca -silent -deleteDatabase -sourceDB -MGMTDB
Connecting to database
4% complete
9% complete
...
Deleting instance and datafiles
76% complete
100% complete
Look at the log file "/u01/app/grid/cfgtoollogs/dbca/_mgmtdb.log" for further details.
[grid@exadb01 ~]$

Move the OCR out of DBFS_DG

There is no move option for the OCR, so I add it to disk group RECO first and then delete it from DBFS_DG:

[root@exadb01 ~]# /u01/app/12.1.0.2/grid/bin/ocrconfig -add +RECO

[root@exadb01 ~]# /u01/app/12.1.0.2/grid/bin/ocrconfig -delete +DBFS_DG

[root@exadb01 ~]# /u01/app/12.1.0.2/grid/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
   Version                  :          4
   Total space (kbytes)     :     409568
   Used space (kbytes)      :       2260
   Available space (kbytes) :     407308
   ID                       :  602969544
   Device/File Name         :    +RECO
      Device/File integrity check succeeded
      Device/File not configured
      Device/File not configured
      Device/File not configured
      Device/File not configured
   Cluster registry integrity check succeeded
   Logical corruption check succeeded
[root@exadb01 ~]#

Move the voting disks out of DBFS_DG

Move the voting disks to disk group RECO.

[root@exadb01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl replace votedisk +RECO
Successful addition of voting disk c9b15f37b5eb4f4fbfc2b1290cac9fed.
Successful addition of voting disk be60fc32642c4f1cbf2168b284535bf3.
Successful addition of voting disk 18ee9eec92514f3cbf095ce37e4a77b6.
Successful deletion of voting disk aa6c599c8a284f04bfd48eb4acff83ff.
Successful deletion of voting disk f7db1ade96044f78bf141334417b0ab9.
Successful deletion of voting disk d671a59bec2f4fb6bf60842c099419ca.
Successfully replaced voting disk group with +RECO.
CRS-4266: Voting file(s) successfully replaced

[root@exadb01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   c9b15f37b5eb4f4fbfc2b1290cac9fed (o/192.168.1.7/RECO_CD_00_exacel03) [RECO]
2. ONLINE   be60fc32642c4f1cbf2168b284535bf3 (o/192.168.1.6/RECO_CD_07_exacel02) [RECO]
3. ONLINE   18ee9eec92514f3cbf095ce37e4a77b6 (o/192.168.1.5/RECO_CD_00_exacel01) [RECO]
Located 3 voting disk(s).
[root@exadb01 ~]#

Move the ASM password file out of DBFS_DG

Move the password file to disk group RECO.

[grid@exadb01 ~]$ asmcmd pwmove --asm +DBFS_DG/orapwASM +RECO/orapwASM
moving +DBFS_DG/orapwASM -> +RECO/orapwASM
Use of uninitialized value $errorflag in string eq at /u01/app/12.1.0.2/grid/lib/asmcmdpasswd.pm line 1306.
Use of uninitialized value $errorflag in string eq at /u01/app/12.1.0.2/grid/lib/asmcmdpasswd.pm line 1307.
...

[grid@exadb01 ~]$ asmcmd ls -l +RECO/orapwASM
Type      Redund  Striped  Time             Sys  Name
PASSWORD  HIGH    COARSE   MAR 18 15:00:00  N    orapwASM => +RECO/ASM/PASSWORD/pwdasm.2227.906822477
[grid@exadb01 ~]$

Tell the clusterware that the ASM password file has been moved.

[grid@exadb01 ~]$ /u01/app/12.1.0.2/grid/bin/srvctl modify asm -pwfile +RECO/orapwASM

[grid@exadb01 ~]$ /u01/app/12.1.0.2/grid/bin/srvctl config asm
ASM home:
Password file: +RECO/orapwASM
ASM listener: LISTENER
[grid@exadb01 ~]$
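
As an optional sanity check, the ASM instance should still be able to list the users in the relocated password file. A query like this (my own quick check, not a required step) confirms it:

[grid@exadb01 ~]$ sqlplus / as sysasm

SQL> select username, sysasm from v$pwfile_users;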

Move the ASM spfile out of DBFS_DG

Move the ASM spfile to disk group RECO.

[grid@exadb01 ~]$ asmcmd spmove +DBFS_DG/EXACLUSTER/ASMPARAMETERFILE/REGISTRY.253.885726993 +RECO
ORA-15056: additional error message
ORA-17502: ksfdcre:4 Failed to create file +RECO/REGISTRY.253.885726993
ORA-15177: cannot operate on system aliases
ORA-06512: at line 7 (DBD ERROR: OCIStmtExecute)
[grid@exadb01 ~]$

No luck. This has been an issue ever since the spmove command was introduced, and it still hasn't been fixed. I have to do it the old-fashioned way.

[grid@exadb01 ~]$ sqlplus / as sysasm

SQL> create pfile='/tmp/initASM.ora' from spfile;

File created.

SQL> create spfile='+RECO' from pfile='/tmp/initASM.ora';

File created.

SQL>

Check that the ASM spfile has been moved.

[grid@exadb01 ~]$ asmcmd find --type asmparameterfile RECO "*"
+RECO/EXACLUSTER/ASMPARAMETERFILE/REGISTRY.253.906822885

Verify that the GPNP profile has been updated with the new location of the ASM spfile.

[grid@exadb01 ~]$ gpnptool get -o-
...
SPFile="+RECO/EXACLUSTER/ASMPARAMETERFILE/registry.253.906822885"
...
Success.
[grid@exadb01 ~]$

Drop disk group DBFS_DG

With everything out of the disk group, I can now drop it.

[grid@exadb01 ~]$ asmcmd dropdg -r DBFS_DG
[grid@exadb01 ~]$

Recreate DBFS_DG

Recreate the disk group DBFS_DG as a high redundancy disk group.

[grid@exadb01 ~]$ sqlplus / as sysasm

SQL> create diskgroup DBFS_DG
high redundancy
disk 'o/*/DBFS*'
attribute
'compatible.asm'          = '12.1.0.2',
'compatible.rdbms'        = '12.1.0.2',
'cell.smart_scan_capable' = 'true',
'au_size'                 = '4M';

Diskgroup created.

SQL>

Note that I cannot put the voting disks into DBFS_DG just yet.

[root@exadb01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl replace votedisk +DBFS_DG
Failed to create voting files on disk group DBFS_DG.
Change to configuration failed, but was successfully rolled back.
CRS-4000: Command Replace failed, or completed with errors.
[root@exadb01 ~]#

Review the crsctl trace file to see why that failed.

[root@exadb01 ~]# view /u01/app/grid/diag/crs/exadb01/crs/trace/crsctl_146226.trc
...
ORA-15274: Not enough failgroups (5) to create voting files
ORA-06512: at line 4
...

We have only three failgroups in disk group DBFS_DG, one per storage cell, while voting files in a high redundancy disk group need five, so the voting disks cannot be placed in that disk group yet.
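
If you want to confirm the failgroup count yourself, a query along these lines against the ASM instance does the job (an ad-hoc check on my part, not a required step):

[grid@exadb01 ~]$ sqlplus / as sysasm

SQL> select count(distinct failgroup) as failgroups
       from v$asm_disk
      where group_number = (select group_number
                              from v$asm_diskgroup
                             where name = 'DBFS_DG');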

Create quorum disks

Create quorum disk configurations on exadb01 and exadb02.

[root@exadb01 ~]# /opt/oracle.SupportTools/quorumdiskmgr --create --config --owner=grid --group=oinstall --network-iface-list="bondib0"
[Info] Successfully created iface exadata_bondib0 with iface.net_ifacename bondib0
[Success] Successfully created quorum disk configurations
[root@exadb01 ~]#

[root@exadb02 ~]# /opt/oracle.SupportTools/quorumdiskmgr --create --config --owner=grid --group=oinstall --network-iface-list="bondib0"
[Info] Successfully created iface exadata_bondib0 with iface.net_ifacename bondib0
[Success] Successfully created quorum disk configurations
[root@exadb02 ~]#

Create the iSCSI targets on exadb01 and exadb02 for the ASM disk group DBFS_DG, and make the targets visible to both database servers.

[root@exadb01 ~]# /opt/oracle.SupportTools/quorumdiskmgr --create --target --asm-disk-group=DBFS_DG --visible-to="192.168.1.1, 192.168.1.2"
[Success] Successfully created target iqn.2015-05.com.oracle:QD_DBFS_DG_exadb01.
[root@exadb01 ~]#

[root@exadb02 ~]# /opt/oracle.SupportTools/quorumdiskmgr --create --target --asm-disk-group=DBFS_DG --visible-to="192.168.1.1, 192.168.1.2"
[Success] Successfully created target iqn.2015-05.com.oracle:QD_DBFS_DG_exadb02.
[root@exadb02 ~]#

Note that the "quorumdiskmgr --create --target"  has created a new 128MB logical volume in volume group VGExaDb.

[root@exadb01 ~]# lvdisplay
...
 --- Logical volume ---
 LV Path                /dev/VGExaDb/LVDbVdexadb01DBFS_DG
 LV Name                LVDbVdexadb01DBFS_DG
 VG Name                VGExaDb
 LV UUID                wD64b1-se5K-mQat-AY7G-82l3-d0AT-73WNbx
 LV Write Access        read/write
 LV Creation host, time exadb01.au.oracle.com, 2016-03-18 16:10:15 +1100
 LV Status              available
 # open                 2
 LV Size                128.00 MiB
 Current LE             32
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           252:5
[root@exadb01 ~]#

[root@exadb02 ~]# lvdisplay
...
 --- Logical volume ---
 LV Path                /dev/VGExaDb/LVDbVdexadb02DBFS_DG
 LV Name                LVDbVdexadb02DBFS_DG
 VG Name                VGExaDb
 LV UUID                UmHSaR-qdnO-xFHq-5nAO-7r6T-cKnb-2ZYydk
 LV Write Access        read/write
 LV Creation host, time exadb02.au.oracle.com, 2016-03-18 16:17:35 +1100
 LV Status              available
 # open                 1
 LV Size                128.00 MiB
 Current LE             32
 Segments               1
 Allocation             inherit
 Read ahead sectors     auto
 - currently set to     256
 Block device           252:5
[root@exadb02 ~]#

Create the disk devices.

[root@exadb01 ~]# /opt/oracle.SupportTools/quorumdiskmgr --create --device --target-ip-list="192.168.1.1, 192.168.1.2"
[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.1.1
[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.1.2
[root@exadb01 ~]#

[root@exadb02 ~]# /opt/oracle.SupportTools/quorumdiskmgr --create --device --target-ip-list="192.168.1.1, 192.168.1.2"
[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.1.1
[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.1.2
[root@exadb02 ~]#

List the devices.

[root@exadb01 ~]# /opt/oracle.SupportTools/quorumdiskmgr --list --device
Device path: /dev/exadata_quorum/QD_DBFS_DG_exadb01
Host name: exadb01
ASM disk group name: DBFS
Size: 128 MB

Device path: /dev/exadata_quorum/QD_DBFS_DG_exadb02
Host name: exadb02
ASM disk group name: DBFS
Size: 128 MB
[root@exadb01 ~]#

Note that those devices are linked to /dev/dm-6 and /dev/dm-7.

[root@exadb01 ~]# ls -l /dev/exadata_quorum
total 0
lrwxrwxrwx 1 root root 7 Mar 18 17:11 QD_DBFS_DG_exadb01 -> ../dm-6
lrwxrwxrwx 1 root root 7 Mar 18 17:11 QD_DBFS_DG_exadb02 -> ../dm-7
[root@exadb01 ~]#

[root@exadb02 ~]# ls -l /dev/exadata_quorum
total 0
lrwxrwxrwx 1 root root 7 Mar 18 17:11 QD_DBFS_DG_exadb01 -> ../dm-6
lrwxrwxrwx 1 root root 7 Mar 18 17:11 QD_DBFS_DG_exadb02 -> ../dm-7
[root@exadb02 ~]#

Add quorum disks to DBFS_DG

Use these new disk devices as quorum disks for disk group DBFS_DG.

[grid@exadb01 ~]$ sqlplus / as sysasm

SQL> select label, path from v$asm_disk where path like '/dev%';

LABEL              PATH
------------------ --------------------------------------
QD_DBFS_DG_exadb01 /dev/exadata_quorum/QD_DBFS_DG_exadb01
QD_DBFS_DG_exadb02 /dev/exadata_quorum/QD_DBFS_DG_exadb02

SQL> alter diskgroup DBFS_DG add quorum failgroup exadb01 disk '/dev/exadata_quorum/QD_DBFS_DG_exadb01';

Diskgroup altered.

SQL> alter diskgroup DBFS_DG add quorum failgroup exadb02 disk '/dev/exadata_quorum/QD_DBFS_DG_exadb02';

Diskgroup altered.

SQL>

Note that I did not have to adjust the ASM_DISKSTRING parameter; ASM was still able to see the new disks.
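
If you want to double-check the discovery string, asmcmd dsget (or "show parameter asm_diskstring" from SQL*Plus) displays the current value; I am showing only the command here, as the value will differ between environments:

[grid@exadb01 ~]$ asmcmd dsget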

Move the voting disks back into DBFS_DG

Now that disk group DBFS_DG has five failgroups (three storage cells plus two quorum disks), I can put the voting disks into that disk group.

[grid@exadb01 ~]$ crsctl replace votedisk +DBFS_DG
Successful addition of voting disk 1f3f4d6f9b324f62bf9a2ad076eb4b12.
Successful addition of voting disk 9fcd28990d6e4f02bff4c798eb32957c.
Successful addition of voting disk 5f87648eb4184fafbf8b32d780e3e224.
Successful addition of voting disk 40167c1baba24f0dbf3d2cccef8d1fa3.
Successful addition of voting disk b9af8801f5d14fc3bf88129d7b26ed48.
Successful deletion of voting disk 3ba4aacb47d64fb6bfb376460a66e0b0.
Successful deletion of voting disk 338533827f8c4fc5bfe811270339e015.
Successful deletion of voting disk 2a33194bff5a4fa2bf860e3e4b615572.
Successfully replaced voting disk group with +DBFS_DG.
CRS-4266: Voting file(s) successfully replaced

[grid@exadb01 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   1f3f4d6f9b324f62bf9a2ad076eb4b12 (o/192.168.1.7/DBFS_DG_CD_09_exacel03) [DBFS_DG]
2. ONLINE   4e0441648bef4fc7bf5133fbf87f1a01 (o/192.168.1.6/DBFS_DG_CD_03_exacel02) [DBFS_DG]
3. ONLINE   00b7d8bad6234fa3bf38d4a7bad6f23b (o/192.168.1.5/DBFS_DG_CD_08_exacel01) [DBFS_DG]
4. ONLINE   490481157a0e4f70bf8dc27313557931 (/dev/exadata_quorum/QD_DBFS_DG_exadb01) [DBFS_DG]
5. ONLINE   f8bb5e9f4cd14fd9bf2b3319ac380563 (/dev/exadata_quorum/QD_DBFS_DG_exadb02) [DBFS_DG]
Located 5 voting disk(s).
[grid@exadb01 ~]$

Put OCR, ASM password file and ASM spfile back into DBFS_DG

The disk group is empty (well, apart from the voting disks).

[grid@exadb01 ~]$ asmcmd find DBFS_DG "*"
[grid@exadb01 ~]$
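
The voting files do not show up as regular files in asmcmd find, but they are flagged at the disk level, so a query like the following (just a quick confirmation, nothing more) shows which disks in DBFS_DG carry them:

[grid@exadb01 ~]$ sqlplus / as sysasm

SQL> select path, voting_file
       from v$asm_disk
      where group_number = (select group_number
                              from v$asm_diskgroup
                             where name = 'DBFS_DG');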

Put the OCR back into DBFS_DG.

[root@exadb01 ~]# /u01/app/12.1.0.2/grid/bin/ocrconfig -add +DBFS_DG
[root@exadb01 ~]# /u01/app/12.1.0.2/grid/bin/ocrconfig -delete +RECO
[root@exadb01 ~]#

Put the ASM password file back into DBFS_DG.

[grid@exadb01 ~]$ asmcmd pwmove --asm +RECO/orapwASM +DBFS_DG/orapwASM
moving +RECO/orapwASM -> +DBFS_DG/orapwASM
Use of uninitialized value $errorflag in string eq at /u01/app/12.1.0.2/grid/lib/asmcmdpasswd.pm line 1306.
...
[grid@exadb01 ~]$ srvctl modify asm -pwfile +DBFS_DG/orapwASM
[grid@exadb01 ~]$

Put the ASM spfile back into DBFS_DG.

[grid@exadb01 ~]$ sqlplus / as sysasm

SQL> create pfile='/tmp/init_ASM.ora' from spfile;
File created.

SQL> create spfile='+DBFS_DG' from pfile='/tmp/init_ASM.ora';
File created.

SQL>

Recreate the management repository database

Use dbca in silent mode to do this.

[grid@exadb01 ~]$ $ORACLE_HOME/bin/dbca -silent -createDatabase -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName +DBFS_DG -datafileJarLocation $ORACLE_HOME/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -oui_internal

Registering database with Oracle Grid Infrastructure
5% complete
Copying database files
7% complete
...
Creating and starting Oracle instance
43% complete
...
100% complete
Look at the log file "/u01/app/grid/cfgtoollogs/dbca/_mgmtdb/_mgmtdb0.log" for further details.
[grid@exadb01 ~]$

Enable and start the ora.crf resource

[root@exadb01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl modify res ora.crf -attr ENABLED=1 -init
[root@exadb01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'exadb01'
CRS-2676: Start of 'ora.crf' on 'exadb01' succeeded
[root@exadb01 ~]#

[root@exadb02 ~]# /u01/app/12.1.0.2/grid/bin/crsctl modify res ora.crf -attr ENABLED=1 -init
[root@exadb02 ~]# /u01/app/12.1.0.2/grid/bin/crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'exadb02'
CRS-2676: Start of 'ora.crf' on 'exadb02' succeeded
[root@exadb02 ~]#

The management repository database is now running on node 1.

[grid@exadb01 ~]$ ps -ef | grep pmon
grid  41256      1  0 09:52 ?        00:00:00 asm_pmon_+ASM1
grid     154078      1  0 11:22 ?        00:00:00 mdb_pmon_-MGMTDB
...
[grid@exadb01 ~]$
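
Alternatively, srvctl can report on the management database directly; something like this should show it running on one of the nodes:

[grid@exadb01 ~]$ srvctl status mgmtdb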

Verify that all files are back in DBFS_DG.

[grid@exadb01 ~]$ asmcmd find DBFS_DG "*"
+DBFS_DG/ASM/
+DBFS_DG/ASM/PASSWORD/
+DBFS_DG/ASM/PASSWORD/pwdasm.256.907065195
+DBFS_DG/EXACLUSTER/
+DBFS_DG/EXACLUSTER/ASMPARAMETERFILE/
+DBFS_DG/EXACLUSTER/ASMPARAMETERFILE/REGISTRY.253.907065421
+DBFS_DG/EXACLUSTER/OCRFILE/
+DBFS_DG/EXACLUSTER/OCRFILE/REGISTRY.255.907065619
+DBFS_DG/_MGMTDB/
+DBFS_DG/_MGMTDB/CONTROLFILE/
+DBFS_DG/_MGMTDB/CONTROLFILE/Current.260.907068103
+DBFS_DG/_MGMTDB/DATAFILE/
+DBFS_DG/_MGMTDB/DATAFILE/SYSAUX.257.907068037
+DBFS_DG/_MGMTDB/DATAFILE/SYSTEM.258.907068049
+DBFS_DG/_MGMTDB/DATAFILE/UNDOTBS1.259.907068065
+DBFS_DG/_MGMTDB/ONLINELOG/
+DBFS_DG/_MGMTDB/ONLINELOG/group_1.261.907068105
+DBFS_DG/_MGMTDB/ONLINELOG/group_2.262.907068105
+DBFS_DG/_MGMTDB/ONLINELOG/group_3.263.907068105
+DBFS_DG/_MGMTDB/PARAMETERFILE/
+DBFS_DG/_MGMTDB/PARAMETERFILE/spfile.265.907068117
+DBFS_DG/_MGMTDB/TEMPFILE/
+DBFS_DG/_MGMTDB/TEMPFILE/TEMP.264.907068109
+DBFS_DG/orapwasm
[grid@exadb01 ~]$

Database files still in normal redundancy disk group

As noted at the start of this post, all my database files are in a normal redundancy disk group. The last step in making this cluster truly highly available would be to convert disk group DATA from normal to high redundancy. As this post is about quorum disks, I will show that in a separate post.

Conclusion

With the introduction of the quorum disk feature in Exadata software version 12.1.2.3.0, we can now have a proper high availability setup in Exadata quarter racks, or in any Exadata elastic configuration with fewer than five storage cells.

In addition to Exadata software version 12.1.2.3.0, you will need Grid Infrastructure 12.1.0.2 with bundle patch 12.1.0.2.160119 plus the patches for bugs 22682752 and 22722476, or with bundle patch 12.1.0.2.160419.