Starting with Oracle Database version 11.2, an ASM disk group can be used to host one or more cluster file systems, known as Oracle ASM Cluster File Systems (Oracle ACFS). This functionality is achieved by creating special volume files inside the ASM disk group, which are exposed to the operating system as block devices. The file systems are then created on those block devices.
This post is about the rebalancing, mirroring and extent management of ACFS volume files.
The environment used for the examples:
* 64-bit Oracle Linux 5.4, in Oracle Virtual Box
* Oracle Restart and ASM version 11.2.0.3.0 - 64bit
* ASMLib/oracleasm version 2.1.7
Set up ACFS volumes
As this is an Oracle Restart (single instance) environment, I have to load the ADVM/ACFS drivers manually (as the root user).
# acfsload start
ACFS-9391: Checking for existing ADVM/ACFS installation.
ACFS-9392: Validating ADVM/ACFS installation files for operating system.
ACFS-9393: Verifying ASM Administrator setup.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9154: Loading 'oracleoks.ko' driver.
ACFS-9154: Loading 'oracleadvm.ko' driver.
ACFS-9154: Loading 'oracleacfs.ko' driver.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed
#
Create a disk group to hold ASM cluster file systems.
$ sqlplus / as sysasm
SQL> create diskgroup ACFS
disk 'ORCL:ASMDISK5', 'ORCL:ASMDISK6'
attribute 'COMPATIBLE.ASM' = '11.2', 'COMPATIBLE.ADVM' = '11.2';
Diskgroup created.
SQL>
While it is possible and supported to have a disk group that holds both database files and ACFS volume files, I recommend a separate disk group for ACFS volumes. This provides role/function separation and potential performance benefits for the database files.
Check the allocation unit (AU) sizes for all disk groups.
SQL> select group_number "Group#", name "Name", allocation_unit_size "AU size"
from v$asm_diskgroup_stat;
Group# Name AU size
---------- -------- ----------
1 DATA 1048576
2 ACFS 1048576
SQL>
Note the default AU size (1MB) for both disk groups. I will refer to this later, when discussing the extent sizes for the volume files.
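Since ADVM volumes require both COMPATIBLE.ASM and COMPATIBLE.ADVM to be at least 11.2, it may also be worth verifying the attributes we set at disk group creation time. A query along these lines should do (group number 2 for disk group ACFS, per the output above):
SQL> select name "Name", value "Value"
from v$asm_attribute
where group_number = 2
and name like 'compatible%';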
Create some volumes in disk group ACFS.
$ asmcmd volcreate -G ACFS -s 4G VOL1
$ asmcmd volcreate -G ACFS -s 2G VOL2
$ asmcmd volcreate -G ACFS -s 1G VOL3
Get the volume info.
$ asmcmd volinfo -a
Diskgroup Name: ACFS
Volume Name: VOL1
Volume Device: /dev/asm/vol1-142
State: ENABLED
Size (MB): 4096
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage:
Mountpath:
Volume Name: VOL2
Volume Device: /dev/asm/vol2-142
State: ENABLED
Size (MB): 2048
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage:
Mountpath:
Volume Name: VOL3
Volume Device: /dev/asm/vol3-142
State: ENABLED
Size (MB): 1024
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage:
Mountpath:
$
Note that the volumes are automatically enabled after creation. After a server restart, however, we would need to manually load the ADVM/ACFS drivers (acfsload start) and enable the volumes (asmcmd volenable -a), as shown below.
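For reference, that post-restart sequence would look like this (driver load as the root user, volume enable as the ASM owner):
# acfsload start
$ asmcmd volenable -a
Any ACFS file systems would then need to be mounted as well (more on that later in this post).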
ASM files for ACFS support
For each volume, ASM creates a volume file. In a redundant disk group, each volume will also have a dirty region logging (DRL) file.
Get some info about our volume files.
SQL> select file_number "File#", volume_name "Volume", volume_device "Device", size_mb "MB", drl_file_number "DRL#"
from v$asm_volume;
File# Volume Device MB DRL#
------ ------ ----------------- ----- ----
256 VOL1 /dev/asm/vol1-142 4096 257
259 VOL2 /dev/asm/vol2-142 2048 258
261 VOL3 /dev/asm/vol3-142 1024 260
SQL>
In addition to the volume names, device names and sizes, this shows ASM file numbers 256, 259 and 261 for the volume files, and ASM file numbers 257, 258 and 260 for the associated DRL files.
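The file numbers can be cross-checked against V$ASM_FILE, which also reports the file type; to the best of my knowledge, the volume files show up with type ASMVOL and the DRL files with type ASMVDRL:
SQL> select file_number "File#", type "Type", bytes/1024/1024 "MB"
from v$asm_file
where group_number = 2
order by 1;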
Volume file extents
Get the extent distribution info for one of the volume files.
SQL> select xnum_kffxp "Extent", au_kffxp "AU", disk_kffxp "Disk"
from x$kffxp
where group_kffxp=2 and number_kffxp=261
order by 1,2;
Extent AU Disk
---------- ---------- ----------
0 6256 0
0 6256 1
1 6264 0
1 6264 1
2 6272 1
2 6272 0
3 6280 0
3 6280 1
...
127 7272 0
127 7272 1
2147483648 6252 0
2147483648 6252 1
2147483648 4294967294 65534
259 rows selected.
SQL>
The first thing to note is that each extent is mirrored, as the volume is in a normal redundancy disk group.
We also see that volume file 261 has 128 extents. As the volume size is 1GB, each extent is 8MB, i.e. 8 AUs. The point here is that volume files have their own extent size, unlike standard ASM files, which inherit the (initial) extent size from the disk group AU size.
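The extent count is easy to confirm directly from X$KFFXP, by counting the distinct extent numbers and leaving out the special extent with the large number seen above:
SQL> select count(distinct xnum_kffxp) "Extents"
from x$kffxp
where group_kffxp=2 and number_kffxp=261
and xnum_kffxp <> 2147483648;
Extents
----------
128
SQL>
And indeed, 1024 MB / 128 extents = 8 MB per extent.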
ASM-based cluster file systems
We can now use the volumes to create ASM cluster file systems and let everyone use them (this needs to be done as the root user, of course):
# mkdir /acfs1
# mkdir /acfs2
# mkdir /acfs3
# chmod 777 /acfs?
# /sbin/mkfs -t acfs /dev/asm/vol1-142
mkfs.acfs: version = 11.2.0.3.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/vol1-142
mkfs.acfs: volume size = 4294967296
mkfs.acfs: Format complete.
# /sbin/mkfs -t acfs /dev/asm/vol2-142
mkfs.acfs: version = 11.2.0.3.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/vol2-142
mkfs.acfs: volume size = 2147483648
mkfs.acfs: Format complete.
# /sbin/mkfs -t acfs /dev/asm/vol3-142
mkfs.acfs: version = 11.2.0.3.0
mkfs.acfs: on-disk version = 39.0
mkfs.acfs: volume = /dev/asm/vol3-142
mkfs.acfs: volume size = 1073741824
mkfs.acfs: Format complete.
# mount -t acfs /dev/asm/vol1-142 /acfs1
# mount -t acfs /dev/asm/vol2-142 /acfs2
# mount -t acfs /dev/asm/vol3-142 /acfs3
# mount | grep acfs
/dev/asm/vol1-142 on /acfs1 type acfs (rw)
/dev/asm/vol2-142 on /acfs2 type acfs (rw)
/dev/asm/vol3-142 on /acfs3 type acfs (rw)
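To double-check the file system details (size, free space, backing volume device), acfsutil can be used as well; for example, as the root user (output omitted):
# /sbin/acfsutil info fs /acfs1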
Copy some files into the new file systems.
$ cp diag/asm/+asm/+ASM/trace/* /acfs1
$ cp diag/rdbms/db/DB/trace/* /acfs1
$ cp oradata/DB/datafile/* /acfs1
$ cp diag/asm/+asm/+ASM/trace/* /acfs2
$ cp oradata/DB/datafile/* /acfs2
$ cp fra/DB/backupset/* /acfs3
Check the used space.
$ df -h /acfs?
Filesystem Size Used Avail Use% Mounted on
/dev/asm/vol1-142 4.0G 1.3G 2.8G 31% /acfs1
/dev/asm/vol2-142 2.0G 1.3G 797M 62% /acfs2
/dev/asm/vol3-142 1.0G 577M 448M 57% /acfs3
ACFS disk group rebalance
Let's add one disk to the ACFS disk group and monitor the rebalance operation.
SQL> alter diskgroup ACFS add disk 'ORCL:ASMDISK4';
Diskgroup altered.
SQL>
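While the rebalance runs, the overall progress can be tracked with the standard query against V$ASM_OPERATION:
SQL> select group_number "Group#", operation "Op", state "State", power "Power", est_minutes "Est min"
from v$asm_operation;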
Get the ARB0 PID from the ASM alert log.
$ tail alert_+ASM.log
Sat Feb 15 12:44:53 2014
SQL> alter diskgroup ACFS add disk 'ORCL:ASMDISK4'
NOTE: Assigning number (2,2) to disk (ORCL:ASMDISK4)
...
NOTE: starting rebalance of group 2/0x80486fe8 (ACFS) at power 1
SUCCESS: alter diskgroup ACFS add disk 'ORCL:ASMDISK4'
Starting background process ARB0
Sat Feb 15 12:45:00 2014
ARB0 started with pid=27, OS id=10767
...
And monitor the rebalance by tailing the ARB0 trace file.
$ tail -f ./+ASM_arb0_10767.trc
*** ACTION NAME:() 2014-02-15 12:45:00.151
ARB0 relocating file +ACFS.1.1 (2 entries)
ARB0 relocating file +ACFS.2.1 (1 entries)
ARB0 relocating file +ACFS.3.1 (42 entries)
ARB0 relocating file +ACFS.3.1 (1 entries)
ARB0 relocating file +ACFS.4.1 (2 entries)
ARB0 relocating file +ACFS.5.1 (1 entries)
ARB0 relocating file +ACFS.6.1 (1 entries)
ARB0 relocating file +ACFS.7.1 (1 entries)
ARB0 relocating file +ACFS.8.1 (1 entries)
ARB0 relocating file +ACFS.9.1 (1 entries)
ARB0 relocating file +ACFS.256.839587727 (120 entries)
*** 2014-02-15 12:46:58.905
ARB0 relocating file +ACFS.256.839587727 (117 entries)
ARB0 relocating file +ACFS.256.839587727 (1 entries)
ARB0 relocating file +ACFS.257.839587727 (17 entries)
ARB0 relocating file +ACFS.258.839590377 (17 entries)
*** 2014-02-15 12:47:50.744
ARB0 relocating file +ACFS.259.839590377 (119 entries)
ARB0 relocating file +ACFS.259.839590377 (1 entries)
ARB0 relocating file +ACFS.260.839590389 (17 entries)
ARB0 relocating file +ACFS.261.839590389 (60 entries)
ARB0 relocating file +ACFS.261.839590389 (1 entries)
...
We see that the rebalance is performed per ASM file. This is exactly the same behaviour as with database files - ASM performs the rebalance on a per-file basis. The ASM metadata files (file numbers 1-9) get rebalanced first, followed by volume file 256, DRL file 257, and so on.
From this we see that ASM rebalances the volume files (and other ASM files), not the individual OS files in the associated file system(s).
Disk online operation in an ACFS disk group
When an ASM disk goes offline, ASM creates a staleness registry and a staleness directory, to track the extents that should be modified on the offline disk. Once the disk comes back online, ASM uses that information to perform a fast mirror resync.
That functionality is not available for volume files in ASM version 11.2. Instead, to bring the disk online, ASM rebuilds the entire content of that disk. This is why the disk online performance, for disk groups with volume files, is inferior to that of disk groups with standard database files.
The fast mirror resync functionality for volume files is available in ASM version 12.1 and later.
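For completeness, this is the offline/online cycle that triggers the behaviour described above, using the disk and disk group from the earlier examples (on 11.2, expect the online step to take a while, as it rebuilds the entire disk contents):
SQL> alter diskgroup ACFS offline disk ASMDISK4;
SQL> alter diskgroup ACFS online disk ASMDISK4;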
Conclusion
ASM disk groups can be used to host general purpose cluster file systems. ASM does this by creating volume files inside the disk groups, which are exposed to the operating system as block devices.
The existing ASM disk group mirroring functionality (normal and high redundancy) can be used to protect the user files at the file system level. ASM does this by mirroring the extents of the volume files, in the same fashion as for any other ASM file. Volume files have their own extent sizes, unlike standard database files, which inherit the (initial) extent size from the disk group AU size.
The rebalance operation, in an ASM disk group that hosts ASM cluster file system volumes, works on the volume files, not on the individual user files stored in the associated file system(s).