
January 12, 2012

ASM files number 12 and 254


The staleness directory (ASM file number 12) contains metadata to map the slots in the staleness registry to particular disks and ASM clients. The staleness registry (ASM file number 254) tracks allocation units that become stale while the disks are offline. This applies to normal and high redundancy disk groups with the attribute COMPATIBLE.RDBMS set to 11.1 or higher. The staleness metadata is created when needed, and grows to accommodate additional offline disks.
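
Whether a disk group qualifies can be checked with a query along these lines against the V$ASM_ATTRIBUTE view:

SQL> SELECT g.name "Disk group", a.value "compatible.rdbms"
FROM v$asm_diskgroup g, v$asm_attribute a
WHERE g.group_number=a.group_number
 and a.name='compatible.rdbms';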

When a disk goes offline, each RDBMS instance gets a slot in the staleness registry for that disk. The slot has a bit for each allocation unit in the offline disk. When an RDBMS instance issues a write targeted at the offline disk, it sets the corresponding bit in the staleness registry.

When a disk is brought back online, ASM copies the allocation units that have the staleness registry bit set from the mirrored extents. Because only allocation units that might have changed while the disk was offline are updated, bringing a disk online is much more efficient than dropping and re-adding the disk.
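
For completeness, the fast resync is triggered by bringing the disk back online with the ALTER DISKGROUP ONLINE command, either per disk or for all offline disks in the group:

SQL> alter diskgroup RECO online disk ASMDISK6;

SQL> alter diskgroup RECO online all;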

No stale disks

The staleness metadata structures are created as needed, which means the staleness directory and registry do not exist when all disks are online.

SQL> SELECT g.name "Disk group",
 g.group_number "Group#",
 d.disk_number "Disk#",
 d.name "Disk",
 d.mode_status "Disk status"
FROM v$asm_disk d, v$asm_diskgroup g
WHERE g.group_number=d.group_number and g.group_number<>0
ORDER BY 1, 2, 3;

Disk group       Group#      Disk# Disk         Disk status
------------ ---------- ---------- ------------ ------------
DATA                  1          0 ASMDISK1     ONLINE
                                 1 ASMDISK2     ONLINE
                                 2 ASMDISK3     ONLINE
RECO                  2          0 ASMDISK4     ONLINE
                                 1 ASMDISK5     ONLINE
                                 2 ASMDISK6     ONLINE

SQL> SELECT x.number_kffxp "File#",
 x.disk_kffxp "Disk#",
 x.xnum_kffxp "Extent",
 x.au_kffxp "AU",
 d.name "Disk name"
FROM x$kffxp x, v$asm_disk_stat d
WHERE x.group_kffxp=d.group_number
 and x.disk_kffxp=d.disk_number
 and x.number_kffxp in (12, 254)
ORDER BY 1, 2;

no rows selected

Stale disks

Staleness metadata is created when a disk goes offline, but only once there are I/O writes intended for the offline disk.

In the following example, I will offline a disk manually, with the ALTER DISKGROUP OFFLINE DISK command. But as far as the staleness metadata is concerned, it is created irrespective of how and why a disk goes offline.

SQL> alter diskgroup RECO offline disk ASMDISK6;

Diskgroup altered.

SQL> SELECT g.name "Disk group",
 g.group_number "Group#",
 d.disk_number "Disk#",
 d.name "Disk",
 d.mode_status "Disk status"
FROM v$asm_disk d, v$asm_diskgroup g
WHERE g.group_number=d.group_number and g.group_number=2
ORDER BY 1, 2, 3;

Disk group       Group#      Disk# Disk         Disk status
------------ ---------- ---------- ------------ ------------
RECO                  2          0 ASMDISK4     ONLINE
                                 1 ASMDISK5     ONLINE
                                 2 ASMDISK6     OFFLINE

The database keeps writing to this disk group, and after a while we see the staleness directory and staleness registry created for it:

SQL> SELECT x.number_kffxp "File#",
 x.disk_kffxp "Disk#",
 x.xnum_kffxp "Extent",
 x.au_kffxp "AU",
 d.name "Disk name"
FROM x$kffxp x, v$asm_disk_stat d
WHERE x.group_kffxp=d.group_number
 and x.disk_kffxp=d.disk_number
 and d.group_number=2
 and x.number_kffxp in (12, 254)
ORDER BY 1, 2;

     File#      Disk#     Extent         AU Disk name
---------- ---------- ---------- ---------- ------------------------------
        12          0          0         86 ASMDISK4
                    1          0        101 ASMDISK5
                    2          0 4294967294 ASMDISK6
       254          0          0         85 ASMDISK4
                    1          0        100 ASMDISK5
                    2          0 4294967294 ASMDISK6

Look inside

There is not much to see in the actual metadata. Even kfed struggles to recognise these types of metadata blocks :)

$ kfed read /dev/oracleasm/disks/ASMDISK4 aun=86 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           21 ; 0x002: *** Unknown Enum ***
...
kffdnd.bnode.incarn:                  1 ; 0x000: A=1 NUMM=0x0
kffdnd.bnode.frlist.number:  4294967295 ; 0x004: 0xffffffff
kffdnd.bnode.frlist.incarn:           0 ; 0x008: A=0 NUMM=0x0
kffdnd.overfl.number:        4294967295 ; 0x00c: 0xffffffff
kffdnd.overfl.incarn:                 0 ; 0x010: A=0 NUMM=0x0
kffdnd.parent.number:                 0 ; 0x014: 0x00000000
kffdnd.parent.incarn:                 1 ; 0x018: A=1 NUMM=0x0
kffdnd.fstblk.number:                 0 ; 0x01c: 0x00000000
kffdnd.fstblk.incarn:                 1 ; 0x020: A=1 NUMM=0x0
kfdsde.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfdsde.entry.hash:                    0 ; 0x028: 0x00000000
kfdsde.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfdsde.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfdsde.cid:                       +ASMR ; 0x034: length=5
kfdsde.indlen:                        1 ; 0x074: 0x0001
kfdsde.flags:                         0 ; 0x076: 0x0000
kfdsde.spare1:                        0 ; 0x078: 0x00000000
kfdsde.spare2:                        0 ; 0x07c: 0x00000000
kfdsde.indices[0]:                    0 ; 0x080: 0x00000000
kfdsde.indices[1]:                    0 ; 0x084: 0x00000000
kfdsde.indices[2]:                    0 ; 0x088: 0x00000000
...

$ kfed read /dev/oracleasm/disks/ASMDISK4 aun=85 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           20 ; 0x002: *** Unknown Enum ***
...
kfdsHdrB.clientId:           1297301881 ; 0x000: 0x4d534179
kfdsHdrB.incarn:                      0 ; 0x004: 0x00000000
kfdsHdrB.dskNum:                      2 ; 0x008: 0x0002
kfdsHdrB.ub2spare:                    0 ; 0x00a: 0x0000
ub1[0]:                               0 ; 0x00c: 0x00
ub1[1]:                               0 ; 0x00d: 0x00
ub1[2]:                               0 ; 0x00e: 0x00
ub1[3]:                               0 ; 0x00f: 0x00
ub1[4]:                               0 ; 0x010: 0x00
ub1[5]:                               0 ; 0x011: 0x00
ub1[6]:                               0 ; 0x012: 0x00
ub1[7]:                              16 ; 0x013: 0x10
ub1[8]:                               0 ; 0x014: 0x00
...

Not much to see, as these are just bitmaps.

Conclusion

The staleness directory and staleness registry are supporting metadata structures for the disk offline and fast mirror resync feature introduced in ASM version 11.1. The staleness directory contains metadata that maps the slots in the staleness registry to particular disks and ASM clients. The staleness registry tracks allocation units that become stale while disks are offline. This feature is relevant to normal and high redundancy disk groups only.

January 10, 2012

ASM files number 10 and 11


ASM metadata file number 10 is the ASM user directory and file number 11 is the ASM group directory. These are supporting structures for the ASM file access control feature.

ASM file access control can be used to restrict file access to specific ASM clients (typically databases), based on the operating system effective user identification number of a database home owner.

This information is externalized via V$ASM_USER, V$ASM_USERGROUP and V$ASM_USERGROUP_MEMBER views.

ASM users and groups

To make use of the ASM file access control feature, we need to have the operating system users and groups in place. We would then add them to the ASM disk group(s) via the ALTER DISKGROUP ADD USERGROUP command. I have skipped that part in the demo to keep the focus on the ASM user and group directories, but a sketch of those commands follows the OS user listing below.

Here are the operating system users set up on this system:

$ id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1020(asmadmin),1021(asmdba),1031(dba)
$ id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1021(asmdba),1031(dba)
$ id oracle1
uid=1102(oracle1) gid=1033(dba1) groups=1033(dba1)
$ id oracle2
uid=1103(oracle2) gid=1034(dba2) groups=1034(dba2)
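
The disk group setup I skipped would look roughly like this (a sketch only; it assumes ASM file access control is being set up from scratch for disk group DATA):

SQL> alter diskgroup DATA set attribute 'access_control.enabled' = 'TRUE';

SQL> alter diskgroup DATA add user 'grid', 'oracle', 'oracle1', 'oracle2';

SQL> alter diskgroup DATA add usergroup 'GRIDTEAM' with member 'grid';

SQL> alter diskgroup DATA add usergroup 'DBATEAM1' with member 'oracle';

SQL> alter diskgroup DATA add usergroup 'DBATEAM2' with member 'oracle1', 'oracle2';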

And here are ASM users and groups I set up for my disk groups.

SQL> SELECT u.group_number "Disk group#",
 u.os_id "OS ID",
 u.os_name "OS user",
 u.user_number "ASM user#",
 g.usergroup_number "ASM group#",
 g.name "ASM user group"
FROM v$asm_user u, v$asm_usergroup g, v$asm_usergroup_member m
WHERE u.group_number=g.group_number and u.group_number=m.group_number
 and u.user_number=m.member_number
 and g.usergroup_number=m.usergroup_number
ORDER BY 1, 2;

Disk group# OS ID OS user ASM user# ASM group# ASM user group
----------- ----- ------- --------- ---------- --------------
          1 1100  grid            1          3 GRIDTEAM
            1101  oracle          2          1 DBATEAM1
            1102  oracle1         3          2 DBATEAM2
            1103  oracle2         4          2 DBATEAM2
          2 1101  oracle          2          1 DBATEAM1

Look inside

Get allocation units for ASM user and group directories in disk group number 1.

SQL> SELECT x.number_kffxp "File#",
 x.disk_kffxp "Disk#",
 x.xnum_kffxp "Extent",
 x.au_kffxp "AU",
 d.name "Disk name"
FROM x$kffxp x, v$asm_disk_stat d
WHERE x.group_kffxp=d.group_number
 and x.disk_kffxp=d.disk_number
 and d.group_number=1
 and x.number_kffxp in (10, 11)
ORDER BY 1, 2;

     File#      Disk#     Extent         AU Disk name
---------- ---------- ---------- ---------- ------------------------------
        10          0          0       2139 ASMDISK5
                    1          0       2139 ASMDISK6
        11          0          0       2140 ASMDISK5
                    1          0       2140 ASMDISK6

The user directory metadata has one block per user entry, where the block number corresponds to the user number (v$asm_user.user_number). We have four users, with user numbers 1-4, so those should be in user directory blocks 1-4. Let's have a look.

$ kfed read /dev/oracleasm/disks/ASMDISK5 aun=2139 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           24 ; 0x002: KFBTYP_USERDIR
...
kfzude.user:                       1100 ; 0x038: length=4
...

So block 1 is for the user with OS user ID 1100. This agrees with the output from v$asm_user above. For the other blocks we have:

$ let b=1
$ while (( $b <= 4 ))
 do
 kfed read /dev/oracleasm/disks/ASMDISK5 aun=2139 blkn=$b | grep kfzude.user
 let b=b+1
 done

kfzude.user:                       1100 ; 0x038: length=4
kfzude.user:                       1101 ; 0x038: length=4
kfzude.user:                       1102 ; 0x038: length=4
kfzude.user:                       1103 ; 0x038: length=4

As expected, this shows the four operating system user IDs in the ASM user directory.

Group directory entries are also one per block, where the block number matches the ASM user group number. Let's have a look:

$ let b=1
$ while (( $b <= 3 ))
 do
 kfed read /dev/oracleasm/disks/ASMDISK5 aun=2140 blkn=$b | grep kfzgde.name
 let b=b+1
done

kfzgde.name:                   DBATEAM1 ; 0x03c: length=8
kfzgde.name:                   DBATEAM2 ; 0x03c: length=8
kfzgde.name:                   GRIDTEAM ; 0x03c: length=8

This shows ASM group names as specified for this disk group.

Conclusion

ASM user and group directories are supporting structures for the ASM file access control feature, introduced in ASM version 11.2. This information is externalized via the V$ASM_USER, V$ASM_USERGROUP and V$ASM_USERGROUP_MEMBER views.

January 9, 2012

ASM Attributes Directory


The ASM attributes directory - ASM metadata file number 9 - contains the information about disk group attributes. The attributes directory exists only in disk groups with COMPATIBLE.ASM (an attribute!) set to 11.1 or higher.

Disk group attributes were introduced in ASM version 11.1[1] and can be used to fine-tune the disk group properties. It is worth noting that some attributes can be set only at disk group creation time (e.g. AU_SIZE), while others can be set at any time (e.g. DISK_REPAIR_TIME). Some attribute values are stored in the disk header (e.g. AU_SIZE), while others (e.g. COMPATIBLE.ASM) can be stored either in the partnership and status table or in the disk header, depending on the ASM version.
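
A sketch of both cases - AU_SIZE at creation time, DISK_REPAIR_TIME whenever needed (the disk group NEWDG and its disks are made up for illustration):

SQL> create diskgroup NEWDG normal redundancy
disk 'ORCL:ASMDISK7', 'ORCL:ASMDISK8'
attribute 'AU_SIZE' = '4M', 'COMPATIBLE.ASM' = '11.2';

SQL> alter diskgroup NEWDG set attribute 'DISK_REPAIR_TIME' = '8.0h';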

Public attributes

Most attributes are stored in the attributes directory and are externalized via V$ASM_ATTRIBUTE view. Let's have a look at disk group attributes for all my disk groups.

SQL> SELECT g.name "Group", a.name "Attribute", a.value "Value"
FROM v$asm_diskgroup g, v$asm_attribute a
WHERE g.group_number=a.group_number and a.name not like 'template%';

Group Attribute               Value
----- ----------------------- ----------------
ACFS  disk_repair_time        3.6h
      au_size                 1048576
      access_control.umask    026
      access_control.enabled  TRUE
      cell.smart_scan_capable FALSE
      compatible.advm         11.2.0.0.0
      compatible.rdbms        11.2
      compatible.asm          11.2.0.0.0
      sector_size             512
DATA  access_control.enabled  TRUE
      cell.smart_scan_capable FALSE
      compatible.rdbms        11.2
      compatible.asm          11.2.0.0.0
      sector_size             512
      au_size                 1048576
      disk_repair_time        3.6h
      access_control.umask    026
SQL>

One attribute value we can modify at any time is the disk repair timer. Let's use asmcmd to do that for disk group DATA.

$ asmcmd setattr -G DATA disk_repair_time '8.0h'

$ asmcmd lsattr -lm disk_repair_time
Group_Name  Name              Value  RO  Sys
ACFS        disk_repair_time  3.6h   N   Y
DATA        disk_repair_time  8.0h   N   Y
$

Hidden attributes

As mentioned in the introduction, the attributes directory is the ASM metadata file number 9. Let's locate the attributes directory, in disk group number 2:

SQL> SELECT x.disk_kffxp "Disk#",
x.xnum_kffxp "Extent",
x.au_kffxp "AU",
d.name "Disk name"
FROM x$kffxp x, v$asm_disk_stat d
WHERE x.group_kffxp=d.group_number
and x.disk_kffxp=d.disk_number
and d.group_number=2
and x.number_kffxp=9
ORDER BY 1, 2;

Disk# Extent   AU Disk name
----- ------ ---- ---------
   0      0 1146 ASMDISK1
   1      0 1143 ASMDISK2
   2      0 1150 ASMDISK3
SQL>

Now check out the attributes with the kfed tool.

$ kfed read /dev/oracleasm/disks/ASMDISK3 aun=1150 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           23 ; 0x002: KFBTYP_ATTRDIR
...
kfede[0].entry.incarn:                1 ; 0x024: A=1 NUMM=0x0
kfede[0].entry.hash:                  0 ; 0x028: 0x00000000
kfede[0].entry.refer.number: 4294967295 ; 0x02c: 0xffffffff
kfede[0].entry.refer.incarn:          0 ; 0x030: A=0 NUMM=0x0
kfede[0].name:         disk_repair_time ; 0x034: length=16
kfede[0].value:                    8.0h ; 0x074: length=4
...

Fields kfede[i] will have the disk group attribute names and values. Let's look at all of them:

$ kfed read /dev/oracleasm/disks/ASMDISK3 aun=1150 | egrep "name|value"
kfede[0].name:         disk_repair_time ; 0x034: length=16
kfede[0].value:                    8.0h ; 0x074: length=4
kfede[1].name:       _rebalance_compact ; 0x1a8: length=18
kfede[1].value:                    TRUE ; 0x1e8: length=4
kfede[2].name:            _extent_sizes ; 0x31c: length=13
kfede[2].value:                  1 4 16 ; 0x35c: length=6
kfede[3].name:           _extent_counts ; 0x490: length=14
kfede[3].value:   20000 20000 214748367 ; 0x4d0: length=21
kfede[4].name:                        _ ; 0x604: length=1
kfede[4].value:                       0 ; 0x644: length=1
kfede[5].name:                  au_size ; 0x778: length=7
kfede[5].value:               ; 0x7b8: length=9
kfede[6].name:              sector_size ; 0x8ec: length=11
kfede[6].value:               ; 0x92c: length=9
kfede[7].name:               compatible ; 0xa60: length=10
kfede[7].value:               ; 0xaa0: length=9
kfede[8].name:                     cell ; 0xbd4: length=4
kfede[8].value:                   FALSE ; 0xc14: length=5
kfede[9].name:           access_control ; 0xd48: length=14
kfede[9].value:                   FALSE ; 0xd88: length=5

This gives us a glimpse into the hidden (underscore) disk group attributes. We can see that the value of the _REBALANCE_COMPACT is TRUE. That is the attribute to do with the compacting phase of the disk group rebalance. We also see how the extent size will grow (_EXTENT_SIZES) - initial size will be 1 AU, then 4 AU and finally 16 AU. And the _EXTENT_COUNTS shows the breaking points for the extent size growth - first 20000 extents will be 1 AU in size, next 20000 will be 4 AU and the rest will be 16 AU.
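
To put rough numbers on that, with a 1 MB allocation unit the extent size breakpoints work out as below (a back-of-the-envelope check, assuming AU_SIZE of 1 MB):

SQL> SELECT 20000*1 "1 AU region (MB)",
 20000*4 "4 AU region (MB)",
 20000*1 + 20000*4 "Total at 2nd breakpoint (MB)"
FROM dual;

In other words, a file starts using 4 AU extents once it grows past roughly 20 GB, and 16 AU extents past roughly 100 GB.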

Conclusion

Disk group attributes can be used to fine tune the disk group properties. Most attributes are stored in the attributes directory and are externalized via V$ASM_ATTRIBUTE view. For details about the attributes please see the ASM Disk Group Attributes post.

[1] In ASM versions prior to 11.1 it was possible to create a disk group with a user-specified allocation unit size. That was done via the hidden ASM initialization parameter _ASM_AUSIZE. While technically not a disk group attribute, it served the same purpose as the AU_SIZE attribute in ASM version 11.1 and later.

January 8, 2012

ASM file number 8


The disk Used Space Directory (USD) - ASM file number 8 - maintains the number of allocation units (AU) used per zone, per disk in a disk group. The USD is split into a set of Used Space Entries (USE). Each USE maintains a counter of used AUs per zone for one disk. A disk zone can be either HOT or COLD.

This structure is version 11.2 specific and is relevant to the Intelligent Data Placement feature. The USD will be present in a newly created disk group in version 11.2 or when the ASM compatibility is advanced to 11.2.
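
For a disk group created with a lower compatibility, the ASM compatibility can be advanced - note this is a one-way change - with:

SQL> alter diskgroup DATA set attribute 'compatible.asm' = '11.2';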

Locating the used space directory

Let's get the allocation units for the used space directory - for all disk groups.

SQL> break on Group#
SQL> SELECT d.group_number "Group#",
 x.disk_kffxp "Disk#",
 x.xnum_kffxp "Extent",
 x.au_kffxp "AU",
 d.name "Disk name"
FROM x$kffxp x, v$asm_disk_stat d
WHERE x.group_kffxp=d.group_number
 and x.disk_kffxp=d.disk_number
 and x.number_kffxp=8
ORDER BY 1, 2;

 Group#  Disk#  Extent     AU Disk name
------- ------ ------- ------ ------------
      1      0       0     51 ASMDISK5
             1       0     51 ASMDISK6
      2      0       0     41 ASMDISK1
             2       0     39 ASMDISK3
             3       0     38 ASMDISK4

Check the disk used space allocation for all disks in all disk groups.

SQL> SELECT group_number "Group#",
 name "Disk name",
 hot_used_mb "Hot (MB)",
 cold_used_mb "Cold (MB)"
FROM v$asm_disk_stat
ORDER BY 1;

 Group# Disk name      Hot (MB)  Cold (MB)
------- ------------ ---------- ----------
      1 ASMDISK5              0       4187
        ASMDISK6              0       4187
      2 ASMDISK4              0       1138
        ASMDISK2              0       1135
        ASMDISK1              0       1139
        ASMDISK3              0       1144

The result shows that all space in all disks is allocated in the cold disk zones. Let's have a closer look at the used space directory with kfed.

$ kfed read /dev/oracleasm/disks/ASMDISK5 aun=51 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           26 ; 0x002: KFBTYP_USEDSPC
...
kfdusde[0].used[0].spare:             0 ; 0x000: 0x00000000
kfdusde[0].used[0].hi:                0 ; 0x004: 0x00000000
kfdusde[0].used[0].lo:             4134 ; 0x008: 0x00001026
kfdusde[0].used[1].spare:             0 ; 0x00c: 0x00000000
kfdusde[0].used[1].hi:                0 ; 0x010: 0x00000000
kfdusde[0].used[1].lo:                0 ; 0x014: 0x00000000
kfdusde[1].used[0].spare:             0 ; 0x018: 0x00000000
kfdusde[1].used[0].hi:                0 ; 0x01c: 0x00000000
kfdusde[1].used[0].lo:             4134 ; 0x020: 0x00001026
kfdusde[1].used[1].spare:             0 ; 0x024: 0x00000000
kfdusde[1].used[1].hi:                0 ; 0x028: 0x00000000
kfdusde[1].used[1].lo:                0 ; 0x02c: 0x00000000
kfdusde[2].used[0].spare:             0 ; 0x030: 0x00000000
kfdusde[2].used[0].hi:                0 ; 0x034: 0x00000000
kfdusde[2].used[0].lo:                0 ; 0x038: 0x00000000
kfdusde[2].used[1].spare:             0 ; 0x03c: 0x00000000
kfdusde[2].used[1].hi:                0 ; 0x040: 0x00000000
kfdusde[2].used[1].lo:                0 ; 0x044: 0x00000000
...

There are two disks in disk group number 1, so only the first two kfdusde entries are populated. And both show that all the space is allocated in the cold zone.

Check the used space directory entries for disk group 2.

$ kfed read /dev/oracleasm/disks/ASMDISK1 aun=41 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           26 ; 0x002: KFBTYP_USEDSPC
...
kfdusde[0].used[0].spare:             0 ; 0x000: 0x00000000
kfdusde[0].used[0].hi:                0 ; 0x004: 0x00000000
kfdusde[0].used[0].lo:             1092 ; 0x008: 0x00000444
kfdusde[0].used[1].spare:             0 ; 0x00c: 0x00000000
kfdusde[0].used[1].hi:                0 ; 0x010: 0x00000000
kfdusde[0].used[1].lo:                0 ; 0x014: 0x00000000
kfdusde[1].used[0].spare:             0 ; 0x018: 0x00000000
kfdusde[1].used[0].hi:                0 ; 0x01c: 0x00000000
kfdusde[1].used[0].lo:             1093 ; 0x020: 0x00000445
kfdusde[1].used[1].spare:             0 ; 0x024: 0x00000000
kfdusde[1].used[1].hi:                0 ; 0x028: 0x00000000
kfdusde[1].used[1].lo:                0 ; 0x02c: 0x00000000
kfdusde[2].used[0].spare:             0 ; 0x030: 0x00000000
kfdusde[2].used[0].hi:                0 ; 0x034: 0x00000000
kfdusde[2].used[0].lo:             1098 ; 0x038: 0x0000044a
kfdusde[2].used[1].spare:             0 ; 0x03c: 0x00000000
kfdusde[2].used[1].hi:                0 ; 0x040: 0x00000000
kfdusde[2].used[1].lo:                0 ; 0x044: 0x00000000
kfdusde[3].used[0].spare:             0 ; 0x048: 0x00000000
kfdusde[3].used[0].hi:                0 ; 0x04c: 0x00000000
kfdusde[3].used[0].lo:             1094 ; 0x050: 0x00000446
kfdusde[3].used[1].spare:             0 ; 0x054: 0x00000000
kfdusde[3].used[1].hi:                0 ; 0x058: 0x00000000
kfdusde[3].used[1].lo:                0 ; 0x05c: 0x00000000
kfdusde[4].used[0].spare:             0 ; 0x060: 0x00000000
kfdusde[4].used[0].hi:                0 ; 0x064: 0x00000000
kfdusde[4].used[0].lo:                0 ; 0x068: 0x00000000
kfdusde[4].used[1].spare:             0 ; 0x06c: 0x00000000
kfdusde[4].used[1].hi:                0 ; 0x070: 0x00000000
kfdusde[4].used[1].lo:                0 ; 0x074: 0x00000000
...

Disk group 2 has four disks and again all space is allocated in the cold disk zones.

Hot files

Let's create a disk group template for hot files.

SQL> alter diskgroup DATA add template HOTFILE attributes (HOT);

Diskgroup altered.

Note that this feature requires the disk group attribute COMPATIBLE.RDBMS to be at least 11.2.
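
This can be verified with asmcmd, the same way disk_repair_time was checked in the attributes directory post:

$ asmcmd lsattr -G DATA -lm compatible.rdbms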

Now create a datafile that will be allocated in the disks' hot zones.

SQL> create tablespace T1_HOT datafile '+DATA(HOTFILE)' size 50M;

Tablespace created.

Let's check the space allocation now, by running the last query again.

SQL> SELECT group_number "Group#",
 name "Disk name",
 hot_used_mb "Hot (MB)",
 cold_used_mb "Cold (MB)"
FROM v$asm_disk_stat
ORDER BY 1;

    Group# Disk name                        Hot (MB)  Cold (MB)
---------- ------------------------------ ---------- ----------
         1 ASMDISK5                                0       4187
           ASMDISK6                                0       4187
         2 ASMDISK4                               13       1152
           ASMDISK2                               12       1153
           ASMDISK1                               13       1152
           ASMDISK3                               13       1153

The result shows that 51 MB (50 MB for the datafile plus 1 MB for the file header) is now allocated in the hot zones, across all disks in the disk group.

Warm up a file

I can also move an existing datafile into the hot zone. Let's find all datafiles in disk group DATA.

$ asmcmd find --type datafile +DATA "*"
+DATA/BR/DATAFILE/EXAMPLE.269.769030517
+DATA/BR/DATAFILE/NOT_IMPORTANT.273.771795255
+DATA/BR/DATAFILE/SYSAUX.257.769030245
+DATA/BR/DATAFILE/SYSTEM.256.769030243
+DATA/BR/DATAFILE/T1_HOT.274.772054033
+DATA/BR/DATAFILE/TRIPLE_C.272.771794469
+DATA/BR/DATAFILE/TRIPLE_M.271.771793293
+DATA/BR/DATAFILE/UNDOTBS1.258.769030245
+DATA/BR/DATAFILE/USERS.259.769030245

Let's move the undo tablespace datafile into the hot zone.

SQL> alter diskgroup DATA modify file '+DATA/BR/DATAFILE/UNDOTBS1.258.769030245' attributes (HOT);

Diskgroup altered.

This action triggers a rebalance of disk group DATA, as the file extents have to be moved to the disks' hot regions. Once the rebalance completes, the last query shows more data in the hot region for the disks in disk group number 2.

SQL> SELECT group_number "Group#",
 name "Disk name",
 hot_used_mb "Hot (MB)",
 cold_used_mb "Cold (MB)"
FROM v$asm_disk_stat
ORDER BY 1;

    Group# Disk name                        Hot (MB)  Cold (MB)
---------- ------------------------------ ---------- ----------
         1 ASMDISK5                                0       4187
           ASMDISK6                                0       4187
         2 ASMDISK4                               40       1125
           ASMDISK2                               39       1126
           ASMDISK1                               39       1126
           ASMDISK3                               39       1127
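
While the rebalance is running, its progress can be tracked with the standard V$ASM_OPERATION view; a quick check looks like this:

SQL> SELECT group_number "Group#", operation, state, sofar, est_work, est_minutes
FROM v$asm_operation;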

Conclusion

The disk Used Space Directory (USD) - ASM file number 8 - maintains the number of allocation units (AU) used per zone, per disk in a disk group. It is a supporting metadata structure for the Intelligent Data Placement feature in ASM version 11.2. One handy use of this feature is controlling datafile placement in the disks' hot or cold zones.

January 7, 2012

ASM file number 7


ASM metadata file number 7 - volume directory - keeps track of files associated with ASM Dynamic Volume Manager (ADVM) volumes.

An ADVM volume device is constructed from an ASM dynamic volume. One or more ADVM volume devices may be configured within each disk group. ASM Cluster File System (ACFS) is layered on ASM through the ADVM interface. ASM dynamic volume manager is another client of ASM - the same way the database is. When a volume is opened, the corresponding ASM file is opened and ASM extents are sent to the ADVM driver.

There are two file types associated with ADVM volumes
  • ASMVOL – The volume file which is the container for the volume storage
  • ASMVDRL – The file that contains the volume's Dirty Region Logging (DRL) information. This file is required for re-silvering mirrors
Turn up the ADVM volume

It is not necessary to create a dedicated disk group for ADVM, but it does make sense to do so. That way we keep the database files separate from the ACFS files. Let's have a look at an example.

SQL> create diskgroup ACFS
disk 'ORCL:ASMDISK5', 'ORCL:ASMDISK6'
attribute 'COMPATIBLE.ASM' = '11.2', 'COMPATIBLE.ADVM' = '11.2';

Diskgroup created.

To be able to add volumes to a disk group, the attributes COMPATIBLE.ASM and COMPATIBLE.ADVM must be set to at least '11.2'. The ADVM/ACFS drivers also have to be loaded (this is always done in cluster environments, but it may have to be done manually in a single instance setup).
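
On Linux, a quick way to check for the drivers and, if needed, load them manually (run as root; acfsload ships in the Grid Infrastructure home, referred to here as $GRID_HOME):

# lsmod | grep oracle

# $GRID_HOME/bin/acfsload start

Once loaded, the lsmod output should include the oracleoks, oracleadvm and oracleacfs modules.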

I can now create a couple of volumes in this disk group.

$ asmcmd volcreate -G ACFS -s 2G ACFS_VOL1

$ asmcmd volcreate -G ACFS -s 2G ACFS_VOL2

$ asmcmd volinfo -a
Diskgroup Name: ACFS

 Volume Name: ACFS_VOL1
 Volume Device: /dev/asm/acfs_vol1-159
 State: ENABLED
 Size (MB): 2048
 Resize Unit (MB): 32
 Redundancy: MIRROR
 Stripe Columns: 4
 Stripe Width (K): 128
 Usage:
 Mountpath:

 Volume Name: ACFS_VOL2
 Volume Device: /dev/asm/acfs_vol2-159
 State: ENABLED
 Size (MB): 2048
 Resize Unit (MB): 32
 Redundancy: MIRROR
 Stripe Columns: 4
 Stripe Width (K): 128
 Usage:
 Mountpath:

$

Note that there are no mount paths associated with the volumes as I haven't used them yet.

Let's now look at the ADVM volume metadata. First find the allocation units of the volume directory.

SQL> SELECT x.xnum_kffxp "Extent",
x.au_kffxp "AU",
x.disk_kffxp "Disk #",
d.name "Disk name"
FROM x$kffxp x, v$asm_disk_stat d
WHERE x.group_kffxp=d.group_number
and x.disk_kffxp=d.disk_number
and x.group_kffxp=2
and x.number_kffxp=7
ORDER BY 1, 2;

    Extent         AU     Disk # Disk name
---------- ---------- ---------- ------------------------------
         0         53          1 ASMDISK6
         0         53          0 ASMDISK5

Use kfed to have a look at the actual metadata.

$ kfed read /dev/oracleasm/disks/ASMDISK5 aun=53 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           22 ; 0x002: KFBTYP_VOLUMEDIR
...
kfvvde.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfvvde.entry.hash:                    0 ; 0x028: 0x00000000
kfvvde.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfvvde.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfvvde.volnm:           ++AVD_DG_NUMBER ; 0x034: length=15
kfvvde.usage:                           ; 0x054: length=0
kfvvde.dgname:                          ; 0x074: length=0
kfvvde.clname:                          ; 0x094: length=0
kfvvde.mountpath:                       ; 0x0b4: length=0
kfvvde.drlinit:                       0 ; 0x4b5: 0x00
kfvvde.pad1:                          0 ; 0x4b6: 0x0000
kfvvde.volfnum.number:                0 ; 0x4b8: 0x00000000
kfvvde.volfnum.incarn:                0 ; 0x4bc: 0x00000000
kfvvde.drlfnum.number:                0 ; 0x4c0: 0x00000000
kfvvde.drlfnum.incarn:                0 ; 0x4c4: 0x00000000
kfvvde.volnum:                        0 ; 0x4c8: 0x0000
kfvvde.avddgnum:                    159 ; 0x4ca: 0x009f
kfvvde.extentsz:                      0 ; 0x4cc: 0x00000000
kfvvde.volstate:                      4 ; 0x4d0: D=0 C=0 R=1
...

That was block 0 of allocation unit 53. It only contains the marker for the ADVM volume directory (++AVD_DG_NUMBER). The actual volume information is in blocks 1 and up.

$ kfed read /dev/oracleasm/disks/ASMDISK5 aun=53 blkn=1 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           22 ; 0x002: KFBTYP_VOLUMEDIR
...
kfvvde.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfvvde.entry.hash:                    0 ; 0x028: 0x00000000
kfvvde.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfvvde.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfvvde.volnm:                 ACFS_VOL1 ; 0x034: length=9
kfvvde.usage:                           ; 0x054: length=0
kfvvde.dgname:                          ; 0x074: length=0
kfvvde.clname:                          ; 0x094: length=0
kfvvde.mountpath:                       ; 0x0b4: length=0
kfvvde.drlinit:                       0 ; 0x4b5: 0x00
kfvvde.pad1:                          0 ; 0x4b6: 0x0000
kfvvde.volfnum.number:              257 ; 0x4b8: 0x00000101
kfvvde.volfnum.incarn:        771971291 ; 0x4bc: 0x2e0358db
kfvvde.drlfnum.number:              256 ; 0x4c0: 0x00000100
kfvvde.drlfnum.incarn:        771971289 ; 0x4c4: 0x2e0358d9
kfvvde.volnum:                        1 ; 0x4c8: 0x0001
kfvvde.avddgnum:                    159 ; 0x4ca: 0x009f
kfvvde.extentsz:                      8 ; 0x4cc: 0x00000008
kfvvde.volstate:                      2 ; 0x4d0: D=0 C=1 R=0
...

$ kfed read /dev/oracleasm/disks/ASMDISK5 aun=53 blkn=2 | more
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           22 ; 0x002: KFBTYP_VOLUMEDIR
...
kfvvde.entry.incarn:                  1 ; 0x024: A=1 NUMM=0x0
kfvvde.entry.hash:                    0 ; 0x028: 0x00000000
kfvvde.entry.refer.number:   4294967295 ; 0x02c: 0xffffffff
kfvvde.entry.refer.incarn:            0 ; 0x030: A=0 NUMM=0x0
kfvvde.volnm:                 ACFS_VOL2 ; 0x034: length=9
kfvvde.usage:                           ; 0x054: length=0
kfvvde.dgname:                          ; 0x074: length=0
kfvvde.clname:                          ; 0x094: length=0
kfvvde.mountpath:                       ; 0x0b4: length=0
kfvvde.drlinit:                       0 ; 0x4b5: 0x00
kfvvde.pad1:                          0 ; 0x4b6: 0x0000
kfvvde.volfnum.number:              259 ; 0x4b8: 0x00000103
kfvvde.volfnum.incarn:        771971303 ; 0x4bc: 0x2e0358e7
kfvvde.drlfnum.number:              258 ; 0x4c0: 0x00000102
kfvvde.drlfnum.incarn:        771971301 ; 0x4c4: 0x2e0358e5
kfvvde.volnum:                        2 ; 0x4c8: 0x0002
kfvvde.avddgnum:                    159 ; 0x4ca: 0x009f
kfvvde.extentsz:                      8 ; 0x4cc: 0x00000008
kfvvde.volstate:                      2 ; 0x4d0: D=0 C=1 R=0
...

Block 1 of ASM metadata file 7 has the information about the first volume (kfvvde.volnm: ACFS_VOL1). Note that there are two files associated with that volume:
  • DRL file (kfvvde.drlfnum.number: 256)
  • Volume file (kfvvde.volfnum.number: 257)
Block 2 has the information about the second volume (kfvvde.volnm: ACFS_VOL2). There are also two files associated with that volume:
  • DRL file – kfvvde.drlfnum.number: 258
  • Volume file – kfvvde.volfnum.number: 259
As these are special files, they are not shown in the output of the 'asmcmd ls' command or when we query V$ASM_ALIAS. But they do show up in the V$ASM_FILE view.

SQL> SELECT file_number "File #", bytes/1024/1024 "Size (MB)", type
FROM v$asm_file
WHERE group_number=2;

    File #  Size (MB) TYPE
---------- ---------- ----------
       256         17 ASMVDRL
       257       2048 ASMVOL
       258         17 ASMVDRL
       259       2048 ASMVOL

Create ASM cluster file system

I can now use the volume device to create an ASM cluster file system (ACFS).

# /sbin/mkfs -t acfs /dev/asm/acfs_vol1-159
mkfs.acfs: version                   = 11.2.0.3.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/acfs_vol1-159
mkfs.acfs: volume size               = 2147483648
mkfs.acfs: Format complete.

# mkdir /acfs1

# mount -t acfs /dev/asm/acfs_vol1-159 /acfs1

# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
...
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
/dev/asm/acfs_vol1-159 on /acfs1 type acfs (rw)

$ asmcmd volinfo -G ACFS ACFS_VOL1
Diskgroup Name: ACFS

 Volume Name: ACFS_VOL1
 Volume Device: /dev/asm/acfs_vol1-159
 State: ENABLED
 Size (MB): 2048
 Resize Unit (MB): 32
 Redundancy: MIRROR
 Stripe Columns: 4
 Stripe Width (K): 128
 Usage: ACFS
 Mountpath: /acfs1

$

Let's see if the mount path info now shows up in the volume directory:

$ kfed read /dev/oracleasm/disks/ASMDISK6 aun=53 blkn=1 | grep mountpath
kfvvde.mountpath:                /acfs1 ; 0x0b4: length=6

It does, as expected.

Conclusion

One or more ADVM volume devices may be configured within each disk group. ASM Cluster File System (ACFS) is layered on ASM through the ADVM interface. ASM dynamic volume manager is another client of ASM - the same way the database is.

There are two internal file types associated with ASM volumes:
  • ASMVOL – The volume file which is the container for the volume storage
  • ASMVDRL – The file that contains the volume's Dirty Region Logging (DRL) information