7.11 Shared Disk DR
The shared disk DR resource replicates a shared disk in a cluster to an off-site DR node for disaster recovery purposes.
The shared disk DR configuration is composed of three nodes that manage the shared disk. The off-site DR node is the replication target, but it is not a member of the cluster.
In other words, MCCS performs no automatic operations such as failover, online, or offline on the off-site node, so human intervention is needed to recover from a disaster at that node.
Data is replicated from the source-primary node to the off-site DR node, while the source-secondary node, which cannot access the shared disk, recognizes the shared disk as having the "NONE" role.
Thus, the off-site DR node and the "NONE" node are locked and can neither read nor write data, which prevents data corruption.
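The access rule described above can be sketched as follows. This is a minimal illustration, not MCCS code; the role names follow the description above.

```python
# Sketch of the access rule described above: only the node holding the
# source role may read or write the shared disk; the "NONE" node and
# the off-site DR target are kept locked to prevent data corruption.
# This is an illustration, not MCCS code.

ACCESS_BY_ROLE = {
    "source": "read/write",   # source-primary node: disk is writable
    "target": "locked",       # off-site DR node: replica, locked
    "NONE":   "locked",       # source-secondary node: cannot access the shared disk
}

def can_write(role: str) -> bool:
    """Return True only for the node that owns the source role."""
    return ACCESS_BY_ROLE.get(role) == "read/write"
```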
[Figure] Mirroring Configuration
Mirror Mode
The mirror can operate in either asynchronous or synchronous mode. Understanding the trade-offs between synchronous and asynchronous mirroring is essential to operating the resource correctly.
Async mode
With asynchronous mirroring, each write is captured and a copy of it is made. That copy is queued to be transmitted to the target system as soon as the network allows. Meanwhile, the original write request is committed to the underlying storage device and control is immediately returned to the application that initiated the write. At any given time, there may be write transactions waiting in the queue to be sent to the target machine, but these writes reach the target volume in time order, so the data on the target volume is always a valid snapshot of the source volume at some point in time. Should the source system fail, the target system may not have received all of the writes that were queued up, but the data that has made it to the target volume is valid and usable.
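The behavior above can be sketched as a FIFO queue between the two volumes. This is an illustration of the concept, not the replication product's implementation.

```python
from collections import deque

# Sketch of asynchronous mirroring as described above: each write is
# committed locally and acknowledged immediately, while a copy waits
# in a FIFO queue to be delivered to the target in time order.
# Illustration only, not the replication product's implementation.

class AsyncMirror:
    def __init__(self):
        self.source = []          # committed writes on the source volume
        self.target = []          # writes that have reached the target volume
        self.queue = deque()      # copies waiting for the network

    def write(self, block):
        self.source.append(block)  # commit to the local storage device
        self.queue.append(block)   # queue the copy for transmission
        return "ack"               # control returns to the application at once

    def flush_one(self):
        """Simulate the network delivering the oldest queued write."""
        if self.queue:
            self.target.append(self.queue.popleft())

m = AsyncMirror()
for b in ["w1", "w2", "w3"]:
    m.write(b)
m.flush_one()
# Because delivery preserves time order, the target is always a prefix
# of the source, i.e. a valid snapshot at some earlier point in time.
assert m.target == m.source[:len(m.target)]
```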
Sync mode
With synchronous mirroring, each write is captured and transmitted to the target system to be written on the target volume at the same time that the write is committed to the underlying storage device on the source system. Once both the local and target writes are complete, the write request is acknowledged as complete and control is returned to the application that initiated the write.
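The contrast with the asynchronous case can be sketched in a few lines: the acknowledgement is withheld until both copies are written, so the two volumes never diverge. Again, an illustration only, not the product's implementation.

```python
# Sketch of synchronous mirroring as described above: the write is
# committed to the local storage device and to the target volume, and
# the application only receives the acknowledgement after BOTH writes
# are complete. Illustration only, not the product's implementation.

class SyncMirror:
    def __init__(self):
        self.source = []
        self.target = []

    def write(self, block):
        self.source.append(block)   # commit to the local storage device
        self.target.append(block)   # ...and to the target volume
        return "ack"                # acknowledged only after both writes

m = SyncMirror()
m.write("w1")
assert m.source == m.target         # the two volumes never diverge
```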
Example of Shared Disk DR Configuration
The following steps are recommended for managing a shared disk DR resource through the data replication software after installing MCCS.
Assume that drive letter S: is a shared disk DR resource in the cluster.
1. Install the data replication software on node DR if it has not been installed.
2. While both cluster nodes (node A and node B) are powered off, power on node B first.
3. Install MCCS on node B if it has not been installed, and reboot node B after the installation.
4. Add drive S: to the data replication program with the '%ExtMirrBase%\emcmd . setconfiguration S 256' command.
5. Reboot node B. It will come up with the S: drive locked.
6. After confirming that drive S: is locked on node B, power on node A.
7. Install MCCS on node A if it has not been installed, and reboot node A after the installation.
8. Node A will come up with the S: drive writable.
9. Use the MCCS console to create the shared disk DR resource from node A S: (source) to node DR S: (target).
10. Allow disk access from only one node at a time and keep the other nodes locked to prevent data corruption.
Adding
Add the shared disk DR resource as follows.
To add a resource from a group, select the group → right-click → 'Add Resource'. Alternatively, select Edit(E) from the main menu bar and then 'Add Resource', or click the 'Add Resource' icon on the toolbar.
Select 'SharedDisk DR' from the Resource Type list and click the 'Next' button. When the resource is added from the SharedDisk DR resource type name, this step is skipped.
Enter the resource name and select the drive letter of the shared disk in the cluster.
Enter the IP address used for data replication on the off-site DR node.
[Figure] Adding Shared Disk DR Resource
Under 'Additional Settings', the Mirror Type and Check Disk options are available. Click the 'OK' button.
[Figure] Defining mirror type and check disk option
When you click the OK button, the following warning message is displayed; click 'OK' if the information is correct.
[Figure] Alert popup message after clicking the OK button
Click the 'Finish' button to add the shared disk DR resource.
Deleting
Select the resource → right-click → delete the resource.
State
The following table explains the states of the shared disk DR resource and the agent commands available in each state.
| State | Agent command | Description | Note |
| --- | --- | --- | --- |
| Online | Offline | Locks the mirror disk using the LOCKVOLUME command. Neither the source nor the target disk can then be accessed from its node. | |
| | Monitoring | The state of the resource is updated first, then a disk write test is performed to determine the state of the resource. | |
| Offline: the source and target disks are not in a fault state, but are locked so that neither disk can be accessed. | Online | The command used is determined by the role of the mirror disk at the node. 1. If the mirror role is source, the disk is unlocked and becomes readable/writable. Unlocking is done by the 'UNLOCKVOLUME' command of the replication component; writing is enabled by changing the MountReadOnly registry value of the replication program to 0, and the value is changed back to 1 after writing becomes available. 2. If the mirror role is target or NONE, the SWITCHOVERVOLUME command is run. | |
| | Monitoring | Refer to the description of Monitoring above. | |
| Fault: displayed when a write test fails while online, or when an attempt to go online fails. | Online | Refer to the Online command described above. | Failover is disabled in this state. |
| | Offline | Locks the mirror disk using the LOCKVOLUME command. Neither the source nor the target disk can then be accessed from its node. | |
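The role-dependent dispatch of the Online command in the table can be sketched as follows. This is an illustration, not MCCS code; the command names are the ones the table mentions.

```python
# Sketch of the agent command dispatch described in the table above
# (illustration only, not MCCS code). The Online command issued
# depends on the role the mirror disk holds on the node; the Offline
# command always locks the volume.

def online_command(role: str) -> str:
    if role == "source":
        return "UNLOCKVOLUME"      # unlock the disk and make it read/writable
    if role in ("target", "NONE"):
        return "SWITCHOVERVOLUME"  # switch the mirror over before unlocking
    raise ValueError(f"unknown mirror role: {role}")

def offline_command(role: str) -> str:
    return "LOCKVOLUME"            # Offline locks the mirror disk on any role
```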