7-6 Mirror Disk
When the data MCCS requires cannot be shared among the nodes in a cluster, such as with NAS or DAS storage, a node may not have the latest data after a failover. In such cases, where no external shared disk is available, a mirror disk is used: the data set is replicated between two nodes by a replication component. MCCS provides the replication function of DRBD from LINBIT, and the Mirror Disk agent manages this replicated data set.
The agent operates according to the state and role of the mirror disk. Changes to the state and role of the mirror disk are recorded in the system event log, and this information is delivered to MCCS through the event log monitor module.
The MCCS event module starts when the MCCS service starts. DRBD operates by creating a mirror set for volumes between the two nodes.
The primary server has the source volume and the secondary server has the target volume, which is an exact replica of the source volume.
Clients can read and write only on the source volume; changed blocks of the volume are replicated to the target volume over a TCP/IP network connection. Meanwhile, the target volume is locked and read/write access is not allowed. This preserves data integrity by preventing use of the target volume.
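In DRBD terms, the mirror set described above corresponds to a resource definition naming the two nodes, their backing volumes, and the mirror network. A minimal sketch (the resource name, host names, device paths, and addresses below are hypothetical, not values MCCS generates):

```
resource r0 {
  protocol A;                   # asynchronous mirror mode (see Mirror Mode)
  on node1 {                    # primary: holds the source volume
    device    /dev/drbd0;       # virtual volume presented to clients
    disk      /dev/sdb1;        # backing (source) volume
    address   10.0.0.1:7789;    # mirror network address and port
    meta-disk internal;
  }
  on node2 {                    # secondary: holds the target volume
    device    /dev/drbd0;
    disk      /dev/sdb1;        # backing (target) volume
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```

Clients mount and access only `/dev/drbd0` on the primary; DRBD forwards changed blocks to the peer while keeping the target volume locked.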
The figure below illustrates mirroring configured between two servers.
[Figure] Mirroring Configuration
Mirror Mode
Mirror mode supports asynchronous, semi-synchronous, and synchronous mirroring schemes. Understanding the advantages and disadvantages of each scheme is essential to operating the mirror disk correctly.
Async Mode
With asynchronous mirroring, each write is captured and a copy of it is made. That copy is queued to be transmitted to the target system as soon as the network allows. Meanwhile, the original write request is committed to the underlying storage device and control is immediately returned to the application that initiated the write. At any given time there may be write transactions waiting in the queue to be sent to the target machine, but these writes reach the target volume in time order, so the data on the target volume is always a valid snapshot of the source volume at some point in time. Should the source system fail, the target system may not have received all of the writes that were queued, but the data that has reached the target volume is valid and usable.
Semi-sync Mode
With semi-synchronous mirroring, each write is captured and transmitted to the target system. The local write completes on the source as soon as the replication packet has reached the target. Normally no writes are lost on failover, but writes may be lost if both nodes fail simultaneously.
Sync Mode
With synchronous mirroring, each write is captured and transmitted to the target system to be written on the target volume at the same time that the write is committed to the underlying storage device on the source system. Once both the local and target writes are complete, the write request is acknowledged as complete and control is returned to the application that initiated the write.
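In DRBD, these three mirror modes correspond to the replication protocols A, B, and C, selected with the `protocol` option in the resource configuration. A sketch (the resource name is hypothetical):

```
resource r0 {
  # Exactly one protocol is active per resource:
  protocol A;   # Async mode: local write completes once the packet
                #   is handed to the local TCP send buffer
  # protocol B; # Semi-sync mode: local write completes once the
  #             #   replication packet has reached the peer
  # protocol C; # Sync mode: local write completes only after the
  #             #   peer has written the block to its disk
}
```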
Adding
Add the mirror disk resource to a group.
MCCS for Linux supports DRBD, an open-source replication solution, for the mirror disk. Therefore, DRBD 8.3.13 must be installed beforehand.
The file system created on the mirror disk must leave 128 MB of free space. This 128 MB is reserved to store the meta data that manages the replicated volume; the actual meta data size varies with the size of the volume.
Please refer to the DRBD meta data size calculation for more details.
When using DRBD together with LVM, only DRBD on LVM is supported; LVM on DRBD is not supported.
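The DRBD user's guide gives an exact formula for the size of internal meta data. A sketch of the calculation in shell, assuming a hypothetical 100 GiB volume:

```shell
# DRBD meta data size formula (DRBD 8.3 user's guide), in 512-byte sectors:
#   meta_sectors = ceil(data_sectors / 262144) * 8 + 72
data_gib=100                                    # hypothetical volume size
data_sectors=$((data_gib * 1024 * 1024 * 2))    # 2 sectors per KiB
meta_sectors=$(( (data_sectors + 262143) / 262144 * 8 + 72 ))
echo "meta data: $((meta_sectors / 2)) KiB"     # ~3236 KiB for 100 GiB
```

The result, a few MiB for a 100 GiB volume, fits comfortably within the 128 MB reserved above.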
Select a group → right click → 'Add Resource'.
Select 'MirrorDisk' from Resource Type lists and click 'Next' button.
Enter the mirror volume to be used as a mirror disk, the mount point, and the source server.
The virtual volume, mirror network address, and mirror ports are filled in automatically.
[Figure] Mirror Disk Added
Under 'Additional Settings', the Mirror Type option is available. The default is Asynchronous mode.
[Figure] Mirror Disk Additional Settings
When you click the OK button, the following warning message is displayed; click the OK button if the information is correct.
If you selected the wrong server, click the Cancel button.
Click the 'Finish' button to add the mirror disk.
You can immediately check the result in the management web console.
Deleting
Select the resource → right click → 'Delete Resource'.
(A resource that is currently online, or whose mirrored volume has a replication program configured on it, cannot be deleted.)
Click 'Delete Resource', and a confirmation message about deleting the resource appears.
[Figure] Check resource view
Click 'OK', and a confirmation message about deleting the mirror disk appears.
[Figure] Deleting mirror disk view
Click 'OK', and the mirror disk is deleted. The deleted resource immediately disappears from the management web console.
Status
The following table explains the state transitions of the mirror disk resource caused by agent commands. The commands are assumed to be issued by a user.
Mirror disk agent: manages the mirror disk. The replication program must be installed.
| Status | Agent command | Description | Note |
|---|---|---|---|
| Online: the source volume can be accessed and a write test succeeds. | Offline | If the role of the mirror resource is online, unmount (umount) the mirror disk from the mount point; the role of the mirror disk is demoted to secondary. | |
| | Monitoring | MCCS continuously handles events of the replication service. | |
| Offline: except for the online and fault states, the resource is always offline. | Online | The operation performed is determined by the role of the mirror disk on the node. <Target volume> If the mirror disk is not defined, it is treated as a failure without any operation. | |
| | Monitoring | Refer to the description of monitoring above. | |
| Fault: displayed if a write test fails while online, or an attempt to go online fails. *Failover is deactivated in this state. | Online | Refer to the online command above. | |
| | Offline | If the role of the mirror resource is online, unmount (umount) the mirror volume from the mount point; the role of the mirror disk is demoted to secondary. | |
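The offline action in the table (unmount, then demote to secondary) maps to standard DRBD administration commands. An illustrative transcript, assuming a resource named `r0` mounted at a hypothetical `/mnt/mirror` (these are example values, not names MCCS generates):

```
# Take the mirror disk offline on the current primary
umount /mnt/mirror        # unmount the mirror volume from the mount point
drbdadm secondary r0      # demote the mirror disk's role to secondary

# Bring it online on the other node (its role becomes primary)
drbdadm primary r0
mount /dev/drbd0 /mnt/mirror
```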