Troubleshooting
Split-Brain
Overview
A split-brain (SB) is the state in which two or more nodes act as the source node at the same time, typically as a result of an action by the administrator or by the management (HA) software of the replication cluster. SB occurs when the replication connection is lost and both nodes become source nodes at the same time without knowing each other's role and status. When an SB occurs, the replication cluster is split into two replication sets, putting it at risk of data loss. Upon recognizing an SB, the administrator must resolve it and restore normal replication using the following procedure.
Detect
FSR can internally detect whether both nodes are in an SB state. SB is identified through the RID exchange performed when the replication connection is established. If an SB is recognized, the replication connection is immediately disconnected and the node stands by in a standalone state. The following log is output when an SB occurs.
2019-12-19 14:50:03.629 WRN establishing error=split-brain compare=newer key=1 peer=node3 resource=r0 state=connected
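For example, past SB events can be located by searching the engine log for the split-brain keyword. The log file path below is an assumption for illustration; substitute the actual log location of your installation.
grep "split-brain" /var/log/fsr/fsr.log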
Resolve
Resolving an SB starts with deciding which of the two nodes to sacrifice. Once the victim node is determined, discard its data by running the following command on the victim node; the SB is then resolved by reconnecting to the other node.
fsradm connect --discard-my-data <res-id> <peer-node>
When the SB is resolved by establishing a connection with --discard-my-data, the victim node resynchronizes from the other node and restores the latest replica data set.
When multiple SBs occur, victim nodes do not synchronize with each other, so the situation is resolved by establishing a connection with --discard-my-data from every victim node.
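For example, using the resource and node names from the log above (r0, with node3 as the surviving source), the command on the victim node would look like the following; these names are illustrative only and must match your environment.
fsradm connect --discard-my-data r0 node3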
Fault
This section describes how to respond to the following failure situations:
- Disk failure
- File I/O errors in the FSR engine
Disk failure
Failures can occur on the replication target disk itself, for example when the volume on which the replication target resides is unintentionally unmounted during operation, or when the storage medium fails due to physical damage. In this case, the user must repair the replication target so that the volume device is up and running again. Once the manual repair is complete, replication should be restarted with a full synchronization.
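As a minimal first check on Linux, you can confirm whether the volume holding the replication target is still mounted and whether the underlying device is present; the mount point below is illustrative only.
# Check whether the replication target volume is still mounted (mount point is illustrative)
findmnt /opt/data
# Check that the underlying block device is still visible to the system
lsblk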
Monitoring disk health
FSR periodically monitors disk health to detect whether something is wrong with the disks. This monitoring is based on S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology), and its frequency can be specified as follows.
"disk"
: {
"health"
: {
"period"
:
10
}
},
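Disk health can also be checked manually with standard S.M.A.R.T. tooling; for example, on Linux the smartmontools package provides smartctl. The device name below is illustrative.
# Report the overall S.M.A.R.T. health assessment for a disk
smartctl -H /dev/sda
# Print the full set of S.M.A.R.T. attributes and self-test results
smartctl -a /dev/sda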
Engine file I/O errors
File I/O can fail in a variety of situations, including errors caused by file path issues or account permissions. Although I/O errors are not frequent, they do commonly arise during service operation from unintended changes in the environment or from applications that are not written to handle exceptional situations gracefully. When an I/O error occurs, the application is expected to handle the exception, and the subsequent behavior depends on the application. For this reason, file I/O errors in source-side applications are regarded as ordinary errors that can occur at any time, not as failures.
However, an error in file I/O performed by the fsr engine is a failure. If fsr cannot perform file I/O, mirroring is effectively impossible and replication stops immediately.
The error codes for I/O errors encountered by the fsr engine are logged, and the cause of the error can be estimated from the error code. Based on this, the administrator must manually repair the fault and restore normal I/O for fsr. Once the environment is back to normal, the resource is restarted and a full sync is performed to resume replication.
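As an illustration, and assuming the logged error codes correspond to standard operating system error numbers (this is an assumption), a code can be translated into a human-readable message, for example on Linux:
# Translate an OS error number into its message (13 is EACCES, "Permission denied")
python3 -c "import os; print(os.strerror(13))"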
Check Disk
Physical errors on disk volumes are difficult to repair because they are caused by damage to the media itself, but logical errors at the filesystem level can be checked or repaired with a utility (chkdsk on Windows, fsck on Linux).
It is usually safe to run these utilities only after the volume has been unmounted. If logical faults are detected and repaired during this check, the volume must be brought up again as a replication resource and a full synchronization performed to ensure consistency with the target.
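As a minimal sketch on Linux, and assuming the replicated volume is /dev/sdb1 mounted at /opt/data (both names are illustrative), a filesystem check might look like the following.
# Unmount the volume before checking it
umount /opt/data
# Check the filesystem and repair any logical faults found
fsck /dev/sdb1
# Remount the volume; afterwards, restart the resource with a full synchronization
mount /dev/sdb1 /opt/data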
File Lock
File Handle Closing Error
The file locking process includes a step that cleans up the handles of replication target files that are already open. This section explains what to do if the following error message occurs during this step.
ERR handle closed error="attach: operation not permitted" exec=handle group= key=2 name=/opt/data/b/1234.txt node=b pid=76716 resource=r0
The above error occurs only on Linux and is caused by ptrace not being permitted to perform that control; to resolve it, adjust the system's permission settings. If the value of /proc/sys/kernel/yama/ptrace_scope is set to 3, change it to a value between 0 and 2 and reboot after adjusting the setting. If you cannot adjust the ptrace_scope setting on your system, you must manually terminate all processes that have the files open.
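As a sketch of this adjustment on a typical Linux system (the file name under /etc/sysctl.d is illustrative), the Yama setting can be checked and persisted as follows; a scope value of 3 cannot be lowered at runtime, so the change takes effect only after a reboot.
# Check the current Yama ptrace scope
cat /proc/sys/kernel/yama/ptrace_scope
# Persist a lower scope (0-2); the configuration file name is illustrative
echo "kernel.yama.ptrace_scope = 1" > /etc/sysctl.d/10-ptrace.conf
# Reboot so the new setting takes effect
reboot
If the setting cannot be changed, the processes holding a file open can be identified before terminating them; the path below is taken from the error log above.
fuser -v /opt/data/b/1234.txt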