...
Info |
---|
The replication volume must not contain a paging file used for virtual memory; if a paging file is configured on the volume, the volume cannot be unmounted. The maximum size of a replication volume supported by bsr is theoretically 1 PB, and a volume larger than 10 TB is generally considered a large volume.
|
Info |
---|
Space reclamation with thin provisioning in a virtualized environment is not suitable for replication. To maintain consistency, replication requires continuous tracking of data changes across the entire area of the volume. In a thin-provisioned environment, however, the volume's physical space is actively grown and shrunk, so a replication agent installed in the guest OS cannot continuously track the entire area of the volume. For this reason, configuring replication on thin-provisioned volumes in a virtualized environment can be problematic. The alternative, thick provisioning, allocates the entire area of the volume in a fixed manner and therefore conforms to the existing concept of replication operation. When configuring volumes in a virtualized environment, use only thick provisioning; to deploy in a thin-provisioned storage environment, you must disable space reclamation. |
Meta Volume
bsr keeps the additional information necessary to operate replication in a separate non-volatile storage space, and writes and reads this data during replication. This additional information is called meta data, and the storage volume that records it is called the meta volume. A meta volume must be prepared in a 1:1 correspondence with each replication volume, and it requires about 33 MB of space per 1 TB of replication volume per replicated node. For example, a 1:2 replication of a 3 TB volume needs a meta volume of 2 * 3 * 33 MB = 198 MB.
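The sizing rule above can be sketched as a small calculation. This is a hypothetical helper for illustration, not part of the bsr tooling:

```python
# Estimate meta volume size: ~33 MB per 1 TB of replication volume,
# multiplied by the number of peer nodes (e.g. 2 peers in a 1:2 setup).
META_MB_PER_TB = 33

def meta_volume_mb(volume_tb: int, peer_nodes: int) -> int:
    """Return the approximate required meta volume size in MB."""
    return peer_nodes * volume_tb * META_MB_PER_TB

# 1:2 replication of a 3 TB volume -> 2 * 3 * 33 = 198 MB
print(meta_volume_mb(3, 2))  # 198
```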
...
Info |
---|
Specifying the node id (node-id) of each node is mandatory. |
Code Block |
---|
resource r0 {
disk d;
meta-disk f;
on alice {
address 10.1.1.31:7789;
node-id 0;
}
on bob {
address 10.1.1.32:7789;
node-id 1;
}
} |
|
Configuration examples
Simple configuration
...
/etc/bsr.d/global_common.conf
Code Block |
---|
global {
}
common {
net {
protocol C;
}
} |
|
/etc/bsr.d/r0.res
Code Block |
---|
resource r0 {
on alice {
disk d;
address 10.1.1.31:7789;
meta-disk f;
node-id 0;
}
on bob {
disk d;
address 10.1.1.32:7789;
meta-disk f;
node-id 1;
}
} |
|
1:2 Mesh
This is an example of a 1:2 mirror configuration. The connection-mesh section specifies that all connections between the 3 nodes should be established.
Code Block |
---|
resource r0 {
device e minor 2;
disk e;
meta-disk f;
on store1 {
address 10.1.10.1:7100;
node-id 0;
}
on store2 {
address 10.1.10.2:7100;
node-id 1;
}
on store3 {
address 10.1.10.3:7100;
node-id 2;
}
connection-mesh {
hosts store1 store2 store3;
}
} |
|
1:2 individual connection configuration
This is an example of a 1:2 mirror configuration in which properties can be set individually for each connection.
Code Block |
---|
resource r0 {
volume 0 {
device e minor 2;
disk e;
meta-disk f;
}
on store1 {
node-id 0;
}
on store2 {
node-id 1;
}
on store3 {
node-id 2;
}
connection {
host store1 address 10.10.0.245:7789;
host store2 address 10.10.0.252:7789;
}
connection {
host store2 address 10.10.0.252:7789;
host store3 address 10.10.0.247:7789;
}
connection {
host store1 address 10.10.0.251:7789;
host store3 address 10.10.0.247:7789;
}
} |
|
floating peer
Nodes can be configured by IP address, without specifying host names.
Code Block |
---|
resource r0 {
floating 200.200.200.6:7788 {
device d minor 1;
disk d;
meta-disk n;
node-id 0;
}
floating 200.200.200.7:7788 {
device d minor 1;
disk d;
meta-disk n;
node-id 1;
}
} |
|
Code Block |
---|
resource r0 {
floating 10.10.0.251:7788 {
device e minor 2;
disk e;
meta-disk f;
node-id 0;
}
floating 10.10.0.252:7788 {
device e minor 2;
disk e;
meta-disk f;
node-id 1;
}
floating 10.10.0.253:7788 {
device e minor 2;
disk e;
meta-disk f;
node-id 2;
}
connection {
address 10.10.0.251:7788;
address 10.10.0.252:7788;
}
connection {
address 10.10.0.251:7788;
address 10.10.0.253:7788;
}
connection {
address 10.10.0.252:7788;
address 10.10.0.253:7788;
}
} |
|
2:1 configuration
On the source node store1, specify the target node as store3.
Code Block |
---|
resource r0 {
device e minor 2;
disk e;
meta-disk f;
on store1 {
node-id 0;
}
on store3 {
node-id 2;
}
connection {
host store1 address 10.10.0.245:7789;
host store3 address 10.10.0.247:7789;
}
} |
|
On the source node store2, the target node is also specified as store3. Together with the store1 configuration above, store1 and store2 form an N:1 configuration that targets store3.
Code Block |
---|
resource r1 {
device e minor 2;
disk e;
meta-disk f;
on store2 {
node-id 1;
}
on store3 {
node-id 2;
}
connection {
host store2 address 10.10.0.246:7790;
host store3 address 10.10.0.247:7790;
}
} |
|
The target node store3 accepts both the store1 and store2 configurations.
Code Block |
---|
resource r0 {
device e minor 2;
disk e;
meta-disk f;
on store1 {
node-id 0;
}
on store3 {
node-id 2;
}
connection {
host store1 address 10.10.0.245:7789;
host store3 address 10.10.0.247:7789;
}
}
resource r1 {
device g minor 4;
disk g;
meta-disk h;
on store2 {
node-id 1;
}
on store3 {
node-id 2;
}
connection {
host store2 address 10.10.0.246:7790;
host store3 address 10.10.0.247:7790;
}
} |
|
Precautions
The following describes precautions for each platform.
...