Environments
- 1 Configuration environments
- 2 Specifications
- 2.1 Platforms
- 2.2 File systems
- 3 Precautions for ReFS (Non-Journaling Filesystem) Volumes
- 3.1 Scope
- 3.2 Precautions
- 3.3 Recommendation
- 4 CPU
- 5 Memory
- 5.1 Transmit buffer size
- 6 Disk
- 6.1 Basic installation space
- 6.2 Mirror disks
- 6.3 Multi-volume
- 6.4 Meta Disk
- 7 Network
- 7.1 Throughput
- 7.2 Ports
- 8 Storage provisioning
Configuration environments
BSR is designed as a versatile block-replication engine that can be deployed across a wide range of IT infrastructures without environmental constraints. It delivers a consistent replication model regardless of the underlying compute architecture—physical servers, virtualized environments, or hyper-converged infrastructure (HCI)—and can be flexibly integrated into various operational scenarios.
Since BSR operates at the block device layer, independent of operating system or hardware architecture, it can be reliably deployed in the following environments:
Physical Servers
Supports general x86 servers as well as diverse storage backends such as DAS, SAN, and NVMe
Provides stable replication performance for high-throughput, enterprise-grade workloads
Suitable for OS-level replication, database servers, and mission-critical applications
Virtualized Environments (VMs)
BSR performs block-level replication inside virtual machines across major hypervisors.
Fully operable in VMware, Hyper-V, KVM, and other virtualization platforms
Supports replication of virtual disks (VMDK, VHDX, QCOW2, and others)
Enables P2V (physical→virtual), V2P (virtual→physical), and V2V (virtual→virtual) transition scenarios
Well-suited for implementing HA/DR strategies for VM-based services
Hyper-Converged Infrastructure (HCI)
BSR functions as an independent replication layer even in environments where storage and compute are tightly integrated.
Compatible with platforms such as Nutanix, vSAN, and Red Hat HCI
Operates reliably on distributed, local-storage–based clusters
Integrates cleanly without conflicting with storage policies or cluster-management frameworks
Development / Test / Large-Scale Service Environments
BSR can be adopted in environments ranging from single-server deployments to large, multi-host clusters.
Simple data-protection scenarios between individual hosts
Enterprise systems running tens to hundreds of terabytes
Distributed service architectures requiring continuous data availability
Mixed-OS and multi-platform service environments
Specifications
The following are the physical requirements of the platforms and target systems supported by BSR.
Platforms
x64 environments running Windows 2012 or higher, or Linux distributions such as RHEL/CentOS 6.4 or higher and Ubuntu 16.04 LTS or higher, are supported.
| OS | CPU Architecture |
|---|---|
| Windows 2012 or higher | x64 |
| RHEL / CentOS 6.4 ~ 8.4 | x64 |
| RHEL / Rocky 8.5 or higher | x64 |
| Ubuntu 16 LTS or higher | x64 |
| ProLinux 7 or higher | x64 |
| SUSE 12, 15 | x64 |
File systems
In recent years, the size of replication volumes has grown significantly (tens to hundreds of terabytes), and as a result the initial synchronization of these volumes now requires a substantial amount of time. When initial synchronization is performed over the entire volume, the process may take anywhere from several days to several weeks. For example, over a 1 Gbps network link, synchronizing a 10 TB volume requires at least 27 hours, while a 100 TB volume can take more than ten days.
To address this long initial-sync problem, we implemented FastSync (Filesystem Active Sector Transfer Sync), which synchronizes only the sectors actually allocated by the filesystem. FastSync dramatically reduces the initial synchronization time by transferring only the filesystem-used areas instead of the entire volume. For instance, if a 100 TB volume contains only 10 GB of actual data, FastSync can complete the initial sync in just a few minutes over a 1 Gbps network.
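As a rough illustration of these figures, the sketch below estimates transfer time assuming an effective throughput of about 100 MB/s on a 1 Gbps link; the numbers are approximations, not measured values.

```python
# Rough estimate of initial synchronization time, assuming ~100 MB/s of
# effective throughput on a 1 Gbps link (illustrative values only).

def sync_time_hours(data_bytes, throughput_bytes_per_s=100 * 10**6):
    """Return the transfer time in hours for data_bytes at the given throughput."""
    return data_bytes / throughput_bytes_per_s / 3600

TB = 10**12
GB = 10**9

print(f"Full sync, 10 TB : {sync_time_hours(10 * TB):6.1f} h")          # ~27.8 h
print(f"Full sync, 100 TB: {sync_time_hours(100 * TB):6.1f} h")         # ~278 h (11+ days)
print(f"FastSync, 10 GB  : {sync_time_hours(10 * GB) * 3600:6.0f} s")   # ~100 s
```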
FastSync supports the following filesystems. For all other filesystems, synchronization is performed over the full volume:
NTFS on Windows
Ext3, Ext4, XFS, Btrfs on Linux
Precautions for ReFS (Non-Journaling Filesystem) Volumes
Scope
This section applies when using a Windows ReFS (Resilient File System)–formatted data volume as a BSR replication target.
Precautions
ReFS is a COW-based, non-journaling filesystem, and metadata inconsistencies may occur immediately after abnormal shutdown events such as forced termination, power failure, or kernel panic.
Compared to NTFS, ReFS provides limited offline recovery tools and procedures, and filesystem corruption may result in the volume failing to mount.
ReFS does not provide an offline recovery mechanism equivalent to NTFS chkdsk.
Microsoft offers the dedicated ReFS utility (RefsUtil), but it has version-specific and scenario-specific limitations and does not guarantee full recovery. Therefore, when using ReFS, administrators must define flush policies and failure-recovery procedures in advance, independently of BSR replication.
Recommendation
Replication of non-journaling filesystems such as ReFS is not recommended.
CPU
An x64-compatible processor of at least 2 GHz with 4 or more cores is recommended. Operation on lower-specification processors is possible, but considering the system's I/O performance, the configured machine should be provisioned with as high a specification as practical.
Memory
The system typically starts paging virtual memory when memory usage exceeds 70%, depending on kernel settings. Because paging degrades system I/O performance, it is beneficial for replication if the system is configured to always keep at least 30% of physical memory free so that paging is suppressed.
The memory used by BSR is primarily required for buffering purposes and is determined by the maximum write request value (max-req-write-count) in the BSR settings and the size of the transmit buffer. Below is an example for a Windows environment.
For synchronous replication without a send buffer:
At the default write-request setting (10,000), up to 1.5 GB is used per resource.
At the maximum write-request setting (100,000), up to 3 GB is used per resource.
For asynchronous replication with a 1 GB send buffer:
At the default write-request setting, up to 2.5 GB is used per resource.
At the maximum write-request setting, up to 4 GB is used per resource.
For example, a server with 64 GB of physical memory should keep approximately 20 GB (30%) of memory free, and within the memory actually used, asynchronous replication at the default settings requires up to 2.5 GB per resource.
If that 30% of free memory cannot be maintained, a degradation in basic I/O performance due to paging must be accepted.
The 3 to 4 GB of non-paged (NP) memory per resource required by replication must also remain available; otherwise the system can run out of memory, which may lead to failure.
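As a rough sketch of this sizing, the example below combines the 30% free-memory headroom with the per-resource figures above for a hypothetical 64 GB server; the resource counts and thresholds are illustrative assumptions, not BSR parameters.

```python
# Back-of-the-envelope memory check: keep ~30% of physical memory free to
# suppress paging, then reserve the per-resource replication memory quoted
# above (asynchronous, 1 GB send buffer). All values are illustrative.

PHYSICAL_MEMORY_GB = 64        # hypothetical server
FREE_HEADROOM = 0.30           # fraction of memory kept free

PER_RESOURCE_DEFAULT_GB = 2.5  # default write-request setting
PER_RESOURCE_MAX_GB = 4.0      # maximum write-request setting

def memory_needed_gb(resources, per_resource_gb=PER_RESOURCE_MAX_GB):
    """Free headroom plus replication memory for the given number of resources."""
    return PHYSICAL_MEMORY_GB * FREE_HEADROOM + resources * per_resource_gb

for n in (1, 2, 4):
    need = memory_needed_gb(n)
    status = "fits" if need <= PHYSICAL_MEMORY_GB else "insufficient"
    print(f"{n} resource(s): ~{need:.1f} GB reserved of {PHYSICAL_MEMORY_GB} GB -> {status}")
```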
Transmit buffer size
The transmit (TX) buffer on the local side should ideally be sized so that locally generated I/O data can be transferred freely to the remote side. It can be estimated with the following formula:
maximum transmit bandwidth (bytes/s) * transmit timeout (s)
For example, on a 1 Gbps link this gives (about 100 MB/s * 5 s) = 500 MB, so a buffer of 500 MB to 1 GB can be set.
If the variable bandwidth of a WAN segment must be taken into account, add extra capacity on top of this size to cover the time (s) needed to tolerate WAN bottleneck delays, or buffer the traffic with a proxy (DRX). WAN-leg buffering is typically specified at 5 to 10 times the TX buffer size.
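The calculation below is a small sketch of the formula above, assuming ~100 MB/s of effective bandwidth on a 1 Gbps link, a 5-second transmit timeout, and a 5x WAN multiplier; all values are illustrative.

```python
# TX buffer sizing from the formula above:
#   buffer >= effective bandwidth (bytes/s) * transmit timeout (s)
# Bandwidth, timeout, and the WAN multiplier below are illustrative assumptions.

def tx_buffer_bytes(bandwidth_bytes_per_s, timeout_s):
    """Minimum TX buffer that absorbs one timeout interval of traffic."""
    return bandwidth_bytes_per_s * timeout_s

EFFECTIVE_1GBPS = 100 * 10**6   # ~100 MB/s usable on a 1 Gbps link
TIMEOUT_S = 5                   # transmit timeout

lan_buffer = tx_buffer_bytes(EFFECTIVE_1GBPS, TIMEOUT_S)   # 500 MB
wan_buffer = lan_buffer * 5                                 # 5-10x for WAN (e.g. DRX) buffering

print(f"LAN TX buffer : {lan_buffer / 10**6:.0f} MB (set 500 MB ~ 1 GB)")
print(f"WAN buffering : {wan_buffer / 10**6:.0f} MB (5-10x the TX buffer)")
```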
The point at which paging actually starts varies with the system's memory capacity, platform, and OS version; the 70% figure described above is typical and should be interpreted in the context of your environment.
BSR memory usage on Linux is similar to or lower than on Windows; it uses somewhat more memory on Windows due to differences in the replication architecture.
Replication configurations on VMs have recently become common in virtualized environments, but the CPU or memory resources allocated to individual VMs are often insufficient. For example, a VM configured with a single CPU core and 1~2 GB of memory does not meet bsr's minimum configuration specifications. When a replication environment is built on such low-end VMs, performance delays caused by frequent CPU context switching, and in turn delays in inter-node communication (keep-alive), occur frequently. Operating replication on VMs under these conditions has severe limitations, and there is no solution other than freeing up system resources.
Disk
Basic installation space
Installing all of the bsr binary execution modules requires about 200 MB, and storing bsr logs requires about 1 GB, so roughly 2 GB of disk space should be planned for installation.
Mirror disks
Theoretically, the capacity of a BSR mirror disk is unlimited, but in practice the disk capacity is limited to about 10 TB. At larger capacities, the meta area corresponding to the mirror disk grows along with it, and operations that must process the entire meta disk (such as Attach) become time-consuming and impede operations.
Multi-volume
The storage for a single service task is often composed of multiple volumes, in which case those volumes need to be treated as one resource. Operating multiple volumes tied to a single resource in this way is called a multi-volume configuration. When a resource is operated with multiple volumes, the data-processing buffer queue is run as a single queue, which serializes the order of the service's write I/O to the disk volumes and thereby ensures service consistency.
The number of volumes per resource is effectively unrestricted (up to 65,535), but in practice, as the number of volumes increases, queue-buffer delays can become severe and difficult to control. It is realistic to configure no more than 4 to 5 volumes on a 10 G network.
Meta Disk
The meta disk capacity must be estimated according to the capacity of the replication volume. About 33 MB of meta disk space is required per 1 TB of replication volume; for a more precise size, see Metadata Size Estimation.
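As a quick illustration of this rule of thumb, the sketch below applies the ~33 MB-per-TB estimate to a few volume sizes; for exact figures, use Metadata Size Estimation.

```python
# Approximate meta disk sizing using the ~33 MB per 1 TB rule of thumb above.

META_MB_PER_TB = 33

def meta_disk_mb(volume_tb):
    """Approximate meta disk space (MB) for a replication volume of volume_tb TB."""
    return volume_tb * META_MB_PER_TB

for size_tb in (1, 10, 100):
    print(f"{size_tb:>3} TB volume -> ~{meta_disk_mb(size_tb):,} MB of meta disk space")
```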
Network
Throughput
In today's corporate environments, mirroring on a local network is generally configured with a bandwidth of 1 Gbps to 10 Gbps, while a remote replication environment for DR (Disaster Recovery) generally operates at lower bandwidth. In other words, BSR is applied to a wide range of network environments, from very low bandwidth (10 ~ 100 Mbps) to high bandwidth. However, since replication over a low-bandwidth network is bound to affect replication performance due to the bandwidth limitation, consider measures such as integrating a replication accelerator to improve performance.
Ports
The mirroring ports used for replication (specified in the configuration file) must be open. On Windows, TCP loopback ports 5678 and 5679 must additionally be open for control.
Storage provisioning
Only the THICK provisioning method is supported. Thin-provisioning environments with disk space reclamation are not compatible with BSR.
Disk space reclamation in THIN provisioning can cause data inconsistency issues in certain situations. To use a THIN environment, space reclamation must be disabled.