
Relay buffers

From relay version 1.4.2 onwards, Devo Relay includes a new buffering mechanism that can use both memory and disk space. This allows the relay to store higher volumes of data during long network downtimes, periods of limited bandwidth, or problems at the destination.

The relay buffer sizes can be configured from the Devo application. To do so, go to Administration → Relays and ELBs → Relays, click the Edit option in the ellipsis menu of the required relay, and open the Output tab. Enter the required buffer sizes in the Memory buffer size (MB) and Disk buffer size (MB) fields.

Maximum memory buffer size

If you specify a value higher than the maximum size allowed for your machine, a warning message appears and Devo automatically fills the field with the maximum allowed value.

Learn more about the other relay settings in Customizing the Devo Relay output connection.

The new buffer that uses both memory and disk is enabled when the value entered in the Disk buffer size (MB) field is higher than 0. If the relay encounters a problem when sending data to the destination, it first uses the memory buffer. When the memory buffer fills up, the data is transferred to disk, where more capacity should be available.
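When troubleshooting, it can help to check how much data has spilled from memory to disk. A minimal sketch using standard Linux tools; a temporary directory stands in for the relay's /var/logt/buffer path so the commands can run anywhere (on a relay host, point BUFFER_DIR at the real path):

```shell
# BUFFER_DIR stands in for /var/logt/buffer; replace it on a real relay host.
BUFFER_DIR=$(mktemp -d)

# Simulate 8 MB of buffered data spilled to disk.
dd if=/dev/zero of="$BUFFER_DIR/chunk0" bs=1M count=8 status=none

du -sh "$BUFFER_DIR"   # total size of buffered files on disk
df -h "$BUFFER_DIR"    # free space remaining on the volume holding the buffer

rm -rf "$BUFFER_DIR"
```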

The disk buffer stores data in files under /var/logt/buffer, and local relay logs are stored in /var/logt/local, so one option is to mount a data volume at /var/logt:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1       7.7G  2.4G  5.3G  32% /
/dev/sdb1       3.0T   89M  2.9T   1% /var/logt

Otherwise, you will need a root filesystem with enough capacity.
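To make such a mount persistent across reboots, an entry can be added to /etc/fstab. The device name and filesystem type below are examples only; use the values that match your system:

```
# /etc/fstab — example entry (device and filesystem type are assumptions)
/dev/sdb1  /var/logt  ext4  defaults  0  2
```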

It's also important that the disk has enough sequential read and write speed for the amount of data sent to the relay. We recommend using a disk with a throughput of 100 MB/s or higher.
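That throughput figure also bounds how quickly a full disk buffer can drain once the destination recovers. A back-of-the-envelope estimate (the buffer size below is an example value, not a recommendation):

```shell
# Rough drain-time estimate: full disk buffer divided by sequential throughput.
BUFFER_MB=51200        # example: a 50 GB disk buffer
THROUGHPUT_MBS=100     # recommended minimum sequential throughput
echo "$(( BUFFER_MB / THROUGHPUT_MBS )) seconds to drain"   # prints: 512 seconds to drain
```

In practice the drain rate is also limited by the available network bandwidth to the destination, so treat this as a lower bound.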

A basic way to measure sequential write performance is with a Linux tool such as dd. For example:

$ sudo dd if=/dev/zero of=/var/logt/buffer/test1.img bs=1M count=1024 oflag=dsync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.61186 s, 125 MB/s
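The dd run above measures synchronous write speed (oflag=dsync forces each block to disk). Sequential read speed can be checked the same way; a sketch, with the caveat that a file just written may be served from the page cache, which inflates the reported number:

```shell
# Sequential read check: write a test file, then time reading it back.
# Note: without dropping caches first, the reported speed may reflect
# the page cache rather than the disk itself.
TEST_FILE=$(mktemp)
dd if=/dev/zero of="$TEST_FILE" bs=1M count=64 status=none
dd if="$TEST_FILE" of=/dev/null bs=1M    # reports read throughput on stderr
rm -f "$TEST_FILE"
```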