Mengukur Kinerja File Sharing menggunakan dbench

dbench is a tool to generate I/O workloads, either against a local filesystem or against a networked CIFS or NFS server; it can even talk to iSCSI targets. dbench can be used to stress a filesystem or a server to find the point at which it becomes saturated, answering the question: "How many concurrent clients/applications performing this workload can my server handle before the response starts to lag?"
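
As an illustration of a single run, the following sketch invokes dbench against an already-mounted share. The mount point /mnt/share, the 60-second duration, and the client count of 10 are assumptions for illustration, not part of the measurement setup described below.

 # Minimal sketch: run dbench with 10 simulated clients against a mounted
 # CIFS/NFS share. Assumes dbench is installed and the share is mounted at
 # /mnt/share (hypothetical path); -D selects the test directory and -t the
 # run time in seconds.
 import subprocess

 subprocess.run(
     ["dbench", "-D", "/mnt/share", "-t", "60", "10"],
     check=True,
 )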

The dbench measurement is performed by mounting the shared folder onto a local folder on the client running dbench. dbench then simulates the load of 10, 20, 30 and so on up to 700 clients. The measurement results show that the four systems are reliable only up to about 300 clients; above 300 clients, some systems are unable to keep up with the stress and fail to produce any results.
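
A load sweep like the one described above could be scripted roughly as follows. This is only a sketch: the mount point, run duration, and output parsing are assumptions, and the exact dbench output format may differ between versions.

 # Sweep the number of simulated dbench clients from 10 to 700 in steps of 10,
 # against a share mounted at MOUNT_POINT, keeping the summary line of each run.
 import subprocess

 MOUNT_POINT = "/mnt/share"   # local mount of the shared folder (assumption)
 DURATION = 60                # seconds per run (assumption)

 results = {}
 for clients in range(10, 701, 10):
     proc = subprocess.run(
         ["dbench", "-D", MOUNT_POINT, "-t", str(DURATION), str(clients)],
         capture_output=True, text=True,
     )
     # dbench normally prints a final "Throughput ... max_latency=..." summary;
     # overloaded systems may fail to produce one, which matches the behaviour
     # reported above for loads beyond 300 clients.
     summary = [line for line in proc.stdout.splitlines()
                if line.startswith("Throughput")]
     results[clients] = summary[-1] if summary else "no result"

 for clients in sorted(results):
     print(clients, results[clients])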



Figure 9 shows the throughput during dbench benchmarking. The Raspberry Pi 3 throughput is around 3 Mbps for both LAN and WiFi access. The Asus Mini PC throughput is around 4 Mbps, higher than that of the Raspberry Pi 3. The virtual machine accessed via the physical LAN achieves around 7 Mbps. It is interesting to note that the virtual machine accessed via a direct bridge, with no physical network involved, measures about 13 Mbps and 18 Mbps for one core and four cores, respectively. Thus, it is the network bandwidth that limits the ability of the processor to pump data into the network. For heavy streaming and file sharing activities, it would be advisable to bond several physical network interfaces on the server. In general, throughput degrades as more clients access the system. The maximum capability of the Internet-Offline system cannot be probed from the throughput measurement alone.





Figure 10 shows the measured maximum latency of the four systems. In general, the Asus Mini PC performs much better than the Raspberry Pi 3. The Raspberry Pi 3 copes relatively well with up to 20 simultaneous client accesses; for higher loads a better system, such as the Asus Mini PC, is needed. The Asus Mini PC itself degrades at loads higher than 80 clients, and for larger loads still, the i5 four-core system is able to handle the workload.

It is interesting to note that the network interface limits the performance of the i5 virtual machines. The following table shows the average throughput (Mbps) and average maximum latency (ms) for single-core and four-core virtual machines, accessed over the physical LAN and via a direct bridge to a client on the same host.

Table 7. Performance Comparison during dbench

Configuration                    Ave. Throughput (Mbps)   Ave. Max. Latency (ms)
1 core, LAN                      5.7                      2795
1 core, direct bridge (no LAN)   12.4                     1114
4 core, LAN                      6.0                      2979
4 core, direct bridge (no LAN)   17.4                     921

The table shows that the virtual machine with the direct bridging connection achieves higher performance. Higher performance is also observed on the machine with more cores.
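
As a rough check, the relative gain of direct bridging over the LAN can be computed directly from the averages in Table 7 (a simple sketch; the values are copied from the table above):

 # Throughput and latency ratios from Table 7
 # ("bridge" = direct bridging, "lan" = physical LAN).
 table7 = {
     "1 core": {"lan": (5.7, 2795), "bridge": (12.4, 1114)},
     "4 core": {"lan": (6.0, 2979), "bridge": (17.4, 921)},
 }

 for cores, rows in table7.items():
     tput_gain = rows["bridge"][0] / rows["lan"][0]
     lat_gain = rows["lan"][1] / rows["bridge"][1]
     print(f"{cores}: bridge gives {tput_gain:.1f}x the throughput "
           f"and {lat_gain:.1f}x smaller max latency than LAN")

For the single-core machine the direct bridge roughly doubles the throughput, and for the four-core machine it nearly triples it, which is consistent with the conclusion above.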