
Internal Load Test Technical Report

Credits is proud to present the results of the First Stage of Platform Testing. On Friday, 14 September 2018, the Credits team performed a capacity test of the Credits platform using 32 geographically distributed nodes. The test was designed to demonstrate a stable platform and provide an indication of transactions per second (TPS) under a high level of simulated transaction load.

 

Watch the Credits YouTube live stream of the test:

Testing conditions

 

  • 32 nodes running a mix of Windows and Linux operating systems to demonstrate cross-platform interoperability;

  • Nodes were located in 5 countries (Canada, Germany, Australia, Poland, and Latvia) to provide a realistic geographically distributed testing environment;

  • 5 nodes were equipped with transaction generators to simulate network loading;

  • Increasing network load was simulated by adding 275 transactions per second on each pass until maximum load was reached.
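The ramp-up procedure above can be sketched as a simple load-generator loop. This is a hypothetical illustration only: the actual Credits transaction generator is not public, and the `send_batch` callback and pacing interval are assumptions.

```python
import time

STEP_TPS = 275  # transactions per second added on each pass (from the test conditions)

def run_ramp(send_batch, max_tps, interval=1.0):
    """Increase offered load by STEP_TPS each pass until max_tps is reached.

    send_batch(n) is an assumed callback that enqueues n transactions
    for the current one-second pass.
    """
    target = 0
    history = []
    while target < max_tps:
        target += STEP_TPS
        send_batch(target)      # offer `target` transactions this pass
        history.append(target)
        time.sleep(interval)    # pace the ramp (one pass per interval)
    return history

# Example with a no-op sender and a small ceiling, no pacing delay:
loads = run_ramp(lambda n: None, max_tps=1100, interval=0.0)
# loads is [275, 550, 825, 1100]
```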

 

If you are interested in more detailed information, please read the full article about the Internal Load Test conditions.

 

Testing report

 

Summary information from the Credits blockchain during the Phase 1 test.

 

Test time: 39 minutes

Total number of transactions: 305,136,000

Total number of blocks: 5,923

Test progression: 275 additional transactions added to the queue with each pass

Average number of transactions per block: 85,200

Peak number of transactions in a single block: 685,122

Average number of blocks per second: 2

Peak number of blocks per second: 6

Average number of transactions per second: 130,400

Peak number of transactions per second: 1,327,152
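As a sanity check, the headline average TPS is consistent with the reported totals: 305,136,000 transactions over 39 minutes works out exactly to the reported average of 130,400 TPS, while the block count gives an average block rate slightly above the reported 2 blocks per second.

```python
total_tx = 305_136_000
total_blocks = 5_923
seconds = 39 * 60  # 2,340 seconds of test time

avg_tps = total_tx / seconds       # 305,136,000 / 2,340
avg_bps = total_blocks / seconds   # blocks per second

print(round(avg_tps))      # 130400, matching the reported average TPS
print(round(avg_bps, 1))   # 2.5, versus the reported average of 2
```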

 

 

 

 

Detailed information about Internal Load Test Phase 1, gathered with Windows OS tools and freeware utilities

 

 

Figure 1 - Test Environment Configuration  

 

 

To simulate network performance in a realistic environment, we used networks composed of virtual machines (VMs), each with the specifications shown in Fig. 1. The VMs were placed in various geographic locations with different IP addresses. The node software combines these computers into micro networks, which in turn communicate with each other through NAT. In addition to the node software, we installed monitoring software in order to gauge the performance of the VMs.

 

 

Figure 2 - VM RAM Allocation via Sysinternals Suite

 

 

 

Figure 2 illustrates typical RAM usage on a VM during node operation. The image shows RAM allocation on one of the test VMs, captured with RamMap from the Sysinternals suite. The node uses a significant volume of memory, which is allocated to private processes and mapped files.

 

 

Figure 3 - Node Operation Process

 

 

 

Figure 3 displays the node process (client.exe) separately from the other processes running on the computer. Memory usage after one hour of operation is shown; it scales in proportion to overall system RAM usage. The node is the most resource-intensive process running on the virtual machine.

 

 

Figure 4 - LevelDB Database Impact on Node Operation

 

 

Here the impact of database operations is isolated within the node (client.exe). Database operations account for nearly one third of memory within the node process.

 

 

Figure 5 - Memory Used by the Node Executable File

 

 

From this image it is clear that memory is allocated mostly to the LevelDB database used by the system rather than to the executable file itself, which is just 412 KB.

 

 

Figure 6 - Node CPU Usage

 


 

Figure 6 displays the impact of the node (client.exe) on the CPU. Node operations load the CPU almost completely.

 

 

Figure 7 - Memory Usage After 1 Hour of Operation.

 

 

System memory usage is compared with the memory used by the node (yellow line on the third chart from the bottom) and the memory used by other system processes. Please note that high RAM usage leads to hard faults in the system. At the beginning of the test run the node uses around one third of RAM, yet after one hour virtually all of the system's RAM is consumed by the node database. Once system RAM is exhausted, the node begins overwriting database cells, resulting in hard faults. As the database runs in memory, the entire node is negatively impacted.

The following Database Optimizations are required:

 

  • The addition of automatic or manual limits on memory usage (similar to a paging file, or the heap limit of a Java virtual machine);

  • Repair of all memory leaks.
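The first recommendation can be sketched as a bounded in-process write cache in front of the database. This is a hypothetical illustration, not actual node code: the `BoundedWriteCache` class, its entry limit (a stand-in for a byte limit), and the flush callback are all assumptions. The point is that once the limit is reached, the oldest entries are flushed to persistent storage instead of the cache growing until system RAM is exhausted.

```python
from collections import OrderedDict

class BoundedWriteCache:
    """Hypothetical in-memory write cache with a hard entry limit.

    When the limit is exceeded, the oldest entries are handed to a
    flush callback (which would persist them to disk) rather than
    letting the cache consume all system RAM.
    """

    def __init__(self, max_entries, flush):
        self.max_entries = max_entries
        self.flush = flush              # callback that persists (key, value)
        self.cache = OrderedDict()

    def put(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)     # mark as most recently written
        while len(self.cache) > self.max_entries:
            old_key, old_value = self.cache.popitem(last=False)  # evict oldest
            self.flush(old_key, old_value)

flushed = []
cache = BoundedWriteCache(max_entries=2, flush=lambda k, v: flushed.append(k))
for i in range(4):
    cache.put(i, b"tx")
# cache now holds keys 2 and 3; keys 0 and 1 were flushed out
```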

 

Figure 8 - Node Impact on Storage

 

 

Figure 8 shows the node's interaction with system storage (hard drive). The graph shows operations performing within expected ranges. Operating speeds (the peaks on the graph) could be improved slightly by replacing hard drives with more efficient solid-state drives. Faster drives improve operation, as the node's data requires less time for reading and writing the queue to disk.

 

 

Figure 9 - Node, Network, and Signal Server Interaction

 

 

The node actively communicates with the external environment and other nodes. As there are no other internet communications or web applications running on the test virtual machines, almost all traffic is taken up by the nodes' send and receive functions. The internet traffic itself is insignificant, so a node running on an internal computer network minimally impacts other machines attempting to use the internet at the same time.

 

Figure 10 - System Resource Consumption

 

 

Red shows CPU consumption by the client.exe program; green shows other processes. The consumption of computer resources is shown in more detail here. As the test proceeds, both system memory and CPU resources are completely used up. Further revisions of the node software will place restrictions on memory consumption and input-output (IO) operations.

 

 

Figure 11 - Network node activity (without transaction generator)

 

 

Lilac - traffic generated by a working node; yellow - other traffic. With zero network activity, nothing is written to the disk, as all data is stored in RAM.

 

 

Figure 12 - Nodes network activity (with transaction generator).

 

 

Lilac - traffic generated by a working node; yellow - other traffic. The console window shows nodes working on blocks containing transactions. During this process, node data is exchanged with the hard drive. As network activity increases, the periods without transactions shown in Fig. 11 become almost invisible in comparison to peak transaction processing, due to auto-scaling within the graph.

 

 

Figure 13 - Node Interaction with Persistent Storage

 

 

The test release was not optimized to minimize hard disk space. The growth of the blockchain, and the need to copy the complete history of all system transactions to each untrusted node, is a disadvantage of all blockchain technologies. The use of web clients and lightweight wallets is one solution to this.

 

Questions:

 

We would like to provide answers to the most popular questions from the Credits community regarding the live stream.

 

 

1) Is the signal server going to be removed?

 

Yes, the removal of the signal server is on our internal development roadmap. A future version of the software, likely coinciding with Phase 3 testing, should have this potential bottleneck and obstacle to decentralization removed, with its functionality moved into the node software code.

 

2) Why did the balances of the nodes show 0 values during the livestream?

 

There was an API flaw in the underlying monitor website code used during the Phase 1 test. A bug report was logged, and a fix has already been completed. Subsequent versions of the monitor software will not exhibit this flaw.

 

3) During the short live testing session, approximately 17 gigabytes of data were produced. How will this huge amount of data be handled in a production scenario?

 

Blockchain data size is being optimized for transaction size and length. Additional storage gains will be made by utilizing data compression within the database. It is expected that the released product will have vastly better compression, suitable for production use of nodes.
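The kind of storage gain mentioned above can be illustrated with the standard library's zlib module. This is a generic illustration: the compression scheme the production database will actually use is not specified here, and the synthetic transaction record below is an assumption. Ledger-style data is highly repetitive, so it typically compresses several-fold.

```python
import json
import zlib

# Synthetic, repetitive transaction records (assumed format, for illustration only)
txs = [{"from": f"wallet{i % 100}", "to": f"wallet{(i + 1) % 100}", "amount": 1}
       for i in range(1000)]
raw = json.dumps(txs).encode("utf-8")

packed = zlib.compress(raw, level=6)   # default-ish compression level
ratio = len(raw) / len(packed)         # how many times smaller the data became
```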

 

4) Why does the Transaction Per Second (TPS) chart not correlate with the information on the Monitor home page?

 

The TPS chart shows the average transaction count over a 30-second time period. The Credits Monitor (home page) shows the number of transactions in a single block.
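The difference between the two displays can be reproduced from a list of (timestamp, transaction count) blocks: the chart averages counts over a sliding 30-second window, while the monitor shows only the latest block's count. The sample data below is hypothetical.

```python
def tps_over_window(blocks, now, window=30.0):
    """Average TPS over the last `window` seconds.

    blocks is a list of (timestamp_seconds, tx_count) pairs.
    """
    recent = [count for ts, count in blocks if now - window <= ts <= now]
    return sum(recent) / window

# Hypothetical sample: one block per second, alternating sizes
blocks = [(t, 100 if t % 2 == 0 else 200) for t in range(60)]

chart_tps = tps_over_window(blocks, now=59)   # smoothed 30-second average
monitor_value = blocks[-1][1]                 # latest single block: 200
```

The smoothed value sits between the two block sizes, while the monitor jumps between 100 and 200 with each block, which is why the two numbers do not match at any given moment.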

 

5) What are the factors that determine block processing time?

 

Block processing time is mostly impacted by network bandwidth and latency. An additional factor is the significant amount of data, 11.5 MB, included in one block.
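The impact of block size on processing time can be estimated with simple arithmetic. The link speed below is an assumed example, not a measured figure from the test; it shows why an 11.5 MB block already costs nearly a second of pure transfer time per hop on a typical link.

```python
BLOCK_SIZE_MB = 11.5   # block size reported in the answer above
LINK_MBPS = 100        # assumed link speed, in megabits per second

# Megabytes -> megabits (x8), divided by link speed in megabits/second
transfer_seconds = (BLOCK_SIZE_MB * 8) / LINK_MBPS
# transfer_seconds is 0.92 seconds per block per network hop
```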

 

6) Why were some blocks empty?

 

The transaction generator was developed to simulate a real-life environment, so intervals without any transactions, and hence empty blocks, are expected.

 

 

