You have an HTTP endpoint, and you want to know when it might fall over. Of course someone has written tools for this.
Anyway, roll initial impressions
These tools are based around dumping a lot of requests at your server. Some are obviously better than others.
hey: Go cli tool
Importantly, it does the one thing I want: send as many requests as it can, as fast as it can.
$ hey -n 1000000 http://localhost:8080

Summary:
  Total:        29.6725 secs
  Slowest:      0.0464 secs
  Fastest:      0.0001 secs
  Average:      0.0015 secs
  Requests/sec: 33701.1903


Response time histogram:
  0.000 [1]      |
  0.005 [971369] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.009 [22573]  |■
  0.014 [4660]   |
  0.019 [1076]   |
  0.023 [238]    |
  0.028 [65]     |
  0.033 [13]     |
  0.037 [4]      |
  0.042 [0]      |
  0.046 [1]      |


Latency distribution:
  10% in 0.0003 secs
  25% in 0.0006 secs
  50% in 0.0012 secs
  75% in 0.0019 secs
  90% in 0.0028 secs
  95% in 0.0037 secs
  99% in 0.0078 secs

Details (average, fastest, slowest):
  DNS+dialup: 0.0000 secs, 0.0001 secs, 0.0464 secs
  DNS-lookup: 0.0000 secs, 0.0000 secs, 0.0231 secs
  req write:  0.0000 secs, 0.0000 secs, 0.0199 secs
  resp wait:  0.0014 secs, 0.0000 secs, 0.0342 secs
  resp read:  0.0001 secs, 0.0000 secs, 0.0300 secs

Status code distribution:
  [200] 1000000 responses
wrk2: C cli tool
Based on wrk, but with a new (required) throughput parameter (reqs/sec) and apparently more accurate reporting. Equal or better performance than hey, but I like hey's output a bit better.
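The "more accurate reporting" is about coordinated omission: at a fixed intended send rate, one stalled response delays the requests queued behind it, and a naive tool only records each request's service time. A rough sketch of the correction (not wrk2's actual code; the numbers are made up):

```python
expected_interval = 0.010  # intended pacing: 100 req/s (assumption)
service_times = [0.001, 0.001, 0.100, 0.001, 0.001]  # one 100 ms stall

corrected = []
finish = 0.0
for i, st in enumerate(service_times):
    intended_send = i * expected_interval
    start = max(finish, intended_send)  # can't send while still waiting
    finish = start + st
    # latency measured from when the request *should* have gone out
    corrected.append(round(finish - intended_send, 3))

print(corrected)  # [0.001, 0.001, 0.1, 0.091, 0.082]
```

The naive view sees one slow request; the corrected view shows the stall also inflating the two requests stuck behind it.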
$ wrk2 -R 100000 -d 30s --latency http://localhost:8080
Running 30s test @ http://localhost:8080
  2 threads and 10 connections
  Thread calibration: mean lat.: 2802.377ms, rate sampling interval: 10002ms
  Thread calibration: mean lat.: 2760.434ms, rate sampling interval: 10059ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    11.89s     3.47s   18.20s   57.62%
    Req/Sec    19.20k    248.00   19.45k   50.00%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%   12.09s
 75.000%   14.87s
 90.000%   16.61s
 99.000%   17.87s
 99.900%   18.14s
 99.990%   18.20s
 99.999%   18.22s
100.000%   18.22s

  Detailed Percentile spectrum:
       Value   Percentile   TotalCount 1/(1-Percentile)

    5824.511     0.000000           13         1.00
    7069.695     0.100000        77838         1.11
    8216.575     0.200000       155282         1.25
    9437.183     0.300000       233028         1.43
   10747.903     0.400000       310697         1.67
   12091.391     0.500000       388294         2.00
   12664.831     0.550000       427436         2.22
   13205.503     0.600000       465808         2.50
   13680.639     0.650000       504621         2.86
   14278.655     0.700000       544086         3.33
   14868.479     0.750000       582142         4.00
   15171.583     0.775000       601792         4.44
   15450.111     0.800000       621392         5.00
   15736.831     0.825000       640325         5.71
   16023.551     0.850000       660073         6.67
   16310.271     0.875000       679201         8.00
   16465.919     0.887500       689453         8.89
   16605.183     0.900000       698765        10.00
   16752.639     0.912500       708242        11.43
   16908.287     0.925000       719130        13.33
   17055.743     0.937500       728927        16.00
   17121.279     0.943750       733300        17.78
   17186.815     0.950000       738233        20.00
   17268.735     0.956250       742727        22.86
   17367.039     0.962500       747481        26.67
   17481.727     0.968750       752249        32.00
   17530.879     0.971875       754395        35.56
   17596.415     0.975000       756705        40.00
   17661.951     0.978125       759392        45.71
   17711.103     0.981250       761563        53.33
   17776.639     0.984375       764530        64.00
   17793.023     0.985938       765187        71.11
   17825.791     0.987500       766650        80.00
   17858.559     0.989062       768095        91.43
   17874.943     0.990625       768877       106.67
   17907.711     0.992188       770252       128.00
   17924.095     0.992969       770854       142.22
   17940.479     0.993750       771338       160.00
   17973.247     0.994531       772190       182.86
   17989.631     0.995313       772752       213.33
   18006.015     0.996094       773280       256.00
   18022.399     0.996484       773685       284.44
   18022.399     0.996875       773685       320.00
   18038.783     0.997266       774032       365.71
   18055.167     0.997656       774347       426.67
   18087.935     0.998047       774743       512.00
   18087.935     0.998242       774743       568.89
   18104.319     0.998437       774907       640.00
   18120.703     0.998633       775085       731.43
   18137.087     0.998828       775454       853.33
   18137.087     0.999023       775454      1024.00
   18137.087     0.999121       775454      1137.78
   18153.471     0.999219       775553      1280.00
   18169.855     0.999316       775816      1462.86
   18169.855     0.999414       775816      1706.67
   18169.855     0.999512       775816      2048.00
   18169.855     0.999561       775816      2275.56
   18169.855     0.999609       775816      2560.00
   18186.239     0.999658       775901      2925.71
   18186.239     0.999707       775901      3413.33
   18202.623     0.999756       776051      4096.00
   18202.623     0.999780       776051      4551.11
   18202.623     0.999805       776051      5120.00
   18202.623     0.999829       776051      5851.43
   18202.623     0.999854       776051      6826.67
   18202.623     0.999878       776051      8192.00
   18202.623     0.999890       776051      9102.22
   18202.623     0.999902       776051     10240.00
   18202.623     0.999915       776051     11702.86
   18202.623     0.999927       776051     13653.33
   18219.007     0.999939       776100     16384.00
   18219.007     1.000000       776100          inf
#[Mean = 11892.257, StdDeviation = 3465.307]
#[Max = 18202.624, Total count = 776100]
#[Buckets = 27, SubBuckets = 2048]
----------------------------------------------------------
  1191527 requests in 30.00s, 9.52GB read
Requests/sec:  39718.16
Transfer/sec:    324.92MB
wrk: C cli tool
Cli tool with optional Lua scripting. You have to think about threads and connections though.
$ wrk -d 2m -t 2 -c 100 --latency http://localhost:8080
Running 2m test @ http://localhost:8080
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.13ms    2.65ms   47.49ms   87.41%
    Req/Sec    29.16k     4.08k    41.59k    69.10%
  Latency Distribution
     50%    1.00ms
     75%    2.88ms
     90%    5.37ms
     99%   12.54ms
  6963860 requests in 2.00m, 55.63GB read
Requests/sec:  57998.74
Transfer/sec:    474.47MB
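The Lua scripting changes what wrk sends; a minimal sketch turning the default GETs into JSON POSTs (the `wrk` table fields here are from wrk's bundled example scripts; treat the exact payload as made up):

```lua
-- post.lua: override the request wrk generates
wrk.method = "POST"
wrk.body   = '{"hello": "world"}'
wrk.headers["Content-Type"] = "application/json"
```

Run it with the -s flag: wrk -d 2m -t 2 -c 100 -s post.lua http://localhost:8080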
oha: Rust cli/tui tool
Has a TUI, but the final output is still text. Fancy, but I don't see how the UI is useful.
$ oha -n 1000000 -c 1000 http://localhost:8080
Summary:
Success rate: 1.0000
Total: 29.6182 secs
Slowest: 0.2919 secs
Fastest: 0.0001 secs
Average: 0.0295 secs
Requests/sec: 33763.0723
Total data: 7.86 GiB
Size/request: 8.24 KiB
Size/sec: 271.76 MiB
Response time histogram:
0.008 [157163] |■■■■■■■■■■■■■■■■■■■■■■
0.016 [226111] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.024 [202547] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.032 [122975] |■■■■■■■■■■■■■■■■■
0.040 [70000] |■■■■■■■■■
0.048 [48070] |■■■■■■
0.056 [38563] |■■■■■
0.064 [30063] |■■■■
0.072 [22626] |■■■
0.080 [17009] |■■
0.088 [64873] |■■■■■■■■■
Latency distribution:
10% in 0.0062 secs
25% in 0.0113 secs
50% in 0.0204 secs
75% in 0.0365 secs
90% in 0.0658 secs
95% in 0.0896 secs
99% in 0.1350 secs
Details (average, fastest, slowest):
DNS+dialup: 0.0175 secs, 0.0001 secs, 0.1563 secs
DNS-lookup: 0.0000 secs, 0.0000 secs, 0.0051 secs
Status code distribution:
[200] 1000000 responses
TUI:
┌Progress─────────────────────────────────────────────────────────────────────────────────────┐
│ 329421 / 1000000 │
└─────────────────────────────────────────────────────────────────────────────────────────────┘
┌statics for last second──────────────────────┐┌Status code distribution──────────────────────┐
│Requests : 35088 ││[200] 329421 responses │
│Slowest: 0.2831 secs ││ │
│Fastest: 0.0001 secs ││ │
│Average: 0.0284 secs ││ │
│Data: 282.42 MiB ││ │
│Number of open files: 1010 / 65535 ││ │
└─────────────────────────────────────────────┘└──────────────────────────────────────────────┘
┌Error distribution───────────────────────────────────────────────────────────────────────────┐
└─────────────────────────────────────────────────────────────────────────────────────────────┘
┌Requests / past second (auto). press -/+/a to┐┌Response time histogram───────────────────────┐
│ ▅▅▅▅▅▅▅ ▆▆▆▆▆▆▆ ███████ ││███████ │
│▃▃▃▃▃▃▃ ▅▅▅▅▅▅▅ ███████ ███████ ███████ ││███████ ▇▇▇▇▇▇▇ │
│███████ ███████ ███████ ███████ ███████ ││███████ ███████ │
│███████ ███████ ███████ ███████ ███████ ││███████ ███████ │
│███████ ███████ ███████ ███████ ███████ ││███████ ███████ │
│███████ ███████ ███████ ███████ ███████ ││███████ ███████ ▅▅▅▅▅▅▅ ▃▃▃▃▃▃▃ │
│█30797█ █31935█ █37635█ █38692█ █39732█ ││█14355█ █12168█ █3582██ █2152██ █2831██ │
│0s 1s 2s 3s 4s ││0.0170 0.0340 0.0510 0.0681 0.0851 │
└─────────────────────────────────────────────┘└──────────────────────────────────────────────┘
ab, part of apache: C cli tool
Performance is meh.
$ ab -n 1000000 -c 10000 http://localhost:8080/
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 100000 requests
Completed 200000 requests
Completed 300000 requests
Completed 400000 requests
Completed 500000 requests
Completed 600000 requests
Completed 700000 requests
Completed 800000 requests
Completed 900000 requests
Completed 1000000 requests
Finished 1000000 requests


Server Software:
Server Hostname:        localhost
Server Port:            8080

Document Path:          /
Document Length:        8440 bytes

Concurrency Level:      10000
Time taken for tests:   70.118 seconds
Complete requests:      1000000
Failed requests:        0
Total transferred:      8537000000 bytes
HTML transferred:       8440000000 bytes
Requests per second:    14261.60 [#/sec] (mean)
Time per request:       701.184 [ms] (mean)
Time per request:       0.070 [ms] (mean, across all concurrent requests)
Transfer rate:          118897.72 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0  251  36.5    249     370
Processing:   113  448  51.1    444     740
Waiting:        0  199  38.9    194     426
Total:        347  699  33.2    693     960

Percentage of the requests served within a certain time (ms)
  50%    693
  66%    702
  75%    711
  80%    720
  90%    745
  95%    758
  98%    771
  99%    782
 100%    960 (longest request)
webslap: C cli tool
Based on apache bench / ab, with multi-url support and a live terminal display. Does make copying results out a pain though.
code count min avg max kbhdrs kbtotal kbbody
200 1,000,000 0 347 1,313 122,070 8,376,953 8,242,187
URL: http://localhost:8080
200 1,000,000 0 347 1,313 122,070 8,376,953 8,242,187
Time completed: Tue, 22 Jun 2021 21:25:25 GMT
Concurrency Level: 10,000
Time taken for tests: 35.213s
Total requests: 1,000,000
Failed requests: 0
Keep-alive requests: 990,000
Non-2xx requests: 0
Total transferred: 8,578,000,000 bytes
Headers transferred: 125,000,000 bytes
Body transferred: 8,440,000,000 bytes
Requests per second: 28,398.60 [#/sec] (mean)
Time per request: 352.130 [ms] (mean)
Time per request: 0.035 [ms] (mean, across all concurrent requests)
Wire Transfer rate: 237,893.76 [Kbytes/sec] received
Body Transfer rate: 234,066.59 [Kbytes/sec] received
min avg max
Connect Time: 0 1 242
Processing Time: 0 0 193
Waiting Time: 0 347 1,313
Total Time: 0 347 1,313
siege: C cli tool
Config and output are both unintuitive.
$ siege -b -q -c 100 -t 1m -j http://localhost:8080

{
  "transactions": 530945,
  "availability": 100.00,
  "elapsed_time": 59.00,
  "data_transferred": 4273.59,
  "response_time": 0.01,
  "transaction_rate": 8999.07,
  "throughput": 72.43,
  "concurrency": 99.37,
  "successful_transactions": 530946,
  "failed_transactions": 0,
  "longest_transaction": 0.15,
  "shortest_transaction": 0.00
}
vegeta: Go cli tool / library
More of a tool suite, but the UX is a bit confusing. Actual performance is meh.
$ echo "GET http://localhost:8080/" | vegeta attack -duration=30s -rate 0 -max-workers 10000 | tee results.bin | vegeta report
Requests      [total, rate, throughput]         334306, 11140.79, 11140.50
Duration      [total, attack, wait]             30.008s, 30.007s, 766.498µs
Latencies     [min, mean, 50, 90, 95, 99, max]  67.87µs, 3.115ms, 1.701ms, 7.585ms, 9.751ms, 15.574ms, 127.55ms
Bytes In      [total, mean]                     2821542640, 8440.00
Bytes Out     [total, mean]                     0, 0.00
Success       [ratio]                           100.00%
Status Codes  [code:count]                      200:334306
Error Set:
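The tool-suite part is that the results.bin written above can be fed back into the other subcommands. A sketch from the subcommands I remember (untested here):

```shell
# re-render the captured attack results in other formats
vegeta report -type=json results.bin > metrics.json  # machine-readable summary
vegeta plot results.bin > plot.html                  # latency-over-time HTML plot
```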
httperf: C cli tool
Questionable performance.
$ httperf --server localhost --port 8080 --num-conns 1000 --rate 10000 --num-calls 1000000
httperf --client=0/1 --server=localhost --port=8080 --uri=/ --rate=10000 --send-buffer=4096 --recv-buffer=16384 --ssl-protocol=auto --num-conns=1000 --num-calls=1000000
^CMaximum connect burst length: 106

Total: connections 1000 requests 6791480 replies 6790782 test-duration 215.640 s

Connection rate: 4.6 conn/s (215.6 ms/conn, <=1000 concurrent connections)
Connection time [ms]: min 0.0 avg 0.0 max 0.0 median 0.0 stddev 0.0
Connection time [ms]: connect 5.5
Connection length [replies/conn]: 0.000

Request rate: 31494.5 req/s (0.0 ms/req)
Request size [B]: 62.0

Reply rate [replies/s]: min 22054.7 avg 31484.5 max 41094.8 stddev 4416.3 (43 samples)
Reply time [ms]: response 22.0 transfer 9.8
Reply size [B]: header 125.0 content 8440.0 footer 2.0 (total 8567.0)
Reply status: 1xx=0 2xx=6790782 3xx=0 4xx=0 5xx=0

CPU time [s]: user 36.91 system 136.20 (user 17.1% system 63.2% total 80.3%)
Net I/O: 265307.6 KB/s (2173.4*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
These tools are more about simulating complex user flows. They usually fall over at genuinely high loads... Maybe it's better to just run multiple of the pure load generators hitting multiple URLs. And TBH I wouldn't really trust any of these.
locust: python module
Write Python code to simulate requests. You also get a web UI. Being Python, you're in charge of running multiple instances to make use of multiple cores; at least it has a master-worker distributed mode.
from locust import HttpUser, task, between

class LoadTest(HttpUser):
    wait_time = between(0.5, 1)
    host = "http://localhost:8080"

    @task
    def task1(self):
        self.client.get("/1")
        self.client.get("/2")
Example run:
$ locust
[2021-06-22 19:44:44,960] eevee/INFO/locust.main: Starting web interface at http://0.0.0.0:8089 (accepting connections from all network interfaces)
[2021-06-22 19:44:44,972] eevee/INFO/locust.main: Starting Locust 1.5.3
[2021-06-22 19:44:53,693] eevee/INFO/locust.runners: Spawning 1000 users at the rate 10 users/s (0 users already running)...
[2021-06-22 19:45:24,985] eevee/WARNING/root: CPU usage above 90%! This may constrain your throughput and may even give inconsistent response time measurements! See https://docs.locust.io/en/stable/running-locust-distributed.html for how to distribute the load over multiple CPU cores or machines
[2021-06-22 19:46:46,763] eevee/INFO/locust.runners: All users spawned: LoadTest: 1000 (1000 total running)
[2021-06-22 19:49:03,220] eevee/INFO/locust.runners: Stopping 1000 users
[2021-06-22 19:49:03,535] eevee/INFO/locust.runners: 1000 Users have been stopped, 0 still running
[2021-06-22 19:49:03,535] eevee/WARNING/locust.runners: CPU usage was too high at some point during the test! See https://docs.locust.io/en/stable/running-locust-distributed.html for how to distribute the load over multiple CPU cores or machines
KeyboardInterrupt
2021-06-22T19:49:24Z
[2021-06-22 19:49:24,478] eevee/INFO/locust.main: Running teardowns...
[2021-06-22 19:49:24,479] eevee/INFO/locust.main: Shutting down (exit code 0), bye.
[2021-06-22 19:49:24,479] eevee/INFO/locust.main: Cleaning up runner...
 Name          # reqs    # fails  |  Avg  Min   Max  Median  |  req/s failures/s
--------------------------------------------------------------------------------------------------------------------------------------------
 GET /1         93630   0(0.00%)  |  682    1  2505     890  | 374.78       0.00
 GET /2         93630   0(0.00%)  |  600    1  1472     740  | 374.78       0.00
--------------------------------------------------------------------------------------------------------------------------------------------
 Aggregated    187260   0(0.00%)  |  641    1  2505     770  | 749.56       0.00

Response time percentiles (approximated)
 Type    Name         50%   66%   75%   80%   90%   95%   98%   99%  99.9% 99.99%  100% # reqs
--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
 GET     /1           890  1100  1100  1100  1200  1200  1200  1300   1400   2100  2500  93630
 GET     /2           740   890   960  1000  1100  1100  1200  1300   1400   1500  1500  93630
--------|------------------------------------------------------------|---------|------|------|------|------|------|------|------|------|------|------|------|
 None    Aggregated   770   990  1100  1100  1100  1200  1200  1300   1400   1900  2500 187260
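The distributed mode mentioned above looks roughly like this (flags from locust's CLI; worker count is whatever spare cores you have, and I haven't benchmarked this setup):

```shell
# one master (serves the web UI and aggregates stats), one worker per spare core
locust --master &
locust --worker --master-host=127.0.0.1 &
locust --worker --master-host=127.0.0.1 &
wait
```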
k6: Go/JavaScript cli tool / SaaS
Write JS code to simulate requests. Not sure about the SaaS part. I guess it works.
import http from "k6/http";
import { sleep } from "k6";

export default function () {
  http.get("http://localhost:8080");
}
$ k6 run -u 10000 -i 1000000 script.js

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: script.js
     output: -

  scenarios: (100.00%) 1 scenario, 10000 max VUs, 10m30s max duration (incl. graceful stop):
           * default: 1000000 iterations shared among 10000 VUs (maxDuration: 10m0s, gracefulStop: 30s)


running (01m35.9s), 00000/10000 VUs, 1000000 complete and 0 interrupted iterations
default ✓ [======================================] 10000 VUs  01m35.9s/10m0s  1000000/1000000 shared iters

  data_received..................: 8.6 GB  89 MB/s
  data_sent......................: 80 MB   834 kB/s
  http_req_blocked...............: avg=7.94ms   min=810ns    med=2.12µs   max=2.43s p(90)=4.04µs   p(95)=6.66µs
  http_req_connecting............: avg=7.91ms   min=0s       med=0s       max=2.43s p(90)=0s       p(95)=0s
  http_req_duration..............: avg=720.12ms min=125.84µs med=615.92ms max=3.67s p(90)=1.26s    p(95)=1.73s
    { expected_response:true }...: avg=720.12ms min=125.84µs med=615.92ms max=3.67s p(90)=1.26s    p(95)=1.73s
  http_req_failed................: 0.00%   ✓ 0        ✗ 1000000
  http_req_receiving.............: avg=6.48ms   min=14.04µs  med=31.12µs  max=3.09s p(90)=231.81µs p(95)=446.44µs
  http_req_sending...............: avg=2.81ms   min=4.25µs   med=8.96µs   max=2.25s p(90)=63.51µs  p(95)=164.56µs
  http_req_tls_handshaking.......: avg=0s       min=0s       med=0s       max=0s    p(90)=0s       p(95)=0s
  http_req_waiting...............: avg=710.82ms min=75.91µs  med=614.89ms max=3.59s p(90)=1.24s    p(95)=1.68s
  http_reqs......................: 1000000 10427.261551/s
  iteration_duration.............: avg=861.5ms  min=11.91ms  med=662.06ms max=3.86s p(90)=1.61s    p(95)=2s
  iterations.....................: 1000000 10427.261551/s
  vus............................: 10000   min=9712   max=10000
  vus_max........................: 10000   min=10000  max=10000
tsung: Erlang cli tool
Run it as a proxy, make requests using something else, and replay them. Didn't try it.
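Since I didn't try it: going by tsung's shipped examples, a handwritten config (instead of a recorded one) for the same localhost test would look roughly like this. Every attribute here is an assumption lifted from its documented examples, not something I've run:

```xml
<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice">
  <clients>
    <!-- generate load from the controller machine itself -->
    <client host="localhost" use_controller_vm="true"/>
  </clients>
  <servers>
    <server host="localhost" port="8080" type="tcp"/>
  </servers>
  <load>
    <!-- one phase: spawn new users for a minute -->
    <arrivalphase phase="1" duration="1" unit="minute">
      <users arrivalrate="1000" unit="second"/>
    </arrivalphase>
  </load>
  <sessions>
    <session name="get-root" probability="100" type="ts_http">
      <request><http url="/" method="GET"/></request>
    </session>
  </sessions>
</tsung>
```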
gatling: Scala library / cli toolset?
Run the recorder script and replay, or write Scala / Java code. Didn't try it.
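For the write-code route, a minimal simulation sketch against the same localhost server. Untested; the class and method names are from Gatling 3's documented DSL, so treat this as an assumption:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._
import scala.concurrent.duration._

class BasicSimulation extends Simulation {
  val httpProtocol = http.baseUrl("http://localhost:8080")

  // one scenario: a single GET of the root page
  val scn = scenario("root").exec(http("get root").get("/"))

  // open-model injection: 1000 new users per second for 30 seconds
  setUp(
    scn.inject(constantUsersPerSec(1000).during(30.seconds))
  ).protocols(httpProtocol)
}
```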
artillery: Javascript cli tool, yaml config
Does it even work?
config:
  target: http://localhost:8080
  phases:
    - duration: 30
      arrivalCount: 30000
scenarios:
  - flow:
      - get:
          url: "/"
$ artillery run conf.yaml
Started phase 0, duration: 30s @ 21:12:33(+0000) 2021-06-22
Report @ 21:12:43(+0000) 2021-06-22
Elapsed time: 10 seconds
  Scenarios launched:  9984
  Scenarios completed: 0
  Requests completed:  0
  Mean response/sec: 1002.41
  Response time (msec):
    min: NaN
    max: NaN
    median: NaN
    p95: NaN
    p99: NaN
  Errors:
    EAI_AGAIN: 2

Warning:
CPU usage of Artillery seems to be very high (pids: 1)
which may severely affect its performance.
See https://artillery.io/docs/faq/#high-cpu-warnings for details.

Report @ 21:12:53(+0000) 2021-06-22
Elapsed time: 20 seconds
  Scenarios launched:  10000
  Scenarios completed: 0
  Requests completed:  0
  Mean response/sec: 1002
  Response time (msec):
    min: NaN
    max: NaN
    median: NaN
    p95: NaN
    p99: NaN
  Errors:
    EAI_AGAIN: 1
    ETIMEDOUT: 9981

Warning: High CPU usage warning (pids: 1).
See https://artillery.io/docs/faq/#high-cpu-warnings for details.

Report @ 21:13:03(+0000) 2021-06-22
Elapsed time: 30 seconds
  Scenarios launched:  9976
  Scenarios completed: 0
  Requests completed:  0
  Mean response/sec: 998.5
  Response time (msec):
    min: NaN
    max: NaN
    median: NaN
    p95: NaN
    p99: NaN
  Errors:
    ETIMEDOUT: 10000

Report @ 21:13:13(+0000) 2021-06-22
Elapsed time: 40 seconds
  Scenarios launched:  40
  Scenarios completed: 0
  Requests completed:  0
  Mean response/sec: 4.1
  Response time (msec):
    min: NaN
    max: NaN
    median: NaN
    p95: NaN
    p99: NaN
  Errors:
    ETIMEDOUT: 9975

Report @ 21:13:13(+0000) 2021-06-22
Elapsed time: 40 seconds
  Scenarios launched:  0
  Scenarios completed: 0
  Requests completed:  0
  Mean response/sec: NaN
  Response time (msec):
    min: NaN
    max: NaN
    median: NaN
    p95: NaN
    p99: NaN
  Errors:
    ETIMEDOUT: 41

All virtual users finished
Summary report @ 21:13:13(+0000) 2021-06-22
  Scenarios launched:  30000
  Scenarios completed: 0
  Requests completed:  0
  Mean response/sec: 749.63
  Response time (msec):
    min: NaN
    max: NaN
    median: NaN
    p95: NaN
    p99: NaN
  Scenario counts:
    0: 30000 (100%)
  Errors:
    EAI_AGAIN: 3
    ETIMEDOUT: 29997