
async proof of concept #12

Open · wants to merge 1 commit into main
Conversation

danmayer

We want to verify that dalli works well with the async library and check some basic performance impacts. This is a small proof of concept showing how dalli can be used with async, along with a connection pool, to increase performance across multiple cache requests that could be running in threads or fibers.

There is some overhead, and a normal non-async series of calls will win on very small, very fast calls. But as the IO increases, either through payload size or through remote network calls with higher latency, async shows how it can balance the IO more efficiently.
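The shape of the approach looks roughly like this (a sketch, not the exact `bin/async` script from this PR; the server address, pool size, and key names are illustrative):

```ruby
require "async"
require "connection_pool"
require "dalli"

# Share a fixed-size pool of Dalli clients across Async fibers, so up to
# `size` cache requests can be in flight concurrently.
pool = ConnectionPool.new(size: 10) { Dalli::Client.new("localhost:11211") }

Async do
  100.times.map { |i|
    # Each child task checks out a connection, does its IO, and yields the
    # fiber to other tasks while it waits on the socket.
    Async { pool.with { |client| client.get("key_#{i}") } }
  }.each(&:wait)
end
```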

Running across Toxiproxy with around 2x the latency of normal localhost, async easily outperforms in this case:

❯❯❯$ BENCH_JOB=get bundle exec bin/async                                                                                <bundler> [main]
benchmarking async with 10 connections
ruby 3.3.4 (2024-07-09 revision be1089c8ec) [arm64-darwin23]
Warming up --------------------------------------
   get 100 keys loop     5.000 i/100ms
  get 100 keys async    11.000 i/100ms
Calculating -------------------------------------
   get 100 keys loop     72.508 (± 8.3%) i/s   (13.79 ms/i) -    360.000 in   5.036897s
  get 100 keys async    109.261 (±16.5%) i/s    (9.15 ms/i) -    528.000 in   5.020974s

Comparison:
  get 100 keys async:      109.3 i/s
   get 100 keys loop:       72.5 i/s - 1.51x  slower


~/projects/dalli                                                                                                              [13:48:34]
❯❯❯$ BENCH_JOB=set bundle exec bin/async                                                                                <bundler> [main]
benchmarking async with 10 connections
ruby 3.3.4 (2024-07-09 revision be1089c8ec) [arm64-darwin23]
Warming up --------------------------------------
 write 100 keys loop     6.000 i/100ms
write 100 keys async    12.000 i/100ms
Calculating -------------------------------------
 write 100 keys loop     72.161 (± 4.2%) i/s   (13.86 ms/i) -    366.000 in   5.081488s
write 100 keys async    118.966 (±15.1%) i/s    (8.41 ms/i) -    564.000 in   5.013887s

Comparison:
write 100 keys async:      119.0 i/s
 write 100 keys loop:       72.2 i/s - 1.65x  slower
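For reference, a latency setup like the one above can be scripted with the toxiproxy gem; this is a sketch, not the exact harness from this PR, and the proxy name, ports, and latency value are assumptions:

```ruby
require "toxiproxy"

# Route memcached traffic through a local Toxiproxy instance.
Toxiproxy.populate([
  { name: "memcached", listen: "127.0.0.1:21211", upstream: "127.0.0.1:11211" }
])

# Add ~20ms of latency to every request, then run the benchmark against
# the proxied port (127.0.0.1:21211) inside the block.
Toxiproxy[:memcached].toxic(:latency, latency: 20).apply do
  # run BENCH_JOB=get / BENCH_JOB=set pointed at 127.0.0.1:21211
end
```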

With a 50k payload on fast localhost, async holds nearly equal with a loop, showing some of the overhead involved in async:

❯❯❯$ BENCH_JOB=get bundle exec bin/async                                                                                <bundler> [main]
benchmarking async with 10 connections
ruby 3.3.4 (2024-07-09 revision be1089c8ec) [arm64-darwin23]
Warming up --------------------------------------
   get 100 keys loop    13.000 i/100ms
  get 100 keys async    14.000 i/100ms
Calculating -------------------------------------
   get 100 keys loop    142.440 (± 5.6%) i/s    (7.02 ms/i) -    715.000 in   5.034559s
  get 100 keys async    128.682 (± 9.3%) i/s    (7.77 ms/i) -    644.000 in   5.046229s

Comparison:
   get 100 keys loop:      142.4 i/s
  get 100 keys async:      128.7 i/s - same-ish: difference falls within error


~/projects/dalli                                                                                                              [13:50:11]
❯❯❯$ BENCH_JOB=set bundle exec bin/async                                                                                <bundler> [main]
benchmarking async with 10 connections
ruby 3.3.4 (2024-07-09 revision be1089c8ec) [arm64-darwin23]
Warming up --------------------------------------
 write 100 keys loop    14.000 i/100ms
write 100 keys async    14.000 i/100ms
Calculating -------------------------------------
 write 100 keys loop    151.000 (± 6.0%) i/s    (6.62 ms/i) -    756.000 in   5.021468s
write 100 keys async    137.135 (±14.6%) i/s    (7.29 ms/i) -    686.000 in   5.097228s

Comparison:
 write 100 keys loop:      151.0 i/s
write 100 keys async:      137.1 i/s - same-ish: difference falls within error

Comment on lines +95 to +99
```ruby
def run_gc
  GC.enable  # re-enable GC so a collection is allowed
  GC.start   # force a full collection now, between iterations
  GC.disable # keep GC from running during the measured code
end
```


Is the GCSuite mostly used to restart the GC, so that each benchmark run gets a clean GC state before the next one runs?

Author

Yeah, this attempts to remove GC overhead and random skew from the benchmarks by running GC between each iteration and not allowing GC to run during the measured code execution.
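For context, this follows the common benchmark-ips GC suite pattern, roughly like the sketch below (the class in this PR may differ in detail). The suite hooks run a full GC before each warmup and measurement phase, and `run_gc` leaves GC disabled so collections cannot fire mid-measurement:

```ruby
require "benchmark/ips"

# A suite object plugged into benchmark-ips via x.config(suite: ...).
class GCSuite
  def warming(*args)
    run_gc # clean slate before each warmup phase
  end

  def running(*args)
    run_gc # clean slate before each measurement phase
  end

  # No-op hooks required by the benchmark-ips suite interface.
  def warmup_stats(*args); end
  def add_report(*args); end

  private

  def run_gc
    GC.enable
    GC.start
    GC.disable
  end
end

Benchmark.ips do |x|
  x.config(suite: GCSuite.new)
  x.report("example") { "x" * 1_024 }
end
```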

@drinkbeer

There is some overhead, and a normal non-async series of calls will win on very small, very fast calls. But as the IO increases, either through payload size or through remote network calls with higher latency, async shows how it can balance the IO more efficiently.

In a network-intensive scenario, will async add more load to the network and make the situation worse? For example, in the multi-get calls we chunk the keys into smaller batches (100 keys per batch). Doing the calls async will issue more requests concurrently and may cause extra load.

@danmayer
Author

Yes, it can definitely increase load on the server when apps increase concurrent calls to memcached. In this example, if you max out the connection pool so all 10 workers are actively busy, that is the equivalent of 10x the clients compared to doing the same work serially.

So leveraging this type of concurrency has additional costs for the overall infrastructure and performance of the platform.
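One way to keep that extra load bounded (not part of this PR, just an illustration using the async gem's `Async::Semaphore`) is to throttle in-flight requests below the pool size:

```ruby
require "async"
require "async/semaphore"
require "connection_pool"
require "dalli"

pool = ConnectionPool.new(size: 10) { Dalli::Client.new("localhost:11211") }

Async do |task|
  # Cap in-flight requests at 4, below the pool size of 10, trading some
  # client-side throughput for less concurrent load on the server.
  semaphore = Async::Semaphore.new(4, parent: task)

  (1..1_000).map { |i|
    semaphore.async { pool.with { |client| client.get("key_#{i}") } }
  }.each(&:wait)
end
```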

@danmayer danmayer requested a review from drinkbeer November 19, 2024 16:20