|
13 | 13 | //!
|
14 | 14 | //! ## Trend Estimator
|
15 | 15 | //! Holds up to the given number of frequencies to create an estimation.
|
16 |
| -//! This pool holds 10 frequencies at a time. |
| 16 | +//! Trend estimator holds 10 frequencies at a time. |
| 17 | +//! This value is stored as constant in [FREQUENCY_QUEUE_SIZE](constant.FREQUENCY_QUEUE_SIZE.html). |
17 | 18 | //! Estimation algorithm and prediction uses Exponentially Weighted Moving Average algorithm.
|
18 | 19 | //!
|
19 |
| -//! Algorithm is altered and adapted from [A Novel Predictive and Self–Adaptive Dynamic Thread Pool Management](https://doi.org/10.1109/ISPA.2011.61). |
| 20 | +//! This algorithm is adapted from [A Novel Predictive and Self–Adaptive Dynamic Thread Pool Management](https://doi.org/10.1109/ISPA.2011.61) |
| 21 | +//! and altered to: |
| 22 | +//! * replace the heavy trend calculation with thread redundancy, which is the sum of the differences between the predicted and observed values. |
| 23 | +//! * replace linear trend estimation with exponential trend estimation, where the formula is: |
| 24 | +//! ```text |
| 25 | +//! LOW_WATERMARK * (predicted - observed) + LOW_WATERMARK |
| 26 | +//! ``` |
| 27 | +//! *NOTE:* To tweak this algorithm, increase [LOW_WATERMARK](constant.LOW_WATERMARK.html); the additional dynamic thread spawn count adapts automatically. |
| 28 | +//! * operate without timestamp watermarking (used in the paper to measure the algorithm's own performance during execution) |
| 29 | +//! * operate without extensive subsampling, which would congest the pool manager thread. |
| 30 | +//! * operate without tracking thread idle time or the job-out queue, unlike the TEMA and FOPS implementations. |
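The exponential trend estimation above can be sketched as a small function. This is an illustrative reading of the formula, not the module's actual implementation; the `LOW_WATERMARK` value and the clamping of negative differences are assumptions.

```rust
// Hypothetical sketch of the upscaling formula:
//   LOW_WATERMARK * (predicted - observed) + LOW_WATERMARK
// The watermark value here is an assumption for illustration.
const LOW_WATERMARK: u64 = 2;

/// Number of dynamic threads to spawn for a given prediction error.
fn upscale_count(predicted: f64, observed: f64) -> u64 {
    // Only a positive prediction gap triggers extra threads.
    let diff = (predicted - observed).max(0.0);
    (LOW_WATERMARK as f64 * diff) as u64 + LOW_WATERMARK
}

fn main() {
    // Predicted 5 more jobs per interval than observed:
    // 2 * 5 + 2 = 12 dynamic threads.
    println!("{}", upscale_count(5.0, 0.0));
}
```

Note how `LOW_WATERMARK` appears both as the slope and the intercept, which is why raising it automatically scales the spawn count.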
20 | 31 | //!
|
21 | 32 | //! ## Predictive Upscaler
|
22 |
| -//! Selects upscaling amount based on estimation or when throughput hogs based on amount of tasks mapped. |
| 33 | +//! Upscaler has three cases (also can be seen in paper): |
| 34 | +//! * The rate slightly increases and there are many idle threads. |
| 35 | +//! * The number of worker threads tends to be reduced since the workload of the system is decreasing. |
| 36 | +//! * The system has no requests or is stalled. (Our case here is when the current tasks block further tasks from being processed – throughput hogs) |
| 37 | +//! |
| 38 | +//! For the first two cases, EMA calculation and exponential trend estimation give good performance. |
| 39 | +//! For the last case, the upscaler selects the upscaling amount from the number of tasks mapped when throughput hogs occur. |
| 40 | +//! |
| 41 | +//! **Example scenario:** Say we have 10_000 tasks, each blocking for 1 second. The scheduler will map plenty of tasks, but they will get rejected. |
| 42 | +//! This drives the estimation calculation to nearly 0 for both the entering and exiting parts. When this happens and we still see tasks being mapped by the scheduler, |
| 43 | +//! we slowly increase the thread count linearly by the task frequency. Increasing this value too fast would either hit the thread limit on |
| 44 | +//! some OSes or congest the program's other threads through context switching. |
| 45 | +//! |
23 | 46 | //! Throughput hogs are determined by a combination of job in / job out frequency and the current scheduler task assignment frequency.
|
| 47 | +//! The EMA difference threshold is padded by machine epsilon to absorb floating point arithmetic errors. |
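The hog detection and linear scale-up described above can be sketched as follows. The function names and the exact scaling rule are illustrative assumptions; the module only specifies that the EMA is compared against an epsilon-padded threshold and that growth follows the task frequency linearly.

```rust
// Sketch of throughput-hog handling: when the EMA of job in/out
// frequencies is ~0 (within machine epsilon) while the scheduler is
// still mapping tasks, scale the pool linearly by the current frequency.
// Names are hypothetical, not the module's actual API.

/// True when estimation is flat but work keeps arriving.
fn is_throughput_hog(ema_frequency: f64, scheduled_tasks: u64) -> bool {
    // Machine epsilon pads the comparison against float rounding errors.
    ema_frequency.abs() <= f64::EPSILON && scheduled_tasks > 0
}

/// Slow linear growth: a large step risks OS thread limits or
/// context-switch congestion across the rest of the program.
fn hog_upscale(current_frequency: u64) -> u64 {
    current_frequency
}

fn main() {
    if is_throughput_hog(0.0, 10_000) {
        println!("upscale by {}", hog_upscale(4));
    }
}
```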
24 | 48 | //!
|
25 | 49 | //! ## Time-based Downscaler
|
26 |
| -//! After dynamic tasks spawned with upscaler they will continue working in between 1 second and 10 seconds. |
27 |
| -//! When tasks are detached from the channels after this amount they join back. |
| 50 | +//! When threads become idle, they do not shut down immediately. |
| 51 | +//! Instead, they wait a random amount of time between 1 and 11 seconds |
| 52 | +//! to even out the load. |
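A minimal sketch of that randomized idle timeout, assuming only the 1–11 second range stated above. The real pool may use a proper RNG; to stay dependency-free this sketch derives jitter from the system clock, which is an assumption, not the actual implementation.

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// How long an idle dynamic thread parks before joining back.
/// Jitter is taken from the clock's sub-second nanoseconds
/// (hypothetical stand-in for a real RNG) and mapped into [1, 11).
fn idle_timeout() -> Duration {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before UNIX epoch")
        .subsec_nanos() as u64;
    // Spreading shutdowns over 1..=10 whole seconds evens out the load.
    Duration::from_secs(1 + nanos % 10)
}

fn main() {
    let t = idle_timeout();
    println!("idle thread waits {:?} before joining back", t);
}
```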
28 | 53 |
|
29 | 54 | use std::collections::VecDeque;
|
30 | 55 | use std::fmt;
|
@@ -59,7 +84,7 @@ const FREQUENCY_QUEUE_SIZE: usize = 10;
|
59 | 84 | const EMA_COEFFICIENT: f64 = 2_f64 / (FREQUENCY_QUEUE_SIZE as f64 + 1_f64);
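`EMA_COEFFICIENT = 2 / (N + 1)` is the standard smoothing factor for an N-sample exponentially weighted moving average, here with `N = FREQUENCY_QUEUE_SIZE = 10`. A sketch of the implied update step, with an illustrative function name (the module's actual update routine is not shown here):

```rust
// Constants mirrored from this module.
const FREQUENCY_QUEUE_SIZE: usize = 10;
const EMA_COEFFICIENT: f64 = 2_f64 / (FREQUENCY_QUEUE_SIZE as f64 + 1_f64);

/// One EWMA step: blend the newest observation with the running average.
/// (Hypothetical helper illustrating how the coefficient is typically used.)
fn ema_update(previous_ema: f64, observed: f64) -> f64 {
    EMA_COEFFICIENT * observed + (1.0 - EMA_COEFFICIENT) * previous_ema
}

fn main() {
    let mut ema = 0.0;
    for freq in [8.0, 10.0, 12.0, 9.0] {
        ema = ema_update(ema, freq);
    }
    println!("smoothed frequency = {:.3}", ema);
}
```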
|
60 | 85 |
|
61 | 86 | /// Pool task frequency variable.
|
62 |
| -/// Holds scheduled tasks onto the thread pool for the calculation window. |
| 87 | +/// Holds the number of tasks scheduled onto the thread pool during the calculation time window. |
63 | 88 | static FREQUENCY: AtomicU64 = AtomicU64::new(0);
|
64 | 89 |
|
65 | 90 | /// Possible max threads (without OS contract).
|
|