## How it works

Here is a quick summary of the method used to compute estimates. The algorithm uses a simplified model taking into account 3 factors:

• Current weight of transactions in the mempool
• Speed at which new weight is entering the mempool
• Randomness in block production intervals

Unlike some other fee estimation algorithms, it doesn't look at previously mined blocks at all. Instead, it looks at the factors that will drive the production of the next blocks: the current mempool, the speed at which the mempool is growing, and the probability with which it is being drained.

Its goal is to give reasonable estimates given the presently known mempool dynamics, while avoiding overestimation.

### Inputs

The mempool is categorized into "fee buckets". A bucket represents data about all transactions with a fee greater than or equal to some amount (in sat/vbyte).

Each bucket contains 2 numeric values:

• `current_weight`, in WU (Weight-Units), represents the total weight of transactions currently sitting in the mempool.
• `flow`, in WU/min (Weight-Units per minute), represents the speed at which new transactions are entering the mempool. It is currently sampled by observing the flow of transactions over twice the target interval's timespan (e.g. the last 60 minutes of transactions for the 30-minute target interval).

For simplicity, transactions are not looked at individually; the focus is on aggregate weight, treated like a fluid flowing from bucket to bucket.
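As a rough sketch, a bucket could be represented like this in Python (the structure and field names are illustrative assumptions, not the estimator's actual code):

```python
from dataclasses import dataclass

@dataclass
class FeeBucket:
    """Aggregate data for all mempool transactions paying >= fee_rate."""
    fee_rate: float        # threshold, in sat/vbyte
    current_weight: float  # WU currently sitting in the mempool at or above this rate
    flow: float            # WU/min of new weight entering at or above this rate
```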

### Computations

For each target interval (30 minutes, 1 hour, 2 hours, etc.), we're trying to find the cheapest fee rate whose bucket is likely to become fully cleared (0 WU) with a given probability.

The probability is defined by the "confidence" setting on the website. Current values are:

• Optimistic ("I'm feeling lucky"): 50%
Use this if your primary objective is fee minimization and you don't mind some chance of being delayed. It might work out if the next blocks arrive fast enough, with no unlucky rounds.
• Standard: 80%
This profile gives reasonably balanced estimates, avoiding both overestimation and underestimation most of the time.
• Cautious: 90%
This one tends to overestimate, to compensate for potentially unlucky rounds of blocks.

Now let's simulate what's going to happen during each timespan lasting `minutes`:

• New transactions entering the mempool. While it's impossible to predict sudden changes in the speed at which new weight is added to the mempool, for simplicity's sake we're going to assume the flow we measured remains constant.
`added_weight = flow * minutes`
• Transactions leaving the mempool due to mined blocks. Each block removes up to 4,000,000 WU from a bucket; however, the exact number of blocks that will occur during the interval is uncertain. So what we'd like to find is the minimum number of blocks we should expect (with our chosen probability).
The occurrence of blocks follows a Poisson distribution, so what we can do is calculate the inverted Poisson CDF (in Python: `1 - scipy.stats.poisson(λ).cdf(k)`), with `λ = minutes / 10` (the expected average number of blocks), then iteratively increase the `k` parameter (number of blocks) until the output probability falls below our chosen probability, and return the previous `k` value.
Once we know the minimum expected number of blocks we can compute how that would affect the bucket's weight:
`removed_weight = 4000000 * blocks`
• Finally we can compute the expected final weight of the bucket:
`final_weight = current_weight + added_weight - removed_weight`
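The block-count step above can be sketched without scipy by evaluating the Poisson CDF directly (a minimal sketch; `min_expected_blocks` is a hypothetical name, and the one-block-per-10-minutes rate is the same assumption as in the text):

```python
import math

def min_expected_blocks(minutes: float, confidence: float) -> int:
    """Minimum number of blocks we can expect within `minutes` with the
    given probability, assuming Poisson block arrivals averaging one
    block every 10 minutes."""
    lam = minutes / 10  # expected average number of blocks

    def cdf(k: int) -> float:  # P(X <= k) for X ~ Poisson(lam)
        return sum(math.exp(-lam) * lam**i / math.factorial(i)
                   for i in range(k + 1))

    k = 0
    # 1 - cdf(k) = P(X > k) = P(X >= k + 1): keep increasing k while at
    # least k + 1 blocks remain likely enough, then return the last k.
    while 1 - cdf(k) >= confidence:
        k += 1
    return k
```

For a 30-minute target (`λ = 3`) at 50% confidence this yields 3 blocks, since P(X ≥ 3) ≈ 0.58 but P(X ≥ 4) ≈ 0.35.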

The cheapest bucket whose `final_weight` is ≤ 0 is going to be the one selected as the estimate.
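Putting the pieces together, the per-bucket simulation and selection could look like this (a sketch; `estimate_fee` is a hypothetical name, and the minimum expected block count is assumed to have been computed as described above):

```python
BLOCK_WEIGHT = 4_000_000  # maximum WU a single block removes from a bucket

def estimate_fee(buckets, minutes, blocks):
    """Return the cheapest fee rate whose bucket is expected to fully clear.

    `buckets` is a list of (fee_rate, current_weight, flow) tuples, where
    each bucket aggregates all transactions paying >= fee_rate; `blocks` is
    the minimum expected block count for the chosen confidence."""
    removed_weight = BLOCK_WEIGHT * blocks
    for fee_rate, current_weight, flow in sorted(buckets):
        added_weight = flow * minutes
        final_weight = current_weight + added_weight - removed_weight
        if final_weight <= 0:
            return fee_rate  # cheapest bucket expected to reach 0 WU
    return None  # no bucket is expected to clear within the interval
```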

### Small correction

Because the window used to sample the flow of transactions grows proportionally with each target interval, it sometimes gives incoherent results: estimates that decrease and then increase again as the window gets larger (when the flow of transactions varied significantly during that time).

Since this makes no sense (if a low fee gets you confirmed faster, there is no need to pay a higher fee to target a longer window), for each estimate we take the minimum of all estimates at shorter or equal windows.
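This correction amounts to a running minimum over increasing targets (a sketch; `enforce_monotonic` is an illustrative name):

```python
def enforce_monotonic(estimates):
    """Cap each estimate by the minimum of all shorter-or-equal targets.

    `estimates` is a list of (target_minutes, fee_rate) pairs; a longer
    target should never require a higher fee than a shorter one."""
    corrected = []
    running_min = float("inf")
    for minutes, fee in sorted(estimates):
        running_min = min(running_min, fee)
        corrected.append((minutes, running_min))
    return corrected
```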

Icons by FontAwesome (cc-by-4.0)