Currently, the approach is to use the maximum value of each interval to calculate the rate of change.
If a counter resets inside a bucket (e.g., due to a process restart),
the count delta = max(current interval) - max(previous interval)
This is a problem because any requests received after the counter reset are ignored whenever the post-reset count is still lower than the pre-reset count.
This approximation works fine for shorter sampling intervals, but for larger sampling intervals it results in a higher error, since more post-reset increase can be hidden inside a single interval.
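To make the failure mode concrete, here is a minimal Go sketch (not the project's actual implementation; all names are illustrative) that contrasts the max-based delta described above with a reset-aware delta that treats any drop in the counter as a reset to zero, the convention Prometheus-style rate functions use:

```go
package main

import "fmt"

// maxOf returns the maximum sample in an interval (assumes non-empty input).
func maxOf(samples []float64) float64 {
	m := samples[0]
	for _, s := range samples[1:] {
		if s > m {
			m = s
		}
	}
	return m
}

// resetAwareDelta sums per-sample increases, treating any drop as a
// counter reset to zero, so requests received after a restart still count.
func resetAwareDelta(samples []float64) float64 {
	delta := 0.0
	for i := 1; i < len(samples); i++ {
		if samples[i] < samples[i-1] {
			delta += samples[i] // reset: the whole new value is fresh increase
		} else {
			delta += samples[i] - samples[i-1]
		}
	}
	return delta
}

func main() {
	prev := []float64{80, 100}
	// The counter climbs to 150, the process restarts, then 40 more
	// requests arrive. Because 40 < 150, the max-based delta misses them.
	curr := []float64{120, 150, 5, 40}

	maxBased := maxOf(curr) - maxOf(prev) // 150 - 100 = 50

	// Use the last sample of the previous interval as the baseline.
	window := append([]float64{prev[len(prev)-1]}, curr...)
	trueDelta := resetAwareDelta(window) // (150 - 100) + 40 = 90

	fmt.Printf("max-based delta: %.0f, reset-aware delta: %.0f\n", maxBased, trueDelta)
}
```

In this example the max-based approach reports 50 while 90 requests were actually served; the 40 post-reset requests are dropped entirely, and the longer the interval, the more such increase a single reset can swallow.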