In Go 1.14, time.Timer was moved from a single global heap to per-P heaps, and during task switches in the scheduling loop each P became solely responsible for checking and running its own timers that were due. However, in that implementation the work-stealing path did not check the timer heaps of Ps that were currently executing (i.e., bound to an M): if a P found itself with nothing to do, it would go to sleep even when timers on other Ps were due to fire. Fortunately, this was fixed in 1.15. But was that the end of it?
Unfortunately, the per-P heap approach still fundamentally relies on async preemption to force-switch goroutines that monopolize an M, so that timers are always scheduled within a bounded time. But how large is that bound? In other words, how high can the wake-up latency of time.Timer be?
The current async preemption implementation relies on the system monitor (sysmon), whose wake-up period is on the order of 10–20 milliseconds. In the worst case, this means services with strict real-time requirements (such as live streaming) can be seriously affected.
In the upcoming 1.16, a new fix reduces this tens-of-milliseconds latency to the microsecond level, which is very exciting. The benchmark below shows how to systematically quantify timer latency using two metrics, average and worst-case latency, and compares the improved timer latency against the results from 1.14 and 1.15.