Possibly the fastest implementation for getting a goroutine ID across all Go versions with Go 1 compatibility guarantee.
Many people have written benchmark tests. In Go Nightly Reading Episode 83, Reliable Performance Testing for Go Programs (https://talkgo.org/t/topic/102), we shared how to use tools like benchstat and perflock for rigorous, reliable performance testing. That session briefly discussed the measurement methodology and implementation principles of benchmarks, but due to time constraints the coverage wasn't deep enough. So today, let's look at two details that weren't covered in Episode 83 but are easily overlooked in certain strict testing scenarios.

The first detail is b.N. As discussed previously, the testing package runs the benchmark body multiple times, iteratively predicting how many consecutive executions fit within the required time range (e.g., 1 second, yielding, say, 100,000 iterations). But there's an implementation question worth asking: why doesn't it accumulate execution times across multiple runs until t1+t2+…+tn ≈ 1s, and instead search for the maximum b.N whose total loop time ≈ 1s? The reason is that incremental runs introduce more systematic measurement error. Benchmarks are typically unstable in their early iterations (e.g., due to cache misses), and accumulating the results of many short runs would amplify this error. In contrast, finding the maximum b.N for which a single consecutive run satisfies the required time range amortizes, rather than accumulates, this systematic error across the whole test.

The second detail: does this mean the testing package's implementation is perfect, and all we need to do as users is write benchmarks, run them under perflock, and use benchstat to eliminate statistical error? Things aren't that simple, because the testing package's measurement machinery itself also has systematic error, which in extreme scenarios can introduce significant bias. Explaining this requires more space, so here's an additional article for further reading: Eliminating A Source of Measurement Errors in Benchmarks. There you can learn what this intrinsic systematic measurement error is, and several reliable approaches for eliminating it when you need to benchmark such scenarios.
Hello world!
About six months ago, I did a presentation that talks about how to conduct reliable benchmarking in Go. Recently, I submitted an issue #41641 to the Go project, which is also a subtle problem that you might need to address in some cases. The issue is all about the following code snippet: func BenchmarkAtomic(b *testing.B) { var v int32 …
This article is itself a blog post, so readers may object: doesn't that contradict its own theme? For now I don't think so, but foreseeably this blog will not publish anything new within the next year. My blog has been updated less and less frequently. Why did I stop writing? Or, to put it more precisely, why did I stop blogging frequently?
When the peers around me have all stepped into the next important stage of life, some starting jobs, some returning home, some getting married, some having children, and I look back at my 2019, it seems I have gradually slipped from the early front-runner with clear goals, always a step ahead of others, into a "mediocre" person who no longer knows where the road leads and seems to have fallen a step behind. Compared with previous years, my 2019 feels exceptionally "colorful" and yet unusually "lost."
There was no reading-list update for 2018, for many reasons; I am now releasing it together with 2019's. Under the combined pressure of writing my thesis and doing research over these two years, the books I read leaned toward technology and theory, and what I read most were actually papers (the paper list can wait for another occasion). The number of casual humanities books also dropped sharply, so the time spent on each book grew much longer; in 2018 I barely finished a few books, which was one of the main reasons the list wasn't updated at the time.

This list also omits several books I read that relate to my own doctoral research direction, continuing the tradition of my earlier reading lists: books directly related to my specialty are not included.
Communication and trust are two basic elements of dealing with other people, but against a cross-cultural background these two elements increasingly fall short of expectations. Since March this year, I have begun my doctoral career, and in the two months that followed I started to rethink these two themes of communication and trust.