To see how tedious error handling can be, just look at this very thorough summary:
https://seankhliao.com/blog/12020-11-23-go-error-handling-proposals/
Does this code have a data race?
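The original snippet was not preserved in this copy. As a stand-in, here is a classic pattern such questions revolve around (a hypothetical example, not the code from the original post): two goroutines storing into the same variable without synchronization constitute a data race under the Go memory model, even when every write stores the same value and the program's output looks deterministic.

```go
package main

import (
	"fmt"
	"sync"
)

// racyWrite spawns two goroutines that store the same value into x
// with no synchronization between them. The final read is ordered
// after both writes by wg.Wait, but the two writes race with each
// other: `go run -race` flags this even though the result is always 1.
func racyWrite() int {
	var x int
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			x = 1 // unsynchronized concurrent write: a data race
		}()
	}
	wg.Wait()
	return x
}

func main() {
	fmt.Println(racyWrite()) // 1
}
```

Whether subtler variants of this are races is exactly what the memory model has to pin down, which is why a bug in its wording matters.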
An issue submitted yesterday appears to point out a bug in the current Go memory model.
Continued: In my view, many people are dissatisfied with error handling because they lack the patience to understand how Go approaches problem-solving. An important lesson Jonathan drew is that errors are inherently domain-specific: some domains focus on better tracing of error origins, though stack traces themselves are sometimes not that useful; others focus on more flexible aggregation of multiple errors; and many people just want to get the happy path right and return a single error at the end. In his Q&A session he also mentioned that he does not recommend using xerrors. It should be (not so) obvious that only solutions tailored to the specific problem are the best ones, and developers should take the time to think carefully about how to design error handling for a particular problem. Complaining at the syntax level about the lack of try/catch, or about how ugly the ubiquitous if err != nil is, is just as meaningless and life-wasting as debating which brackets to use for generics.
At today’s GopherCon 2020, the author of the Go 1.13 error values proposal mentioned in hindsight that he regrets the lack of error formatting support, and that there are no plans for further improvement for many years to come. One reason he gave is that error handling is a domain-specific problem, and producing a solution that satisfies everyone is simply beyond his ability. Nevertheless, at the end of his talk he offered some advice on error wrapping, namely implementing fmt.Formatter. Below is a simple example.
How to get CPU clock frequency on macOS.
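The original instructions did not survive in this copy. One approach that works on Intel-based Macs is querying the kernel via sysctl (note: these hw.cpufrequency keys are not exposed on Apple Silicon, so treat this as a sketch for Intel hardware only):

```shell
# Nominal CPU clock frequency in Hz (Intel Macs only)
sysctl -n hw.cpufrequency

# Minimum and maximum frequency, where the keys are exposed
sysctl -n hw.cpufrequency_min hw.cpufrequency_max
```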
How do you construct a context that retains all values from the parent context but does not participate in the cancellation propagation chain?
Wrote a new tool called bench, which integrates and wraps best practices for benchmark testing.
We know that passing parameters by pointer avoids copying data, which benefits performance. Nevertheless, there are always edge cases we need to be concerned about.
Let’s take this as an example:
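The example code did not survive in this copy; a hypothetical reconstruction of the two candidate implementations (the names vec1 and vec2 follow the later discussion, but their bodies are an assumption) might be:

```go
package main

import "fmt"

// vec1: add passes and returns everything by value.
type vec1 struct{ x, y, z, w float64 }

func (v vec1) add(u vec1) vec1 {
	return vec1{v.x + u.x, v.y + u.y, v.z + u.z, v.w + u.w}
}

// vec2: add mutates the receiver through a pointer and returns the
// pointer so that calls can be chained.
type vec2 struct{ x, y, z, w float64 }

func (v *vec2) add(u *vec2) *vec2 {
	v.x += u.x
	v.y += u.y
	v.z += u.z
	v.w += u.w
	return v
}

func main() {
	a := vec1{1, 2, 3, 4}
	fmt.Println(a.add(vec1{1, 1, 1, 1})) // {2 3 4 5}

	b := &vec2{1, 2, 3, 4}
	fmt.Println(*b.add(&vec2{1, 1, 1, 1})) // {2 3 4 5}
}
```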
Which vector addition runs faster?
Is there any way to make these two functions run faster?
Here’s a straightforward optimization: a lookup table combined with linear interpolation.
The benchmark shows an approximately 98% runtime performance improvement after the optimization.
Guess which add implementation has better performance, vec1 or vec2?
The answer: pass-by-value is faster. The reason is inlining optimization, not escape analysis as many might guess. The pointer implementation returns a pointer solely to support method chaining; the value it points to already lives on the stack, so there is no escape. Benchmarks bear this out.
A practical example: changing from pass-by-pointer to pass-by-value brought a 6–8% performance improvement in a simple rasterizer (see https://github.com/changkun/ddd/commit/60fba104c574f54e11ffaedba7eaa91c8401bce4).
Furthermore, we might ask: is pass-by-value still faster without inlining? We can try adding the //go:noinline compiler directive to both add methods and comparing the runs without inlining (old) against those with inlining (new); this time the pointer version comes out ahead.
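The measured numbers did not survive in this copy (and they vary by hardware anyway), but the setup can be sketched: //go:noinline disables inlining for one specific function, and testing.Benchmark lets us time both variants without a full test file. The vec type and method names here are illustrative, not the original code:

```go
package main

import (
	"fmt"
	"testing"
)

type vec struct{ x, y, z, w float64 }

// addv is the pass-by-value variant; the directive below forbids the
// compiler from inlining it, isolating the cost of the call itself.
//
//go:noinline
func (v vec) addv(u vec) vec {
	return vec{v.x + u.x, v.y + u.y, v.z + u.z, v.w + u.w}
}

// addp is the pass-by-pointer variant, also with inlining disabled.
//
//go:noinline
func (v *vec) addp(u *vec) *vec {
	v.x += u.x
	v.y += u.y
	v.z += u.z
	v.w += u.w
	return v
}

func main() {
	rv := testing.Benchmark(func(b *testing.B) {
		v := vec{1, 2, 3, 4}
		for i := 0; i < b.N; i++ {
			v = v.addv(v)
		}
		_ = v
	})
	rp := testing.Benchmark(func(b *testing.B) {
		v := &vec{1, 2, 3, 4}
		for i := 0; i < b.N; i++ {
			v = v.addp(v)
		}
		_ = v
	})
	fmt.Println("value:  ", rv.NsPerOp(), "ns/op")
	fmt.Println("pointer:", rp.NsPerOp(), "ns/op")
}
```

Comparing this run against one with the directives removed is what produces the old/new table referred to above.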
So the next question is: without inlining, why is the pointer version faster? Read more at https://changkun.de/blog/posts/pointers-might-not-be-ideal-for-parameters/