Changkun's Blog

Science and art, life in between.

Changkun Ou

Human-AI interaction researcher, engineer, and writer.

Stability Dilemma in AI System Architecture

Published: 2026-03-10
I’ve been thinking recently about what this current wave of AI development will ultimately become, and how social structures will transform as a result.

I suspect that AI might ultimately learn a holistic, maximally stable system structure. For instance, when we design software architecture, the software itself offers tremendous freedom (languages are Turing complete, and platforms allow many different solutions), but when you actually want the system to run stably, you are essentially imposing many constraints. Only by setting these constraints properly can the system operate steadily.

But there’s an interesting problem here: since AI itself is a nondeterministic system, how can it guarantee stable operation when executing tasks? That’s hard to say. As the AI runs, to impose constraints on it (forbidding it from doing something, or requiring it to complete a task), you might need another system (or another AI) to verify its results and guide it in a certain direction.

Essentially, once you’ve imposed all these constraints, what shape the constrained AI system ultimately takes is already determined by the initial structural design.

There’s a vivid metaphor for this: like a tree. As it grows, it initially has a main trunk, which develops into several important branches, and these branches gradually become thinner, eventually developing into many fine twigs. When you design a system, you mainly design its primary architecture—the tree’s main trunk. Once the trunk is well-established, the tree won’t grow into something very strange. It will only develop small branches in the fine-twig areas, without overwhelming the major branches. These small branches will gradually refine and grow, but ultimately won’t affect the shape and structure you’ve already established.

Unless there’s an external destabilizing factor that disrupts or alters this system. It’s like what we often say about organizational structures or human organizations: when you design a company’s architecture, you’re essentially trying to create a state where personnel can operate stably, thus continuously achieving certain business objectives.

If that’s the case, there might eventually be a business objective, but because you’ve already imposed constraints in the system design, the system will only evolve within those constraints and won’t develop in extremely bizarre ways. The eventual effect of achieving the business objective might simply be letting AI run continuously.

In other words, the process of ultimately achieving objectives will be constrained by the system’s computational scalability. It won’t grow infinitely, because the amount of work you can run simultaneously is limited. There’s a law here called Amdahl’s Law. It states that when you accelerate a system, the parts that can run in parallel speed up without issue, but the overall speedup is ultimately bottlenecked by the serial parts that must run sequentially.
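A quick worked sketch of Amdahl’s Law: if a fraction p of the work can be parallelized across n workers, the overall speedup is 1 / ((1 - p) + p / n), so the serial fraction (1 - p) caps the speedup at 1 / (1 - p) no matter how many workers you add.

```python
def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Amdahl's Law: overall speedup when a fraction p of the work
    parallelizes across n workers. The serial fraction (1 - p)
    bounds the total speedup at 1 / (1 - p)."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_workers)

# A 5% serial fraction caps speedup at 20x, even with unlimited workers.
print(round(amdahl_speedup(0.95, 10), 2))         # ≈ 6.9
print(round(amdahl_speedup(0.95, 1_000_000), 2))  # ≈ 20.0
```

This is the sense in which the system cannot scale without bound: past a point, adding compute only shaves the parallel term, while the serial term stays fixed.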

Combining this with an experience from my trip to Xinjiang, I saw many people retracing the Silk Road. They wanted to experience what the process of the ancient Silk Road was actually like. Of course, it’s impossible to experience the exact same process now because modern society has many conveniences—at certain points you can easily obtain food and safe shelter, which wasn’t possible many years ago.

At the time, I was wondering what their actual purpose was. Later, I found an answer: their essential goal was actually “experience.”

This is quite similar to AI’s current state of rapid development and exponential explosive growth. Although we don’t know what AI will ultimately become, fundamentally it has destroyed our sense of purpose as humans. Because with AI (assuming the premise of infinite energy), everything can grow exponentially. In other words, whatever goal you have, you can immediately achieve it.

In this situation, the particular organizational structure is actually irrelevant, because:

  1. As long as the organizational structure you design can satisfy an objective, it can actually be achieved immediately.
  2. You can easily destroy the system itself: external uncertainty factors let you tear it down and continuously rebuild it.

In reality, all possibilities are there. The only difference is: the structure of system A that you create might be closer to your desired goal compared to system B. So the ultimate question becomes: what kind of architecture do you now have that can realize such objectives? This is actually very difficult.

If we go further, suppose that as AI continuously grows, it discovers such a structure capable of achieving all kinds of objectives that humans want to accomplish. Or rather, suppose humans exist precisely because they’re situated within an extremely stable system structure.

Then the questions become:

  1. Why can such a structure be maintained?
  2. How long can it be maintained?
  3. If we had such an AI that could immediately achieve any objective, what should humans actually do?

I think that experience in Xinjiang actually provided an answer: once all your objectives can be easily achieved, what’s ultimately left? What remains is how you experience the process.

By then, the ultimate objective becomes far less important because you can achieve it immediately. It’s like wanting to travel from point A to point B. If there are airplanes and flight routes in between, you can get there quickly by plane. But you’ve actually missed the entire scenery of the road, missed all the diverse people and social forms along the way. During the journey, unexpected detours might alter your goals, leading you to explore other paths.

Thinking about it this way is actually quite melancholic. Although all possibilities exist, the structures that can stabilize a complex system are limited and unique. Moreover, they’re not immutable: they depend heavily on stability within the system, and on whether external uncertainty factors are fed in.

Just as humanity has evolved over so many years, the human species fundamentally depends on the stability of Earth’s ecosystem. So when would this stability disappear? Only when an external massive force destroys this stability, such as an asteroid impact. If it’s only a very small asteroid, it cannot destroy this stability because there are always enough complex systems that can automatically repair local systemic instability. This instability can be gradually digested and repaired by the structures of other complex systems.

However, if a massively destructive force appears, say a large-scale asteroid impact whose chain reactions reach global scale, then to the entire system structure this would be massive destruction and devastation.


Have thoughts on this?

I'd love to hear from you — questions, corrections, disagreements, or anything else.

hi@changkun.de
© 2008 - 2026 Changkun Ou. All rights reserved.