I re-read the “Free Energy Principle” and came across an interesting idea: humans don’t live in order to predict the world; rather, humans live because they predict the world. This aligns perfectly with the principle of minimizing free energy.
This sparked a thought about the consequences of excessive AI use:
- Simplification of thought and behavior: If people use AI too frequently, they gradually stop thinking about details and eventually become “idea machines” or “value machines.” (a) Looking back at this evolution: we used to type a lot, and our whole selves were optimized into Typing Machines. To increase efficiency, the brain optimized its pathways, building many shortcuts and forming muscle memory. (b) After voice input emerged, people’s typing became less fluent. (c) The same applies to thinking: we gradually strip away thought about implementation details, shift toward higher-level thinking, and then toward levels higher still.
- Loss of predictive ability and “death”: In this process, people delegate large amounts of prediction about the future. (a) For example, reasoning chains are gradually replaced or delegated. (b) Once the outcome of any idea can already be foreseen, the future becomes predictable, and people stop predicting it themselves. (c) Following this logic, people “die.”
- AI’s consumption of human society: As more people use AI, model companies will recruit still more users to promote their tools, until everyone is using them. (a) The result is that most people gradually “disappear,” because they can no longer predict the future; they are “eaten” by AI. (b) The small group that remains is also using AI, so ultimately AI consumes all of human society.
- System collapse at the end: When humans disappear, AI also loses its ability to predict the future, because AI lacks what we might call “uncertainty input,” and humans serve as that input. (a) Without human input, AI also dies. (b) The endpoint of this path is: humans disappear through massive AI adoption, and AI disappears through the disappearance of humans.
In the end, there is nothing left.
The following content is generated by LLMs and may contain inaccuracies.
The Paradox of Civilizational Decline Through AI Overuse
Context
This line of thinking sits at the intersection of cognitive science, philosophy of technology, and existential-risk research. It builds on Karl Friston’s Free Energy Principle (FEP), which posits that biological systems maintain their existence by minimizing prediction error, and extends it into an existential critique of AI tool dependence. The core tension is this: as AI assumes humanity’s predictive and reasoning labor, will humans functionally “die” through loss of predictive capacity? This concerns not merely individual cognitive degradation, but a paradox of mutual destruction: humans outsource prediction to AI until they disappear as sources of uncertainty, and AI in turn collapses from the loss of training data and objectives.
This perspective resonates with current discussions of AI alignment, cognitive offloading, and deskilling, while proposing a more radical hypothesis: this is not simple tool dependence, but systemic collapse involving the fundamental definition of life itself (prediction as existence).
Core Insights
The Free Energy Principle and Predictive Essentialism
The original note accurately captures the core claim of Karl Friston’s Free Energy Principle: biological systems maintain their existence by minimizing free energy (an upper bound on prediction error). The principle, grounded in Bayesian inference, treats the brain as an “inference engine” that generates predictions from internal models and updates those models with sensory input to improve predictive accuracy. The phrase “humans do not live to predict the world, but live because they predict the world” expresses the principle’s ontological claim: anything that exists appears to minimize surprisal, exhibiting the unsurprising behavior characteristic of its kind.
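For reference, the “upper bound” claim is a standard identity of variational inference; a minimal statement in the usual notation (o for observations, s for hidden states, q(s) for the agent’s approximate posterior), independent of any particular model:

```latex
% Variational free energy F upper-bounds surprisal -ln p(o).
% o: observations; s: hidden states; q(s): approximate posterior over s.
\[
\begin{aligned}
F &= \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] \\
  &= \underbrace{D_{\mathrm{KL}}\big[\,q(s)\,\|\,p(s \mid o)\,\big]}_{\ge\, 0} \;-\; \ln p(o) \\
  &\ge -\ln p(o) \quad \text{(surprisal)}.
\end{aligned}
\]
```

Equality holds exactly when q(s) matches the true posterior p(s | o), so minimizing F simultaneously improves the internal model and reduces surprisal; this is the formal sense in which “predicting well” and “continuing to exist” coincide under the principle.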
Cognitive Offloading and Deskilling: Empirical Evidence
The original note’s observations about typing ability, voice input, and evolving thought patterns are supported by cognitive offloading research. Recent studies show significant negative correlation between frequent AI tool use and critical thinking ability, with cognitive offloading as a mediating factor. A 2025 study of 580 university students found that higher AI dependence correlates with lower critical thinking levels, with cognitive fatigue partially mediating this relationship. Regarding deskilling, technology only partially automates routine tasks in certain occupations, simplifying them for lower-skilled workers—a phenomenon termed “technology-enabled deskilling.” Deskilling occurs not only among displaced workers but among AI-augmented workers; the boundary between augmentation and replacement is blurred.
Theoretical Precedent for the Mutual Destruction Paradox
The “mutual destruction paradox” proposed in the original note—that AI collapses as humans disappear and cease providing uncertainty inputs—has a striking counterpart in AI research: model collapse. When generative AI models are recursively trained on synthetic data, they gradually degrade. A 2024 Nature study showed that indiscriminate training on AI-generated content causes models to lose their capacity for generating diverse, high-quality outputs. In the large language model context, training on text generated by predecessor models causes continuous decline in vocabulary, syntax, and semantic diversity in model outputs. This perfectly echoes the insight in the original note that “humans serve as uncertainty input”: AI requires the diversity and unpredictability produced by humans as training signals, and when this source dries up, the system itself degrades.
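A minimal sketch of this dynamic, assuming only NumPy (a toy Gaussian version of the recursive-training setup; my own illustration rather than the cited experiments): fit a distribution to data, sample a new “corpus” from the fit, refit on those samples, and repeat.

```python
import numpy as np

rng = np.random.default_rng(0)

def recursive_fit(generations=100, n_samples=20):
    """Fit a Gaussian, then repeatedly re-fit on samples drawn from the
    previous fit -- a toy analogue of training on synthetic data."""
    data = rng.normal(0.0, 1.0, n_samples)        # generation 0: "real" data
    for g in range(1, generations + 1):
        mu, sigma = data.mean(), data.std()        # "train" on the current corpus
        data = rng.normal(mu, sigma, n_samples)    # next corpus is purely synthetic
        if g % 10 == 0:
            print(f"gen {g:3d}: mu={mu:+.3f}  sigma={sigma:.3f}")

recursive_fit()
# With no fresh real data entering the loop, sigma shrinks toward zero on
# average: small estimation biases and sampling noise compound across
# generations, so the fitted distribution progressively loses its tails.
```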
Core Insights (Expanded)
The Free Energy Principle as Foundation for Ontology
Karl Friston’s Free Energy Principle is a mathematical principle positing that the brain reduces surprisal, or uncertainty, through predictions based on internal models, updating these models with sensory input to improve predictive accuracy. The principle claims that anything existing appears to minimize surprisal, displaying the unsurprising behavior consistent with its type. The original note’s statement “humans do not live to predict the world, but live because they predict the world” precisely captures this ontological turn: prediction is not a tool but a defining condition of existence itself.
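A quick numerical check of the surprisal bound stated above, on a two-state toy generative model (all probabilities here are arbitrary illustrative numbers):

```python
import numpy as np

# Toy generative model: two hidden states s, one observed outcome o.
p_s = np.array([0.7, 0.3])           # prior p(s)
p_o_given_s = np.array([0.9, 0.2])   # likelihood p(o | s) for the observed o

p_o = np.sum(p_s * p_o_given_s)      # model evidence p(o)
surprisal = -np.log(p_o)

def free_energy(q):
    """Variational free energy F = E_q[ln q(s) - ln p(o, s)]."""
    joint = p_s * p_o_given_s        # p(o, s) evaluated at the observed o
    return np.sum(q * (np.log(q) - np.log(joint)))

q_poor = np.array([0.5, 0.5])            # a badly calibrated internal model
q_true = p_s * p_o_given_s / p_o         # the exact posterior p(s | o)

print(f"surprisal          : {surprisal:.4f}")
print(f"F, poor posterior  : {free_energy(q_poor):.4f}")  # strictly above surprisal
print(f"F, exact posterior : {free_energy(q_true):.4f}")  # equals surprisal
```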
Cognitive Offloading Leading to Decline in Critical Thinking
A 2025 mixed-methods study of 666 participants found significant negative correlation between frequent AI tool use and critical thinking ability, with cognitive offloading as a mediating factor. Research on 580 Chinese university students showed that higher AI dependence correlates with lower critical thinking levels, with cognitive fatigue partially mediating this relationship. This validates the original note’s concern about “simplification of thought and behavior”: when AI assumes reasoning chains, humans lose not merely the capacity to execute them, but the opportunity to develop these capacities.
Technology-Enabled Deskilling
Technology only partially automates routine tasks in mid-wage occupations, simplifying them to levels manageable by lower-skilled workers—“technology-enabled deskilling.” Deskilling traditionally referred to skills lost by workers displaced through automation, but it equally applies to workers augmented by AI, where the boundary between augmentation and replacement is blurred. The original note’s example of typing skill decline—the shift from muscle memory to voice input—perfectly illustrates this: each instance of cognitive offloading redefines the minimum standard for “competence,” rendering deeper capabilities optional or obsolete.
Model Collapse: AI’s Self-Consuming Paradox
Shumailov et al.’s 2023 paper “The Curse of Recursion: Training on Generated Data Makes Models Forget” demonstrates that when generative models (including Gaussian mixture models, variational autoencoders, and language models) are recursively trained on synthetic data, errors and information loss compound across generations, leading to catastrophic quality degradation. Model collapse occurs because AI-generated data lacks the rich diversity found in real-world data; AI models tend to focus on the most common patterns and lose the subtle “long-tail” information essential for continued improvement. This is the technical counterpart to the “mutual destruction paradox” in the original note: just as humans need prediction to exist, AI needs human-generated unpredictability to maintain performance. When training corpora become contaminated by the system’s own outputs, the system enters a self-consuming cycle.
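The long-tail point can be sketched the same way with a categorical toy model (again my own illustration, not the paper’s experiment): re-estimate a token distribution from its own samples each generation and watch the rare tokens vanish.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "vocabulary": a few common tokens plus a long tail of rare ones.
n_common, n_rare = 5, 95
p = np.concatenate([np.full(n_common, 0.15), np.full(n_rare, 0.25 / n_rare)])
p /= p.sum()

n_samples, generations = 500, 30
for g in range(1, generations + 1):
    counts = rng.multinomial(n_samples, p)   # "corpus" sampled from the current model
    p = counts / n_samples                   # next model = empirical distribution
    if g % 10 == 0:
        print(f"gen {g:2d}: surviving tokens = {np.count_nonzero(p)} / {n_common + n_rare}")

# A rare token that draws zero counts in any generation can never reappear,
# so the support shrinks generation by generation: the long tail is eaten
# first, while the handful of common tokens survive indefinitely.
```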
Uncertainty as System Sustenance
The most profound insight in the original note is defining humanity’s role as the supplier of “uncertainty input.” Under the Free Energy Principle, minimizing prediction error is what keeps an organism within its characteristic low-entropy states, but this requires genuine error signals from an external world that is not perfectly aligned with the system’s internal model. When humans delegate decision-making, creation, and reasoning to AI, we cease producing the diverse “surprises” that keep models calibrated. High-quality human-generated data supplies variance that is often absent from AI-generated data, which is what lets models trained on it retain performance on low-probability events.
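Continuing the toy Gaussian sketch from above: mixing even a small fraction of fresh samples from the original (“human”) distribution into each generation’s corpus keeps the fitted variance from collapsing. The 10% figure and the whole setup are illustrative assumptions, not results from the literature.

```python
import numpy as np

rng = np.random.default_rng(0)

def recursive_fit_with_human_data(generations=100, n_samples=20, human_frac=0.1):
    """Same recursive loop as before, but each generation's corpus retains a
    fraction of fresh samples from the original ("human") distribution N(0, 1)."""
    data = rng.normal(0.0, 1.0, n_samples)
    n_human = max(1, int(human_frac * n_samples))
    for g in range(1, generations + 1):
        mu, sigma = data.mean(), data.std()
        synthetic = rng.normal(mu, sigma, n_samples - n_human)
        human = rng.normal(0.0, 1.0, n_human)      # the "uncertainty input"
        data = np.concatenate([synthetic, human])
        if g % 20 == 0:
            print(f"gen {g:3d}: sigma={sigma:.3f}")

recursive_fit_with_human_data()
# Even a thin stream of genuinely new data keeps sigma from drifting to zero:
# the external world keeps re-injecting the variance the closed loop would
# otherwise lose.
```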
The Philosophical Meaning of Lost Predictive Capacity as “Death”
If, according to the Free Energy Principle, biological systems become themselves through predicting the world, then loss of predictive capacity is literally existential death—not merely degradation of individual cognitive function, but failure to meet the definition of “survival.” The original note extends this logic to the civilizational level: when an entire population ceases prediction (because AI has assumed this function), that population no longer qualifies as a “living” system by the Free Energy Principle’s standards. This is not metaphor but a strict logical consequence of the theory.
Temporal Scale Differences in Recursive Collapse
Notably, AI model collapse is a technical phenomenon observed across successive generations of training (within a limited number of generations in published experiments), while human cognitive decline unfolds over decades. Yet both processes follow similar dynamics: early-stage performance appears stable or even improving, making early collapse difficult to notice, since overall performance may seem to improve while the model loses performance on minority data. This delayed effect makes intervention politically difficult: by the time the crisis becomes obvious, the underlying capacities may be irreversibly damaged.
Open Questions
- Does a “safety threshold” for cognitive offloading exist? Historically, each new tool (the abacus, the calculator, GPS) involved some exchange of skills. But the original note suggests AI may be fundamentally different, because it outsources not specific skills but metacognitive capacity itself: prediction. Is there a critical point below which cognitive offloading enhances human capability but beyond which it disrupts the predictive loop that sustains an agent’s existence? How might such a threshold be measured in Free Energy Principle terms?
- Can AI systems be designed to increase human uncertainty rather than resolve it? If humanity’s role as supplier of “uncertainty input” is essential to both humans and AI systems, could AI tools be redesigned to actively cultivate human creativity, divergent thinking, and unpredictable behavior, rather than optimizing for predictive accuracy and user engagement as current systems do? What would such “anti-predictive” AI look like: a system that treats novelty, rather than efficiency, as its loss function?