
Why Are People Starting to Sound Like ChatGPT

2025-12-30 • TED Learning Garden
✨ Key Takeaways

📋 TED Talk Outline: Why Are People Starting to Sound Like ChatGPT?

I. Introduction: Cognitive Bias at the Subconscious Level

  • We think we can easily tell what's real and fake online (e.g., AI-generated images), but at a subconscious level we're bad at making that distinction.

  • Example: social media algorithms show us the most extreme picture of reality, leading us to overestimate how extreme other people's political beliefs are.

  • Controversial messages spread further because they generate more engagement, so the "reality" we see is distorted.

II. Linguistic Assimilation: Why Do You Say "Delve"?

  • The phenomenon: since ChatGPT came out, people have been using the word "delve" more often in spontaneous spoken conversation.

  • Why:

    • OpenAI outsourced part of its training process to workers in Nigeria, whose local English uses "delve" frequently.
    • That habit got over-reinforced in the model, which is now influencing users worldwide.
  • The essence: we subconsciously confuse AI language with real language. Ironically, humans are coming to sound more like machines.

  • The feedback loop: the AI represents reality -> we take it as real and imitate it -> we feed our data back to the AI -> the AI reinforces the pattern even more (see the sketch below).
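
A minimal sketch of that loop, with made-up numbers (the rates, imitation strength, and overshoot factor are illustrative assumptions, not figures from the talk): each generation, speakers drift toward the model's usage, and retraining over-reinforces whatever now appears more often, so a small bias compounds.

```python
# Toy model of the "delve" feedback loop (all numbers are hypothetical).
human_rate = 0.001   # baseline frequency of "delve" in human speech
model_rate = 0.004   # the model's inflated frequency after training
imitation = 0.4      # how strongly speakers drift toward what they read
overshoot = 1.5      # training over-reinforces already-frequent patterns

for gen in range(1, 7):
    # Humans subconsciously imitate the model's usage...
    human_rate += imitation * (model_rate - human_rate)
    # ...then the model is retrained on text that echoes its own bias.
    model_rate = overshoot * human_rate
    print(f"gen {gen}: human {human_rate:.4f}, model {model_rate:.4f}")
```

With these constants both rates climb about 20% per generation, which is the runaway dynamic the outline describes: neither side returns to the human baseline.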

III. The Algorithmization of Culture: More Than Language

  • Case study: Spotify and "hyperpop"

    • "Hyperpop" wasn't part of the cultural lexicon until Spotify's algorithm noticed a cluster of similar users and created a playlist.
    • Once the label existed, people began defining it and musicians began writing to fit it, manufacturing a trend.
  • The business logic: platforms like TikTok and Spotify aren't trying to faithfully reflect reality; they amplify trends to keep users on the app and make money.

  • The consequence: we can no longer tell a natural trend from an artificially inflated one (Labubu, Dubai chocolate, etc.), because the algorithm only pushes visually provocative content (see the sketch below).
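
A compact illustration of that last point, with invented posts and engagement counts (nothing here comes from the talk): when a feed is ranked purely by engagement, the provocative items monopolize the visible slots regardless of how representative they are.

```python
# Hypothetical feed: engagement-ranked, so provocative items dominate.
posts = [
    {"title": "measured take on collectibles", "engagement": 140},
    {"title": "Labubu unboxing!!!",             "engagement": 960},
    {"title": "is Dubai chocolate overrated?",  "engagement": 870},
    {"title": "quiet matcha review",            "engagement": 180},
]

# Rank by engagement and show only the top of the feed, as a platform would.
feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)
for rank, post in enumerate(feed[:2], start=1):
    print(f"#{rank}: {post['title']}")
```

The two loud posts fill the screen while the measured ones never surface, so a viewer can't tell whether a trend reflects real demand or ranking pressure.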

IV. The Deeper Danger: A Worldview Being Shaped

  • This affects not just language and consumption but our sense of what's possible.

  • Ideological bias:

    • ChatGPT is more conservative when speaking Farsi, likely because its limited Iranian training texts reflect the region's conservative climate.
    • Elon Musk edits Grok's responses and amplifies particular tweets, subtly training users toward his ideology.
  • The warning: if you talk like ChatGPT, you probably think like it too. Ignore this, and your worldview is one long survivorship bias.

V. Conclusion: How to Stay Real

  • The remedy: you have to keep asking yourself "why?"

    • Why am I seeing this? Why am I saying this? Why is the platform rewarding this?
  • The call to action: if you don't ask, the platform's version of reality will become your reality. Stay real.

📝 Notes

Have you noticed yourself, or people around you, sounding more "bookish" lately, reaching for words like "delve" and "explore"? 🤔 I used to think we'd all just gotten more literate; it took this TED talk to realize we're being assimilated by AI!

Speaker Adam Aleksic's revelations are genuinely hair-raising. Come see whether you've been affected! 👇

🤖 1. Why is the whole world saying "delve"? You thought your vocabulary had grown? Thank ChatGPT! OpenAI outsourced training work to workers in Nigeria, where "delve" is everyday usage. The AI picked it up and over-weighted it, and now users worldwide are unconsciously imitating how AI talks. The scary loop: AI imitates humans -> humans imitate AI -> humans grow more machine-like.

🎵 2. Your taste is "algorithm-issued" too. Heard of the genre "hyperpop"? It was effectively minted by Spotify's algorithm! The algorithm spotted a group of listeners with similar taste ➡️ slapped a label on them ➡️ musicians started writing to the label to get big. So-called "trends" (like the recent Labubu and Dubai-chocolate crazes) may just be artificially amplified to keep you on the app (toy sketch below).
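
A toy sketch of that "discover a cluster, mint a label" step, using invented listener vectors and a hypothetical similarity threshold (an illustration of the idea, not Spotify's actual system):

```python
import math

# Invented listening-habit vectors (e.g., pitch-shifted vocals,
# distortion, tempo); the values are made up for illustration.
listeners = {
    "user_a": [0.90, 0.10, 0.80],
    "user_b": [0.80, 0.20, 0.90],
    "user_c": [0.10, 0.90, 0.20],
    "user_d": [0.85, 0.15, 0.95],
}

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Group everyone whose taste sits close to a seed user.
seed = listeners["user_a"]
cluster = [name for name, vec in listeners.items() if cosine(seed, vec) > 0.95]

print(cluster)  # ['user_a', 'user_b', 'user_d']: a cluster with no name yet
# The platform then names the cluster ("hyperpop"), builds a playlist,
# and the label starts steering what gets made.
```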

🧠 3. The biggest danger: you start thinking like AI too. If your language is AI-like, your thinking will shift with it.

  • The Farsi-speaking ChatGPT is more conservative because its training corpus is conservative.

  • Musk's Grok may be subconsciously instilling his values in you. We think we're seeing the world, but we're seeing the survivorship-biased world the platform wants us to see.

🛡️ How do you save yourself? The speaker offers a single antidote: keep asking "why"! "Why am I starting to use this word?" "Why did the platform push this video to me?" "Why is this take being rewarded?"

Don't let the algorithm-defined reality replace the real one. Stay Real! ✨

🖊 Highlights
0:00.664
How sure are you that you can tell what's real online?
0:04.735
(Laughter)
0:05.903
You might think it's easy to spot an obviously AI-generated image,
0:09.072
and you're probably aware that algorithms are biased in some way.
0:12.543
But all the evidence
0:13.744
is suggesting that we're pretty bad at understanding that
0:16.547
on a subconscious level.
subconscious /ˌsʌbˈkɑːnʃəs/
adj. existing or operating below the level of conscious awareness
0:17.748
Take, for example, the growing perception gap in America.
0:20.484
We keep overestimating
0:22.119
how extreme other people's political beliefs are,
0:24.521
and this is only getting worse with social media,
0:26.890
because algorithms show us the most extreme picture of reality.
0:30.227
As an etymologist and content creator,
etymologist /ˌetəˈmɑːlədʒɪst/
n. a specialist in the origins and histories of words
0:32.095
I always see controversial messages go more viral
viral /ˈvaɪɹəl/
adj. spreading rapidly and widely, especially online
0:34.531
because they generate more engagement than a neutral perspective.
neutral /ˈnjuːtɹəl/
adj. not favoring either side; unbiased
0:38.035
But that means we all end up seeing this more extreme version of reality,
0:41.572
and we're clearly starting to confuse that with actual reality.
0:46.043
The same thing is currently happening with AI chatbots,
0:48.712
because you probably assume that ChatGPT is speaking English to you,
0:51.949
except it's not speaking English,
0:53.617
in the same way that the algorithm's not showing you reality.
0:56.553
There are always distortions,
0:58.255
depending on what goes into the model and how it's trained.
1:01.258
Like we know that ChatGPT says “delve” at way higher rates than usual,
delve /dɛlv/
v. to dig into; to research or examine deeply
1:04.695
possibly because OpenAI outsourced its training process to workers in Nigeria
outsourced /ˈaʊtsɔːst/
v. contracted out to an external supplier (past tense of outsource)
1:08.432
who do, actually, say, "delve" more frequently.
1:10.767
Over time, though, that little linguistic overrepresentation
linguistic /lɪŋˈɡwɪstɪk/
adj. relating to language or linguistics
1:13.670
got reinforced into the model even more than in the workers' own dialects.
1:17.240
Now that's affecting everybody's language.
1:19.309
Multiple studies have found that, since ChatGPT came out,
1:22.045
people everywhere have been saying the word "delve" more
1:24.781
in spontaneous spoken conversation.
spontaneous /spɒnˈteɪ.ni.əs/
adj. unplanned; arising naturally
1:26.950
Essentially, we're subconsciously confusing the AI version of language
1:30.420
with actual language.
1:32.022
But that means that the real thing is, ironically, getting closer
1:35.125
to the machine version of the thing.
1:36.960
We're in a positive feedback loop with the AI representing reality,
1:40.230
us thinking that's the real reality,
1:42.065
and regurgitating it so the AI can be fed more of our data.
regurgitating
v. repeating back mechanically (present participle of regurgitate)
1:45.002
You can also see this with the algorithm through words like "hyperpop,"
1:48.372
not a part of our cultural lexicon until Spotify noticed an emerging cluster
lexicon /ˈlɛk.sɪ.kən/
n. the vocabulary of a language or field
1:52.209
of similar users in their algorithm.
1:54.144
When they identified it and introduced a hyperpop playlist, however,
1:57.514
the aesthetic was given a direction.
aesthetic /iːs.ˈθe.tɪk/
n. a distinctive style or sensibility
1:59.750
Now people began to debate what did and did not qualify as hyperpop.
2:03.720
The label and the playlist made the phenomenon more real
2:06.356
by giving them something to identify with or against.
2:09.092
And as more people identified with hyperpop,
2:11.261
more musicians also started making hyperpop music.
2:14.998
All the while, the cluster of similar listeners in the algorithm
2:18.101
grew larger, and Spotify kept pushing it more,
2:20.971
because these platforms want to amplify cultural trends to keep you on the app.
2:25.642
But that means we also lose the distinction between a real trend
2:28.712
and an artificially inflated trend.
2:30.947
And yet, this is how all fads now enter the mainstream.
2:34.518
We start with a latent cultural desire.
2:36.420
Maybe some people are interested in matcha, Labubu or Dubai chocolate.
2:40.123
The algorithm identifies this desire and pushes it to similar users,
2:43.393
making the phenomenon more of a thing.
2:45.228
But again, just like how ChatGPT misrepresented the word "delve,"
2:48.632
the algorithm is probably misrepresenting reality.
2:51.034
Now more businesses are making Labubu content
2:53.303
because they think that's the desire.
2:55.706
More influencers are also making Labubu trends
2:58.141
because we have to tap into trends to go viral.
go viral
to spread rapidly and widely, especially on social media
3:00.711
And yet, the algorithm is only showing you
3:03.480
the visually provocative items that work in the video format.
provocative /pɹəˈvɒk.ə.tɪv/
adj. deliberately attention-grabbing; meant to stir a reaction
3:08.151
TikTok has a limited idea of who you are as a user,
3:11.254
and there's no way that matches up
3:12.956
with your complex desires as a human being.
3:15.058
So we have a biased input.
3:16.693
And that's assuming that social media is trying
3:18.895
to faithfully represent reality, which it isn't.
3:21.164
It's only trying to do what's going to make money for them.
3:24.034
It's in Spotify's interest to have you listening to hyperpop,
3:27.003
and it’s in TikTok’s to have you looking at Labubus
3:29.473
because that's commodifiable.
3:30.974
So again, we have this difference between reality and representation,
3:34.611
where they're actually constantly influencing one another.
3:38.882
But it's incredibly dangerous to ignore that distinction,
3:41.651
because this goes beyond our language and consumptive behaviors.
3:44.788
This affects the world we see as possible.
3:47.524
Evidence suggests that ChatGPT is more conservative
conservative /kənˈsɜːvətɪv/
adj. traditional in values; resistant to change
3:49.926
when speaking the Farsi language,
3:51.595
likely because the limited training texts in Iran
3:54.097
reflect the more conservative political climate in the region.
3:57.100
Does that mean that Iranian ChatGPT users will think more conservative thoughts?
可能是因为伊朗的训练文本有限,
4:00.937
Elon Musk regularly makes changes to his chatbot Grok
反映了该地区更为保守的政治气候。
4:03.874
when he doesn't like how it's responding,
这代表伊朗的 ChatGPT 用户 有更保守的思想吗?
4:05.942
and then uses his platform X to artificially amplify his tweets.
然后借助他的平台 X 人为传播他的推文。
4:09.179
Does that mean that the millions of Grok and X users
这是否意味着数以百万计的Grok和X用户
4:11.815
are subconsciously being trained to align with Musk's ideology?
正在潜意识中接受训练, 以符合马斯克的意识形态?
ideology /ˌaɪ.diːˈɒl.ə.d͡ʒiː/
n.意识形态 ,思想
4:15.685
We need to constantly remember that these aren't neutral tools.
我们需要时刻记住, 这些不是中立的工具。
neutral /ˈnjuːtɹəl/
中性
4:20.557
Everything that ends up in your social media feed
你在社交媒体中看到的所有信息,
4:22.926
or in your chatbot responses
聊天机器人给你的所有回复,
4:24.327
is actually filtered through many layers of what's good for the platform,
都会经过层层筛选, 筛成对平台有利、
4:27.798
what makes money and what conforms to the platform’s incorrect idea
让平台赚钱、符合 平台对你的错误认知的版本。
4:31.034
about who you are.
都会经过层层筛选, 筛成对平台有利、
4:32.335
When we ignore this, we view reality through a constant survivorship bias,
如果我们忽视这一点,我们就会 一直带着幸存者偏差看待现实,
survivorship /sərˈvaɪvərʃɪp/
生存者取得权;(财产共有者中)生者享有权;生者对死者名下财产的享有权
4:36.273
which affects our understanding of the world.
从而影响我们对世界的认知。
4:39.142
After all, if you're talking more like ChatGPT,
说到底, 如果你说话越来越像 ChatGPT,
4:42.179
you're probably thinking more like ChatGPT as well,
你估计也会越来越像 ChatGPT 那样思考,
4:45.549
or TikTok or Spotify.
或者是 TikTok 或 Spotify。
4:47.684
But you can fight this if you constantly ask yourself: Why?
但如果你总是问自己这些问题, 你是可以克服的:
4:51.087
Why am I seeing this?
我为什么会看到这些?
4:52.656
Why am I saying this?
我为什么会这么说?
4:54.057
Why am I thinking this?
我为什么会这么想?
4:55.625
And why is the platform rewarding this?
为什么平台会奖励这些呢?
4:58.428
If you don't ask yourself these questions,
如果你不问自己这些问题,
5:00.564
their version of reality
他们的对现实的看法
5:01.765
is going to become your version of reality.
就会是你对现实的看法。
5:03.900
So stay real.
所以,保持真实吧。
5:05.635
(Cheers and applause)
(欢呼和掌声)