【泥沙龙笔记:弃暗投明,明在何方】

我:
just had a small talk with Tanya on US election, she was super angry and there was a big demonstration against Trump in her school too

行:
@wei
在我们这个群里,我们都见证了立委清晰地预测了川普对希拉里的领先优势。与传统媒体相比,这次社交网络所反映的民意更准确。也许更为重要的是分析整个选举过程中与时间相关的一些关键变量。
不过有一个问题和缺点,这个分析没有反映美国的选举人制度,事实上希拉里·克林顿所取得的选票高于川普。如果能有回溯的地域分析,特别是摇摆州的地域分析,比如说佛罗里达等的回溯。

我:
是的。这次其实是千载难逢的机会,因为太多人关注,太多人 bet,应该认真当成一个项目去做,精心设计。

利:
不光是美国人关注,我们在国内也非常关注

行:
证明了新工具的力量。这也是这次川普当选的最正面的事件。

我:
我这种票友性质地玩,只是显示了大数据里面的确有名堂
但不是震撼性的。

利:
我跟美国的朋友们说:不管谁赢得了总统,都是大数据分析赢了

行:
等我有钱了,我来投你。

毛:
对,我也想过这个事,难点恐怕在于网上的信息难以分清出自何地?

我:
票友性质不是说的技术:技术是deep,靠谱和专业的,我从来都不小看自己;票友是说我对 domain (政治、大选)是票友 ,到现在对选举人制度还是模模糊糊,它到底怎么工作的

行:
IP地址不是相对能反映地域吗?

我:
推特是最大最动态的数据源,我们有推特的地理,应该大体足够从地理上区分了
我们也有种族,还有年龄和性别等信息。

行:
强烈建议回溯一下摇摆州。挖矿!非常值得进一步挖掘。
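回溯摇摆州的地域分析,思路上就是把带情绪打分和州标签的数据按州聚合统计净情绪。下面用 Python 给一个极简示意,其中的字段名、州名和打分方式都是假设的玩具编码,并非当时系统的实际接口:

```python
from collections import defaultdict

# 假设每条推文已带情绪打分(+1/-1)和州标签(字段名为示意)
tweets = [
    {"state": "FL", "sentiment": +1},
    {"state": "FL", "sentiment": -1},
    {"state": "FL", "sentiment": +1},
    {"state": "OH", "sentiment": -1},
    {"state": "OH", "sentiment": -1},
]

def net_sentiment_by_state(tweets):
    """按州聚合净情绪:(正面数 - 负面数) / 总数。"""
    pos = defaultdict(int)
    total = defaultdict(int)
    for t in tweets:
        total[t["state"]] += 1
        if t["sentiment"] > 0:
            pos[t["state"]] += 1
    return {s: (2 * pos[s] - total[s]) / total[s] for s in total}

print(net_sentiment_by_state(tweets))
# → {'FL': 0.3333333333333333, 'OH': -1.0}
```

真实系统里的地理信息来自推文元数据(下文提到的"推特的地理"),再叠加种族、年龄、性别等维度,原理相同。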

我:
没那个精力和兴趣了,公司缩水,也没有几个兵了,日常的琐务也要做
大数据不好赚钱。烧钱倒是哗哗的。

行:
需要设计出一个能赚钱的商业模式。技术是根本,但不是全部。

毛:
如果能把地理年龄这些结合进去,那你的系统大有前景。

Nick:
同意,伟哥可以写本书:
how is a presidential election won or stolen?把选举人票考虑进去

我:
有兵的时候,鸡毛蒜皮我不管,我爱怎么玩怎么玩,到头来连兵都保不住,还玩个球啊。一个教训,不要把技术开发得过头。小公司的架构内,任何一个部门都不宜超前太多,超前了,就意味着末路的来临。

Nick:
@wei 早就叫你弃暗投明

我:
弃暗投明倒有个明啊 一厢情愿哪里行。

技术并不是越深入越先进越好,by nature 作为科学家,我们总是想越深越好
结果是产品来不及消化,技术总吃不饱,最后最先裁剪的就是技术 呵呵 反正也消化不了全部,你再优秀也没价值 其实是有前车之鉴的:《朝华午拾 – 水牛风云》
十几年再来一次,仿佛时光倒转。

一个机构作为一个整体,必须保证大体相称的发展水平,才可相谐。一个部门太出色,overperforming,其他部门无法消化,也就成了目标。譬如研发,要质量我给你最好的质量,超过“世界第一”,要广度我给你整出20个世界主要语言的深度分析 (deep parsing),cover 语言数据的 90+%,要领域化可以在两周内 deliver 一个 domain 所需的情报单位(一种关系,或一个事件),只要定义明确,产品的情报挖掘的瓶颈永远不在这个自然语言研发部门。结果呢,部门需要为部门的太好表现付出代价。这个世界就是这样诡异。

话说回来,一套技术在同一个公司挥洒了10年还没走人,对我这样害怕变动的人,公司也已经相当不易了。对得起我,我也对得起它了。当年没有我的技术,公司早死翘翘了。如今有了技术不能起飞,也怪不得我,公司从上到下,在这一点是共识:论技术和由此而来的数据质量,我们绝对领先对手。市场做不起来,打败不了对手,是技术以外的因由,我无能为力。另一方面也可以说,市场不成熟,技术变钱不是那么简单 market economy 决定的。

白:
NLP部门因为表现太好而不受欢迎,听起来是天方夜谭,如果不是伟哥亲历,谁信呀……

我:
反正我信。
我们吃不饱有日子了。一直都是我们催产品经理,而不是相反:求求你,给我们一个任务吧。产品经理说:就根据客户反馈小修小补吧。我们的数据质量已经行业领先很久了,一直是领先。

白:
用嘴投票还是用脚投票,这是一个问题

我:
新的 specs,或者出不来,或者出来了,我们 deliver 了,产品却实施不了。

严:
@wei 还是觉得公司产品方向太窄了,这么好的技术被局限在这么窄的应用范围。董事会老是要Focus。

邓:
听起来CEO应该负很大的责任啊

我:
据说是市场太小了,或饱和了。产品在一个 niche market,这个社会媒体大数据挖掘的market一度被疯狂追捧和夸大。几年下来发现,价值得到验证,市场也确实存在,但是就是不够大。拓展其他 market 需要有眼光的产品老总。对于“高新技术”,有眼光的产品老总比熊猫还稀少。高新技术比较适合做大公司的花瓶,其价值在于花瓶的股市效应。或者,适合一个巨大平台,帮助连接顾客和厂家: 这个可以产生真正的价值,譬如 Facebook。高新技术对于创业其实很难,第一缺乏资源(不能吃一辈子VC),第二缺乏平台(连大数据都要花大价钱购买,更甭提顾客与厂家的network了),第三缺乏熊猫。好不容易都凑齐了,最佳的出路也就是有幸被巨头看重收购了事。这个概率不到十分之一吧。也就是说,你哪怕有再牛的技术,你这辈子活过了三个人的寿命,有机会创业10次,你可能创业成功,如果成功是以被收购作为标准的话。如果成功是以上市成为独角兽作为标准,那么你需要的机会数是下一个量级,五年一个轮回,你大概需要活500岁才可撞上狗屎运。

老总的眼光各有自己的局限,譬如,原来一直做 b2b saas 的 就一直沿着以前的经验和熟悉的领域想技术的用场。超出经验领域之外 是很难的。产品创新不再是技术的创新,而是产品层面不断加 features,越加越多。为了讨好不同的客户。结果是 90% features 基本没人用,产品也因此变得让人眼花缭乱了。为什么 agency 喜欢这样的产品?因为他们是 power users, features 越多,他们越爽。其他客户面对众多 features,只会晕菜,反而起反作用。

NLP 的真正威力是把数据转为情报,如果一个产品只需要一种情报,譬如舆情,无法消化其他可能有用的情报,NLP 就处于语义落地吃不饱的地位。你吃不饱,你的价值就丧失。

我:
洪诗人有空可以为nlp写一首挽歌,为nlp超出产品一叹。

悟:
李氏唐朝西游记
维度无穷NLP录
立宪定法三权六
委身侍主天地合
@wei 我先抛砖引玉, 见笑了

我:
这砖抛的,狂赞。
【相关】

Final Update of Social Media Sentiment Statistics Before Election

Trump sucks in social media big data in Spanish

Did Trump’s Gettysburg speech enable the support rate to soar as claimed?

Pulse:tracking US election, live feed,real time!

【大数据跟踪美大选每日更新,希拉里成功反击,拉川普下水】

【社煤挖掘:大数据告诉我们,希拉里选情告急】

【社煤挖掘:川普的葛底斯堡演讲使支持率飙升了吗?】

【社煤挖掘:为什么要选ta而不是ta做总统?】

Big data mining shows clear social rating decline of Trump last month

Clinton, 5 years ago. How time flies …

【社媒挖掘:川大叔喜大妈谁长出了总统样?】

【川普和希拉里的幽默竞赛】

【大数据舆情挖掘:希拉里川普最近一个月的形象消长】

欧阳峰:论保守派该投票克林顿

【立委科普:自动民调】

【关于舆情挖掘】

《朝华午拾》总目录

【语义计算沙龙:坐而论道 on 中文 parsing】

董:
刺死前妻男友男子获刑5年 死者系酒醉持刀上门 -- 百度新闻
Stabbed her boyfriend man jailed for 5 years, the drunken knife door --百度翻译
Stabbed his ex-boyfriend boyfriend was sentenced to death for 5 years the Department of drunken knife door -- 谷歌翻译
不知道这样结果是什么智能? -- 人工?鬼工?骗工?

白:
也是醉了

董:
我主要是要探讨“连动”--酒醉,持刀,上门。这三个动词在知网词典里都是有的。 酒醉 -- {dizzy|昏迷:cause={drink|喝:patient={drinks|饮品:{addict|嗜好:patient={~}}}}}
持刀 -- {hold|拿:aspect={Vgoingon|进展},patient={tool|用具:{cut|切削:instrument={~}},{split|破开:instrument={~}}}}
上门 -- {visit|看望}
酒醉的上位可达:“状态”;持刀的上位可达“行动”,但它与“拿”不同,它是“拿着”,所以定义描述里多了“aspect=Vgoingon”;最后是“上门” 它是“行动”。于是我试下面的规则:
DefineVP1 0712 CN[*pos==`verb`,*def_h=={act|行动},*syl==`2`];L1[*pos==`verb`,*def_h=={act|行动},*def_s==`aspect={Vgoingon|进展}`,*syl==`2`]$L1[*log==`preceding`]@chunk(CN,L1)# // 酒醉持刀上门;
DefineVP1 0722 CN[*pos==`verb`,*def_h=={act|行动},*syl==`2`];L1[*pos==`verb`,*def_h=={state|状态},*syl==`2`]$L1[*log==`preceding`]@chunk(CN,L1)# // 酒醉持刀上门;
心里还是不踏实,因为没有大数据的支持。想听你们的意见。其他例子如:骑车上街买菜遇到一个老同学;
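董老师的两条 DefineVP1 规则剥掉形式化外衣,大意是:前一个动词若是"状态"、或是带 Vgoingon 体貌的"行动",就与后续行动动词组成连动 chunk。用 Python 玩具代码示意一下(特征编码为假设的简化,并非知网的实际接口):

```python
# 示意词典:def_h 对应知网定义的上位,aspect 对应体貌特征(均为简化)
LEXICON = {
    "酒醉": {"pos": "verb", "def_h": "state"},
    "持刀": {"pos": "verb", "def_h": "act", "aspect": "Vgoingon"},
    "上门": {"pos": "verb", "def_h": "act"},
}

def chunk_serial_verbs(words):
    """对相邻动词对打标签:前项是状态、或进行体动作时,与后项组成连动。"""
    chunks = []
    for left, right in zip(words, words[1:]):
        l, r = LEXICON[left], LEXICON[right]
        if r["def_h"] == "act" and (
            l["def_h"] == "state" or l.get("aspect") == "Vgoingon"
        ):
            chunks.append((left, right, "连动"))
    return chunks

print(chunk_serial_verbs(["酒醉", "持刀", "上门"]))
# → [('酒醉', '持刀', '连动'), ('持刀', '上门', '连动')]
```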

白:
直观感觉,状态的标签不是太好贴。比如,拿着刀子砍人,拿着是状态;抡起斧子砍人,抡起就不是状态?隔着玻璃射击,隔着是状态;打开窗户通风,打开算不算状态 ?
买菜和遇到老同学,谁是前景,谁是背景?谁是主线谁是旁岔,很难说。像伟哥这样一律next最省事。
打开保险射击,打开保险就不是状态

我:
伟哥于是成为懒汉的同义语 。工业界呆久了 想不懒都不成。我曾经多么勤勉地一条道走到黑啊。Next 的好处是拖延决策 或者无需决策。可以拖延到语义中间件,有时也可以一直拖延到语义落地。更多的时候 拖延到不了了之 这就是无需决策的情形。

白:
董老师说的就是语义落地啊。花五毛钱打酱油,花五毛钱打醋。花五毛钱该贴啥标签?
要不是语义落地谁费这事儿。

我:
花 money vp
这个是 subcat 可以预测的模式。凡是subcat可明确预测的句型 通常都不是事儿。给标签于是成为 system internal 的内部协调。

白:
关键是不知道该有多少标签,如何通过粒度筛选、领域筛选、时空背景筛选,快速拿到最有用的标签。

我:
通常的给法是:money 是 o (object),vp 是 c (complement),这是句法。
句法之上这几个节点如何标签逻辑语义 也可以由 subcat 输出端强行给定。譬如 可以给 vp 一个【结果】的标签,vp 是 “花钱” 的结果。
subcat 的实质就是定义输入端的线性模式匹配 并 指明如何 map 到输出端的句法和逻辑语义的结构。这种词典化的subcat驱动简化了分析算法 而且包容了语义甚至常识。
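这里说的词典化 subcat 驱动,可以用一个数据结构示意:词条同时声明输入端的线性模式,和输出端的句法标签与逻辑语义标签。以下字段与标签名均为示意,并非实际系统:

```python
# 词典化 subcat:一个词条同时声明输入模式和输出映射(玩具示意)
SUBCAT = {
    "花": {
        "pattern": ["money", "vp"],            # 输入端线性模式:花 + 钱 + VP
        "syntax":  {"money": "O", "vp": "C"},  # 句法:宾语 / 补足语
        "logic":   {"vp": "结果"},              # 逻辑语义:vp 是"花钱"的结果
    }
}

def apply_subcat(verb, args):
    """按词条声明,给各论元同时打上句法和逻辑语义标签。"""
    entry = SUBCAT[verb]
    assert [a["type"] for a in args] == entry["pattern"], "模式不匹配"
    for a in args:
        a["syn"] = entry["syntax"][a["type"]]
        a["sem"] = entry["logic"].get(a["type"])
    return args

out = apply_subcat("花", [{"type": "money", "text": "五毛钱"},
                          {"type": "vp", "text": "打酱油"}])
assert out[0]["syn"] == "O" and out[1]["syn"] == "C" and out[1]["sem"] == "结果"
print("ok")
```

"花五毛钱打酱油"于是一次匹配同时得到句法和逻辑语义,这就是上文说的"词典绑架"。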

董:
我是因为首先要解决句法关系引起的。例如:欢迎参观;争取投资,就是VO关系,而不是参观游览。也就是说,两个或更多的动词连着时,如何排除歧义?试着只给两个标签:动宾、连动。

我:
一般而言 动宾 是动决定的,连动可以是第一个动决定, 也可以是随机的组合。后者有一个与conjoin区分的问题。
“欢迎” 在词典subcat 中决定了可以带 “参观” 这样的宾语,就事论事 这个“欢迎-参观”的关系几乎是强搭配,与 “洗-澡” 类似。
连动也有词典 subcat 决定的,譬如 “去” vp,“驱车” vp,“出门” vp。
词典决定的东西 没有排除歧义的问题 就是词典绑架 通过 subcat。只有随机组合才有歧义区分的问题。而动宾的本质是不随机,原则上不存在歧义 一律是强盗逻辑 本质就是记忆。可以假设 人的动宾关系是死记在词典预测(expectation)里的,预测实现了 动宾就构建了 这符合 arg structure 的词典主义原则。

董:
负责挖坑,负责浇水,负责填土。。。动宾关系;

我:
负责 vp
为 vp 负责
后者是变式

董:
这么看来,动宾还是连动还是修饰(限定),都由词典解决了。统统做进词典里,就可以了。明白了。

我:
词典主义。随机度太大的组合比较难做进词典。所以一方面尽量做进词典,另一方面 来几条非词典化的规则 兜个底。
随机性而言 似乎 修饰大于连动 连动大于动宾。

白:
如果只有这三个标签,当然做进词典是首选,就怕落地时要的不止这三个。

董:

这是我刚才试的一个句子。我们为每个节点预留10个子节点。动词与动词也得包括这些。

我:
进不进词典 主要不是有几个标签 而是这个标签的性质。
语言学的理论比较文科,说的东西有些模糊,但大体还是有影子的。
语言学理论中一个最基本的概念区分就是 complement vs adjunct,这是句法的术语,对应到较深的层面 就是 argument vs modifier。一般而言,arguments or complements 都是词典的主导词可以 subcat 预测的。HowNet 从语义层面对 args 已经做了预测。语言学词典(譬如英语的计算词典,汉语的计算词典等)就是要相应地从具体语言的句法表达方式的角度把 subcat 预测的 complements 定义出来。至于 modifier 和 adjuncts,他们的组合性随机,词典就难以尽收。最典型的就是普世的时间地点状语等。世界上的所有事件都是在时间和地点中进行。

白:
跑步去公园,去公园跑步。前者去公园的路上都在跑步,两个事件在时间上重合;后者只有到了公园才开始跑步,在时间上只是先后衔接。
如果语义落地需要对此作出区分,该有什么标签?怎么词典化?
动词为其他动词挖坑的情况都不难处理,难的是压根儿没有标配的坑。这是从ontology的事件根结点继承下来的。

我:
跑步去公园,去公园跑步。
先说第二句:【去 + NP + VP】 这是可以词典预测的,万一预测不准,可以 fine-tune 条件,譬如:【去 + 地点 + 动作】,总之是词典预测的。既然词典预测了,那么该给什么标签就不是问题了。给什么都可以,要什么给什么。
再看第一句:跑步去公园。
去公园 不是问题 这是一个动宾 VP 是词典预测的:【去 + NP】 或 【去 + 地点】。
问题于是就成为 “跑步” 与 VP(人类动作)之间的关系。 这种关系在哪里处理,词典可以不可以预测?

白:
吃口饭去单位,又是接续关系不是重叠关系了

我:
这个的确有些 tricky 但不是无迹可寻。

白:
跑会儿步去公园,也是接续关系了。

我:
偷懒的办法就是有一条非辞典化的模糊的规则 Next 连接二者。
费劲的办法也有:一个是 “跑步去” 词典化 作为“去”的变体,“跑步”是对“去”的方式限定。

白:
现在的问题是,句法上承认next,语义上细化next

我:
另一个词典化的做法是,在“跑步”词条下,预测 movement 的动词 VP, 【去NP】 、【来NP】 、【到达NP】 等等 都符合条件,可以跟在“跑步”后面。

白:
为啥跑步加了时态,限定就失效?

我:
这个预测的subcat里面的句法规定是:
1. 本词不许有显性时态,不许分离;
2. 后面的 VP 必须是 movement;
3. 输出端:本词作为后一个 VP 的限定方式(句法叫方式状语:adverbial of manner)。
Binggo!
至于为啥?这个问题,系统可以不回答,系统可以是数据驱动的。
系统背后的语言学家可以一直为了 “为啥” 去争论下去,系统不必听见。总之是让 “跑会儿步去公园” 不能在此预测pattern中实现。词典化实现不了,那就只好找兜底的规则了,于是 Next 了。【限定】与【接续】的区别由此实现。前者是词典强盗,后者是句法标配。
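上面"跑步"词条下的三条句法规定,写成代码大致如此(特征与 movement 词表均为玩具假设):

```python
# "跑步"词条预测 movement VP 的三条件(简化示意)
MOVEMENT_VERBS = {"去", "来", "到达"}

def manner_or_next(v1, v2):
    """三条件全满足则 v1 限定 v2(方式状语),否则退回兜底的 Next。"""
    if (not v1["tensed"]                        # 1. 本词不许有显性时态、不许分离
            and v2["head"] in MOVEMENT_VERBS):  # 2. 后面的 VP 必须是 movement
        return "manner"                         # 3. 输出端:方式状语
    return "Next"                               # 句法标配:接续

# "跑步去公园":限定;"跑会儿步去公园":带了时态,只能 Next
assert manner_or_next({"word": "跑步", "tensed": False},
                      {"head": "去", "obj": "公园"}) == "manner"
assert manner_or_next({"word": "跑会儿步", "tensed": True},
                      {"head": "去", "obj": "公园"}) == "Next"
print("ok")
```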

白:
在词典之外搞几个标签模版也不难,句法上都对着next,只不过依据前后subcat细化了,这有多困难,而且清爽。

我:
亦无不可。差不多是一回事儿。一碗豆腐,豆腐一碗,就是先扣条件还是后补条件的区别而已。无论前后,总之是要用到词典信息,细线条的词典信息。

白:
看上去不那么流氓

我:
先耍流氓【注1】,还是先门当户对,是两个策略。
很多年前跟刘倬老师做专家词典。他是老一代无产阶级革命家,谆谆教导的是不能耍流氓,要门当户对,理想一致了才能结合成为革命伴侣。后来到了美国闹革命,开始转变策略,总是先耍了流氓再重新做人。其实都是有道理的。

白:
@董 跑步和上班是先后关系,跑步和去是同时关系。

董:
这句分析后,有两个“preceding”,不符合我们理想的结果。我们要的是“跑步”是“去上班”的manner 才好。因为我们要准备用户提出更多的信息要求。例如:系统要告诉用户,我平时是HOW去上班的。

我:
刘老师做系统是在科学院殿堂里面,可以数年磨一剑,we can afford to 不耍流氓。来美国闹革命拿的是风投的钱,恨不能你明天就造出语言理解机器人出来,鞭子在上,不耍流氓出不了活。形势比人强,不养童养媳成不了亲,不抓壮丁打不了仗,于是先霸王,然后有闲再甄别。

董:
是的,我们现在连科学院殿堂都不是,而是家庭作坊,可以慢慢磨。其实已经磨了20多年了。

我:
我还记得当年我们为了一个不足100句的英语sample,翻来覆去磨剑磨了两三年,反复地磨平台、磨算法和磨规则。当时的董老师已经大数据(现在看也不是大数据了)开放集测试“科研一号”【注:中国MT划时代的第一款工业产品“译星”的前身】了。

董:
我们给我们的现在开发的中文分析的目标是:看看能最大限度地挖掘出多少信息。

我:
董老师20年磨出的 HowNet 打下了语言分析的牢固基础。现在是把普世的 HowNet 细化为具体语言的句法规定。路线上是一脉相承的。换个角度看,董老师在 HowNet 中已经把普世的 Subcat 的输出端统一定义了,现在是要反过来再进一步去定义具体语言的句法表达形式,也就是输入端的pattern和条件,然后把二者的映射关系搭上,大功即告成。先深层结构 和 UG,然后回过头来应对每个语言的鸡零狗碎的形式。

董:
这倒是的,我们这个中文系统还没到半年,就有点模样了。词典22万义项,规则近4000条。当然,要真正交给用户,那还有一段磨的。

我:
蛮 impressive。我们开发四年多了,但绝对没有 8x 的规则量。

董:
这回我们不做中英翻译,因为英语生成我们做不起,又没有大数据的。其实做出来也只是给别人添砖加瓦,多一个陪着玩的。这种事情我们不玩的。

我:
对,MT 从大面上就拱手相让吧,数据为王。 符号逻辑和规则路线现在的切入点就是应对数据不足的情境:其实数据不足比人们想象的要严重得多,领域、文体等等,大数据人工标注根本玩不起。不带标的 raw 数据哪里都不缺 但那比垃圾也好不了多少。

宋:
"中国对蒙出口产品开始加征费用"

白:
这个哪里特殊?

宋:
中国对(蒙出口产品)开始加征费用, (中国对蒙)出口产品开始加征费用

白:
进口出口,应该站在自己立场吧

宋:
出口是自己的立场,但也有两种解读:蒙古出口,中国对蒙古出口。我一开始理解为后者,看了内容才知道是前者。

我:
这个 tricky,在争抢同一个介词“对”:对 np 征税;对 n 出口。
远距离赢。

白:
常识是保护自己一方的出口,限制非自己一方的进口

我:
远距离原则有逻辑 scope 的根据。但是具体看 很难说 因为汉语的介词常常省略。scope 的起点用零形式 并不鲜见。
“对阔人征税” 可以减省为 “阔人征税”;“对牛肉征税” 可以简化为 “牛肉征税”。但 “对蒙古出口”,不可简化为 “蒙古出口”。本来也可以简化的,但赶上了 “出口” ,逻辑主语相谐。“牛肉” 与 “征税” 没有这种逻辑主谓的可能,于是“对”可省 而NP的逻辑语义不变。

白:
势均力敌时,常识是关键一票

宋:
这个例子在我所看到的语境下是远距离赢,在别的语境下则不一定。因此,分析器是否应当给出两个结果,然后在进一步的处理中再筛选?

我:
给两个结果 原则上没难度,但后去还是麻烦。

白:
其实关键是什么时候定结果,几个倒在其次

我:
"中国对蒙出口产品开始被加征费用"

加了一个 被 字 哈哈 可能是蒙古对中国的反制。

白:
两个对,有一个和被不兼容

【注1】所谓parsing耍流氓,指的是在邻近的短语之间,虽然他们之间句法语义关系的条件和性质尚不清晰,parser 先行把他们勾搭上,给个 Next 或 Topic 之类的虚标签,类似未婚同居,后去或确认具体关系,明媒正娶,或红杏出墙,另攀高枝,或划清界限,分手拉倒。

 

中文处理

Parsing

【置顶:立委NLP博文一览】

《朝华午拾》总目录

【李白对话录之10:白老师的麻烦不是白老师的】

我:

突然想起一句话 怕忘了 写在这:

“白老师的麻烦是 他懂的 我不懂 我懂的 他懂。”

谁的麻烦?

乔姆斯基说 麻烦是白老师的

菲尔默说 麻烦是我的

后一种语义深度分析的结论是如何得出的?

语义要多茁壮 才能敌得过句法的标配啊。

而且这种语义的蛛丝马迹并非每个人都有捕捉的能力 它远远超出语言学 与一个人的背景知识和领悟力有关

遇到这种极深度的人工智慧 目前能想出来的形式化途径 还是词驱动比较靠谱 如果真想较真探索的话

“麻烦 问题 毛病” 这类词有两个与【human】有关的坑

一个是标配 表达的是所有关系 possessive

另一个是 about 要求填坑的是 【event】或【entity】 后者自然也包括 【human】

白:

“他的教训我一辈子忘不了”

谁被教训?

我: 哈。

回到前面, 近水楼台的 【human】 “白老师” 是标配。

另一条词驱动的可能路径自然休眠。因为词驱动 也就埋下来唤醒的种子。

上下文中遇到另一个 【human】 candidate “我”,加上其他一时也整不清楚但终究可能抓到的蛛丝马迹, 于是休眠唤醒 了。

白:

好像sentiment在休眠唤醒中起比较重要的作用

我:

此句是一例 本来是褒 可不唤醒就是贬了。

白:

标配的麻烦,把负面情感赋与那谁,等到后面说的都是正面,纠结了,另一个human就有空子钻了。

我:

对对对

这个 trick 我们做了n年 sentiment 摸索出来了就在用。典型案例是: “Thank you for misleading me”

Thank 里表达的抽象的褒 由于遭遇了 misleading 的较为具体的贬 而转化为讽刺。

还有:“你做的好事儿 great”。这里 great 的讽刺也是有迹可寻的。

白:

more specific expressions承载的sentiment优先

我:

遇到过两次记者采访,两次都被问到 你们教给机器 sentiment,机器可以理解正话反说 和 讽刺 吗?

我的回答是:这是一个挑战 但其中的一些常见的讽刺说法 是可以形式化 可以捕捉到的。举例就是上面。

白:

具体override抽象。

我:

yes yes yes

白:

如果二者纠结,具体承载的sentiment才是基调,抽象的反向sentiment不是抵消而是修辞手法的开关。
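这条"具体 override 抽象"的原则可以用几行代码示意:具体词承载的情感定基调,反向的抽象情感不抵消,而是触发讽刺开关。词表与打分均为假设:

```python
# 具体 override 抽象的讽刺检测(词表为玩具假设)
ABSTRACT = {"thank": +1, "great": +1}           # 抽象的情绪表达
SPECIFIC = {"misleading": -1}                   # 具体的 pros/cons 承载词

def sentiment(tokens):
    """具体词的情感为基调;反向的抽象情感是修辞(讽刺)开关。"""
    spec = [SPECIFIC[t] for t in tokens if t in SPECIFIC]
    abst = [ABSTRACT[t] for t in tokens if t in ABSTRACT]
    if spec:
        polarity = sum(spec)
        irony = any(a * polarity < 0 for a in abst)  # 方向相反 → 讽刺
        return polarity, irony
    return sum(abst), False

# "Thank you for misleading me":具体贬压倒抽象褒,判为贬义 + 讽刺
assert sentiment(["thank", "misleading"]) == (-1, True)
assert sentiment(["thank"]) == (1, False)
print("ok")
```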

我:

我一直在强调,sentiment 的世界里面,主要是两类东西:一类是情绪的表达,一类是情绪背后的理由。

有些人只表达情绪,但有些人为了说服或影响别人,好恶表态的前后,会说一通理由:you make a point,then you need to support your point with arguments

所谓 sentiment analysis 很长一段时间 领域里面以为那是一个简单的分类问题:thumbs up thumbs down。这个浅陋而流行的观点只是针对的情绪,而面对情绪背后千变万化的理由 就有些抓瞎了。可是没有后者,那个sentiment就没啥特别的价值。

所谓讽刺,只是情绪的转向,正话反说。具体的理由是不能转向的,否则人类的交流就没有一个 protocol 而可以相互理解了。褒贬里面具体的东西 我们叫 pros and cons, 那个东西因为其具体,所以语义是恒定的,不会轻易改变。

情绪却不同。人是一个奇怪的动物,爱极而恨,恨极而爱,都有。甚至很多时候 爱恨交织 自己都搞不清楚。表达为语言,就更诡异善变。

英语口语中 sick 是强烈的褒义情绪,shit 和 crap 等词也不是贬义,bad ass is very positive too:

“The inside of a prius is bad ass no lie.” 是非常正面的褒奖。

人类在情绪表达中说反话,或者由于反话说常了 community 都理解成正话了,这种情形也屡见不鲜。

关键词的褒贬分类系统遇到这种东西不傻眼才怪:当然如果input很长,可以 assume 这类现象只是杂音,整个关键词分类还可以靠谱。但一旦是社会媒体的短消息,这种语言模型比丢硬币好不了多少。

汉语中 老婆太喜欢老公了 喜欢到不知道怎么好了 就说 杀千刀的。

再举一个今天遇到的 sentiment 实际案例:
@Monster47_eNd nah, you have no idea how bad I would kill to eat taco bell or any kind of shit like that.
瞧瞧里面的 sentiment triggers: bad;kill;shit 三个都是强烈的 negative triggers
谈论的 topic 是 Taco Bell,一家流行的墨西哥快餐连锁品牌。
这条短消息通篇没有褒义词出现,因此没有理解、缺乏结构的关键词系统只能得出贬义的结论。但这句话其实是对 Taco Bell 异乎寻常的褒奖 用的是完全草根普罗的用语。

谷歌的神经翻译遇到口语化的句子也基本抓瞎,训练的数据严重口语不足(那是因为双语语料质量过得去的来源大多是正规文档,组织人力去标注口语,做地道的口语翻译,是一个浩大的工程,巨头也无能为力吧):
@ Monster47_eNd nah,你不知道我會殺了多少吃塔可鐘或任何種類的狗屎。

尝试“人工”翻译一哈:
@ Monster47_eNd nah,你不知道为了能吃上Taco Bell 的东东,我會怎样不惜代价(哪怕让我杀人都行)。

简单的译法是:
想吃 Taco Bell 这样的垃圾,我他妈都想疯了。

谁要再说 sentiment 好做,我TM跟他急。这无疑是 NLP 中最艰涩的果子之一。
【相关】

《泥沙龙笔记:parsing 的休眠反悔机制》

【立委科普:基于关键词的舆情分类系统面临挑战】

【立委科普:舆情挖掘的背后】

【李白对话录之九:语义破格的出口】 

李白对话录之八:有语义落地直通车的parser才是核武器

【李白对话录之七:NLP 的 Components 及其关系】

【李白对话录之六:如何学习和处置“打了一拳”】

【李白对话录之五:你波你的波,我粒我的粒】

【李白对话录之四:RNN 与语言学算法】

【李白对话录之三:从“把手”谈起】

【李白隔空对话录之二:关于词类活用】

《李白对话录:关于纯语义系统》


【一日一parsing:“这瓶酒他只喝了一杯”】

白:
“这瓶酒他只喝了一杯。”
两个量词(瓶、杯)和一个名词(酒)关联。
三个问题:1、“这瓶酒”是什么成分?为什么?2、“一杯”是回指到句中的“酒”还是指到另一个省略了的“酒”?3、如果“喝”的逻辑宾语是杯中酒,那么瓶中酒又是什么逻辑角色?
就是说,如果把逻辑宾语看成“部分”,其相对的“总体”提前为“话题主语”或“大主语”,那么后者到底填了什么坑?目测已经没位置了

詹:
“语文他答对了三道题。”跟白老师例子类似。
他只喝了这瓶酒中一杯的量
这瓶酒他只喝了一口
这瓶酒他只喝了二两
“喝”事件可以设计一个“消耗量”的事件元素
“这瓶酒他喝了一大半”

白:
随意增减动词坑的数目总是不好,量词倒是可负载两种结构:一种是绝对量,一种是相对量。相对量有坑,绝对量没坑。

詹:
动词的坑的数量可以设计(因而可调)。消耗量设计为“喝”的一个坑,可以跟“讨论、谈、喜欢”这样的动词对比。“这瓶酒他们讨论了一杯”不能接受。因为“讨论”类动词没有预留这个坑
“这瓶酒他们讨论了一天。”
请教白老师说的绝对量和相对量具体如何理解?形式区别是什么?

白:
相对量和绝对量都是数量组合。绝对量与中心语结合,相对量中心语省略,但与同形的先行中心语形成远距离照应。
“山东聊城市”
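白老师说的相对量远距离照应,机制上就是拿量词与先行中心语做相谐检查、就近回指,补出省略的中心语。极简示意(量词搭配表为假设):

```python
# 相对量("一杯")省略的中心语,通过同形相谐的先行中心语照应补出
MEASURE_HEADS = {"杯": {"酒", "茶", "水"}, "碗": {"汤", "饭"}}  # 搭配表(假设)

def resolve_measure(measure, antecedents):
    """在先行名词里找与量词相谐的中心语,作为省略成分的照应。"""
    compatible = MEASURE_HEADS.get(measure, set())
    for noun in reversed(antecedents):  # 就近原则,从后往前找
        if noun in compatible:
            return noun
    return None

# "这瓶酒他只喝了一杯":"一杯"回指"酒";先行词是茶,省略的就是茶
assert resolve_measure("杯", ["酒"]) == "酒"
assert resolve_measure("杯", ["茶"]) == "茶"
print("ok")
```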

我:

【parsing 图 1121a】
句法是清楚的。

白:
buyu是个大杂烩 装了很多不同的东西,从填坑角度看更是五花八门缺少共性。

我:
那就加个标签【数量补语】,与其他补语对照:【程度补语】【结果补语】或【原因补语】等。如果想进一步区分 “喝了一杯” 与 “喝了一斤”,还可以进一步区分 根据数量结构本身的子类即可。句法到这一步 落地应该水到渠成了。

白:
那倒不必。喝了一口有点麻烦。可是这不是一个好的二元关系。
或者说,buyu才是真正的宾语,O反而只跟buyu发生直接关系,通过buyu才跟动词发生间接关系。O跟buyu的关系是明确的总分关系

我:
喝---酒 应该是直接的关系 否则 语义不搭。

白:
一杯后面有个省略的酒
正常也可以说,走,喝两杯去。省略是肯定的,省略的是酒,则是通过先行词照应出来的。先行词是茶,省略的就是茶。杯和酒,也有强关联,不管语义上还是统计上。
试试:“这瓶酒张三只喝了一杯,李四却喝了三杯。”
要想把“一杯”和“三杯”都分析成buyu,还有点小难度呢。
“一瓶酒四个人喝,张三和李四各喝了一杯,王五和赵六各喝了两杯,瓶里还剩一杯,问这瓶酒共有几杯?”

我:

【parsing 图 1121b】

一致不一致,只要后面是有准备的就可以。我们在落地模块里面,其实是有这个心理准备的,并不指望句法分析出现完全一致的结果。关系标签只是落地的条件之一,不是全部条件,如果 x 和 y 的关系都有可能,对付不一致就是 x|y,一般不影响结果。
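落地条件写成 x|y 的析取标签,实现上不过是子树匹配时允许多个标签命中。示意如下(标签沿用本文讨论中的 O 与 buyu):

```python
# 落地模块的关系匹配允许标签析取,容忍句法输出不完全一致
def match_relation(edge, wanted_labels):
    """wanted_labels 形如 "O|buyu":任一标签命中即匹配。"""
    return edge["label"] in wanted_labels.split("|")

edge1 = {"head": "喝", "dep": "一杯", "label": "buyu"}
edge2 = {"head": "喝", "dep": "三杯", "label": "O"}

# 抽取条件写成 "O|buyu",两种分析结果都能命中
assert match_relation(edge1, "O|buyu")
assert match_relation(edge2, "O|buyu")
assert not match_relation(edge1, "S")
print("ok")
```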

白:
“X杯”都分析成buyu吗?
不好的句法不一致多些,好的句法不一致少些

我:
一切都是平衡,某个条件宽了,另外的条件就可以弥补。

白:
遇到不好的句法,不一致不是不能对付,只是一边对付一边喷语言学家而已。

我:
哪里都一样。Parsing 做不好可以喷 POS 模块开发人,POS 做不好可以怪词典学家没弄好。或者学习模块很操蛋,对付不了 sparse data。但是说到底,在一个真实开发环境里,还是内部协调为纲。要是踢皮球,做不了好系统。

白:
但是句法稍作调整,就可以做得更好。
我:

铁路警察各管一段,是一个非常坏的原则,adaptive dev 才是正道。当然,凡事都有个度。

白:
补语和宾语补足语弄成两个东西,一个指向动词,一个指向名词。已经做了初一,还怕十五么?

我:
一杯和酒 脱离上下文 也有很强的特征上的不同 而且也有ontology或大数据方面的高度相关性。因此 句法把它们连成 x 也好 y 也好 都不是大问题,因为各自的本性的、静态的标签是恒定的、随时可check 的

白:
这话推到极端,就是不要句法也行
可你老人家早就有话等在那里,有现成的梯子,为什么不用?
我现在要说,反正也没到顶,有另一部可以爬得更高的梯子,为什么不用?
与大数据或ontology的关系,自然语言是跑不掉的,波粒二象性摆在那里。
其中可以帮到句法的部分,封装成中间件直接拿来用,早已不是禁忌。

我:
真地没看到显然的必要性,起码对于抽取情报,V 连上了实体 N做 O,连上了数量做 Buyu,想从中抽取啥都可以。要细做,也最多是把 Buyu 和 O 再加一条通道,说 Buyu 是限定 O 的。

白:
看看上面的应用题。要解题,不知道总分关系怎么解?不把句法关系标成一致,怎么获取总分关系?

我:
自然语言理解落地为自动解题,作为复杂问答系统的一个分支,这个倒是确实要求比一般情报抽取要高。那天与胡总聊到高考机器人项目,胡总说,数学应用题道理上应该电脑是大拿吧。可惜,电脑读不懂应用题。自然语言理解是拦路虎。如果读懂了题,转化成了公式,电脑当然当小菜来解题。

白:
NLU做应用题,@约翰 师兄三十几年前就在做了。

我:
做几何题,@严 也兴趣了很久。

白:
用填坑来统领句法关系,就不会那么为难了。把二元关系进行到底,把词例化进行到底。吴文俊团队实际上也做了部分几何题理解的工作。不过数学家们认为这是脏活累活,没有学术价值。所以浅尝辄止。

wang:
机器做数学应用题,是验证自然语言理解效果的一个非常好的测试。但是没有市场。
本人2000年是在做小学数学应用题求解系统,当时也是为了检验自然语言理解效果的。当时系统,本群的刘群老师,周明老师,詹卫东老师,董强老师都见过,只是这些老师是否想起16年前的事就不得而知了。
当时演示的应用题“一条河里有4条小船,5条大船,河里一共有几条船?”--对于求解有几条小船,几条大船,或者颠倒顺序,都可以演示OK。但是在北大詹卫东老师把“一条河”改成“一个河”,系统就出不来结果,量词啊,量词没细致考虑。
这都是过去多年的事了,只是这个系统没有市场,最后只能搁浅。落不了地就被历史淹没了。记得当时台湾的中研院许文廉老师也做数学应用题求解。对于几何求解系统前几年看过文献,好像已经非常成熟了。可能语义理解的信息不是复杂,还是封闭环境非歧义语义,也许相对容易,这个后期我关注就不是很多了。

白:
应用题这东西,换个内容就是上市公司的报表,谁还敢说分析上市公司的报表没有市场?

wang:
白老师,我那个时候抱着系统广泛寻求市场,却没有市场关爱我。

白:
关键是不要被技术的表现形式所迷惑,要看穿技术的实质,有没有用是由实质决定的,不是由眼下的表现形式决定的。定位问题了。天上不会掉下个产品经理,最初的产品经理就是你自己。这世界上能看穿技术实质的人少之又少,要把技术包装对方向,还要扶上马送一程,理解的人才有可能多那么一点点。现在的教育里用人工智能逐渐多起来,但是系统更像系统而不是老师。要想让系统像老师,必须有NLP。像伟哥这样可以躺在垄断场景上高枕无忧,犯不着关注其他场景的人毕竟也是少数。

wang:
遗憾当初没有遇到白老师啊!以白老师的眼力,就活了。
觉得李老师也是在找更宽的场景。
回到昨天的话题“这瓶酒他只喝了一杯”。我的想法是“这瓶酒”--不是补语
应该是个强调部分。类似英语“It is .... that”
这瓶“酒”和一杯(“酒”),这酒是同质的事物,后者必须省略。不同质的事物,必须交代。

白:
还有不涉及量词的总分关系:“我们班的同学就他混到了正部级”
“我们班的同学”相当于瓶中酒,“他”相当于杯中酒。
总分关系,“总”表现为话题主语,“分”表现为动词的直接成分,主语或宾语。
但是按照移位理论,移出来的话题主语的原位必须是某个论元,所以一定要找到这个坑。

wang:
这种情况可否理解介词短语省略了介词“在...中”,(among)
单独“总”这个论元好像对应不了谓词,比如这里“混”

白:
英语介词短语可以修饰名词 总直接对分,分对谓词
我早上核心观点就是这个

wang:
恩,同意白老师

我:
I drink a cup of tea
cup is O of drink and then tea is linked to cup??
this is not what has been practised for long
tea is O of drink and cup (or a_cup_of) is Mod of tea
these are standard treatments

白:
@wei 这个treatment我太同意了。
英语不能省略tea吧。
即使前面提及了tea
壶里的茶我只喝了一杯,英语怎么说?

我:
NMT: I only drank a cup of tea, how to say English?
壶呢?
原来神经做翻译的时候,怎么常见怎么来,拉下的词没处放,就不放,一笔抹去,眼不见为净。这倒是顺溜了,可不带这么糊弄吧。以前的 MT,无论 SMT 还是 RMT,大概不敢这么玩。

白:
有些口译人士倒是真的如此

刘:
SMT也一样的,经常丟词,还有论文专门研究SMT的丟词问题

白:
我在上交所的时候,就领教过知名公司的随团口译。我们提出的尖锐问题,一律抹平了翻,尖锐的词儿影都没有。有时我不得不自己用英语纠正一遍。

我:
那就是 RMT 不敢丢,其实也不是不敢,是丢不掉。除非生成程序有意设计了丢的条件。默认,实词是不能丢的。
“壶里的茶我只喝了一杯” 应该是:
as for the tea in the pot, I only drank one cup of it.
“it" refers to the "tea"

白:
it,相当于移走的tea的trace 在汉语是空范畴 在英语里总要有个真实代词。从伟哥的英译可以看出,他是真心不把“壶里的茶”当主语或宾语的。

我:
顺便一提,我觉得将来机器口译会有更好的用户体验
这是因为人的口译也就那么回事儿,糊弄的时候多,不合格的口译多,合格的在时间紧张的时候也老出乱子。这个观察在前些时候尝试用 NMT 翻译汉语到英语的时候就很清晰了。当时翻译到了英语以后,第一个震惊是,NND,神经真厉害,然后看到谷歌翻译下面有一个 speech 的按钮,就顺手一按,这一听,是第二个震惊,听上去比读居然更顺耳!读起来别扭或不合法的地方,给当今的语音合成一糊弄,居然那么自然,加上人的口译也是错误不断,相比之下,机器读出来里面有几个错就相当可以接受了。于是我用 iPhone 把那一段录音下来,放到了我的博客里面,让世人见识一下,机器口译不是梦。见:

谷歌NMT,见证奇迹的时刻

以前一直认为,口语到文字是第一层损耗,文字翻译是信息的第二层损耗,再从目标语文字到语音,是第三层损耗,损耗这样叠加下来,语音机器翻译是一个完全没谱的事儿。但实际上不是这么回事儿。
这第三层损耗,由于有人的陪绑和陪衬,不但不减分,反而加分。第一层的问题也基本解决了。当然前提是语音技术要神(经),语音合成要做得自然巧妙,而这些现在已经不是问题了。前几天讯飞合成一个广告词,居然声情并茂。

赵忠祥当年深陷录音门丑闻,声誉形象大减,那是错了时代。隔现在,赵大叔可以一口咬定那个录音是机器假冒的。

白:
啥时候声乐也能人工合成了,让帕瓦罗蒂唱我写的歌。

我:
白老师等着吧,不远了。

 


【我看好深度神经读唇术】

Nick:转载:谷歌人工智能唇读术完虐人类,仅凭5千小时电视节目!人类古老的技艺再次沦陷-搜狐科技!!!

南:
估计很快就有读心术了

Nick:
读心术和读唇术结合,细思恐极,星座是讲不下去了。。。

洪:
记得是 David G. Stork开创了这个领域。

葛:
根据脑电波可以读心

陈:
所有空间转换,如果有足够的训练数据,都可以尝试用深度学习拟合。

我:
读唇术真是神经的好应用啊 可以想见 它会重复语音的辉煌 而且显然远远超出专家。

陈:
才40%正确率

我:
聋哑人的读唇能力 我见识过。有一次招员 一位白人“龙女”应聘。她跟我面试交谈,眼睛使劲盯着我的嘴唇,要吃了人似的。虽然我英语带口音 不标准 而且说话急促,她居然大体都“看”懂了。麻烦的不是她听话和理解的能力,而是我受不了她说话。由于她很多年耳聋,结果她说话的腔调越来越偏离人类。虽然我勉强听得懂 但那是一种“深度神经”折磨。公司hr和主管都鼓励要她,hr 多少还有担心怕她说我们对残疾人有歧视。特别嘱咐 如果基本能力够格 交流沟通的缺陷不能作为不聘用的考量。我心里不情愿 怕以后工作每天受听力折磨 但还是勉强同意招。

结果 negotiate 待遇 她居然狮子大开口 比其他几位类似能力的 candidates 高出很多 而且摆出不愿意讨价还价的样子。她的这个态度帮助我摆脱了不要她可能带来的良心不安。

发现残疾人的专项能力的发展可以让人惊诧 她的读唇能力在我们普通人看来不可思议。面试她的六七位同事都反映 她的“听力”理解 完全可以胜任工作中所需要的沟通协调,说的能力也有 只是偏离人类发音的趋势会越来越严重 大概遵循的是“熵最大”(maxent)原理 孤立态混乱度无法逆转吧。

电脑有几乎无限的带标训练数据 这个场景非常类似于mt 这么好的天然学习场景 电脑超越龙女 是必然的吧。报道说 读唇专家不到百分之二十 电脑能力高出一倍 到百分之四十。不懂这都是哪门子专家,与我见到的龙女无法比。专家读播音员标准的说话,龙女读的是我们这些不同语言背景人的蹩脚英语。专家读唇之前已经熟悉这些播音员 等于受过历史数据的培训,龙女以前跟我们素不相识。

马:
以前有个电影叫联合舰队,是根据真人真事改编的,主演也是原型担任。一个盲人,一个聋哑人共同上学,盲人用嘴型重复老师说的话,聋哑人通过唇读获得信息

我:
残疾人的补偿替代功能常超越我们的想象
电脑只要有超大数据 也可以超越我们想象
看好这个方向。

马:
搜狗也刚做了一个唇读,识别率还蛮好的

 


【一日一parsing:他 / 喝了 / 三碗 / 汤】

bai:
“他汤喝了三碗”
问题:“三碗”指向“汤”还是“喝”还是自己的省略被修饰语?
问题:它和“他喝了三碗汤”在语义上等价吗?

马:
强调的内容不一样吧,前者强调喝了三碗的是汤不是别的,后者强调是三碗

我:
要挖出变式的 nuances,不如把表层结构包括词序的差异保存 等到落地的时候 由应用的需要来决定这种差异是不是有必要。脱离落地谈细微差别 及其抽象表达,容易莫衷一是 也容易丢了西瓜。

他喝了三碗汤
他喝了汤三碗
三碗汤他喝了
汤他喝了三碗
他汤喝了三碗
? 他三碗喝了汤
? 三碗他喝了汤

最后两个变式走在句法的边缘。

一个标签是 Mod,一个是 buyu,其余皆同,包括可分离动词合成词“喝汤”,表层结构的所有信息,包括词序,也都 accessible if needed。因为 parser 的内部 representation 通常是增量的、信息 enrich 的过程,除非是信息更新为了改正一个错误,过去的或历史的信息并不丢失。这也是我们以前说过的为什么休眠唤醒机制可以 work,因为被唤醒的原始状态并没有丢失,一个子串永远可以重来,二次 parsing。推向极端就是,整个一个句子都可以推倒重来,因为原始的 token string 并没丢弃。当然,实际上的休眠唤醒几乎永远是针对句子中的一个子树,再糟糕的 parser 也不至于全错需要重新来过。

Topic 再进一步转为 S 就完美了,语义中间件还有细致的工作可做。

最后这两句句法边缘的句子不是不可能出现,但比较罕见,对于毛毛虫边缘的毛刺部分的现象,合法非法中间的数据,如果不常见,那就拉倒,parser 出啥结果都无需太 care,反正有做不完的活计,不值当在它们身上花时间。
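为什么休眠唤醒可以 work?一段玩具代码把要点摆出来:原始 token 串始终保留,唤醒只是对某个局部关系二次改判。结构与标签均为示意:

```python
# 休眠唤醒示意:原始串不丢,关系标签增量式加入、可局部改判
class ParseState:
    def __init__(self, tokens):
        self.tokens = tokens          # 原始 token 串永不丢弃
        self.edges = {}               # 增量式记录的关系标签

    def label(self, i, j, tag):
        self.edges[(i, j)] = tag      # 先按标配打标签

    def wake_up(self, i, j, new_tag):
        """唤醒:对子串 [i, j] 的关系推倒重来,仅更新该局部。"""
        self.edges[(i, j)] = new_tag

p = ParseState(["他", "汤", "喝了", "三碗"])
p.label(2, 3, "buyu")       # 标配:数量结构作补语
p.wake_up(2, 3, "O")        # 落地需要宾语解读时,二次 parsing 局部改判
assert p.edges[(2, 3)] == "O"
assert p.tokens == ["他", "汤", "喝了", "三碗"]  # 表层信息仍 accessible
print("ok")
```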


【李白对话录之八:有语义落地直通车的parser才是核武器】

bai:
“你牺牲了的战友不能瞑目。”
“张三打得李四脸都肿了。”

我:
张三打李四
...打得他脸都肿了
...打得他手都肿了
...打得脸都肿了
...打得心直哆嗦
...打得好痛快
...打得鼻青脸肿
...打得天昏地暗

这些后续与第一句的不同组合,有些可以转成白老师的句式
s v o v 得 vp --> s v 得 s2 vp

bai:
填坑角度看不一样,前面topic填名词坑还是动词坑还是与坑无关。天昏地暗可以当一个形容词。拆开来看天和地都不能成为填“打”的坑的共享萝卜。
谓词结合的不同方式,只有显式地描述坑和萝卜才说得清

我:
对,不是都可以转,必须后一个s2是前一个 o 的时候,才可以转。如果 s2 回指第一个 s, 那就是另一组了。
“天昏地暗” 是成语形容词,黑箱子词,句法拆不开。
我用 vp 表达的不是 vp 是“谓语p” 的意思 包括 ap。以后得创造一个合适的标签 PredP
只剩下一个主语的坑待填。对于主语,谓语是ap 还是 vp,不重要。人家自己已经内部摆平了,不关主语事儿。

bai:
类似:(a/b)*(b/c)=a/c

我:
谁脸肿了?
李四。
谁手肿了?
不好说,但张三比李四可能更大,因为打人借助的工具往往是手。打人最常打的部位是脸,
而不是手。这个 minimal pair 真心诡异:

张三打李四打得他脸都肿了
张三打李四打得他手都肿了

也是中文文法很操蛋的鲜活例证。

bai:
没啥,常识都是软的,一碰到硬证据就怂。
你不说对方手上挨打,那就是打人者手肿,说了,那就是挨打者手肿。语言和常识推理已经融为一体。各种标配都是随时准备让位给例外的,例外不出山,标配称大王。
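"例外不出山,标配称大王",写成代码就是先不管三七二十一按标配填,例外条件命中再 override。其中的常识与触发条件均为玩具假设:

```python
# 标配 + 例外 override:谁的身体部位肿了(常识编码为玩具假设)
def whose_body_part_swollen(hitter, hittee, part, context=()):
    """标配:脸肿的是挨打者(打脸常见),手肿的是打人者(打人用手)。"""
    default = hittee if part == "脸" else hitter
    if "对方手上挨打" in context and part == "手":
        return hittee                 # 硬证据出场,例外 override 标配
    return default

# "张三打李四打得他脸都肿了" / "……手都肿了"
assert whose_body_part_swollen("张三", "李四", "脸") == "李四"
assert whose_body_part_swollen("张三", "李四", "手") == "张三"
assert whose_body_part_swollen("张三", "李四", "手",
                               context=("对方手上挨打",)) == "李四"
print("ok")
```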

我:
白老师的段子是张口就来啊。这个说段子的功力很神。

bai:
在填坑时,先不管三七二十一按标配填,再给例外一个权利,可以override标配。
试试
“你渴了饮水机里有水可以喝。”
缩合条件。

我:
马上出门 回来再试
喝水不就是 “有 o 可以 vt”?蛮常见的。
有书可读
有澡可洗

bai:
但填坑结构是跨前后件的。
啥句法标签呢?

我:
补足语,逻辑 vo 单标。graph 也不管它怎么绕了,看上去合理就行。反正用的时候都是子树匹配,落地甚至可以是 binary 关系组的匹配。原则上,任何 node 可与 任何 node  发生暧昧,不讲门当户对。
一张分析全图(the entire tree)的元逻辑性(meta logicality)可以不管它,只要个体的 dependency 有说法就行了。英语也是:“have a book to read”
句法标签是 宾语 ➕ 宾补,后加逻辑vo
到了逻辑语义层 或语用层、抽取层,句法的层次理论和原则不算数了。

bai:
“他有三个保镖保护着。”
句法上其实有条件带点笼统性地把坑共享的标配拿出来。

我:
有 np vt,vt 的标配是 np 做宾语(o),若要 s 做逻辑宾就需要外力。

bai:
这房间有三扇窗户可以通风采光。连逻辑宾都不是,最多算间接逻辑宾

我:
我的理解是逻辑主语。两个主语都说得通,全部与部分。

bai:
“这房间”对于“通风采光”来说是填什么坑呢?

我:
主语啊。窗户也是主语,不过是整体和细节的区分而已:
窗户通风了,房间自然通风。

bai:
这套音响有七个音箱和两个低音炮可以营造出环绕立体声效果。

我:
这样不断营造语用现场,其实导致的不是语言学关系的矛盾,而是语义 interpretation 的挑战。
语言学关系的标签,本性是弹性的,哪怕标签取名不一定合适或容易误导(譬如主语误导为施事,其实未必)。 主语也好、宾语也好,都是万能的筐,什么 interpretation 都可能。话题(Topic)就更甭提了。
常识来说 立体声效果的营造,应该是立体装置的总体,这些装置的个体达不成这个效果。这是知识内部的争论,与语言表达背后的结构关系不大。知识内部也可 argue 立体装置中某个装置是决定性的,那个装置效果出来了,立体效果就基本出来了。
这是两套系统,两个层面。 结构关系,与我党对历史事件的原则一致,宜粗不宜细,留下语义解释或争论的空间。

bai:
那就干脆粗到不分主宾语,只计数目,不计语序方向,更不计subcat的相谐,装到框里再说。在遇到多种填坑戴帽可能性的时候,再把这些法宝一个一个祭出来。吃瓜打酱油的捎带着做细了。不是为了做细而做细,是为了增加确定性而做细。这就有意思了,比如量词搭配。看起来是在细化修饰关系,可顺带把逻辑宾语搞定了,纯粹是搂草打兔子。

我:
不是不可。实践中,往往在句法关系或标签的 representation 的极端做法之间,做个折衷。更多是为了方便。说到底,一切句法语义计算的表达,都是人自己玩,方便原则不过是让人玩的时候,少一点别扭而不是求一个逻辑完备性。representation 作为语言理解的输出,本质是人的逻辑玩偶。爱怎样打扮都可以。这个本性是所谓强人工智能的克星。

bai:
我还不那么赖皮……

我:
强ai 更赖皮

bai:
刚性的局部可以顺带给柔性的全局注入一小丢丢刚性,但是出发点就没指望全局会百分之百刚性。

我:
连语义的终极表达都一头雾水,说什么强智纯属扯淡。

bai:
强AI我反对,语义表示太过任意我也不赞成。总要有个松紧带勒着。

我:
system internal 是做现场的人的现实。很多东西就是有一个模模糊糊大的原则,或有相当弹性的松紧带。下面呢,就是一个系统内部的协调(system internal coordination)。在人叫自圆其说,在机器就是内恰。

bai:
二分法是要的,一部分role assignment,一部分symbol grounding。前者是深度NLP的必修课,后者跟现场关系更大些。
过松的松紧带,红利已经吃得差不多了。新兴的松紧带,不紧点就没有投资价值。

我:
投资价值与宣传价值还有一些不同。投资价值对松紧不会那么敏感,除非是投资与宣传(marketing)紧密相关的时代,譬如当下ai泡沫的时代,或当年克林顿的时代。
投资价值的落脚点还是语义落地(semantic grounding)。至于怎么落的地,松啊、紧啊,不过是给一个宣传的说法。昨天我还说,syntaxnet 和很多 dl 都是开源的,要是好落地为产品,还不是蜂拥而上。现实是,不好落地。
所谓核武器是这样一个工具,它有一个明确的落地途径,至少从方法学上。system internal 的落地管道,被反复验证的,余下的主要是领域打磨和调试。

bai:
现在很多公司是万事俱备,就差核武器

我:
syntaxnet 至少目前状态没有这个。虽然也是 deep parsing,但并不是所有的 deep parsing 都是核武器,要看是谁家的、怎样的 deep parser 才有核武器的威力。

bai:
你没看上眼的,我们可以不用讨论

我:
看上眼的dl,是有海量带标数据的(最好是自然带标数据,无需组织人去标注),端对端绕过显性结构的,里面满肚子谁也猜不透的隐藏层黑箱子的机器,譬如神经机器翻译( nmt)。

bai:
带标看标在什么地方。标在字典里OK,那算数据资源建设。标在语料里,即便假定标注体系在语言学上是正确的,还要考虑做不做得起呢,何况语言学上错误的标注体系,更让人怀疑有没有价值和意义去如此大动干戈了。

我: 回家了,可以测试:“你渴了饮水机里有水可以喝。”

逻辑的坑都没到位。句法的框架不能算离谱。就是这样。至于叫补足语还是叫 Next,也无大关系,反正后续语义中间件需要这么一个桥梁做细活。“有 NP V” 的句式以前调试过,比想象的复杂,一直没搞定,就放置一边了。

bai:
“有电话可以打”“有空调可以吹”“有大床可以睡”
不必然是逻辑主语,不必然是逻辑宾语,甚至不必然是必选坑。两个谓词中间被NP穿插的,朱先生书里叫“连谓结构”。类似伟哥的next。

我:哈。

bai:
大床居然是S

我:
目前词典没有收可分离合成词 “睡床” 或 “睡大床”。 默认做主语 也是可以的。循 “有 什么什么 发生了” 的句式, 何况 “睡” 做不及物动词的时候更多。不是说分析对了,而是说错得有迹可循。汉语“有”在句首的时候,常常是 dummy,如果 “有” 前有个 NP,那么后面的 NP 做主语的机会就相应减少了。
白老师曰:  大床居然是 S:

有两个哥们,一个叫大床,一个叫小床。大床爱睡懒觉,小床爱撒酒疯。有大床睡,就有小床喝,一刻不得安宁 .... 【谁接龙?】

bai:
白老师还曰,任何成分皆可为专名。

我: =:)
吾谁与归?

bai:
时不我待

我:
想起文革时期的莫须有群众举报,结论是:事出有因,查无实据。然后是 有则改之无则加勉 就是教育被污名者自认倒霉,没的冤枉。
说实心话,昨天白老师说很多公司是,万事俱备,只欠东风。时不我待,我手心的疑似东风如何才能刮起?

bai:
专名是一种层次纠缠。
事出有因,查无实据;有则改之 无则加勉。这是那年代的套话
方言,成了小说里的人名;文章,成了现实中的人名。
找谁讲理去。
只能用“结构强制”,从外部施加影响,再辅以大数据。

我:
说事出有因 是文过饰非。
不过 NMT 测试的结果,常常连“事出有因”都做不到。一个长句只有一个字不同,而且这个不同的字还是同质的,NMT 翻译结果却有很大的不同。这个现象,非 DL 专家无法解释和理解。

bai:
所以规则层面的、用可理解的特征直接表示的知识如何混入大数据直接参与学习甚至“编译”,非常重要。

我:
所谓符号逻辑派 就是错了 也错得事出有因 debug 也知道症结所在

bai:
符号逻辑派缺乏的是柔性,不知道认怂,一错到底。

我:
yeh 见过这种人 还不少

【相关】

【李白对话录之九:语义破格的出口】

【李白对话录之七:NLP 的 Components 及其关系】

【李白对话录之六:如何学习和处置“打了一拳”】

【李白对话录之五:你波你的波,我粒我的粒】

【李白对话录之四:RNN 与语言学算法】

【李白对话录之三:从“把手”谈起】

【李白隔空对话录之二:关于词类活用】

《李白对话录:关于纯语义系统》


【李白对话录之九:语义破格的出口】

白:
“国内大把的钱想出逃”
钱不会“想”。但是“出逃”只有一个坑,除了“钱”没有其他候选。这种情况下句法优先,语义的不匹配,到语用(pragmatics)层面找辙。一个语用出口是拟人、人格化,把钱人格化。另一个语用出口是延展使动用法,钱的主人“想”使钱出逃。

我:
【parsing 图 1117a】
出口的问题也许不必存在。句法搞定的东西 默认是 语义不出场 语用不解释,除非落地需要这种解释。落地通常不需要。譬如 mt,一个语言的语义不谐而产生的转义通常可以平移到目标语,哪怕是八杆子打不着的语种之间。譬如乔姆斯基的 green ideas,直译成汉语,同样可以反映乔老爷想 make 的 point:句法确定的时候 可以排除语义。

白:
聚焦句法的人看到的是half full,聚焦全局的人看到的是half empty。

我: 哈
这里谈的是默认。默认做法是、一直是,语义破格是默认许可的,句法破格才需要语义出场。 因为自然语言中,句法确定场合下 语义破格太常见了,常见到见怪不怪。无需解释。而受体在理解过程中 常常各有各的理解 根据这个人的教育和素养 而不是语言学 后者个体差异不大。

白:
默认的主体是谁
分析器么?分析器我同意。但默认的主体不必然是分析器。

我:
换句话说,如语义破格一定要给一个语用出口的话,很可能莫衷一是,标准很难制定。譬如乔老爷的破格的 green ideas,我们语言学家的理解 与普罗的理解 在语用层面相差太大。但是在句法层面,精英与普罗是一致的,虽然普罗可能不知道主谓宾定等术语。

白:
钱想出逃,在应用场景中是有意义的,不管精英普罗,并没有大的分歧

我:
洗钱 的意思?

白:
不一定,也有正常的恐慌.包括本地赚了人民币觉得不安全的,以及外资觉得不想继续玩下去的。

我:
这些破格带来的附加的意义,是听众体会出来的。每个人的体会即便大体方向一致,也很多差异。白老师的理解,比我的理解要丰富,比普罗更不同。很难形式化。即便能形式化 也很危险,因为有强加于人 限制其他可能的缺陷。

白:
这不重要,重要的是面向大众和精英的预警都要 take it into account。

我:
也许只要指出某个关节 语义破格 就可以了,至于这个破格意味什么 让人各自琢磨。其实破格的事儿 指出不指出 大家都心知肚明。

白:
伟哥说的是模块视角,不是系统或服务视角。换到服务视角,即便面向普罗,但是定位也可以是让普罗觉得专业,精英觉得不外行。一个带有修辞性语义破格的表述只有把附加意义掰开揉碎了才能向后传播,跟其他信息滚在一起发酵。在NLP同行间心知肚明的事,要想在知识情报各个piece之间引发chemistry,必须还原为掰开揉碎的形态。形成看上去专业的影响链、作用链。

我:
语义计算提供多种可能 在语用中发酵 是个好主意 ,可能提升人工智能的深度。

白:
所以,一个有追求的服务,不会迁就普罗的非专业理解,而是想办法把专业的理解用普罗便于接受的形式展现出来。

我:
不过 也有可能是潘多拉的盒子

白:
不喜欢不买便是

我:
发酵到不可收拾 不收敛,语义破格的确是 nondeterministic,本性就是发散。其本质是诉诸的人类的想象力。

白:
有些破格已经是家常便饭了
像这句家常便饭就是。

我:
“家常便饭”的破格 通常固化到词典里面去了 。绑架以后 就把破格合法化了 可以不算是破格了。只是词源上 可以看到 两个语义 对于同一个词。系统是看成两个个体的 尽管实际操作我们常常绕过wsd,不做区分 但是如果需要区分 词典是给出了两条路径的。

白:
但和本意还是两个义项
“没怎么特意准备,就是家常便饭,大家随意吃哈。”
家常便饭遇到吃,和难过遇到小河,是一个性质。

我:
感觉正好反着
家常便饭遇到吃 是常态 默认;就好比 难过 遇到 人【human】。
家常便饭甚至谁也遇不到,也还是默认为本义 【food】。
“难过” 稍微模糊点 谁是本义 谁是转义 可以 argue,但通常按照 hidden ambiguity 的原则,词法大于句法,“难过”因此本义是 sad

白:
计算机只管一个是本义、另一个是转义,其他不care

我:
转义带有强烈的句法组合色彩 ,是 difficult to cross。
当然 这一切都听人的安排,遵从便利原则。
语义计算 没有人工 便没有语义,没有语义 就谈不上计算。
说到底 人的语义 design 以及系统内部的协调的考量,是语义计算的出发点 数据是语义计算的营养基地。

白:
如果说到相似性,就是固定组合里面的词素和外面的词素产生了搭配趋势,改变了原来的结合路径。

我: 对。
“这条河很难过。”
lexical entry “难过”里面的词素“过”与外面句法的词素“河”发生了 VO 的关系纠缠。
“这孩子很难过。”
就没有纠缠,桥是桥路是路。

白:
本义的家常便饭,和外面的“吃”有纠缠,转义的没有纠缠;本义的难过和外面的“小河”有纠缠,转义的没有。本义的不一定是概率最高的,譬如本义的“难-过”就可能比不上转义的“难过”概率高。

我:
所以说,要 遵从便利原则, 系统内部协调。本义、转义的区分不重要,重要的是内部协调:哪个义项最方便作为标配。一旦作为标配,就不必考虑纠缠的条件了。只有不是标配的选项 才需要条件,或者需要唤醒。一般而言是概率高的做标配。或者条件混沌、难搞定的那个做标配。然后让条件清晰的去 override 标配,此所谓 system internal coordination。遵循 longest principle,具有 hidden ambiguity  的“难过”,词典标配可以是 sad

白:
选最高概率的作为标配是情理之中,但标配如果恰好是本义,就不需要纠缠去唤醒本义了。“把国民经济搞上去”

我:
最高概率原则保证的是,万一系统没有时间充分开发,标配至少保证了从 bag of word 的传统模型上看,数据质量最优。我们实践中也遇到过决定不采用概率最大的作为标配,这是因为概率大的那个选项,上下文条件很清晰,规则容易搞定。而概率小的选项却条件模糊,所以索性就扔进词典做了标配。所有这些考量都是 system internal,与语言学或词源学上的本义、转义没有必然的对应联系。
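标配义项加上条件清晰的 override,机制示意如下。义项划分与触发词表均为假设,仅对应上文"难过"的讨论:

```python
# 标配义项 + 条件清晰的规则 override(义项与触发词表为玩具假设)
ENTRIES = {
    "难过": {"default": "sad",                        # 词法大于句法,标配取 sad
            "override": {"cross": {"河", "桥", "路"}}},  # 外部词素纠缠 → 难-过
}

def disambiguate(word, context_nouns):
    """先给标配;清晰的上下文条件命中时 override 标配。"""
    entry = ENTRIES[word]
    for sense, triggers in entry["override"].items():
        if triggers & set(context_nouns):
            return sense
    return entry["default"]

assert disambiguate("难过", ["孩子"]) == "sad"     # 这孩子很难过
assert disambiguate("难过", ["河"]) == "cross"     # 这条河很难过
print("ok")
```

哪个义项当标配、哪个走 override,正如上文所说,是 system internal 的便利考量,与词源上的本义、转义没有必然对应。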

白:
吃豆腐,标配是本义,搭配在本义内部纠缠,遇到sex上下文时进入转义。不一定显性,隐形的sex也在内。比如,“张三的豆腐你也敢吃?” 当然,张三卖的豆腐有食品安全问题时,也可以这么问。后者更加specific,是“例外的例外”

我:
例外之例外不得超过三层,这是我的原则,甚至不超过两层。虽然人使劲想,可以一直想到更精巧的例外之例外来。系统不要被带到沟里去。曾经由着性子这么干过,一路追下去,自以为得计。在某个时间的点,一切都 ok,但除非封装为黑箱,只要系统还在继续开发中,那种追求例外之例外的开发路线,结果是捉襟见肘,不堪维护。鲁棒的系统不允许规则具有嵌套层次的依赖性。【科研笔记:系统不能太精巧,正如人不能太聪明】

白:
这话放在比特币上,一堆人会跟你急。比特币的设计实在是太精巧了。

我:
超人例外。电脑例外。机器学习例外。
肉身凡胎的人做自然语言系统,stay simple,stay foolish 怎么强调也不过分。

白:
“人家都出轨了,你为啥还没上轨”这标题有意思

我:
机器学习例外是因为反正就是个黑箱子,里面有多少参数,调控成了怎样都是一锅粥,在 retraining 之前,这就是一锤子买卖,好坏就是它,不跟人类讲理。

白:
无规则的系统例外

我:
无 symbolic rule 的系统例外。规则的广义似乎也包括黑箱子系统。严格说该是,无可以让人干预的 symbolic rule 系统例外,如果是 symbolic,但是人不得干预,那也无妨。跟封装等价。

白:
完全词例化的系统也是无symbolic rule的系统吗?

我:
在我这里是。每一条都可以做符号逻辑的解释,都遵循某种语言学的思路。

白:
人只能干预词典

我:
【parsing 图 1117b】
句法是超然的,处变不惊。只有语义甚至修辞,才需要把 出轨 与 上轨 联系起来,感受其中的“深意”。interpretation 是围绕人跳舞的,譬如我们做 sentiment,把大选舆情挖掘出来,至于如何解读,各人面对挖掘出来的同样的情报,会各自不同。很多人想让机器也做这个解读,基本是死路。上帝的归上帝,凯撒的归凯撒。剥夺人的解读机会,简直蛮不讲理,而且也注定无益。

白:
在证券领域,就是智能投研和智能投顾的关系。

我:
解读的下一步是决策。机器不能也不该做决策。

白:
智能投顾也可以是机器人,但根据一份智能投研报告,不同的智能投顾机器人可以做出不同的投资决策。机器真做决策。但是决策机器人和语义分析机器人之间有防火墙。在投资领域,机器比人强。人过于贪婪和不淡定。人处理信息特别是把握瞬间机会的能力不如机器。做对冲的不利用机器是不可想象的。

我:
这个我信。
甚至银行的那些投资顾问,遇到过不止一个了,老是忽悠我们每年定期去免费咨询他们,感觉他们的平均水平低于一台机器。按照他们几乎千篇一律的所谓投资建议去投资,不会比遵循某个设计良好的系统的建议,更有好处。这些顾问应该被机器把饭碗砸了,省得误导人。
【相关】

从 colorless green ideas sleep furiously 说开去

《泥沙龙笔记:parsing 的休眠反悔机制》

李白对话录之八:有语义落地直通车的parser才是核武器

【李白对话录之七:NLP 的 Components 及其关系】

【李白对话录之六:如何学习和处置“打了一拳”】

【李白对话录之五:你波你的波,我粒我的粒】

【李白对话录之四:RNN 与语言学算法】

【李白对话录之三:从“把手”谈起】

【李白隔空对话录之二:关于词类活用】

《李白对话录:关于纯语义系统》


Small talk with Daughter on US Election

just had a small talk with Tanya on US election, she was super angry and there was a big demonstration against Trump in her school too

T:
I don't want him to win
I don't want him to do well
Or else another racist gets elected

Me:
neither did I
If he does very badly, he will be impeached;
or at least he will not be reelected in 4 years.
But now that he is, we can keep an open mind.
There is an element of sentiment he is representing: so-called silent majority, that is why most polls were wrong.

By the way, many have praised my social media analysis just before the election, mine was way better than all the popular polls such as CNN.  This is not by accident, this is power of big data and high tech in the information age:

Final Update of Social Media Sentiment Statistics Before Election

with deep NLP and social media, we can pick up sentiments in a way more reliable and statistical than the traditional polls, which usually only call 500 to 1,000 people for opinions, hoping they represent 200 million voters. My mining and analysis are based on millions and millions of data points. So in future, we have to utilize and bring automatic NLP into things like this as one important indicator of insights and public opinions and sentiments.


T:
daddy
you're amazing
Your technology is amazing

Me:
I got lots of compliments for that, but yours mean the most to me.

What happened in the election as I had been tracking using our NLP sentiment tool was:

1. Clinton was clearly leading in the period after the recording scandal of Trump and before the FBI started reopening Clinton's email case: Big data mining shows clear social rating decline of Trump last month.

2. Clinton has always been leading in Spanish speaking communities and media, but that did not seem to be sufficient to help revert the case:  Trump sucks in social media big data in Spanish.

3. The event of FBI re-opening the email investigation gave Clinton the most damage: Trump's scandal was cooling down and the attention was all drawn to Clinton's email case so that the sentiment has a sharp drop for Clinton (【社煤挖掘:大数据告诉我们,希拉里选情告急】)

4. When FBI finally reissued a statement that there was no evidence to charge Clinton only 2 days before the election, time was too short to remedy the damage FBI did in their first event of reopening the case: my big data tracking found that there was some help but not as significant (【大数据跟踪美大选每日更新,希拉里成功反击,拉川普下水】).

5. Then just before the election, I did a final update of the big data sentiment tracking for the last 24 hours versus last 3 months, and found that Trump had a clear leading status in public opinion and sentiments, so I decided to let the world know it although at the point most everyone believed that Clinton was almost sure to win.

T:
Oh my god dad your machine is the smartest tracker on the market
Dad your system is genius
This is exactly what media needs
You should start your own company
This is amazing
I think this would be the planet's smartest machine

Me:
I do not disagree, :=)

It was in fact a tight competition, and with good skills things could have turned out differently. In terms of popularity votes, they are too close to be statistically different, so anything at the right timing could have changed the result.

On retrospect, FBI did a terrible thing to mess up with the election:
they reopened a case which they did not know the results
just 10 days before the election which made a huge difference.
On the other hand, the recording scandal was released too early
so that although it hurt Trump severely at the time, yet it allowed FBI to revert the attention to Clinton

In future, there should be a strict law disallowing a government agency
which is neutral politically by nature to mess up with an election within a time frame, so Trump's winning the case to my mind has 80%+ credit from the FBI events.
What a shame

 

[Related]

【社煤挖掘:川普的葛底斯堡演讲使支持率飙升了吗?】

【社煤挖掘:为什么要选ta而不是ta做总统?】

Big data mining shows clear social rating decline of Trump last month

Clinton, 5 years ago. How time flies …

【社媒挖掘:川大叔喜大妈谁长出了总统样?】

【川普和希拉里的幽默竞赛】

【大数据舆情挖掘:希拉里川普最近一个月的形象消长】

欧阳峰:论保守派该投票克林顿

【立委科普:自动民调】

【关于舆情挖掘】

《朝华午拾》总目录

《朝华午拾 - 水牛风云》

朝华午拾 - 我的世界语国(五): 水牛风云

作者:立委

纽约州水牛城是我来美奋斗挣扎了八年的地方,我的世界语国也经历了许多的风雨起伏。

我是在美国网络热潮中来到这家创业公司的(见《朝华午拾-创业之路》)。在世纪末网络泡沫破灭之前,我协助老板获得了1000万美元的风险投资。钱一下多得好像永远用不完似的。老板决定停薪留职,不再承担她的大学教授责任,来到公司担任全职CEO。开始的 executives 就老板和我两个人。我们踌躇满志,准备大干一场,开发自然语言技术支持的新一代问答系统。

跟钱同时进来的是压力。如果我们无法快速组建团队,老板对投资人就无法交代。扩员的压力很大,我和老板漫天做招工广告,每当发现一个合适对象,并成功招纳,就相互祝贺。如果有一周一个也没有招到,就有挫折感。

当时的气氛跟中国大跃进类似,理性被压抑,冒进被称颂。投资人来视察时,得知我们新的办公楼还在接洽,旧的办公室太过拥挤,难以适应迅速扩张的需求,竟然提议两班倒,“人停机不停”。我们明知科研和开发不是靠“革命热情”和人海战术就可以飞跃的,但是在当时的那种气氛下,也没有办法跟投资人说清这个道理。作为经理,我只好因势利导,每个周末以身作则,来公司加班,并鼓励员工至少周末加班一天。平时每天晚上六点半左右我出去买各式快餐,好像大跃进吃公共食堂的样子,为届时还在办公室的员工提供免费晚餐。

董事会要求我们尽快从当时的五六个员工至少扩充到50-60人的规模。我作为第一位副总,被赋予为我的研究开发组招工扩员20-30人的任务。我的组需要三类人才,一是研究科学家,要懂机器学习算法,跟踪最新学术动态,二是软件工程师,能够开发和优化 real life 软件模块,三是语言学家,可以编制和维护机器语法和词典等软件资源。前两类人比较紧缺,语言学家相对好办。我先从加拿大招来两名语言学家,又在德国招来一名,加上一名中国籍女博士,组建了一支语言学博士队伍。董事会还嫌我们扩张速度不够,不能符合他们的大跃进要求。我们于是实施员工引荐的奖励办法,非经理的员工推荐一人,一旦受聘,可得一千美元奖金。作为经理,内举不避亲,我着手在我的两个社会圈子,华人和世界语朋友中,继续扩招。华人圈子主要是中国的留学生和新移民,前后招进10名。其中多是先跟我做暑假实习生(interns),然后留下来成为正式员工。他们多还没有毕业,也没有北美工作经验,需要留在水牛城继续学业,能够来到公司一边工作,一边完成学位对他们是绝好的选择(水牛城工作机会很有限,我们公司被认为是比较理想的所在)。老板对中国学生印象很好,认为他们比印度同学更加踏实能干,所以对我偏向在华人留学生中招员表示支持。

世界语圈子里,我跟加拿大世界语协会主席P先生认识多年,他的博士已经念了七年多了,因为毕业即失业的压力,一直在系里耗着不毕业。我于是去信请他来面试,邀请他加盟我的研究开发组。他询问待遇如何,我告诉他如果被录用,比他现在的 sessional instructor 的工资高出两三倍,他自然喜出望外。拿到 offer 以后,他和他的世界语太太欢天喜地,开车从西海岸沿一号公路横穿加拿大,经多伦多一路开车到水牛城报到。由于他的到来,水牛城成为世界语俱乐部的新据点,来自邻城多伦多和 Rochester 的世界语朋友,也纷纷来他的公寓聚会,我的世界语圈子也随之扩大了。

早在温哥华念博士时期,我就认识了P先生。其实他可以算我的师兄,在我进入语言学系前他就在我系读博士,到我去的时候,他转到邻城的另一所大学继续他漫长的博士生涯。我们在地区性的语言学会议和世界语会议上都见过面,他给我的印象是比较典型(stereotyped)的语言学家,有点迂腐,善于做田野工作,detail-oriented,懂得很多门外语,适合当秘书或编辑。我觉得经过培训,他可以胜任机器词典语法的编制维护任务。我离开温哥华前,和他也有一些个人交往,一次开北美语言学会的时候,曾在他家留宿。还有一次开北美西北地区世界语会议以后,我搭乘他的车回温哥华。一路上,他和太太两个兴奋异常,用世界语高谈阔论,突然发现汽车没油了。半夜三更,我们被困在高速公路旁边。当时我们是学生,为省钱都没有加入汽车协会(CAA),所以也无法向CAA求援。P先生后来硬是步行到下一个高速出口边的汽油站,请求好心人帮忙送来一管汽油,我们才得以平安回家。

P先生是在欧洲参加世界语大会时认识太太的。太太是当地的世界语积极分子,跟前夫离异后带着女儿生活。她性格爽朗,滔滔不绝,说话爱夸张,表情丰富。谈起她和P的相识相爱,总是眉飞色舞。她把丈夫看得很高很大,现在丈夫博士还没有答辩就找到了工作,经济一下子翻身了,她的喜悦更是溢于言表。为了表达对我举荐和接纳的感激,她自己绘画,制作一批手工艺卡片送给我的太太,还赠送我一本柴门霍夫传记,扉页写满了对我的溢美之词。

P先生来后,工作按部就班,倒也兢兢业业,但跟现有的几位语言学家相比,也并不突出。我们只做英语,他的外语专长也无法表现。他也不大懂公司文化中的个人表现和隐形的加班要求,总是按时上下班。也难怪,他和太太有很多世界语协会的杂务,编辑加拿大世界语协会通讯,发展会员等等。看得出来,他们满意现状,很 enjoy 目前的生活。我心内认同这样的劳逸结合的生活方式,但自己不得不过另一种生活:每天天很晚才回家,周末总是加班,难得有时间陪孩子和太太。

有一次跟P聊天,我提到想把同样是世界语者的资深D博士招来,可是联系不上,P先生说可以在世界语朋友中查询他的下落。过了一两个月,他兴冲冲告诉我联络上了,说D博士目前在一家社区学院担任临时讲师。我马上打电话给他,一拍即合,邀请他前来面试。D博士曾经是我的“上司”(见《朝华午拾-我的世界语国(四): 欧洲之行》):当年在荷兰公司以世界语为媒介语的机器翻译项目DLT中,他负责指导和审查我承包的汉语形式语法。我想,作为资深语言学博士,又跟我一样实际从事过多年的机器翻译工作,他也许可以帮助我指导这个越来越大的团队。

面试并不顺利。D博士年岁较大,反应有点迟钝,我也感觉有些失望,至少他不象是个 group leader 的人才。不过,心里想,他也许经历的挫折较多,至少经验是有的,作为一个 team member,想必没有问题。老板跟我说,D很老实,但是不象是个能干的人,不主张招。不过,如果我觉得能用上,还是由我定。我咬咬牙,还是招了,但没有给资深人士待遇,年薪跟其他语言学家拉平。尽管如此,对于D博士,这无疑是自荷兰公司工作后的多年漂流生涯以来的最好工作。他和他的世界语太太也是欢天喜地来到水牛城,而且来了不久就买了房子,俨然要在水牛城扎根。后来得知,D博士的母亲听到儿子得到一份不错的工作的喜讯,决定提前把家产划给他,资助他在房价便宜的水牛城置办房产。

说到这里,有必要介绍一下语言学家供过于求的北美劳务市场。在西方,有很多冷门专业不断制造着社会不需要的人才,这些专业的大部分博士毕业即失业。冷门专业包括我们从小迷信其威力的数学和物理,我主修的语言学也是其中之一。这些专业的博士生除了谋求教授职务,在社会上很少有需要其专门技能的岗位。可是教授职位毕竟很有限,往往一个职位出来,就有上百个博士和博士后申请,对于不是一流大学的博士,求教职简直比登天还难。拿语言学来说,就我所知,甚至MIT的博士,也常常需要经过两三轮清贫的博士后中转(博士后是真正的学术“苦力”,一年两万左右薪水,经济上比餐馆打工强不了多少),运气好的最后可能找到一个二流或三流大学的教职。

这就是我所学的可怜的语言学的现实,好在我的研究方向跟电脑有关,运气稍好。可是很多我的同学终身潦倒落魄。少数头脑灵活的丢掉专业转行去干别的,更多的人不能适应社会的需要,只好在大学做临时讲师(sessional instructor,僧多粥少,这种工资很低的临时工也很难找),或者接点翻译或编辑的零活,勉强糊口。别小瞧这些语言学博士,他们尽管没有多少创造性,棱角也早已磨圆了,可个个都是饱学之士,多数都会五六种外语,会十几种外语的也不在少数。我的世界语朋友P先生和D博士就是他们的代表。这些落魄而清高的语言学博士,囊中羞涩,在北美很难得到女士的垂青。可是在前共产主义的东欧,借助世界语的特殊场合,却可能喜结良缘。D博士在荷兰公司的项目完结以后,辗转东欧各国,教授了几年英语,同时投身当地世界语运动。回美国的时候,跟P先生一样,带回来一个世界语者太太。

我们在语言学家中大量招工的行动引起了媒体的关注。当时,我们的几个竞争对手包括AnswerLogic.com 也一样到语言学家中招工,形成了一道社会风景。我们这些活动经过《华尔街日报》题为”No Longer Just Eggheads, Linguists Leap to the Net”的采访报道后,在社会上和语言学界引起强烈反响(甚至中文报纸《世界日报》也编译了华尔街日报的报道),一时间似乎为语言学家开辟了一条新路。作为参与者,我为自己能够帮助同行创造就业机会感到欣慰和自豪。在公司内部,尽管由于劳务市场的供需影响,语言学家作为 knowledge engineers,比同等学历的软件工程师工资要低,我还是尽量为他们谋求高于市场价格的待遇。一时间,公司仿佛成为语言学家的天堂。

然而,好景不长。D博士差不多是我们疯狂扩招的最后一个了。世纪末,网络泡沫终于破灭,Nasdaq 科技股市场一落千丈,投资人变得异常挑剔和谨慎。AnswerLogic 拿钱比我们早,烧得比我们快,轰轰烈烈闹腾了不到两年,终于随着Nasdaq的坍台而销声匿迹。还有一家搞自然语言有相当年头的公司,日本投资人决定撤资,拍卖股权,公司负责人找到我们,认为我们两家的技术有很大的互补性,希望我们贱价购买,并接纳他们的技术骨干:负责人实在不忍心对技术骨干裁员。我们的另一个对手,曾经拿到三千万巨额投资,集中了世界一流科学家的 Whizbang! 也遭遇滑铁卢,投资人在烧了一千多万美元以后,决定撤资,撕毁合同,放血大拍卖:他们的所有技术,包括源程序和说明,everything must go! 价格已经降到一两百万美元,让我们不得不动心。可是我们泥菩萨过河,自身难保,没有能力和精力消化这些技术,只好放弃这个“deal of the century”。股市垮台不到一年,几十家在我的 watch-list 中的对手,只剩下两三家,跟我们一样勉强维持,惨淡经营,朝不保夕。

我们当时还剩下约五百万投资,加上不断增长的政府项目的进项,还没有到山穷水尽。当然,投资人也可以中途撤资,但他们最终还是决定继续支持下去。不过,董事会决定重金引进职业经理人,我的老板只好屈居第二。新的CEO精明强干,哈佛MBA出身,此前领导过三家高科技创业公司,并成功转手出售给大公司,有不错的 track record。他的担子很重,在 high-tech 公司纷纷关张的恶劣形势下,必须带领公司闯出新路,度过难关,伺机发展。当时,问答系统的先行者 AskJeeves 盛极而衰,股票一跌千丈,董事会因此认定我们一直在开发的问答系统没有市场,指令转向开发新产品。

CEO上任以后,连续两周听我们详细介绍技术细节,比较我们的技术跟可能的竞争对手的异同,开始咨询一些外面的高参,探询新产品的路子。同时,他不动声色地考虑如何重组(re-org)公司,减少开支,轻装前进。对于高科技公司,最大的开支是人力资源,re-org 就意味着裁员。他随身总带着一个花名册,上面标有每个员工的职务和工资,他不时在上面写写划划,有的打叉,有的标上问号。最先打叉的就有D博士。这也不怪,D博士来了不久,就犯了几个低级错误,闹了不少笑话,他老朽无能的评价很快就反馈上来了。我很为难,但是知道难以保护他,他确实不上手。我至今也不明白,一个名校博士,有六年相关的实际工作经验,怎么这样不上手。他也没有到老糊涂的年岁呀。

D博士自己也有所觉察,有危机感。他有点木讷,不善于迎合其他主管,觉得我是他的唯一的救命稻草,于是请我和全家做客,P先生夫妇作陪,联络感情。他的用心我很明白,可我确实无能为力,在公司正式宣布裁员名单前还必须小心保密。这次请客真让我犯难,跟太太一商量,觉得不能不给他们夫妇一个面子,但又不能让他们有错觉我有能力保护他。最后决定我一个人去,带上礼物赴宴。女主人使出全身解数,做了一顿极为丰盛的晚餐,用的餐具也很讲究,可是我没有任何胃口和心情,硬着头皮应付。气氛有点凝重,连平时爱热闹,喜欢多话的P太太,察言观色,也收敛很多。P先生夫妇转着弯子替D博士美言,我只能微笑不语,这是我在世界语国所经历过的最别扭的晚宴。

裁员计划暂缓,因为CEO和董事会还在协商多大的裁员幅度既能节省开支,支持公司开发出新产品,又不伤筋骨,保存骨干。终于,在CEO到来的第三个月,裁员指标在管理层下达,我做梦也没有想到,我们辛苦发展的60多员工的公司,居然要砍掉一半。这下不但D博士保不住,连P博士(P先生当时已经答辩,顺利拿到了博士学位,正春风得意)也必须走人。由老板和天使投资人任命的四个年轻副总,也开掉三个,甚至天使投资人的亲弟弟也不能幸免。老的VP就剩下我一个,好腾出位子让CEO引进资深经理人员,组建新的领导班子。公司的第四号员工,一个挺能干但爱抱怨的西班牙小伙子,也列入黑名单。我感到痛心,毕竟大家同舟共济,一路走过来,我说服老板和我的老搭档、瑞典籍的第一号员工一起去跟CEO说情,还是没有成功。CEO跟我说:I know it's a great pain, especially for those you have worked with for long. But we all want the company to succeed and this is the only way to survive this tough time. I have done this numerous times, believe me, it works. 说的是老实话,可是作为经理,要开掉自己亲手招来的员工,是什么滋味:job 是员工的命根子,你不能把人送上天堂,转手又打入地狱。

煎熬不止这些。我保护华人员工的私心也受到挑战。经过多轮内部讨价还价,最后决定10名华人员工必须裁掉两位。大家乡里乡亲,砸人饭碗的事情怎么忍心去做。就在这个当口,我两年前招进来的中小学同学C博士跟我谈起,他由于个人原因,已经决定海龟(后来应聘招标成为名校的博导和正教授,事业一片光明),但是不想在裁员风潮中辞职,怕人误会是表现不佳,不得不离开。我心内暗喜,他的离开至少救了一位。我说,你不用担心,我们可以安排你在裁员风潮过后离开,而且公司会为你饯行,表彰你两年来的贡献。还剩最后一位华人员工,看样子是保不住了。我不死心,私下跟我的资深助手一起,沟通CEO刚招进来的资深工程副总,说服他工程组需要一位我们研发组出身的既懂技术又懂工程的人,作为两个组的桥梁,这样在新产品开发中可以加速技术转移。说的也是实情,但一切在于权衡。副总新到,对我们老人有所依仗,现在CEO把工程组裁员重组和产品开发的任务交给他,他多方权衡,终于接受我们的方案,接纳了我们推举的人,使我松了口气,总算保全了华人员工。

在大裁员的那一周,我整夜整夜失眠,心急如焚,茶饭不思。更加残酷的是,裁员实施当天,我作为经理,必须履行职责,跟被裁的员工个别谈话,做好善后。不管怎样小心,最后还是有风波,一位被裁的白人女质量检测员,平时受过我的批评有积怨,加上看到华人员工均完好无损,扬言我们有种族歧视和性别歧视,要到法院告我们。公司后来找人沟通,说服她私了了。我的西班牙同事,也是一个实心眼,经常打电话给我,想回到公司,可是开他的人都在台上,怎么可能。他还几次回来看我和其他老同事,跟我说对公司念念不忘,充满love-n-hate的感情。我的中国同事担心他想不开,做出什么极端的事,劝我躲开他。我了解他的为人,同情他的遭遇,还是一直跟他保持良好的关系,并在他寻找新的工作时给予强力推荐。

回想起来,不动大手术,公司难以为继,也就没有后来的复苏,成功地开发出市场需要的产品,使得投资人愿意进一步追加二期和三期的资金。可是,我和老板毕竟是书生,没有职业经理人的“铁石心肠”,感情上很难接受裁员的残酷现实,无法面对员工的惊惶和绝望。

我不能忘记P太太听到丈夫被裁、天雷轰顶一样的反应。裁员前夕,他们夫妇正计划利用每年的假日去参加北美世界语会议,老板跟我商量,决定暂先不告诉他们裁员的消息,以免影响他们的心情。可以想见,当他们在世界语国欢度一周回来后落到深渊的感受。从我们这里出去,P博士回到加拿大担任了一段园林工人,后来好像找到一份临时秘书的工作,在某大学帮忙。D博士此后失业很久,一直找不到工作,也不知他刚买的房子怎么了结。

好久好久,裁员的阴影挥之不去。太太安慰我说:你已尽了努力,他们的工作在紧缩时确实是可有可无,无法保全。唯一可以自我安慰的是,他们本来是没有机会的,我毕竟给了他们机会,并没有因此耽误他们的其他机会。

我很佩服CEO,在随后开发新产品和技术转移过程中,跟他配合默契。但在他领导公司走向成功的路上,我总觉得有“一将功成万骨枯”的悲凉。命运使我凑巧进入小公司的senior management,八年下来,我的体会是,经理,这不是我等意志薄弱者应该干的活计。

Wei Li
记于2006年独立节

立委《我的世界语国》入《世运人物志》

【相关】

《朝华午拾:用人之道》

朝华午拾-创业之路

【置顶:立委科学网博客NLP博文一览(定期更新版)】

Pulse:实时舆情追踪美国大选,live feed,real time!

http://www.netbase.com/presidential-elections2016/

Clinton has been mostly leading in social media sentiment:

Screenshots at 4:50pm 11/8/2016:

11082016a

110820160450b

110820160450c

110820160450d

110820160450e

Again go check our website live on Pulse:

http://www.netbase.com/presidential-elections2016/

 


 

Final Update of Social Media Sentiment Statistics Before Election

Final update before election:

brand-passion-index-1

timeline-comparison-2
Net sentiment over the last 24 hours: Trump +7; Clinton -9. This is the last-day analysis of social media. Buzz:

timeline-comparison-3
So, contrary to popular belief, Trump was actually leading in social media just before election day.

Compare the above with last month's ups and downs to put it in a larger context:

brand-passion-index-2
Last 3 month sentiment: Trump -11; Clinton -18.
Buzz for Trump never fails:

timeline-comparison-4

Trump's Word Clouds:

sentiment-drivers-6

sentiment-drivers-7

sentiment-drivers-8

Clinton's Word Clouds:

sentiment-drivers-9

sentiment-drivers-10

sentiment-drivers-11
Trump 3-month summary:

trumpsummary3m

Clinton 3-month summary:

clintonsummary3m

Ethnicity:

ethinic

RW:
伟哥的东西,好是好,就是没有体现美国的选举人制度
Xin:
主要是白人黑人和亚裔人数比例并没有代表实际的选民百分比。
RW:
理论上讲,只要有一方得到所有选票的23%, 他或她就可能当选

 


【大数据跟踪美大选每日更新,希拉里成功反击,拉川普下水】

昨天发布了【社煤挖掘:大数据告诉我们,希拉里选情告急】,鉴于大选的临近和选情的瞬息万变,我们决定用我们的社煤挖掘的核武器,每日跟踪大数据选情。

美国大选大数据一日一更新,11/1/2016 前24小时,看FBI事件发酵后的走势最新动态:

timeline-comparison-52

1101us

嗨 过去 24 小时,克林顿赶上来了也:两人打平,都是 -12%。热议度克林顿更甚,这也难怪,FBI 重启以后,议论焦点从老川转移到老喜身上。看看BPI这图,这一对真是冤家啊,纠缠在一起:

brand-passion-index-32

川大叔整个被喜大妈包住了,严严实实,比孙悟空的紧箍圈还厉害。Note:里面的圈是川普,外面的圈是希拉里,貌似希拉里气场如今大过老川了。照这个趋势,克林顿希望蛮好。

昨天晚上看新闻,说虽然 FBI 重启对克林顿选情影响很大,传统的新闻民调 CNN poll 还是希拉里领先五个百分点,其他的民调有曾一度只领先一个百分点的记录。虽然都比以前的领先幅度缩小,但仍然领先。川普阵营批判说这些个民调都是被操纵的,他们那边的民调是川普领先。这些个极小数据的民调极易偏差,公婆各有理,还是 put aside,咱们看真正的大数据:这是川普与希拉里最近24小时的 big data summary 对比

1101huanpu24

1101clinton24

回顾重温一下一周来(10/25-11/1)的走向,作为希拉里选情起伏的背景:

timeline-comparison-53

brand-passion-index-33

到现在为止的一周平均 net sentiment,Trump 是 2%,Clinton 是 -12%,可见希拉里的反击,主要不是把自己的 social rating 提升了(过去一天还是 -12),而是把对手拉下水了,让川普从周平均的 +2 拉到现在的冰点以下 -12。克林顿用的是什么伎俩赶上来的呢?
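上面的“周平均”与“过去一天”的对比,本质上就是对每日 net sentiment 序列取不同窗口的平均。下面是一个最简示意(数据纯属虚构,仅用来演示这个算法):

```python
def weekly_average(daily_scores):
    # 七天日度 net sentiment 的简单平均,对应文中说的“一周平均”
    return sum(daily_scores) / len(daily_scores)

# 虚构的一周(10/25-11/1)每日 net sentiment(单位:百分点)
trump_daily   = [6, 4, 5, 3, 2, 6, -12]   # 最后一天被“拉下水”
clinton_daily = [-14, -13, -12, -11, -12, -10, -12]

trump_week   = weekly_average(trump_daily)    # = 2.0
clinton_week = weekly_average(clinton_daily)  # = -12.0
```

可以看到,即使川普只有最后一天跌到 -12,周平均仍可以停留在 +2 左右,这正是“当日值”和“周平均”读数分化的原因。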

朋友说,大招来了:原来 拉川普下水是找到了川普与普京勾搭的新证据啊:

50740893092863278

A Veteran Spy Has Given the FBI Information Alleging a Russian Operation to Cultivate Donald Trump

Donald Trump Used Legally Dubious Method to Avoid Paying Taxes

约:
有点标题党,内容还算靠谱:

希拉里这次要坐牢?

施:
这次选举是测试大数据有效性的一个试金石,我感觉可能无效....
另:美帝国主义的人民群众也太不成熟了,一点自己的信念都没有?都受舆情影响,吃瓜群众表示不懂

南:
关键是很多选民都没有被社交媒体覆盖到吧

施:
情绪和投票时间的关系是什么样的?

Nick:
没错。伟哥说这么多没用,就一句话:谁能上。

张:
看样子是川普了,我很好奇这个家伙上来会是什么结果

我:
我这才是实事求是,动态跟踪,全方位大数据信息。“谁能上”那算个啥啊?
在胶着的选情下,那就是赌命,有没有大数据,都可以一赌,也都有不小的概率猜中,或猜不中,没有半点营养。如果是非胶着状态,大数据预测比其他预测更准。我坚信。要学那个AI大嘴巴,谁不会?他们根本连技术细节都没有,不过是制造了一个话题,顶了一个AI的帽子,利用普罗和媒体对AI的敬畏。我的选情追踪和分析,比那个高出不知几个数量级,这还真不是吹的。今天的选情趋势如果能够持续,大选日前没有新的定时炸弹被引爆,我预测克林顿当选的可能性可达80%

Nick:
@wei 是骡子是马,拉出来溜溜。就一句话:谁赢。

我:
这样吧,大选日前一天,我做个预测,根据一直到那一刻的综合大数据 analytics,现在不行,选情还在变化,并且显然有胶着的迹象。

Xi:
@wei , 别那么保守! 得老莫者, 得天下! 肯定是Hillary赢了。。。

Nick:
@wei 这算什么本事?

我:
尼克是星座骗女青年骗惯了,只知道短平快 如何得手,顾不了失手的后果了。
反正我有大数据 有平台 有深度 parsing 我就这么每日追踪 不打无准备之仗。
以唐老师的说法,得老墨者得天下,那是克林顿无疑了,西班牙语舆情那是一面倒,克林顿高高在上,从来没有下来过

白:
伟哥这是要把谁能上做成红学的节奏。
最后,谁能上不重要了,为了谁能上而秀肌肉的人互撕。

我:
重在过程 不在结果。
这次大选好 富有戏剧性和悬念, 具有观赏性和互撕性, 跌宕起伏 精彩纷呈

阿:
我开了个盘口 目前二人押川普 四人押希太 欢迎加入
重在结果 不在过程

我:
问一句 为什么希拉里推特说的三点facts
第一条说 fbi 并未重启电邮门调查,只是提议重启。

Nick:
@wei 加入盘口,eat your own dog food

我:
第二个 fact 是 fbi director 自己并不清楚新发现的邮件有多少相关
据信很可能是已经审查过的邮件的另一个拷贝。
这个 director 涉嫌扰乱大选,对一个不知结果的新线索 可以按程序重启调查 但在大选前造成舆论 难逃干扰大选的怀疑,他可能也有违法乱纪的麻烦。

 

【相关】

【社煤挖掘:大数据告诉我们,希拉里选情告急】

CNBC‎: AI system finds Trump will win the White House and is more popular than Obama in 2008

Trump sucks in social media big data in Spanish

Did Trump’s Gettysburg speech enable the support rate to soar as claimed?


【社煤挖掘:大数据告诉我们,希拉里选情告急】

这是最近一周的对比图:

brand-passion-index-15
的确显得不妙,川大叔领先了。是不是因为FBI重启调查造成的结果?
这是过去24小时的图:

brand-passion-index-17
这是一个月的涨跌对比:

timeline-comparison-25

至此局势基本清晰了:希拉里的确选情告急。MD 这大选真是瞬息万变啊,不久前还是喜妈领先或胶着,如今川大叔居然翻身了,选情的变化无常真是让人惊心动魄。

这是last week:

timeline-comparison-26

这一周喜婆很被动、很不利。过去24小时她一直在零下20上下,而老川在零上10左右,有30点的差距,NND:

timeline-comparison-27

看看更大的背景,过去三个月的选情对比:

timeline-comparison-28

原来是,喜大妈好不容易才领先,此前一直落后,直到九月底。九月底到十月中是喜妈的极盛期,也是川普的麻烦期。

至于热议度,从来都没有变过,总是川普压倒:

timeline-comparison-31

眼球数也是一样:

timeline-comparison-32

一年来的狂热度(passion intensity)基本上也是川普领先,但喜婆也有不少强烈粉她或恨她的,所以曲线有交叉:

timeline-comparison-33

这个 passion intensity 与所谓 engagement 应该有强烈的正相关,因为你痴迷或痛恨一个 candidate 你就愿意尽一切所能去投入、鼓噪、撕逼。
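上面猜测 passion intensity 与 engagement 有强正相关;这类猜测可以用 Pearson 相关系数在数据上验证。下面是一个自包含的示意(数据为虚构,仅演示验证方法):

```python
def pearson(xs, ys):
    # Pearson 相关系数,量化两组指标的线性相关程度(-1 到 1)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# 虚构的每日 passion intensity 与 engagement(如转发/评论率)
intensity  = [55, 60, 70, 80, 90]
engagement = [1.1, 1.3, 1.6, 2.0, 2.4]

r = pearson(intensity, engagement)   # 接近 1 即表示强正相关
```

真要验证文中的猜测,只需把两条真实的日度曲线代入同一个函数即可。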

最好是赶快把川大叔的最新丑闻抖出来。这家伙那么多年,难道就留不下比电话录音更猛、更铁的丑闻证据。常识告诉我们肯定有 skeleton in the closet,可是这家伙太狡猾,可能一辈子做商人太过精明,连染有液体的内裤也不曾留下过?是时候从 closet 拿出来了。反正这次大选已经 low 得不能再 low 了,索性 low 到底。不过如果要是有,不会等到今天,大选只剩下一周、先期投票已经开始。

这么看来,作为 data scientist,我不敢不尊重 data,不能一厢情愿地宣传喜妈赢面大。赶巧我一周前调查的那个月正是克林顿选情的黄金月,结果令人鼓舞。

我们的大数据平台有 27 种 filters,用我们的大数据工具可以对数据做不同的组合切割,要是在会玩的分析师手中,可以做出很漂亮的各种角度的分析报告和图表出来。地理、时间只是其中两项。
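文中说的“27 种 filters 的组合切割”,逻辑上相当于对帖子集合叠加多个过滤谓词。下面用 Python 做一个极简示意(字段名与数据均为虚构,并非我们平台的真实接口):

```python
def apply_filters(posts, *predicates):
    # 依次叠加过滤条件,模拟多个 filter 的组合切割
    for pred in predicates:
        posts = [p for p in posts if pred(p)]
    return posts

posts = [
    {"lang": "es", "state": "FL", "gender": "f", "polarity": "neg"},
    {"lang": "en", "state": "FL", "gender": "m", "polarity": "pos"},
    {"lang": "es", "state": "CA", "gender": "f", "polarity": "pos"},
]

# 组合两个维度:地理(摇摆州佛罗里达)+ 语言(西班牙语)
florida_spanish = apply_filters(
    posts,
    lambda p: p["state"] == "FL",
    lambda p: p["lang"] == "es",
)
```

地理、时间、语言、性别等维度都可以这样任意组合,切出不同角度的子集再做情绪统计。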

电邮门是摧毁性的。FBI 选在大选前一周重启,这个简直是不可思议。比川普的录音曝光的时间点厉害。那家印度所谓AI公司押宝可能押对了,虽然对于数据的分析能力和角度,远不如我们的平台的丰富灵活。他们基本只有一个 engagement 的度量,连最起码的 sentiment classification 都没有,更不用说 social media deep sentiments 了。无论怎么说,希拉里最近选情告急是显然的。至于这种告急多大程度上影响真正的选票,还需要研究。

朋友提醒所谓社会媒体,其实是 pull 和 push 两种信息的交融,其来源也包含了不少news等,这些自上而下的帖子反映的是两党宣传部门的调子,高音量,影响也大,但并非真正的普罗网虫自下而上的好恶和呼声,最好是尽可能剔除前者才能看清真正的民意。下面的一个月走势对比图,我们只留下 twitter,FB,blog 和 microblog 四种社会媒体,剔除了 news 和其他的社会媒体:

timeline-comparison-49

下面是推特 only,大同小异:

timeline-comparison-50

对比一下所有的社会媒体,包括 news 网站,似乎对于这次大选,pull 和 push的确是混杂的,而且并没有大的冲突和鸿沟:

timeline-comparison-51

希拉里为什么选情告急?看看近一个月的希拉里云图,开始红多绿少了:

sentiment-drivers-43

sentiment-drivers-44

对比一下川普的云图,是红绿相当,趋向是绿有变多的趋势,尤其是第二张情绪(emotion)性云图:

sentiment-drivers-45

sentiment-drivers-46

再看看近一周的云图对比, 舆论和选情的确在发生微妙的变化。这是川普最近一周的sentiment 云图:

sentiment-drivers-47

sentiment-drivers-48
对比喜婆婆的一周云图:

sentiment-drivers-49

sentiment-drivers-50

下面是网民的针对希拉里来的正负行为表述的云图:

sentiment-drivers-51

not vote 希拉里的呼声与 vote for her 的不相上下。对比一下川普最近一周的呼声:

sentiment-drivers-52
vote 的呼声超过 not vote for him

这是最近一周关于克林顿流传最广的posts:

clinton_trouble

FBI 重启调查显然被川普利用到了极致,影响深远。

Most popular posts last week by engagement:

clinton_trouble1

Most popular posts last week on Clinton by replies and comments:

clinton_trouble2

Some random sample posts:

clinton_tposts_random
Negative comments on Clinton have been rampant recently:

clinton_tposts

29367bc4bae054ee9a6262d9cccdfed6

如果这次希拉里输了,the FBI director Comey 居功至伟。因为自从录音丑闻以后,选情对希拉里极为有利,选情的大幅度下滑与FBI重启调查紧密相关。媒体的特点是打摆子,再热的话题随着时间也会冷却,被其他话题代替。这次的问题在,FBI 重启电邮门调查的话题还没等到冷却,大选就结束了,媒体和话题对选民的影响当下为重。而录音丑闻的话题显然已经度过了发酵和热议期,已经冷却,被 FBI 话题代替了。从爆料的角度,录音丑闻略微早了一些,可谁料到在这个节骨眼 FBI 突然来这么一招呢。

看看最近一周的#Hashtags,也可以了解一点社会媒体话题的热度:

word-cloud-23

与事件有关的有: #fbi #hillarysemails #hillarysemail #podestaemails19 #podestaemails20
Negative ones include: #wikileaks #neverhillary #crookedhillary #votetrump

Look at the buzz around Hillary below: the biggest item in the brands cloud mentioned with her in last week's data is "FBI":

word-cloud-24

The overall buzz last week:

word-cloud-26

这是最近一周有关希拉里话题的emoji图:

hullery1weekemoji

虽然说笑比哭好,希拉里及其阵营和粉丝却笑不起来,一周内用到这个话题的 emoji 总数高达 12,894,243。这也是社会媒体的特点吧,用图画表达情绪。情绪的主调就是哭。邮件门终于炸了。

现在的纠结是,【大数据告诉我们,希拉里选情告急】,到底发还是不发?为了党派利益和反川立场,不能发。长老川志气,灭吾党威风。为了 data scientist 的职业精神,应该发。一切从数据和事实出发,是信息时代之基。中和的办法是,先发一篇批驳那篇流传甚广的所谓印度AI公司预测川普要赢,因为那一篇的调查区间与我此前做的调查区间基本相同,那是希拉里选情最好的一个月,他们居然根据 engagement alone 大嘴巴预测川普的胜选,根本就没有深度数据的精神,就是赌一把而已。也许等批完了伪AI,宣扬了真NLU,然后再发这篇 【大数据告诉我们,希拉里选情告急】。

FBI director 说这次重启调查,需要很长时间才能厘清。现在只是有了新线索需要重启,不能说明希拉里有罪无罪。没有结论前,先弄得满城风雨,客观上就是给选情带来变数。虽然在 prove 有罪前,都应该假定无罪,但是只要有风声,人就不可能不受影响。所以说这个时间点是最关键的。如果这次重启调查另有黑箱,就更惊心动魄了。如果不是有背后的黑箱和势力,这个时间点的电邮门爆炸纯属与新线索的发现巧合,那就是希拉里的运气不佳,命无天子之福。一辈子强性格,卧薪尝胆,忍辱负重,功亏一篑,无功而返,保不准还有牢狱之灾。可以预测,大选失败就是她急剧衰老的开始。

一周前有个记者interview川普,川普一再说,希拉里这个犯罪的人,根本就不该被允许参加竞选。记者问,哪里犯罪了?川普说电邮门泄密,还有删除邮件隐瞒罪恶。当时这个重启调查还没有。记者问,这个案子不是有结论了吗,难道你不相信FBI的结论?川普说,他们弄错了,把罪犯轻易放了。这是一个腐烂的机构,blah blah。可是,同样这个组织,老川现在是赞誉有加。这就是一个无法无天满嘴跑火车的老狐狸。法律对他是儿戏,顺着他的就对,不顺着他心意的就是 corrupt,rigged,这种人怎么可以放心让他当总统?

中间选民的数量在这种拉锯战中至关重要,据说不少。中间选民如果决定投票,其趋向基本决定于大选前一周的舆论趋向。本来是无所谓是鸡是鸭的,如今满世界说一方不好,合理的推断就是去投另一方了。现在看来,这场竞赛的确是拉锯战,很胶着,不是一方远远超过另一方。一个月前,当录音丑闻爆料的时候,那个时间点,希拉里远远超过川普,毫无悬念。一个月不到,选情大变,就不好说了,迹象是,仍然胶着。

不过,反过来看,川普的 popularity 的确是民意的反映。不管这个人怎么让人厌恶,他所批判的问题的确长久存在。某种意义上,Sanders 这样的极端社会主义者今年能有不俗的表现,成为很多年轻一代的偶像,也是基于类似的对现状不满、对establishment的反叛的民意。而希拉里显然是体系内的老旧派,让人看不到变革的希望。人心思变的时候,一个体系外的怪物也可以被寄托希望。至少他敢于做不同事情,没有瓶瓶罐罐的牵扯。

上台就上台吧,看看他造出一个什么世界。

老闻100年前就说过:
这是一沟绝望的死水,清风吹不起半点漪沦。不如多扔些破铜烂铁,爽性泼你的剩菜残羹。
。。。。。。
这是一沟绝望的死水,这里断不是美的所在,不如让给丑恶来开垦,看它造出个什么世界。

 

【相关】

CNBC‎: AI system finds Trump will win the White House and is more popular than Obama in 2008

Trump sucks in social media big data in Spanish

Did Trump’s Gettysburg speech enable the support rate to soar as claimed?


 

Trump sucks in social media big data in Spanish

As promised, let us get down to the business of big data mining of public opinions and sentiments from Spanish social media on the US election campaign.

We know that in the automated mining of public opinion and sentiment for Trump and Clinton we did before, Spanish-Americans were severely under-represented, with only 8% Hispanic posters compared with their 16% share of the population according to the 2010 census (widely believed to be more than 16% today), perhaps because of language and/or cultural barriers. So we decided to use our multilingual mining tools to do a similar automated survey of Spanish social media to complement our earlier studies.

This is Trump as represented in Spanish social media over the last 30 days (09/29-10/29). The key figure is his social rating as reflected by his net sentiment of -33% (versus his rating of -9% in English social media for the same period): way below freezing point, it really sucks, as also illustrated by the concentration of negative Spanish expressions (red font) in his word cloud visualization.

The net sentiment of -33% corresponds to 242,672 negative mentions vs. 121,584 positive mentions, as shown below. In other words, negative comments on Trump are about twice as numerous as positive ones in Spanish social media over the last 30 days.
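The -33% figure can be reproduced from the two mention counts quoted above, assuming net sentiment is computed as (positive - negative) over their sum; a quick check:

```python
def net_sentiment_pct(positive, negative):
    # (pos - neg) / (pos + neg), expressed as a rounded percentage
    return round(100.0 * (positive - negative) / (positive + negative))

# Counts quoted in the text for Trump in Spanish social media, last 30 days
trump_spanish = net_sentiment_pct(positive=121_584, negative=242_672)  # -> -33
```

The same formula applied to roughly twice as many negatives as positives gives about -33%, matching the reported rating.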

This is the buzz in the last 30 days for Trump: mentions and potential impressions (eyeballs) amount to millions of data points, indeed a very hot topic in social media.

This is the BPI (Brand Passion Index) graph for directly comparing Trump and Clinton for their social ratings in the Spanish social media in the last 30 days:

As seen, there is simply no comparison: to refresh our memory, let us contrast it with the BPI comparison in the English social media:

Earlier in one of my election campaign mining posts on Chinese data, I said, if Chinese only were to vote, Trump would fail horribly, as shown by the big margin in the leading position of Clinton over Trump:

This is even more true based on social media big data from Spanish.

This is the comparison trends of passion intensity between Trump and Clinton:

Visualizing the same passion intensity data by weeks instead of by days shows even more clearly that people are very passionate about both candidates in Spanish social media discussions; the intensity of sentiment expressed for Clinton is slightly higher than for Trump:

This is the trends graph for their respective net sentiment, showing their social images in Spanish-speaking communities:

We already know that there is simply no comparison: in this 30-day duration, even when Clinton dropped to her lowest point (close to zero) on Oct 9th, she was still way ahead of Trump, whose net sentiment at the time was -40%. In all other time segments we see an even bigger margin (as big as 40 to 80 points) between the two. Clinton has been leading consistently.

In terms of buzz, Trump generates more noise (mentions) than Clinton consistently, although the gap is not as large as that in English social media:

This is the geo graph: the social data come mostly from the US and Mexico, with some from other Latin American countries and Spain:

Since only the Mexicans in the US may have voting power, we should exclude media from outside the US to get a clearer picture of how Spanish-speaking voters may impact this election. Before that filtering, we note that Trump sucks in the minds of Mexican people, which is no surprise at all given his irresponsible comments about them.

Our social media tool is equipped with geo-filtering capabilities: you can add a geo-fence to a topic to retrieve all social media posts authored from within a fenced location. This allows you to analyze location-based content irrespective of post text. That is exactly what we need in order to study Spanish-speaking communities in the US who are likely to be voters, excluding media from Mexico or other Spanish-speaking countries. The same filtering is needed when we study the critical swing states, to see the true picture of public sentiment and opinion in the states that will decide the destiny of the candidates and the future of the US (stay tuned, swing-state social media mining will come shortly, thanks to our fully automated mining system based on natural language deep parsing).
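In the simplest case, a geo-fence like the one described above can be approximated by a bounding-box test on each post's coordinates (a sketch only; real fences are usually polygons, and the field names and sample coordinates here are invented):

```python
def in_fence(post, lat_min, lat_max, lon_min, lon_max):
    # True if the post's geo coordinates fall inside the rectangular fence
    return lat_min <= post["lat"] <= lat_max and lon_min <= post["lon"] <= lon_max

posts = [
    {"text": "voto", "lat": 25.8, "lon": -80.2},   # around Miami, FL
    {"text": "no",   "lat": 19.4, "lon": -99.1},   # around Mexico City
]

# Rough bounding box of the continental US
us_posts = [p for p in posts if in_fence(p, 24.5, 49.5, -125.0, -66.9)]
```

Filtering by the poster's location, rather than by the text, is what lets the study keep US Spanish-language posts while dropping posts from Mexico or Spain.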

Now that I have excluded Spanish data from outside America, it turns out that the social ratings are roughly the same as before: the reduction of the data does not change the general public opinion from Spanish-speaking communities, inside the US or beyond. This is US-only Spanish social media:

This is summary of Trump for Spanish data within US:

It is clear that Trump's image truly sucks in the Spanish-speaking communities of the US, which is no surprise; it is so natural and evident that we are simply confirming and verifying it with big data and high tech.

These are the sentiment drivers (i.e. pros and cons as well as emotional expressions) for Trump:

We might need Google Translate to interpret them, but the color coding is universal: red marks negative comments and green positive. More red than green means a poor image or social rating.

In contrast, Clinton's word clouds involve far more green than red, showing that her support remains high in the Spanish-speaking communities of the US.

It looks like the emotional sentiments expressed for Clinton are not quite as good as her pros-and-cons sentiment drivers.

Sources of this study:

Domains of this study:

[Related]

Did Trump's Gettysburg speech enable the support rate to soar as claimed?

Big data mining shows clear social rating decline of Trump last month

Clinton, 5 years ago. How time flies …

Automated Survey

Dr Li’s NLP Blog in English

Did Trump's Gettysburg speech enable the support rate to soar as claimed?

The last few days have seen tons of reports on Trump's Gettysburg speech and its impact on his support rate, which some of his campaign media claim soared thanks to this powerful speech. We would love to verify this and uncover the true picture through big data mining of social media.

First, here is one link on his speech:

DONALD J. TRUMP DELIVERS GROUNDBREAKING CONTRACT FOR THE AMERICAN VOTER IN GETTYSBURG. (The most widely circulated related post in Chinese social media seems to be this: Trump's heavyweight speech enables the soaring of the support rate and possible stock market crash).

Believed to be a historic speech in his last dash of the campaign, Trump basically said: I am willing to make a contract with the American people on reforming politics and making America great again; with this outline of my administration's plan, in the time frame I promised, I will make things happen when I am in office, believe me.

Trump made the speech on the 22nd of this month; to mine true public opinion on the speech's impact, we can investigate the data around the 22nd with automated social media analysis. We believe that automated polling based on big data and language understanding technology is much more revealing and dependable than traditional manual polls, which phone something like 500 to 1,000 people; the latter are laughably short of data to be trustworthy.

timeline-comparison-14

What does the above trend graph tell us?

1. Trump in this time interval was indeed on the rise. The "soaring" claim does not come entirely out of nowhere. But, there is a big BUT.

2. BUT, a careful look at the public opinion represented by net sentiment (a measure reflecting the ratio of positive to negative mentions in social media) shows that Trump basically stayed below freezing point (i.e. more negative than positive) throughout this interval, with only a brief rise above zero near the speech on the 22nd, and soon went underwater again.

3. The soaring claim cannot withstand scrutiny at all: soaring implies a sharp rise in support after the speech compared with before, which is not the case.

4. The fact is, Uncle Trump's social media image dropped to its bottom on the 18th of this month (with a net sentiment of -20%). From the 18th to the 22nd, when he delivered the speech, his net sentiment rose steadily from -20% to 0; but from the 22nd to the 25th it no longer went up and instead fell back down. So there is no ground for the claim that support soared as an effect of his speech, none at all.

5. Although not soaring, Uncle Trump's speech did not lead to a sharp drop either; in terms of the buzz generated, the speech can be said to have been fairly well delivered. After the speech, the net sentiment of public opinion dropped slightly, basically maintaining fundamentals close to zero.

6. The above big data investigation shows how misleading a media campaign can be against objective evidence and real-life data. This is all propaganda, which cannot be taken at face value: from the so-called "support rate soared" to "possible stock market crash". It is basically campaign nonsense or noise, not to be taken seriously.

The following figure is a summary of the surveyed interval:

trump1

As seen, the average net sentiment for this interval is -9%, with a positive rating consisting of 2.7 million mentions and a negative rating of 3.2 million mentions.

How do we interpret -9% as an indicator of public opinion? According to our numerous previous automated surveys of political figures, this is certainly not a good rating, but not particularly bad either; we have seen worse. Basically, -9% is below the average line among politicians, reflecting his public image in social media. Nevertheless, compared with Trump's own earlier ratings, this interval records a 13-point jump, which is pretty good for him and his campaign. But the progress is clearly not the effect of his speech.

This is the social media statistics on the data sources of this investigation:

trump2

In terms of the ratio, Twitter ranks No. 1: it is surely the most dynamic social medium on politics, with the largest number of tweets generated every minute. Of a total of 34.5 million mentions of Trump, Twitter accounted for 23.9 million; in comparison, Facebook has 1.7 million mentions.

Well, let's zoom out to the last 30 days instead of only the days around the speech, to provide a bigger background for uncovering the overall trends of this 2016 presidential fight between Trump and Clinton.

timeline-comparison-15

The 30 days range from 9/28 to 10/28, during which the two lines in the comparison chart show Trump's and Clinton's respective daily ups and downs in net sentiment (reflecting their social rating trends). The general impression is that the fight is fairly tight. Both are scandal-ridden, both tough and belligerent, and both rate fairly poorly socially. The trends might look a bit clearer visualized by weeks instead of by days:

timeline-comparison-16
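The by-week view is just a coarser aggregation of the daily series; a minimal sketch of daily-to-weekly averaging (dates and scores are invented for illustration):

```python
from datetime import date

def by_week(daily):
    # group (date, score) pairs by ISO (year, week) and average each group
    weeks = {}
    for d, score in daily:
        weeks.setdefault(d.isocalendar()[:2], []).append(score)
    return {wk: sum(v) / len(v) for wk, v in weeks.items()}

daily = [
    (date(2016, 10, 3), -10), (date(2016, 10, 4), -20),   # ISO week 40
    (date(2016, 10, 10), -5), (date(2016, 10, 11), -15),  # ISO week 41
]
weekly = by_week(daily)
```

Averaging within each week smooths out day-to-day noise, which is why the weekly chart reads more clearly than the daily one.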

No matter how much I dislike Trump, and regardless of my dislike of Clinton, for whom I have decided to vote anyway to make sure the annoying Trump is out of the race, as a data scientist I have to rely on the data, which say that Hillary's recent situation is not too optimistic: Trump has actually at times gone a little ahead of Clinton, a troubling fact to recognize.

timeline-comparison-17

The graph above compares mentions (buzz, so to speak). In terms of buzz, Trump is a natural topic king, generating the most noise and comments, good or bad; Clinton is no comparison in this regard.

timeline-comparison-18

The above compares passion intensity of public opinion: like/love or dislike/hate? The passion intensity for Trump is really high, showing that he has some crazy fans and/or deep haters among the people. Hillary Clinton is controversial too, and it is not rare to come across people with very intense sentiments towards her; but Trump is a political anomaly, more likely to cause fanaticism or controversy than Hillary.

In his recent Gettysburg speech, Trump highlighted the so-called danger of the election being manipulated. He clearly exaggerated the procedural risks, more than past candidates in history running under the same election protocol and mechanism. By doing so, he paved the way for future non-recognition of the election results. He was even fooling the entire nation by saying publicly that he would totally accept the election results if he wins: this is not humor; it depicts a dangerous political figure with unchecked ambition. A very troubling sign, and a fairly dirty political fire he is playing with, to my mind.

Now the situation is this: if Clinton beats him by a large margin, old Uncle Trump will have no excuse or room for instigating incidents after the election. But if it is closer to a see-saw, which is not unlikely given the trends analysis above, then our country might be in some trouble: Uncle Trump and his die-hard fans will almost certainly make some trouble. Given the seriousness of this situation and the pressing risk of political turmoil, quite a few people, including some conservative minds, have begun to call for electing Hillary simply to prevent Trump from possible trouble-making. I am of that mind-set too, given that I do not like Hillary either. If not for Trump, in an ordinary election like this where I dislike both major-party candidates, I would most likely vote for a third party or abstain; but this election is different, it is too dangerous as it stands. It is like a time bomb hidden somewhere in Trump's house, totally unpredictable. To keep him from going off, it is safer to vote for Clinton.

In comparison with my automated sentiment analysis blogged about a week ago (Big data mining shows clear social rating decline of Trump last month), this updated, more recent BPI brand comparison chart looks more see-saw: Clinton's recent campaign seems to be stuck somewhere.

brand-passion-index-11

Over the last 30 days, Clinton's net sentiment rating is -17%, while Trump's is -19%; Clinton is only slightly ahead. Fortunately, Trump's speech did not really reverse the gap between the two, as seen fairly clearly from the following historical trends represented by three circles per brand (the darker the circle, the more recent the data): Clinton's general trends hold: she started behind, improved, is now a bit stuck, but is still leading.

 

brand-passion-index-12

Yes, Clinton's most recent campaign activities are not making significant progress, despite more resources put to use, as shown by the bigger, darker circle in the graph. Among Clinton's three circles, the smallest and lightest stands for the first 10 days of the past 30 days, starting clearly behind Trump. The two later circles, covering the last 20 days, stay roughly in place; the circle grows larger, indicating more campaign input and more buzz generated, but the benefits are not so obvious. On Trump's side, the trends zigzag, with the overall trajectory actually declining over the past 30 days: the middle ten days saw a clear rise in his social rating, but the last ten days fell back again.

Let us have a look at Trump's 30-day social media sentiment word clouds; the first is more about comments on his pros and cons, and the second is more direct emotional expressions about him:

sentiment-drivers-38

sentiment-drivers-37
One friend glanced at the red-font expression "fuck" and asked: who are the subjects and objects of "fuck" here? In fact, the subject generally does not appear in the social posts; by default it is the poster himself, representing part of the general public. The object of "fuck" is, of course, Trump, for otherwise our deep-linguistics-based system would not count it as a negative mention of Trump in the graph. Here are some random samples shown alongside the graph:

trumpfuck

trumpfuck2
My goodness, the "fuck" mentions account for 5% of the emotional data: poor old Uncle Trump got fucked in social media nearly 400,000 times within one month, showing how much this guy is hated by some of the very people he is supposed to represent and govern if he takes office. See how they actually express their strong dislike of Trump:

fucking moron
fucking idiot
asshole
shithead

you name it, to the point that even some Republicans curse him like crazy:

Trump is a fucking idiot. Thank you for ruining the Republican Party you shithead.

Looking at the figure of popular media below, it seems that the most widely circulated political posts in social media involve quite a few video works:

trumpmedia

The domains figure below shows that the Tumblr posts on politics contribute more than Facebook:

domains-6

In terms of the demographics of social media posters, there is a fair balance between male and female: 52% male, 48% female (in contrast to Chinese social media, where only 25% of those posting political comments on the US presidential campaign are female). The figure below shows the ethnic background of the posters: 70% Caucasian, 13% African American, 8% Hispanic and 6% Asian. Hispanic and Asian Americans appear under-represented in English social media relative to their population ratios, so this study may have missed some of their voice. (We have another similar study using Chinese social media, which shows a clear, big lead of Clinton over Trump; given time, we should run another automated survey over Spanish social media with our multilingual engine. Another suggestion from friends is a similar study of the swing states, since these are the states that will decide the outcome of this election; we can filter the data by the locations posts come from to simulate that study.) There might be language or cultural reasons for the under-representation.

trumpethinics
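The swing-state study suggested above comes down to a geo filter over the post stream before the sentiment aggregation. Below is a minimal sketch, assuming each post record carries a `state` field resolved from its geo metadata; the field name and record layout are illustrative, not the actual system's schema:

```python
# A commonly cited set of 2016 swing states (illustrative list).
SWING_STATES = {"FL", "OH", "PA", "NC", "NV", "IA", "NH", "CO"}

def filter_swing_states(posts):
    """Keep only posts whose geo metadata resolves to a swing state."""
    return [p for p in posts if p.get("state") in SWING_STATES]

# Toy records standing in for geo-tagged social posts:
posts = [
    {"text": "Trump won", "state": "FL"},
    {"text": "fucking idiot", "state": "CA"},
    {"text": "Hillary trumped Trump", "state": "OH"},
]
print([p["state"] for p in filter_swing_states(posts)])  # ['FL', 'OH']
```

The filtered subset would then feed the same net-sentiment aggregation as the national study, so no other part of the pipeline needs to change.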

This last table offers some fun facts from the investigation. In social media, people talk most about the campaign on Wednesday and Sunday evenings, with 9 o'clock as the peak; for example, on the topic of Trump, nine o'clock on Sunday evening generated 1,357,766 messages within one hour. No wonder there is no shortage of big data on politics from social media. In contrast, no matter how a traditional manual poll does its sampling, the limited number of data points is a real challenge: with typically 500 to 1,000 phone calls, how can we trust that a poll represents the opinions of 200 million voters? The data are laughably sparse. Of course, in the pre-big-data age there was simply no alternative for collecting public opinion in a timely manner on a limited budget. This is the beauty of the Automatic Survey, which is bound to outperform the manual survey and become the mainstream of polling.

trumpdayhour

Authors with most followers are:

trumpmedia2

Most mentioned authors are listed below:

trumpauthors

When in history did we ever have this much data and such powerful capabilities for fully automated mining of public opinions and sentiments at scale?

trumppopularposts

 

[Related]

Big data mining shows clear social rating decline of Trump last month

Clinton, 5 years ago. How time flies …

Automated Survey

Dr Li’s NLP Blog in English

 

 

【社煤挖掘:川普的葛底斯堡演讲使支持率飙升了吗?】

反正日夜颠倒了,那就较真一下,看看大数据大知识,对于川普的葛底斯堡演说的所谓舆情飙升到底是怎么回事。先给几个links:

DONALD J. TRUMP DELIVERS GROUNDBREAKING CONTRACT FOR THE AMERICAN VOTER IN GETTYSBURG

报道的是本月22日川大叔的历史性演说,旨在振奋人心,做竞选的最后冲刺,大意:
寡人与美国人民有个约定,看我的,believe me

中文舆论中,这篇似乎流传最广:【川普重磅演讲致支持率飙升 全球股市将暴跌?】。

因为川普演说是22日,为了看舆情的飙升对比,可以以22日为中心取前后几天的社会媒体大数据做分析,看个究竟。至少比传统民调打五百、一千个电话来调查,自动民调的大数据(millions 的数据点)还是靠谱一些吧。

timeline-comparison-14
这张趋势图怎么看?

1. 川普在这个时间区间总体的确是上升。飙升之说,不完全是无中生有(准确地说,其实是捕风捉影,见下)。

2. 但是,仔细看舆情(net sentiment)图可以发现,川普这段时间基本上还是一直没有摆脱负面舆情多于正面舆情的局面,舆情曲线除了22号当天短暂超越冰点,总体一直是零下。

3. 飙升之说经不起推敲,因为凡飙升,必须是事件后比事件前的舆情,有明显的飞跃,其实不然。

4. 事实是,川大叔近期舆情的谷底是本月18号(零下20+),从18号到22号 他 deliver speech 前,他的舆情已经有比较明显的提升(从 -20 到 0),而从 22 号 到 25 号,舆情不升反略降,飙升从何谈起?

5. 虽然没有飙升,但川大叔这次表演还是及格的。至少 speech 后,舆情没有大跌,基本保持了接近零度的基本面。

6. 由此可见,媒体造势是多么地捕风捉影。以后各位看到这种明显是宣传(propaganda)的帖子,可以多一个心眼了:通常的宣传造势的帖子都在夸大其词(如果不公然颠倒黑白或歪曲事实的话),从所谓“舆情飙升”到预计“股市暴跌”,都是要显示川普演说的重量级。基本是无稽之言,不能当真的。

下图是这个调查区间的数据小结:

trump1

这个区间的平均舆情指数是 -9%,2.7 million 的正面评价,3.2 million 的负面评价。

-9% 是一个什么概念,根据我们以往对政治人物的多次舆情调查来看,这不是一个好的舆情,但也不是特别糟糕,属于平均线下。但是,与川普自己的总体舆情比较,这个区间表现良好,有 13 点的提升,但这个提升并非所谓演说飙升带来的。
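顺便把 -9% 这笔账算一下:net sentiment 就是正负评价之差除以二者之和。下面用上文的 2.7 million 正面、3.2 million 负面来验证(公式是通行算法,这里作为假设写出,并非引自产品文档):

```python
def net_sentiment(positive: float, negative: float) -> float:
    """Net sentiment: (pos - neg) / (pos + neg), in [-1, 1].

    Below zero ("under freezing point") means more negative
    than positive mentions.
    """
    total = positive + negative
    if total == 0:
        return 0.0
    return (positive - negative) / total

# 2.7M positive vs 3.2M negative, as reported in this survey window:
print(round(net_sentiment(2_700_000, 3_200_000) * 100, 1))  # -8.5
```

与文中取整后的 -9% 基本一致(原始计数应当不是恰好 2.7 与 3.2 million)。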

这是社煤数据源的统计:

trump2

从比例看,推特永远是最 dynamic,量也最大,总热议度 34.5 million mentions,推特占了 23.9 million。不少社煤的分析 apps 干脆扔掉其他的数据源,只做推特,作为社会媒体的代表,也基本上可以了。但是,感觉上还是,只做推特,虽然大数据之量可以保证,但可能偏差会大一些,因为喜欢上推特跟踪政治人物和话题,吐槽或粉丝的人,只是社会阶层中的一部分,往往是比较狂热的一批。推特这个公共平台,本来就长于偶像和followers(粉丝或“黑”)互动。其他的社会媒体可能更平实一些,譬如 Facebook 上的发言基本是说给朋友圈的。Facebook 也有 1.7 million 的热议。

好,我们把区间放大,看 last 30 days 的趋势,作为这次演说前后趋势的一个背景。

timeline-comparison-15
这是 9/28-10/28 的川普与克林顿舆情趋势对比图,by days;仔细解读前,总体印象是够纠缠的。这两位老头老太也真是,剪不断理还乱,不是冤家不碰头,呵呵。两位都那么多丑闻缠身,性格都很tough倔强。看看一个月来 by weeks 的曲线也许更明朗:

timeline-comparison-16

不管我多么厌恶川普,也不管我为了厌恶川普而决定选举并不喜欢的克林顿,作为 data scientist,不得不说,希拉里最近的情势不是很乐观:川普居然开始有点儿领先克林顿的趋势了,NND。

timeline-comparison-17

上图是热议度(mentions)的对比。这个没的说,川普天生的话题大王,克林顿无论如何也赶不上。

timeline-comparison-18

这是舆情烈度的对比:喜欢或厌恶川普的还是更加狂热,虽然印象中希拉里克林顿比起其他政治人物所引起的情绪已经要更趋于激烈了。可是川普是个政治异数,还是更容易引起狂热或争议。

川普在演说中特别强调选举被操纵的危险,他显然在夸大这种危险,为将来的不承认选举结果做铺垫。挺恶心人的。现在的情况是,如果克林顿大幅度领先,川大叔再流氓也没辙。如果是拉锯接近,就麻烦了,老川和川粉几乎肯定要闹事。可现在的选情显得有些胶着拉锯,这也是为什么很多人包括保守派开始有倡议,说为了川普,请投票克林顿。本来我是要投第三党的,或者弃权不投,但是这次选举不同,危险太大,川老是个定时炸弹,而且不可预测。为了防止他撒泼,还是投给克林顿好。至少让他看看,马戏团的表演是上不了台面的,由不得他胡来。沐猴而冠变不成林肯。

对比我 一周前做的自动民调 Big data mining shows clear social rating decline of Trump last month,下面这个品牌对比图似乎更加拉锯,克林顿最近选情不是很佳。

brand-passion-index-11

最近30天,克林顿是 -17%,川普是 -19%,略领先于川普。所幸,川普的这次演讲并没有真正扭转两人的差距,从下面这张历史趋势品牌对比看,克林顿从开始的舆情落后,变为领先的趋势还在:

brand-passion-index-12
不过最近克林顿的选情是原地踏步,并没有明显进展。比较克林顿的三个圈可知,最淡的圈是过去30天的前10天,明显落后于川普,后两个圈是最近20天,基本原地,只是圈子变大了,说明竞选的投入和力度加大了,但效益并不明显。而从川普方面的三个圈圈看趋势,这老头儿实际的总体趋势是下跌,过去三十天,中间的十天舆情有改观,但最近的十天又倒回去了,虽然热议度有增长。(MD,这个分析没法细做,越做越惊心动魄,很难保持平和的心态,可咱是 data scientist 啊。朋友说,“就是要挖点惊心动魄的”,真心唯恐天下不乱啊。)看看川普的30天社煤的褒贬云图(Word Cloud for pros and cons)和情绪云图(Word Cloud for emotions)吧:

sentiment-drivers-38

sentiment-drivers-37
朋友一眼看中了那红红的 fuck 舆情,问:“fuck”的主语和宾语是谁?

主语一般不出现,默认是普罗网虫,fuck 的宾语当然是川普,否则上不来他的负面情绪云图:

trumpfuck

trumpfuck2
天,fuck mentions 占据了情绪数据的 5%,老川在一个月里被社煤普罗 fuck 了近40万次,可见这家伙如果上台会有多少与他不共戴天的子民。看上面怎么吐槽 fuck 的:

fucking moron
fucking idiot
asshole
shithead

you name it,甚至疑似共和党人也fuck他:
Trump is a fucking idiot. Thank you for ruining the Republican Party you shithead.

 

看 popular media,貌似流传最广的大多是视频:

trumpmedia

Tumblr 超越 Facebook 成为社煤老二?

domains-6

从来没用过 Tumblr 这名字也拗口 怎么这么 popular?

西方媒体吐槽的,男女比较均衡:male 52% female 48%,对比中文社媒,明显是女人少谈政治的:才占25%。这次调查的种族背景分布:

trumpethinics

还是白大哥占压倒多数。族裔信息占社煤帖子中的近一半,所以这个社煤族裔分布的情报应该是靠谱的。黑大哥第二,占 13%,亚裔才 6%。墨大哥 8%, 与其人口比例不相称吧(?):由于语言或文化障碍,under-represented here??

这个有点意思,喜欢到社煤吐槽的人,集中在周三和周日的晚上,晚九点达到高峰, 譬如 关于川普话题的社煤,在周日晚上九点高达 1,357,766, 一个小时就有一百三十五万帖啊,够大数据吧。

trumpdayhour

这还只是 sampling 的 data,推特 sampling 占总量大约十分之一吧;如果是 data hose(要额外付钱的)一网打尽的话,数据量又要增加一个量级。不过,对于大数据情报挖掘,再增加一个量级已经没有什么意义了,不会实质上改变调查的结果。说明一下,那个周日的统计量应该是过去一个月调查中各个周日的总和;一个月有四个周日,那个数据应该除以4,然后乘以10,才是川普数据周日九点那个时间区间的真实量。总之是地地道道的大数据。相比之下,传统民调,不管怎么抽样,感觉都是儿戏,有点胡闹:
500 个电话,说是代表了两亿人的民意舆情,不是儿戏是什么。不过,前大数据时代,那是没办法的办法。自动民调是大势所趋
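上面这笔换算(周日合计除以 4 个周日,再按十分之一采样率乘以 10)可以写成一个小函数核对一下;十分之一的采样率只是文中的粗略估计:

```python
def estimate_true_hourly_volume(reported, sundays_in_window=4, sampling_rate=0.1):
    """Recover the true one-hour volume for a single Sunday 9pm slot.

    reported: the dashboard figure, which sums that hour over all
    Sundays in the 30-day window and comes from a ~10% sampled stream.
    """
    per_sunday = reported / sundays_in_window   # undo the 4-Sunday sum
    return per_sunday / sampling_rate           # undo the 1/10 sampling

# The 1,357,766 figure from the table:
print(int(estimate_true_hourly_volume(1_357_766)))  # 3394415
```

也就是说,那个周日晚九点时段的真实量约为每小时 340 万帖。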

下图是影响最大 followers 最多的 authors:

trumpmedia2

Most mentioned authors below:

trumpauthors

什么时代有过如此丰富的信息与如此强大的数据挖掘能力?

RW:
@wei 你实际上可以好好搞一个大选预测引擎,利用你现在的methodology, finetune 一下,可以吸引很多眼球。效果好,下次就可以收费了。一炮而红,还有什么是更有效的marketing?

我:
我要是有微信数据的话,不打炮也会红。什么都不用变,就是现在的引擎,现在的app,只要有微信,什么情报专家也难比拟。为什么现在发布中文舆情挖掘不如英文挖掘那么有底气?不是我中文不行,而是数据源太 crappy 了。闹来闹去也就是新浪微博、天涯论坛、中文推特或脸书。至少全球华人大陆背景的,这个压倒多数,都在用微信,而数据够不着,得不到反映。

李:
@wei 我公司有团队做着类似的事情

我:
你能染指微信数据?

李:
微信个人数据只有腾讯有。

看看流传最广的社煤帖子都是什么?

trumppopularposts

从 total engagement 指标看,无疑是川普自己的推特账号,以及 Fox : 这大概是唯一的主流媒体中仅存的共和党的声音了。也不怪,老川在竞选造势中,不断指着鼻子骂主流媒体,甚至刻薄主持人的偏袒。历史上似乎还没有一个候选人与主流媒体如此对着干,也没有一个人被主流媒体如此地厌恶。

展示到这里,朋友转来一个最新的帖子,说是用人工智能预测美国大选,川普会赢:Trump will win the election and is more popular than Obama in 2008, AI system finds,quote:

"But the entrepreneur admitted that there were limitations to the data in that sentiment around social media posts is difficult for the system to analyze. Just because somebody engages with a Trump tweet, it doesn't mean that they support him. Also there are currently more people on social media than there were in the three previous presidential elections."

haha,同行是冤家,他的AI能比我自然语言deep parsing支持的 I 吗?从文中看,他着重 engagement,这玩意儿的本质就是话题性、热议度吧。早就说了,川普是话题大王,热议度绝对领先。(就跟冰冰一样,话题女王最后在舆情上还是败给了舆情青睐的圆圆,不是?)不是码农相轻,他这个很大程度上是博眼球,大家都说川普要输,我偏说他必赢。两周后即便错了,这个名已经传出去了。川普团队也会不遗余力帮助宣传转发这个。

Xi:
那个印度鬼子也有点瞎扯了。
知道ip地址跟知道ssl加密后的搜索的内容是两码事儿啊!
不知道是记者不懂呢,还是这小子就是在瞎胡弄了。

洪:
印度ai公司预测美国大选,有50%以上测准概率,中国ai公司也别放过这个机会

毛:
伟哥为什么认为川普必赢?不是说希拉莉的赢率是 95% 吗?

南山/邓保军: 不是wei说的

我:
这叫横插一杠子。川普要赢,我去跳河。。。

毛:
哦,伟哥是在转述。

我:
跳河是玩笑了,我移民回加拿大总是可以吧。

李:
韩国这个料就爆得好。希拉里在关键时刻,也有可能爆大料

我:
问题是谁爆谁的料。两人都到了最后的时刻,似乎能找到的爆料也都差不多用了。再不用就不赶趟了。很多地方的提早投票都已经开始了,有杀手锏最多再等两三天是极限了,要给媒体和普罗一个消化和咀嚼的时间。

毛:
@wei 但是老印的那个系统并非专为本届大选而开发,并且说是已经连续报准了三届呀?

我:
我的也不是专为大选开发的呀。而且上次奥巴马决定用我们,你看他就赢了,我们也助了一臂之力呢。

毛:
你们两家的配方不同?

我:
奥巴马团队拥抱新技术,用舆情挖掘帮助监测调整竞选策略,这个比预测牛一点点吧。预测是作为 outsider 来赌概率。我这个是 engage in the process、技术提供助力 呵呵。当时不允许说的。

李:
奥巴马有可能会去硅谷打工唉

毛:
是否在舆情之外还有什么因素?

李:
原来你那个奥巴马照片不是蜡像呀

我:
假做真时真亦假呀

002_510_image

 

【相关】

【社煤挖掘:为什么要选ta而不是ta做总统?】

Big data mining shows clear social rating decline of Trump last month

Clinton, 5 years ago. How time flies …

【社媒挖掘:川大叔喜大妈谁长出了总统样?】

【川普和希拉里的幽默竞赛】

【大数据舆情挖掘:希拉里川普最近一个月的形象消长】

欧阳峰:论保守派该投票克林顿

【立委科普:自动民调】

【关于舆情挖掘】

《朝华午拾》总目录

 

【社煤挖掘:为什么要选ta而不是ta做总统?】

中文社煤挖掘美国大选的华人舆情,接着练。

Why and why not Clinton/Trump?

Why 喜大妈?Why 川大叔?Why not Clinton? Why not Trump?这是大选的首要问题,也是我们舆情挖掘想要探究的重点。Why???

First, why Clinton and why not Clinton? 看看喜大妈在舆情中的优劣对比图(pros and cons)。

sentiment-drivers-33

why Clinton?剔除竞选表现优秀等等与总统辩论和 campaign 有关的好话(“领先”、“获胜”、“占上风”、“赢得”等)外,主要理由有:

1. 老练强硬;2. 乐观;3. 清楚;4. 焕发活力、谈笑风生;5. 梦想共同市场

拿着放大镜,除了政治套话和谀辞外也没看到什么真正的亮点。舆情领先,只能说对手太差了吧。四年前与奥巴马竞争被甩出一条街去,那是遇到了真正的强手。

OK,why not Clinton?

1. 性侵 性骚扰 威胁(她丈夫做的好事,她来背黑锅,呵呵。照常理她是受害者,可以同情的,不料给同样管不住下半身的川普一抹黑,她倒成了性侵的帮凶,说是威胁被性侵的女性。最滑稽的是,川普自己的丑闻曝光,他却一本正经带了一帮前总统克林顿的绯闻女士开记者会,来抹黑自己的对手克林顿夫人。滑稽逆天了。)

2. 邮件门 曝光 泄密

3. 竞选团队的不轨行为 操纵大选 作弊

4. 克林顿基金会的问题

5. 华尔街收费

6. 健康问题

7. 撒谎、可耻

8. 缺乏判断力

这些都不是新鲜事儿,大选以来已经炒了很久了,但比起她的长处(经验老练等少数几条),喜妈被抓住的辫子还真不少。再看网民的情绪性吐槽, 说好话都是相似的,坏话却各有不同:轻的是,“乏善可陈”、“不喜欢”、“不信任”; 重的是:“妖婆”,“婊子”、“灾难”、“无耻”、“邪恶”。

sentiment-drivers-34
作为对比,来看川大叔,why or why not Trump?

sentiment-drivers-35

pros:1. 减税;2. 承诺 崛起 (America great again);3. 真实;4. 擅长 business
cons:
1. 曝光的视频丑闻 性骚扰
2. 偷税漏税
3. 吹嘘
4. 咄咄逼人 喜怒无常
5. 粗鄙、威胁
6. 撒谎

情绪性吐槽,轻的是 “不靠谱”、“出言不逊”,重的是 “恶心”、“愚蠢”、“卑劣”、“众叛亲离”。

sentiment-drivers-36
上篇中文社煤自动民调博文发了以后有朋友问,为什么不见大名鼎鼎的脸书。(微信不见可以理解,人家数据不对外开放,对隐私性特别敏感,比脸书严多了。不过,地球人都知道,反映我大唐舆情最及时精准的大数据宝库,非微信莫属)。查对了一下,上次做的中文舆情调查,不知何故 Facebook 不在 top 10,只占调查数据的 0.1%:

sources-9

记得以前的英语社煤调查,通常的比例是 70% twitter,20% Facebook, 其他所有论坛和社交媒体只占 10%。最近加了 instagram、Tumblr 等,格局似有变。但是中文在海外,除了推特,Facebook 本来应该有比重的,特别是我台湾同胞,用 Facebook 跟东土用微信一样普遍。

再看看这次调查的网民背景分类。

职业以科技为主(大概不少是咱码农),其次才是新闻界和教育界。这些人喜欢到网上嚷嚷。

professions

这是他们的兴趣(interests),有意思的关联似乎是,喜欢谈政治的与喜欢谈宗教和美食的有相当大交集。

interests

这是年龄分组,分布比较均匀,但还是中青年为主。

age

性别不用说,男多女少。男人谈政治与女人谈shopping一样热心。

gender

最后看看地理分布,社煤的地理来源:
geo-regions

 

 

【相关】

【社媒挖掘:川大叔喜大妈谁长出了总统样?】

Big data mining shows clear social rating decline of Trump last month

【川普和希拉里的幽默竞赛】

【大数据舆情挖掘:希拉里川普最近一个月的形象消长】

论保守派该投票克林顿

【立委科普:自动民调】

【关于舆情挖掘】

《朝华午拾》总目录

Clinton, 5 years ago. How time flies ...

 311736_10150433966356900_893547465_n
克林顿白,立委黑,
“立委,你的头发太有个性了。”
无独有偶,黑白的对比引来大小的反差。居然比立委的头发还白,岂有此理

“好想知道 Clinton 在你背後說的是好話,還是壞話?”

【社媒挖掘:川大叔喜大妈谁长出了总统样?】

眼看决战时刻快到了,调查一下华人怎么看美国大选,最近一个月的舆情趋势。中文社会媒体对于美国总统候选人的自动调查。

aaa

先看喜大妈,是过去三十天的调查(时间区间:9/26-10/25)
summary-metrics-new-3
mentions 是热议度,net sentiment 是褒贬指数,反映的网民心目中的形象。

summary-metrics-6
很自然,二者并不总是吻合:譬如,在十月10日到11日的时候,希拉里被热议,而她的褒贬指数则跌入谷底。那天有喜大妈的什么丑闻吗?咱们把时间按周(by weeks)而不是按日来看 trends,粗线条看趋势也许更明显一些:

summary-metrics-7
Anyway,过去30天的总社煤形象分(net sentiment)是 11%,比起英语世界的冰点之下(-18%)好太多了,似乎华语世界远不如英语世界对老政客喜大妈的吐槽刻薄。

作为对比,我们看看川普(特朗普)在同一个时期的社会形象的消长趋势:川普过去30天的总社煤形象分(net sentiment)是 -12%,比希拉里的+11%成鲜明对比。

summary-metrics-8

看上面的趋势图(by weeks),川普的热议度一直居高不下,话题之王名副其实,但他的社会评价却一直在冰点之下,十月初更是跌入万丈深渊。同时期的希拉里,热议度与社会评价却时有交叉。趋势 by days:

summary-metrics-9

这样看来,虽然有所谓华人挺川的民间鼓噪,总体来看,川大叔在华人的网上口水战中,与喜大妈完全不是一个量级的对手。川普很臭,真地很臭。在英语社煤中,川普也很臭(-20%),但希拉里也不香,民间厌恶她诅咒她的说法随处可见,得分 -18%,略好于川普。譬如电邮门事件,很多老美对此深恶痛绝,不少华人(包括在下)心里难免觉得是小题大作。为什么华人世界对希拉里没有那么反感呢?居然给希拉里 +11% 的高评价。朋友说,希拉里更符合华人主流价值观吧。

这是我们的品牌对比图,三维直观地对比两位候选人在社煤的形象位置:

brand-passion-index-10

希拉里领先太多,虽然热议度略逊。

总有人质疑社煤挖掘的情报价值,说也许NLU不过关,挖掘有误呢。更多的质疑是,也许某党的人士更愿意搅浑水呢(譬如利用水军或机器人bots)。凡此种种,都给社会媒体舆情挖掘在多大程度上反映民意,提出了疑问和挑战。其实,对于传统的民调,不同的机构有不同的结果,加上手工民调的取样不可能大,error margin 也大,所以大家也都是一肚子怀疑。不断有怀疑,还是不断有民调在进行。这是大选年的信息“刚需”吧。

所有的自动的或人工的民调,都可能有偏差,都只能做民意的参考。但是我要强调的是:

1. 现在的深度 NLU 支持的舆情挖掘,已经今非昔比,加上大数据信息冗余度的支撑,精准度在宏观上是可以保障的;

2. 全自动的社煤民调,其大数据的特性,是人工民调无法比的(时效以及costs也无法比,见【立委科普:自动民调】);

3. 虽然社煤上的口水、噪音以及不同党派或群体在其上的反映都可能有很大差异,但是社煤民调的消长趋势的情报以及不同候选人(或品牌)的对比情报,是相对可靠的。怎么讲?因为自动系统具有与生俱来的一视同仁性。

时间维度上的舆情消长,具有相对的比较价值,它基本不受噪音或其他因素的影响。也不大受系统数据质量的影响(当然,太臭的舆情系统也还是糊不上墙,跟抛硬币差不了太多的一袋子词这样的“主流”舆情分类,在短消息压倒多数的社会媒体,还是不要提了吧,见一切声称用机器学习做社会媒体舆情挖掘的系统,都值得怀疑)。
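第 3 点“自动系统一视同仁”的道理可以用一个小例子验证:只要系统对正、负面提及的漏检比例一致,net sentiment 的值不受 recall 高低的影响。下面的数字是虚构的示意:

```python
def net_sentiment(pos, neg):
    return (pos - neg) / (pos + neg)

# Full (hypothetical) ground-truth counts for one candidate:
pos, neg = 2_700_000, 3_200_000
full = net_sentiment(pos, neg)

# A system with 60% recall that misses positives and negatives at the
# same rate reports scaled counts -- but the ratio is unchanged:
recall = 0.6
sampled = net_sentiment(pos * recall, neg * recall)

assert abs(full - sampled) < 1e-12  # uniform recall preserves net sentiment
print(round(full, 4), round(sampled, 4))
```

因此,只要漏掉的是与候选人无关的长尾 pattern,趋势对比与候选人对比的相对情报依然可靠。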

我们目前的系统,是 deep parsing 支持,本性是 precision 优于 recall(precision 不降低,recall 也可以慢慢爬上来,譬如我们的英语舆情系统就有相当好的recall,recall在符号逻辑路线里面,本质上就是开发时间的函数)。Given big data 这样的场景,recall 的某种缺失,其实并不影响舆情的相对意义,因为决定 recall 的是规则量,缺少的是一些长尾 pattern rules,而语言学的 rules 不会因为时间或候选人的不同,而有所不同。同理,因为系统的编制是独立于千变万化的候选人、品牌或话题,因此数据质量对于候选人之间的比较,是靠谱的。这样看,舆情趋势和候选人对比的情报挖掘,的确真实地反映了民意的消长和相对评价。下面是这次自动民调的 Top 10 数据来源(可惜没有“她”,我是说 wechat),还是最动态反映舆情的推特中文帖子占多数(其中 66% 简体,30% 繁体,4% 粤语)。

domains-5

看一下popular的帖子,居然小方的也在其列。倒也不怪,方在中文社煤还是有影响力的。

chuanpupopularposts

小方总结得不错啊,难得同意他:满嘴跑火车的川大叔是“谎言大王”。其实川普与其说是谎话连篇,不如说是他根本不care 或不屑去核对事实。就跟北京出租司机信口开河成为习惯一样,话说到这里,转一篇我的老友刚写的博文(论保守派该投票克林顿),quote:

川普说话不顾事实是众所周知的。只要他一开口,就忙坏了各种事实核查 fact check ......
更重要的是,川普不仅犯了大大小小众多的事实错误,而且对事实抱着强烈的轻蔑和鄙视。

总结一下这次民调的结果可以说,如果是华人投票,川普不仅是 lose 而是要死得很惨,很难看。(当然,不管华人与否,川普都没有啥胜算。)

timeline-comparison-12

这是 by days 的趋势对比,这种持续的舆情领先在大选前很难改变吧:

timeline-comparison-13

【更多美国大选舆情的自动调查还在进行整理中,stay tuned】

 

【相关】

【社煤挖掘:为什么要选ta而不是ta做总统?】

Big data mining shows clear social rating decline of Trump last month

【川普和希拉里的幽默竞赛】

【大数据舆情挖掘:希拉里川普最近一个月的形象消长】

论保守派该投票克林顿

【立委科普:自动民调】

【立委科普:舆情挖掘的背后】

【社媒挖掘:《品牌舆情图》的设计问题】

一切声称用机器学习做社会媒体舆情挖掘的系统,都值得怀疑

【关于舆情挖掘】

《朝华午拾》总目录

 

Big data mining shows clear social rating decline of Trump last month

Big data mining from last month' social media shows clear decline of Trump in comparison with Clinton

aaa

Our automatic big data mining for public opinions and sentiments from social media speaks loud and clear: Trump's social image sucks.

Look at last 30 days of social media on the Hillary and Trump's social image and standing in our Brand Passion Index (BPI) comparison chart below:

brand-passion-index-8

Three points to note:
1. Trump generates more than twice as much buzz as Hillary in social media coverage (circle size indicates the degree of mentions);
2. The public's sentiments are more intense toward Trump than toward Clinton: the Y-axis shows passion intensity;
3. The social ratings and images of the two are both quite poor, but Trump draws more criticism: the X-axis (Net Sentiment) shows the social sentiment rating index. Both are below the freezing point (more negative comments than positive).

If we want to automatically investigate the trend over the past month and the ups and downs of their social images, we can segment the data into two or three slices. The figure below contrasts the first 15 days of social media data with the second 15 days of the 30-day period (up to 10/21/2016):

brand-passion-index-7

See, in the past month, as the presidential debates and scandals drew attention, Trump's media image deteriorated significantly, shown by his public-opinion circle shifting from the right of the X-axis to the left (toward dislike or hate; the lighter circle represents older data than the darker one). His social rating started out clearly better than Hillary's and ended up worse. Meanwhile, Hillary's social media image improved, her circle moving a bit from left to right. Both candidates stay below the freezing point throughout, as the figure clearly shows, but just a month ago Clinton was rated even lower than Trump in social media opinion: it is not that people liked Trump that much, but that the general public showed more dislike for Hillary, for whatever reasons.
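The divide-into-two reading is just per-segment aggregation of the same mention counts; here is a toy sketch with fabricated daily numbers (illustrative only, not the actual data behind the chart):

```python
def net_sentiment(pos, neg):
    return (pos - neg) / (pos + neg)

# 30 days of (positive, negative) daily mention counts -- fabricated
# numbers shaped like a brand that starts well and then declines.
days = [(110, 90)] * 15 + [(80, 120)] * 15

def segment_rating(days):
    """Aggregate a segment's counts, then take its net sentiment."""
    pos = sum(p for p, _ in days)
    neg = sum(n for _, n in days)
    return net_sentiment(pos, neg)

first_half, second_half = days[:15], days[15:]
print(round(segment_rating(first_half), 2))   # 0.1
print(round(segment_rating(second_half), 2))  # -0.2
```

Cutting into three segments (as in the next chart) is the same computation with `days` split into thirds.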

As seen, our BPI brand comparison chart attempts to visualize four-dimensional information:
1. net sentiment for social ratings on the X-axis;
2. the passion intensity of public sentiments on the Y-axis;
3. buzz circle size, representing mentions of soundbites;
4. The two circles of the same brands show the coarse-grained time dimension for general trends.

It is not very easy to represent 4 dimensions of analytics in a two-dimensional graph.  Hope the above attempt in our patented visualization efforts is insightful and not confusing.

If we are not happy with the divide-into-two strategy for showing one month of trends, how about cutting the data into three pieces? Here is the figure with three circles along the time dimension.

brand-passion-index-6

We should have used different colors for the two political brands to make the visualization clearer. Nevertheless, we can see Clinton's three circles of social media sentiment shifting from the lower left corner to the upper right in a zigzag path: getting better, then worse, and ending up somewhere in between as of this point (more exactly, 10/21/2016). Over the same three segments, Trump's brand image started out not bad, went slightly better, and finally fell into the abyss.

The above decodes the two US presidential candidates' social-image changes and trends with our own brand comparison chart (BPI). This analysis, entirely automated on top of deep Natural Language Parsing technology, is supported by data points orders of magnitude beyond traditional manual polls, which are by nature severely restricted in data size and response time.

What are the sources of social media data for the above automated polling? They are based on random big data sampling of social media, led by Twitter, the most dynamic source, as shown below.

sources-5

sources-4

sources-3

This is a summary of the public opinions and sentiments:

%e5%b7%9d%e6%99%ae%e5%b8%8c%e6%8b%89%e9%87%8c

As seen, it is indeed BIG data: a month of random sampling of social media data involves the mentions of the candidates for nearly 200 million times, a total of up to 3,600+ billion impressions (potential eyeballs). Trump accounted for 70 percent of the buzz while Clinton only 30 percent.

For the overall social rating during the period of 09/21/2016 through 10/21/2016, Trump's net sentiment is minus 20% and Clinton's minus 18%. These ratings are much lower than those of most other VIPs we have analyzed before using the same calculations. Fairly nasty images, really. And the big data trends show that Trump fares worst.

The following is some social media soundbites for Trump:

Bill Clinton disgraced the office with the very behavior you find appalling in...
In closing, yes, maybe Trump does suffer from a severe case of CWS.
Instead, in this alternate NY Times universe, Trump’s campaign was falling ...
Russian media often praise Trump for his business acumen.
This letter is the reason why Trump is so popular
Trump won
I'm proud of Trump for taking a stand for what's right.
Kudos to Trump for speaking THE TRUTH!
Trump won
I’m glad I’m too tired to write Trump/Putin fuckfic.
#trump won
Trump is the reason Trump will lose this election.
Trump is blamed for inciting violence.
Breaking that system was the reason people wanted Trump.
I hate Donald Trump for ruining my party.
>>32201754 Trump is literally blamed by Clinton supporters for being too friendly with Russia.
Another heated moment came when Trump delivered an aside in reponse to ...
@dka_gannongal I think Donald Trump is a hoax created by the Chinese....
Skeptical_Inquirer The drawing makes Trump look too normal.
I'm proud of Donald Trump for answering that honestly!
Donald grossing me out with his mouth features @smerconish ...
Controlling his sniffles seems to have left Trump extraordinarily exhausted
Trump all the way people trump trump trump
Trump wins
Think that posting crap on BB is making Trump look ridiculous.
I was proud of Trump for making America great again tonight.
MIL is FURIOUS at Trump for betraying her!
@realdonaldTrump Trump Cartel Trump Cartel America is already great, thanks to President Obama.
Kudos to Mr Trump for providing the jobs!!
The main reason to vote for Trump is JOBS!
Yes donal trump has angered many of us with his WORDS.
Trump pissed off a lot of Canadians with his wall comments.
Losing this election will make Trump the biggest loser the world has ever seen.
Billy Bush's career is merely collateral damage caused by Trump's wrenching ..
So blame Donald for opening that door.
The most important reason I am voting for Trump is Clinton is a crook.
Trump has been criticized for being overly complimentary of Putin.
Kudos to Trump for reaching out to Latinos with some Spanish.
Those statements make Trump's latest moment even creepier.
I'm mad at FBN for parroting the anti-Trump talking points.
Kudos to Trump for ignoring Barack today @realDonaldTrump
Trump has been criticized for being overly complimentary of Putin.
OT How Donald Trump's rhetoric has turned his precious brand toxic via ...
It's these kinds of remarks that make Trump supporters look like incredible ...
Trump is blamed for inciting ethnic tensions.
Trump is the only reason the GOP is competitive in this race.
Its why Republicans are furious at Trump for saying the voting process is rigged.
Billy Bush’s career is merely collateral damage caused by Trump’s wrenching ..
Donald Trump is the dumbest, worst presidential candidate your country ...
I am so disappointed in Colby Keller for supporting Trump.
Billy Bush’s career is merely collateral damage caused by Trump’s wrenching..
In swing states, Trump continues to struggle.
Trump wins
Co-host Jedediah Bila agreed, saying that the move makes Trump look desperate.
Trump wins
"Trump attacks Clinton for being bisexual!"
TRUMP win
Pence also praised Trump for apologizing following the tape’s disclosure.
In swing states, Trump continues to struggle.
the reason Trump is so dangerous to the establishment is he is unapologetical..

Here are some public social media soundbites for Clinton in the same period:

Hillary deserves worse than jail.
Congratulations to Hillary & her campaign staff for wining three Presidential ..
I HATE @chicanochamberofcommerce FOR INTRODUCING THAT HILLARY ...
As it turns out, Hillary creeped out a number of people with her grin.
Hillary trumped Trump
Trump won!  Hillary lost
Hillary violated the Special Access Program (SAP) for disclosing about the ...
I trust Flint water more than Hillary
Hillary continued to baffle us with her bovine feces.
NEUROLOGISTS HATE HILLARY FOR USING THIS TRADE SECRET DRUG!!!!...
CONGRATULATIONS TO HILLARY CLINTON FOR WINNING THE PRESIDENCY
Supreme Court: Hillary is our only choice for keeping LGBT rights.
kudos to hillary for remaining sane, I'd have killed him by now
How is he blaming Hillary for sexually assaulting women. He's such a shithead
The only reason I'm voting for Hillary is that Donald is the only other choice
Hillary creeps me out with that weird smirk.
Hillary is annoying asf with all of her laughing
I credit Hillary for the Cubs waking up
When you listen to Hillary talk it is really stupid
On the other hand, Hillary Clinton has a thorough knowledge by virtue of ...
Americans deserve better than Hillary
Certain family members are also upset with me for speaking out against ...
Hillary is hated by all her security detail for being so abusive
Hillary beat trump
The only reason to vote for Hillary is she's a woman.
Certain family members are also upset with me for speaking out against ....
I am glad you seem to be against Hillary as well Joe Pepe.
Hillary scares me with her acions.
Unfortunately Wikileaks is the monster created by Hillary & democrats.
I'm just glad you're down with evil Hillary.
Hillary was not mad at Bill for what he did.  She was mad he got caught.  ......
These stories are falling apart like Hillary on 9/11
Iam so glad he is finally admitting this about Hillary Clinton.
Why hate a man for doing nothing like Hillary Clinton
Hillary molested me with a cigar while Bill watched.
You are upset with Hillary for doing the same as all her predecessors.
I feel like Hillary Clinton is God's punishment on America for its sins.
Trumps beats Hillary
You seem so proud of Hillary for laughing at rape victims.
Of course Putin is going to hate Hillary for publicly announcing false ...
Russia is pissed off at Hillary for blaming the for wikileaks!
Hillary will not win.  Good faith is stronger than evil.  Trump wins??
I am proud of Hillary for standing up for what is good in the USA.
Hillarys plans are worse than Obama
Hillary is the nightmare "the people" have created.
Funny how the Hillary supporters are trashing Trump for saying the same ...
???????????? I am so proud of the USA for making Hillary Clinton president.
Hillary, you're a hoax created by the Chinese
Trump trumps Hillary
During the debate, Trump praised Hillary for having the will to fight.
Trump is better person than Hillary
Donald TRUMPED Hillary
Kudos to Hillary for her accomplishments.
He also praised Hillary for handling the situation with dignity.
During the debate, Trump praised Hillary for having the will to fight.
People like Hillary in senate is the reason this country is going downhill.
Hillary did worse than expectations.
Trump will prosecute Hillary for her crimes, TRUMP will!
Have to praise Hillary for keeping her focus.
a landslide victory for Hillary will restore confidence in American democracy ..
I was so proud of Hillary tonight for acting like a tough, independent woman.
I dislike Hillary Clinton, as I think she is a corrupt, corporate shill.
Hillary did worse than Timmy Kaine
Im so glad he finally brought Benghazi against Hillary
Hillary, thank you for confirmation that the Wikileaks documents are authentic
Supreme Court justices is the only reason why I'd vote for Hillary.
Massive kudos to Hillary for keeping her cool with that beast behind her.
Congrats to Hillary for actually answering the questions. She's spot on. #debate

 

[Related]

Social media mining: Did Trump’s Gettysburg speech enable the support rate to soar as claimed?

Big data mining shows clear social rating decline of Trump last month

Clinton, 5 years ago. How time flies …

Automated Survey

【川普和希拉里的幽默竞赛】

一觉醒来,大周末。看了川普和希拉里最后一场互撕后一起出席慈善晚会。

特朗普在慈善晚宴上自嘲

实拍特朗普希拉里互撕后 酒会竟然互相调侃

玩幽默都不是大师(两人均无法与奥巴马和比尔克林顿比),但也都及格了,可以给个赞。川普那张扑克牌脸和政治马戏团一般的连番表演,居然可以适合时宜地来点自嘲和棍中夹棒,希拉里也适合时宜地富于幽默感似的笑起来,与平时的高高在上一本正经成对比。这幽默喜剧都演得很辛苦,很认真,做政客真心不容易。

想象不出老头儿桑德斯在类似场合是不是也能在社会主义激情后面来点幽默,想破了头,也很难把忧国忧民的桑老与调侃幽默联系起来。

希拉里那一段也不俗,不过好像还没来得及有汉化版,youTube 在这里(需要翻墙):https://www.youtube.com/watch?v=HjPQ82vTaes

选票已经到手了,除了为了川普而选希拉里外,正在看那些个本地提案和从来就不认识的本地候选人:二号提案试图加消费税(把消费税加到天花板)来改善湾区日渐恶化的交通(包括类似地铁的轻轨在硅谷腹地的延伸),一号提案也是增加房产税来帮助无家可归者提供廉租屋,领导说,local 加税的一律说不(可是联邦选举中,希拉里明摆着要加税,而川普要减税,领导却坚决投希拉里)。c 和 d 提案最切身,就是家门口开门即红透半边天的 cupertino downtown 旁边,同时在苹果新总部旁边,有一个了无生气的 mall,眼看这块宝地要大热,开发商与民间组织打起来了。开发商要推倒重建,在建筑商业应用之上做一个巨大的空中花园,在沙漠天气营造一个休闲绿洲来吸引投票。

民间组织宣传 yes on C,no on D,开发商发动广告大战,宣传 yes on D no on C,针锋相对,煞是热闹。民间组织的宣传颇有效,说 C 也 pro-customer and D is pro-developer,无奸不商,无商不奸,美丽的空中花园的下面,是多少多少的商业店铺和巨大的利润,带来的是交通和教育问题,等等。总之是信服了领导,但说不服我。
对于一个过了九点就跟鬼城一样的硅谷腹地,缺少的就是人气。D 提案描画的远景就是人气,想想吧,10 年后的 苹果总部、新 Downown 以及空中花园,会是怎样一个聚集人气的所在。为这个,不能与领导保持一致。

D要建公寓楼出租,这是业主反对的主要原因。资本家追求利益最大化,捎带着建个花园,活跃了人气。旁边的那些 property value 将来还要疯涨,举步就是吃喝玩乐,有啥可抱怨的。人再多也比不上北京上海,多开个 school 把马路拓宽不就结了。
等将来有钱了就去买一间这样的公寓,养老甚好,不用开车,楼下就是一切。
当年在温哥华的 Burnaby 的 Metrotown 中心,就建了好多高层公寓,不少老华人就在里面养老,自得其乐,让人羡慕。

 

【大数据舆情挖掘:希拉里川普最近一个月的形象消长】

aaa

大数据舆情挖掘,看图说话。
先看近一个月来在社会媒体上的希拉里和川普的品牌形象对比图:

brand-passion-index-8

看点有三:
1. 川普的 buzz 大过希拉里一倍多,川普是话题中心(圈的大小表明热议度);
2. 普罗对川普比对希拉里,情绪更趋激烈:表现在 Y 轴的 passion intensity 上;
3. 两人总体都不讨人喜欢,川普更加让人厌恶,表现在 X 轴上的 Net Sentiment(也就是褒贬对比的度量)。两人都在冰点之下,社会媒体的形象不佳。

如果我们要自动调查过去一个月时间的趋向和形象消长,可以考虑把数据分割为两段或三段来看此消彼长,先一分为二来看图:

brand-passion-index-7

看到了吧,过去一个月,随着总统大选辩论和丑闻的揭示和宣传,川普的媒体形象显著恶化,表现在舆情圈圈从右(x轴上的右是评价度高 love like,左边是评价度低 hate dislike)向左的位移。本来评价度clearly比希拉里要好,终于比希拉里差了。同时,希拉里的社会媒体形象有所改善,圈圈在从左向右位移。两个人始终都是冰点以下,吐槽多于赞美,但是就在一个月前,还是喜妈更不受待见:不是民众更喜欢老川,而是普罗更厌恶喜妈。

这个品牌对比图示表达了四维信息:
1. net sentiment 评价度 x 轴
2. passion intensity 舆情烈度 y 轴
3. buzz 圈圈的大小,是热议度
4. 一分为二的两个圈是时间的粗线条切割的维度

在二维的图纸上,要表达四维的信息,的确不是很容易。

要是嫌第四维时间太粗线条,咱们一分为三看看:

brand-passion-index-6

三个圈,浓度的深浅表达的是时间的远近。当短短的一个月的时间,被一分为三的时候,我们看到了什么趋向呢?请注意颜色的深浅,对应的是时间的远近。我们看到,喜妈的三个圈圈是左下角到右上(还是visualization设计不到家,不同品牌应该用不同的颜色区分才好)。原来喜妈的评价是先好,后坏,最后回到中间。而老川在同一个时间点,是先中,后略好,最后跌入深渊。

以上是利用我们自创的品牌对比图(有美国专利的)来看候选人的形象消长。

社会媒体数据的来源呢?Twitter 为主:

sources-5

sources-4

sources-3

这是一个月来的舆情总结:

%e5%b7%9d%e6%99%ae%e5%b8%8c%e6%8b%89%e9%87%8c

的确是大数据了,一个月的随机的社会媒体数据样本里面,两人的 mentions 就有近两亿,眼球数共计高达3万6千亿。川普占7成,喜妈才三成。川普跟冰冰类似,都是话题之王。

总体社会评价,川普零下20%,喜妈零下18%。

下面是有关川普的社煤数据选摘:

Bill Clinton disgraced the office with the very behavior you find appalling in Trump.
In closing, yes, maybe Trump does suffer from a severe case of CWS.
Instead, in this alternate NY Times universe, Trump’s campaign was falling apart.
Russian media often praise Trump for his business acumen.
This letter is the reason why Trump is so popular
Trump won
I'm proud of Trump for taking a stand for what's right.
Kudos to Trump for speaking THE TRUTH!
Trump won
I’m glad I’m too tired to write Trump/Putin fuckfic.
#trump won
Trump is the reason Trump will lose this election.
Trump is blamed for inciting violence.
Breaking that system was the reason people wanted Trump.
I hate Donald Trump for ruining my party.
>>32201754 Trump is literally blamed by Clinton supporters for being too friendly with Russia.
Another heated moment came when Trump delivered an aside in reponse to a Clinton one-liner.
@dka_gannongal I think Donald Trump is a hoax created by the Chinese....
Skeptical_Inquirer The drawing makes Trump look too normal.
I'm proud of Donald Trump for answering that honestly!
Donald grossing me out with his mouth features @smerconish @realdonaldtrump
Controlling his sniffles seems to have left Trump extraordinarily exhausted
Trump all the way people trump trump trump
Trump wins
Think that posting crap on BB is making Trump look ridiculous.
I was proud of Trump for making America great again tonight.
MIL is FURIOUS at Trump for betraying her!
@realdonaldTrump Trump Cartel Trump Cartel America is already great, thanks to President Obama.
Kudos to Mr Trump for providing the jobs!!
The main reason to vote for Trump is JOBS!
Yes donal trump has angered many of us with his WORDS.
Trump pissed off a lot of Canadians with his wall comments.
Losing this election will make Trump the biggest loser the world has ever seen.
Billy Bush's career is merely collateral damage caused by Trump's wrenching migration.
So blame Donald for opening that door.
The most important reason I am voting for Trump is Clinton is a crook.
Trump has been criticized for being overly complimentary of Putin.
Kudos to Trump for reaching out to Latinos with some Spanish.
Those statements make Trump's latest moment even creepier.
I'm mad at FBN for parroting the anti-Trump talking points.
Kudos to Trump for ignoring Barack today @realDonaldTrump
Trump has been criticized for being overly complimentary of Putin.
OT How Donald Trump's rhetoric has turned his precious brand toxic via The Independent.
It's these kinds of remarks that make Trump supporters look like incredible idiots.
Trump is blamed for inciting ethnic tensions.
Trump is the only reason the GOP is competitive in this race.
Its why Republicans are furious at Trump for saying the voting process is rigged.
Billy Bush’s career is merely collateral damage caused by Trump’s wrenching migration.
Donald Trump is the dumbest, worst presidential candidate your country has EVER produced.
I am so disappointed in Colby Keller for supporting Trump.
Billy Bush’s career is merely collateral damage caused by Trump’s wrenching migration.
In swing states, Trump continues to struggle.
Trump wins
Co-host Jedediah Bila agreed, saying that the move makes Trump look desperate.
Trump wins
"Trump attacks Clinton for being bisexual!"
TRUMP win
Pence also praised Trump for apologizing following the tape’s disclosure.
In swing states, Trump continues to struggle.
the reason Trump is so dangerous to the establishment is he is unapologetically alpha.

关于希拉里的社会媒体样本数据摘选:

Hillary deserves worse than jail.
Congratulations to Hillary & her campaign staff for wining three Presidential debates.
I HATE @chicanochamberofcommerce FOR INTRODUCING THAT HILLARY GIF INTO MY LIFE
As it turns out, Hillary creeped out a number of people with her grin.
Hillary trumped Trump
Trump won!  Hillary lost
Hillary violated the Special Access Program (SAP) for disclosing about the nuclear weapons!!
I trust Flint water more than Hillary
Hillary continued to baffle us with her bovine feces.
NEUROLOGISTS HATE HILLARY FOR USING THIS TRADE SECRET DRUG!!!!...
CONGRATULATIONS TO HILLARY CLINTON FOR WINNING THE PRESIDENCY
Supreme Court: Hillary is our only choice for keeping LGBT rights.
kudos to hillary for remaining sane, I'd have killed him by now
How is he blaming Hillary for sexually assaulting women. He's such a shithead
The only reason I'm voting for Hillary is that Donald is the only other choice
Hillary creeps me out with that weird smirk.
Hillary is annoying asf with all of her laughing
I credit Hillary for the Cubs waking up
When you listen to Hillary talk it is really stupid
On the other hand, Hillary Clinton has a thorough knowledge by virtue of her tenure as Secretary of State.
Americans deserve better than Hillary
Certain family members are also upset with me for speaking out against Hillary.
Hillary is hated by all her security detail for being so abusive
Hillary beat trump
The only reason to vote for Hillary is she's a woman.
Certain family members are also upset with me for speaking out against Hillary.
I am glad you seem to be against Hillary as well Joe Pepe.
Hillary scares me with her acions.
Unfortunately Wikileaks is the monster created by Hillary & democrats.
I'm just glad you're down with evil Hillary.
Hillary was not mad at Bill for what he did.  She was mad he got caught.  Just like she is not ashamed of what she did she is angry she got caught.
These stories are falling apart like Hillary on 9/11
Iam so glad he is finally admitting this about Hillary Clinton.
Why hate a man for doing nothing like Hillary Clinton
Hillary molested me with a cigar while Bill watched.
You are upset with Hillary for doing the same as all her predecessors.
I feel like Hillary Clinton is God's punishment on America for its sins.
Trumps beats Hillary
You seem so proud of Hillary for laughing at rape victims.
Of course Putin is going to hate Hillary for publicly announcing false accusations.
Russia is pissed off at Hillary for blaming the for wikileaks!
Hillary will not win.  Good faith is stronger than evil.  Trump wins??
I am proud of Hillary for standing up for what is good in the USA.
Hillarys plans are worse than Obama
Hillary is the nightmare "the people" have created.
Funny how the Hillary supporters are trashing Trump for saying the same thing.
???????????? I am so proud of the USA for making Hillary Clinton president.
Hillary, you're a hoax created by the Chinese
Trump trumps Hillary
During the debate, Trump praised Hillary for having the will to fight.
Trump is better person than Hillary
Donald TRUMPED Hillary
Kudos to Hillary for her accomplishments.
He also praised Hillary for handling the situation with dignity.
During the debate, Trump praised Hillary for having the will to fight.
People like Hillary in senate is the reason this country is going downhill.
Hillary did worse than expectations.
Trump will prosecute Hillary for her crimes, TRUMP will!
Have to praise Hillary for keeping her focus.
a landslide victory for Hillary will restore confidence in American democracy vindicated
I was so proud of Hillary tonight for acting like a tough, independent woman.
I dislike Hillary Clinton, as I think she is a corrupt, corporate shill.
Hillary did worse than Timmy Kaine
Im so glad he finally brought Benghazi against Hillary
Hillary, thank you for confirmation that the Wikileaks documents are authentic and you did that tonight when you accused the Russians of hacking your servers!  We the people deserve better than you!
Supreme Court justices is the only reason why I'd vote for Hillary.
Massive kudos to Hillary for keeping her cool with that beast behind her.
Congrats to Hillary for actually answering the questions. She's spot on. #debate

 

【相关】

【关于舆情挖掘】

《朝华午拾》总目录

【语义计算:精灵解语多奇智,不是冤家不上船】

白:
“他分分钟就可以教那些不讲道理的人做人的道理。”

我:

【图:parsing 结果截图】

一路通,直到最后的滑铁卢。
定语从句谓语是“做人”而不是“可以教”,可是定语从句【【可以教。。。的】道理】与 vp定语【【做人的】道理】,这账人是怎么算的?

白:
还记得“那个小集合”吗?sb 教 sb sth,坑已经齐活儿了
“道理”是一般性的,定语是谓词的话一定要隐含全称陈述,不能是所有坑都有萝卜的。当然这也是软性的。只是在比较中不占优而已。单独使用不参与比较就没事:“张三打李四的道理你不懂”就可以,这时意味着“张三打李四背后的逻辑你不懂”。
“他分分钟就可以把一个活人打趴下的道理我实在是琢磨不透。”这似乎可以。

我:
教 至少两个 subcats:
教 sb sth
教 sb todo sth
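这两个 subcat 框架,加上前面说的不参与定语从句提取的那个 info 名词小集合,词典里的一种最简表示可以是(示意性草图,非作者系统的真实数据结构):

```python
# 示意性草图:动词 subcat 框架与 info 名词小集合的词典化表示。
SUBCAT = {
    "教": [
        {"frame": "教 sb sth",      "args": ["NP:human", "NP:info"]},
        {"frame": "教 sb todo sth", "args": ["NP:human", "VP"]},
    ],
}
# “道理/往事/新闻/公告”一类 info 名词:谓词定语默认不做从句提取的 head n
INFO_NOUNS = {"道理", "往事", "新闻", "公告"}

def frames(verb):
    """返回动词的全部 subcat 框架(假设的查询接口)。"""
    return SUBCAT.get(verb, [])

print(len(frames("教")))        # 2
print("道理" in INFO_NOUNS)     # True
```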

白:
这个可以有
刚刚看到一个标题:没有一滴雨会认为自己制造了洪灾。
这个句法关系分析得再清楚,也解释不了标题的语义。

宋:
有意思。

我:
教他
教他做人
教他道理
教他做人的道理
教他的道理
教他做人的往事儿

这个 “道理” 和 “往事”,是属于同一个集合的,我们以前讨论过的那个集合,不参与定语从句成分的 head n。

白:

我:
这个集合里面有子集 是关于 info 的,包括 道理 新闻 公告 往事。。。

白:
但是对于“道理”而言,坑不满更显得有抽象度。虽然没有“提取”,但坑不满更顺、更优先,因为隐含了全称量词。

我:
就是说 这个集合里面还有 nuances 需要照顾。滑铁卢就在 “教他做人的往事儿” 上,照顾了它 就照顾不了 “做人的道理”。
就事论事 我可以词典化 “做人的道理”,后者有大数据的支持。

白:
这可是能产的语言现象。
试试这个:“你们懂不懂做人要低调的道理?”

我:
我试试 人在外 但电脑带了 只好拍照了

【图:parsing 结果照片】

你们懂不懂道理,这是主干
什么道理?
要低调的道理。
谁要低调?
你们。
懂什么类型的道理?
做人的道理。
谁做人?
你们。
小小的语义计算图谱 ,能回答这么多问题 ,这机器是不是有点牛叉?

白:
图上看,“要低调”是“懂道理”的状语而不是“道理”的定语?

我:
这个是对的,by design。但我们设计vn合成词的时候,我们要求把分离词合成起来。如果 n 带有定语,合成以后就指向 合成词整体。这时候 为了留下一些痕迹,有意在系统内部 保留定语的标签,以区别于其他的动词的状语修饰语。否则,“懂【要低调的】道理” 与 “【要低调的】懂道理”,就无法区分了。这样处理 语义落地有好处 完全是系统内部的对这种现象的约定和协调 system internal。定语 状语 都是修饰语 大类无异。

白:
“做人要低调”是一个整体,被拆解了。逻辑似乎不对。
拆解的问题还没解决:不管x是谁,如果x做人,x就要低调。
两个x是受全称量词管辖的同一个约束变元。
@宋 早上您似乎对“没有一滴雨会认为自己制造了洪灾”这个例子有话要说?

宋:
@白硕 主要是觉得这句话的意思有意思。从语义分析看应该不难,因为这是一种模式:没有NP V。即任何x,若x属于NP,则否定V(x)。
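这个“没有NP V”模式,用一阶逻辑写出来就是一个全称否定式(谓词名是示意):

```latex
\text{没有 NP V} \;\Longrightarrow\; \forall x\,\bigl(\mathrm{NP}(x) \rightarrow \neg V(x)\bigr)
```

代入本例:对任何 x,若 x 是一滴雨,则 x 不认为自己制造了洪灾。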

白:
首先这是一个隐喻,雨滴是不会“认为”如何如何的,既然这样用,就要提炼套路,看把雨滴代换成什么:雨滴和洪水的关系,是天上的部分和地上的整体的关系,是无害无责任的个体和有害有责任的整体的关系。

“美国网约车判决给北上广深的启示”

洪:
中土NLP全家福,
烟台开会倾巢出。
语言架桥机辅助,
兵强马壮数据足。

【图:中国NLP全家福合影】
中国nlp全家福啊@wei

白: 哈
李白无暇混贵圈,一擎核弹一拨弦。精灵解语多奇智,不是冤家不上船。

洪:
冤家全都上贼船,李白有事别处赶。天宫迄今无甚关,Alien语言亟需练。

我:
白老师也没去啊 敢情。
黑压压一片 吾道不孤勒。

 

【相关】

【李白对话录:RNN 与语言学算法】

【李白对话录:如何学习和处置“打了一拳”】

【李白对话录:你波你的波,我粒我的粒】

【李白对话录- 从“把手”谈起】

【李白对话录之六:NLP 的Components 及其关系】

乔姆斯基批判

[转载]【白硕 – 穿越乔家大院寻找“毛毛虫”】

泥沙龙笔记:parsing 是引擎的核武器,再论NLP与搜索

泥沙龙笔记:从 sparse data 再论parsing乃是NLP应用的核武器

【立委科普:NLP核武器的奥秘】

【立委科普:语法结构树之美】

【立委科普:语法结构树之美(之二)】

中文处理

Parsing

【置顶:立委NLP博文一览】

《朝华午拾》总目录

 

【泥沙龙笔记:社会财富过个家家?】

我:
名人大嘴,见怪不怪了。
董老师一直在批评李彦宏的忽悠,说什么机器翻译要取代人的翻译。比起下面这个,是小巫见大巫吧, quote:

她想要和小扎一起,着手于“未来100年攻克所有疾病”的伟大理想。

【普莉希拉·陈落泪演讲】今天,扎克伯格和妻子陈宣布在未来10年捐出30亿美元协助疾病研究。陈在演讲中,回忆了自己贫穷的童年——作为一个华裔越南难民的女儿,想不到有一天竟开始有了改变世界的能力。她想要和小扎一起,着手于“未来100年攻克所有疾病”的伟大理想。

声明一下:很钦佩,很感动,为小札和他妻子的赤心。可后者是医学博士啊,不仅仅是攻克所有疾病,而且给了时间期限,起点就是手里的钱和一片赤心。

没人觉得这个有些 way carried away 么?

明天我有钱了,我就宣布 200 年内,破解长生不老的千古之谜,实现秦始皇以来的人类最伟大的生命理想。

Mai:
@wei 金钱万能教
Lots of followers

我:
仔细看那个令人感动落泪的新闻发布会,医学博士也不是白当的,里面提到了一些“科学”。在那些术语的背后就是,医学革命不是不能解决癌症和其他绝症,而是缺乏经费,缺乏合作,缺乏原子弹一样的大项目。

洪:
语不惊人死不休,
有钱都想挣眼球。
伟爷何日高处就,
同样情怀也会有。

我:
现如今,小札和妻子有钱了,可以为这场革命发放启动资金。这么宏伟的目标,而且一两代人就可以完成,值得全世界政府和慈善家持续跟进。他们的角色就是做慈善天使吧?
标题是:Can we cure all diseases in our children's life time?
如果我说:no,这是骗人的大忽悠。是不是政治不正确,会被口水骂死?

洪:
追求不朽得不朽,
如此幻觉傻逼有。
凡人嗑药也喝酒,
富豪用钱㤰到头。

我:
更深一层的问题是,这些钱是他们的吗?由得他们胡来吗?

Mai:
和鲁迅先生所说,那个贺喜的客人说“孩子终究会死的”一样不受待见

我:
全世界的社会财富在一个确定的时间点是一个定数(按照我信奉的社会主义理论,社会财富乃全民所有,因为地球只有一个,因为人人生而平等,先不论)。这个财富交给大政府,通过税收,我们还是不放心,那会导致好大喜功的社会主义。所以要求减税,要求小政府,要求市场经济,指望万能的资本主义商品经济中“看不见的手”。但是流落到富豪手中的那一部分,则可变成为做慈善家而来的任性行为。

Mai:
病由心生,想用钱买健康,和始皇帝追求长生,智慧水准相若。

我:
谁来规范和justify这个花费?为什么胡乱或任性的巨额花费可以得到免税的政府扶持和社会的喝彩?

在所有富豪中,小札伉俪其实是我最喜欢的,简直就是孩子一样,童贞可爱。可是巨额财富落到孩子手上,简直比落到政府手中,更让人惊悚。一样是民脂民膏。很可能就被孩子过家家了。

细思极恐。

社会有反托拉斯法,理想的社会也应该有 反巨额财富spending法 去规范约束暴发户的任性行为。

RW:
@wei "War on Cancer" 是好事啊 。。。伟哥怎么啦?

我:
这个世界是钱太少,好事太多。风投还要有个 due diligence,这么大的 war 谁给做的 due diligence?好事多着呢。

RW:
比如说。。。?

我:
比如说:10x希望工程
100x红十字
1000x教育免费

如今政府难以取信于民了,红十字名声也臭了, 就暴发户花钱做慈善,还没臭,yet

廖:
尼克松搞过一个war on cancer 的项目,最后失败不了了之,浪费了无数老百姓的钱

我:
小札最好把钱给奥巴马或克林顿。专款专用,支持全民健保。不要让这天大的好事流产了 才是正道。唯一的超级大国,一个全民健保都搞不成,还谈什么攻克所有疾病?

100 年后,没有疾病了,这个日子还怎么过?所有的医学院都要关门,医生都要失业。失去疾病的同时,也失去了康复的指望。就如没有了死亡,也只有承受永远的生命之苦,煎熬永无止境,人生永世不得翻身。

四个字:细思极恐。

RW:
@廖 您是高手!
@wei
细思诚可贵,
极恐没必要。
若得长生乐,
两者皆可抛。

廖:
没有了疾病会有新的烦恼,这个大可不必担心。随着社会的发展某个行业逐渐消亡也是常有的事。

李瑞@全球鹰网络科技:
人生八大苦:生、老、病、死、爱别离、怨憎恚、求不得、五阴炽盛。
生老病死其实不苦,苦的是,因躁动的心所生出的痴心怨念。
爱却别离,于是忧愁怨恨滋生;
求而不得,于是恩怨情仇牵扯;
于是五阴炽盛:纷扰不断,皆源心乱。

洪:
冰冰年轻圆圆老,
伟哥也已伟爷瞧。
富起幻觉不想翘,
试用钱财打水飘。

Nick:
@wei 你到底是哪伙的,钱给政府也不行,自己造也不行?都给你做parser?

我:
@Nick  美国有很好的制度,使得暴发户不能变成世袭的贵族,“逼迫”他们把 90%+ 的钱财回馈社会,给他们一个慈善家的虚荣。
可是这个制度有一个重大的缺陷,就是慈善项目的 justification
任何spending,都必须有一个程序,现在是这个程序不到位,从而鼓励了财富任性。“造”(挥霍)unchecked 也是犯罪。

当然就跟日本五代机一样,钱砸进去了,终归会有科技进步的。最后是 VOI 的问题。

毛:
按伟哥高见,私人如何花钱得要经过公民投票?
或者成立一个国家计委加以统筹?
又见《通往奴役之路》。

我:
对啊 当钱越过一个 thresholds 以后,那钱就不是私人的了。这时候 花钱的权利应该转向社会。任由私人的任性,无论出于多么善良的或虚荣的动因, 都是对人类资源的铺张浪费。就是某种制度缺失造成的合法犯罪。

毛:
这个threshold怎么定?

行:
当美貌越过某个阈值是不是应被共妻?
私人财产已经被税收二次调整后就应自主支配,除了危害人类。

毛:
计划经济好?

我:
计划经济也许不好,但私人任性不比计划经济好。计划经济下 还可以有个制度性监管的空间。私人任性连这个空间都没有。

毛:
哦,那应该公私合营,社会主义改造,二次土改?

我:
小札的一百亿也许是任性,但也是唤醒

毛:
行,你是计划经济派

南:
不犯法即可

我:
税收是一个手段,但还是止不住任性挥霍

行:
按伟爷的理,您的财富远远超过全球的平均,是不是像老毛在《湖南农民运动考察报告》里号召的那样,可以到您家来搬东西?

我:
挥霍的背后就是不平等。

毛:
机会平等还是结果平等?

行:
全球还有几亿人赤贫饥饿,您经常晒美食算不算挥霍?是不是赞成穆加贝大爷把农场分给老兵

毛:
最不虚伪的就是把你的钱交公

我:
在资源总量恒定的情况下,一个项目的任性意味着其他项目的被剥夺。
每个项目后面都是人命。救了张三救不了李四。这个救谁 救多少的决定,无论如何不该是任性的私人决策。本质上与独裁者的长官意志,形象工程 ,并无二致。

行:
我坚定地站在这位老毛一边,坚决反对任何通往奴役的道路。

毛:
你的项目后面也有人命?

行:
伟爷,您的美食后面也是人命。
无论如何都该是任性的私人决策!
独裁者是剥夺。明抢!

我:
行者 我懂得你背后的逻辑,都是那个背景出来的。

毛:
好吧,说是社科院要重启计划经济研究,伟哥大有用武之地。

我:
你的通向奴役的说法,我看是混淆了度的概念。
如果是几个 million,或几十个 millions,fine,任性就任性。
如果是几百个亿 就不是一回事儿了

毛:
这些理论,我们从列宁那里听多了。

行:
当二次分配后的私人财富任由伟爷般的公意支配后,美国会变成天堂般的朝鲜

我:
这并不是说小扎这笔挥霍一定不对,也许歪打正着,也是可能的。 但正常理性的社会是不允许这样的。

行:
这个度站在津巴布韦老兵,站在陕北土坡的二流子来看呢?

毛:
为什么他会有几百个亿?
好吧,这个题目太大了,伟哥你自说自话吧。

我:
为什么有几百个亿?这是好问题。
因为他绝顶聪明,凭空创造出来的?
骗鬼吧。

行:
我们不怕因为有钱而任性而有权,我们怕因为任性的权力而有钱!

我:
他要是在月球创造了几百几千亿财富,爱咋玩咋玩。
他在地球赚钱,就得受到地球和地球人的束缚。

我:
共产主义破产。但共产主义与独裁计划经济的破产,并不自动为现存制度背书。

毛:
需要公民投票的不是他如何花钱,而是你的这些主张。好吧,stop。

行:
@wei 只是恐惧这个逻辑。
我觉得可以建议,呼吁。但权力仍归小札。

缘:
问题是他每次都出卖自己,把自己卖出一个好价格,交易自己。制度保证自由出卖自己。

我:
行者 我们讨论的是不同层面的问题。

行:
你推崇的集权就可以是花2000亿造加速器,而教育却还停留在希望工程。

我:
最后一句 假如不是几百亿,而是再高几个量级呢?

行:
有钱就是可以任性。咱有钱 了买两碗豆浆,吃一碗倒一碗

我:
咱也任性 晒晒今天的地球恩赐:秋夜喜雨 秋日喜晴。
【图:美食照片】

南:
应该检讨财富的再分配模式而不是侵害个人权力

我:
@南 对。现存的是合法的 不合理怎么办 再检讨修正。并不意味着一检讨就只有回到共产主义一途。

 

From IBM's Jeopardy robot, Apple's Siri, to the new Google Translate

Latest Headline News: Samsung acquires Viv, a next-gen AI assistant built by the creators of Apple's Siri.

Wei:
Some people are just smart, or shrewd, more than we can imagine.  I am talking about the fathers of Siri, who have been so successful with their technology that they managed to sell the same type of technology twice, both at astronomical prices, and both to giants in the mobile and IT industry.  What is more amazing is, the companies they sold their tech-assets to are direct competitors.  How did that happen?  How "nice" this world is, to a really, really smart technologist with a sharp business mind.

What is more stunning is the fact that, Siri and the like so far are regarded more as toys than must-carry tools, intended at least for now to satisfy more curiosity than to meet the rigid demand of the market.  The most surprising is that the technology behind Siri is not unreachable rocket science by nature,  similar technology and a similar level of performance are starting to surface from numerous teams or companies, big or small.

I am a tech guy myself, loving gadgets, always watching for new technology breakthroughs.  To my mind, some things in this world are simply amazing, leaving us in awe, for example, the wonder of smartphones when the iPhone first came out. But some other things in the tech world do not make us admire or wonder that much, although they may have left a deep footprint in history. For example, the question answering machine made by IBM Watson Lab that won Jeopardy.  It made it into the computer history exhibition as a major AI milestone.  More recently, the iPhone Siri, which Apple managed to put into the hands of millions of people for the first time for seemingly live man-machine interaction. Beyond that accomplishment, there is no magic or miracle that surprises me.  I have the feeling of "seeing through" these tools, both the IBM answering robot type depending on big data and Apple's intelligent agent Siri depending on domain apps (plus a flavor of AI chatbot tricks).

Chek: @Wei I bet the experts in rocket technology will not be impressed that much by SpaceX either.

Wei: Right, this is because we are in the same field, what appears magical to the outside world can hardly win an insider's heart, who might think that given a chance, they could do the same trick or better.

The Watson answering system can well be regarded as a milestone in engineering for massive, parallel big data processing, not striking us as an AI breakthrough. What shines in terms of engineering accomplishment is that all this happened before the big data age, when the infrastructures for indexing, storing and retrieving big data in the cloud were widely adopted.  In this regard, IBM was indeed the first to run ahead of the trend, with the ability to put a farm of servers to work for a QA engine deployed onto massive data.  But from a true AI perspective, neither the Watson robot nor the Siri assistant can be compared with the more recent launch of the new Google Translate based on neural networks.  So far I have tested using this monster to help translate three Chinese blogs of mine (including this one in the making), and I have to say that I have been blown away by what I saw.  As a seasoned NLP practitioner who started MT training 30 years ago, I am still in disbelief before this wonder of a technology showcase.

Chen: wow, how so?

Wei:  What can I say?  It has exceeded my imagination limit for all my dreams of what MT can be and should be since I entered this field many years ago.  While testing, I only needed to do limited post-editing to make the following Chinese blogs of mine presentable and readable in English, a language with no kinship whatsoever with the source language Chinese.

Question answering of the past and present

Introduction to NLP Architecture

Hong: Wei seemed frightened by his own shadow.

Chen:  The effect is that impressive?

Wei:  Yes. Before the deep neural age, I also tested and tried to use SMT for the same job, having tried both Google Translate and Baidu MT; there is just no comparison with this new launch based on the technology breakthrough.  If you hit their sweet spot, if your data to translate are close to the data they have trained the system on, Google Translate can save you at least 80% of the manual work.  80% of the time, it comes out so smooth that there is hardly a need for post-editing.  There are errors or crazy things going on in less than 20% of the translated text, but who cares?  I can focus on that part and get my work done way more efficiently than before.  The most important thing is, SMT before deep learning rendered a text hardly readable no matter how good a temper I had.  It was unbearable to work with.  Now with this breakthrough in training the model based on sentences instead of words and phrases, the translation magically sounds fairly fluent.

It is said that they are good at the news genre and IT/technology articles, for which they have abundant training data.  The legal domain is said to be good too.  Other domains, spoken language, online chats, literary works, etc., remain a challenge to them as there does not seem to be sufficient data available yet.

Chen: Yes, it all depends on how large and good the bilingual corpora are.

Wei:  That is true.  SMT stands on the shoulders of thousands of professional translators and their works.  An ordinary individual's head simply has no way to digest this much linguistic and translation knowledge, or to compete with a machine in efficiency and consistency, and eventually in quality as well.

Chen: Google's major contribution is to explore and exploit the huge store of existing human knowledge. In search, for example, anchor text is the core.

Ma: I very much admire IBM's Watson, and I would not dare to think it possible to make such an answering robot back in 2007.

Wei: But the underlying algorithm does not strike me as a breakthrough. They were lucky in targeting the mass-media Jeopardy TV show to hit the world.  The Jeopardy quiz, in essence, pushes the human brain's memory to its extreme; it is largely a memorization test, not a true intelligence test by nature.  In memorization, a human has no way of competing with a machine, not even close.  The vast majority of quiz questions are so-called factoid questions in the QA area, asking about things like who did what when and where, a very tractable task.  Factoid QA depends mainly on Named Entity technology, which matured long ago, coupled with the tractable task of question parsing for identifying its asking point, and the backend support from IR, a well-studied and practised area for over two decades now.  Another benefit in this task is that most knowledge questions asked in the test involve standard answers with huge redundancy in the text archive, expressed in various ways, some of which are bound to correspond closely to the way the question is asked.  All these factors contributed to IBM's huge success in its almost mesmerizing performance in the historical event.  The bottom line is, shortly after open-domain QA was officially born in 1999 with the first TREC QA track, the core-engine technology had been well researched and verified for factoid questions given a large corpus as a knowledge source. The rest is just how to operate such a project on a big engineering platform and how to fine-tune it to adapt to the Jeopardy-style scenario for best effects in the competition.  Really no magic whatsoever.

Google Translated from【泥沙龙笔记:从三星购买Siri之父的二次创业技术谈起】, with post-editing by the author himself.

 

【Related】

Question answering of the past and present

Introduction to NLP Architecture

Newest GNMT: time to witness the miracle of Google Translate

Dr Li’s NLP Blog in English

 

【泥沙龙笔记:从三星购买Siri之父的二次创业技术谈起】

最近新闻:【三星收购 VIV 超级智能平台,与 Siri 和 Google 展开智能助理三国杀

我:
人要是精明,真是没治。一个 Siri,可以卖两次,而且都是天价,都是巨头,并且买家还是对头,也是奇了。最奇的是,Siri 迄今还是做玩具多于实用,满足好奇心多于满足市场的刚性需求。最最奇的是,Siri 里面的奥妙并不艰深,有类似水平和技术的也不是就他一家。
世界上有些事儿是让人惊叹的,譬如当 iPhone 问世的时候。但有些事儿动静很大,也在历史上留下了很深的足迹,但却没有叹服的感受。譬如 IBM 花生的问答系统,NND,都进入计算机历史展览馆了,作为AI里程碑。再如 Siri,第一个把人机对话送到千家万户的手掌心,功不可没。但这两样,都不让人惊叹,因为感觉上都是可以“看穿”的东西。不似火箭技术那种,让人有膜拜的冲动。IBM 那套我一直认为是工程的里程碑,是大数据计算和operations的成就,并非算法的突破。

查:
@wei 呵呵 估计搞火箭的也看不上SpaceX

我: 那倒也是,内行相轻,自古而然,因为彼此都多少知底。

陈:
最近对Watson很感冒

我:
花生是在大数据架构热起来之前做成的。从这方面看,IBM 的确开风气之先,有能力把一个感觉上平平的核心引擎,大规模部署到海量数据和平行计算上。总之,这两样都不如最近测试谷歌MT给我的震撼大。谷歌的“神经”翻译,神经得出乎意表,把我这个30年前就学MT的老江湖也弄晕糊了,云里雾里,不得不给他们吹一次喇叭

陈: 咋讲

我:
还讲啥,我是亲手测试的。两天里面测试翻译了我自己的两篇博文:

【Question answering of the past and present】

Introduction to NLP Architecture

洪:
伟爷被自己的影子吓坏了。

陈:
效果奇好?

我:
是的。前神经时代我也测试过,心里是有比较的。天壤之别。
如果你撞上了他们的枪口,数据与他们训练的接近,谷歌MT可以节省你至少 80% 的翻译人工。80% 的时候几乎可以不加编辑,就很顺畅了。谁在乎 20% 以内的错误或其他呢,反正我是省力一多半了。最重要的是,以前用 MT,根本就不堪卒读,无论你多好的脾气。现在一神经,就顺溜多了。当然,我的 NLP 博文,也正好撞上了他们的枪口。

陈:
以后也可以parsing。试一些医学的

我:
据说,他们擅长 news,IT,technology,好像 法律文体 据说也不错。其他领域、口语、文学作品等,那就太难为它了。

陈:
有双语语料

我:
就是,它是站在千万个专业翻译的智慧结晶上。人的小小的脑袋怎么跟它比拼时间和效率呢,拼得了初一,也熬不过十五。

陈:
谷歌的重大贡献是发掘人类已经存在的知识。包括搜索,锚文本是核心.

马:
我挺佩服IBM的华生的,如果是我,绝不敢在2007年觉得能做出这么一个东西出来

我:
可是算法上看真地不需要什么高超。那个智力竞赛是唬人的,挑战人的记忆极限。对于机器是特别有利的。绝大多数智力竞赛问答题,都是所谓 factoid questions
主要用到的是早已成熟的 Named Entity 技术,加上 question 的有限 parsing,背后的支撑也就是 IR。恰好智力竞赛的知识性问题又是典型的大数据里面具有相当 redundancy 的信息。这种种给IBM创造了成功的条件。

1999 年开始 open domain QA 正式诞生,不久上面的技术从核心引擎角度就已经被验证。剩下的就是工程的运作和针对这个竞赛的打磨了。

 

【相关】

【问答系统的前生今世】

【Question answering of the past and present】

谷歌NMT,见证奇迹的时刻

Newest GNMT: time to witness the miracle of Google Translate

《新智元笔记:知识图谱和问答系统:开题(1)》 

《新智元笔记:知识图谱和问答系统:how-question QA(2)》 

【置顶:立委NLP博文】

 

【问答系统的前生今世】

立委按:自从 Siri 第一次把问答系统送到千万人的手掌心后,如今又出了微软小冰和小娜。其实,中外所有IT巨头都在这方面加大了投入。于是想到重发2011年的博文。

一 前生
传统的问答系统是人工智能(AI: Artificial Intelligence)领域的一个应用,通常局限于一个非常狭窄专门的领域,基本上是由人工编制的知识库加上一个自然语言接口而成。由于领域狭窄,词汇总量很有限,其语言和语用的歧义问题可以得到有效的控制。问题是可以预测的,甚至是封闭的集合,合成相应的答案自然有律可循。著名的项目有上个世纪60年代研制的LUNAR系统,专事回答有关阿波罗登月返回的月球岩石样本的地质分析问题。SHRDLU 是另一个基于人工智能的专家系统,模拟的是机器人在玩具积木世界中的操作,机器人可以回答这个玩具世界的几何状态的问题,并听从语言指令进行合法操作。
这些早期的AI探索看上去很精巧,揭示了一个有如科学幻想的童话世界,启发人的想象力和好奇心,但是本质上这些都是局限于实验室的玩具系统(toy systems),完全没有实用的可能和产业价值。随着作为领域的人工智能之路越走越窄(部分专家系统虽然达到了实用,基于常识和知识推理的系统则举步维艰),寄生其上的问答系统也基本无疾而终。倒是有一些机器与人的对话交互系统 (chatterbots)一路发展下来至今,成为孩子们的网上玩具(我的女儿就很喜欢上网找机器人对话,有时故意问一些刁钻古怪的问题,程序应答对路的时候,就夸奖它一句,但更多的时候是看着机器人出丑而哈哈大笑。不过,我个人相信这个路子还大有潜力可挖,把语言学与心理学知识交融,应该可以编制出质量不错的机器人心理治疗师。其实在当今的高节奏高竞争的时代,很多人面对压力需要舒缓,很多时候只是需要一个忠实的倾听者,这样的系统可以帮助满足这个社会需求。要紧的是要消除使用者“对牛弹琴”的先入为主的偏见,或者设法巧妙隐瞒机器人的身份,使得对话可以敞开心扉。扯远了,打住。)
二 重生
产业意义上的开放式问答系统完全是另一条路子,它是随着互联网的发展以及搜索引擎的普及应运而生的。准确地说,开放式问答系统诞生于1999年,那一年搜索业界的第八届年会(TREC-8:Text REtrieval Conference)决定增加一个问答系统的竞赛,美国国防部有名的DARPA项目资助,由美国国家标准局组织实施,从而催生了这一新兴的问答系统及其community。问答系统竞赛的广告词写得非常精彩,恰到好处地指出搜索引擎的不足,确立了问答系统在搜索领域的价值定位。记得是这样写的(大体):用户有问题,他们需要答案。搜索引擎声称自己做的是信息检索(information retrieval),其实检索出来的并不是所求信息,而只是成千上万相关文件的链接(URLs),答案可能在也可能不在这些文件中。无论如何,总是要求人去阅读这些文件,才能寻得答案。问答系统正是要解决这个信息搜索的关键问题。对于问答系统,输入的是问题,输出的是答案,就是这么简单。
说到这里,有必要先介绍一下开放式问答系统诞生时候的学界与业界的背景。
从学界看,传统意义上的人工智能已经不再流行,代之而来的是大规模真实语料库基础上的机器学习和统计研究。语言学意义上的规则系统仍在自然语言领域发挥作用,作为机器学习的补充,而纯粹基于知识和推理的所谓智能规则系统基本被学界抛弃(除了少数学者的执着,譬如Douglas Lenat 的 Cyc)。学界在开放式问答系统诞生之前还有一个非常重要的发展,就是信息抽取(Information Extraction)专业方向及其community的发展壮大。与传统的自然语言理解(Natural Language Understanding)面对整个语言的海洋,试图分析每个语句求其语义不同,信息抽取是任务制导,任务之外的语义没有抽取的必要和价值:每个任务定义为一个预先设定的所求信息的表格,譬如,会议这个事件的表格需要填写会议主题、时间、地点、参加者等信息,类似于测试学生阅读理解的填空题。这样的任务制导的思路一下子缩短了语言技术与实用的距离,使得研究人员可以集中精力按照任务指向来优化系统,而不是从前那样面面俱到,试图一口吞下语言这个大象。到1999年,信息抽取的竞赛及其研讨会已经举行了七届(MUC-7:Message Understanding Conference),也是美国DARPA项目的资助产物(如果说DARPA引领了美国信息产业研究及其实用化的潮流,一点儿也不过誉),这个领域的任务、方法与局限也比较清晰了。发展得最成熟的信息抽取技术是所谓实体名词的自动标注(Named Entity:NE tagging),包括人名、地名、机构名、时间、百分比等等。其中优秀的系统无论是使用机器学习的方法,还是编制语言规则的方法,其查准率查全率的综合指标都已高达90%左右,接近于人工标注的质量。这一先行的年轻领域的技术进步为新一代问答系统的起步和开门红起到了关键的作用。
到1999年,从产业来看,搜索引擎随着互联网的普及而长足发展,根据关键词匹配以及页面链接为基础的搜索算法基本成熟定型,除非有方法学上的革命,关键词检索领域该探索的方方面面已经差不多到头了。由于信息爆炸时代对于搜索技术的期望永无止境,搜索业界对关键词以外的新技术的呼声日高。用户对粗疏的搜索结果越来越不满意,社会需求要求搜索结果的细化(more granular results),至少要以段落为单位(snippet)代替文章(URL)为单位,最好是直接给出答案,不要拖泥带水。虽然直接给出答案需要等待问答系统的研究成果,但是从全文检索细化到段落检索的工作已经在产业界实行,搜索的常规结果正从简单的网页链接进化到 highlight 了搜索关键词的一个个段落。
新式问答系统的研究就在这样一种业界急切呼唤、学界奠定了一定基础的形势下,走上历史舞台。美国标准局的测试要求系统就每一个问题给出最佳的答案,有短答案(不超过50字节)与长答案(不超过250字节)两种。下面是第一次问答竞赛的试题样品:
Who was the first American in space?
Where is the Taj Mahal?
In what year did Joe DiMaggio compile his 56-game hitting streak?
三 昙花
这次问答系统竞赛的结果与意义如何呢?应该说是结果良好,意义重大。最好的系统达到60%多的正确率,就是说每三个问题,系统可以从语言文档中大海捞针一样搜寻出两个正确答案。作为学界开放式系统的第一次尝试,这是非常令人鼓舞的结果。当时正是 dot com 的鼎盛时期,IT 业界渴望把学界的这一最新研究转移到信息产品中,实现搜索的革命性转变。里面有很多有趣的故事,参见我的相关博文:《朝华午拾:创业之路》
回顾当年的工作,可以发现是组织者、学界和业界的天时地利促成了问答系统奇迹般的立竿见影的效果。美国标准局在设计问题的时候,强调的是自然语言的问题(English questions,见上),而不是简单的关键词 queries,其结果是这些问句偏长,非常适合做段落检索。为了保证每个问题都有答案,他们议定问题的时候针对语言资料库做了筛选。这样一来,文句与文本必然有相似的语句对应,客观上使得段落匹配(乃至语句匹配)命中率高(其实,只要是海量文本,相似的语句一定会出现)。设想如果只是一两个关键词,寻找相关的可能含有答案的段落和语句就困难许多。当然找到对应的段落或语句,只是大大缩小了寻找答案的范围,不过是问答系统的第一步,要真正锁定答案,还需要进一步细化,pinpoint 到语句中那个作为答案的词或词组。这时候,信息抽取学界已经成熟的实名标注技术正好顶上来。为了力求问答系统竞赛的客观性,组织者有意选择那些答案比较单纯的问题,譬如人名、时间、地点等。这恰好对应了实名标注的对象,使得先行一步的这项技术有了施展身手之地。譬如对于问题 “In what year did Joe DiMaggio compile his 56-game hitting streak?”,段落语句搜索很容易找到类似下列的文本语句:Joe DiMaggio's 56 game hitting streak was between May 15, 1941 and July 16, 1941.  实名标注系统也很容易锁定 1941 这个时间单位。An exact answer to the exact question,答案就这样在海量文档中被搜得,好像大海捞针一般神奇。沿着这个路子,11 年后的 IBM 花生研究中心成功地研制出打败人脑的电脑问答系统,获得了电视智能大奖赛 Jeopardy! 的冠军(见报道 COMPUTER CRUSHES HUMAN 'JEOPARDY!' CHAMPS),在全美观众面前大大地出了一次风头,有如当年电脑程序第一次赢得棋赛冠军那样激动人心。
当年成绩较好的问答系统,都不约而同地结合了实名标注与段落搜索的技术: 证明了只要有海量文档,snippet+NE 技术可以自动搜寻回答简单的问题。
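snippet+NE 这条路线,可以用一个极简的示意脚本来体会(玩具代码,检索打分和 NE 标注都是假设的简化,真实系统远比这复杂):

```python
# 示意性玩具代码:snippet 检索 + 实名标注(NE)锁定答案。
# 这里用关键词重合度代替真实检索,用正则代替真实 NE 系统。
import re

DOCS = [
    "Joe DiMaggio's 56 game hitting streak was between May 15, 1941 and July 16, 1941.",
    "The Taj Mahal is located in Agra, India.",
]
STOPS = {"in", "what", "year", "did", "the", "is", "where", "was", "who"}

def retrieve(question, docs):
    """按关键词重合度给候选语句(snippet)打分,取最高者。"""
    q_words = set(re.findall(r"\w+", question.lower())) - STOPS
    return max(docs, key=lambda d: len(q_words & set(re.findall(r"\w+", d.lower()))))

def pinpoint(snippet, answer_type):
    """按问题类型在 snippet 中锁定答案词(这里只演示 YEAR 一类)。"""
    if answer_type == "YEAR":
        m = re.search(r"\b(1[89]\d\d|20\d\d)\b", snippet)
        return m.group(1) if m else None
    return None

q = "In what year did Joe DiMaggio compile his 56-game hitting streak?"
print(pinpoint(retrieve(q, DOCS), "YEAR"))   # 1941
```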
四 现状
1999 年的学界在问答系统上初战告捷,我们作为成功者也风光一时,下自成蹊,业界风险投资商蜂拥而至。很快拿到了华尔街千万美元的风险资金,当时的感觉真地好像是在开创工业革命的新纪元。可惜好景不长,互联网泡沫破灭,IT 产业跌入了萧条的深渊,久久不能恢复。投资商急功近利,收紧银根,问答系统也从业界的宠儿变成了弃儿(见《朝华午拾 - 水牛风云》)。主流业界没人看好这项技术,比起传统的关键词索引和搜索,问答系统显得不稳定、太脆弱(not robust),也很难 scale up, 业界的重点从深度转向广度,集中精力增加索引涵盖面,包括所谓 deep web。问答系统的研制从业界几乎绝迹,但是这一新兴领域却在学界发芽生根,不断发展着,成为自然语言研究的一个重要分支。IBM 后来也解决了 scale up (用成百上千机器做分布式并行处理)和适应性培训的问题,为赢得大奖赛做好了技术准备。同时,学界也开始总结问答系统的各种类型。一种常见的分类是根据问题的种类。
我们很多人都在中学语文课上,听老师强调过阅读理解要抓住几个WH的重要性:who/what/when/where/how/why(Who did what when, where, how and why?).  抓住了这些WH,也就抓住了文章的中心内容。作为对人的阅读理解的仿真,设计问答系统也正是为了回答这些WH的问题。值得注意的是,这些 WH 问题有难有易,大体可以分成两类:有些WH对应的是实体专名,譬如 who/when/where,回答这类问题相对容易,技术已经成熟。另一类问题则不然,譬如what/how/why,回答这样的问题是对问答学界的挑战。简单介绍一下这三大难题如下。
What is X?类型的问题是所谓定义问题,譬如 What is iPad II? (也包括作为定义的who:Who is Bill Clinton?) 。这一类问题的特点是问题短小,除去问题词What与联系词 is 以外 (搜索界叫stop words,搜索前应该滤去的,问答系统在搜索前利用它理解问题的类型),只有一个 X 作为输入,非常不利于传统的关键词检索。回答这类问题最低的要求是一个有外延和种属的定义语句(而不是一个词或词组)。由于任何人或物体都是处在与其他实体的多重关系之中(还记得么,马克思说人是社会关系的总和),要想真正了解这个实体,比较完美地回答这个问题,一个简单的定义是不够的,最好要把这个实体的所有关键信息集中起来,给出一个全方位的总结(就好比是人的履历表与公司的简介一样),才可以说是真正回答了 What/Who is X 的问题。显然,做到这一步不容易,传统的关键词搜索完全无能为力,倒是深度信息抽取可以帮助达到这个目标,要把散落在文档各处的所有关键信息抽取出来,加以整合才有希望(【立委科普:信息抽取】)。
How 类型的问题也不好回答,它搜寻的是解决方案。同一个问题,往往有多种解决档案,譬如治疗一个疾病,可以用各类药品,也可以用其他疗法。因此,比较完美地回答这个 How 类型的问题也就成为问答界公认的难题之一。

Why 类型的问题,是要寻找一个现象的缘由或动机。这些原因有显性表达,更多的则是隐性表达,而且几乎所有的原因都不是简单的词或短语可以表达清楚的,找到这些答案,并以合适的方式整合给用户,自然是一个很大的难题。
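按疑问词粗分难易两类问题,可以示意如下(映射是示例性假设,真实系统的问题分类要细得多):

```python
# 示意:疑问词 → 期望答案类别的粗分类,easy 类对应成熟的实名标注技术。
EASY = {"who": "PERSON", "when": "TIME", "where": "LOCATION"}
HARD = {"what": "DEFINITION", "how": "SOLUTION", "why": "REASON"}

def classify(question):
    wh = question.strip().lower().split()[0]
    if wh in EASY:
        return ("easy", EASY[wh])
    if wh in HARD:
        return ("hard", HARD[wh])
    return ("unknown", None)

print(classify("Who was the first American in space?"))   # ('easy', 'PERSON')
print(classify("Why did the dot-com bubble burst?"))      # ('hard', 'REASON')
```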

可以一提的是,我来硅谷九年帮助设计开发 deploy 了两个产品,第一个产品的本质就是回答 How-question 的,第二个涉及舆情挖掘和回答舆情背后的 Why-question。问答系统的两个最大的难题可以认为被我们的深层分析技术解决了。

原文在:【立委科普:问答系统的前生今世】

【相关】

【Question answering of the past and present】

http://en.wikipedia.org/wiki/Question_answering

《新智元笔记:知识图谱和问答系统:开题(1)》 

《新智元笔记:知识图谱和问答系统:how-question QA(2)》 

【旧文翻新:金点子起家的老管家 Jeeves】

《新智元笔记:微软小冰,人工智能聊天伙伴(1)》 

《新智元笔记:微软小冰,可能的商业模式(2)》 

【立委科普:从产业角度说说NLP这个行当】

【Question answering of the past and present】

  1. A pre-existence

The traditional question answering (QA) system is an application of Artificial Intelligence (AI).  It is usually confined to a very narrow and specialized domain, basically made up of a hand-crafted knowledge base with a natural language interface. As the field is narrow, the vocabulary is very limited, and its pragmatic ambiguity can be effectively kept under control. Questions are highly predictable, or close to a closed set, and the rules for the corresponding answers are fairly straightforward. Well-known projects in the 1960s include LUNAR, a QA system specializing in answering questions about the geological analysis of the lunar samples collected from the Apollo landings on the Moon.  SHRDLU is another famous QA expert system in AI history; it simulates the operation of a robot in a toy blocks world. The robot can answer questions about the geometric state of the toys and follow language instructions for legal operations.

These early AI explorations seemed promising, revealing a fairy-tale world of scientific fantasy, greatly stimulating our curiosity and imagination. Nevertheless, in essence, these are just toy systems that are confined to the laboratory and are not of much practical value. As the field of artificial intelligence was getting narrower and narrower (although some expert systems have reached a practical level, majority AI work based on common sense and knowledge reasoning could not get out beyond lab), the corresponding QA systems failed to render meaningful results. There were some conversational systems (chatterbots) that had been developed thus far and became children's popular online toys (I remember at one time when my daughter was young, she was very fond of surfing the Internet to find various chatbots, sometimes deliberately asking tricky questions for fun.  Recent years have seen a revival of this tradition by industrial giants, with some flavor seen in Siri, and greatly emphasized in Microsoft's Little Ice).

2. Rebirth

Industrial open-domain QA systems are another story: they came into existence with the Internet boom and the popularity of search engines. Specifically, open-domain QA was born in 1999, when TREC-8 (the Eighth Text REtrieval Conference) decided to add a natural language QA track of competition, funded by the US Department of Defense's DARPA program and administered by the United States National Institute of Standards and Technology (NIST), thus giving birth to this emerging QA community.  Its call for participation was very impressive, to this effect:

Users have questions, they need answers. Search engines claim that they are doing information retrieval, yet the information is not an answer to their questions but links to thousands of possibly related files. Answers may or may not be in the returned documents. In any case, people are compelled to read the documents in order to find answers. A QA system in our vision is to solve this key problem of information need. For QA, the input is a natural language question, the output is the answer, it is that simple.

It seems of benefit to introduce some background for academia as well as the industry when the open QA was born.

From the academic point of view, the traditional sense of artificial intelligence is no longer popular, replaced by the large-scale corpus-based machine learning and statistical research. Linguistic rules still play a role in the field of natural language, but only as a complement to the mainstream machine learning. The so-called intelligent knowledge systems based purely on knowledge or common sense reasoning are largely put on hold by academic scholars (except for a few, such as Dr. Douglas Lenat with his Cyc). In the academic community before the birth of open-domain question and answering, there was a very important development, i.e. the birth and popularity of a new area called Information Extraction (IE), again a child of DARPA. The traditional natural language understanding (NLU) faces the entire language ocean, trying to analyze each sentence seeking a complete semantic representation of all its parts. IE is different, it is task-driven, aiming at only the defined target of information, leaving the rest aside.  For example, the IE template of a conference may be defined to fill in the information of the conference [name], [time], [location], [sponsors], [registration] and such. It is very similar to filling in the blank in a student's reading comprehension test. The idea of task-driven semantics for IE shortens the distance between the language technology and practicality, allowing researchers to focus on optimizing tasks according to the tasks, rather than trying to swallow the language monster at one bite. By 1999, the IE community competitions had been held for seven annual sessions (MUC-7: Seventh Message Understanding Conference), the tasks of this area, approaches and the then limitations were all relatively clear. The most mature part of information extraction technology is the so-called Named Entity (NE tagging), including identification of names for human, location, and organization as well as tagging time, percentage, etc. 
The state-of-the-art systems, whether using machine learning or hand-crafted rules, reached a combined precision-recall score (F-measure) of 90+%, close to the quality of human performance. This first-of-its-kind technological advance in a young field turned out to play a key role in the new generation of open-domain QA.
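The combined precision-recall score mentioned above is the harmonic mean of the two, which can be sketched in a few lines. The numbers below are illustrative only, not actual MUC scores:

```python
def f_measure(precision: float, recall: float, beta: float = 1.0) -> float:
    """F-beta score: the (weighted) harmonic mean of precision and recall.
    beta=1 gives the standard F1 used in the NE evaluations."""
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# A hypothetical NE tagger with 92% precision and 89% recall
# lands just above the 90% mark cited for the best systems:
print(round(f_measure(0.92, 0.89), 3))  # → 0.905
```

The harmonic mean punishes imbalance: a system with 99% precision but 50% recall scores well below 90, which is why the 90+% combined figure was taken as near-human performance.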

In industry, by 1999, search engines had grown rapidly with the popularity of the Internet, and search algorithms based on keyword matching and page ranking were quite mature. Short of a methodological revolution, keyword search seemed to have almost reached its limit, and there was an increasing call for going beyond it. Users were dissatisfied with search results in the form of links; they needed more granular results, at least in paragraphs (snippets) instead of URLs, and preferably in the form of direct short answers to the questions in mind. Although the direct answer was still a dream waiting for the open-domain QA era, full-text search more and more frequently adopted paragraph retrieval as a common industry practice: search results changed from simple links to web pages into snippets with the keywords highlighted.

In such a favorable environment in industry and academia, open-domain question answering came onto the stage of history. NIST organized its first competition, requiring participating QA systems to provide the exact answer to each question, with a short answer of no more than 50 bytes in length and a long answer of no more than 250 bytes. Here are the sample questions from the first QA track:

Who was the first American in space?
Where is the Taj Mahal?
In what year did Joe DiMaggio compile his 56-game hitting streak?

3. Short-lived prosperity

What were the results and significance of this first open-domain QA competition? It should be said that the results were impressive, a milestone in QA history. The best systems (including ours) achieved a correct rate of more than 60%; that is, for every three questions, the system could search the given corpus and return two correct answers. This was a very encouraging result for a first attempt at an open-domain system. In the dot-com heyday, the IT industry was eager to move this latest research into information products and revolutionize search. There were a lot of interesting stories after that (see my related blog post in Chinese: "the road to entrepreneurship"), eventually leading to the historic AI event of IBM Watson beating humans in Jeopardy.

The timing and everything prepared by the organizers, the search industry, and academia all contributed to the QA systems' seemingly miraculous results. NIST emphasized well-formed natural language questions as input (i.e. English questions, see above), rather than the traditional short keyword queries. Such questions tend to be long, well suited for leveraging paragraph search. For the competition's sake, the organizers ensured that each question indeed had an answer in the given corpus. As a result, the text archive contained statements similar to the designed questions, increasing the odds of sentence matching in paragraph retrieval (Watson's later practice shows that, from the big-data perspective, similar statements containing answers are bound to appear in text as long as a question is naturally long). Imagine if there were only one or two keywords: it would be extremely difficult to identify the relevant paragraphs and statements containing answers. Of course, finding the relevant paragraphs or statements is not sufficient for this task, but it effectively narrows the scope of the search, creating good conditions for pinpointing the required short answers. This is where the relatively mature named entity tagging technology from the information extraction community kicked in. In order to achieve objectivity and consistency in administering the QA competition, the organizers deliberately selected only questions that were relatively simple and straightforward, asking about names, time or location (so-called factoid questions). This practice naturally matched the named entity task closely, making the first step into open-domain QA a smooth process and returning very encouraging results as well as a shining prospect to the world.
For example, for the question "In what year did Joe DiMaggio compile his 56-game hitting streak?", the paragraph or sentence search could easily find text statements similar to the following: "Joe DiMaggio's 56 game hitting streak was between May 15, 1941 and July 16". An NE system tags 1941 as time with no problem, and the asking point for time in parsing the wh-phrase "in what year" is also not difficult to decode. Thus an exact answer to the exact question seems magically retrieved from the sea of documents, like a needle found in a haystack. Following roughly the same approach, equipped with gigantic computing power for parallel processing of big data, IBM Watson beat humans in the Jeopardy live show 11 years later, in front of a nationwide TV audience, stimulating the entire nation's imagination and awe at this technological advance. From the QA research perspective, IBM's victory in the show was in fact an expected natural outcome, more an engineering scale-up showcase than a research breakthrough, as the basic approach of snippet + NE + asking-point had long been proven.

In retrospect, adequate QA systems for factoid questions invariably combine a solid Named Entity module with a question parser for identifying asking points. As long as there is IE-indexed big data behind them, with information redundancy by nature, factoid QA is a very tractable task.
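The snippet + NE + asking-point recipe can be sketched roughly as below. This is a toy illustration, not any actual competition system: the asking-point rules are drastically simplified, and the snippet is assumed to arrive already NE-tagged from an upstream module.

```python
def asking_point(question: str) -> str:
    """Map a wh-phrase to the expected entity type of the answer."""
    q = question.lower()
    if q.startswith("who"):
        return "PERSON"
    if "what year" in q or q.startswith("when"):
        return "TIME"
    if q.startswith("where"):
        return "LOCATION"
    return "UNKNOWN"

def answer(question: str, tagged_snippet: dict) -> str:
    """Pick the entity in the retrieved snippet that matches
    the asking point decoded from the question."""
    want = asking_point(question)
    return tagged_snippet.get(want, "no answer found")

# Hypothetical output of paragraph retrieval + NE tagging over
# "Joe DiMaggio's 56 game hitting streak was between May 15, 1941 and July 16":
tagged = {"PERSON": "Joe DiMaggio", "TIME": "1941"}
q = "In what year did Joe DiMaggio compile his 56-game hitting streak?"
print(answer(q, tagged))  # → 1941
```

The real work, of course, lies in the two modules this sketch takes for granted: robust NE tagging of the snippets and reliable asking-point decoding from free-form questions.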

4. State of the art

The year 1999 witnessed the academic community's initial success with the first open-domain QA track as a new frontier of the retrieval world. We also benefited from that event as a winner, soon securing a venture capital injection of $10 million from Wall Street. It was an exciting time, shortly after AskJeeves' initial success in presenting a natural language interface online (but they did not have the QA technology to retrieve exact answers automatically from a huge archive; instead they used human editors behind the scenes to update the answer database). A number of QA start-ups were funded. We were all expecting to create a new era in the information revolution. Unfortunately, the good times did not last long: the Internet bubble soon burst, and the IT industry fell into the abyss of depression. Investors tightened their funding, and the QA heat soon declined to freezing point and almost disappeared from the industry (except for giants' labs such as IBM Watson; in our case, we shifted from QA to mining online brand intelligence for enterprise clients). No one in the mainstream believed in this technology anymore. Compared with traditional keyword indexing and searching, open-domain QA was not as robust and had yet to scale up to really big data to show its power. The focus of the search industry shifted from depth back to breadth, concentrating on indexing coverage, including the so-called deep web. As QA development nearly went extinct in the industry, the emerging field stayed deeply rooted in the academic community and developed into an important branch, with increasing natural language research from universities and research labs. IBM later solved the scale-up challenge, as a precursor of the current big-data architectural breakthroughs.

At the same time, scholars began to summarize the various types of questions that challenge QA. A common classification is based on identifying the type of a question's asking point. Many of us still remember our high school language classes, where the teacher stressed the 6 WHs of reading comprehension: who / what / when / where / how / why. (Who did what, when, where, how and why?) Once the answers to these questions are clear, the central story of an article is in hand. As a simulation of human reading comprehension, a QA system is designed to answer these key WH questions as well. It is worth noting that these WH questions are of different difficulty levels, depending on the types of asking points (one major goal of question parsing is to identify the key need in a question, what we call asking point identification, usually based on parsing wh-phrases and other question clues). Asking points corresponding to an entity as an appropriate answer, such as who / when / where, are relatively easy to answer (i.e. factoid questions). Another type of question is not simply answerable by an entity, such as what-is / how / why; there is consensus that answering such questions is a much more challenging task than factoid questions. A brief introduction to these three types of "tough" questions and their solutions is presented below as a showcase of the current state of the art, to conclude this overview of the QA journey.
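The question typing described above can be caricatured in a few lines. The categories and heuristics here are illustrative assumptions (in particular, the token-count test is only a toy proxy for "the remainder is just a name or term"), not any production question parser:

```python
def question_type(question: str) -> str:
    """Toy classifier: factoid asking points map to entity answers;
    what-is / how / why call for profiles, solutions, or reasons."""
    q = question.lower().strip().rstrip("?")
    # Definition questions strip down to a bare name or term,
    # e.g. "What is iPad II" -> "iPad II" (crude length heuristic).
    if q.startswith(("what is ", "who is ")) and len(q.split()) <= 4:
        return "definition"   # profile / knowledge-graph answer
    if q.startswith(("who ", "when ", "where ")) or "what year" in q:
        return "factoid"      # answerable by a named entity
    if q.startswith("how"):
        return "how"          # solutions, procedures, recipes
    if q.startswith("why"):
        return "why"          # causes, motives, sentiment reasons
    return "other"

print(question_type("Where is the Taj Mahal?"))            # → factoid
print(question_type("What is iPad II?"))                   # → definition
print(question_type("How can we increase bone density?"))  # → how
```

Note how "Who is Bill Clinton?" lands in the definition bucket while "Who was the first American in space?" stays factoid: the same wh-word leads to answers of very different shapes, which is exactly why asking-point identification matters.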

What/who is X? This is the so-called definition question, such as What is iPad II? Who is Bill Clinton? Such a question is typically very short; after the wh-word and the stop word "is" are stripped in question parsing, what is left is just a name or a term as input to the QA system. Such an input is detrimental to a traditional keyword retrieval system, which ends up with too many hits and can only pick the documents with the highest keyword density or page rank as returns. From the QA perspective, the minimal requirement for answering this question is a definition statement of the form "X is a ...". Since any entity is in multiple relationships with other entities and is involved in various events described in the corpus, a better answer to the definition question involves a summary of the entity with links to all its key associated relations and events, giving a profile of the entity. Such technology exists and, in fact, has been partly deployed today: it is called the knowledge graph, supported by underlying information extraction and fusion. The state-of-the-art solution for this type of question is best illustrated by Google's deployment of its knowledge graph in handling short queries about movie stars and other VIPs.

The next challenge is how-questions, asking for a solution to a problem or a way of doing something, e.g. How can we increase bone density? How to treat a heart attack? This type of question calls for a summary of all types of solutions, such as medicines, experts, procedures, or recipes. A simple phrase is usually not a good answer and is bound to miss the variety of possible solutions needed by the users (often product designers, scientists or patent lawyers), who are typically in the stage of prior-art research and literature review for a conceived solution in mind. We developed such a system, based on deep parsing and information extraction, to answer open-domain how-questions comprehensively in the product Illumin8, deployed by Elsevier for quite some years. (Powerful as it is, unfortunately, it did not end up a commercial success from the revenue perspective.)

The third difficult question is why. People ask why-questions to find the cause or motive behind a phenomenon, whether an event or an opinion. For example, why do people like or dislike our product Xyz? There might be thousands of different reasons behind a sentiment or opinion. Some reasons are explicitly expressed (I love the new iPhone 7 because of its greatly enhanced camera) and more are conveyed in implicit expressions (just replaced my iPhone, it sucks in battery life). An adequate QA system should be equipped with the ability to mine the corpus and to summarize and rank the key reasons for the user. In the last 5 years, we have developed a customer insight product that answers the why-questions behind public opinions and sentiments on any topic by mining the entire social media space.

Since I came to Silicon Valley 9 years ago, I have been lucky, and proud, to have had a chance to design and develop QA systems for answering these widely acknowledged challenging questions. Two products, answering open-domain how-questions and why-questions in addition to deep sentiment analysis, have been developed and deployed to global customers. Our deep parsing and IE platform is also equipped with the capability to construct a deep knowledge graph to help answer definition questions, but unlike Google with its huge search platform, we have not yet identified a commercial opportunity to deploy that capability in a market.

This piece of writing first appeared in 2011 on my personal blog, with only limited revisions since. Thanks to Google Translate at https://translate.google.com/ for providing a quick basis, which was post-edited by myself.

 

[Related]

http://en.wikipedia.org/wiki/Question_answering

The Anti-Eliza Effect, New Concept in AI

"Knowledge map and open-domain QA (1)" (in Chinese)

"knowledge map and how-question QA (2)"  (in Chinese)

Ask Jeeves and its million-dollar idea for human interface (in Chinese)

Dr Li’s NLP Blog in English

 

Newest GNMT: time to witness the miracle of Google Translate


Wei:
Recently, the WeChat community has been full of hot discussion and testing of the newest announcement of the Google Translate breakthrough, its NMT (neural network-based machine translation) offering, claimed to have achieved significant progress in translation quality and readability. Sounds like a major breakthrough worthy of attention and celebration.

The report says:

Ten years ago, we released Google Translate, the core algorithm behind this service is PBMT: Phrase-Based Machine Translation.  Since then, the rapid development of machine intelligence has given us a great boost in speech recognition and image recognition, but improving machine translation is still a difficult task.

Today, we announced the release of the Google Neural Machine Translation (GNMT) system, which utilizes state-of-the-art training techniques to achieve the largest improvements in machine translation quality to date. For a full review of our findings, please see our paper "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation."

A few years ago, we began using RNNs (Recurrent Neural Networks) to directly learn the mapping of an input sequence (such as a sentence in one language) to an output sequence (the same sentence in another language). Whereas phrase-based machine translation (PBMT) breaks an input sentence into words and phrases and then translates them largely independently, NMT treats the entire input sentence as the basic unit of translation.

The advantage of this approach is that, compared to the previous phrase-based translation system, it requires less engineering design. When first proposed, NMT's accuracy on a medium-sized public benchmark data set was comparable to that of a phrase-based translation system. Since then, researchers have proposed a number of techniques to improve NMT, including modeling external alignment models to handle rare words, using attention to align input and output words, and breaking words into smaller units to cope with rare words. Despite these advances, the speed and accuracy of NMT had not been able to meet the requirements of a production system such as Google Translate. Our new paper describes how we overcame the many challenges of making NMT work on very large data sets, and how we built a system that is both fast and accurate enough to deliver better translations for Google's users and services.

............

Using side-by-side comparisons of human assessments as a standard, the GNMT system translates significantly better than the previous phrase-based production system. With the help of bilingual human assessors, we found on sample sentences from Wikipedia and news websites that GNMT reduced translation errors by 55% to 85% or more for several major language pairs.

In addition to publishing this research paper today, we have also announced that GNMT will be put into production in a very difficult language pair (Chinese-English) translation.

Now, the Chinese-to-English translations in the mobile and web versions of Google Translate are produced 100% by the GNMT engine - about 18 million translations per day. GNMT's production deployment uses our open-source machine learning toolkit TensorFlow and our Tensor Processing Units (TPUs), which provide sufficient computational power to deploy these powerful GNMT models while meeting Google Translate's strict latency requirements.

Chinese-to-English translation is one of the more than 10,000 language pairs supported by Google Translate. In the coming months, we will continue to extend our GNMT to far more language pairs.

Google Translate's GNMT achieves a major breakthrough!

As an old machine translation researcher, I cannot resist this temptation. I cannot wait to try this latest version of Google Translate for Chinese-to-English.
Previously I tried Google's Chinese-to-English online translation multiple times; the overall output was not very readable, and certainly not as good as its competitor Baidu's. With this newest breakthrough, using deep learning with neural networks, it is believed to come close to human translation quality. I have a few hundred Chinese blog posts on NLP waiting to be translated as a trial. I was looking forward to this first attempt, using Google Translate on my science popularization blog titled Introduction to NLP Architecture. My adventure is about to start. Now is the time to witness the miracle, if miracles do exist.

Dong:
I hope you will not be disappointed. I have jokingly said before: rule-based machine translation is a fool, statistical machine translation is a madman, and now I continue the ridicule: neural machine translation is a "liar" (I am not referring to the developers behind NMT). Language is not a cat face or the like; surface fluency alone does not work, the content must be faithful to the original!

Wei:
Let us experience the magic, please listen to this translated piece of my blog:

This is my Introduction to NLP Architecture, fully automatically translated by Google Translate yesterday (10/2/2016) and fully automatically read out without any human intervention. I have to say, this is way beyond my initial expectations and beliefs.

Listen to it for yourself, the automatic speech generation of this science blog of mine is amazingly clear and understandable. If you are an NLP student, you can take it as a lecture note from a seasoned NLP practitioner (definitely clearer than if I were giving this lecture myself, with my strong accent). The original blog was in Chinese and I used the newest Google Translate claimed to be based on deep learning using sentence-based translation as well as character-based techniques.

Prof. Dong, you know my background and my originally doubtful mindset. However, in the face of such progress, far beyond what we imagined possible for automatic translation, in terms of both quality and robustness, when I started my NLP career in MT training 30 years ago, I have to say that it is a dream come true in every sense.

Dong:
In their terminology, it is "less adequate, but more fluent." Machine translation has gone through three paradigm shifts. When people find that it can only be a good information-processing tool, and cannot really replace human translation, they will choose the less costly option.

Wei:
In any case, this small test is revealing to me. I am still feeling overwhelmed to see such a miracle live. Of course, what I have just tested is formal-style text on a computer and NLP topic; it certainly hit a sweet spot with adequate training corpus coverage. But compared with the pre-NN time, when I used both Google SMT and Baidu SMT to help with my translation, this breakthrough is amazing. As a senior old-school practitioner of rule-based systems, I would like to pay deep tribute to our "neural network" colleagues. They are a group of extremely talented, crazy guys. I would like to quote Jobs' famous words here:

“Here's to the crazy ones. The misfits. The rebels. The troublemakers. The round pegs in the square holes. The ones who see things differently. They're not fond of rules. And they have no respect for the status quo. You can quote them, disagree with them, glorify or vilify them. About the only thing you can't do is ignore them. Because they change things. They push the human race forward. And while some may see them as the crazy ones, we see genius. Because the people who are crazy enough to think they can change the world, are the ones who do.”

@Mao, this counts as my most recent feedback to the Google scientists and their work. Last time, a couple of months ago, when they released their parser, proudly claiming it to be "the most accurate parser in the world", I wrote a blog to ridicule them after performing a serious, apples-to-apples comparison with our own parser. This time, they used the same underlying technology to announce this new MT breakthrough with similar pride, and I am happily expressing my deep admiration for their wonderful work. This contrast in my attitudes looks a bit weird, but it is all based on facts of life. In the case of parsing, this school suffers from a lack of naturally labeled data with which to perfect the quality, especially when it has to port to new domains or genres beyond the news corpora. After all, what exists in the language sea is raw text with linear strings of words, while the corresponding parse trees are only occasional, artificial objects made by linguists in a limited scope by nature (e.g. the Penn Treebank, or other news-genre parse trees by the Google annotation team). But MT is different: it is a unique NLP area with almost endless, high-quality, naturally occurring "labeled" data in the form of human translations, which have never stopped accumulating since ages ago.

Mao: @wei That is to say, you now embrace or endorse a neuron-based MT, a change from your previous views?

Wei:
Yes I do embrace and endorse the practice. But I have not really changed my general view wrt the pros and cons between the two schools in AI and NLP. They are complementary and, in the long run, some way of combining the two will promise a world better than either one alone.

Mao: What is your real point?

Wei:
Despite the biases we are all born with, more or less, by human nature, conditioned by what we have done and where we come from in technical background, we all need to observe and respect the basic facts. Just listen to the audio of their GNMT translation by clicking the link above: the fluency and even the faithfulness to my original text have in fact outperformed an ordinary human translator, in my best judgment. If I gave this lecture in a classroom and asked an average interpreter without sufficient knowledge of my domain to translate on the spot for me, I bet he would have a hard time performing better than the Google machine above (of course, human translation gurus are an exception). This miracle-like fact has to be observed and acknowledged. On the other hand, as I said before, no matter how deep the learning reaches, I still do not see how they can catch up with the quality of my deep parsing in the next few years, when they have no way of magically gaining access to the huge labeled tree data they depend on, especially across the variety of domains and genres. They simply cannot "make bricks without straw" (as an old Chinese saying goes, even the most capable housewife can hardly cook a good meal without rice). In the natural world there are no syntactic trees and structures to learn from; there are only linear sentences. The deep learning breakthrough seen so far is still mainly supervised learning, which has an almost insatiable appetite for massive labeled data, forming its limiting knowledge bottleneck.

Mao: I'm confused. Which one do you believe stronger? Who is the world's No. 0?

Wei:
Parsing-wise, I am happy to stay as No. 0 if Google insists on being No. 1 in the world. As for MT, it is hard to say, from what I see, between their breakthrough and some highly sophisticated rule-based MT systems out there. But what I can say is that, at a high level, the trend of mainstream statistical MT winning space over old-school rule-based MT, both in industry and in academia, is more evident today than before. This is not to say that rule-based MT is no longer viable or coming to an end. There are aspects in which SMT still cannot beat rule-based MT. For example, certain types of seemingly stupid mistakes made by GNMT (quite a few laughable examples of totally wrong or opposite translations have been shown in this salon in the last few days) are almost never seen in rule-based MT systems.

Dong:
here is my try of GNMT from Chinese to English:

学习上,初二是一个分水岭,学科数量明显增多,学习方法也有所改变,一些学生能及时调整适应变化,进步很快,由成绩中等上升为优秀。但也有一部分学生存在畏难情绪,将心思用在学习之外,成绩迅速下降,对学习失去兴趣,自暴自弃,从此一蹶不振,这样的同学到了初三往往很难有所突破,中考的失利难以避免。

(A faithful human translation for reference: In terms of study, the second year of junior high school is a watershed. The number of subjects increases noticeably and study methods change. Some students adjust to the changes in time, progress quickly, and rise from average to excellent. But some students are daunted, put their minds on things other than study, see their grades drop rapidly, lose interest in learning, give up on themselves, and never recover. Such students usually find it hard to make a breakthrough by the third year, and failure in the high school entrance exam is hard to avoid.)

Learning, the second of a watershed, the number of subjects significantly significantly, learning methods have also changed, some students can adjust to adapt to changes in progress, progress quickly, from the middle to rise to outstanding. But there are some students there is Fear of hard feelings, the mind used in the study, the rapid decline in performance, loss of interest in learning, self-abandonment, since the devastated, so the students often difficult to break through the third day,

Mao: This translation cannot be said to be good at all.

Wei:
Right, that is why it calls for an objective comparison to answer your previous question. Currently, as I see it, the training data for social media and casual text are certainly not sufficient, hence the translation quality for online messages is still not their forte. As for the text sample Prof. Dong showed us above, Mao said the Google translation is not as good as expected. Even so, I still see impressive progress there. Before the deep learning era, SMT results from Chinese to English were hardly readable; now they can generally be read aloud and roughly understood. There is a lot of progress worth noting here.

Ma:
In fields with big data, DL methods have been advancing by leaps and bounds in recent years. I know a number of experts who used to be biased against DL but changed their views when they saw the results. However, DL has so far been basically ineffective in the IR field, though there are signs of it slowly penetrating IR.

Dong:
The key to NMT is that it "looks nice". To people who do not understand the original source text, it sounds like a smooth translation. But isn't a translation a "liar" if it loses faithfulness to the original? This is the Achilles' heel of NMT.

Ma: @Dong, I think all statistical methods have this aching point.

Wei:
Indeed, there are respective pros and cons. Today I have listened to the Google translation of my blog three times and am still amazed at what they have achieved. There are always some mistakes I can pick up here and there. But to err is human, let alone a machine, right? And the community will not stop advancing and correcting mistakes. From the intelligibility and fluency perspectives, I have been served super satisfactorily today. And this happened between two languages with no historical kinship whatsoever.

Dong:
Some leading managers said to me years ago: "In fact, even if machine translation is only 50 percent correct, it does not matter. The problem is that it cannot tell me which half it cannot translate well. If it could, I could always save half the labor and hire a human translator to translate only the other half." I replied that I was not able to make a system do that. I have been concerned about this issue ever since, up until today, when there is a lot of noise about MT replacing human translation any time now. It's kind of like saying that once you have McDonald's, you do not need a fine restaurant for French delicacies. Not to mention that machine translation today still cannot be compared even to McDonald's. Computers, with machine translation and the like, are in essence a toy given by God for us humans to play with. God never agreed to equip us with the ability to copy ourselves.

Why did GNMT first choose language pairs like Chinese-to-English, and not the other way round, to showcase? This is very shrewd of them. Even if the translation is wrong or misses the point, the output in this new model is usually at least fluent, unlike that of the traditional models, which looks and sounds broken, silly and erroneous. This is characteristic of NMT: it selects the greatest similarity from the translation corpus. As the vast majority of English readers do not understand Chinese, it is easy to impress them with how great the new MT is, even for a difficult language pair.

Wei:
Correct. A closer look reveals that this "breakthrough" lies more in the fluency of the target language than in faithfulness to the source language, achieving readability at the cost of accuracy. But this is just the beginning of a major shift. I can fully understand the GNMT people's joy and pride in front of a breakthrough like this. In our careers, we do not often have that type of moment for celebration.

Deep parsing is the crown jewel of NLP. It remains to be seen how they can beat us in handling domains and genres that lack labeled data. I wish them good luck; the day they prove they make better parsers than mine will be the day of my retirement. That day does not look anything like it is drawing near, to my mind. I wish I were wrong, so I could travel the world worry-free, knowing that my dream has been better realized by my colleagues.

Thanks to Google Translate at https://translate.google.com/ for helping to translate this Chinese blog into English, which was post-edited by myself. 

 

[Related]

Wei’s Introduction to NLP Architecture Translated by Google

"OVERVIEW OF NATURAL LANGUAGE PROCESSING"

"NLP White Paper: Overview of Our NLP Core Engine"

Introduction to NLP Architecture

It is untrue that Google SyntaxNet is the "world’s most accurate parser"

Announcing SyntaxNet: The World’s Most Accurate Parser Goes Open

Is Google SyntaxNet Really the World’s Most Accurate Parser?

Dr Li's NLP Blog in English