Gu: And certain high-powered circles seem inclined to drop the little words. In some exchanges among Wall Street investment bankers or Silicon Valley types, for instance, too many function words draw contempt: not concise, not sexy. That is probably human nature, not something uniquely Chinese. One example, from Liar's Poker: a trader was jumping ship, the boss appealed to loyalty to keep him, and he answered, "You want loyalty, hire a cocker spaniel."
Gu: "Long time no see" is said to have arisen after Chinese invaded English; people simply found it natural, and the British and Americans adopted it too. The phrase puzzled me for a long time; what I found online says as much, though that may not be rigorous scholarship.
Me: "Long time no see" is the most direct showcase of the unveiled beauty of my Eastern tongue. Westerners suddenly woke up: so language can be this concise, this uncovered. They could accept it because, as luck would have it, it matched a common pragmatic scene, one of the stock phrases friends exchange on meeting, East and West alike. With pragmatics helping out, syntax can afford to be sloppy; that is the reason behind the formation of new idioms (set phrases) of this kind.
Speaking of substance: many projects, especially application-oriented ones, are simply not rocket science, and one cannot demand that a proposal promise some breakthrough. The proposal writer is the lobbying party, obliged to highlight the merits of the plan, and in painting the vision there is no avoiding the rhetoric of this breakthrough and that revolution, which partly caters to the taste of funding agencies for grand achievements; in reality, few research projects contain that many brilliant ideas or revolutionary turns in science. (Even pure science sees few breakthroughs, let alone applied research.) "Miracles" in applied fields are usually rooted in the accumulation of detail (the Devil is in the details), not in breakthroughs of principle. And the details of the problem domain are what I am sure of; that is my strength, and the reason my research plans tend to carry conviction. At times one must also bow to "fashion": bootstrapping and other machine self-learning algorithms are in vogue, and though they are immature and rarely solve real problems, listing them in a grant proposal helps approval. Nor need one worry that the fashionable-sounding approach will fail in the end: research being exploratory, the final solution may well take another road entirely. Put bluntly, hanging a sheep's head to sell dog meat is not an honest research attitude, but hanging up both the sheep's head and the dog's head before selling dog meat is perfectly fine. Never hang yourself on a single tree.
Of course. Subcategories mean small rules, and small rules take precedence over big ones. Among language rules, the broad-class rules (POS-based rules) are the coarsest; they are the default rules and involve no specific subcategories (subcat in the broad sense). Subcat-based rules rank next, sub-subcat-based next again, and so on down to rules keyed to literals (word-driven), which are the most specific and take highest priority, covering idioms and fixed collocations.
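To make the priority ladder concrete, here is a minimal sketch; the rule format, pattern names, and matcher are invented for illustration (they mix tokens with category symbols for brevity), not the author's actual formalism:

```python
# Sketch of specificity-ordered rule matching (hypothetical formalism).
# Rules are tried from the most specific (word-driven literals, idioms)
# down to the coarsest POS-based defaults; the first match wins.

RULES = [
    # literal (word-driven): idioms and fixed collocations, highest priority
    {"level": "literal",    "pattern": ("kick", "the", "bucket"), "action": "IDIOM_DIE"},
    # sub-subcat rule
    {"level": "sub-subcat", "pattern": ("V_dative", "NP", "to", "NP"), "action": "VP_DATIVE"},
    # subcat rule
    {"level": "subcat",     "pattern": ("V_trans", "NP"), "action": "VP_TRANS"},
    # POS-based default, lowest priority
    {"level": "POS",        "pattern": ("V",), "action": "VP_DEFAULT"},
]

def match(tokens):
    """Return the action of the first (i.e. most specific) rule that fits."""
    for rule in RULES:                      # list order encodes priority
        n = len(rule["pattern"])
        if tuple(tokens[:n]) == rule["pattern"]:
            return rule["action"], rule["level"]
    return None

print(match(["kick", "the", "bucket", "NP"]))  # ('IDIOM_DIE', 'literal')
print(match(["V", "NP"]))                      # ('VP_DEFAULT', 'POS')
```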
王伟DL
The article glows with the luster of hands-on experience; different readers will absorb and reflect different spectral lines from it. I devoured it greedily in one sitting. In many places I could only nod, yes, yes, yes; in a few I wanted to voice my own view, but the words failed me. "...referring to the forest automatically formed from the grammar trees of a large corpus, several orders of magnitude bigger than PennTree." How I envy that big fellow; in the big body dwells big wisdom!
@算文解字: A dialogue of top masters, brimming with ideas, an article one could study like a secret martial-arts manual, and nobody has reposted it... Strongly recommended!
算文解字
Dependency relations are indeed more usable //@立委_米拉: (1) Layering is the right path; at minimum two layers, a basic-phrase layer and a syntactic-relations layer. (2) Incidentally, as a representation of the output, phrase structure is far inferior to dependency representation. Phrase structure piles bed upon bed: clumsy to use, and not logical or universal enough (it suits free-word-order languages poorly). That last point, of course, is a separate topic, beyond the mere CFG vs FSG debate.
算文解字
Fair enough. What Teacher Jing criticizes is the "fundamentalist" CFG style of generation that handles phenomena of different levels with rules at a single level, and the remedy he proposes is layered processing with FSTs. Adopting a coarse-to-fine (layered) strategy under CFG arrives at the same place by a different road. //@沈李斌AI: No need to reject CFG. The CFG tree is the result of generation, not the procedure of generation. Design a good coarse-to-fine generation strategy and control the perplexity and recall of every step.
Sydney Brenner, Senior Fellow, Salk Institute for Biological Studies (2002 Nobel laureate for his contributions to the genetic code)
Marvin Minsky, Professor of Media Arts and Sciences, MIT
Noam Chomsky, Professor, Department of Linguistics and Philosophy, MIT
Emilio Bizzi, Professor of Brain Sciences, MIT
Barbara H. Partee, Professor, Department of Linguistics and Philosophy, University of Massachusetts
Patrick H. Winston, Professor of Artificial Intelligence and Computer Science, MIT
Chomsky's main points:
A. Chomsky grants that statistical language models have had engineering success, but holds that this has nothing to do with science.
B. Modeling linguistic facts is like collecting butterfly specimens. What science (and linguistics in particular) wants is fundamental principles.
C. Statistical models are incomprehensible; they offer no insight into the object of study.
D. Statistical models may simulate some phenomena accurately, but this is a wrong turn. People do not predict the next word from the two words before it; people generate sentences (word sequences) by going from inner semantics to tree structure and from there to the surface linear sequence of words.
E. Statistical models have been proven incapable of learning language; therefore language must be innate, and explaining language with language models is a waste of time.
Norvig's main responses:
A. Engineering success is indeed not a scientific goal. But science and engineering fly wing to wing, and engineering success can serve as evidence for a scientifically successful model.
B. Science is a mixture of facts and theories; letting theory lord it over facts is unwise. In the history of science, the steady accumulation of facts is the normal path of research, not an aberration, and the science of language should be no exception.
C. A statistical model with billions of parameters is indeed hard to grasp intuitively, and no individual can audit what each parameter means. But one can still judge whether a statistical model is reasonable by understanding the properties of the model as a whole: how it works when it is effective, why it fails when it is not, how it learns its function from the data, and so on.
D. Markov models over word probabilities indeed cannot model every linguistic phenomenon, just as simple tree-structure models without probabilities cannot model every linguistic phenomenon (see the bigram sketch after this list). What we need are richer probabilistic models covering words, tree structures, semantics, context, discourse, and the other levels of language. Chomsky cannot condemn all statistical language models for the shortcomings of the old ones. Among those who study how language is interpreted (in speech recognition, for example), the overwhelming majority agree that interpretation is a problem of probability: when a stream of speech reaches my ear, recovering the speaker's meaning from it is probabilistic inference. Einstein said to make things as simple as possible, but no simpler. Many scientific phenomena are stochastic, and the simplest model of such phenomena is the probabilistic model; language is such a phenomenon, so probabilistic models are the best tool for capturing linguistic facts.
E. In 1967, Gold's theorem established theoretical limits on what formalized mathematical languages can be identified by logical deduction. But this has nothing to do with the problem facing learners of natural language. In any case, by 1969 we knew that probabilistic inference escapes that limitation (Horning proved that probabilistic context-free grammars, PCFGs, are learnable). I agree with Chomsky that humans have an innate endowment for learning language. But we still know too little about how probabilistic representations of language are acquired, and about statistical learning itself. Very likely human language learning involves probabilistic and statistical inference, but we do not know the details.
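To make point D concrete, here is a tiny word-bigram Markov model of the kind under discussion (a sketch, not Norvig's code). Once the context window is a single word, the model cannot see whether the subject was "I" or "she":

```python
# Sketch: a word-bigram Markov model. It predicts each word from the
# previous word only, so any dependency longer than one word -- e.g.
# the number agreement in "I ... fiddle" vs "she ... fiddles" -- is
# invisible once even one word intervenes.

from collections import Counter, defaultdict

def train_bigram(corpus):
    counts = defaultdict(Counter)
    for sent in corpus:
        words = ["<s>"] + sent.split()
        for prev, cur in zip(words, words[1:]):
            counts[prev][cur] += 1
    return counts

def p(counts, prev, cur):
    total = sum(counts[prev].values())
    return counts[prev][cur] / total if total else 0.0

corpus = ["i never ever fiddle", "she never ever fiddles"]
m = train_bigram(corpus)
# After "ever", both continuations look equally good to the model,
# regardless of whether the subject was "i" or "she":
print(p(m, "ever", "fiddle"), p(m, "ever", "fiddles"))  # 0.5 0.5
```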
1) I never, ever, ever, ever, ... fiddle around in any way with electrical equipment.
2) She never, ever, ever, ever, ... fiddles around in any way with electrical equipment.
3) * I never, ever, ever, ever, ... fiddles around in any way with electrical equipment.
4) * She never, ever, ever, ever, ... fiddle around in any way with electrical equipment.
· "It is neutral green, colorless green, like the glaucous water lying in a cellar." The Paris we remember, Elisabeth Finley Thomas (1942).
· "To specify those green ideas is hardly necessary, but you may observe Mr. [D. H.] Lawrence in the role of the satiated aesthete." The New Republic: Volume 29 p. 184, William White (1922).
· "Ideas sleep in books." Current Opinion: Volume 52, (1912).
· "Not gonna do it. Wouldn't be prudent." (Dana Carvey, impersonating George H. W. Bush)
· "Thinks he can outsmart us, does he?" (Evelyn Waugh, The Loved One)
· "Likes to fight, does he?" (S.M. Stirling, The Sunrise Lands)
· "Thinks he's all that." (Kate Brian, Lucky T)
· "Go for a walk?" (countless dog owners)
· "Gotcha!" "Found it!" "Looks good to me!" (common expressions)
Linguists can argue endlessly over how to explain the phenomena above. But the diversity of language looks far too complex to be captured by a Boolean (true or false) value of a pro-drop parameter. A theoretical framework should not place simplicity above fidelity to reality.
Technology is not the problem (idiots excepted; if you managed to hire an idiot who can only bluff, your due diligence was too poor to blame anyone else).
Nick: Ha, the old routine: trash others to praise yourself.
Quite so, the melon vendor praising his own melons. Still, it happens to be objective fact. Recommending one's own is no sin; one should not insist "it cannot be done" merely because oneself can do it. In the end the system speaks for itself.
Of course, doing this well (precision approaching human analysis, robustness that can face a monster like social media, efficiency reaching a linear-time, real-time implementation) is certainly not achieved at one stroke. There is an n-times-ten-thousand-hours law here. Roughly: entering NLP takes 10,000 hours (about five years on the job); finding the feel of it takes 20,000; falling into a few instructive pits takes 30,000; real facility takes 40,000; and if at 50,000 hours (25 years in the trade) you still have not been washed out, you may become an immortal. That is a feeling of working as if divinely assisted, of moving as if through empty ground; few ever taste it. Enough.
For the sake of so-called recursion in language, the human brain, or the computer, is supposed to carry a stack structure. That is far removed from linguistic fact, and it violates the limits of human short-term memory. Who on earth speaks by only opening doors and never closing them, only writing left brackets and never right ones, leaving everything dangling in suspense? Three nested doors at most; past that, ordinary people give up. Even if you are superhuman and can stand it, your audience cannot; they simply cannot parse it. We speak in order to communicate; surely not to make things deliberately hard, to speak precisely so as not to be understood? It doesn't make sense.
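A minimal sketch of the point, with a bracket string standing in for embedded clauses: a recognizer that allows at most three unclosed "doors", the bound the paragraph argues real listeners live within:

```python
# Sketch: a depth-bounded recognizer in the spirit of the argument above.
# Instead of an unbounded stack (full recursion), it tolerates at most
# MAX_DEPTH levels of unfinished embedding -- the "three doors" a human
# listener can keep open.

MAX_DEPTH = 3

def acceptable(s):
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
            if depth > MAX_DEPTH:
                return False  # more open doors than short-term memory allows
        elif ch == ")":
            if depth == 0:
                return False  # closing a door that was never opened
            depth -= 1
    return depth == 0  # every opened door eventually closed

print(acceptable("(()())"))    # True: depth never exceeds 2
print(acceptable("(((())))"))  # False: 4 levels of embedding
```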
Quora has a question with discussions on "Why is machine learning used heavily for Google's ad ranking and less for their search ranking?" A lot of people I've talked to at Google have told me that the ad ranking system is largely machine learning based, while search ranking is rooted in functions that are written by humans using their intuition (with some components using machine learning).
Surprise? Contrary to what many people have believed, Google search consists of hand-crafted functions using heuristics. Why?
One very popular reply there is from Edmond Lau, ex-Google Search Quality engineer, who said something we have experienced ourselves and have pointed out over and over in my past blogs on Machine Learning vs. Rule System: it is very difficult to debug an ML system for specific observed quality bugs, while a rule system, if designed modularly, is easy to control and fine-tune:
From what I gathered while I was there, Amit Singhal, who heads Google's core ranking team, has a philosophical bias against using machine learning in search ranking. My understanding for the two main reasons behind this philosophy is:
In a machine learning system, it's hard to explain and ascertain why a particular search result ranks more highly than another result for a given query. The explainability of a certain decision can be fairly elusive; most machine learning algorithms tend to be black boxes that at best expose weights and models that can only paint a coarse picture of why a certain decision was made.
Even in situations where someone succeeds in identifying the signals that factored into why one result was ranked more highly than another, it's difficult to directly tweak a machine learning-based system to boost the importance of certain signals over others in isolated contexts. The signals and features that feed into a machine learning system tend to only indirectly affect the output through layers of weights, and this lack of direct control means that even if a human can explain why one web page is better than another for a given query, it can be difficult to embed that human intuition into a system based on machine learning.
Rule-based scoring metrics, while still complex, provide a greater opportunity for engineers to directly tweak weights in specific situations. From Google's dominance in web search, it's fairly clear that the decision to optimize for explainability and control over search result rankings has been successful at allowing the team to iterate and improve rapidly on search ranking quality. The team launched 450 improvements in 2008 [1], and the number is likely only growing with time.
Ads ranking, on the other hand, tends to be much more of an optimization problem where the quality of two ads is much harder to compare and intuit than two web page results. Whereas web pages are fairly distinctive and can be compared and rated by human evaluators on their relevance and quality for a given query [2], the short three- or four-line ads that appear in web search all look fairly similar to humans. It might be easy for a human to identify an obviously terrible ad, but it's difficult to compare two reasonable ones:
Branding differences, subtle textual cues, and behavioral traits of the user, which are hard for humans to intuit but easy for machines to identify, become much more important. Moreover, different advertisers have different budgets and different bids, making ad ranking more of a revenue optimization problem than merely a quality optimization problem. Because humans are less able to understand the reasoning behind an ad ranking decision that may work well empirically, explainability and control -- both of which are important for search ranking -- become comparatively less useful in ads ranking, and machine learning becomes a much more viable option.
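A hypothetical illustration of the tweakability contrast Lau describes (not Google's actual code): in a rule-based scorer, one human intuition becomes one directly editable line, whereas in a learned model the same intuition can only reach the output indirectly, through features, weights, and retraining.

```python
# Hypothetical rule-based scorer: hand-tuned, inspectable weights,
# plus a direct, local tweak for one isolated context.

def rule_based_score(doc, query):
    score = (2.0 * doc["title_match"]      # each weight is visible and editable
             + 1.0 * doc["body_match"]
             + 0.5 * doc["pagerank"])
    # Direct tweak: for navigational queries, trust official sites more.
    if query["is_navigational"] and doc["is_official_site"]:
        score += 3.0  # an engineer can adjust this one number in isolation
    return score

doc = {"title_match": 1.0, "body_match": 0.0, "pagerank": 0.6,
       "is_official_site": True}
print(rule_based_score(doc, {"is_navigational": True}))  # 5.3
```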
Edmond Lau's answer is great, but I wanted to add one more important piece of information.
When I was on the search team at Google (2008-2010), many of the groups in search were moving away from machine learning systems to rule-based systems. That is to say, Google Search used to use more machine learning, and then went the other direction because the team realized they could make faster improvements to search quality with a rule-based system. It's not just a bias; it's something that many sub-teams of search tried out and preferred.
I was the PM for Images, Video, and Local Universal - 3 teams that focus on including the best results when they are images, videos, or places. For each of those teams I could easily understand and remember how the rules worked. I would frequently look at random searches and their results and think "Did we include the right Images for this search? If not, how could we have done better?". And when we asked that question, we were usually able to think of signals that would have helped - try it yourself. The reasons why *you* think we should have shown a certain image are usually things that Google can actually figure out.
Part of the answer is legacy, but a bigger part of the answer is the difference in objectives, scope and customers of the two systems.
The customer for the ad-system is the advertiser (and by proxy, Google's sales dept). If the machine-learning system does a poor job, the advertisers are unhappy and Google makes less money. Relatively speaking, this is tolerable to Google. The system has an objective function ($) and machine learning systems can be used when they can work with an objective function to optimize. The total search-space (# of ads) is also much much smaller.
The search ranking system has a very subjective goal - user happiness. CTR, query volume etc. are very inexact metrics for this goal, especially on the fringes (i.e. query terms that are low-volume/volatile). While much of the decisioning can be automated, there are still lots of decisions that need human intuition.
To tell whether site A is better than site B for topic X with limited behavioural data is still a very hard problem. It degenerates into lots of little messy rules and exceptions that try to impose a fragile structure onto human knowledge, and that necessarily need tweaking.
An interesting question is: is the Google search index (and associated semantic structures) catching up (in size and robustness) to the subset of the corpus of human knowledge that people are interested in and searching for?
My guess is that right now the gap is probably growing -- i.e. interesting/search-worthy human knowledge is growing faster than Google's index. Amit Singhal's job is probably getting harder every year. By extension, there are opportunities for new search providers to step into the increasing gap with unique offerings.
p.s: I used to manage an engineering team for a large search provider (many years ago).
A couple of days ago I had coffee with Peter Norvig. Peter is currently Director of Research at Google. For several years until recently, he was the Director of Search Quality -- the key man at Google responsible for the quality of their search results. Peter also is an ACM Fellow and co-author of the best-selling AI textbook Artificial Intelligence: A Modern Approach. As such, Peter's insights into search are truly extraordinary.
I have known Peter since 1996, when he joined a startup called Junglee, which I had started together with some friends from Stanford. Peter was Chief Scientist at Junglee until 1998, when Junglee was acquired by Amazon.com. I've always been a great admirer of Peter and have kept in touch with him through his short stint at NASA and then at Google. He's now taking a short leave of absence from Google to update his AI textbook. We had a fascinating discussion, and I'll be writing a couple of posts on topics we covered.
It has long been known that Google's search algorithm actually works at 2 levels:
An offline phase that extracts "signals" from a massive web crawl and usage data. An example of such a signal is page rank. These computations need to be done offline because they analyze massive amounts of data and are time-consuming. Because these signals are extracted offline, and not in response to user queries, they are necessarily query-independent. You can think of them as tags on the documents in the index. There are about 200 such signals.
An online phase, in response to a user query. A subset of documents is identified based on the presence of the user's keywords. Then, these documents are ranked by a very fast algorithm that combines the 200 signals in-memory using a proprietary formula.
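A schematic of this two-level architecture; the signal names, weights, and values are invented for illustration:

```python
# Schematic of the offline/online split described above (all numbers
# made up). Offline: compute ~200 query-independent signals per
# document. Online: retrieve candidates by keywords, then combine the
# precomputed signals with a fast hand-tuned formula.

# --- offline phase (slow, query-independent) ---
index = {
    "page1.html": {"pagerank": 0.8, "spam_score": 0.1, "freshness": 0.3},
    "page2.html": {"pagerank": 0.4, "spam_score": 0.2, "freshness": 0.9},
}

WEIGHTS = {"pagerank": 5.0, "spam_score": -8.0, "freshness": 2.0}  # hand-tuned

# --- online phase (fast, per query) ---
def rank(candidates):
    def score(doc):
        signals = index[doc]
        return sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)
    return sorted(candidates, key=score, reverse=True)

print(rank(["page1.html", "page2.html"]))  # page1 first (3.8 vs 2.2)
```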
The online, query-dependent phase appears to be made-to-order for machine learning algorithms. Tons of training data (both from usage and from the armies of "raters" employed by Google), and a manageable number of signals (200) -- these fit the supervised learning paradigm well, bringing into play an array of ML algorithms from simple regression methods to Support Vector Machines. And indeed, Google has tried methods such as these. Peter tells me that their best machine-learned model is now as good as, and sometimes better than, the hand-tuned formula on the results quality metrics that Google uses.
The big surprise is that Google still uses the manually-crafted formula for its search results. They haven't cut over to the machine learned model yet. Peter suggests two reasons for this. The first is hubris: the human experts who created the algorithm believe they can do better than a machine-learned model. The second reason is more interesting. Google's search team worries that machine-learned models may be susceptible to catastrophic errors on searches that look very different from the training data. They believe the manually crafted model is less susceptible to such catastrophic errors on unforeseen query types.
This raises a fundamental philosophical question. If Google is unwilling to trust machine-learned models for ranking search results, can we ever trust such models for more critical things, such as flying an airplane, driving a car, or algorithmic stock market trading? All machine learning models assume that the situations they encounter in use will be similar to their training data. This, however, exposes them to the well-known problem of induction in logic.
The classic example is the Black Swan, popularized by Nassim Taleb's eponymous book. Before the 17th century, the only swans encountered in the Western world were white. Thus, it was reasonable to conclude that "all swans are white." Of course, when Australia was discovered, so were the black swans living there. Thus, a black swan is shorthand for something unexpected that is outside the model.
Taleb argues that black swans are more common than commonly assumed in the modern world. He divides phenomena into two classes:
Mediocristan, consisting of phenomena that fit the bell curve model, such as games of chance, height and weight in humans, and so on. Here future observations can be predicted by extrapolating from variations in statistics based on past observation (for example, sample means and standard deviations).
Extremistan, consisting of phenomena that don't fit the bell curve model, such as search queries, the stock market, the length of wars, and so on. Such phenomena can sometimes be modeled using power laws or fractal distributions, and sometimes not. In many cases, the very notion of a standard deviation is meaningless.
Taleb makes a convincing case that most real-world phenomena we care about actually inhabit Extremistan rather than Mediocristan. In these cases, you can make quite a fool of yourself by assuming that the future looks like the past.
The current generation of machine learning algorithms can work well in Mediocristan but not in Extremistan. The very metrics these algorithms use, such as precision, recall, and root-mean-square error (RMSE), make sense only in Mediocristan. It's easy to fit the observed data and fail catastrophically on unseen data. My hunch is that humans have evolved to use decision-making methods that are less likely to blow up on unforeseen events (although not always, as the mortgage crisis shows).
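A small sketch of the contrast: repeated sample means from a bell-curve (Mediocristan-style) distribution versus a heavy-tailed (Extremistan-style) one. The distributions and parameters are illustrative choices:

```python
# Sketch of Taleb's two regimes: sample means from a Gaussian stabilize
# quickly; in a heavy-tailed (power-law) world a single draw can
# dominate everything seen so far.

import random

random.seed(42)

def sample_means(draw, n=10_000, batches=5):
    return [sum(draw() for _ in range(n)) / n for _ in range(batches)]

# Mediocristan: human-height-like Gaussian (mean 170, sd 10)
print(sample_means(lambda: random.gauss(170, 10)))
# Extremistan: Pareto with alpha = 1.1 (the mean barely exists)
print(sample_means(lambda: random.paretovariate(1.1)))
```

The Gaussian means cluster tightly around 170, while the Pareto estimates keep jumping batch to batch: exactly the regime where extrapolating from past statistics misleads.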
I'll leave it as an exercise to the interested graduate student to figure out whether new machine learning algorithms can be devised that work well in Extremistan, or prove that it cannot be done.
Some ask: will this wave of enthusiasm turn out to be another huge bubble, like that of 2000? My observation: yes and no. True, while the big data market is still immature and the models for growth and profit are murky, everyone is swarming in to found companies, invest, and gamble, and the overheated behavior does recall the dot-com bubble at the turn of the century. Yet this wave is not merely a bubble; it carries real substance and potential value, as discussed below. Whether that potential value matches the market's capacity to digest it remains a huge question. One can foresee the scene three to five years out: the phoenix risen from nirvana and the waves that died on the beach will together have written the first movement of the big data symphony.
So-called big data is, above all, a term that took on its specific sense after social media caught fire: data already linked to the background of its agents, not the miscellaneous collections that search engines scrape from the open web. Without social media and its users' social networks as background, "big data" in the purely quantitative sense existed long ago; it spawned the search industry. For search engines, big data is old news: facing the ocean of the internet, the search giants have served hundreds of millions of users with keyword indexing for many years. Every netizen is a beneficiary; an internet without search is hard to imagine. But that is not today's buzzword; today's big data is inseparable from social media. The data mining field has, of course, long combined user profiles with consumption habits, with many results and applications; big data in natural language can be seen as a continuation of that work. In terms of terminology, text mining (from social media big data) is the natural extension of data mining. For language technology, an NLP system must analyze the structure of language and understand its meaning; such intelligent work is a hundred times more complex than indexing keywords, which is why scaling up to big data has long been a bottleneck for natural language technology.
The big data era recognizes data, not people. Of course, In God We Trust. But in everything else we need data. The reason is simple: in an age of information explosion, any individual's energy, ability, and experience are limited, and what one sees and hears is the tip of the iceberg. Big-name influencers are no different; everyone is a blind man feeling the elephant. Only big data mining is qualified to guide a view of the whole.
Once the problem of processing massive data is solved, precision and recall become relatively unimportant. In other words, even a system that is not top-notch, with mediocre precision (say 70%: of 100 items captured, only 70 are correct) and mediocre recall (say 30%: one captured out of every three), can still power an excellent practical system, as long as it scales to big data. The root cause lies in two factors: the redundancy of information in the big data era, and the limits of human information consumption. That a shortfall in recall can be compensated by increasing the amount of data processed is easy to see: information that is valuable, that is statistically significant, cannot be a "single copy"; it is bound to be repeated by many people in many different wordings, so even a low-recall system will catch it. From the consumer's perspective there is no essential difference between an item captured 1,000 times and one captured 900 times; the information is the same information, as long as it is accurate. The real doubt is how a system with unsatisfactory precision can win users' trust: with a 70% system, 30 of every 100 captured items are wrong; is that not a jumble no one can sort out, and what value is left in such a system? Down that line of thought, not just 70%, even a 90% system still shows errors everywhere and seems unfit for use. But this view ignores the sampling and fusion stages inside a real mining system, and so exaggerates the effect of individual errors on the final result. In practice, the typical scene is that, against a massive source, nearly any request from an information seeker has countless potential answers. Since the consumer is human, not a god, even a perfectly error-free ideal system that delivered every result, great and small, would exceed what he can digest (information overload). A practical system must therefore filter and fuse, presenting the statistically most significant results. That filtering and fusion is part of the mining itself, and it guarantees that the quality of the final result is far higher than the system's per-case quality. In short, size matters: more is different. Big data has changed the conditions and ecology of applied technology; big data is far more forgiving of an imperfect engine.
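A back-of-the-envelope version of the recall argument, under the idealized assumption that each redundant mention is an independent chance to catch the information:

```latex
% Chance of catching at least one of n redundant mentions,
% with per-mention recall r (independence assumed):
P(\text{captured}) \;=\; 1 - (1 - r)^n,
\qquad r = 0.3,\; n = 10 \;\Rightarrow\; 1 - 0.7^{10} \approx 0.97 .
```

So a 30%-recall engine that sees ten restatements of the same fact captures it with roughly 97% probability.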
Big data is not the sole basis for decisions, only one basis among several. A sound decision must synthesize all sources of information. Leaving great matters aside, consider how this author bought a washing machine: big data, word of mouth from friends, on-site inspection, and sundry other considerations. To imagine that with big data all is settled is unrealistic. Note, too, that even the same set of results, accepted as a faithful reflection, can support entirely different interpretations; it is in arguing over interpretations that people close in on the truth. A good big data system must therefore create the conditions for users to drill down to confirm or reject an interpretation, and to probe the truth by varying the constraining conditions and comparing the outcomes.
The listed "preconceptions" fall into two kinds. One kind is bias, as in Preconception 1 through Preconception 5. Such bias stems mainly from incomplete induction: its holders may have seen or tried one particular type of rule system, sampled it shallowly, and then jumped to conclusions. Even thieves have their code; this is forgivable, though each instance still deserves correction. The other kind is fallacy, demonstrably absurd, and astonishingly, fallacies too can be this popular. From Preconception 6 onward, all are self-refuting fallacies. Preconception 8, for instance, says rule systems can only analyze well-formed language. Facts speak louder than eloquence: the sentiment mining system we built, with a rule system at its core, processes precisely the unruly language of social media. The large-scale operation and use of that system also refutes Preconception 6.
米拉宝鉴: We should indeed unpack this; no rush, one step at a time. The listed "prejudices" fall into two kinds. One kind is fallacy, demonstrably absurd, such as the claim that rule systems cannot handle social media and can only analyze well-formed language. The other kind is bias; even thieves have their code, and it is forgivable, though it still deserves correction. Such bias stems mainly from incomplete induction: they may have seen or tried one particular type of rule system, sampled it shallowly, and then jumped to conclusions.
Reply: Changing course is fine. Transferring from one school to a new school is natural; were I twenty years younger, I would surely join the converting tide myself. What this article exposes is why the bias is so widespread, taken for granted by so many high-IQ scholars, to the point that one cannot help suspecting a quasi-religious worldview at work. As for the seesaw phenomenon, also known as pressing down the gourd only to have the ladle pop up, I will treat it separately later; there are in fact effective countermeasures. And admittedly, the nature of the statistical approach does make it better at balancing many factors at once.
Because of the ambiguity and complexity of natural language, plus the casualness and irregularity of social media, it is very hard to build an NLP (Natural Language Processing) system whose precision and recall are both high in combination (the so-called F-score). Development practice shows, however, that whether a natural language system is usable often does not hinge on those two metrics. Another, more important metric decides a system's success in the real world: its capacity to handle big data, whether it can truly scale up. Thanks to the rapid development of the computer industry and the maturing of cloud computing, the practical bottleneck of big data processing is usually economic, not technical. The consequences are revolutionary.
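For reference, the F-score mentioned here is the harmonic mean of precision P and recall R; with the mediocre figures discussed above (P = 0.7, R = 0.3):

```latex
F_1 = \frac{2PR}{P+R},
\qquad P = 0.7,\; R = 0.3 \;\Rightarrow\; F_1 = \frac{2 \times 0.7 \times 0.3}{1.0} = 0.42 .
```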
In NLP, the statisticians' victory was total, and it had its historical inevitability; credit where credit is due. The statistical camp harbors many deep-rooted prejudices against traditional rule systems, and many popular conclusions that do not survive scrutiny (to be argued later, the account of blood and tears itemized entry by entry :), but the enormous achievements and benefits of machine learning are plain for all to see and everywhere: machine translation, speech recognition/synthesis, search ranking, spam filtering, document classification, automatic summarization, knowledge acquisition, you name it.
Success that is hard to replicate is like Chinese cooking: same ingredients, same recipe, yet different chefs produce entirely different dishes. This destines Chinese cuisine, however global its reach, however surely it conquers discerning gourmets and wins countless fans at home and abroad, to remain a trade with more pretenders than masters, and so it can never form a giant like McDonald's. Statistical NLP and machine learning are exactly that McDonald's: the taste is monotonous, junk food even, but it reliably fills you when hungry, and above all no drama, no wild swings. In whatever corner of the world, it is a product off the same assembly line, identical in taste and quality.
Take the past decade and more: I have kept at multi-level deep parsing to support all kinds of NLP applications. Watching the statisticians pursue simplicity and shallow processing of massive data, I thought: no wonder that on some tasks, fast and robust as your results are, quality stalls at a threshold it cannot cross. Seen from the conceptual height of "artificial intelligence", shallow learning and deep parsing are simply not in the same class, however "scientific" you are. But say this to the statisticians and, at the time, nobody listened; there was no reasoning with them, because at bottom they despised or ignored linguists, and the chemistry for a conversation among equals simply wasn't there. In the end they worked it out for themselves, and lo, multi-layer deep learning fell from the sky, hailed as a miracle, seemingly taking over all of machine learning overnight, with flocks in pursuit. Many cluck in amazement, many preen, arguing that learning layer by layer, ever deeper, is a revolutionary breakthrough and that quality naturally leaps. I think to myself: this broad principle was crystal clear to me a dozen years ago; different roads, same destination after all. It reminds me of what an old friend, a kindred spirit, commented before deep learning swept the world:
To me, Dr. Li is essentially the only one who actually builds true industrial NLP systems with deep parsing. While the whole world is preoccupied with heavy statistics on shallow linguistics, Dr. Li proved with excellent system performances such a simple truth: deep parsing is useful and doable in large scale real world applications.
[Abstract] Five challenges to keyword-based sentiment classification: (1) domain portability; (2) micro-blogs: sentence/tweet classification is a lot tougher than document classification; (3) when big data become small: the big data load, when sliced and diced based on the users' needs, quickly becomes small, and a precision-challenged classifier is bound to have trouble; (4) association of sentiments with objects: e.g. comparative expressions like "Google is a lot better than Yahoo"; (5) too coarse-grained: no actionable insights -- this is fatal.
I have been doing automated sentiment mining for several years now, and thought about the topic for many more before that (around 2002 I named a project in this direction Value Tagging, code name VTag, did some feasibility studies, and submitted the R&D proposal to my boss; because management disagreed among themselves and cooperation with engineering and product management faltered, my group never got this key project off the ground; conservatively put, the resulting loss of technology hurt the company's take-off). It is time for a simple popular-science summary. This essay, one of the series, discusses the application of machine classification to sentiment extraction.
The mainstream of sentiment extraction is machine-learned, keyword-based classification (sentiment classification), and the usual practice is very coarse-grained: classify the language unit at hand (usually an article, document, or post) as positive or negative, so-called thumbs-up-and-down classification. Later a neutral class was added, and beyond neutral a mixed class (positive and negative at once). The approach is popular and quick, and within a particular domain (say, a movie review forum) classification quality can be high. A former intern of ours did such a summer project with a simple Bayesian algorithm and reached over 90% accuracy on movie review data. The reason is that within a narrow domain the vocabulary of reviews is fairly fixed and limited: positive and negative terms differ in distribution and density, the boundaries are clear, and recognition is not hard. Nor is labeled data in short supply nowadays: ever more user review systems run on the web, such as Amazon and Yelp, accumulating masses of already-classified data, which sets the stage for the broad application of machine classification.
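A minimal sketch of such a thumbs-up/down classifier, using the kind of simple Bayesian algorithm mentioned (toy training data; the actual intern project and its corpus are not shown in the text):

```python
# Minimal thumbs-up/down sentiment classifier: naive Bayes over
# bag-of-words counts. A real system trains on thousands of labeled
# reviews from a narrow domain.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "great movie, loved the acting",
    "wonderful plot and brilliant cast",
    "terrible film, waste of time",
    "boring, awful script",
]
train_labels = ["positive", "positive", "negative", "negative"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)

print(clf.predict(["brilliant acting but boring plot"]))
```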
The second challenge: as the language unit shrinks, the lexical evidence available for classification shrinks with it; the classifier cannot cook without rice, and accuracy suffers accordingly. From document to post to paragraph to short sentence, with every step down in unit size sentiment classification grows harder. This is the root reason why most classification-based sentiment systems deliver poor extraction quality (typically only 50%-60% accuracy) in social media applications dominated by microblogs (tweets). Poor extraction accuracy does not, of course, mean unusable: big data can compensate (thanks to the inherent redundancy of big data, sampling and fusion can lift the overall accuracy of a large source far above the accuracy of individual extractions), so the mined overview of sentiment remains trustworthy. But big data is not always present, even in the big data era, because a real-world application must offer all kinds of data slicing and dicing, which in many scenarios turns big data into small data. That is the third problem, below.
Third, the challenge of slicing big data. We enlisted machines against the information challenge of this era precisely because of the volume. Were the volume small, one could hire analysts in the traditional way and let human analysis supply the intelligence, as client surveys have done for many years. But data is now big: set aside the explosive growth of social media as a whole; even one major brand's fan pages, or one company's official page, generates astonishing data by the hour. No human team can capture and monitor the shifts in intelligence in time to keep adjusting the strategy for interacting with customers. That is the call of the times and the practical basis that make machine mining (whether classification or finer-grained sentiment analysis) indispensable. Observe actual applications and intelligence needs in the field, however, and you find that users are not satisfied with a static, overview-style result; what they need is a tool that can dynamically slice and dice the raw data and the extracted intelligence at will (slice/dice is originally cooking jargon; applied to intelligence it means "to break a body of information down into smaller parts or to examine it from different viewpoints so that you can understand it better", per http://whatis.techtarget.com/definition/slice-and-dice). Sentiment can be sliced on many bases: by sentiment category, by gender, by data source, by time or location, by click-through counts, and so on. Sometimes cuts must be stacked: say, how women (gender) in California, USA (location) responded to a certain brand last summer (time). The most typical slicing application is the time-dimensioned dynamic barometer, showing the trends for an object of study: cut a year's total data by month, week, day, even hour, and watch the distribution move. For monitoring and tracking the rise and fall of sentiment around new topics, for new product launches, for measuring the customer effect of new advertising (say, the expensive brand ads aired during American football games), this is intelligence of vital importance. In short, big data may well be cut into small data at application time, and then a precision-challenged system is caught short: the flaws that big data had covered stand exposed, and results once naturally filtered clean are no longer credible on small data.
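A toy sketch of slicing and dicing; the record layout and field names are invented. Note how quickly a large pool shrinks once several filters stack up:

```python
# Sketch: the same pool of extracted sentiment records, filtered down
# by arbitrary dimensions (sliced and diced), then aggregated.

records = [
    # (brand, sentiment, gender, region, month)
    ("BrandX", "positive", "F", "CA", "2013-07"),
    ("BrandX", "negative", "M", "NY", "2013-07"),
    ("BrandX", "positive", "F", "CA", "2013-08"),
    # ... millions more in a real system
]

FIELDS = {"brand": 0, "sentiment": 1, "gender": 2, "region": 3, "month": 4}

def slice_dice(data, **filters):
    return [r for r in data
            if all(r[FIELDS[k]] == v for k, v in filters.items())]

# "California women on BrandX last July":
subset = slice_dice(records, brand="BrandX", gender="F",
                    region="CA", month="2013-07")
positive = sum(1 for r in subset if r[1] == "positive")
print(len(subset), "records,", positive, "positive")
```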
The fourth challenge is finding the object of the sentiment. In nearly every sentiment analysis application, the sentiment must be tied to its object, and this basic requirement is often the soft spot of classification systems. In particular data sources and scenes the problem may not arise: for review data of the Amazon/Yelp kind, one can presuppose that the object is known (usually in the title, or in a fixed slot of other metadata) and that every review targets it (not strictly true, since a review may mention other brands or products, but on the whole it holds, by the very nature of review data). But in the spontaneous sentiment of much social media (microblogs, Facebook, forums, and so on), after classification there remains the problem of finding the object. Take comparative expressions such as "谷歌比雅虎强老鼻子啦" ("Google beats Yahoo by a mile"): does the positive word point at Yahoo or at Google? This seemingly simple question has stumped a whole crowd of machine learning folk, for a simple reason: classification systems rely on keywords, generally without the support of linguistic structure, let alone understanding. Like adolescents in the throes of puberty, full of emotion yet unable to find the right object on which to express or vent it, this has become the bane of nearly all thumbs-up/down classifiers. In the free-wheeling world of social media such expressions are hardly rare; praising Zhang San in one breath while cursing Li Si in the next is everyday fan behavior (witness the online war between the Fang and Han fan camps).
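A toy illustration of the comparative-target problem: a bag of keywords sees only the positive word 强 and marks the whole post positive, with no idea toward whom, while even one small structural pattern can assign the right targets (the pattern and its coverage here are hypothetical, far short of a real grammar):

```python
# Sketch: resolving sentiment targets in a Chinese comparative.
# A keyword classifier sees the positive word and stops; the pattern
# "X 比 Y ... <positive>" additionally tells us X wins and Y loses.

import re

def comparative_targets(text):
    m = re.search(r"(\w+)比(\w+).*?(强|好)", text)
    if m:
        return {m.group(1): "positive", m.group(2): "negative (relative)"}
    return None

print(comparative_targets("谷歌比雅虎强老鼻子啦"))
# -> {'谷歌': 'positive', '雅虎': 'negative (relative)'}
```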
Technically speaking, at big data scale, losing part of the data for whatever reason (a server down, database bugs, a poster changing his mind and quickly deleting what he sent, government censorship in undemocratic societies, deliberately excluding some raw data for cost reasons and keeping only a sampled proportion, an overzealous spam filter's false deletions, or our own system's imperfect recall, where sentiment present in the text goes unrecognized, and so on: loss is the norm, and completeness is neither realistic nor necessary) is no great problem, as long as the loss is not biased against the topic or brand being mined. Big data mining pursues sentiment dynamics and salient intelligence, and in principle neither changes with partial data loss, because dynamics and salience rest on the high redundancy of information, not on literally fishing a needle from the sea. Unless you have built such systems with your own hands, you can hardly imagine how much redundancy the ocean of the internet holds. What matters is that the redundancy is itself part of the intelligence. Public sentiment is the voice of the people (the customers), and that voice can only be heard through massive redundancy of individual messages. It is the same reason a petition gathers tens of thousands of signatures: whether the final count is 100,000 or 95,000 does not affect the content of the sentiment or its overall effect.
Anna was a lovely, aspiring young Russian woman who had played piano and danced ballet from childhood and emigrated to the US with her parents before finishing elementary school. Tall and gracefully curved, gentle in temperament, poised in manner, quick to read others, she gave the impression of classical without being stiff, modern without being garish, sunny and romantic. As everyone knows, while Russian matrons tend to run stout and broad of line, Russian girls are often bewitching; old-timers can recite the names: Tonya, the bourgeois young lady of How the Steel Was Tempered; the prima ballerina Ulanova; the incomparable figure-skating artist Ekaterina Gordeeva. Anna was such a Russian young woman, there beside us every day, bringing a warm, soft air into an office filled mostly with boys. Naturally, everyone liked her.
Yet Anna resigned and would soon leave, to everyone's regret. I too felt the pang: no more of her chatter and laughter at lunch, no more inviting her to ping-pong afterwards; a hollow feeling. I asked whether she really had to go; hadn't she said she loved this environment? You know this office is already too crowded with boys, and we are trying to change this situation, trying to find some girls with affirmative action, and you are leaving?
She replied: I love this environment because here everyone I deal with is, like you, among the smartest people in the world; and precisely because you are all too smart, my own path forward is blocked, so I steeled myself to leave. I will go back to a consulting company and do the analysis work I am good at. In two years I have watched with my own eyes how 20 hours of my manual work came to be replaced by 20 seconds of your fully automatic search, with results often better, fuller, and more consistent than the manual ones.
When I joined the company two years earlier, it was basically a professional-services company. It had developed a system for internal use, but the system's output merely narrowed the scope of the manual work; long post-editing, manual additions, deletions, and repairs, plus analysis and synthesis, were needed before delivery to the client. The editors were called information analysts, required to have strong language skills, to read ten lines at a glance, and to command the skills of analysis and synthesis. Anna was among the best of them; clients were especially satisfied with the reports that had passed through her hands.
But the company had to do its cost accounting, and the accounting said: manual work is fine in moderation, otherwise expenses exceed income and the business runs at a loss. At the time each search-analysis order took on average 22 hours of manual work to complete, hours we called pain time (the analyst's pain, and still more the company's). To make money, pain time ideally had to be held within two hours, which then sounded like a fairy tale. When the boss talked with me, he set that as the main goal, but with no deadline, since no one knew whether it was feasible or what resources it would take. I did not know either; I only felt the weight. All my previous work had run research first, then prototype engine, then hunt for an application domain, and finally product. This company was the reverse of most technology startups: first customers, then a crude engine, and only at the end bring in the people and the technology, pinning its hopes on rapid technology transfer. I found the path fresh and exciting and thought it worth a try, to see whether my technology-transfer skills could take to it like fish to water. The advantage of having customers and an application domain first is obvious: like having the beacon of the Zunyi Conference on the road to communism, it spares you the long groping in the dark. The road is bright; the question is how to walk it profitably.
Appetite grows with eating. The boss told me: Wei, you know your technology has revolutionized our business. Our footing is no longer in question; if we chose, a machine-plus-manual service could soon grow into a company with tens of millions in annual revenue. But as long as there is manual work, we cannot scale up; the earnings stay limited and the pie stays small. I know you are an ambitious man (I thought: you are not the fish, how do you know what the fish wants), surely not content with small stakes. Whatever the risk, we have decided to abandon that road and go fully automatic, so the system can serve every analyst out there, not only our own staff (the Annas) or specially trained power users. Our goal is that every analyst in the world cannot do without us, just as no one can do without Google. For that, pain time must reach zero. A risky move, but the prospects are boundless.
Big data, like cloud computing, has become an IT buzzword (fashion word). With social media winning hearts and the mobile internet everywhere, a device in every hand, ordinary people broadcast messages anytime, anywhere; grassroots information blooms across microblogs, WeChat, and forums of every kind, and big data is growing explosively. For the receivers of information (individuals, companies, governments), information overload grows ever more serious, and enlisting high technology such as NLP to help process and extract information is imperative.
For search engines, big data is nothing new: facing the ocean of the internet, the search giants have for many years used keyword indexing to offer hundreds of millions of users needle-in-the-sea search. Every netizen is a beneficiary of big data search; an internet without search is hard to imagine. But for language technology, an NLP system must structurally analyze language and understand its meaning; such intelligent work is many thousands of times more complex than indexing keywords, which is why big data has long been a bottleneck for natural language technology. Never mind the whole internet; social media alone is plenty to keep us busy.
So where do things stand today?
Our language system reads and analyzes fifty million posts a day. At an average of 30 words per post, that is 1.5 billion words of processing. This is live feed: stir-fried and served, ready while you wait. As for the historical archive of social media, the system typically reaches back one year, running deep analysis on schedule and refreshing the analysis results in the database. Our engineers, calm and composed, command from their tents a host of hundreds of virtual servers perched on who knows which auspicious cloud, driving them day and night to process oceans of data in parallel, like a great whale in the open sea breathing data in from the sources and out into the database, in impressive style.
When we talk about NLP scaling up to big data, it is this BIG:
This is the progress we have made over the last two years. I feel extremely lucky to work with the engineering talents and product managers who made this possible. It is hardly imaginable that this can be done at this speed in other places than the Valley where magic happens everyday.
Where are we?
deep parsing 50 MILLION posts a day!!!
For one year of NLP-indexing of social media data used to support our products, we have:
11 billion tweets (about 6-7% of the entire sample from twitter)
1 billion Facebook posts
1 billion forum posts from 5 million domains
430 million blog posts from 160 million domains
30 million reviews from 300 domains
55 million news reports from 55,000 domains
225 million comments from 100 million domains
And that is by no means the limit for our NLP distributed computing: the real bottleneck comes from the cost considerations rather than the technical barriers of the architecture. Money matters. Archimedes said, "Give me a place to stand on, and I will move the Earth." With the NLP magic in hands, we can say, give me a large cloud, we can conquer the entire info world!
I never responded to this. Actually, please notice that I have a space between 性 and 交. Furthermore, please notice the difference between the last one (where I have < [ 一 次 ] 性 >) and the first two. What's behind this is my assumption (a truth, I think) that ALL multi-character 'words' (well, except for 葡萄, 玻璃 and the like) are ambiguous (so-called hidden ambiguity) and hence have to be handled with the dictionary at 'application' time (putting the real effort into the 'use' side). This is consistent with your lexicalism (词汇主义) and your rule of thumb of "keeping ambiguity untouched". I actually pushed that one step further by keeping ambiguity to only one level (that is, you only need to look ONE level deeper). This is consistent with your bottom-up (自底而上) approach but more concrete/specific -- whenever I see potential ambiguity at my level, I keep it there (as in < 性 交 >) and then break the chain (断链).
I mean I agree with you fully. And by today, if I had a bit more info added to the dictionary, I think I could do 'shallow parsing' better.
很 < 不 [ 明 白 ] > < [ 北 大 ] [ 法 ( 学 院 ) ] > < [ 怎 么 ] 会 > < 变 成 > < [ 法 轮 ] [ 大 法 ] > 的 < 大 法 >
At that time I had entities but no events.
Reply: "Development that anticipates what consumer-product companies' future customers will need" is exactly one of our mainstays; there, understanding why customers like or dislike a product is crucial. We have also built applications for scientific literature search, mainly to help answer how questions. As for errors in a paper or stale information, that is ultimately a human judgment call; the machine can at most help with the ranking, say by putting the freshest literature first. Once a question is posed and the candidate answers are laid out, sorted into their categories, the machine's mission is complete. Making judgments or decisions from that information is the business of the paragon of animals itself.