[Repost] Archived text on machine learning in Google Search
Datawocky: On Teasing Patterns from Data, with Applications to Search, Social Media, and Advertising
[Liwei's note] The discussion below mentions that even when machine learning has reached the level of the hand-built system, Google's search engineers are unwilling to switch to it. Their stated worry is that a machine-learned model could make catastrophic errors on phenomena unseen in the training set, while they trust the hand-built system not to stray too far on such cases. I won't pass judgment on that argument. I suspect the more important reason lies elsewhere: when a concrete quality problem comes up, a machine-learned system is one undifferentiated pot of porridge that is very hard to debug (unless you take the trouble to retrain and cook a fresh pot, which often amounts to scratching an itch through the boot, with no guarantee the new pot actually fixes the specific problem at hand). A hand-built system, as long as it is sensibly designed (for example, modularized so that one change does not ripple through the whole), lets you treat specific problems specifically with targeted adjustments, and debugging becomes far easier. So even at comparable quality, machine learning holds no advantage, because it is hard to maintain and tune for incremental enhancement.
I have known Peter since 1996, when he joined a startup called Junglee, which I had started together with some friends from Stanford. Peter was Chief Scientist at Junglee until 1998, when Junglee was acquired by Amazon.com. I've always been a great admirer of Peter and have kept in touch with him through his short stint at NASA and then at Google. He's now taking a short leave of absence from Google to update his AI textbook. We had a fascinating discussion, and I'll be writing a couple of posts on topics we covered.
It has long been known that Google's search algorithm actually works at two levels:
- An offline phase that extracts "signals" from a massive web crawl and usage data. An example of such a signal is PageRank. These computations need to be done offline because they analyze massive amounts of data and are time-consuming. Because these signals are extracted offline, and not in response to user queries, they are necessarily query-independent. You can think of them as tags on the documents in the index. There are about 200 such signals.
- An online phase, in response to a user query. A subset of documents is identified based on the presence of the user's keywords. Then, these documents are ranked by a very fast algorithm that combines the 200 signals in memory using a proprietary formula. (A minimal sketch of this two-phase design follows below.)
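To make the two-phase design concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the signal names, their values, and the weights stand in for the roughly 200 proprietary signals and Google's undisclosed combining formula.

```python
# Offline phase: each indexed document carries precomputed,
# query-independent signals ("tags"). All names and values are made up.
index = {
    "doc1": {"pagerank": 0.92, "domain_trust": 0.80, "freshness": 0.30},
    "doc2": {"pagerank": 0.45, "domain_trust": 0.95, "freshness": 0.85},
}

# Hand-tuned weights, standing in for the proprietary formula.
WEIGHTS = {"pagerank": 3.0, "domain_trust": 2.0, "freshness": 1.0}

def score(signals):
    """Online phase: combine a document's signals with fixed weights."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

def rank(candidate_ids):
    """Rank the keyword-matched candidate set by combined score."""
    return sorted(candidate_ids, key=lambda d: score(index[d]), reverse=True)

print(rank(["doc1", "doc2"]))  # -> ['doc1', 'doc2']
```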
The online, query-dependent phase appears to be made to order for machine learning algorithms. Tons of training data (both from usage and from the armies of "raters" employed by Google) and a manageable number of signals (200): these fit the supervised learning paradigm well, bringing into play an array of ML algorithms from simple regression methods to Support Vector Machines. And indeed, Google has tried methods such as these. Peter tells me that their best machine-learned model is now as good as, and sometimes better than, the hand-tuned formula on the results-quality metrics that Google uses.
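For a sense of what that supervised setup might look like, here is a hedged sketch with synthetic data: a plain least-squares fit of signal vectors to rater-style relevance labels. The signal matrix, the labels, and the choice of least squares are all my assumptions, not Google's pipeline.

```python
import numpy as np

# Synthetic stand-ins: one row per (query, document) pair with 200
# offline signals, and a relevance label as a rater might assign.
rng = np.random.default_rng(0)
n_examples, n_signals = 1000, 200

X = rng.random((n_examples, n_signals))
true_weights = rng.normal(size=n_signals)          # a pretend "ideal" formula
y = X @ true_weights + rng.normal(scale=0.1, size=n_examples)

# Least-squares fit: the learned analogue of the hand-tuned weights.
learned_weights, *_ = np.linalg.lstsq(X, y, rcond=None)

def learned_score(signal_vector):
    """Score a document by the learned linear combination of its signals."""
    return signal_vector @ learned_weights

print(learned_score(X[0]), y[0])  # learned score vs. rater label
```

A production ranker would use a ranking-specific objective and held-out evaluation; plain least squares just keeps the sketch short.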
The big surprise is that Google still uses the manually crafted formula for its search results. They haven't cut over to the machine-learned model yet. Peter suggests two reasons for this. The first is hubris: the human experts who created the algorithm believe they can do better than a machine-learned model. The second reason is more interesting. Google's search team worries that machine-learned models may be susceptible to catastrophic errors on searches that look very different from the training data. They believe the manually crafted model is less susceptible to such catastrophic errors on unforeseen query types.
This raises a fundamental philosophical question. If Google is unwilling to trust machine-learned models for ranking search results, can we ever trust such models for more critical things, such as flying an airplane, driving a car, or algorithmic stock market trading? All machine learning models assume that the situations they encounter in use will be similar to their training data. This, however, exposes them to the well-known problem of induction in logic.
The classic example is the Black Swan, popularized by Nassim Taleb's eponymous book. Before the 17th century, the only swans encountered in the Western world were white. Thus, it was reasonable to conclude that "all swans are white." Of course, when Australia was discovered, so were the black swans living there. Thus, a black swan is shorthand for something unexpected that lies outside the model.
Taleb argues that black swans are more common than commonly assumed in the modern world. He divides phenomena into two classes:
- Mediocristan, consisting of phenomena that fit the bell curve model, such as games of chance and height and weight in humans. Here future observations can be predicted by extrapolating from statistics computed over past observations (for example, sample means and standard deviations).
- Extremistan, consisting of phenomena that don't fit the bell curve model, such as search queries, the stock market, and the length of wars. Such phenomena can sometimes be modeled using power laws or fractal distributions, and sometimes not. In many cases, the very notion of a standard deviation is meaningless. (A numerical sketch of this contrast follows the list.)
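A small numerical experiment (my own illustration, not from the post; the distributions and parameters are assumptions) shows why the standard deviation loses meaning in Extremistan: for a Gaussian, the sample statistics settle down as data accumulate, while for a heavy-tailed Pareto with infinite variance they never do.

```python
import numpy as np

rng = np.random.default_rng(1)

gaussian = rng.normal(loc=170, scale=10, size=1_000_000)  # Mediocristan: e.g., heights
pareto = rng.pareto(1.5, size=1_000_000) + 1              # Extremistan: alpha=1.5, infinite variance

for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"n={n:>9}  gaussian std={gaussian[:n].std():6.2f}  "
          f"pareto std={pareto[:n].std():10.2f}")
# The Gaussian column converges quickly; the Pareto column keeps jumping
# as rare, enormous draws arrive.
```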
Taleb makes a convincing case that most real-world phenomena we care about actually inhabit Extremistan rather than Mediocristan. In these cases, you can make quite a fool of yourself by assuming that the future looks like the past.
The current generation of machine learning algorithms can work well in Mediocristan but not in Extremistan. The very metrics these algorithms use, such as precision, recall, and root-mean-square error (RMSE), make sense only in Mediocristan. It's easy to fit the observed data and fail catastrophically on unseen data. My hunch is that humans have evolved to use decision-making methods that are less likely to blow up on unforeseen events (although not always, as the mortgage crisis shows).
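A toy demonstration of that failure mode (synthetic data and an arbitrary polynomial model, purely illustrative): the fit looks excellent on data resembling the training sample and is wildly wrong far outside it, exactly the kind of error an RMSE computed in-sample never reveals.

```python
import numpy as np

rng = np.random.default_rng(2)

# Train on a narrow slice of the input space.
x_train = rng.uniform(0, 1, size=50)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.05, size=50)

coeffs = np.polyfit(x_train, y_train, deg=9)  # happily fits the sample

for x in (0.5, 3.0):                          # inside vs. far outside training range
    err = abs(np.polyval(coeffs, x) - np.sin(2 * np.pi * x))
    print(f"x={x}: error={err:.3g}")
# In-range error is tiny; out-of-range error is astronomical. The model
# "knows" only Mediocristan; x=3.0 is its black swan.
```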
I'll leave it as an exercise to the interested graduate student to figure out whether new machine learning algorithms can be devised that work well in Extremistan, or prove that it cannot be done.