Google is a giant, and its marketing is more than powerful. While the whole world was stunned by their exciting claim in Natural Language Parsing and Understanding, and while we respect Google Research and congratulate them on their breakthrough in the statistical parsing space, we have to point out that the claim in their recently released blog that SyntaxNet is the “world’s most accurate parser” is simply not true. In fact, it is far from the truth.
The point is that they have totally ignored the other school of NLU, the one based on linguistic rules, as if it were non-existent. It is true that, for various reasons, this school is hardly represented in academia today, due to the mainstream’s dominance by machine learning (which is unhealthy but admittedly a reality; see Church’s long article for a historical background of this imbalance in AI and NLU: K. Church, “A Pendulum Swung Too Far”). Yet any serious researcher knows that it has never vanished from the world; in fact, it has been well developed in industry’s real-life applications for many years, including ours.
In the same blog, Google mentioned that Parsey McParseface is the “most accurate such model in the world,” with “model” referring to “powerful machine learning algorithms.” That statement may well be true based on their cited literature, but equating it with the “world’s most accurate parser,” as publicized in the same blog post and almost instantly disseminated across the media and the Internet, is irresponsible, or misleading at the very least.
In my next blog, I will present an apples-to-apples comparison of Google’s SyntaxNet with the NetBase deep parser to illustrate the misleading nature of Google’s recent announcement.
K. Church, “A Pendulum Swung Too Far,” Linguistic Issues in Language Technology, 2011, 6(5).