IBM’s basic approach has a long history, with a lineage in the field of information retrieval that is in many ways shared with search engines. The essential idea is to start with textual documents, and then to build a system that statistically matches the questions asked to answers represented in those documents.
Wolfram|Alpha is a completely different kind of thing: something much more radical, based on a quite different paradigm. The key point is that Wolfram|Alpha is not dealing with documents, or anything derived from them. Instead, it is dealing directly with raw, precise, computable knowledge. And what’s inside it is not statistical representations of text, but actual representations of knowledge.
Both share the same objective of answering natural-language questions, but while Watson statistically estimates which answer is most likely correct, Wolfram|Alpha computes the answer from curated, structured data. Head over to Stephen Wolfram's blog for a complete, in-depth explanation. [Stephen Wolfram Blog]