Tuesday 29 September 2009

Estimation and compression

In the current (September 2009) IEEE Information Theory Society Newsletter (available online, though the September 2009 issue isn't up yet), J. Rissanen, writing on 'Optimal Estimation':

"Soon after I had studied Shannon's formal definition of information in random variables and his other remarkable performance bounds for communication, I wanted to apply them to other fields - in particular to estimation and statistics in general. After all, the central problem in statistics is to extract information from data. After having worked on data compression and introduced arithmetic coding, it seemed evident that both estimation and compression have a common goal: in data compression the shortest code length cannot be achieved without taking advantage of the regular features in data, while in estimation it is these regular features, the underlying mechanism, that we want to learn. This led me to introduce the MDL or Minimum Description Length principle, and I thought that the job was done. [However...]"

My emphasis.
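To make the compression-estimation link concrete: in its simplest two-part form, MDL picks the model minimising the total code length L(model) + L(data | model). Here is a minimal sketch of that idea for polynomial regression - my own illustration, not from Rissanen's article - using the classic penalty of (k/2) log n nats for k fitted parameters (equivalent up to constants to BIC).

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a quadratic signal plus noise.
n = 50
x = np.linspace(-1.0, 1.0, n)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(0.0, 0.3, n)

def description_length(x, y, degree):
    """Two-part code length (in nats) for a polynomial fit of given degree."""
    n = len(x)
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    k = degree + 1                            # number of coefficients
    data_cost = 0.5 * n * np.log(rss / n)     # -log-likelihood, up to constants
    model_cost = 0.5 * k * np.log(n)          # cost of encoding the parameters
    return data_cost + model_cost

# The degree with the shortest total description length wins: too few
# parameters and the data cost balloons, too many and the model cost
# does - exactly the compression view of estimation.
best = min(range(6), key=lambda d: description_length(x, y, d))
print("MDL-selected polynomial degree:", best)   # typically 2 here

A model that fails to capture the regular features in the data pays for it in the data part of the code, which is the compression restatement of underfitting; parameters that encode noise rather than regularity pay for themselves in the model part, which is overfitting.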