The online hangout for Göteborg Functional Programming Group and everybody interested.

If by data we mean information, then the definition should be based on a "principle of surprise": if an interpreter encodes a belief over states of nature `x` via a probability density `p(x)`, informative data is taken to be whatever's "unlikely" within that worldview, i.e. `- log p(x)` (this is the Shannon information). Large deviations from the mean are more informative than small ones, and so on.
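A minimal sketch of that "surprise" idea (the Gaussian belief and the function names are my own choices, just for illustration): under a density `p(x)`, the surprisal `- log p(x)` grows as outcomes move away from the mean.

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Density of a normal distribution -- the interpreter's belief over states of nature."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def surprisal(p):
    """Shannon information content of an outcome with probability (density) p."""
    return -math.log(p)

# Under a standard normal belief, larger deviations from the mean
# carry more information, i.e. are more "surprising":
small = surprisal(gaussian_pdf(0.5))   # close to the mean
large = surprisal(gaussian_pdf(3.0))   # a large deviation
assert large > small
```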

The book mentions an approach by C. A. R. Hoare called "*abstract models*", which seems to define data as something that has a set of operations on it and that obeys certain rules. There is another approach by Thatcher, Wagner and Wright called "*algebraic specification*", which has to do with abstract algebra somehow.
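A rough sketch of what "data = operations plus rules" might look like, using a stack as the classic example (the list representation and function names here are my own illustration, not from the book): any implementation counts as a stack as long as the laws at the bottom hold.

```python
# A stack specified abstractly: the representation is hidden behind the
# operations; all that matters is the laws relating those operations.

def empty():
    return []            # one possible representation (an arbitrary choice)

def push(s, x):
    return s + [x]       # non-destructive: returns a new stack

def pop(s):
    return s[:-1]

def top(s):
    return s[-1]

def is_empty(s):
    return s == []

# The "certain rules" any implementation must satisfy:
s = push(push(empty(), 1), 2)
assert top(push(s, 42)) == 42        # top(push(s, x)) == x
assert pop(push(s, 42)) == s         # pop(push(s, x)) == s
assert is_empty(empty())             # the empty stack is empty
assert not is_empty(push(s, 0))      # push never yields an empty stack
```

The algebraic-specification view would keep only the equations (the four laws) and treat the data type as whatever satisfies them, which is where the abstract algebra comes in.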

@jensli all algebra is abstract until you have to shove it into a computer

just kidding. I haven't read SICP and I'm sure there are many gems in there. However, some definitions might be a bit ad hoc and/or stale. For example, I see that the same Wagner wrote a paper a few years later wondering why the algebraic specification approach hasn't been influential ( https://dl.acm.org/citation.cfm?id=779115 )

@ocramz Oh, you're investigating, nice! Please report back if you find anything interesting. I'll think about what you might mean by your "surprise" thing.

@ocramz Is there any way someone without access to ACM's protected articles could have a look at that paper?

Not Legal, but very much in the public domain.