Explaining Dretske

Information is a word that is used in very different ways and with vastly different connotations (Floridi, 2004). Still, a few common-sense intuitions about it stand out:
  1. Public: Richard and Henry can have the same information, while Eric does not.
  2. Modular: Henry has the information that "Richard has won the lottery, winning 2 million dollars".
  3. Multiple Realizability: Henry can communicate this information in a number of ways, and still communicate the same information. He can tell Eric over the phone, he can write it on the Web and send Eric the URI, tap a number of bits in a secret code over a telegraph, or even send a picture of Richard with a 2 million dollar check via a carrier pigeon.

Dretske is a philosopher of the semantic theory of information, a theory that complements Shannon's theory of communication, which confusingly is also called the theory of information. The word "semantic" from here on out does not mean "model-theoretic" or "machine-readable", but rather "about some part of the world".

One so-called "discipline independent" definition of information, put forward by Losee (1998), is that information is the characteristic of being "the output of a process, these being informative about the process and the input." This is too vague, since almost anything can be construed as the result of a process, including private, non-shareable, vague feelings that can't be communicated at all! As its prime example, Losee states that a cake delivers information about its ingredients and the method of baking. This misses the fact that information is semantic, or about "something": a cake is not about its ingredients, but constituted by its ingredients. A better intuition is seeing "Happy Birthday Henry" written on the cake, which conveys to us the information that "it's Henry's Birthday." We will pursue this second intuition about information, focusing in particular on the difference between the encoding of information and its content.

Shannon's Theory of Communication:

Shannon's theory deals with sending a signal from a source over a channel to a receiver, finding the optimal encoding and size of channel so that the signal can get through. Shannon uses this theory to quantify how much information a message contains. However, most of the common-sense intuitions about information have to do not with how the information is encoded, but with what a particular message is about, the "semantic" content of the message.

Eric has heard a rumor that one of the eight people he knows in Edinburgh has won the lottery. Henry (the source) wants to send a message about his local state-of-affairs to Eric in Boston (the receiver): that Richard has won the lottery. Henry can send this message as a signal encoded in a variety of ways - an e-mail, a telephone call, a carrier pigeon carrying the note in English, taps on a telegraph (the channel). Shannon's genius was to show that messages over a channel can be encoded into a series of binary choices, or bits, with each bit representing a binary "yes or no" decision between two possibilities. The number of bits in the encoding quantifies how much information there is (1949).

Formalizing Encoding

Since one of eight people was chosen, the amount of information can be viewed as how many bits (binary choices) it takes to encode the reduction of possibilities from eight to one. This can be formalized as log2(8) = 3 bits, since 2^3 = 8. If Henry wanted to send the message that, of four people, Richard won the lottery, then his message would be a reduction of four possibilities to one, or log2(4) = 2 bits. So, one can say that sending the message that one person out of eight won the lottery sends more information than sending the message that one person out of four won. This depends on the assumption that all choices are equally likely. Shannon deals with this, as well as with continuous (non-discrete) possibilities and with noise, where the channel causes information loss, and he proves the existence of optimal encodings, among other results (1949).
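
As a quick illustration (my own sketch, not Shannon's or Dretske's), here is the arithmetic in Python: math.log2 gives the bit counts for the equally-likely cases, and Shannon's entropy formula generalizes the measure when the possibilities are not equally likely.

    import math

    # Equally likely possibilities: the information in reducing n
    # possibilities to one is log2(n) bits.
    print(math.log2(8))   # 3.0 bits -- one winner out of eight people
    print(math.log2(4))   # 2.0 bits -- one winner out of four people

    # Shannon's entropy, H = -sum(p * log2(p)), generalizes this to
    # unequally likely possibilities.
    def entropy(probabilities):
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(entropy([1 / 8] * 8))                   # 3.0 bits (the uniform case above)
    print(entropy([1 / 2, 1 / 4, 1 / 8, 1 / 8]))  # 1.75 bits -- skewed odds need fewer bits on average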

Dretske's Theory of Information

Problems with Encoding

There is more to information than encoding. Let's go back to our original example, where there are eight possible people who could have won the lottery, so the message is encoded in 3 bits. The important thing to note is that saying the message "Richard won the lottery" is 3 bits does not tell us who won the lottery. In fact, the false message that Phil won the lottery is also 3 bits. So one cannot tell by quantitative information alone what information is being conveyed by a message. Also, the content a message conveys seems to have to do with certain knowledge, since Eric knows that Richard (and not Phil or anyone else) won if the message was conveyed successfully.

While according to Shannon "the semantic aspects of communication are irrelevant to the engineering problem" (1949), the amount of information a particular message has imposes "upper-level" constraints on what content can be sent. If Henry has only 2 bits available and each of the eight employees needs a unique encoding, he cannot send a message specifying which friend won, since there aren't enough encodings to go around! He can at most specify that either one person or another won. However, if he has 5 bits available, then he has 2 redundant bits that he doesn't need, and he can still easily specify that Richard has won. One can also change the coding scheme to get around the problem of not having enough bits: specify that the bit 1 means that Richard has won, and the bit 0 means that someone else has won. This means that if 1 is received, it carries much more semantic information than if 0 is received. So the message that Richard won can be conveyed with a single bit if both the sender and the receiver know the "new" encoding scheme, although on receiving a 0 the receiver cannot know for certain who, other than Richard, won.
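
My own gloss on that last point, in terms of Shannon's surprisal measure -log2(p): assuming all eight people are equally likely to win, the bit 1 ("Richard won") reports a 1-in-8 event and so carries 3 bits worth of information, while the bit 0 ("someone else won") reports a 7-in-8 event and carries almost none.

    import math

    # Surprisal of a message is -log2(p), where p is the probability of
    # the state of affairs it reports (assuming eight equally likely winners).
    p_richard = 1 / 8   # probability that Richard won
    p_other = 7 / 8     # probability that someone else won

    print(-math.log2(p_richard))  # 3.0 bits   -- the bit "1" is highly informative
    print(-math.log2(p_other))    # ~0.19 bits -- the bit "0" rules out very little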

Shannon's encoding deals with the reduction of uncertainty, but the encoding depends on this being formalized over a distinct number of possibilities. Oftentimes in real life we don't know how many possibilities there are. Yet with Dretske's definition of semantic information, one can talk about information without knowing how many possibilities there are. For example, there are many places in the UK where Richard could work - we don't know how many there could be. Yet telling us "Richard works at 2 Buccleuch Place in Edinburgh" still gives us more information than "Richard works in Edinburgh", even if the range of possible places he could work at is unclear. So content seems to be able to be broken down into distinct facts, and sometimes we want to convey more than one fact as the content of a particular message.

Defining Content

The "content" and "encoding" of information are related. As Dretske puts it, "saying 'There is a gnu in my backyard' does not have more content than the utterance 'There is a dog in my backyard' since former is, statistically, less probable."(1983) The question then becomes, is there a way to define the non-quantitative content of a message?

Dretske defines the content of information as: "a signal r carries the information that s is F when the conditional probability of s's being F, given r (and k) is 1 (but, given k alone, less than 1). k is the knowledge of the receiver." (1983)

To unpack this: the content, a fact (F) about the source (s), is conveyed successfully by the signal (r) when the signal, together with what the receiver already knows (k), makes that fact about the source absolutely certain to the receiver. Henry successfully conveys the content that Richard won the lottery to Eric if and only if Eric, before receiving the message, does not know that Richard won the lottery, and after receiving the message knows it for certain. This theory assumes that we are communicating true information (Dretske deals with this problem elsewhere; see my other notes on his "Misrepresentation" paper if interested) and that information can be divided up into facts.
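
A toy sketch of the definition (my framing, with made-up names and a uniform prior standing in for the receiver's knowledge k): a signal carries a fact exactly when it leaves that fact with conditional probability 1, even though the fact was not already certain beforehand.

    from fractions import Fraction

    # Toy model: exactly one of eight people won, all equally likely under
    # the receiver's prior knowledge k.  (Names are purely illustrative.)
    people = {"Richard", "Phil", "Amy", "Jo", "Sam", "Kim", "Lee", "Max"}
    prior = {person: Fraction(1, 8) for person in people}  # P(winner = person | k)

    def carries_information(worlds_left_open_by_signal, worlds_where_fact_holds):
        """Dretske: the signal carries the fact F iff P(F | r, k) = 1
        while P(F | k) < 1."""
        p_fact_given_k = sum(prior[w] for w in worlds_where_fact_holds)
        p_signal = sum(prior[w] for w in worlds_left_open_by_signal)
        p_fact_given_r_and_k = sum(
            prior[w] for w in worlds_left_open_by_signal & worlds_where_fact_holds
        ) / p_signal
        return p_fact_given_r_and_k == 1 and p_fact_given_k < 1

    # The signal "Richard won" leaves only one world open, so it carries
    # the fact that Richard won.
    print(carries_information({"Richard"}, {"Richard"}))  # True
    # The signal "someone Henry employs won" leaves all eight worlds open,
    # so it does not carry that fact.
    print(carries_information(people, {"Richard"}))       # False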

To communicate a fact successfully, both the source and receiver have to be using the same encoding scheme (bits, English, etc.), and the source has to encode the content into the message relative to what the receiver already knows. So, if Eric does not know who "Richard" is, but only knows him as the "blond fellow that works with Henry," Henry needs to include in his message the additional fact that "the blond fellow that works with Henry is Richard". Dretske notes that "this does not mean that a signal must tell us everything about a source to tell us something"; it just has to tell enough so that the receiver is now certain about the content (1983).

Defining the Channel

Indeed, now we can define exactly what the channel is. Dretske defines the channel as "that set of existing conditions (on which the signal depends) that either (1) generate no (relevant) information, or (2) generate only redundant information (from the point of view of the receiver)" (1983). Since the channel is "the origin of no information," it is the "framework within which communication takes place". So, when communicating to Eric about Richard winning the lottery, Henry is assuming that Eric knows that there is a lottery, that Richard works with Henry, that Henry has seven other employees, and lots of common-sense assumptions - such as that Richard has not died within the last few seconds. As time passes and things change (say Richard stops working with Henry), the assumptions behind the communication change, and so a new channel (or framework) must be established between Henry and Eric if they want to communicate about Richard.

The final point is that there is a flow of information, which Dretske dubs the Xerox principle: "If A carries the information that B, and B carries the information that C, then A carries the information that C" (1983). This means that, fundamentally, if the channel (encoding schemes, etc.) is set up properly, then the same information can be communicated through a variety of signals taking place in a variety of media. Henry can take an old-fashioned analog picture of Richard receiving the two million dollar check (A), scan and thereby digitize the picture (B) on his home computer, upload it to his web-page on another computer (C), and send the URI to Eric. If Eric visits the URI - and assuming his browser can decode the picture, the picture is of sufficient quality, and so on - the web-page conveys the "same" information as the original picture successfully, despite being constituted in bits on some computer far removed from the original picture.
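
A rough sketch of that chain in Python (my illustration, with a byte string standing in for the picture): as long as every hop is a lossless re-encoding, the content recovered at the end is exactly the content that went in, so the information flows through the whole chain.

    import base64

    # A byte string stands in for the analog picture of Richard and the check.
    picture = b"Richard holding the 2 million dollar check"        # A

    scanned = base64.b64encode(picture)                             # A -> B: digitize
    uploaded = scanned.decode("ascii").encode("utf-8")              # B -> C: copy to the web server

    # The receiver decodes the final signal and recovers exactly the
    # original content: the information has flowed through every hop.
    recovered = base64.b64decode(uploaded)
    print(recovered == picture)  # True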

Applications to the Web

These are my thoughts, not Dretske's, although I have chatted about them with him and he seems to concur.

Information and the Web:

It seems that when people talk about information in Web circles, they are often intuitively using Dretske's theory. The central question of Web architecture is: how do agents use encodings to process representations that convey "semantic information"? The interesting thing about the Web is that, by standardizing certain things (HTTP, HTML, XML, RDF), those that share the Web's "channel conditions" can now have access to any information in those encodings, and this information is preserved digitally for access anywhere at the demand of the user. This also means that, if good practice about self-description and namespace documents is followed, any information communicated through the Web is capable of being communicated to everyone who is on the Web as a receiver.

Information Resources and Representations:

The idea of information resources was a bit controversial, but essentially an information resource is anything whose "essential characteristic can be conveyed in a message" (TAG, 2004). What exactly does that mean? A message on the Web generally uses a computational encoding scheme (bottoming out in Shannon's bits) for some content. Now, when looking at the source of a web-page, one generally takes as "the essential characteristics" those parts of the web-page that convey whatever content one is trying to discover by visiting the page - i.e. one does not usually care about the color of the text, but about what the textual content means. So the real point of an information resource is not the encoding, but that its content can be delivered by the representation. Since a representation by itself is often indecipherable and so cannot convey the information, the follow-your-nose principle can then be followed to media-types, namespace documents, standards, and applications that allow the content to be decoded and so delivered with "reasonable" certainty to the receiver.
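
For example (a rough sketch with a hypothetical URI, using Python's urllib to stand in for whatever user agent is doing the fetching): the receiver does not guess at the decoding, but follows the media type advertised along with the representation.

    from urllib.request import urlopen

    uri = "http://example.org/richard-wins"  # hypothetical URI

    with urlopen(uri) as response:
        media_type = response.headers.get_content_type()         # e.g. "text/html"
        charset = response.headers.get_content_charset() or "utf-8"
        raw_bytes = response.read()                               # the encoded signal

    # Only once the advertised media type (and character set) are known can
    # the bytes be decoded into content the receiver can understand.
    if media_type in ("text/html", "application/xhtml+xml"):
        print(raw_bytes.decode(charset)[:200])
    else:
        print("No decoder for", media_type, "- follow your nose to its specification")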

Accessibility

A representation is a particular encoding of the content, and a resource is the content itself, which stays the same over all possible encodings. So one can have one resource (the Eiffel Tower web-page) and multiple representations (one for a mobile phone, one in French, one in English, and so on). Since to communicate content we need the channel conditions set up properly, it is the responsibility of the resource owner to make sure their content is accessible to the range of agents out there (by having multiple representations) and to make the content clear (a resource should provide links to tools to decode representations, a Semantic Web representation, and so on). The point of a resource is not just to have content, but to send the message conveying the content successfully across the Web. If this is unsuccessful, it is unclear how one can say the web-page is conveying any information at all. However, since Web architecture cannot guarantee success, all it can do is provide guidelines that maximize the chances the content will be delivered across the Web. The problem is that, since everyone possesses different knowledge and there is a wide range of encoding schemes out there, there is no guarantee that the information content will always be conveyed. We can only try our best and encourage others to do so.
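
One concrete mechanism for this is HTTP content negotiation. A small sketch (the URI and the available representations are purely hypothetical) of a receiver stating what it can decode and in which language, and letting the resource owner's server pick the best representation it has on offer:

    from urllib.request import Request, urlopen

    uri = "http://example.org/eiffel-tower"  # hypothetical resource

    # The receiver states which encodings and languages it can handle; the
    # resource owner's server picks the best representation it has on offer.
    request = Request(uri, headers={
        "Accept": "application/xhtml+xml, text/html;q=0.9",
        "Accept-Language": "fr, en;q=0.8",
    })

    with urlopen(request) as response:
        print("Served media type:", response.headers.get_content_type())
        print("Served language:  ", response.headers.get("Content-Language"))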

Bibliography: