Reconsidering Representations: Draft Notes

Introduction

One question has always haunted the cognitive and computer sciences: "What is a representation?"

This question is no idle philosophical speculation; it cuts to the heart of this thesis, since we are proposing a distinctive system for dealing with representations on the Web: a dual typing, with one type for "data inside the computer" (a la XML Schema) and another for "what the data represents from a particular point of view" (a la the Semantic Web).

For nearly twenty years there has been a giant anti-representationalist backlash in cognitive science, from neural networks to embodied robots. The most successful progeny of this movement is currently machine learning, in which machines use statistical modelling to discover patterns in data beyond what any human observer could consciously detect. For the anti-representationalists, victory seems assured, at least inside the fields of cognitive science and natural language processing.

However, outside these academic fields, the world is rapidly filling up with representations. The anti-representationalists have yet to deliver something that can be classed as a genuine artificial intelligence. Indeed, whether the overarching goal of AI has failed or merely remains on the distant horizon, the most significant computational phenomenon so far is not AI but the World Wide Web. And the Web is absolutely rife with representations; the Web can itself be considered a universal network of representations. After all, REST stands for Representational State Transfer. The question is: can the Web look to artificial intelligence and cognitive science for a theory of representations?

Brief History of Representations in AI

However, neither philosophers, AI practitioners, nor cognitive scientists have yet provided a satisfactory theory of representations, even though for the most part these fields implicitly rely on a notion of representation. Chomsky avoids the problem by relegating it to another system (the 'conceptual system'), allowing the computational system (the 'syntax') to be studied autonomously. AI originally based itself directly on symbols, as stated by the famous Physical Symbol System Hypothesis of Newell and Simon. This hypothesis stated that the physical manipulation of symbols according to their syntax was enough to give rise to intelligence. AI researchers literally implemented the logical atomism of analytic philosophy by programming a computer with a set of logical formulae representing "common-sense" knowledge and an inference engine to operate over them (McCarthy). However, as shown by the critique of Dreyfus, the tradition of analytic philosophy was found wanting, due to serious problems involving objectivity (the explanatory gap), temporal change (the Frame Problem), connecting the symbols to their meaning (the Symbol-Grounding Problem), and the use of Tarski's semantic theory of truth as a theory of meaning. As shown by the development of knowledge representation systems such as KRL, these symbolic systems ended up either being too complex and therefore intractable, or so simple that the system was incapable of expressing (representing) much of the knowledge it needed to model, as with KL-ONE and the subsequent description logic systems. Analytic philosophy and classical artificial intelligence did not provide a theory of representations; rather, they assumed one, the theory of representations based in logic and analytic philosophy, and found that theory wanting.

Outside of linguistics, representations fell upon hard times. The views of the Chilean biologist Humberto Maturana rose to prominence; in his Autopoiesis and Cognition, he argued that systems should be characterized by their domain of interaction and their composition, and that representations per se need not, or should not, be involved at all. Several philosophers tried to rehabilitate some theory of representations on a hermeneutic basis, in particular Brian Cantwell Smith, while others like Fred Dretske drew inspiration from information theory. The most ambitious project was Barwise and Perry's situation semantics, which hoped to base a linguistic model theory of semantics on "real" situations instead of truth values; yet this project ultimately failed to have a computational implementation, since much of the power of their theory lay in their acknowledgement of context, a concept that has proven impossible to formalize. The problem with formalizing context is that one never knows exactly how to separate "context" from what is explicit, or when to stop formalizing context, and so without any finite limits the formalization fails. Yet philosophers such as Andy Clark acknowledge that while far less of intelligent activity seems to be based on representations than previously thought, there are certain problems that are "representation-hungry," and that only something like a theory of representation can possibly explain.

Reviving Representations

First, as noted by Brian Smith and Humberto Maturana, it would be a mistake to assume almost anything about representations. If we created a theory of representations that assumed an independent world easily sliced into coherent objects, sets, and properties that ever so conveniently mapped onto mathematical set theory, then we would obviously be making an inscription error. An inscription error occurs when we take our own particular theory of the world as a universal and objective assumption, rather than providing a theory of how those assumptions came to be. Still, we have to be grounded in some starting point. Brian Smith said that we must be simply "grounded simpliciter," yet this standpoint hopes that the theory can pull itself up by its bootstraps, and it leads to confusion.

The Flux and Systems

We take as our grounding point the flux, the vast undifferentiated world. The flux exists without subjects and objects, without any thing in particular. It is predominantly immanent; there is no transcendence or abstraction. The question is how anything can arise from the flux. The only assumption we can make about the flux is that it exists in space-time.

If the flux exists in space-time, then some aspects of the flux will be connected to other aspects, while other aspects remain disconnected from each other. Since these aspects exist over time and change, they can be considered processes. We use Brian Smith's story of registration to explain how certain processes of the flux are physically connected, and so begin to maintain some invariance or become a regularity. These parts of the flux can therefore be considered systems composed of processes. As Maturana says, many of these processes are autopoietic, resisting change in order to maintain their own invariances. However, Maturana's theory of autopoietic systems has a crucial mistake: it seems to purport that these systems are static in terms of their constituent processes. This is false; as argued by the philosopher Andy Clark in the "Extended Mind Hypothesis," the boundaries between systems and the rest of the world are quite arbitrary from the functional point of view.

Coupling and Presentation

Therefore the key to these systems is that they are not self-contained, but constantly in a process of maintaining some invariances by physically coupling with and transforming other systems. In the "drift and vicissitudes" of earthly existence, systems keep changing their boundaries, physically coupling with another system (one would almost say "incorporating" it) and then releasing it, often with that process of coupling changing the structure of the incorporating system. When two systems couple, they have a presentation, i.e. they are "present" for each other. What is remarkable is that for these instants of "presentation" the two systems are, from the standpoint of the physical composition of the systems, indistinguishable from each other. Between us, there is literally nothing: we are the same system. If only for an instant.

For an example, let us inspect how I form a representation of the Eiffel Tower. When I lock my eyes upon the Eiffel Tower, there are, as Brian Smith would note, invisible to my eyes yet physically existent atoms connecting me to the Eiffel Tower. As recent work in visual cognition shows, the Eiffel Tower is not somehow encoded onto light rays and beamed inside my head, where my brain draws some representation of it (Noe and O'Regan). Far from it: at the moment when I glimpse the Eiffel Tower, the Eiffel Tower and I are literally one. Then the light changes, and the Eiffel Tower changes hue. Yet, as Maturana would note, this is not exactly true: there is no independent changing of hue at all, which is then sent as a signal to my eyes. What happens is that the system composed of myself, the light and other atoms connecting us, and the Eiffel Tower changes composition, and this change is reflected in my phenomenology as a "change in hue."

Correspondence and Representation

The plastic power of some systems to reconfigure themselves and couple with others is inordinate, so that they are constantly connecting and coupling with other systems, or disconnecting from them and compensating. Some connections are fleeting and passing, such as the easily forgotten incidents of our lives. Other physical connections are long yet consciously undetected, such as having a splinter in one's toe. One can be physically connected, and so present, and not even engage in representation. However, there is a class of physical couplings that causes a reconfiguration of one or possibly both systems, such that the systems may disconnect but the reconfiguration stays. This reconfiguration is not magical, but physical. In some systems, certain processes, after contact with other systems, configure themselves so that they are in correspondence with the connected system, so that when the systems disconnect, the correspondence remains. This is how representations form, such that the presentation may be "re-presented" again.

So I change my gaze and look away from the Eiffel Tower, or the Eiffel Tower disappears into a puff of smoke due either to some terrorist attack or a surrealistic slip in reality. Either way, the connection is broken, and now the Eiffel Tower is no longer part of my system. Or is it? Has not the system that composes me somehow changed? For my system is more plastic than imagined: in the act of physically connecting with another system, the system that composes me has undergone structural adaptation, and now, assuming it was a truly beautiful day outside, the memory of the Eiffel Tower is emblazoned upon me in such a way that when I close my eyes, I can summon up a somewhat fuzzy, perhaps overly idealized, rough and ready depiction of the Eiffel Tower. I can write this depiction down, talk about it to my friends, draw a picture of it. In other words, both internally and externally, I can represent the Eiffel Tower. This comes down to the fact that at some distant point, the point of presentation, the Eiffel Tower and I were one. When we were one there were physical effects, as often happen when earthly things bump into each other. In fact, not surprisingly, these changes reflected a correspondence between the two systems. Now, the Eiffel Tower was likely not able, due to its lack of plasticity, to represent me. But some part of my system, and one suspects the neurons, aligned themselves in reaction to the Eiffel Tower and stayed that way. So when I leave, I can invoke a mental picture of the Eiffel Tower at will. For the contact with the Eiffel Tower has changed me, and like it or not, due to the plastic nature of my brain, I am now partially constituted by a representation of the Eiffel Tower somewhere.

The Flow of Information and External Representations

There is no magic in the system, since all the interactions are resolutely physical, but there is change. Systems change, and when they change, representations often change. The story up till now has two systems connecting, having an initial presentation, and then disconnecting, with possibly one or both systems having a representation of the connection. However, this story is far too simple. First, even if one of the systems has a representation, that representation may change with the rest of the system. It could be invigorated and added to, or, more likely, it will decay over time as the system is buffeted by ever more connections and couplings with other systems. However, the system can use these connections with other systems to maintain the representation. First, the system can position itself in such a manner that it connects with the other system again and again, keeping its representation of that system fresh. Second, the system can position itself to connect with other systems that are also plastic, and change them so that these other systems too are in correspondence with the representation. This second option can be called the flow of information, while the invariances of the presentation and representation can be considered the content of the information. Information may exist at various stages of abstraction, where some of the rich information of the original presentation is lost through the process of representation. It is lost because the representing system by its nature cannot maintain a perfect correspondence with the represented system, since the two systems are differently constituted, with separate histories and connections.

After visiting the Eiffel Tower, I can draw a depiction of it on a sketch pad. I can write a description of it in a document on my home computer. I can even find some clay and model a small Eiffel Tower myself, or, if I am lazier, I can purchase one in the tourist shop nearby. I can talk to my friends about it, enunciating descriptions that transfer information about the Eiffel Tower to them. Regardless, I am soon surrounded by the aftereffects of the initial presentation of the Eiffel Tower in the form of external representations. These representations may bear only the slightest correspondence to the original presentation, for perhaps I drew it badly off-scale, and in a delirious state I wrote down overly romantic embellishments of my brief viewing of the Eiffel Tower. Regardless, as long as there is some correspondence between my representation and these other systems (such as the clay, the paper of the sketch pad, the document on my computer), they are representations. And they are less than perfect, since they convey only approximations of my original presentation of the Eiffel Tower.

Interlude...

The Web is absolutely overflowing with digital representations. There is clearly something that makes the Web and computers extraordinary as information-bearing devices, given that they so overflow with representations. The reasons are threefold: the first has to do with the plastic power of computers, the second with the special nature of digital representation, and the third with the universal naming ability of the Web.

Computation

Computers and representation have been considered separate things: as pointed out by Fodor, there are representations, and then there are computations that manipulate and change representations. The main question with computers is how they can have a representation at all. After all, the only presentation many computers have is with the fingers of whoever is typing at their keyboard. The answer to how computers possess representations lies within that very presentation. First, the system of the computer has some physical connection with another system, the human using it, and through this connection some aspect of the computer is structurally adjusted to be in correspondence with the human. If the human is typing, the letters will appear in an electronic document, and then be stored in the remarkably plastic memory of the computer. Therefore, if the letters are in some correspondence with the human's representation, the content of the information has been successfully transferred, and to some extent the computer can be said to possess a representation. One can even say that humans are the transducers for computers, bringing them into contact with the world. The second route for giving computers access to representations is to increase their bandwidth beyond the keyboard and their connections with other computers by giving them richer sensors. Computers are special insofar as they are extremely plastic, that is, they can easily be given representations by having their physical constitution changed via manipulation of their memory. The second notion of interest about computers is their remarkable ability to manipulate representations, i.e. to compute, but this is currently outside the main thrust of the argument.
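As a minimal sketch of this human-as-transducer idea, assuming nothing more than standard input and a file on disk (the filename below is purely illustrative), the coupling happens at the keyboard and the correspondence persists in the computer's memory after the human walks away:

    # The human couples with the computer through the keyboard; the computer's
    # plastic memory retains a correspondence (the typed text) after the
    # coupling ends. The filename is illustrative only.
    description = input("Describe the Eiffel Tower: ")

    with open("eiffel_tower_note.txt", "w", encoding="utf-8") as note:
        note.write(description)

    # The representation now persists independently of the human who typed it.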

Digital Representations

The aspect of computers that is even more interesting is that their representations are digital, that is, encoded in a discrete manner. Furthermore, we today associate representations implemented on computational systems with resistance to change, provided the computer does not break. Encoding is an important notion, for every representation has some informational content that has to be implemented on some physical substrate in order to be represented. The theory of communication proposed by Shannon is, properly speaking, a theory of the communication of informational content through encoding. For Shannon, the informational content is the reduction of uncertainty a message can deliver, while the optimal encoding is always in bits. In the world, encoding is much more complex. With the hypothetical drawing of the Eiffel Tower, I encode my representation of the Eiffel Tower in a set of pen strokes on paper, while the hypothetical description of the Eiffel Tower encodes the representation of the Eiffel Tower in words, which may in turn be encoded in sound waves through speech or, if written, on paper or in computer form. One would even assume the mental picture I can dimly bring up of the Eiffel Tower exists encoded in some neurons. What is interesting is that the information content itself may have multiple forms of physical encoding, and the encoding itself may exist at several layers of abstraction. Digital encoding is so resilient to change because of the unique nature of digital abstraction, which maps information content to physical encoding in a one-to-one mapping. Therefore, if the physical encoding can be maintained, the information content can be maintained. In such a manner the physical encoding of bits can accurately map the twenty-six characters of the English alphabet, and any document written using these bits can be maintained if the configuration of bits is preserved. These abstract bits are in turn encoded on top of very concrete and analog spikes of positive and negative electromagnetism at physical locations on a hard drive. The mapping process abstracts over noise, mapping from the richer information inherent in the analog to the more precise and replicable information states available in the digital. For example, any sampled voltage within a certain range near zero (say, between zero and two volts) could be read as a binary zero, while any sample within a higher range (say, between three and five volts) could be read as a binary one, with anything between two and three volts triggering a resampling.
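A minimal sketch of this thresholding, assuming the purely illustrative voltage ranges above rather than the behaviour of any real drive:

    from typing import Optional

    def sample_to_bit(voltage: float) -> Optional[int]:
        """Map an analog voltage sample onto a digital bit.

        Uses the illustrative thresholds from the text: readings near zero
        (0 to 2 volts) count as a binary zero, readings near the positive
        rail (3 to 5 volts) count as a binary one, and anything in between
        is ambiguous and signals that the sample should be taken again.
        """
        if 0.0 <= voltage <= 2.0:
            return 0
        if 3.0 <= voltage <= 5.0:
            return 1
        return None  # ambiguous region: resample

    # Noisy analog readings collapse onto clean, replicable digital states.
    readings = [0.3, 4.8, 1.9, 2.5, 3.2]
    print([sample_to_bit(v) for v in readings])  # [0, 1, 0, None, 1]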

I decide to scan my picture of the Eiffel Tower with my scanner so it may be preserved. I figure that I am likely to lose the scrap of paper sooner or later, but that if I can "digitize" my artwork I am more likely to keep my representation preserved. The scanner acts as a transducer into the computer, and creates a digital representation of my picture. Due to the quality of the scanner, I can probably not tell with human eyes that I have lost any information in the mapping process, but information certainly has been lost, as would be noticed if I had a microscope to compare the representation on the screen with the representation lying on the scanner bed. Also note that the transformation from representation to scanner proceeds in a remarkably similar fashion to my original presentation and representation of the Eiffel Tower, for the moment my representation on paper of the Eiffel Tower is put into the scanner, the drawing, the scanner, and the computer make a physical connection and become one system, with certain components of the computer corresponding, after suitable digital encoding, to the piece of paper in the scanner. Once the paper is removed, the digital representation of the computer's presentation with my drawing, which also corresponds with my representation of my original presentation with the Eiffel Tower, remains. There are multiple levels of encoding at work in my picture, from the encoding of my representation as pencil lines, to those lines as patterns of bits interpreted according to the JPEG standard, to those bits as stored on my hard drive and in computer memory. My new digital representation is now easily manipulable, and I begin by adding in elements of color I remember from my mental representation that were not available with my previous pen. For when one encoding moves to another, while some information may be lost, some forms of encoding allow information to be added back.
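A toy sketch of the information loss in such a mapping, assuming the scanner reduces continuous light intensities to 256 discrete grey levels (an illustration of quantization, not the behaviour of any particular scanner or of the JPEG standard):

    def quantize(intensity: float, levels: int = 256) -> int:
        """Map a continuous light intensity in [0.0, 1.0] onto one of a
        fixed number of discrete levels, as the analog-to-digital step of
        a scanner might."""
        intensity = min(max(intensity, 0.0), 1.0)
        return min(int(intensity * levels), levels - 1)

    # Distinct analog intensities can collapse onto the same digital value;
    # the difference between them is the information lost in the encoding.
    print(quantize(0.50000), quantize(0.50001))  # both print 128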

Naming and the Web

Once digitized, a representation can be given a name. Indeed, the process of naming is just another way of shuffling representations, albeit a very abstract one, since there is usually no correspondence between a name and the named system. While all sorts of non-digital representations have names that make sense within a given context, such as "Mona Lisa" within an art history class, what is truly remarkable about the Web is that there is one unique naming system, the URI, that allows any representation to be uniquely identified within the context of the Web. From within that representation, links can be made to other URIs to create a web of linked representations. These representations are then available to everyone with access to the Web, and, assuming standards-compliant software, the Web allows everyone who accesses the URI to be presented with the same representation. In this case the URI does not merely reference the representation, as a name ordinarily does, but, with the correct processor such as a web browser, actually delivers possession of the representation. In this context the Web actually fulfills much of the promise of communication. The wonder of representation is that it allows us to share information, and so expand the range of systems that we can couple with; the Web provides a remarkable technological scaffolding for this, with no respect for space and time, and with the ability for a representation to be re-coupled with again and again simply by invoking its URI.
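A minimal sketch of this dual role of a URI, as a universal name and as a handle that, when dereferenced over HTTP, yields the representation itself. The URI below is hypothetical (example.org is a reserved example domain and serves no such resource), so the request is wrapped to fail gracefully:

    import urllib.error
    import urllib.request

    # A hypothetical URI naming the digital representation of my drawing.
    uri = "http://example.org/drawings/eiffel-tower.jpg"

    # As a name, the URI uniquely identifies the representation within the
    # context of the Web. Dereferencing it turns reference into possession:
    # any standards-compliant client retrieves the same representation.
    try:
        with urllib.request.urlopen(uri) as response:
            representation = response.read()
            print(len(representation), "bytes retrieved for", uri)
    except urllib.error.URLError as err:
        print("could not dereference", uri, "-", err)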

Thoughts on the Semantic Web

The main problem with computation before the Web was that the meaning of what the various bits in a program represented was all implicit. For example, we would often have code that stated something like name = "Edinburgh". From a "let's pretend it's English" point of view, any programmer can tell you that the value of this particular string is "Edinburgh", and that the type of value it holds is that of a name. However, and here is the key, name is merely a useful mnemonic for the programmer to remember what relationship the string's value has with the rest of the world outside the computer. To the computer, name has nothing to do with its mnemonic usage for the programmer, and could very well be string a = "Edinburgh", assuming a was a free variable name. However, to the programmer it is important, as this name can be used to reference a data structure that keeps track of, for example, a list of tourist locations in Scotland. There could be other references to Edinburgh on the same computer, such as the use of "Edinburgh" in a document describing the University of Edinburgh, or the balance of certain bank accounts held in "Edinburgh." There is also a host of implicit information about Edinburgh: that it is a city, in a region known as Scotland, part of the United Kingdom, and so on, and, importantly, that it may be disambiguated from another "Edinburgh" such as "Edinburgh, North Carolina" in the United States. That is a lot of implicit information, and while we are dealing with one computer and one program, that information may be kept implicit, even though it makes identifying the differing occurrences of Edinburgh difficult at best, even when only dealing with one machine.
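A sketch of this point in ordinary code, with hypothetical variable names: to the machine the three strings are indistinguishable values of the same type, and only the programmer's mnemonics carry the intended, implicit relationships to the world.

    # Three occurrences of the same string, each with a different implicit
    # meaning that lives only in the programmer's head. The variable names
    # are hypothetical mnemonics, not anything the machine understands.
    tourist_city_name = "Edinburgh"         # the city in Scotland
    university_name_fragment = "Edinburgh"  # part of "University of Edinburgh"
    bank_branch_label = "Edinburgh"         # the branch holding an account

    # To the interpreter they are identical values of the same type.
    print(tourist_city_name == university_name_fragment == bank_branch_label)  # True
    print(type(tourist_city_name))  # <class 'str'>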

When dealing with the Web, if we mention "Edinburgh" in a particular representation such as a web page, all is fine if our human readers can disambiguate what the author meant, and if not, it does not affect the rest of the Web. However, if we want to use string name="Edinburgh" on the Web to transmit information from machine to machine, we need to give it a unique name, a URI. No longer can we be implicit in our terminology; we must also give the rest of our terms, and the relationships between them such as name, URIs with explicitly defined meanings and a model-theoretic semantics capable of being processed by computers. This is the crucial difference from data types in XML Schema: a type such as xsd:string denotes in a universal way exactly what was denoted by string earlier, namely that this data is of type string, a type of data used by many programming languages and databases and so easily interpretable by the computer. What Semantic Web ontologies encode, such as http://anotherOntology/city http://myOntology/name "Edinburgh"^^xsd:string, are the denotations that data has in the world, in a machine-readable manner. It is an attempt to represent the relationships among our representations, and to name all the various representations. As a corollary, while it may succeed as a universal encoding for the model-theoretic semantics of representations on the Web, where it is doomed to fail is by virtue of the multiplicity of representations out there. Since there are literally countless ways to register the flux into representations and abstract over them, a coherent universal system will never emerge. Even naming will remain controversial, although one suspects that some top-down ontologies might converge, and that bottom-up ontologies shared among people who "cut the world up the same way" might emerge.
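A sketch of the statement above written out as an explicit (subject, predicate, object) triple in plain code, with no RDF library assumed. The two ontology URIs are the hypothetical ones from the text; the datatype URI is the standard XML Schema one, making the literal's type explicit and universal rather than a programmer-only mnemonic:

    # The hypothetical ontology URIs from the text, plus the standard
    # XML Schema datatype URI for strings.
    XSD_STRING = "http://www.w3.org/2001/XMLSchema#string"

    triple = (
        "http://anotherOntology/city",    # subject: the resource for the city
        "http://myOntology/name",         # predicate: the "name" relationship
        ("Edinburgh", XSD_STRING),        # object: a literal with an explicit datatype
    )

    # Printed roughly in N-Triples style: the literal carries its type with it.
    subject, predicate, (value, datatype) = triple
    print(f'<{subject}> <{predicate}> "{value}"^^<{datatype}> .')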

These intuitive results actually share a lot with both the anti-representationalist and representationalist arguments, and differ from both in important ways. First, I agree that humans are often poor at representation, and that we often get along in everyday life in such arenas as perception and sensorimotor ability without much use of representation. However, that is exactly why humans create systems that deal quite heavily in representation, such as books and computers. This allows humans both to manufacture and to specify representations, and the connections between them, in increasing detail. Often this is for our own private use, such as the string around our thumb to remind us of the doctor's appointment. However, that is the "edge" case of representation, as external representations exist to be shared among a variety of systems. Instead of studying the private representations that may or may not be hidden in "mental pictures" or in grammar, cognitive scientists and philosophers would do better to look at the immense world of public representations, representations that exist to be shared. Indeed, human evolution can be seen as the deepening and further externalising of representations, and our increasing dependence on them. The Web is just one further step in the evolution of sharing representations, and the Semantic Web just one further step towards a universal encoding for the sharing of representations among machines.