The Intelligent Image
The content of my lecture will be a reinterpretation of the last hundred years of cinema, or the cinematic code, followed by my projection of what I imagine should be the direction for the next hundred years. The title of my lecture is "The Intelligent Image." Naturally, it is a paradox, as an image cannot be intelligent, but I will demonstrate how we have transformed the image machinery, as well as the content of the image, to the extent that we might ultimately see in the future a so-called intelligent image. While we have recently celebrated the hundred years of cinema, referring to 1895 as the official birthday of the cinema - of that which we came to call the cinematic apparatus, or the cinematic code, referring to 'le cinématographe' of the Lumière brothers - we must acknowledge that the cinematograph was the final product of an extremely long line of research. Hundreds of people, hundreds of scientists (physicists, opticians, mathematicians, inventors, amateurs, as well as some artists), worked for nearly a century to produce this machine, the cinematograph. Thus, we may say that if 1895 was the birthday of the cinema, then the 19th century was pregnant for nearly a hundred years before it was able to give birth to this machine. This hundred-year pregnancy started, in fact, in 1824, when a doctor of medicine, Mr. Roget, described the persistence of vision. The persistence of vision described by Roget, who is the same person who later gave us the famous thesaurus, made possible the development of a specific kind of machinery which produced optical illusions. Therefore, the cinematic machine was above all a machine of vision. One of the great Polish filmmakers, Wojciech Bruszewski, also started with machines of vision, but is now making a dictionary of phraseology, a machine which creates texts, because there is always a link between automatic machines of vision and automatic machines of letters. 
Roget not only described the persistence of vision, but also gave us the first great dictionary, the famous 'Thesaurus.' Upon the discovery of what we call today the after-image, or after-effect, produced by the laziness of the retina, another scientist, Michael Faraday, built his first machine in 1830: the so-called Faraday disc, an optical disc for optical illusions. Following the path of Roget and Faraday, we will make a kind of parallel listing.
It was Plateau, a Belgian physicist, who invented the phenakistoscope in 1833, and an Austrian mathematician, Stampfer, who in 1834 invented the so-called stroboscope. Both of these inventions rested on physiological discoveries. The beginning of the cinema, the stroboscopic principle, was first discovered by ophthalmologists. Thus, the cinema was first invented by doctors of medicine; a physiological discovery was the basis for the cinema. During the next fifty years, technologists attempted to construct various machines to find the one that would take advantage of this physiological discovery. From where did this idea of discovering the laziness of sight come? It is quite clear that the discovery of the laziness of the eye originated in England, as this was the home of the Industrial Revolution. The people there discovered something very strange: i.e., that machines are faster than the human body. They discovered a terrible gap between human performance and machine performance. Machines became more reliable, faster and more precise. Suddenly, people were compelled to think: what is a body compared to a machine? They had to adjust the body to the machine. Therefore, it is very significant that in 1895 not only was the cinematograph presented to the public, but Mr. Taylor, for whom Taylorism is named, gave his first lecture. How do you measure in capitalist terms the speed of the human body? Hence, the concept of the body as machine was developed simultaneously with the advent of machines - machines of motion, like trains - in the Industrial Revolution. The original machine of vision turned, under the spell of the Industrial Revolution, into a machine of motion. 'Motion picture' became the name of something that was initially a vision machine, because the vast realm of optical illusions became reduced to the creation of the optical illusion of motion.
The comparison between the competence of the body versus that of the machine gave birth to what is called experimental physiology, experimental psychology or experimental medicine. It was the famous Claude Bernard who wrote the book "Introduction à la médecine expérimentale," two years prior to "Das Kapital" by Karl Marx. This book, published in 1865, was a synthesis and a foundation of experimental physiology and psychology. The science of experimental physiology and psychology attempted to discover how fast the eye operates. The temporal aspects of the performance of the body became extremely important in the age of Taylorism. Machine-controlled time became the universal model for every temporal activity. At the end of the 19th century they even coined the term 'reaction time,' as they needed to investigate how much time the body needs to react to the processes of machines.
We had the advent of locomotive machines - industrial machines of motion - in the industrial civilisation. The task, then, was to construct machines of motion that could translate the discovery of physiology. Hence we may observe that the 19th century was obsessed with motion - in two steps: not only illusions of motion, but also machines of motion. And these machines of motion were of two kinds: the first kind tried to analyse motion; the other kind tried to synthesize motion. The analysis of motion was the task of the camera, and the synthesis of motion was the task of the projector. And the cinematograph was both: a camera and a projector.
Subsequently, we can see what came after experimental physiology. Starting around 1900 came Gestalt psychology, reaching its peak between 1930 and 1950. It was, for example, Max Wertheimer who formulated in 1912 the famous 'phi-phenomenon,' a classical principle on which many filmmakers of the 1960's built their oeuvre, reintroducing it as a cultural discussion. After Gestalt psychology came neuroscience and cognitive science. When we follow this line, we can see very clearly that, while in the 19th century machinery was related to experimental physiology, the new machines of vision have come to be related to neuroscience and cognitive science. The evolution of the cinema in the 19th century can be described in two major trends: (1) the progress in experimental physiology and psychology leading to the Gestalt psychology initially formulated by Ernst Mach and Christian von Ehrenfels at the end of the 19th century, and then elaborated in the 1920's and 1930's by the Berlin school of Gestalt psychology of Köhler, Koffka and Wertheimer; (2) the progress in machines that attempted to adapt and to transfer the physiological mechanism of perception into machines capable of the simulation of motion and - herein lies the problem - not into machines of perception, but into machines of motion. Thus, what is called cinema today is in fact already a reduction of the principle of the 19th century, which began by investigating machines of vision, but finally reduced them to machines of motion. We have this industry named the moving image, i.e., the Hollywood system. The cinema itself is already a code; the cinematic code is already a heritage of the 19th century, and is a reduction of the initial journey. When people started on this journey into machines in the 19th century, it was about machines of perception. There is no reason to reduce these machines of vision to motion, as is done in the Hollywood system. 
Only the avant-garde cinema of the 1920's, 50's and 60's has maintained the original intention of creating machines of vision. The rest of the cinematographic code has been machines that could visually simulate motion. The camera analysed motion, and the projector simulated motion. To clarify this, I will refer to some of the objects upstairs in the exhibition in the Mucsarnok to prove my theory. Marey, who is seen upstairs, started not as a photographer, nor an artist, but as a doctor of medicine, a physiologist. His first paper, published in 1860, was about the circulation of blood in the human body - the motion of blood. Then, I believe it was in 1873, he wrote a book called "La machine animale." "Animal machine": very typical. The human body is a living organism. But he does not say it is a living organism; he calls it an "animal machine." As I said, in the 19th century there was a dictatorship to define the body as a machine within the perspective of production. Since a machine produces something, and since, according to the normative ideology of the 19th century, we are forced to compare the body with the machine, we also have to find a concept of the body as producing something. Within this dictatorship of production, we also slowly introduce the term 'reproduction.'
To make a side-step into the future: at that time there was the invention of the division of bodies. The productive bodies were the male bodies; then there was another kind of body, which was merely reproductive, and this was the female body. The difference between genders is not something that came to us naturally, something biologically based. The difference of gender was constructed as an ideological effect in the 19th century within the discourse of comparison between body and machine. Ideology would say: okay, here we have machines that produce something - these are the male bodies. Then we have machines that are just for reproduction - to reproduce human material - and since this reproduction is not as good as production, these are second-rate people - these are the female bodies. It is due to this socially constructed difference that women were excluded from the construction of modern society. Even later, following the progress of science, when the hormones were discovered, ideology started to conceptualize a hormonal body, i.e., how you can regulate hormones, as in the invention of the great Austro-Hungarian scientist who invented the Pill (Carl Djerassi); this is the trick, another technology of the social construction of the body.
The tools of our communication also have an effect on our subjectivity and on our body. But to return to Marey: he started, as I said, as a physiologist, and then made machines that could record motions. First, as you know, he used drawing machines, and after his acquaintance with the works of Muybridge, he used cameras. The point is to reinterpret Marey on a contemporary level. He wrote an article which he called 'La méthode graphique.' Thus, he introduced the idea of the graphic notation of motion, meaning we are to take literally the word 'cinematograph.' That means you have a writing - a Schrift - of motion. So cinema, in fact, is already a reduction of the initial enterprise, which was about perception. Perception is reduced to the perception of motion. It is not about how our brain perceives something, so it stays on a retinal level, and people constructed machines with a kind of graphic notation - 'la méthode graphique' - of motion. We can say that this graphical method is, tragically enough, still valid today. What he did then was to analyse motion, and to deconstruct it, with his famous graphical method, and there was no difference whether you used a drawing machine or a photographic machine, as was done by Muybridge. Both Muybridge and Marey realised very soon that it is not enough to analyse motion, but that you also have to use all these other machines, like the thaumatoscope, to project, to synthesise motion - to combine both. We may then conclude this interpretation with the fact that everything was invented in the 19th century. The 20th century only turned the inventions of the 19th century into mass media. Even television was invented then. What we did in the 20th century was to make it a consumer apparatus, and the side-effect of this machinery was that we turned it not only into mass media, but also into art, or an individual approach, at the same time.
The cinema is a writing of motion; it is just a machine that simulates motion for the eye. As I have said, in the avant-garde, from Vertov to the Vasulkas, they kept to the initial idea - as the Vasulkas in Buffalo called it 20 years ago: machine vision, not machine motion. When you go back to Vertov, he gave us the term Kinoglaz, the camera eye. He made a kind of delirious prose and poetry about the triumph of the machine eye over the natural eye. He referred to the camera that can see much better, is more reliable and faster than the natural eye. It was the avant-garde within this limited cinematographic code who kept to the idea of constructing not machines of motion, but machines of vision, especially in the 1960's. With the advent of video (Latin: I see), it was clear that we had to make a shift - a paradigmatic shift - from imitating and simulating motion to imitating and simulating vision, with the help of machines. We had to change from cinematography - the writing of motion - to what I would call the writing of seeing. That is what I could call, from the Greek word opsis (as in 'optics'), opsigraphy - the writing of seeing - or even opsiscopy, the seeing of seeing. That means observing observing mechanisms. For example, in cyberspace, when you see yourself and what you are doing as an image, you are already in the opsiscopic space, because you are observing yourself in a picture that you observe, so it is an observation of the second order. In fact, cyberspace is the beginning of opsiscopy: machines that see how we see. This is the beginning of the next hundred years. We start with cyberspace and with some pioneers of video and cinema, such as my old friends Bruszewski and Ivan Galeta, who showed us in their movies of the 1970's the collapse of motion and the observer-relativity of motion. 
The next step from this experimental avant-garde and from the new sciences is the exploration of mechanisms of perception, but mechanisms of perception not based on the eye, but on the brain. How does the brain construct, with the help of the machine called eye, the world? And how do we see the world, not on the basis of the eye, but on the basis of the brain? There is an Austrian, Alfons Schilling, who, working in the 1970's with the Vasulkas, already discovered in 1976 the concept of brain-scapes instead of landscapes, because he realised that when we look at an object, we do not see a natural landscape; we construct the objects in the landscape with our brain, and thus we may call it brain-scape.
This leads us to the first group who gave us an idea for the next hundred years: the cyberneticians. The cyberneticians followed this comparison between machine and man. It was even in the subtitle of the famous book published in 1948 by Norbert Wiener: the analogy between living organisms, animals, and machines. There was still this dictatorship of the 19th century, comparing the performance of the human body with a machine. They divided machines into three sections. Happily enough, they did not speak about motion anymore. So this is the basis for the future: machines that simulate perceptual processing - machines with receptors and with effectors; machines which are able to receive information, and then to put this information back into the environment through the so-called effectors. This is what we call today robotics. These are the new machines substituting the old machines of the 19th century. Then they came up with the next category of machines: machines that simulate thinking, from the calculating machine to the computer. This was a much higher step: machines which, similarly to the brain, simulate thinking, not seeing. Machines were compared to the highest body organ, the brain. Finally, already in 1950, they had the idea that we can build machines that simulate not only motion, but also life. There is a little-known but wonderful article about the apex, the height of machines, the so-called 'Machina Speculatrix,' the speculative machine, which is a creative, visionary machine, published by Grey Walter in 1950. The article, published in Scientific American, had a wonderful title: 'An Imitation of Life.' Here you have a clear indication of what it is all about. We do not have in mind the business of how to imitate motion; what we are doing now, in the next hundred years, is examining how to imitate life. The question is, on the basis of the 'machina speculatrix,' the speculative machine, how we change our concept of the image. 
Now we have to ask, naturally, how these machines were operating - how they can simulate life or simulate thinking. It is one part of what we called opsigraphy, or opsiscopy; this is one subsection of this large field. Naturally, when people were working on this 'machina speculatrix,' they realised that the highest degree of machinery is not a machine any more. That is interesting, just as we discovered that the highest degree of painting is not painting any more. It is the next step, it is new media: video, photography and film. So these people discovered the same: to make the imitation of life possible, they had to leave the machine level and go to the level of the system. They realised that only systems can imitate living processes and life, and only systems can imitate the thinking process. Hence, this was the next step: going into system theory, and as a subsection, thinking also of the image as a system.
In system theory we have the first big problem, which then leads us to the images: how do we define a system? People realised that there is the system, and there is the environment; so you have to come up with the idea of a border, which distinguishes between the system and the environment. When we look back, we can see what our borders are: we have the skin, then we have a membrane; what yesterday we called skin and membrane, today we call "interface technology." Thus, interface technology is the border that separates the system from the environment, or if you like, in classical metaphorical language, interface technology separates the image, or differentiates the image, from the real world. Naturally, when you have the interface, you know that this difference is not very clear; so this difference is not a strict border like a wall, but rather like foam. With our receptors, we are able to go beyond our border; we see something beyond our own body, and we have invented hundreds of telematic machines that go much further than our natural sensory organs can go, and have a much larger horizon of visibility than the horizon of things that we can see and process.
We have to change the idea of the image, which we always think about in terms of painting; suddenly, in fact, 'cinematographic code' is already the wrong word. The true cinematographic apparatus is not a machine any longer. The true cinematographic code is already a system, and a system has machine elements, but it also has software. We have a system consisting of a camera, a system consisting of projectors, of computers and a lot of peripheral interface technology. When you look, for example, at the work of János Sugár upstairs, you see an interface which is a light-dependent resistor, which is changing two databases: one database is local, but the other database is in the Net - it is non-local. So here the system is not even in the space any more. We have access to a system that is located in telematic space, in virtual space. In fact, the first thing we have to learn is that when we speak in our field about images, we do not speak anymore about a window as an image; an image has to be defined as a system. We need a theory of the border, a theory of interface technology, to separate a system from the environment, and to make exchanges between them. Therefore, I refer also to Sutherland, who in 1964 wrote an article about man, machine and interface. Here again you see the 19th-century relation between man and machine, but now there is a new concept: that we have no direct access between man and machine; the new concept is interface technology. There is something between us - between the world and us, between machines and us. There is a border - there must be a border - because if there were no border, there would be no difference between system and environment. The important point is that the border is permeable; it is permissive and variable. The border can be extended. What is in the environment now can be part of the system in the next step. What is in the system can be the environment for the sub-system. 
This means that when I am an observer in one system, I can be, for the next environment, for another observer, part of the system. Normally we believe, as in classical cinema, that we are external observers of the image; our observation has no effect on the image. But we have constructed systems where our observation does have an effect. For example, in my piece upstairs, when you stand in front of the wall and you look at the wall, and see that you are part of the wall, you see very clearly that your observation is part of the system that you see. It is seeing myself seeing in front of something. Naturally, I can step back and can also be part of the environment and not part of the system. We can see that the border between system and environment is variable - that is very important. We cannot destroy the idea of the border - we need it - but we can make it variable. Thus, the border is what we technically call interface technology. The interface makes possible a variable exchange of signals between environment and system. Therefore, in the future, there is actually a lot to expect from architecture, which is a classical field of internal systems, as in a house, and external environments, like a city. Architecture, much more than cinema, will invent a new interface technology. We should look at the classical field of architecture in order to get ideas about how we can develop and extend our interface technology.
When we are able to define the image as a system, then we have to introduce three concepts. We are compelled to see that the digital image is, for the first time, a real system. Cinema and video have been close to the idea of the system. Video was much closer than cinema, because there we speak about open circuits - open circular installations where people enter the image through the observation of the camera. But even in video, the information was locked magnetically, so information was not free - it was in a prison. The first point is that for an image as system, information is stored virtually. Virtual reality is based on the virtuality of information storage in the computer. The photographic image was locked and processed chemically in a way that made change difficult. Even in video, information was locked and stored magnetically. Today, information is just an electronic representation, which means you can change it immediately at any time. For the first time, we have the concept of simultaneity in the image system itself. The virtuality of information storage makes possible, as a next step, the variability of information content. Since you can immediately change each point of the image as a system, the content of the image is also variable. Instantaneously, the content of the image can change. Each point is a variable. The image becomes a system of variables, and this is the next important step. The image is a system which has peripheral machines, like cameras and computers, which are able to react immediately to your observation. So when you are part of this interface technology, by acting in front of the image, you can change it, you can modify it. Thus, the image is a dynamic system of variables.
Now, the next point I would like to make is how we define these variables of an image system. We define them as agents. This idea of agents has a long history, and it is interesting to go back to its roots. It was a mathematician, Axel Thue, who had a linguistic problem. He had a string of letters: let's say seven letters, from A to G, forming an alphabet, and he wrote down a formula, let us say with four letters. He had a grammar of two rules: A can always be transformed to B and A; and B can always be transformed to A. He was asking whether you can find - again, like Marey - a general method that would decide whether any written formula could be derived from the seven letters, the alphabet, by the two rules, the grammar. He found out that this "word problem" cannot be solved on the general level, but only step by step. The unsolvability of this "word problem" of Thue was later made famous by Emil Post. Chomsky, building on the semi-Thue system, made the first logical-mathematical models for language. And from this semi-Thue system of Chomsky, it was Backus and Naur who developed the first programming language, ALGOL - algorithmic language. Subsequently, it was Aristid Lindenmayer, the Hungarian-born biologist, who in 1968 invented the L-system - the Lindenmayer system - which was a clear application of the technology of Thue. This was a programming language, which he called a "genetic algorithm." Thue had said that when you have this word problem of finding out whether a chain of letters can be derived from the basic formula, you have to invent an algorithm, a general method. And Lindenmayer took up this idea, after nearly a hundred years of research, and created a genetic algorithm. With the help of this mathematical-linguistic model, he was able to simulate the growth of plants! 
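The rewriting idea behind Thue's grammar and Lindenmayer's L-systems can be sketched in a few lines of code. The following is a minimal illustration, not Thue's or Lindenmayer's own formalism; it reads the rule "A can be transformed to B and A" as A → BA, and applies both rules to every letter in parallel, which is essentially the scheme of Lindenmayer's original model of algae growth:

```python
# A minimal parallel-rewriting (L-system) sketch of Thue-style rules.
# Reading the rules quoted above as: A -> BA, B -> A (an assumption
# about how "B and A" is meant; Lindenmayer's own algae model is the
# mirror image, A -> AB, B -> A, and behaves identically in length).

RULES = {"A": "BA", "B": "A"}

def rewrite(word, steps):
    """Apply the rules to every letter in parallel, `steps` times."""
    for _ in range(steps):
        word = "".join(RULES.get(c, c) for c in word)
    return word

# Starting from a single A, the word grows without any geometry:
# growth emerges from the grammar alone.
for n in range(6):
    print(n, rewrite("A", n))
```

Starting from a single A, the word lengths grow as 1, 2, 3, 5, 8, 13, ... - the Fibonacci sequence - which is a small demonstration of how a purely linguistic rewriting system can simulate organic growth.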
It was not geometry; it was not machines of vision; it was a programming language built on mathematics which made it possible to simulate one aspect of life - which is what Grey Walter tried to say: the imitation of life. The cinematic code of today is not to imitate motion, but to imitate growth, and even chaotic growth, as an aspect of life. How the plant grows is defined by this genetic algorithm. The visual simulation of motion as one part of life is not enough for the future of cinema. Other aspects of life must be simulated too.
After Lindenmayer, John H. Holland wrote the best textbook about the genetic algorithm. He came up with the next step, i.e., what such systems would look like, and he called them "complex adaptive systems" - systems able to adapt to the environment. Up to now, we had the separation of the system and the environment, and there was exchange between them; but there was no idea of how the system could evolve itself, and how the whole system could grow and adapt to the environment. He just recently published a book, called "Hidden Order," about how adaptation builds complexity. Here he gives us seven concepts: aggregation, tagging, non-linearity, flows, diversity, internal models, and building blocks; and all these mechanisms or properties of a system make it possible for a system which is complex enough to adapt to the environment.
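The adaptive mechanism Holland formalized can be illustrated with the simplest possible genetic algorithm - a sketch, not Holland's own implementation: a population of bit strings adapts, through selection, crossover and mutation, to an "environment" given by a fitness function. All names and parameters here are illustrative choices:

```python
import random

random.seed(1)

# A minimal genetic algorithm in the spirit of Holland: a population of
# bit strings adapts to an "environment" defined by a fitness function.
GENES, POP, GENERATIONS = 20, 30, 60

def fitness(ind):
    # The environment rewards individuals with many 1-bits.
    return sum(ind)

def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: POP // 2]              # selection
        children = []
        while len(survivors) + len(children) < POP:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, GENES)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(GENES)          # point mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(fitness(ind) for ind in pop)

print(evolve())  # the population adapts toward the maximum of 20
```

No individual is told what to do; the adaptation of the whole population emerges from local operations on the variables, which is the sense in which such a system begins to show "intelligent behaviour."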
Naturally, this adaptation is carried out by so-called software agents, or so-called autonomous agents. So we can see that when we have defined the image as a system, with variables and agents, these agents have autonomous faculties. These software agents can make their own decisions within the algorithm about what to do, so that for the first time the system can adapt from within. And this process of adaptation is what I will call "intelligent behaviour"; or, the system as a whole, after being virtual and variable, becomes "viable." The image shows lifelike behaviour, i.e., viability.
Thus, I will introduce three concepts: (1) the virtuality of information storage; (2) the variability of information content; and, now comes the last step, (3) the viability of the image system. What Grey Walter wanted - to make a machine which imitates life - today we are developing: systems with acoustic events and with pictorial events. Thus, the image as a system behaves like a living organism. Our images of the most advanced status are not about the simulation of motion; they are, in fact, about the simulation of several aspects of life. When a system behaves like a living organism, then you can call this image not only an animated image; you can even call it an intelligent image. An image cannot be intelligent, but when you turn the image into a system, then it can demonstrate behaviour like a living organism and can have artificial intelligence.
This deduction, this basis for the next hundred years, shows us clearly what we have to learn from neuroscience and from cognitive science. For the moment, we already have two fundamental stratagems to follow. One is what I have just explained: the idea of the complex adaptive system. The other is the idea of the Net.
The basic principle underlying the Net, beyond all this euphoric chat, is that here we have, for the first time, a decentralised system. The idea of the Net, in fact, was not invented by the ARPA people in the military in the late 1960's; the idea of the Net originally came from neurophysiology, so again we have the same pattern. First came physiology, but it was low-tech physiology: blood circulation. Today it is high-tech physiology, which we call neuronal information theory. It was in 1943 that McCulloch and Pitts wrote an article called 'A Logical Calculus of the Ideas Immanent in Nervous Activity.' These people invented the idea of the Net, not the military people who came up with the technology of the Net. It is the same principle: there have been physiologists on a high level who have gone not into the retina, but into the brain, and it took many years, from Sutherland onwards, to develop the machinery to apply these new neurophysiological discoveries. The basic idea of the Net is what makes it so different from all the other systems before: we have more connections than knots (nodes). When you have a hierarchy, for example, you have a line: you have three subjects, which we call three knots, and only two connections. But when you take a square and define it as a net, then you have six connections and only four points. So you see, the basic rule is: in the Net you always have more connections than knots - more connections than subjects. This is a non-hierarchical system.
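The counting argument can be checked directly: in a hierarchy (a line) of n knots there are only n - 1 connections, while in a fully connected net every pair of knots is linked, giving n(n - 1)/2 connections - more connections than knots as soon as there are at least four knots. A small sketch:

```python
def chain_connections(n):
    # A hierarchy/line of n knots: each knot links only to the next one.
    return n - 1

def net_connections(n):
    # A fully connected net of n knots: every pair of knots is linked.
    return n * (n - 1) // 2

# The lecture's examples: 3 knots in a line, 4 knots as a net.
print(chain_connections(3))  # 2 connections for 3 knots
print(net_connections(4))    # 6 connections for 4 knots

# From how many knots onward does the net have more connections than knots?
print([n for n in range(2, 8) if net_connections(n) > n])
```

The last line shows that the net property - more connections than knots - holds for every net of four knots or more, which is why the square is the smallest illustration of the non-hierarchical principle.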
In the future, to have an intelligent image, we must apply the idea of the Net, which means the observer is not hierarchical anymore. With the classical image, we have the so-called visual hierarchical pyramid, because there was one observer looking at one image. What we have to try in the next decade is to make the image a system where the observer is only one knot - one knot in the net, in the image system as a net. Thus, the observer is not privileged anymore - he is just a peripheral interface machine, like other machines. When we know this, then we can remember that the cinema was in the beginning a single observer looking into one machine, and you saw the imitations of motion, and you only saw one film in a local environment. We have this definition: in the beginning of the cinema you had one observer, one film, one space - this is the classical form. Then slowly it was transformed, so that we now have the collective experience; but still, even when you go into a movie hall and have a collective experience, there is one film, locally in one room at one time: still the principle of oneness. Then comes the next step: television, where we have collective observation, but it is not one room; it is not local - it is in different rooms, but still seeing it simultaneously - still seeing one film - this is again a restriction. TV already has a net structure as its distribution model. The next step was, for example, in the Net, or will be in the Net, or in video-on-demand. We have a collective experience - simultaneous, dislocated - but seeing different films, because each one on the Net can see it at home, as a non-local peripheral, and each one can see a different movie from some kind of database.
Cyberspace as it is now, in a head-mounted display, is old-fashioned cinema, because again you have one observer, locally defined, one movie, one space, one time -- this is absolutely old-fashioned technology. What we have to do now is develop, over the next hundred years, the cyberspace technology to meet the demands we have today. We have to develop collective experience; we cannot go back to single experience. We have to develop a technology by which we can have a collective experience that is not only simultaneous: it must be simultaneous and also non-simultaneous; it must be non-local -- the image must be Net-based, based on information which comes from non-local sources. So we need the Net. Naturally, we do not need the Net we have today, which is technologically too primitive. What we have to expect over the next hundred years is what is called 'fibre optics.' A hundred years ago people built railways -- an incredible amount of work; now we lay these cables in the ocean, under the earth and in the air, and this is what we will be doing over the coming years: making fibre optics the net-base for this collective experience of other visual codes. Naturally, it must be telematic, and it must also be non-wired. As you see with the head-mounted display, there is still a wire going to the computer. The future must find a technology, learning from neurophysiology -- so that we could call the future film a neuro-cinema -- a collective experience of visual and acoustic data which is non-simultaneous, non-local and not wired.
Another thing we can learn from science is what effect these tools of communication will have on us as subjects. As I have said already, they are changing our status as observers. We can learn from quantum theory, which teaches us that reality is observer-relative: whatever you observe, you change it by your very act of observation. This is another step of our system theory. Quantum theory told us more than that reality is an observer-relative perception: with our act of observation we change the behaviour of the image, but with receptors alone we do not change reality. So we have to go from receptor technology (cameras), which is now at the low end, to effector technology. In the act of seeing, I want at the same time a machine that effects something in the outer world. Up to now we have developed only receptors, recording machines with which to record the world, to represent the world; and for decades now we have had this famous crisis of representation, a crisis that can only be solved when we develop a technology of effectors. The act of seeing changes not only the perception of reality and the perception of the image, but also the real world itself -- this is a basic proposal of quantum theory. Naturally, when in our case the observer is a machine, we realise that our reality is not only observer-relative but also machine-relative. Our observing machines, which we are developing now, from satellite TV to computers, are not only changing perception, not only simulating reality (simulating life) -- they are constructing reality. This change to reality as observer-relative and machine-relative, whereby machines, through the interface, can construct reality, finally returns even to us as subjects. While in the classical world we said 'know yourself' or 'express yourself,' it is clear that this is no longer valid in the world we are now constructing with the help of these machines.
It is also our subject which has to be constructed: we cannot say, I have a natural identity, I am a male or a female, and there is something in me, a kind of genetic code which I am developing, so that I only have to find a way to express myself, to know myself. We have learned from psychoanalysis that we will never know ourselves, because there is a subconscious, and that we are not even able to express ourselves. The only thing we can do is 'construct ourselves,' like the machines that can construct what they see.
These ideas, in fact, are not only coming up now in the cyberspace area; the natural sciences already had such vague ideas at the end of the 19th century, and they are being explored today. It is interesting to see that even in the science of the 19th century there were strange ideas about identity. There was a wonderful mathematician, Josiah Willard Gibbs, who died in 1903, and who wrote an article in 1873, unnoticed by the cultural world, almost at the same time as Marey. This article was called 'A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces.' He did not say 'a graphical method' like Marey, who was already old-fashioned. The point was to find a method of geometrical representation of thermodynamic properties, i.e., energetical properties -- it was not a still image. He was thinking about how he could represent a thermodynamic property, a property of an energetic system -- how he could represent energy. He did not want to represent motion or some particles; he wanted to represent an immaterial property -- by means of surfaces. So he came up with what we call today texture-mapping. He realised that the only thing you have is geometry: you can construct surfaces, and with the construction of surfaces you can get an idea of how to represent energy, a stream of energy. This is the type of mathematics we will need very soon to construct intelligent images. This article created what is called today the 'state,' or the 'Gibbs phase space,' which shows that space is not something continuous -- that space itself is a statistical property.
The next scientist who understood his idea was the famous Nobel prize winner Richard Feynman, who went even a step further. He saw that the viewpoint of an initial-value problem (the viewpoint of an inborn faculty: e.g., somebody grows up, is a male and has some faculties; he has an initial value -- he is born a middle-class male) has to be replaced by the state of a system. The state of a system is represented by a vector in its evolution in time. Gibbs worked only in geometry; Feynman said we have also to consider evolution in time. Instead of considering the motion of a particle from a point A to a point B, we have to work in phase space: motion has to be defined not as motion, but as transitions between different states -- this is a grandiose idea. We do not see anymore like the recording machines of the 19th century, which analysed motion in frames -- a very old-fashioned technology, still used today, dividing motion into frames. Wherever we speak about motion, we have to see it as a dynamic system in which we divide not motion but transitions of states. There is one transition of state, and then the whole system changes to another state, to another phase state -- this is the point. And he defined it in transition probabilities, because you can never really measure this space; you can only give it a probabilistic value.
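Seen as a dynamic system, the 'movie' is no longer a fixed sequence of frames but a set of states with transition probabilities. A minimal sketch, assuming an invented two-state system (the state names and probabilities are illustrative only, not from the lecture):

```python
import random

# Hypothetical phase states of an image system with transition probabilities.
transitions = {
    "calm":  {"calm": 0.7, "storm": 0.3},
    "storm": {"storm": 0.5, "calm": 0.5},
}

def next_state(state, rng):
    """One transition of state: the whole system moves to another phase state."""
    r, cumulative = rng.random(), 0.0
    for successor, p in transitions[state].items():
        cumulative += p
        if r < cumulative:
            return successor
    return state

rng = random.Random(0)
path = ["calm"]
for _ in range(5):
    path.append(next_state(path[-1], rng))
print(path)  # a sequence of state transitions, not of frames
```

The system is described not frame by frame but as a chain of probabilistic transitions between whole states.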
So we are left with probabilities for possible paths. In old-fashioned cinema, here is starting point A and ending point B, and we know clearly where this guy is walking. In physics it is not the same: we have many probabilities of possible paths. The one path that one particle takes is just a probabilistic average of all the many possible paths. He introduced what is called today the 'Feynman path integral': an integral over all possible paths one can take. For us it means that when you make a movie -- when you make optical work -- we have to leave the idea of linearity; we must give the observer the chance -- we have to construct possible paths. Not to say he does this or he does that, in a kind of causal circularity, going from this to that. We have to invent a technology which makes a possibility of paths, a stochastic probability model of paths, of movements. Only when we have this can we imagine that a spectator can make his own choice. We try it today, very roughly, with the so-called hypermedia, with CD-ROMs, when people can follow different narrative paths through a story. We can say, 'I follow this fellow in this direction; I follow this girl in that direction,' like in the work of G. Legrady upstairs. Legrady is an example of the status quo now, in that you can follow from one image to another, the computer somehow calculating, beyond the observer, which image follows the next. There is already a tendency to give the observer an undefined field, a probabilistic field of different paths. Then the observer makes the Feynman integral of the probabilistic values that are stored in the computer.
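As a rough toy analogue of this idea in code (the image names and transition probabilities are invented for illustration): the chance of arriving at image B from image A is not one fixed route but the sum, over every possible path, of that path's probability.

```python
# Hypothetical transition probabilities between images in a probabilistic movie.
paths = {
    "A": {"B": 0.5, "C": 0.5},  # from A: directly to B, or via C
    "C": {"B": 1.0},
    "B": {},                    # B is the end point
}

def reach_probability(start, goal, max_steps):
    """Sum the probabilities of all paths from start to goal (a 'path integral')."""
    if start == goal:
        return 1.0
    if max_steps == 0:
        return 0.0
    return sum(p * reach_probability(successor, goal, max_steps - 1)
               for successor, p in paths[start].items())

# Direct path (0.5) plus the path via C (0.5 * 1.0) together give 1.0.
print(reach_probability("A", "B", 3))
```

No single path is privileged; the result comes from integrating over all of them, which is the structural point borrowed from Feynman.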
This means that not intention but chance is the future of the image. Not the artist -- the artist in the best case should give a domain of probabilistic values; within this domain of values you can make your probabilistic path through the system. Not intention but chance determines the local path. In the cinema now you have one projector and one image -- a linear model. When you recognise that the source of the image is a vast field of probabilities, even of stochastic probabilities, then you realise how it is possible for many people to see different movies at the same time. My vision is that even sitting in a hall as we do now, we would have a technology that allowed each one of us to see a different movie. This is not possible when you have the images on celluloid; it is only possible with the virtual storage of information in a quantum computer. The quantum computer gives us the chance of stochastic access to the information. Then each one of us has a set of variables -- naturally it is not infinite, but each one of us can see a different movie coming out of the same source of probabilistic values. This is the idea of the Feynman integral in the optical field.
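A minimal sketch of this vision, with invented shot names and uniform chances standing in for the stored probabilistic values: several spectators draw different movies from one and the same source.

```python
import random

# Hypothetical field of shots; each shot lists its possible successors.
successors = {
    "sunrise": ["street", "sea"],
    "street":  ["crowd", "sea"],
    "sea":     ["sunrise", "crowd"],
    "crowd":   ["street", "sunrise"],
}

def personal_movie(seed, length=5, start="sunrise"):
    """Each spectator's own chance source selects a path through the same data."""
    rng = random.Random(seed)
    shots = [start]
    for _ in range(length - 1):
        shots.append(rng.choice(successors[shots[-1]]))
    return shots

# The same source of values, but each spectator sees a different film.
for viewer in range(3):
    print(viewer, personal_movie(seed=viewer))
```

The stored field of values is identical for everyone; only the chance operation at each spectator's end differs, so each screening is a different film.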
A friend of mine, the physicist Otto Rössler, has gone even a step further: he published a paper called 'A new principle for identity generation -- three equal parts on the ring.' It is a difficult paper, but it shows something paradoxical, which derives from Gibbs and Feynman: that identity implies indistinguishability. This means that when you really define what identity is, you realise that it is something like what Leibniz called the identity of indiscernibles. It is a very strange theory: identity implies just the opposite -- indistinguishability -- that we cannot make a difference.
Thus, the most advanced physics is teaching us that after the culture of reproduction, of which cinema was a part, we are entering something new: the world of cloning. Again, when we speak about the genetic algorithm as a software program, we have also to take into account genetic biology. The basic idea of a clone is this: a clone is not something different from the original; a clone is entirely equal to it. Identical clones -- which is a paradox -- are identities without distinguishability. In the classical world of reproduction, we had the original, and then we reproduced something. In the classical world, we had an object, and the highest object was an object which existed just once. This was what we call the original. Art was a very conservative cultural product, a terrible product, because art was always trying to make one unique object. When you made a painting, its value was defined by its being an original painting, meaning a work that existed only once. And even when you made another painting which looked similar to the first one, it was not unique; it was a fake. We have built up a tremendously horrible metaphysics of fakes and simulations; in the next hundred years, we will create objects which are not originals and objects which are not just copies -- we are at the end of 'copy-culture,' and we will create objects which are clones. And a clone means each one is identical, so there is no original and no copy anymore. We have just clones. People will be afraid. What we are going to do is construct identities which are clones, and in art already -- as some good artists see -- people try to show us these possibilities.
The point is, when we talk about the effect our tools of communication have on us ourselves on an ideological level, we have learned from cinema that there is an image without an original, because we construct this image, and this means that we have learned to create objects without originality. The next step will be to compare the status of the originality of the object with the status of the identity of the subject. Here is the object, and it is original; here we have the subject, and it has an identity. Now we learn there are objects without originality, and we have to learn there are subjects without identity. Our next step will be constructing these machines to make us subjects without identity -- and then not to be afraid of it, but to enjoy it. Already in postmodernism, everybody knew the idea that we are on the way to constructing ourselves. We had a lot of signs: we had Duchamp, who introduced an artwork, the famous 'pissoir' -- but art history has overlooked one important thing: that Duchamp did not enter his artwork, the 'pissoir,' under his own name. He invented a pseudonym, Richard Mutt. He took an important step. He said: I am no longer Duchamp who shows you this wrong object; I am a wrong person myself. So I substitute the object with something strange, and I substitute the personality with something strange. It is a fake identity called Richard Mutt who shows you a fake object. Then we have Fernando Pessoa, who wrote numerous books under various names. Then we have Cindy Sherman, who always shows you photographs of herself in different personalities. In modernity, the problem was that people were afraid of multiple identity. They anxiously saw the possibility of what we call today 'terminal identity,' as named by William Burroughs, one of the great men of the century, who wrote a novel in 1964, Nova Express, which I quote: "the entire planet is being developed into terminal identity and complete surrender."
In Nova Express, he created the concept of terminal identity. There is a later book by Scott Bukatman called Terminal Identity, on the literature of 'virtual identity.' Marxist philosophers like Laclau call it the 'positional subject': the idea that a subject can run through a range of positions in life offered by society -- at one time you are the mother, at one time the beloved person, at one time the office woman, and so on. In our society, the subject can construct itself and then position itself, run through different positions. Naturally, these philosophers still believe that you can determine your positions. What we can learn from physics is that there are a lot of probability and chance operations in the construction of your identity. The concept of terminal identity, or virtual identity, is the same: it is a subject no longer bound to identity; it is a subject without identity. It is like an article by Chantal Mouffe called 'The politics of nomadic identity.'
Hence, what we learn from these effects on ourselves is that we are moving away from reproduction culture -- we are moving away from Walter Benjamin -- and we are moving into clone culture, where the ideas of originality and identity do not count anymore. Then we may finally understand one last sentence by Borges. He said: "Unfortunately the world is real; unfortunately I am Borges." He meant that the identity of a person is bound to reality, and when we come up with machineries which can create virtual spaces and realities, immaterial spaces, we also have to pay the price: we also have to construct virtual identities, virtual bodies -- one is linked to the other. It means the aim is to create virtual identities together with virtual images, and these together will constitute an intelligent image. This intelligent image system will be another step in the human liberation from the natural prison of space and time.