# Cybernetics today #

## MUTE panel ##

30-minute talk.

- Semantic Web: ‘ontology’
- ‘semantics’ of a programming language
- interaction/interactive: input prompt vs. background process
- system: complex organization
- learning / self-learning system: not reprogramming, but statistical processing of data
- artificial intelligence

Experimental, hypothetical, metaphorical use of philosophical / humanities terms. These get re-imported into the humanities and philosophy, mostly via new media studies, in their literal meaning (Virtual Reality philosophy, interactivity theory etc.), with misunderstandings.

Joseph Weizenbaum (concepts like intelligence / cognition get used in grossly simplified ways):

“I want them [teachers of computer science] to have heard me affirm that the computer is a powerful new metaphor for helping us understand many aspects of the world, but that it enslaves the mind that has no other metaphors and few other resources to call on. The world is many things, and no single framework is large enough to contain them all, neither that of man's science nor of his poetry, neither that of calculating reason nor that of pure intuition. And just as the love of music does not suffice to enable one to play the violin - one must also master the craft of the instrument and the music itself - so it is not enough to love humanity in order to help it survive. The teacher's calling to his craft is therefore an honorable one. But he must do more than that: he must teach more than one metaphor, and he must teach more by the example of his conduct than by what he writes on the blackboard. He must teach the limitations of his tools as well as their power.”

Conclusion: faith in systems, very similar to Joseph Vogl’s conclusion in his book “Das Gespenst des Kapitals” (2010) that the market is a quasi-religious belief system trading mostly in fictions.

Theodore Roszak, 1986, The Cult of Information: conflation of data with knowledge; the computer can neither develop nor refute ideas.

Norbert Wiener, The Human Use of Human Beings, p. 162:

“Let us remember that the automatic machine, whatever we think of any feelings it may have or may not have, is the precise economic equivalent of slave labor. Any labor which competes with slave labor must accept the economic conditions of slave labor. It is perfectly clear that this will produce an unemployment situation, in comparison with which the present recession and even the depression of the thirties will seem a pleasant joke. This depression will ruin many industries - possibly even the industries which have taken advantage of the new potentialities. However, there is nothing in the industrial tradition which forbids an industrialist to make a sure and quick profit, and to get out before the crash touches him personally. Thus the new industrial revolution is a two-edged sword. It may be used for the benefit of humanity, but only if humanity survives long enough to enter a period in which such a benefit is possible. It may also be used to destroy humanity, and if it is not used intelligently it can go very far in that direction. There are, however, hopeful signs on the horizon. Since the publication of the first edition of this book, I have participated in two big meetings with representatives of business management, and I have been delighted to see that awareness on the part of a great many of those present of the social dangers of our new technology and the social obligations of those responsible for management to see that the new modalities are used for the benefit of man, for increasing his leisure and enriching his spiritual life, rather than merely for profits and the worship of the machine as a new brazen calf.”

Example: Eliza, a ‘semantic’ system.
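Weizenbaum’s own Eliza shows how thin this machine ‘semantics’ is. Below is a minimal sketch in Eliza’s spirit, not the original program (which also reflected pronouns and ranked keywords); the rules are invented examples. Everything the system ‘understands’ is keyword matching plus template substitution:

```python
import re

# Minimal Eliza-style responder. The entire 'semantics' is keyword
# matching plus template substitution; the rules are invented examples.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Your {0} seems important to you."),
]
FALLBACK = "Please go on."  # stock reply that creates the illusion of listening

def reply(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(reply("I am unhappy at work"))  # -> Why do you say you are unhappy at work?
print(reply("It rains."))             # -> Please go on.
```

The fallback line does much of the psychological work: a content-free reply reads, to the human, as attentive listening.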
I would like to be the skeptical agent here and talk about the language and terminology surrounding generative/computational systems. An interesting quirk, at least for people with an arts/humanities background, is that many core humanities concepts have been adopted in systems engineering and computer science in rather figurative/metaphorical, or at least pragmatically simplified, ways. (Usually it is the other way around: concepts from science and technology get metaphorically adopted by the humanities.) This concerns such concepts as ‘autonomy’, ‘learning’, ‘semantics’ and ‘ontology’. For example, what is called an ontology in computer science would simply be called a taxonomy or classification scheme in the humanities (see the first sketch at the end of these notes). A lot of confusion is created when engineers talk about ‘autonomous systems’ that are ‘self-learning’ and based on ‘semantic ontologies’, because the humanities implications are much larger - and suggest, to a humanities audience, much greater capability of the machines - than intended.

I think that reflecting on these gaps is important also for understanding agency and the real-life effects of false beliefs about the actual capabilities of programmable systems. The subprime crisis is a good example, since an important part of it was caused by risk assessment formulas (based, by the way, on Gaussian probability distributions) that were implemented in bank computers. Management believed it had credit default risk under control because it had ‘science’ to assess it (see the second sketch at the end of these notes).

Ian Bogost, Unit Operations.
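First sketch, for the ontology/taxonomy point above: what computer science calls an ‘ontology’ is, at bottom, a set of class/subclass assertions over identifiers, i.e. a classification scheme. The vocabulary here is invented for illustration:

```python
# A toy 'ontology' in the computer-science sense: subclass assertions
# over identifiers. Structurally it is a taxonomy; the vocabulary is
# invented for illustration.
SUBCLASS_OF = {
    "Dog": "Mammal",
    "Cat": "Mammal",
    "Mammal": "Animal",
}

def is_a(cls: str, ancestor: str) -> bool:
    """The 'semantic reasoning' is just walking the subclass chain."""
    node = cls
    while node is not None:
        if node == ancestor:
            return True
        node = SUBCLASS_OF.get(node)
    return False

print(is_a("Dog", "Animal"))  # True
print(is_a("Dog", "Plant"))   # False
```

Real-world formats (RDF/OWL) add properties and constraints, but the philosophical sense of ‘ontology’ - an account of what exists - is nowhere in the machinery.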
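Second sketch, for the subprime point: a toy Gaussian risk calculation (invented numbers, Python standard library only; the actual bank models were more elaborate, reportedly Gaussian copulas over default correlations, but shared the distributional assumption). The mathematics is impeccable; the belief that markets obey it is the ‘science’ management trusted:

```python
from statistics import NormalDist

# Invented toy numbers: daily portfolio returns modelled as Normal(0, 2%).
model = NormalDist(mu=0.0, sigma=0.02)

# 99% value-at-risk under the Gaussian assumption: the loss threshold
# that the model claims is exceeded on only 1% of days.
var_99 = -model.inv_cdf(0.01)
print(f"model 99% VaR: {var_99:.2%}")  # about 4.65%

# The same model's probability of a -20% day (a ten-sigma move):
p_crash = model.cdf(-0.20)
print(f"model probability of a -20% day: {p_crash:.1e}")  # about 7.6e-24
# i.e. 'never' - yet fat-tailed markets do produce such days. The formula
# computes exactly what it is told to; the faith lies in the assumption.
```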