True Names - Vinge Vernor Steffen

However, these theories have not emerged from relaxed meditation about, or passive observation of, those complicated biological signals; what little we have learned has come from deliberate and intense exploitation of the accumulated discoveries of three centuries of our scientists' and mathematicians' study of analytical mechanics and a century of newer theories about servo-control engineering. It is generally true in science that just observing things carefully rarely leads to new "insights" and understandings. One must first have at least the glimmerings of the form of a new theory, or of a novel way to describe: one needs a "new idea". For the "causes" and the "purposes" of what we observe are not themselves things that can be observed; to represent them, we need some other mental source to invent new magic tokens.

But where do we get the new ideas we need? For any single individual, of course, most concepts come from the societies and cultures that one grows up in. As for the rest of our ideas, the ones we "get" all by ourselves, these, too, come from societies — but, now, the ones inside our individual minds. For, a human mind is not in any real sense a single entity, nor does a brain have a single, central way to work. Brains do not secrete thought the way livers secrete bile; a brain consists of a huge assembly of sub-machines which each do different kinds of jobs — each useful to some other parts. For example, we use distinct sections of the brain for hearing the sounds of words, as opposed to recognizing other kinds of natural sounds or musical pitches. There is even solid evidence that there is a special part of the brain which is specialized for seeing and recognizing faces, as opposed to visual perception of other, ordinary things. I suspect that there are, inside the cranium, perhaps as many as a hundred kinds of computers, each with its own somewhat different architecture; these have been accumulating over the past four hundred million years of our evolution. They are wired together into a great multi-resource network of specialists, which each knows how to call on certain other specialists to get things done which serve its purposes. And each of these sub-brains uses its own styles of programming and its own forms of representations; there is no standard, universal language-code.
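
The paragraph above pictures the mind as a network of specialist sub-machines, each with its own private representation, which get things done by calling on one another rather than through any shared language-code. The following is a minimal, purely illustrative sketch of that idea in Python; the class, agent names, and encoding are invented here and are not part of the essay's model.

```python
# A toy illustration (not the essay's actual model): a "society" of
# specialist agents, each keeping its own private representation, that
# gets work done by calling on other specialists rather than through
# any standard, universal language-code.

class Specialist:
    def __init__(self, name, helpers=None):
        self.name = name
        self.helpers = helpers or []  # other specialists this one can call on

    def internal_form(self, signal):
        # Each specialist re-encodes its input in its own private format;
        # no shared representation exists across the whole society.
        return (self.name, hash(signal) % 997)

    def process(self, signal):
        token = self.internal_form(signal)
        # Delegate to helper specialists and gather whatever they produce.
        helper_results = [helper.process(signal) for helper in self.helpers]
        return {"agent": self.name, "token": token, "helpers": helper_results}

# Wiring a few specialists together, loosely echoing the text's examples:
# hearing word-sounds, hearing other natural sounds, and recognizing faces.
word_hearing = Specialist("word-hearing")
sound_hearing = Specialist("sound-hearing")
face_seeing = Specialist("face-seeing")
perception = Specialist("perception",
                        helpers=[word_hearing, sound_hearing, face_seeing])

print(perception.process("hello"))
```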

Accordingly, if one part of that Society of Mind were to inquire about another part, this probably would not work, because they have such different languages and architectures. How could they understand one another, with so little in common? Communication is difficult enough between two different human tongues. But the signals used by the different portions of the human mind are unlikely to be even remotely as similar as two human dialects with sometimes-corresponding roots. More likely, they are simply too different to communicate at all — except through symbols which initiate their use.

Now, one might ask, "Then, how do people doing different jobs communicate, when they have different backgrounds, thoughts, and purposes?" The answer is that this problem is easier, because a person knows so much more than do the smaller fragments of that person's mind. And, besides, we all are raised in similar ways, and this provides a solid base of common knowledge. Even so, we overestimate how well we actually communicate. The many jobs that people do may seem different on the surface, but they are all very much the same, to the extent that they all have a common base in what we like to call "common sense" — that is, the knowledge shared by all of us. This means that we do not really need to tell each other as much as we suppose. Often, when we "explain" something, we scarcely explain anything new at all; instead, we merely show some examples of what we mean, and some non-examples; these indicate to the listener how to link up various structures already known. In short, we often just tell "which" instead of "how".

Consider how hard we find it to explain so many seemingly simple things. We can't say how to balance on a bicycle, or distinguish a picture from a real thing, or even how to fetch a fact from memory. Again, one might complain, "It isn't fair to expect us to be able to put in words such things as seeing or balancing or remembering. Those are things we learned before we even learned to speak!" But, though that criticism is fair in some respects, it also illustrates how hard communication must be for all the subparts of the mind which never learned to talk at all — and these are most of what we are. The idea of "meaning" itself is really a matter of size and scale: it only makes sense to ask what something means in a system which is large enough to have many meanings. In very small systems, the idea of something having a meaning becomes as vacuous as saying that a brick is a very small house.

Now it is easy enough to say that the mind is a society, but that idea by itself is useless unless we can say more about how it is organized. If all those specialized parts were equally competitive, there would be only anarchy, and the more we learned, the less we'd be able to do. So there must be some kind of administration, perhaps organized roughly in hierarchies, like the divisions and subdivisions of an industry or of a human political society. What would those levels do? In all the large societies we know which work efficiently, the lower levels exercise the more specialized working skills, while the higher levels are concerned with longer-range plans and goals. And this is another fundamental reason why it is so hard to translate between our conscious and unconscious thoughts! The kinds of terms and symbols we use on the conscious level are primarily for expressing our goals and plans for using what we believe we can do — while the workings of those lower level resources are represented in unknown languages of process and mechanism. So when our conscious probes try to descend into the myriads of smaller and smaller sub-machines which make the mind, they encounter alien representations, used for increasingly specialized purposes.
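
As a rough illustration of the hierarchy sketched above, here is a hypothetical toy example (the skill names and functions are invented for illustration, not drawn from the essay): the upper level speaks only in terms of goals and which skills to invoke, while the lower-level skills hide how the work is actually done.

```python
# A toy sketch, not a claim about how brains are implemented: the upper
# level expresses goals and plans ("which" skill to use), while the lower
# levels carry out specialized skills whose inner workings the upper
# level never sees ("how" the job is done).

def locate(thing):
    # Stand-in for a specialized perceptual routine.
    return f"visual routine located the {thing}"

def grasp(thing):
    # Stand-in for a specialized motor routine.
    return f"motor routine grasped the {thing}"

SKILLS = {"locate": locate, "grasp": grasp}

def execute_plan(goal, steps):
    # The planner only knows which skill each step names, not how it works.
    print(f"goal: {goal}")
    for skill_name, arg in steps:
        print("  ", SKILLS[skill_name](arg))

execute_plan("pick up the cup", [("locate", "cup"), ("grasp", "cup")])
```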

The trouble is, these tiny inner "languages" soon become incomprehensible, for a reason which is simple and inescapable. This is not the same as the familiar difficulty of translating between two different human languages; we understand the nature of that problem: it is that human languages are so huge and rich that it is hard to narrow meanings down: we call that "ambiguity". But, when we try to understand the tiny languages at the lowest levels of the mind, we have the opposite problem — because the smaller two languages are, the harder it will be to translate between them, not because there are too many meanings but too few. The fewer things two systems do, the less likely that something one of them can do will correspond to anything at all the other one can do. And then, no translation is possible. Why is this worse than when there is much ambiguity? Because, although that problem seems very hard, still, even when a problem seems hopelessly complicated, there always can be hope. But, when a problem is hopelessly simple, there can't be any hope at all!

Now, finally, let's return to the question of how much a simulated life inside a world inside a machine could be like our ordinary, real life, "out here". My answer, as you know by now, is that it could be very much the same — since we, ourselves, as we've seen, already exist as processes imprisoned in machines inside machines. Our mental worlds are already filled with wondrous, magical symbol-signs, which add to everything we "see" a meaning and significance.

All educated people already know how different our mental world is from the "real world" our scientists know. For, consider the table in your dining room; your conscious mind sees it as having a familiar function, form, and purpose: a table is "a thing to put things on". However, our science tells us that this is only in the mind; all that's "really there" is a society of countless molecules; the table seems to hold its shape only because some of those molecules are constrained to vibrate near one another, because of certain properties of the force-fields which keep them from pursuing independent paths. Similarly, when you hear a spoken word, your mind attributes sense and meaning to that sound, whereas, in physics, the word is merely a fluctuating pressure on your ear, caused by the collisions of myriads of molecules of air — that is, of particles whose distances, this time, are less constrained.
