Friday, 25 September 2020

Pennies and elephants

 Contents

  • Introduction
  • A digression
  • Elephants
  • Some theory
  • Other devices
  • What does the subject bring to the party?
  • Other matters
  • Conclusions
  • References

Introduction

Something of a medley, but started off by thinking about seeing a near small object largely occluding a far big object, perhaps an old penny occluding an elephant. How does LWS-R deal with this? What gets projected onto LWS-R’s patch of cortex, and from there into consciousness?

A seemingly unlikely spin-off from perusal of reference 8, about the unlikely possibility of being conscious while not being conscious of anything, to which we shall return in due course. For LWS-R itself, see reference 1.

Note that, for present purposes, an important property of LWS-R is that it is self contained; it includes everything which gets into consciousness, everything having been put there by the compiler, and there is no need or possibility of referring out within the context of a frame. Everything has to be expressed by layer objects in the layers of LWS-R, supplemented by the links provided by column objects – for which see reference 6. The compiler, which came before, did free-range: it could, and probably did, draw on material from all over the brain.

A digression

Before getting onto elephants, we pause to think about trees.

Suppose we are in a state of relaxed repose, perhaps after some physical exercise. Just comfortably sitting in a garden chair, outside, gazing at the scene in front of one. We suppose a more or less natural scene, perhaps with grass, bushes, trees and some sky. A predominantly green scene; a scene in which things are growing, rather than having been made.

The eyes flicker about as our attention flickers around the scene. Lighting first on this spot, then that, without stopping anywhere very long.

Sometimes, perhaps for some considerable time, there are no conscious thoughts at all, although one is perfectly awake, ready for action if need be – and would be able to answer questions about that scene. Some without thinking, some needing one to take another look. To decide, perhaps, what sort of tree one was looking at.

No conscious thoughts in the sense of inner thoughts, typically in the form of words which do not make it all the way to the vocal apparatus of throat and mouth. For which see reference 14.

Furthermore, it might well be that the brain knew, in some sense, that one was looking at a beech tree – but did not see fit to intrude on consciousness with this fact. One was not conscious of looking at a beech tree.

One might be looking at the pattern of the branches holding up the canopy of the tree. One might be tracking the branches up and down. Or admiring the pattern that they made. Or one might be watching the rhythmic movements of the branches in the wind. And again, there would be nothing much in consciousness except the tracking, admiring or watching itself. The visual experience would be sufficient unto itself, it would not need supplementing with botanical or any other knowledge.

Not even the name of the colour of the leaves, a strong and reasonably uniform green. The word ‘green’ is not drifting through the brain as inner thought from time to time. One might even pause for a second or two if asked the main colour of the scene. Information which one can get fast enough, but which is not in consciousness at the time of the question. It might be somewhere in the LWS-R data structure as a matter of computational convenience, but it would not have been activated, projected into the subjective experience.

We associate here to Chater’s argument at reference 10 that not much information is held in consciousness at any one time. Much less than one might at first think. Also to the tricks one can play with people by changing big things in full view, but which they are not attending to, without their noticing the change. And from there to the rather noisy video at reference 11.

Figure 1

Rather different, we might be in the room above, provided by a shop selling interior design. We try to sit quietly, our eyes flicking about the room. Here, many of the things the eyes light on have names, usually in the form of common nouns, perhaps short phrases or clauses built around common nouns, which was not the case with the tree. But generally speaking, the names do not come to mind.

The brain may well be working away in the background, working out what things are and whether this supporting information needs to be brought to the subject’s attention. Perhaps a thought such as ‘don’t much like blue glass’ flits through the mind, being attached, at least temporarily, to the table lamp left on the visual layer.

But for most of the time, the brain is content with the unadorned visual layer in LWS-R.

Elephants

Figure 2

We now suppose that we are looking at an elephant, some distance away, partially occluded by an old penny (about an inch in diameter), somehow suspended in front of the elephant but a good deal nearer to our face, to our eyes, than the elephant. The small penny occludes a good part of the large elephant. So what does LWS-R make of this? A penny, incidentally, brought to us here by the recently discovered ‘remove background’ tool lurking in the depths of Microsoft’s Powerpoint. With the white spike at the right being the end of the left tusk.

We suppose that the penny, the elephant and the background are on the same visual layer of LWS-R.

We suppose that we are looking straight at the penny and that the central part of the image above can be mapped onto our plane patch of cortex without too much distortion, without the sort of distortion and trickery involved in mapping the whole hemisphere of the visual field onto that plane patch of cortex. Think of all the different projections used in atlases of the world, for which, for example, see reference 2.

Our interest is in the subjective experience of the relative size of the penny and the elephant. Does this experience go beyond experiencing the visual, the pixels as it were? Which might be quantified by the amount of electrical activity generated on our patch by the corresponding layer objects on the visual layer. But is there more? It may well be that most of the time we are not consciously aware of size at all. But sometimes we are aware of size, perhaps because the question of relative size has popped into mind, either by a question from the outside or otherwise. An awareness which is perhaps short-lived, not extending much beyond the short period of time needed to vocalise or sub-vocalise the answer. 

So how does LWS-R express the answer to the question about size when it is asked and answered? We are not so interested in how the brain at large gets to that answer, right or wrong. Or with how the compiler decides whether or not to get the answer ready, in the wings as it were, just in case it is needed in consciousness. Rather, how is this information expressed when it is there?

The penny and what is left of the elephant both occupy roughly the same amount of space on our patch, that is to say the image of the penny on the patch is about the same size as that of what is left of the elephant. We suppose that, in the absence of too much of the distortion noted above, if one could look at the patch with a suitable microscope, one would recognise both penny and elephant; quite possibly a bit distorted, but roughly what you see above. This follows from the hypothesised topical organisation of our patch. Continuity also requires these two images to be adjacent, to abut each other. 

Note that the penny is clearly in front of the elephant, although we have yet to define what has been done to make this so. In LWS-N, the scheme was that the relative position of two objects was marked by the way that their shape nets abutted. Here the scheme might be that order is marked by linkage through column objects and a composite object. But that is another matter: size is the present concern, not position.

So, as far as this visual layer goes, the penny and the elephant are perceived as being of the same size, although one might be louder, as it were, than the other. In any event, we probably know that they are not the same size, know in the sense that we will come up with the right answer if prompted. We probably know that the penny is a lot nearer than the elephant and that live elephants are a lot bigger than pennies. On the hypothesis that we are conscious of these matters, at least some of the time, where does this extra information live?

Figure 3

Remember that we have defined our visual layer objects and their regions. In this case, a penny, a suspending thread, an elephant, the foreground and the background, this last possibly split into three: bush, mountain and sky. These visual layer objects might be thought of as an unlabelled diagram projected into consciousness. The raw pixels have been processed into layer objects and regions, have had some structure superimposed on them, but the labels for those layer objects and regions are somewhere else. We associate to the common practice, snapped above from reference 9, of having quite elaborate descriptions attached to diagrams. Descriptions which are not part of the picture but which are linked to the picture.

The LWS-R proposition is that object relations, in this case telling us that this object is larger than that object, are stored on some other layer, with the arguments of those relations linked to the corresponding visual layer objects by column objects.

Note that in real life, some object relations can be deduced from small movements of the head. When the penny is in front of the elephant small movements of the head will result in the more or less fixed image of the penny moving slightly over the changing image of what is left of the elephant, rather than vice-versa. But this is how we and our brain get the information: the job of LWS-R is to express that information with the rather different tools at its disposal.

Note also that in real life, range information can be deduced from binocular vision. Information that the brain needs to extract if LWS-R is to use it in its monocular world.

Figure 4

For a refresher on the sort of plane geometry suggested above see reference 4. While in thinking about size, the brain might start with solid angles, for which see reference 3. But other things being equal, something with a big solid angle is big: in which connection, we can say that the solid angle subtended at the eyes by the penny is about the same as that subtended by the elephant. However, there is the complication that for a given solid angle, the image on the retina of an object on the periphery is bigger than that of an image at the centre – with the figure above – a vertical section through an eyeball, lens centre right – being suggestive rather than robust. 
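To put some rough numbers on the claim that the penny and the elephant subtend about the same solid angle, here is a small sketch using the small-angle approximation (solid angle of a face-on patch is roughly its area over distance squared). The sizes and distances are our own illustrative assumptions, not anything from the figure:

```python
import math

def solid_angle(area_m2: float, distance_m: float) -> float:
    """Small-angle approximation: the solid angle in steradians of a
    flat patch of the given area, seen face-on from the given distance."""
    return area_m2 / distance_m ** 2

# Assumed sizes, for illustration only.
penny_radius = 0.0254 / 2      # an old penny is about an inch across
penny_distance = 0.3           # roughly arm's length from the eyes
elephant_area = 3.0 * 5.0      # a 3m by 5m silhouette, very roughly

omega_penny = solid_angle(math.pi * penny_radius ** 2, penny_distance)

# Distance at which the elephant subtends the same solid angle as the penny.
equal_distance = math.sqrt(elephant_area / omega_penny)
print(f"penny subtends {omega_penny:.4f} sr; elephant matches it at {equal_distance:.0f} m")
```

On these assumptions, the elephant needs to be some fifty metres away for the penny at arm's length to occlude a good part of it – which seems about right for the figure.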

In any event, solid angle alone does not tell us everything that we want to know about size, even if the brain is clever enough to work out solid angle from what it has got on the retina. Noting here that while the brain probably can do the necessary sums, it is also true that it prefers not to have to, preferring rather to look straight at things which are important. Think of the pointing of a hunting dog. Think of squaring up to things.

All things considered, it seems unlikely that the subjective knowledge about relative size is stored on our visual layer: rather we have one of the object relations mentioned above, stored on some other layer. Which we illustrate in the figure following.

Figure 5

In this example, adapted from reference 6, LWS-R wants to express the fact that Object A is actually smaller than Object C, despite appearances. Which it does by introducing the link object containing something equivalent to ‘is larger than’. To which link we need to add direction, which in this case, by convention, runs from object C to object A, perhaps in the form of waves of activation traversing object B in that direction.
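The proposed arrangement can be sketched as a small data structure: two visual layer objects, plus a directed link object on another layer carrying the relation. The class and field names here are ours, chosen for illustration; they are not taken from the LWS-R papers:

```python
from dataclasses import dataclass

@dataclass
class LayerObject:
    layer: str
    label: str

@dataclass
class LinkObject:
    """Object B: lives on its own layer and carries the relation.
    By convention, direction runs from source to target."""
    relation: str
    source: "LayerObject"
    target: "LayerObject"

penny = LayerObject(layer="visual", label="object A (penny)")
elephant = LayerObject(layer="visual", label="object C (elephant)")

# The relation runs from object C to object A: the elephant is larger
# than the penny, despite the two images being much the same size.
link = LinkObject(relation="is larger than", source=elephant, target=penny)
```

In LWS-R proper the link would of course be realised by column objects and waves of activation rather than pointers, but the shape of the information is the same.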

Note that layer A is different from layer B and layer C is different from layer B, but layer A may well be the same as layer C.

So while at reference 6, we were concerned that the layer objects introduced to carry the column objects needed to create composite objects might themselves be part of the subjective experience, in this example, we need object B to be part of the subjective experience. 

Speculating, it may be that this object B is some derivative of the motor actions, the many commands to face, throat, mouth and tongue, needed to say ‘is larger than’.

Some theory

First, we define some terms.

Subject. The person having a subjective experience, the person (or possibly the animal) who is conscious. Rather than the ego which we came across in some old papers recently.

Experience. Shorthand for the subjective experience, the subjective conscious experience

Data. The totality of material in the frames of LWS-R under consideration

Selection. The material in the frame of LWS-R under consideration which is making a contribution to the subjective experience. Often a proper subset of the data

Thing. Something out in the real world. Often something simple like a rabbit or a saucepan. Sometimes something more complicated or diffuse like a cloud, a small flock of birds feeding on the lawn or a small crowd of people playing football

Object. Whatever it is that the subject is attending to. Most of the time some thing out there in the real world. Or some things in the case that the subject is attending to the relations between them. Sometimes to some thing which has been internalised. Sometimes to something quite abstract

Image. The collection of material in the frame of LWS-R which goes into the experience of the object. This will usually involve more than one layer and will often include part of the visual layer. Not very well defined at the margins. The image is part of, is a subset of, the selection, usually a proper subset.

With our to-do list including mapping these terms onto those used by Langer in her discussion of the logic of signs and symbols in chapter 3 of reference 13. Where, roughly speaking, a sign announces something – in the way that the gong used to announce dinner – signs being something that many animals can cope with – while a symbol merely evokes something, brings it to mind, rather than announcing its imminent arrival, in the flesh, as it were.

We do not attempt tight definitions but we hope, nevertheless, that these special words will be helpful. Note the use of ‘frames’ in the plural and see Figure 13 in the ‘other matters’ section below for a proposed loosening of the rule that the frame is the indivisible unit of consciousness.

We note also that it is quite hard, although not impossible, to attend to two things which are, visually, a long way apart. Two things which are next to each other, or which at least appear to be next to each other are much easier.

Second, we define a process.

Figure 6

Suppose some thing out in the real world catches our subject’s eye. Becomes the object of the moment, a moment which might translate into one or more frames of consciousness. The figure above suggests various stages that the subject might go through with this object: stages which are optional, which overlap, which are of varying duration and which might or might not be conscious, might or might not be part of the experience. But possibly a major part, at least for a while. 

The reflex option is there for emergencies and is usually taken when the object is damaging or threatening. Clear and present danger, to use a bit of jargon from the US. The fastest possible action is needed, even at risk of making a mistake. Some people can be trained to block this option, at least a lot of the time: reflexes might be fast but they are too often wrong.

The next option is identification. A preliminary identification of the object, speed still being important here, in case of danger. Nevertheless, a preliminary identification which the subject likes to verify, it being all too easy, for one reason or another, to make mistakes. So having decided, for example, that the object is an elephant on the basis of its large ears, for verification the subject looks around for some tusks and the distinctive tail. So having heard the dinner gong, or what sounded like the dinner gong, one might look at one’s watch to make sure that it was about the right time for dinner.

Then the subject might just attend to the object and not be consciously thinking about anything. There may be nothing in the experience apart from that of attending. Alternatively, the subject may be aware, for example, that verification is going on, but not be aware of the details of that verification. Perhaps until something is uncovered which falsifies the identification. Then that something will become part of the experience.

Exploration is what happens when the subject has decided what the object is. Is reasonably sure that the object is an elephant and is giving it the once over to see how it compares to elephants in general. Is this one an outlier or is it a pretty regular elephant? Does it have any unusual features? Is it a threat or an opportunity?

The subject may decide to do something in response to or about the elephant. Perhaps move towards it to take a better look. Alternatively, the subject may decide that the elephant is of no particular interest and attention moves on. A decision which might well be taken without the elephant ever having made it to consciousness.

With our to-do list including mapping these stages onto the takes and frames of LWS-R. For which again, see Figure 13 below.

Other devices

Making use of binocular vision to see around small objects near the eyes

Figure 7

The figure above – a view from above of the two eyes of a human looking at a boat – suggests that one or other eye can see more or less all of an object apparently occluded by another, much nearer object.

So to this extent the brain can choose what to project into consciousness. Between them, the eyes are seeing both the occluding object and the occluded object. Or in the figure above, most of it.
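The geometry here can be made precise with a small sketch. Each eye casts a ‘shadow’ of the occluder onto the background; if the two shadows do not overlap, then every background point is visible to at least one eye. The numbers, including the 65mm interpupillary distance, are our own illustrative assumptions:

```python
def fully_seen_around(occluder_width: float, occluder_dist: float,
                      background_dist: float,
                      interpupillary: float = 0.065) -> bool:
    """True if, between them, two eyes see every point of the background
    plane, despite a face-on occluder centred between eyes and background.
    All distances in metres, measured from the eyes."""
    # Width of the occluder's shadow, projected onto the background plane.
    shadow_width = occluder_width * background_dist / occluder_dist
    # Sideways offset between the two eyes' shadows, by simple similar
    # triangles: the shadows separate as the background recedes.
    shadow_separation = (interpupillary
                         * (background_dist - occluder_dist) / occluder_dist)
    return shadow_separation >= shadow_width

# A penny (about 25mm) at arm's length, elephant at 50m: between them,
# the two eyes see the whole elephant.
print(fully_seen_around(0.0254, 0.3, 50.0))
```

As the background recedes, the condition tends to the simple rule that the occluder must be narrower than the distance between the eyes – which is why a penny near the face hides nothing from both eyes at once, while a dinner plate does.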

Making use of brain power to blot out objects

Figure 8

The thought here is that the brain decides to delete the bird at some point in the vision processing pathway, perhaps thinking it is noise rather than signal, just patching over the hole with something plausible. In this case, the sky has been patched with a neighbouring bit of sky in Powerpoint. No doubt someone with Photoshop could do a better job, a proper invisible mend.

Note, incidentally, how the various extra layers of image processing have changed the appearance of the clouds bottom left. Another reminder that images are not the same as facts on the ground. Indeed, that there are no visual facts on the ground: colour is an artefact of the brain, or in this case, the combination of camera and computer.

Alternatively, the brain seems to be quite good at papering over things it has not got around to noticing. Perhaps the image which reaches consciousness is the result of sampling, and sometimes the sampling is not very representative and features get missed. So one can be looking at a clear blue sky and all of a sudden a bird or an aeroplane pops into focus, a bird or an aeroplane which was there, unseen, all along.

We associate to the management training film clips in which people taking their basketball seriously fail to notice the gorilla wandering about the pitch.

Making use of brain power to vary the sizes of elements of images

Here the brain varies the subjective size of foreground objects according to their importance. A continuous deformation of the original image, a deformation which preserves sight lines but which does change the relative sizes of things. Deformations which are beyond our modest skills with Powerpoint. 

Attention

Rather different, we might be attending to the penny rather than the elephant, or vice-versa. 

This information could be coded on the visual layer by means of the amplitude of the signals, of the travelling waves involved, perhaps by saying that what we mean by attention is the sum of amplitude over the area of the layer object or region in question. And it may well be that most of the time, one particular layer object or region is getting most of the attention, with the rest getting little if any. 
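The proposed sum of amplitude over a region can be sketched on a toy layer: a small grid of wave amplitudes, with a region mask for each layer object. All names and numbers here are illustrative only:

```python
# A toy visual layer: wave amplitudes on a small grid.
amplitudes = [
    [0.1, 0.1, 0.9, 0.8],
    [0.1, 0.2, 0.9, 0.9],
    [0.1, 0.1, 0.2, 0.1],
]

# Region masks: the grid cells occupied by each layer object.
penny_region = {(0, 2), (0, 3), (1, 2), (1, 3)}
elephant_region = {(2, 0), (2, 1), (2, 2), (2, 3)}

def attention(region) -> float:
    """Attention as the sum of amplitude over the region in question."""
    return sum(amplitudes[r][c] for r, c in region)

# Here the penny is getting most of the attention: high amplitudes over
# its region, low over the elephant's.
assert attention(penny_region) > attention(elephant_region)
```

On this scheme, which layer object has the attention of the moment falls out of the amplitudes on the visual layer itself, without any extra machinery.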

One might also argue that attention is something going on elsewhere in the brain and that this sum of amplitude, a product of the compiler, is a symptom of that attention, rather than actually amounting to attention in itself.

What does the subject bring to the party?

By which we mean, what sort of value does the subject, the person having the subjective experience, add to the inbound visual information from what we are calling the object? In which connection we note the common saying about works of art, certainly old masters and the plays of Shakespeare, that the more one puts in, the more one gets out. The naive consumer does not get to consume very much at all.

Part of this is the subject’s knowledge of this object and of objects like it. But another part is the state of mind of the subject. Another is the relationship between the subject and the object.

Then we might divide the problem in time: where are we in the sense of the stages of Figure 6 above. Are we at assessment or action?

So suppose we present Figure 2 above to the subject – as an image, not for real – and ask him about the relative sizes of the foreground object, unnamed, and the elephant. What does the subject need to know in order to decide what the object was and how big it was?

First, he needs to know enough about coins to work out that this is one. Second, he needs to know that coins are generally quite small. Third, he needs to work out that a large cardboard cut-out of a coin is unlikely. Although this would get more likely if there were any signs of filming or staging equipment or personnel in the background.

Suppose we present a picture of a bird to the subject and give him a multiple choice box to check? If the choice was coot or cuckoo, this might not be too difficult, but what about whinchat or stonechat? It seems likely that a subject who was an expert on birds would see more, would have a richer experience, than a subject who was not. The expert would know to look for the tricky little patch of red underneath the tail. Or whatever.

But if the subject was from E. F. Benson’s Mapp & Lucia and had recently participated in a disastrous full-dress tableau of ‘Rule Britannia’, the subject might have a very strong emotional response to the image on the coin, to the exclusion of the elephant and of any sober consideration of matters of size.

There will be interaction between perception and the percept. What we see is strongly influenced by what we think it is and we humans have the ability to project all sorts of things onto all sorts of other things, particularly things like clouds and the more or less random dots of paint on the floors of trains on the London underground. In which latter case, it is very easy, for example, to make two round dots of the same colour, reasonably close together, into the gazing eyes of an animal or person. A very important matter in the jungle, both for us and for many of our vertebrate relatives, where every bush might be hiding a predator and their eyes might be all that is visible.

There will be interaction between the state of the subject, the subject’s emotions, desires and intentions, and the perception. Some people, for example, are all too apt to see what they want to see. While other people are all too apt to see what they don’t want to see. A tweeter might oscillate rather violently between being sure he is seeing a whinchat (let us say, very rare) and being sure that he is seeing a stonechat (let us say, very common).

Then what the subject sees will be influenced by the relation between the subject and object.

If the subject is on a bicycle and the object is a car moving in his direction, the subject ought to be interested in the likely trajectory of the car, relative to himself. A prediction which will be largely based on current speed and direction. He may see little else while he is making this prediction and deciding what, if anything, he needs to do. With the ingredients of this prediction being mostly unconscious, as is the business of predicting itself.

Suppose that the subject is the computer in charge of my car and the object is another car. As with the cyclist, all the computer needs to know is speed, direction and changes in same. Maybe something about the size and shape of the car. In the knowledge that it is getting and can process updates very fast, a lot faster than a human. All of which it can do on the basis of a quite impoverished visual experience. It doesn’t need to know that the car has the very latest paint job.

Suppose that the subject is a hunter and the object is a deer, presently stationary, but not quite near enough. The hunter has already decided that this deer is a worthwhile target. In this case the subject is mainly interested in getting closer to the deer without disturbing it. His thoughts will be on the direction of the wind and on the crackling – or not, preferably not – of the litter of twigs and leaves underfoot.

Suppose that the subject is an animal lover and the object is a deer. An animal lover who is interested in the detail of the appearance of the deer and in what it is doing. Perhaps the texture of its fur and the pretty pattern of its spots. Matters of only peripheral interest to the hunter.

Figure 9

Lastly, suppose that the subject is a farmer and the object is a hay barn, something like that above left, a reproduction of the woodcut at reference 7. The farmer will know all about the construction of barns and all about hay. He will be interested in all the details. Perhaps the condition of the poles holding up the roofs. Is the hay adequately ventilated? Whereas a holiday maker who knew nothing of barns or hay might be taking a more aesthetic interest. Was the ambience properly countrified and rustic? What about all the litter? Were there any visitor facilities in the vicinity? Whereas a baby might see nothing much at all. Just a few vague shapes which did not mean a great deal. But perhaps something to be explored, to be poked or picked up.

The subjective experience of these three subjects is not going to be the same, even if we try to confine ourselves to visual aspects of things. Mainly because they each contribute something different – or in the case of the baby, nothing much at all – to the party.

Note that the baby might be quicker on the uptake about spotting something with eyes, this being more basic to its maintenance and survival. This bit boots up quite early in the process of growing up.

Other matters

Flatness

At reference 6 we suggested that our patch of cortex ought to be more or less flat. In which one might include being more or less spherical, or at least hemispherical. With mapping the visual and audio world onto a hemisphere being a more straightforward business than mapping them onto the plane.

But we do not include, for example, the tight folds of the cerebellum, rather tighter than those of the cerebrum. The point being that on such a tightly folded surface the fields generated by our travelling waves on the various parts of our patch would destructively interfere with each other.

Triangulation

Figure 10

Figure 11

We have had some reports of the retinal image breaking down a bit, judged retinal because the damage moves around with the eyes. No information about whether it was one eye or both. With strips of brightly coloured triangles appearing around the centre of the field of vision. Usually wearing off after a short while, minutes rather than hours. We have attempted to suggest something of the sort in the first of the two figures above, although we have quite failed to capture the brightness, whiteness and intensity of the triangles described.

Evidence that triangulation is used, in the way of computer vision programs, somewhere along the human visual pathway? The sort of thing we mean being suggested by the second of the two figures above.

Diagrams

Figure 12

These two sketches being taken from page 69 of Langer’s book at reference 13. Being a neat illustration of how two very simple diagrams of animals can be nearly identical while successfully symbolising two quite different animals, different enough that we are unlikely to confuse them in real life. A trick accomplished by the rabbit having long ears and short tail and the cat having short ears and long tail. Two traits, two properties which are enough to both identify and differentiate these two animals. Noting in passing that diagrams of this sort might have difficulty distinguishing rabbits and hares. 

Quite young children will correctly identify these sketches.

The shape nets of LWS-N would have followed the line drawings directly, usually breaking down the interior into parts, while the drawings implicit in the regional representation of animals in LWS-R would amount to something very similar: perhaps a region for ear, a region for tail and a region for the rest of the body. Perhaps a column link to something saying cat, rabbit or whatever. 

Frames

Figure 13

At reference 1, the scheme was that consciousness was delivered by the frame, with the understanding that a frame might allow predicted change, say a car moving across the foreground in a predictable way.

We now propose loosening this up a little, incidentally giving the take a clearer role between the scene and the frame. A scene is like a scene in the theatre; moving from one to the next involves a change of scene, typically a change of place and of persons. Most objects are created for the duration of a scene, although we do allow some object creation at the other levels. A take is a segment of a scene for which the layers are generally fixed, although we do allow some layer creation at frame level. A frame is what is delivered by the compiler, usually making use of a lot of the material which went into the one before. It is only the period of consciousness, at the top of the heap, which really starts from scratch.
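The hierarchy above can be sketched as nested data structures. Again the class and field names are ours, chosen for illustration rather than taken from the LWS-R papers:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    """Delivered by the compiler; some layer creation allowed here."""
    layers_created: List[str] = field(default_factory=list)
    duration_s: float = 1.0   # often of the order of a second, sometimes longer

@dataclass
class Take:
    """A segment of a scene for which the layers are generally fixed."""
    layers: List[str] = field(default_factory=list)
    frames: List[Frame] = field(default_factory=list)

@dataclass
class Scene:
    """Like a scene in the theatre: a place, some persons and objects.
    Most objects are created for the duration of a scene."""
    place: str = ""
    objects: List[str] = field(default_factory=list)
    takes: List[Take] = field(default_factory=list)

garden = Scene(place="garden", objects=["beech tree"],
               takes=[Take(layers=["visual"], frames=[Frame()])])
```

Only the period of consciousness, sitting above the scenes, would start from scratch; everything below it mostly reuses what went before.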

And while a frame is often only of the order of a second in duration, it may be a lot longer. Perhaps if attention has really been caught by something which is not itself changing much in time. Perhaps trained meditators work at achieving long frames.

Conclusions

A bit of a medley. But hopefully it has served to flesh out some of the visual byways of LWS-R. 

One outcome is the suggestion that information about the relative position of objects in the visual field, on the stage as it were, be held off-layer in composite objects. Rather than using fiddly properties of the waves which define regions, fiddly properties which would be analogous to those of shape nets developed for LWS-N and of arrays developed for LWS-W. For which last see, for example, Figure 7 of reference 1.

We have also suggested that much of the information which might be supporting the visual layer does not make it to consciousness most of the time. Most of the time we are not conscious of the visually interesting flower being that of a Dracaena trifasciata – for which follow the pointer at reference 12.

All this supporting information is becoming transient supporting information. Available to and sometimes in consciousness, but mostly not. And in any event, on supporting layers, rather than included on the visual layer in some more or less tricky way.

References

Reference 1: http://psmv4.blogspot.com/2020/09/an-updated-introduction-to-lws-r.html

Reference 2: https://en.wikipedia.org/wiki/Map_projection

Reference 3: https://en.wikipedia.org/wiki/Solid_angle

Reference 4: School Geometry: Matriculation Edition – Workman & Cracknell – 1923. For revision on school geometry. A throwback to our own school days.

Reference 5: https://en.wikipedia.org/wiki/Fovea_centralis

Reference 6: http://psmv4.blogspot.com/2020/09/column-objects.html

Figure 14

Reference 7: Haybarns at Eemdijk - George Mackley – 1962. Eemdijk being a small place on the River Eem, a little to the east of Amsterdam. According to gmaps, more or less brand new, but maybe the woodcut was derived, worked up, in part at least, from the barn in the middle of the snap above. But be warned! Sixty years ago is a long time and who knows what was to be found, what happened at Eemdijk during the second world war.

Reference 8: Consciousness Without Content: A Look at Evidence and Prospects - Narayanan Srinivasan – 2020.

Reference 9: Architecture and development of olivocerebellar circuit topography - Stacey L. Reeber, Joshua J. White, Nicholas A. George-Jones and Roy V. Sillitoe – 2013.

Reference 10: The Mind Is Flat: The Illusion of Mental Depth and the Improvised Mind - Nick Chater – 2018.

Reference 11: https://www.youtube.com/watch?v=ubNF9QNEQLA

Reference 12: https://psmv4.blogspot.com/2020/09/more-flower.html

Reference 13: Philosophy in a new key: A Study in the Symbolism of Reason, Rite, and Art - Langer, S. K. – 1942.

Reference 14: https://psmv3.blogspot.com/2017/01/progress-report-on-descriptive.html

Reference 15: https://www.nationalgalleries.org/. With thanks for Figure 9. Rather better than we could manage with our telephones.

Group search key: sre.
