Figure 1
A recent early morning experience, not exactly a waking moment, rather a not-long-awake moment. Spectacles on. Morning cup of tea on the way down, the balance on the bedside table. Nearly recumbent.
Having spent a few days wondering why exactly the layered, rather visual structure of LWS-R of reference 3, with its two-plus or two-and-a-half dimensions, was a good thing, particularly with regard to vision, I was given a rather different take on things. A take which is caricatured above, a caricature which might be thought to have been derived from a careless reading of Figure 8.2 of reference 1, a book mainly about how the brain of a cat might go about integrating cross-modal stimuli, that is to say integrating input from more than one of the senses.
For present purposes, the first chapter of reference 1 makes three points. First, there are plenty of neurons in the brains of all large animals which respond to stimuli from more than one modality, in particular the visual, aural and somatosensory modalities – this last being sensations from the body, rather than from specialised sensory organs. Second, there has to be integration of information from all the modalities if we are to make good use of it all and so to prosper and survive. So, for example, sound and smell mitigate poor sight in the dark. Third, there is plenty of evidence of such integration in very young human babies. The eyes, for example, of a not-long-born baby will follow a sound, long before the baby has had time to learn anything much about either sights or sounds.
Returning to Figure 1 above, the busy scene in front of me is suggested by a snip from the painting at reference 4. The big toe of each foot sticking up into the scene. The bedspread running, more or less, up to my chin. So the visual field includes some of me and some of the bed, as well as things rather further afield.
Then, for some reason, I am very conscious of my ears, one on each side of my head, scanning the aural field suggested by the blue half circles. Ears which are sensed as holes in the side of the head, each hole perhaps a quarter of an inch across and half an inch deep, sensations perhaps arising from neural activity around the ear canals, perhaps around the exit of the Eustachian tubes into the back of the throat. Not that there is very much in this aural field. In fact, I cannot hear anything at all, not even the blood flow in the ears which does, on occasion, become audible.
So how does this conscious experience, which seemed very much a whole, not at all something in parts, get into the sort of layered structure proposed at reference 3?
Where a layer can be thought of, roughly speaking and for present purposes, as a real-valued function over the unit square. Otherwise, as a piecewise continuous, two-dimensional, textured surface, with the texture being a way of describing a local, repetitive pattern of variation of the function.
Figure 2
The figure above suggests the sort of thing we have in mind here: a repetitive local variation superimposed on something with much greater amplitude and with much lower frequency.
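For those who like such things, the sort of surface in question can be sketched in a few lines of Python. The particular frequencies and amplitudes here are invented for the purpose of illustration, nothing more:

```python
import numpy as np

def layer(x, y):
    """A real-valued function over the unit square: a low-frequency,
    high-amplitude base with a high-frequency, low-amplitude texture
    superimposed, in the manner suggested by Figure 2."""
    base = 10.0 * np.sin(2 * np.pi * x) * np.cos(np.pi * y)          # the broad sweep
    texture = 0.5 * np.sin(40 * np.pi * x) * np.sin(40 * np.pi * y)  # the local, repetitive pattern
    return base + texture

# Sample the layer on a grid, as one might when rendering it.
xs, ys = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
values = layer(xs, ys)
```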
Figure 3
First, we observe that sight and sound are the two senses which draw in most of our information about the world at large. We do get sights and sounds from the body and its immediate surroundings, but most of them come from further afield, from that world at large. Sights and sounds which we want to locate in space. While we want to locate somatosensory inputs on the body, tastes in the mouth and smells in the nose – with the last of these being complicated by being sensed in the nose, but quite possibly perceived as coming from somewhere outside the body, a somewhere with a location, possibly a remote location. In any event, tastes and smells are not further considered in what follows.
Then we simplify Figure 1 to give us Figure 3 above, an unusual projection from inside a cylinder onto a plane, a cylinder with the head in the middle, an ear to each side. The central oval is what is in front and where we get both sight and sound. The two half circles are what is behind, to the left and to the right respectively, from where we get just sound. Two half circles which actually meet behind the head, as suggested by the two red lines. So the blue area as a whole is the aural field, the central oval is the visual field. A layer which is about space, with position on the layer giving direction (left and right, 360° in all) and altitude (up and down, 180° in all). A lack of symmetry which can be seen as the result of only looking outwards, not inwards as well.
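To make the projection a little more concrete, a simple flattening of the cylinder onto the unit square might run as follows, in Python. The shape and size of the central oval are invented for the purpose; the real visual field is not so tidy:

```python
def to_layer(direction_deg, altitude_deg):
    """Flatten the cylinder around the head onto the unit square.
    direction: -180..180 degrees, 0 straight ahead, positive to the right.
    altitude: -90..90 degrees, 0 level with the eyes."""
    x = (direction_deg + 180.0) / 360.0   # left and right, 360 degrees in all
    y = (altitude_deg + 90.0) / 180.0     # up and down, 180 degrees in all
    return x, y

def in_visual_field(direction_deg, altitude_deg,
                    half_width=60.0, half_height=45.0):
    """Inside the central oval of Figure 3? The half-width and half-height
    of the oval are invented numbers, for illustration only."""
    return (direction_deg / half_width) ** 2 + (altitude_deg / half_height) ** 2 <= 1.0

# A sound dead ahead falls inside the oval, where sight and sound overlap;
# a sound behind the left shoulder falls in the aural field only.
print(in_visual_field(0, 0))      # True
print(in_visual_field(-150, 10))  # False
```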
Note that, given a workspace of fixed size, and given that we want to align visual and aural signals in space, at least approximately, there is now less space available for purely visual information.
We have two layers of this sort, one for the visual signal and one for the aural signal.
The first of these layers (A) can carry the content of the visual signal, which we suppose can be reduced to a value for colour for each direction and altitude. So the central oval is populated by the visual signal, as in Figure 1. And, in the case of vision, the signal on layer A is often more or less fixed for the duration of a frame of consciousness; we are looking at a still, not a movie. Furthermore, by default, the things we are looking at are persistent. They will still be there when we look again.
But the second of these layers (B) can only carry the location and the intensity of the various aural signals, not the rest of their content, which cannot be reduced to a single number, and probably not even to a texture. So we then add other layers, as required, to carry the sounds themselves, localised by their connection to layer B by column objects. Sounds which we see as being another two-dimensional surface, this one frequency by time. Furthermore, sounds do not persist and will not usually still be there when we look back again, as it were. At least, the sound may still be there, but it will not be the same sound: for example, there is still the chattering of the children, but the detail is different. Even though we might be hard put to say what the difference was, after the event.
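The arrangement of layers and columns might be caricatured in code, as below. The names and the fields are invented for the purpose; reference 3 has the real story about layers and column objects:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LayerA:
    """The visual layer: a colour value for each direction and altitude,
    here a coarse grid over the unit square."""
    colours: List[List[int]]  # e.g. packed RGB values

@dataclass
class SoundLayer:
    """A sound carried on its own layer: a surface of frequency by time,
    here a grid of intensities."""
    spectrogram: List[List[float]]  # rows = frequency bands, columns = time steps

@dataclass
class Column:
    """A column object tying a location on layer B to the layer
    carrying the sound itself."""
    x: float  # position on the spatial layer, unit square
    y: float
    sound: SoundLayer

@dataclass
class LayerB:
    """The aural layer: location and intensity only, with columns
    out to the layers carrying the rest of the content."""
    intensities: List[List[float]]
    columns: List[Column] = field(default_factory=list)

# The chattering of the children, located off to the right:
chatter = SoundLayer(spectrogram=[[0.2, 0.3], [0.1, 0.4]])
b = LayerB(intensities=[[0.0] * 4 for _ in range(4)])
b.columns.append(Column(x=0.8, y=0.5, sound=chatter))
```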
We have already noted that while visual images are consumed in time, what gets into layer A can often be thought of as a snapshot, outside of time. This may not be true of aural images: layer B might not carry all of the aural signal, but it might for example carry volume, perhaps expressed as intensity, and rhythm, perhaps expressed as texture. But it might not: as far as the spatial layer B is concerned, the signal might be fixed for the duration of a frame of consciousness. A matter of biological design which we are some way from bottoming out.
Notwithstanding, we can now ascribe the fact that the silence seemed to come from left and right, rather than from the front, to the dominant visual field swamping the silence in the middle. The visual field was busy, providing plenty of stimulation, so the absence of stimulation there from the aural field went unnoticed.
So a story which can be fitted into the layered world of reference 3.
PS: Stein and Meredith also draw attention to the problem of point of view when one is trying to decipher a letter someone is tracing on the back of one’s head. Does one see it from the inside or the outside? Does one see – or at least sense – a ‘b’ or a ‘d’? For which see, for example, reference 5.
References
Reference 1: The Merging of the Senses – Stein and Meredith – 1993.
Reference 2: https://psmv3.blogspot.com/search?q=meredith. Previous notice, albeit in a rather different context, of reference 1.
Reference 3: https://psmv4.blogspot.com/2020/09/an-updated-introduction-to-lws-r.html.
Reference 4: An outdoor theatre with a quack doctor and an audience of gentry – Pieter Angillis – 1685-1734. The painting from which the oval scene above has been snipped. Another trick of the PowerPoint trade: an object with a hole in it.
Reference 5: Inferring the locus and orientation of the perceiver from responses to stimulation of the skin – Natsoulas and Dubanoski – 1964.
Group search key: sre.