Friday 8 November 2019

More on making regions into objects and their parts

Contents
  • Introduction
  • Recap
  • Objects
  • Object relations in woodcuts
  • Object relations in LWS-N
  • Additional
  • Conclusions
  • References
Introduction

This post builds on reference 1, something of a new departure for LWS-N (introduced getting on for two years ago at reference 9), to describe anew how LWS-N builds objects from a sheet of neurons, for projection into consciousness.

The search key ‘srd’ has now been upgraded to ‘sre’ in honour of this new departure.

Recap

To recap on what was intended as a suggestive story at reference 1, we have our 4 square centimetres of unit square, populated by a two dimensional array of one million neurons, more or less arranged in 1,000 rows and 1,000 columns. This compares with our more usual assumption that LWS-N is 5 square centimetres in area, containing 100 million neurons, more or less uniformly distributed over that area. No split into a left hand part and a right hand part, in the way of most brain structures. No maculae or foveae after the way of retinas – at least not yet. On the other hand, we might well allow eye movement within a frame of consciousness, to keep the frame strong and up to date. Eye movement which does not itself make it to consciousness.

Figure 1
Figure 2
Figure 3
We suppose our neurons to be of the type shown in Figure 1 above, to be uniformly arranged in the unit square shown in Figure 2 and visualised in what follows as shown in Figure 3 – in which the array of neurons is shown as a landscape rectangle, for PowerPoint convenience. If one thought of this snap as having been taken from vertically above a sheet of cerebral cortex, we might have it that the dots are the cell bodies of our neurons, while their dendritic arbours might be thought of, but are not shown, as circular discs, in real life perhaps one fifth of a millimetre in diameter. So neurons only approximate to points – thinking here of the point charges one often finds in electrostatics – and there will be a fair amount of overlap of arbours.

Figure 4
Figure 5
Active neurons on our unit square are arranged in regions, for example the green region top right above, where the neurons are firing in synchronised patterns which amount to two dimensional travelling waves. We will probably include the limiting case of a stationary wave. Regions are defined by their shape and by the three or four (positive real) parameters which describe the corresponding travelling wave. The most important parameter is the one which specifies the frequency of firing of the neurons, a frequency which will range up to something less than 1,000Hz. But given all the noise in the system, there are not going to be that many frequency bands, with Buzsáki and Draguhn identifying just 10, organised logarithmically and spanning from a lot less than 1Hz to 600Hz, in reference 6. We call this frequency the primary frequency. We call the frequency of the travelling waves the secondary frequency. We will, in a post to come, define a procedure which maps a region, that is to say the neurons making up a region, onto a subset of the unit square which we will call the span of a region.

The span of a region will be connected, although it may contain holes. It will be very roughly convex; that is to say, holes apart, it will fill in the interior of its boundary.

Spans of distinct regions may intersect, may overlap. Indeed, the span of one region may be inside the span of another.
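
By way of illustration only, the sort of travelling wave just described can be sketched in a few lines of code. Everything here – the plane wave form, the direction and wavelength parameters, the names – is my own choice for the purposes of the sketch, just one way of cashing out the three or four positive real parameters mentioned above, and not anything fixed by LWS-N.

    import math

    def wave_activation(x, y, t, primary_hz, secondary_hz,
                        direction=(1.0, 0.0), wavelength=0.1):
        """Notional activation at position (x, y) on the unit square at time t.

        primary_hz   - firing frequency of the neurons, the primary frequency
        secondary_hz - frequency of the travelling wave, the secondary frequency
        direction, wavelength - illustrative extras, cashing out the three or
                                four positive real parameters mentioned above
        """
        dx, dy = direction
        # The firing itself, at the primary frequency.
        carrier = 0.5 * (1.0 + math.sin(2.0 * math.pi * primary_hz * t))
        # A plane wave travelling across the region, at the secondary frequency.
        wave = 0.5 * (1.0 + math.sin(
            2.0 * math.pi * ((dx * x + dy * y) / wavelength - secondary_hz * t)))
        return carrier * wave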

The present post tries to carry the story of reference 1 into the expression of visual objects from the outside world, objects with edges, shape, textures, position relative to the subject and position relative to other objects. More or less the layer objects about which we have posted before.

Some of this is described by the soft-box model above, a bare-bones sketch of which, in code, follows the list below. In which:
  • Active neuron is distinguished from neuron, with one of the former belonging to exactly one region
  • We allow parts to contain more than one region, thus allowing colour and texture to vary within a part. Not further discussed in what follows
  • Frame is added at the left to remind us that what we get here is very transitory, unlikely to last more than a second or so
  • We assign neurons to positions on our unit square, despite neurons being far from point objects, as already noted above
  • We have omitted spans, which need their own model.
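
That bare-bones sketch of the soft-box hierarchy might run as follows, in Python. The class and field names are mine, much is simplified, and spans are omitted here too.

    from dataclasses import dataclass
    from typing import List, Tuple

    Position = Tuple[float, float]    # where an active neuron sits on the unit square

    @dataclass
    class Region:
        neurons: List[Position]       # each active neuron belongs to exactly one region
        primary_hz: float             # firing frequency of the neurons
        secondary_hz: float           # frequency of the travelling wave

    @dataclass
    class Part:
        regions: List[Region]         # more than one region allowed, so colour and
                                      # texture may vary within a part

    @dataclass
    class LwsObject:                  # 'object' itself is taken in Python
        parts: List[Part]             # all regions of an object share its primary frequency

    @dataclass
    class Frame:
        objects: List[LwsObject]      # the whole thing is transitory, a second or so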

Objects

An object is a subset of the set of neurons in our unit square, all firing at the same primary frequency, which is maximal in the sense that there are no neurons just outside the object which have that same primary frequency. Two different objects can have the same primary frequency, but there has to be a reasonable air gap (as it were) between them.
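
To make the definition a little more concrete, here is one rough way such a grouping might be computed: bucket the active neurons into primary frequency bands, then split each band into connected components, two neurons counting as neighbours if they are closer together than the air gap. The band width and air gap figures are illustrative numbers of my own, not anything settled.

    from collections import defaultdict
    from typing import Dict, List, Tuple

    Position = Tuple[float, float]

    def group_into_objects(neurons: Dict[Position, float],
                           band_width: float = 10.0,   # width of a primary frequency band, Hz
                           air_gap: float = 0.01       # minimum separation, in unit square units
                           ) -> List[List[Position]]:
        """Group active neurons (position -> primary frequency) into candidate objects."""
        # Bucket the neurons into primary frequency bands.
        bands: Dict[int, List[Position]] = defaultdict(list)
        for pos, freq in neurons.items():
            bands[int(freq // band_width)].append(pos)

        # Within each band, flood-fill into connected components: two neurons
        # are neighbours if they are closer together than the air gap.
        objects: List[List[Position]] = []
        for members in bands.values():
            unassigned = set(members)
            while unassigned:
                seed = unassigned.pop()
                component, frontier = [seed], [seed]
                while frontier:
                    x, y = frontier.pop()
                    near = [p for p in unassigned
                            if (p[0] - x) ** 2 + (p[1] - y) ** 2 < air_gap ** 2]
                    for p in near:
                        unassigned.discard(p)
                        component.append(p)
                        frontier.append(p)
                objects.append(component)
        return objects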

We do not require an object to have exclusive rights over a patch of our unit square, with some overlap being shown in the figure above.

But we do require an object to have a reasonably simple boundary with the rest of the world, the black line in the figure below, to be made up of a small number of straight lines and smooth curves. As the boundary gets more complex, the object is more likely to be lost in the background, not experienced as an object at all. This lack of experience corresponding to a lack of a coherent travelling wave of activation both spanning that object and distinguishing it from its background.

Figure 6
Note that the black and blue lines of the figure below are not part of the object so delineated. They have been superimposed on the object for clarity. A boundary is the transition between two regions, which may or may not approximate to a straight line – although we imagine that straight lines and simple curves give a stronger experience than something more complicated. From where I associate to the camouflage of first world war warships, deliberately breaking up the lines and the silhouette of the ship to confuse the eye.

Figure 7
A part of an object is a maximal connected subset of the neurons making up that object which have the same secondary frequency. An object may contain just one part with which it coincides, otherwise we expect the parts of an object to cover that object, that is to say the union of the spans of the parts is equal to the span of the object.
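
The same sort of grouping as sketched under the object definition above would do here, run within a single object and keyed on the secondary frequency rather than the primary. A hypothetical fragment, reusing that earlier group_into_objects sketch and some made-up numbers:

    # object_neurons maps each neuron of one object to its secondary frequency (Hz).
    # The positions and frequencies below are made-up numbers.
    object_neurons = {(0.42, 0.61): 40.0, (0.43, 0.61): 40.0, (0.61, 0.70): 25.0}
    parts = group_into_objects(object_neurons, band_width=5.0, air_gap=0.02)
    # Each element of parts is then one part of the object, in the sense given above.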

Figure 8
We do not require a part to have exclusive rights over a patch of neurons making up our object, although in the first of the two figures above, they all do. We will consider more complex possibilities in due course, although it is not yet clear how important these more complex possibilities will be in practice, how often they will crop up. Some of the simpler ones, involving plenty of overlaps and two holes, are suggested in Figure 8 above.

Figure 9
While the part mechanism is mainly about dividing an object into parts, as in Figure 7 above, one use of the part mechanism might be to express a really strong boundary, a boundary given a part to itself, a boundary separating the two parts or bounding an object as a whole, with the first of these possibilities being suggested by the thin red part in the figure above. Something which is expressed in the object, unlike the black boundary of Figure 6 above. A rather weaker version of the shape nets previously suggested. But one which allows the parts of an object to have the same secondary frequency, a use for which will be noted below.

Figure 10
Then, certainly in the case of things that are seen, we have the issue of relations in space. How does our essentially two dimensional representation deal with occlusion, deal with one thing being in front of another, some – but not all – permutations of which are suggested in the figure above? How can we be sure that we don’t have something like the unlikely scenario suggested bottom left, where the two objects do not really overlap at all, despite appearances? Bottom right and top left we can deal with by saying that simple shapes rule in the absence of evidence to the contrary, evidence which is present bottom left in the form of a thin white line. Top right, with one or the other thing being transparent, is different again, being symmetrical with regard to the two shapes. But here, if not in LWS-N, we do have the three shades of blue. Which no doubt helps, although we cannot say something like: ‘if the middle blue is closer to the right hand blue than to the left hand blue, then the right hand shape is on top’, because it all depends on the degree of transparency and on which of the two objects is transparent, which is circular. We need some more clues.

Figure 11
Luckily, in real life there is often some framing information to help us, where the frames are not transparent. So in the figure above, the strong frame on the right hand shape suggests that that is the shape on top. A frame which could be replicated on our unit square by application of the technique shown at Figure 9 above.

Figure 12
With the figure above offering further puzzles, these ones involving a possible hole in the primary object. Again, in real life, an important part of computing scenes like those above is small head movements, which can tell one which parts are moving together and are so likely to be parts of the same object. So top right, by bringing the head down and to the right a bit, the thickness of the object on top comes into view, an improvement on top left. Bottom right we have the further help of seeing the background (or background object) pattern or texture move slightly against the foreground as we move our head, confirmation that it is indeed the background, not part of the foreground object.

With the issue here being that the brain might work all this out, but how does it express the answer in LWS-N? Can it do so within the confines of one topically organised layer, or does it need another layer to carry some supplementary information? Issues which we have been turning over for at least a couple of years. For which see, for example, reference 2.

A simple answer might involve forcing exclusivity, for each object and each part to have exclusive occupancy of its bit of two dimensional space. But such a simple answer allows neither for our knowing more about the objects than can be seen from a single point of view nor for objects and parts being transparent – and while ancient man might not have had glass, he did have water.

Object relations in woodcuts

Figure 13
We now turn to the consumption of images. Does this have anything to tell us about how LWS-N might project images from the real world into consciousness? A subject in which many artists, over the years and centuries, have taken an interest, albeit with their interest being in how the brain, how the visual system as a whole, does things. While my present interest is in the very large stage of that system, where the image has been processed and is ready for conscious consumption. And in which we have a range of options:
  • Looking at the real thing, where we started
  • Looking at the real thing, through a window, reduced and framed by that window
  • Looking at a colour photograph of the real thing hung on the wall
  • Looking at a black and white photograph hung on the wall
  • Looking at a full colour painting hung on the wall
  • Looking at a black and white woodcut hung on the wall
  • Looking at a reproduction of such a woodcut, what we have here

Roughly in descending order of proximity to the original, whatever that might mean. And the thought is that the woodcut, with its very limited range of expression, is perhaps closest to, or at least comparable with, what LWS-N has to do when turning the firing of a small sheet of neurons into the subjective experience of the scene.

So we turn now to object relations on a woodcut, with one such being reproduced by telephone above – not bad full screen on this laptop, not good inserted in a Word document – a reproduction which does include image processing artefacts, for example in the left hand end walls and in the sky – but which still serves the present purpose. With the immediate interest being the tricks and techniques the woodcutter uses to generate a sense of the real, using a medium – a flat block of wood – which might be thought not terribly promising. The idea of a woodcut being to cut lines and areas out of the wood, leaving what is left to print black – which means that the simplest motif is a white line on a black background – as in Figure 16 below.

The way that the brain delivers all this to LWS-N is a quite different matter to which we will return in due course. The present thought being that in doing this, the LWS-N compiler will have to use tricks and techniques which are in some sense comparable to those used here.

Note the conventions used by this woodcutter to suggest shape and texture, conventions which vary from woodcutter to woodcutter. Conventions which we get to know and which the brain somehow integrates into our seemingly seamless subjective experience. So in the figure above we have conventional markings suggesting the roundness of the poles supporting the roofs of the barns, and other conventional markings suggesting the leaves of trees and other plants. With neither convention being very close to what you might see in the real world or get from a photograph.

Note also that object relations in the two dimensional woodcut have to be coded in a different way to real life. Not least because small head movements are of no help here: they do not change the appearance of things in the way that they do in real life.

Figure 14
In this detail, the larger, middle pole is clearly in front of the sky and the tree. In both cases the background resumes as one, as the eye moves across the pole. An arrangement not much disturbed by the right hand, shaded side of the lower part of the pole merging with the tree to its right. The white stripe running up the whole of the left hand side of the pole strengthens the sense of continuity of pole.

The cockerel weather vane is reasonably clearly a cockerel, not a peculiar hole in a striped foreground object, even though the stripes on either side of the cockerel do not line up exactly. The eye does not notice this in normal viewing conditions. The white boundary serves to sharpen up the boundary, but at this magnification it does not seem to help in deciding whether we have a hole in the sky or a weather vane – in any event a white boundary which would not exist in real life.

Furthermore, the cockerel is in front of the tree to its left, albeit not very strongly. This must make use of the sequence: cockerel sitting on the roof of the first building, first building in front of the second building, second building in front of the tree. And it may be we only get that sense when we are asked, or ask ourselves the question explicitly, when, in one way or another, we put the brain to work on the question.

All of which works much better when the woodcut is viewed from a sensible distance, at a sensible scale. When the woodcut is taken in as a whole and the brain has all the information provided by the woodcutter to work on, not just the much smaller number of cues and clues it gets from a detail.

Figure 15
Turning to the mainly black tree being in front of the sky, the horizontal hatching of the whole of the sky is one clue. It is unlikely to be a foreground object with a complicated boundary.

And even looking at the small portion at Figure 15 above, it is still a black object in front of a horizontally striped object, the white jagged boundary being more plausibly, more probably that of the black object than that of the striped object, from which it follows that the black object is in front.

The brain is bringing its knowledge of the real world to bear in its compilation, if we may use the LWS-N term, of the woodcut into the subjective experience of that woodcut. It is also making use of information about the whole in order to compile parts. So how does LWS-N – hypothesised to be self-contained – pull off a comparable trick?

Figure 16
The point about the experience of the part depending on the whole is brought out very clearly in this very elementary woodcut by Eric Gill. Elementary in the sense that this very effective image is made with just a few cuts, compared with the thousands which went into the hay barns above, showing up white on black. The snips to the side, by themselves, have little impact, certainly not much erotic impact, at least under normal circumstances. So, once again, what is LWS-N adding to the raw image? Do the devices suggested at reference 7, but not yet deployed in this changed world, do the trick?

Figure 17
In this last detail, the object relations are getting more complicated, with the horizontal pole clearly being in front of the vertical pole, but with both being rooted in the same building. In which one is edging towards the sort of geometrical tricks of Escher and his imitators, tricks in which the cues and clues get a bit mixed up. While some woodcutters drift into something close to abstract expressionism, with just a central core of realism.

Object relations in LWS-N

Figure 18
So what are we to do with the two rectangles above? Which one is nearer the subject? Which one is the larger? And while the brain may well have worked out the answer, it is clearly going to have to do more than put a couple of blue rectangles onto our unit square if the subject is going to experience that answer.

One thing we take away from the woodcuts, is that LWS-N is going to need some device or convention to express these object relations. And while we may be a bit shaky on distances and sizes, we do have strong perceptions of order and depth, we do have memories about either those objects in particular or objects in general, and LWS-N needs some device to express the relation in space, relative to the subject, of our two rectangles set against a background, complicated or otherwise, a relation derived from those perceptions and memories. The brain may use all kinds of clever cues, clues and algorithms to work those relations out, but what we need here is some way for LWS-N to express, to encode the answer, to project it into the subjective experience.

And for the moment we want to see what can be done within the confines of a single layer, within the confines of our (single) unit square, without supplements of the sort talked about at reference 7.

To this end, we suppose a world in which the nearer, more important things are in the middle of the scene, a world not unlike that of a proscenium arch theatre with wings on both sides, a backdrop, actors, actresses and other stuff arranged on the floor of the stage. Things of most interest tend to be in the middle of the scene. Possibly a rather simplified world, but note that we are only talking about the experience of a single frame of consciousness here, the experience of a second or so. If something else comes to be of most interest, the compiler may well present a new frame. The parsimoniousness of consciousness of Chater, at references 4 and 5, is relevant here.

We then suggest a rule in two parts. One: if objects A and B are close together, then if the primary frequency of A is greater than the primary frequency of B, then A is in front of B, is nearer the subject than B. Two: if we have two parts A and B of the same object C (for which the primary frequency will be the same), then if the secondary frequency of A is greater than the secondary frequency of B, then A is in front of B, is nearer the subject than B.
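
A minimal sketch of that two-part rule, assuming the primary and secondary frequencies are simply to hand as numbers, and representing an object or part as a small dictionary, which is my own device:

    def nearer_the_subject(a, b):
        """Apply the two-part rule above.

        Each of a and b is a dict with 'object_id', 'primary_hz' and
        'secondary_hz' keys. Returns True if a is in front of b, False if b
        is in front of a, and None where the rule does not decide.
        """
        if a['object_id'] != b['object_id']:
            # Rule one: distinct (but nearby) objects - compare primary frequencies.
            if a['primary_hz'] == b['primary_hz']:
                return None
            return a['primary_hz'] > b['primary_hz']
        # Rule two: parts of the same object - compare secondary frequencies.
        if a['secondary_hz'] == b['secondary_hz']:
            return None
        return a['secondary_hz'] > b['secondary_hz']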

This does not exhaust two of our four parameters, our four degrees of freedom, but it does take a chunk out of them. There is less left over for things like colour and texture, which we shall, in any event, leave to the next episode.

Figure 19
So how does all this play out in this expanded version of the previous figure? Our proscenium arch theatre, with the stage itself distinguished in brown, as we do have some sense of up and down, which we said something about at reference 1.

We have two objects in this experience, both of them made up of four parts, with a very modest amount of part overlap, bottom right. A background, with a fair amount of detail, but very much in the background. The left hand object has the high primary frequency, the right hand object the medium primary frequency and the background the low. The left hand object is what we are focussed on, and it seems bigger than the right hand object, whether or not it is in real life.

While the blue is high secondary frequency, green medium and red low, this giving us the spatial ordering of the parts.

Noting, once again, that this is the experience of a moment, of a single frame of consciousness. Rankings may move around a bit for the next frame.

Figure 20
While in this version we suggest upstaging. The high frequency object is the dark object upstage, that is to say towards the back of the stage, while the three low frequency objects at the front of the stage seem smaller, even though they are nearer and actually much the same size. An effect said by some to be caused by the eyes of the subject following the eyes of the three actors at the front of the stage, looking back, towards the one at the back.

Figure 21
In this example we show a simple use of overlapping parts, both opaque and semi-transparent, to suggest folded paper. Dark blue for high secondary frequency, light blue for low.

We have a clear boundary between the two parts of the object upper right in the figure above, but how do we resolve the ambiguity about which of the two parts is on top, is in front, from the point of view of the subject? In the figure, this resolution is more or less achieved by depth of colour, with the darker blue being presumed to be above the lighter blue, although it is still possible for the brain to flip from one orientation to the other.

But assuming that we solve this problem in the world of LWS-N, the possibility opens up of being able to see three dimensional objects in the round, albeit with less intensity and less detail in the occluded parts. And it seems quite likely that people will vary in the extent to which they are able to do this. I associate to an anecdote, I think from Glazer of reference 8, about a colleague who was able to rotate complicated crystal structures in her mind, a trick which he was nothing like as good at.

Figure 22
Both real life and artists make plenty of use of shadows, with this last example above being straightforward enough to deal with. The shadow is compiled by LWS-N as a part of the lower object, with the knowledge that it is the shadow of the upper object, if present in the subject’s experience at all, consigned to a supplementary layer, in the way of reference 7.

Additional

A few oddments follow, slightly relevant to the foregoing.

I went to a concert at the Wigmore Hall while preparing this post and got intrigued by the eight wires from which the two microphone clusters were hung from the ceiling. The small circular holes in the ceiling from which the wires emerged were clearly visible. The lower parts of the wires, the parts which seemed to catch the light, were clearly visible, although some wires seemed much fatter than others - which I thought an illusion rather than a fact on the ground, or rather a fact in the air. But the main point of intrigue was the way in which the upper parts of around half of the wires were invisible, despite the brain knowing perfectly well where they were. It did not see fit to make up for the deficiencies of lighting or eyes. It did not join up the dots.

The LWS-N hypothesis is that the organised firing of millions of neurons generates an electrical field which, of itself, amounts to the subjective experience of consciousness. Neurons such as those illustrated in Figure 1 above. But we say nothing about what else that firing may do, apart from being used to support that organisation. We say nothing about what outputs there may be or about where all the energy of all that firing goes.

I have also been thinking about the generation of the hypothesised electrical field. I recently read somewhere, perhaps at reference 3, about the potentials captured by EEG machines being more to do with charge running around dendrites, than the concentrated charge of an action potential running up an axon. It would be good to know more about what one might expect of a field generated by this complicated mass of neural tissue – but I don’t suppose that I ever will.

Furthermore, when talking of neurons, we have been thinking of a uniform population of neurons, all built the same way, all behaving the same way, all modelled in a tractable way by one set of equations, one set of rules, although we do allow individuality in the growth of dendrites and axons. There is no central ground plan for these. This may be an acceptable approximation, an acceptable simplification, but we need to bear in mind that that is what it is. In a population of a million neurons there is going to be some damage and some turnover; neurons at the margin which do not come up to specification for one reason or another. Rather in the way that in a register of 50 million holders of national insurance numbers there are going to be some curiosities. Odd freaks of name, number or registration. Possibly left-overs from the past - even on computerised registers.

Figure 23
The analysis of the stream of consciousness into frames, each of the order of a second long, has been mentioned above. Here I mention an example of the lack of continuity from one frame to the next, even when one is looking at the same scene, without moving one’s head. The occasion was sitting outside a public house in Slapton, in Devon, watching the swifts flying around the (detached) church tower as the evening closed in. I have a very clear memory of how the birds suddenly popped into consciousness when the brain happened to refresh that bit of the scene, presumably drifting out again in the seconds to come. Presumably the ‘maintain threat awareness’ part of the unconscious eye movement machinery is only sampling the visual scene enough to detect threats to a large animal, with small birds not qualifying.

Conclusions

We have carried the story of our unit square of neurons onto the expression of objects, their parts and the relations in space between them. Hopefully the next episode will cover colour and texture.

Which will leave the less obvious business of supplementary information about visual objects, handled in the past by having supplementary objects on supplementary layers, linked to the visual objects by column objects, objects which perhaps serve to pass activation from one region to another. A sample of the previous treatment is to be found at reference 7.

References

Reference 1: http://psmv4.blogspot.com/2019/10/the-field-of-lws-n.html.

Reference 2: http://psmv3.blogspot.com/2017/03/on-seeing-rectangles.html.

Reference 3: Electroencephalography (EEG): neurophysics, experimental methods, and signal processing - Nunez, M. D., Nunez, P. L., & Srinivasan, R. – 2016.

Reference 4: The Mind Is Flat: The Illusion of Mental Depth and the Improvised Mind - Nick Chater – 2018.

Reference 5: http://psmv3.blogspot.com/2018/08/the-myth-of-unconscious.html.

Reference 6: Neuronal Oscillations in Cortical Networks - György Buzsáki, Andreas Draguhn – 2004.

Reference 7: http://psmv3.blogspot.com/2017/07/binding.html.

Reference 8: https://psmv3.blogspot.com/2017/04/bragg-and-son.html.

Reference 9: http://psmv3.blogspot.com/2018/01/an-introduction-to-lws-n.html.

Group search key: sre.
