Wednesday 2 September 2020

An updated introduction to LWS-R

 Contents

  • Preface
  • First introduction
  • Second introduction
  • The compilation of frames
  • Topicality
  • Some history
  • Time
  • Some other matters
  • Conclusions
  • References

Preface

Figure 1: The chip's life

The figure above, slightly adapted from the top of reference 2, suggests the context for LWS. The blue is the central nervous system (CNS) – that is to say, from left to right, the spinal cord, the brain stem and the brain – located in the rather larger body. There is a great deal of electrical and chemical activity within this blue region. We are particularly interested in the human body, although at this level the layout and organisation of all large mammals is much the same. There is also a great deal of electrical and chemical communication between the CNS and the rest of the body, for example between the brain and a knee. There are specialised sensory organs – particularly the eyes, ears and nose – which between them provide a large part of the brain’s input. The nose, very old in evolutionary terms, is unusual in that it is connected directly to the brain, while pretty much everything else goes in through the brain stem, ultimately either from the spinal cord or from the cranial nerves. The brain stem does a lot of work.

LWS, a hypothesis about how we get from brain stem and brain to subjective experience, is here called the chip, located in the bottom of the brain and labelled ‘D’.

First introduction

Quite a lot of posts on this blog mention LWS, LWS-W, LWS-N and now LWS-R. It seems time, once again, to update what there is by way of introduction, with the second version being at reference 1 (January 2018) and the first version being at reference 2 (April 2017). There is also the list of important posts, itself now a little out of date, not least because we have moved on from the ‘srd’ series to the ‘sre’ series, at reference 5.

Our general concern is with consciousness, what it is and how it comes to be. The consciousness of humans in particular, although it seems very likely that higher animals have something of the sort, with some (perfectly respectable) people going further, perhaps going so far as to include fishes, maybe even amphibians and some cephalopods. The consciousness which an adult, awake human can usually readily report on. A subject which was not, until recently, considered a proper subject for scientific research – not that this need concern amateurs for whom considerations of proper do not carry the weight that they might for professionals, with reputation, grants, zero hours contracts and positions to secure.

LWS originally stood for local workspace, in contrast to the well known global workspace promoted by Baars and his colleagues (for which reference 3 is a reasonable place to start), but later it seemed that layered workspace might be just as useful a name. Also that complementary might be a better word than contrast: the two hypotheses are addressing different aspects of the problem of consciousness. Also that workspace does not carry quite the right baggage: LWS is more an end product than a place of work, a jumping off point for the delivery of consciousness and it is not yet clear that it does or is for anything else. And most of the hard work of building something worth projecting into consciousness is done elsewhere.

The present hypothesis is that consciousness arises from the electrical activity in an approximately two dimensional patch of cortex, the LWS, perhaps 5 square centimetres in area, perhaps 2 or 3 millimetres thick, perhaps 0.2% of the cortical sheet taking both hemispheres together, but containing around 100 million neurons, perhaps 0.5% of the total, excluding the cerebellum. In what follows, the patch.
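
To put some flesh on those percentages, the back of an envelope runs as follows, taking round figures of roughly 2,500 square centimetres for the full cortical sheet and roughly 16 billion cortical neurons from the general literature, rather than from anything in the posts themselves – a check of plausibility, no more.

# A rough check of the numbers in the paragraph above. The sheet area and
# the neuron count are round figures from the general literature, not from LWS.
patch_area_cm2 = 5
sheet_area_cm2 = 2_500              # both hemispheres taken together
patch_neurons = 100e6
cortical_neurons = 16e9             # cerebral cortex only, cerebellum excluded

print(f"{patch_area_cm2 / sheet_area_cm2:.1%} of the cortical sheet")    # about 0.2%
print(f"{patch_neurons / cortical_neurons:.2%} of the neurons")          # about 0.6%
print(f"{patch_neurons / patch_area_cm2:,.0f} neurons per square centimetre of patch")

Which comes out at about 0.2% of the sheet and a little over half of one per cent of the neurons – much as suggested above, the patch being rather more densely populated than the cortical average.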

The cortical mantle seems an unlikely location for a function of this sort, in part because gross disturbance of this mantle does not necessarily destroy consciousness, in part because it seems likely that a central function of this sort requires connectivity not to be found at the periphery, and we suppose that this particular patch of cortex lives somewhere in the middle of the brain, probably above the brain stem but below the cortical mantle. Along with others, we had wondered about the claustrum, now disqualified by evidence from brain-damaged Vietnam veterans, and we wonder presently about the cerebellum, containing a lot more neurons than the brain proper and whose attested functions seem to be growing from their motor roots.

It may well be that while there is a preferred, a central location for LWS-R, there is also flexibility. The patch can contract, expand and perhaps even move around a bit, to meet the needs, possibly transient needs, of the moment. Perhaps to accommodate some damage to the brain, either at birth or later.

Part of the thinking is that, by analogy with getting fusion out of a tokamak, we need to concentrate a lot of activity in a small space if we are to get consciousness out of neurons. Not that all this activity need sum to anything much that can be detected or seen from a distance.

However, while an electrical field might be considered to exist at a point in time, in the same way as a gravitational field, consciousness is the result of electrical activity which takes place in time; it is a process rather than a state – although we do not yet know how short a period of such activity can be while still amounting to consciousness, perhaps embedded in some contextual activity of rather longer duration. 

While the global hypothesis is much more global than local, with consciousness arising from the joint activity of a number of wide area networks spanning a large proportion of the brain. Or, according to Dehaene and his colleagues, with the neural correlates of consciousness including the activity of a number of wide area networks. To which reference 4 is one entry point.

These two points of view might be brought together by observing that while these wide area networks are indeed necessary for there to be consciousness, to assemble the necessary data, it is the local network which projects that data into the electrical field which amounts to consciousness.

Second introduction

In LWS our particular concern is to provide a springboard for consciousness and the purpose of this note is to describe that springboard. Build the LWS data structure, activate that structure appropriately and, lo and behold, you have consciousness. The present guess is that this activation takes the form of waves of electrical activity travelling across the neurons which make up that structure.

In what follows we do not attempt to say what consciousness might be for, to say how we might get some value out of all the metabolic energy which went into its construction. Nor do we address the many and complicated processes which would be needed for that construction – beyond saying that our model is that of program compilation, 1970’s style.

We do not need to worry about, for example, how the light coming into the two eyes separately ends up as a more or less unitary percept built from some tricky combination of what is coming in and what was already there.

While the LWS data structure has evolved, a large part of it has stayed much the same. This is sketched in the figure which follows.

Figure 2: the first box model

Top right we have the host in green, the subject. The column of red boxes suggests the hierarchic organisation of consciousness in time, with frames of consciousness at the bottom. A frame might last of the order of a second or so and while it is not static, it is the brain’s best guess, made shortly before the frame is projected into consciousness, at what is about to happen. A frame might include, for example, the regular, the predicted movement of a foreground object across a background.

A frame is self contained. Whatever there is in consciousness is in the frame; it does not reference somewhere else in the brain for support, whatever is needed has to be there, on the spot. Somehow consciousness has to be conjured out of the tabula rasa of the neurons on our patch of cortex – with a lot of effort having been put into exactly how this might be done, particularly in the case of vision.

Frames are organised into layers, with the number of layers being in the ten to twenty range. A layer is two dimensional and LWS, despite these layers, is essentially a two dimensional world, in the same way that the cerebral cortex is essentially a two dimensional world, this being reflected in the experimental habit of mapping fMRI scans of entire brains onto a standard planar map of same. There are plenty of examples of this sort of thing at reference 6.

The idea of layers is taken from their widespread use in technical drawing packages, with the layers there being used to partition, to reduce the complexity of a complex world. So if we were drawing a diagram of an animal, perhaps pinned out on a dissection mat, we might have a layer for muscles, another layer for the pipework of the vascular system and yet another layer carrying a photographic image. For more detail on this see reference 7.

Layers are organised into layer objects, perhaps supplemented by a special layer object called the background. That is to say everything that is left after the objects of present importance have been separated out.

Layers are linked by means of column objects, with a column object linking two or more layer objects together. The objects linked by a column object all come from different layers and are implemented by neurons in the same bit of our patch of cortex, our patch of cortical sheet. Column objects are, in that sense, columns: they are vertical, like the columns of a multi-storey building. Column objects are also part of the LWS solution to the binding problem, the business of sticking together all the bits of the conscious image, which at some point during assembly were scattered all over the brain, never mind all over our small patch of cortex.

Note that the layers of LWS are not the same as the layers of the cortex. The layers of the cortex contain different sorts of neurons and presumably do different sorts of things. The way that a column of cortex is organised in the information layers of LWS has nothing to do with its own histological layers.

After the layer objects and the column objects, we have the big brown box labelled implementation – and the contents of that box have shifted several times since 2017. 
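
Before turning to compilation, it may help to sketch the pieces described so far as a data structure. This is an illustration only, in Python for convenience, with hypothetical names and a toy example; nothing in it is prescribed by the posts.

from dataclasses import dataclass, field
from typing import List

# A toy rendering of the box model: frames made of layers, layers made of
# layer objects, and column objects tying together layer objects from
# different layers. All names and fields here are hypothetical.

@dataclass
class LayerObject:
    label: str                             # 'background', 'dog', ...
    parts: List[str] = field(default_factory=list)

@dataclass
class Layer:
    topic: str                             # 'visual scene', 'sound', 'smell', ...
    objects: List[LayerObject] = field(default_factory=list)

@dataclass
class ColumnObject:
    members: List[LayerObject]             # two or more, all from different layers

@dataclass
class Frame:
    duration_s: float                      # of the order of a second
    layers: List[Layer]                    # perhaps ten to twenty of them
    columns: List[ColumnObject]

scene = Layer('visual scene', [LayerObject('background'), LayerObject('dog', ['head', 'body'])])
sound = Layer('sound', [LayerObject('bark')])
frame = Frame(1.0, [scene, sound], [ColumnObject([scene.objects[1], sound.objects[0]])])
print(len(frame.layers), len(frame.columns))

With the single column object of this toy frame binding the visual dog to its bark – the binding problem in miniature.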

The compilation of frames

So the frame is our unit of consciousness. And our proposition is that there is a compiler which produces that frame. A compiler which gathers up all the goings-on in the brain and condenses them into a snapshot, a frame. Going slightly further, our present guess is that the amount of information in consciousness is of the same order as the amount of information that is to be found in a photograph taken by a mobile phone, that is to say around 5MB. Which means that we have 100 million neurons providing 5 million bytes of data, with each byte being made up of eight binary bits. Noting here that a neuron is a complicated bit of machinery in its own right, doing a lot more work than a pixel on a photograph. And that our percept of this 5MB photograph is a lot more complicated – if no bigger – than a rectangular array of pixels.
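
The arithmetic behind that guess is simple enough to be worth making explicit:

# The arithmetic of the guess above: a 5MB photograph spread over
# 100 million neurons comes to well under one bit per neuron, on average.
photo_bytes = 5_000_000
photo_bits = photo_bytes * 8                  # eight binary bits to the byte
neurons = 100_000_000

print(photo_bits / neurons, "bits per neuron, on average")     # 0.4
print(neurons / photo_bytes, "neurons per byte")               # 20.0

So, on this guess, an average of twenty neurons to each byte of conscious content – plenty of room, one might think, given how much more a neuron can do than a pixel.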

Figure 3: Compilation

Back in the 1970’s, the form was that you compiled your Fortran program (say) from source code into machine code and then you ran that machine code, unchanged for the duration. Machine code being what the machine was built to execute – but which the average programmer would not want to have to use to write his programs. Under this regime, there was a strong separation between code and data, with the understanding being that the data would change during the execution of the code, but the code itself would not. If you want to change what the code does you have to stop the execution and start over, compiling afresh. Furthermore, in the early days the code and the working data together would occupy a fixed segment of memory of the computer – say location 5,678 to location 12,345 – for the duration. No exceptions, no excuses.

In the snap above we have some Fortran source code on the left and two columns of machine code (not derived from this same Fortran code) on the right, with some intermediate assembly code in the middle. There might be more than one stage on the way from Fortran code to machine code. And while brains do not do this, it serves as a reminder that one can look at complex systems from various points of view and work at various levels.

And while these compilation processes may well need to create or adjust synapses, the present hypothesis is that such adjustment is not part of consciousness itself. A frame of consciousness is delivered from a population of neurons and synapses which is fixed for its duration – that is to say for a second or so.

Figure 4: Frames

We are also mindful of the frames of an old-fashioned cinema film – which perhaps reflect a visual bias here. A bias which reflects in turn the importance of vision to most vertebrates, the part of the animal world to which we belong.

In the snap above, on the left, we have three complete frames and bits of two more, taken from an old home cine-film. On the right, something more professional, with the soundtrack to the right – leaving just one row of sprocket holes. A two layered structure of a sort.

Topicality

Frames are organised, at least to a large extent, topically. By which we mean, in the case of vision, an arrangement which would be recognisable as an image of the scene. Rather in the way that the body is mapped onto the primary sensory and motor cortices.

A corollary is that the map from real world to layer object is more or less continuous. Things which are near each other in the real world – perhaps two successive notes in a bit of music – are near each other in our patch of cortex. The map also preserves direction. If two directions are different in the real world, they will be different in our patch of cortex. Rather more than that, from any given point there will be a simple, smooth map from the directions of one to the directions of the other. A map from one circle to another.
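
The direction-preserving part of this can be made a little more concrete with a toy map from the plane of the real world to the plane of the patch. The particular map below is made up for the purpose, not taken from the posts; the point is just that any smooth, non-degenerate map of this sort sends the circle of directions at a point to another circle of directions, without collapsing distinct directions together.

import numpy as np

# A made-up smooth map f from a patch of the real-world plane onto our patch
# of cortex: a mixture of scaling, rotation and a little gentle distortion.
def f(p):
    x, y = p
    return np.array([0.5 * x - 0.2 * y + 0.05 * x * y,
                     0.2 * x + 0.5 * y])

def image_direction(p, d, eps=1e-6):
    # The direction on the patch corresponding to direction d at point p,
    # estimated from a small step - in effect, applying the Jacobian of f.
    v = f(p + eps * np.asarray(d)) - f(p)
    return v / np.linalg.norm(v)

p = np.array([1.0, 2.0])
for angle in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    d = (np.cos(angle), np.sin(angle))
    print(np.round(d, 2), '->', np.round(image_direction(p, d), 2))
# Eight distinct directions in the real world give eight distinct directions
# on the patch, varying smoothly: a map from one circle to another.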

Remembering here that the real world is more than what we can see: we also hear, touch, taste and smell. We also think, perhaps in words.

Figure 5: Topicality in the brain

Lots of images like that above are turned up by the search key ‘homunculus cortex’, showing a topical mapping of the body onto the sensory and motor cortices. Notice that while the map is essentially a smooth map of the body onto a strip of cortex, there are several major discontinuities, lots of minor discontinuities (also known as cuts) and the details of the two functions and the two hemispheres are not quite the same.

In the case of vision, given that the contents of consciousness are a two dimensional version of some part of the three dimensional world, in much the same way as an old master painting, we imagine that there are no discontinuities of this sort. On the other hand, we do need to account for the orientation of our topical map, the sense of up and down, left and right. From where we associate to the fact that vision does seem to work best when body, head and eyes are all aligned, all pointing in the same direction, directly at whatever it is that is being attended to. In which connection, see the pointing dog of reference 9.

Figure 6: Topicality in the cerebellum

We note that the topical organisation of the cerebellum is a good deal more fractured than is suggested above for a part of the cerebrum, with this fracturing suggested in the computer assisted graphic above. Which means that if the cerebellum’s content were to be reassembled into a continuous image, fit for consciousness, there would be rather more work to be done than otherwise, work which would have to be done somewhere. A matter which we are pursuing from reference 8.

Some history

The first version of LWS, LWS-W for worksheet, was based on rectangular arrays of small cells, perhaps 1,000 of them up and down, 1,000 of them left to right, perhaps a million altogether. Very much like the pixels of a computer screen. Very much like the worksheets of an Excel workbook.

These cells were assumed to take a small number of integer values, perhaps with two special values, a high value and a low value. To that extent not like an Excel worksheet where the values of a cell can be more or less arbitrarily large and complicated.
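
A minimal sketch of such a worksheet, with a small NumPy array standing in for the million or so cells, may make the idea more concrete. The particular values and layout – HIGH for the boundaries of layer objects, LOW for the cut where one object is partially occluded by another – are chosen to echo Figure 7 below, but the code is an illustration only, not a reproduction of it.

import numpy as np

# A toy LWS-W worksheet: a rectangular array of small cells taking a few
# integer values, with two special values - HIGH marking the boundaries of
# layer objects and LOW marking the cut where one object hides another.
LOW, HIGH = 0, 9                             # hypothetical special values
sheet = np.full((18, 24), 3, dtype=np.int8)  # a small sheet; background value 3

# Object A, in front: HIGH boundary, interior texture value 5.
sheet[4:12, 4:14] = 5
sheet[4, 4:14] = sheet[11, 4:14] = HIGH
sheet[4:12, 4] = sheet[4:12, 13] = HIGH

# Object B, behind and to the right, interior texture value 7. Its visible
# boundary is HIGH; the stretch hidden behind A is marked with LOW cells,
# keeping the two interiors apart while leaving B's boundary incomplete.
sheet[6:10, 14:20] = 7
sheet[6, 14:20] = sheet[9, 14:20] = HIGH
sheet[6:10, 19] = HIGH
sheet[6:10, 14] = LOW

print(sheet)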

Figure 7: Three layer objects of LWS-W

The snip above is taken from the 2017 post at reference 10 and suggests, on an Excel worksheet, the sort of thing that can be done. Blue is for high value cells, used here to mark the boundaries of layer objects – three of them – and their parts. Yellow is for low value cells, used here to separate out the right hand objects being partially occluded by the left hand objects.

Figure 8: Column objects

While this figure, from the post at reference 11, suggests how column objects might be implemented. A post which suggests, inter alia, that column objects might be a bit like the blind spot on the retina, in the sense of being a defect in the layer object one just has to put up with. There is also talk of column objects functioning as sources and sinks of activation.

However, despite the progress being made, we felt that working through a large, rectangular array of cells was rather unnatural, not well suited to the underlying population of neurons. And even if it were possible on a small scale, probably at prohibitive expense in neurons for anything big enough to support real frames of consciousness. So we moved on to LWS-N, where N stood for neuron or neural – although it might just as well have stood for network.

Figure 9: A two part layer object in LWS-N

Here the work of defining layer objects and their parts was done by a network. With something called a shape net defining the gross geometry and something called a texture net, one for most parts of most layer objects, doing the fine detail. The two being joined by the brown links, distinguished from other links in some way, here by colour.

The idea was that it was not reasonable to expect one neuron to implement a node in such a network, rather that one would have a tight bunch of neurons. And that the connections between nodes might be a bunch of axons and might include more neurons acting as relay stations.

A scheme which was not fully worked through, but which did seem to play to a neural substrate better than the arrays which it replaced. And the texture nets looked to be a more promising approach to giving some substance to the interiors of the parts of layer objects.
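
The flavour of this can be suggested with a few lines of code – again an illustration with made-up names, not something taken from the posts – in which a shape net is a closed loop of nodes, a texture net is a sparser planar mesh, and the brown links of Figure 9 are simply kept in a list of their own.

from dataclasses import dataclass, field
from typing import List, Tuple

# A toy rendering of the LWS-N idea: a shape net as a closed loop of nodes
# defining the gross geometry of a part, a texture net as a sparser planar
# mesh filling in its interior, and a handful of links joining the two.
# Each node stands for a tight bunch of neurons, not a single neuron.

@dataclass
class Net:
    nodes: List                                   # node identifiers
    edges: List[Tuple] = field(default_factory=list)

def loop(n):
    # A shape net: n nodes connected in a closed loop, a loop of rope.
    return Net(list(range(n)), [(i, (i + 1) % n) for i in range(n)])

def mesh(rows, cols):
    # A texture net: a planar grid mesh of rows x cols nodes, no crossings.
    nodes = [(r, c) for r in range(rows) for c in range(cols)]
    edges = [((r, c), (r, c + 1)) for r in range(rows) for c in range(cols - 1)]
    edges += [((r, c), (r + 1, c)) for r in range(rows - 1) for c in range(cols)]
    return Net(nodes, edges)

shape = loop(12)                                  # gross geometry of one part
texture = mesh(3, 4)                              # fine detail of its interior
joins = [(0, (0, 0)), (3, (0, 3)), (6, (2, 3)), (9, (2, 0))]   # the brown links

print(len(shape.edges), len(texture.edges), len(joins))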

Figure 10: Second box model

Figure 2 at the start of this post was enlarged along the lines above.

We still have time expressed in red on the left.

We still have the same, relatively small number of layers. We might, for example, have several layers about the visual scene, another layer about the sounds and yet another about the smells. Perhaps another providing text back up to the images.

Layers which are organised into layer objects and column objects. Layer objects which are organised into one or more parts.

While the four brown boxes suggest the organisation of consciousness in space, with shape nets, texture nets, nodes and edges all mapping onto clumps of neurons occupying bits of our patch of cortex. Bits which have position, position which layer objects and their parts inherit, thus giving them some grounding. 

While consciousness might be thought of as existing at the meeting point between the red time and the brown place.

Figure 11: The stage

One metaphor running through much of this was the proscenium arch stage, with consciousness being made up of foreground objects, set against a background (vertical), sometimes with a foreground (horizontal), more rarely with things in the wings to each side. With a recent example of this interest, in connection with upstaging, from reference 13, reproduced above.

A concern running through much of this was the business of partial occlusion. So in the figure above, the object far left is clearly different from the object far right, even though they are given the same colour. It is much less clear what is happening in the middle, although the usual, the natural interpretation is a dark blue object behind a light blue object.

Figure 12: Three layer objects in LWS-N

The way that this was tackled in LWS-N is illustrated in the figure above, reproduced, slightly amended, from reference 12. In which object A has two parts, object B one part and object C one part. We are not here concerned with the fact that we are not seeing the backs of objects – which much simplifies moving from three dimensions to two. With the convention being that where one part or object is partially occluded by another, the boundary of the part or object so occluded is incomplete; its bit of the common boundary is missing. Much the same, in fact, as the LWS-W solution to the same problem illustrated at Figure 7 above.

We saw the shape nets, the blue perimeters, as being rope-like, strongly connected collections of neurons, loops of rope, while we saw the texture nets, the green interiors, as being relatively sparsely connected, planar collections of neurons, with data about texture – perhaps colour or tone – being carried in the way that those interiors were tiled over. Tiling in squares is different, for example, from tiling in hexagons or tiling in triangles. And where by a planar collection of neurons we mean a collection of neurons where the neurons and the connections between them can be mapped onto a plane without crossings.

A central idea was that consciousness would arise from (electrical) activation running around these shape nets and texture nets. But when we got down to thinking about exactly how one might get from a mass of firing neurons to pulses of activation running around networks built from those neurons, we were uncomfortable. It seemed too contrived, to need too many neurons and too much compilation. Something rather simpler was needed. Which led us, in November 2019, to what has now been dubbed LWS-R.

Figure 13: Interim box model

With the object model above being lifted from reference 13, an early version of what has now appeared in text at reference 14. We had travelling waves then, but also talk of primary frequency and secondary frequency, and using them to derive layer objects with parts which have regions, an additional layer of structure. The neurons generating the travelling waves of activation are associated with both space, in the sense of our patch of cortex, and with regions, in a straightforward way.

The distinction between neuron and active neuron being that not all neurons on our patch participate in a regional travelling wave and some neurons will not participate in regions at all.

Figure 14: Third box model

Figure 15: The wave equation

Figure 16: A layer object of LWS-R with seven regions

So Figure 14 is the story today, with the extra layer being dropped, with regions taking over from parts, with the change of name serving to mark the change in their implementation. Time to the right, conceptual top right and neurons bottom right. While Figure 15, adapted from reference 14, is the specification of one of the possibly several elements of a regional travelling wave. From which ω might serve to define the layer and the square root might serve to define a layer object. And Figure 16, a layer object with seven regions, has been lifted from reference 14.
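
Without reproducing the specification of Figure 15 here, the general shape of such an element can be suggested with the familiar travelling plane wave u(x, t) = A·cos(k·x − ωt + φ), which satisfies the two dimensional wave equation with speed ω divided by the wave number. Taking ω to label the layer and the wave number – the square root of k·k – to label the layer object is an assumption made for the purposes of illustration, no more.

import numpy as np

# An illustration only: reference 14 (Figure 15) gives the actual
# specification. Here we take a travelling plane wave of the familiar form
#     u(x, t) = A * cos(k.x - omega*t + phi)
# with omega taken to label the layer and the wave number sqrt(k.k) taken,
# as an assumption, to label the layer object.
A, phi = 1.0, 0.0
omega = 2 * np.pi * 8.0               # 8 Hz: a made-up temporal frequency
k = np.array([400.0, 300.0])          # a made-up spatial wave vector, per metre
wave_number = np.sqrt(k @ k)          # 500 per metre: a wavelength of about 1.3 cm

def u(x, t):
    # Activation at position x (metres across the patch) and time t (seconds).
    return A * np.cos(k @ x - omega * t + phi)

print(f"layer label omega = {omega:.1f}, layer object label |k| = {wave_number:.0f}")
xs = np.linspace(0.0, 0.02, 5)        # five sample points along a 2 cm strip
print([round(u(np.array([x, 0.0]), 0.00), 2) for x in xs])
print([round(u(np.array([x, 0.0]), 0.05), 2) for x in xs])
# The same spatial pattern, shifted along the strip: a travelling wave.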

The implementation of column objects is as yet undefined.

Time

The presentation so far has been rather vision orientated, not unreasonable given the importance of vision to vertebrates generally, not just ourselves. But there are other senses, other modalities: sound, touch, taste and smell. We have inner thought, some part of which seems to take the form of speech. We have speech, the only modality in which we have both input and output: while it is true that we can smell the outside world, that others can smell us and that we can control how we smell, all this falls far short of the information carrying capacity of speech.

Vision is being modelled in two dimensions; we have both up and down and left and right. Vision in one dimension would be very limited in comparison. The present point being that normal vision in two dimensions does not need time. Movement is important, but a lot of images are static. And even when there is movement in the image as a whole, large parts of that image are static. The car might be moving across London Bridge, but if one tracks that car, the car part of the image is more or less static, a matter addressed earlier in the year at reference 15.

Figure 17: A spectrogram

Sound is rather different, particularly organised sound like speech or music: without a time dimension it does not exist at all. In this case we imagine, without there having been much development so far, that of the two dimensions available to a frame, one is taken by time and the other is taken by pitch. So the LWS-R image is some relative of the sound spectrogram in the figure above. Together with some kind of vertical cursor for the current time: the recent past to the left, rapidly fading, and the predicted future to the right, rapidly becoming vague.
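
By way of a concrete, if entirely conventional, relative of that spectrogram, a few lines of scipy will do. The signal, sample rate and window length below are all made up; the point is just the shape of the result – a two dimensional array with frequency along one axis and time along the other, the cursor being no more than the column nearest the present moment.

import numpy as np
from scipy.signal import spectrogram

# A toy sound frame: a steady A at 440Hz for two seconds, with an E at 660Hz
# joining in after one second. All figures are made up for the illustration.
fs = 8000                                        # samples per second
t = np.arange(0, 2.0, 1 / fs)
signal = np.sin(2 * np.pi * 440 * t)
signal[fs:] += np.sin(2 * np.pi * 660 * t[fs:])

freqs, times, power = spectrogram(signal, fs=fs, nperseg=512)
# power[i, j] is the energy at frequency freqs[i] and time times[j]: one
# candidate layout for a sound frame, pitch up and down, time left to right.
cursor = np.argmin(np.abs(times - 1.5))          # the column nearest t = 1.5s
print(power.shape, freqs[np.argmax(power[:, cursor])])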

Remembering that sound includes some spatial information. Sounds usually come from somewhere or something – an aspect of sound which one might argue is much diminished when concentrating on a complex sound like speech or music.

Touch is different again, with some tactile sensations being static – like temperature or pressure – and others – like stroking or scraping – only existing in time.

Taste likewise involves a mixture of static sensations – those arising from the taste receptors – and dynamic sensations – those of texture, arising from whatever is in the mouth moving around relative to the oral cavity and the tongue. And there are those who would argue that taste has five dimensions, each dimension corresponding to a basic taste: sweet, sour, meaty and so on. Not comfortably modelled in two dimensions.

Smell is fairly static, although it can, like vision, change with time. And it may well be that the basic vocabulary of smell is a lot larger than that of the five basic tastes.

So there is work to be done here.

Some other matters

Some workers – amongst whom I include Antonio Damasio – argue for a sense of self being an essential ingredient of consciousness. See, for example, reference 16. 

Some workers – amongst whom I include Giulio Tononi – argue for some minimum level of complexity being an essential ingredient of consciousness. There has to be a viable amount of integration and differentiation. The contents of consciousness have to amount to something. See, for example, reference 17. 

However, LWS-R makes no such arguments: its first answer to both is that it is enough for there to be a travelling wave. And in both cases one can quibble. One might argue that such things are not necessary for LWS-R to work, for there to be subjective experience, but they are necessary for the compiler to kick in. Perhaps because there needs to be a reasonable amount of arousal for the necessary energy to be available.

Furthermore, regarding the first argument, one might reply that LWS-R is the sought-after self. There is no need for anything more. Whether it happens to contain information about the rest of the self, its own host, rather than information about the world outside, does not bear on the question.

And some support to the second argument is given by my having read, many years ago, of sensory deprivation experiments. One observation from which was that people deprived of sensation were apt to fall asleep for a long time, perhaps for as long as 24 hours. In other words, in the absence of anything worthwhile to be conscious about, one might as well fall asleep and hope for something better in the morning.

Switching from lack of input to lack of output, being able to report, being able to respond to questions, by means of speech or otherwise, is the best available evidence of consciousness, the best marker of consciousness, at least for the present. Nevertheless, we also believe that some animals which do not understand questions, which cannot report, are conscious – disregarding here the possibility that one could devise cunning experiments with animals which tested for consciousness otherwise. We also have the situation, fortunately rare, where the human subject is conscious but is unable to report. Perhaps unable to hear or see, so asking the question becomes difficult. Although it seems unlikely that one could be conscious, could be alive, without there being any sensation at all. The central nervous system is needed to sustain life and it seems unlikely that it could do that and maintain consciousness while suppressing all awareness of afferent, that is to say in-bound, nervous traffic.

In any event, in principle, if one knew where in the brainstem or the brain the LWS-R was, one could place cunning electrodes with which one could detect the sort of activity which marked consciousness.

Conclusions

We have drawn together some of the threads which have gone into our hypothesis that consciousness is the product of the activation of the neurons on a small patch of cortex which we have dubbed the LWS, with the present version being dubbed the LWS-R.

References

Reference 1: http://psmv3.blogspot.com/2018/01/an-introduction-to-lws-n.html. From January, 2018.

Reference 2: http://psmv3.blogspot.co.uk/2017/04/its-chips-life.html

Reference 3: https://en.wikipedia.org/wiki/Global_Workspace_Theory

Reference 4: http://psmv3.blogspot.com/2018/01/what-is-consciousness-and-could.html

Reference 5: https://psmv4.blogspot.com/2019/06/a-further-update-on-seeing-red.html

Reference 6: Natural speech reveals the semantic maps that tile human cerebral cortex – A. G. Huth, W. A. de Heer, and others – 2016.

Reference 7: http://psmv3.blogspot.co.uk/2017/04/a-ship-of-line.html

Reference 8: The human cerebellum has almost 80% of the surface area of the neocortex – Martin I. Sereno, Jörn Diedrichsen, Mohamed Tachrount, Guilherme Testa-Silva, Helen d’Arceuil, and Chris De Zeeuw – 2020.

Reference 9: https://psmv4.blogspot.com/2020/01/on-as-aspect-of-attention-and.html

Reference 10: http://psmv3.blogspot.com/2017/07/rules-supplemental.html

Reference 11: http://psmv3.blogspot.com/2017/07/binding.html

Reference 12: https://psmv3.blogspot.com/2018/07/from-neurons-to-layer-objects.html

Reference 13: http://psmv4.blogspot.com/2019/11/more-on-making-regions-into-objects-and.html

Reference 14: http://psmv4.blogspot.com/2020/08/waved-up-regions.html

Reference 15: https://psmv4.blogspot.com/2020/01/moving-cars.html

Reference 16: Self comes to mind: constructing the conscious brain – Antonio Damasio – 2010.

Reference 17: Consciousness: here, there and everywhere – Giulio Tononi and Christof Koch – 2015.

Group search key: sre.
