Figure 1
This also connects to the issue highlighted in Delbourgo at reference 4, a book which has recently resurfaced here in Epsom. That is, when you are drawing a plant, perhaps for inclusion in a botany book, do you draw a real plant, or do you draw what you think of as the ideal plant, a composite which draws together interesting features from a number of real plants? And, digressing, are we still producing enough artists with the drawing skills needed to make such drawings, for which there is still a place in the world?
From there we turned to modelling in the Visual Basic supplied as part of Microsoft Excel, starting with some box models, as is customary in these pages.
Figure 2
So to build our brain we simply need to make a list of these modules and then to build them, perhaps simply working through them from left to right. The top line in the figure above suggests such a list.
But then, assembling a brain differs from assembling a car in that each module comes alive, is alive during the build process, as are the connections to other modules. And the successful build of one module will usually depend on the prior building of various other modules: so our list has an order and there are dependencies between the items in that list. The bottom line in the figure above suggests this additional structure, a suggestion which is limited by our limited skill with PowerPoint arrows.
The next thought is that it is not enough for module A to exist in order for module B to be built, it must be connected, be bound to that building. And one module can only be bound to one thing at a time. A thought drawn from elementary programming, where an instance of a subroutine can only work for one master at a time.
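By way of illustration, a minimal sketch of that one-master-at-a-time rule in the Visual Basic of Excel. The names, ModuleInstance, BindModule and so on, are illustrative inventions for the purposes of this post, not the code actually sitting in our workbook:

    ' One instance of a module in the growing brain. A bound module is
    ' working for exactly one thread; zero means that it is free.
    Public Type ModuleInstance
        Name As String
        Built As Boolean
        BoundToThread As Long       ' 0 = not bound to anything
    End Type

    ' Try to bind a module to a thread. This fails if the module has not
    ' yet been built or is already working for some other thread.
    Public Function BindModule(m As ModuleInstance, threadId As Long) As Boolean
        If m.Built And m.BoundToThread = 0 Then
            m.BoundToThread = threadId
            BindModule = True
        Else
            BindModule = False
        End If
    End Function

    ' Release the module so that some other thread can bind it.
    Public Sub ReleaseModule(m As ModuleInstance)
        m.BoundToThread = 0
    End Sub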
Figure 3
The next thought is that this is all very serial – and slow. A real brain works in a much more parallel fashion, pushing out in lots of different directions at once. Which motivates the idea of a thread. Our build starts with just one processing thread, but at various points along the way a processing thread can spawn another thread.
So if our master list is 1,000 items long, our first thread reaches position 23, at which point it spawns a second thread starting at position 453, before carrying on to position 24. Both threads then proceed in parallel. And so on. With what we are now calling a build perhaps containing a lot of threads during the peak of construction activity. And when a thread finishes its business, it hits a stop instruction which stops it.
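Assuming that the master list is held as an array and that a thread is little more than a cursor into that array, spawning might be sketched as follows, again with illustrative names rather than anything definitive:

    ' A thread is little more than a cursor into the master list.
    Public Type BuildThread
        Cursor As Long              ' current position in the master list
        Running As Boolean          ' False once the thread has hit a stop
    End Type

    Public Threads(1 To 1000) As BuildThread
    Public ThreadCount As Long

    ' Spawn a new thread at the given position in the master list. The
    ' spawning thread then carries on from where it was.
    Public Sub SpawnThread(startAt As Long)
        ThreadCount = ThreadCount + 1
        Threads(ThreadCount).Cursor = startAt
        Threads(ThreadCount).Running = True
    End Sub

    ' The build starts with just the one thread, at the top of the list.
    Public Sub StartBuild()
        ThreadCount = 0
        SpawnThread 1
    End Sub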
Figure 4
We have now arrived at a number of different sorts of instruction: build a module, bind a module to the current thread, release a module from the current thread, spawn a new thread and stop a thread. It quickly becomes convenient to add a further instruction, at least an instruction of sorts, the named location. Then we can spawn a new thread at a named location, rather more robust when the number of locations gets large than spawning a new thread at a numeric location identifier. Rather than going to location 238643, we go to location Balham, with the latter needing more system – but also being less error prone.
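For what it is worth, the instruction set so far, and the lookup which turns a name like Balham into a position in the master list, might be sketched like this, the extra system mentioned above amounting to no more than a scan of the list:

    Public Enum InstructionKind
        ikBuildModule               ' build a module
        ikBindModule                ' bind a module to the current thread
        ikReleaseModule             ' release a module from the current thread
        ikSpawnThread               ' start a new thread at some location
        ikStopThread                ' stop the current thread
        ikNamedLocation             ' an instruction of sorts: a label
    End Enum

    Public Type Instruction
        Kind As InstructionKind
        Operand As String           ' a module name or a location name
    End Type

    Public MasterList(1 To 1000) As Instruction
    Public ListLength As Long

    ' Turn a location name into a position in the master list, rather
    ' than relying on error prone numeric identifiers.
    Public Function ResolveLocation(locName As String) As Long
        Dim i As Long
        For i = 1 To ListLength
            If MasterList(i).Kind = ikNamedLocation And _
               MasterList(i).Operand = locName Then
                ResolveLocation = i
                Exit Function
            End If
        Next i
        ResolveLocation = 0         ' not found
    End Function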
The default position is that at the end of a step the pointer, the cursor, for a thread advances one location in the master list, as in, for example, Figures 2 and 3 above. Before we had threads, this default would be overridden in the case that a build instruction took several steps to complete, in which case the cursor would not advance, it would wait, it would pause. Now we have threads, there will be more overriding: perhaps to wait for a location to be free, or to wait for a module to be available to bind. And looking ahead, one might wait for a condition to be true or for the completion of a pause instruction. The threads can interact with each other, and to a limited extent with the world outside, significant complications which we did not have before. All this being managed, looked after by our host system, a host system which is missing from a brain, or at least present there in quite a different form.
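Carrying on with the types sketched above, the shape of a single thread step might be roughly as follows. BuildStepDone and TryBind are stubs standing in for the construction and binding logic, which is not shown here; they say whether the thread can move on or must wait:

    ' Take one step for one thread. The default is to advance the cursor
    ' by one location; waiting overrides that default.
    Public Sub StepThread(t As BuildThread)
        If Not t.Running Then Exit Sub

        Dim ins As Instruction
        ins = MasterList(t.Cursor)

        Select Case ins.Kind
            Case ikBuildModule
                ' Construction may take several steps: only advance when done.
                If BuildStepDone(ins.Operand) Then t.Cursor = t.Cursor + 1
            Case ikBindModule
                ' Wait until the module in question is free to be bound.
                If TryBind(ins.Operand, t) Then t.Cursor = t.Cursor + 1
            Case ikSpawnThread
                SpawnThread ResolveLocation(ins.Operand)
                t.Cursor = t.Cursor + 1
            Case ikStopThread
                t.Running = False
            Case Else
                t.Cursor = t.Cursor + 1     ' releases, named locations and so on
        End Select
    End Sub

    ' Placeholder stubs: in the real thing these would do the construction
    ' and binding work, consuming substances as they go.
    Public Function BuildStepDone(moduleName As String) As Boolean
        BuildStepDone = True
    End Function

    Public Function TryBind(moduleName As String, t As BuildThread) As Boolean
        TryBind = True
    End Function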
Figure 5
We defer looking at modules in the same sort of way until we have introduced substances below.
We have talked about modules without giving any thought to what they might be for. The need for modules and for their arrangement into a complex network has been taken for granted.
Nor have we given any thought to error and to the propagation of error through the build. To how early error can survive to result in symptoms much later in the build. With one of the prompts for the present paper being the idea, taken from references 2 and 3, that a chemical failure in the growing foetal brain can result in problems, specifically schizophrenia, in young adults. But as things stand, a module is either built or its build fails and the host build stops – which is all a bit too all or nothing.
We address these gaps by introducing substances, with the purpose of modules being to produce substances and to generally manage the substance economy in the brain as a whole.
We propose some number of substances, say fewer than a hundred for present purposes, and there is a central store of substances associated with each build. The activity of threads and modules both consumes and produces substances, and the job of the build, the job of the master list, is to keep the levels of substances within proper bounds for the duration. We might have it that some substances are mandatory and that the build stops altogether when any one of them fails.
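In Visual Basic terms, the central store and the mandatory substances might look something like the following, with the hundred being no more than a placeholder:

    Public Const SUBSTANCE_COUNT As Long = 100

    Public Level(1 To SUBSTANCE_COUNT) As Double        ' the central store
    Public Mandatory(1 To SUBSTANCE_COUNT) As Boolean   ' the build fails without these

    ' Returns False, meaning that the whole build should stop, if any
    ' mandatory substance has run out.
    Public Function MandatorySubstancesOk() As Boolean
        Dim i As Long
        For i = 1 To SUBSTANCE_COUNT
            If Mandatory(i) And Level(i) <= 0 Then
                MandatorySubstancesOk = False
                Exit Function
            End If
        Next i
        MandatorySubstancesOk = True
    End Function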
We leave as an exercise for readers the consideration of the possibility of putting this into reverse, that the whole point of substances is to enable the production of modules. On which story, modules are what the brain is really about, rather than substances.
Figure 6
A recipe is made up of ingredients, of specified amounts of substances. In terms of Visual Basic, a data type. Then a build starts with an opening stock, expressed as a recipe. It ends with a closing stock, also expressed as a recipe. It has a target stock, where the system designer thinks it ought to be for satisfactory operation. It has a regular exogenous supply, a supply from the outside world which we take as a given, again expressed as a recipe, probably as so much for every build step, a proxy for time. There is the possibility of disposal when stocks exceed the target by too much.
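As a Visual Basic data type, a recipe might be no more than an array of amounts, with the various stocks then being particular recipes. A sketch, building on the store above, with the names again being illustrative:

    ' A recipe: so much of each substance.
    Public Type Recipe
        Amount(1 To SUBSTANCE_COUNT) As Double
    End Type

    Public OpeningStock As Recipe       ' where the build starts
    Public TargetStock As Recipe        ' where the designer thinks it ought to be
    Public ExogenousSupply As Recipe    ' supplied from outside, per build step

    ' Once per build step: add the regular exogenous supply to the central
    ' store, then dispose of anything too far above target.
    Public Sub ApplyExogenousSupply(disposalMargin As Double)
        Dim i As Long
        For i = 1 To SUBSTANCE_COUNT
            Level(i) = Level(i) + ExogenousSupply.Amount(i)
            If Level(i) > TargetStock.Amount(i) + disposalMargin Then
                Level(i) = TargetStock.Amount(i) + disposalMargin
            End If
        Next i
    End Sub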
Stock is consumed in building modules (the create line in the figure above), in their existing (built) and additionally in their being bound to other modules (bound). A module which is bound is doing something and needs extra. Stock is produced when a module is in production mode (produce), as opposed to being in construction mode or just resting. We introduce another instruction to tell the system to do this.
Note that a build instruction may say that construction takes a number of steps and a produce instruction may say that production should continue for a number of steps. In any event, all consumption and production is conducted, is expressed in recipes on a per-step basis. The dependencies for consumption will not be the same as those for production, with module production being less demanding in that way than module construction. Dependencies which can be checked both at the time the master list is compiled, at the outset, and when it is being executed, at run time.
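The per-step bookkeeping then amounts to little more than the following, with the check that there is enough in the store standing in for the run time part of the dependency checking:

    ' Can the central store support one step's worth of this recipe?
    Public Function CanConsume(r As Recipe) As Boolean
        Dim i As Long
        For i = 1 To SUBSTANCE_COUNT
            If Level(i) < r.Amount(i) Then
                CanConsume = False
                Exit Function
            End If
        Next i
        CanConsume = True
    End Function

    ' Take one step's worth of a recipe out of the central store.
    Public Sub Consume(r As Recipe)
        Dim i As Long
        For i = 1 To SUBSTANCE_COUNT
            Level(i) = Level(i) - r.Amount(i)
        Next i
    End Sub

    ' Put one step's worth of a recipe back into the central store.
    Public Sub Produce(r As Recipe)
        Dim i As Long
        For i = 1 To SUBSTANCE_COUNT
            Level(i) = Level(i) + r.Amount(i)
        Next i
    End Sub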
Modules may be damaged during construction or during life. Damage to a module will arise from shortfall in bound consumption by the parent modules during construction and from shortfall in its own built consumption during life. This damage is reflected in its actual production falling short of the ideal production for a module of its type.
Damage is not repaired during life, with replacement by a new module being the only option available. A badly damaged module will be retired.
Given that any particular instance of a module can only be bound or otherwise active in at most one thread, and that for any one thread there is only one thread step per build step, all these rates can be expressed in terms of so much per build step.
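Damage might be kept track of along the following lines, with the damage figure being a fraction between zero and one, the rates all being per build step, and the whole thing being a sketch rather than the workings of our actual workbook:

    Public Type ModuleState
        Damage As Double            ' 0 = perfect, 1 = useless
        IdealProduction As Recipe   ' per step, for an undamaged module of this type
        RetireThreshold As Double   ' retire the module when damage gets this bad
    End Type

    ' Shortfall in consumption, during construction or during life, adds
    ' to damage. Damage is never repaired.
    Public Sub RecordShortfall(m As ModuleState, shortfallFraction As Double)
        m.Damage = m.Damage + shortfallFraction
        If m.Damage > 1 Then m.Damage = 1
    End Sub

    ' Actual production falls short of the ideal in proportion to damage.
    Public Function ActualProduction(m As ModuleState) As Recipe
        Dim r As Recipe, i As Long
        For i = 1 To SUBSTANCE_COUNT
            r.Amount(i) = m.IdealProduction.Amount(i) * (1 - m.Damage)
        Next i
        ActualProduction = r
    End Function

    ' A badly damaged module is retired rather than repaired.
    Public Function ShouldRetire(m As ModuleState) As Boolean
        ShouldRetire = (m.Damage >= m.RetireThreshold)
    End Function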
Figure 7
One might extend the thread life history at Figure 5 above to match, with boxes for consume and produce.
Figure 8
Our build was originally conceived as expressing the growth of a system. But now, there is enough there to support ongoing life, at least of a sort – not including a body, peer objects or the outside world generally. If properly tuned, the build could reach a stable state. A stable state which might turn out to include modest oscillations, perhaps reflecting the granularity of the business of module construction.
A stable state which would rest on a fixed, finite master list of instructions, with enough instructions now being available to support looping and repetition. Also enough to enable the condition known to computer people as deadly embrace, where the system locks up, perhaps with module A waiting for module B and module B waiting for module A. A condition which might have some parallel in a brain, although a brain, being organic and alive, cannot lock up – although it might die.
Some observations from Excel
Figure 9
The figure above lists the instructions which we are working on, with the last two not having left the starting blocks. We are reminded of all the work that has been done over the years on minimum instruction sets: how with a few basic instructions one can do everything. It was surprising how much could be done with a very small instruction set – but it was also true that very small instruction sets resulted in very long programs: larger sets of larger instructions won the day for most practical purposes and most people are no longer terribly interested in what goes on under that hood. Larger sets which included plenty of redundancy, in the sense that there are usually lots of ways of accomplishing any particular task, rather as there are usually lots of different ways of expressing something or other in a natural language.
An omission from the list above is a call and return mechanism, included in virtually all high level computer languages, for example Fortran and Basic, and an essential element of the modularisation needed to keep complexity under control. For example, to hide all the messy details around calculating logarithms from most of the routines which need logarithms. They don’t usually need to know, don’t want to know. A gap which could in principle be plugged if we were allowed to compute the current address, to store it in some named variable, and then, at some point in the future, go back to the address stored in that variable. In practice, call and return is more usually thought of in terms of a call stack, with calls pushing the stack down and returns popping the stack up.
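Something along the following lines would do, were we to want it, with the stack being no more than an array of return addresses. Strictly there would need to be one stack per thread; one global stack will do for the sketch:

    Public CallStack(1 To 100) As Long      ' return addresses: positions in the master list
    Public StackDepth As Long

    ' Call: remember where to come back to, then jump to the named routine.
    Public Sub CallLocation(t As BuildThread, locName As String)
        StackDepth = StackDepth + 1
        CallStack(StackDepth) = t.Cursor + 1        ' push the return address
        t.Cursor = ResolveLocation(locName)         ' jump
    End Sub

    ' Return: pop the stack and go back to wherever the call came from.
    Public Sub ReturnFromCall(t As BuildThread)
        t.Cursor = CallStack(StackDepth)
        StackDepth = StackDepth - 1
    End Sub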
Whether or how exactly a brain does that sort of thing are interesting questions.
Build steps
Our model advances in discrete steps, thread steps within build steps. Steps which are about control will generally take very little time, while construction and production steps might take rather longer. Nevertheless, individual steps will be kept relatively short by allowing both construction and production to be spread over many steps.
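The outermost loop is then just thread steps nested inside build steps, something like the following, leaning on the bits and pieces sketched earlier and with the disposal margin being a made-up number:

    ' One build step: top up the store from outside, then give every live
    ' thread one thread step. Report failure if a mandatory substance has
    ' run out, at which point the whole build stops.
    Public Function BuildStep() As Boolean
        Dim i As Long
        ApplyExogenousSupply 10#
        For i = 1 To ThreadCount
            If Threads(i).Running Then StepThread Threads(i)
        Next i
        BuildStep = MandatorySubstancesOk()
    End Function

    ' Run the build for a given number of steps, or until it fails.
    Public Sub RunBuild(maxSteps As Long)
        Dim s As Long
        For s = 1 To maxSteps
            If Not BuildStep() Then Exit For
        Next s
    End Sub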
Things like brains do not step in this way, except perhaps at the level of the firing of a neuron, which is a discrete, cell level activity. Otherwise, the activity of a brain is very diffuse, very decentralised, with lots of different things going on at once. On the other hand, some see waves of activity across the brain, synchronised by brain waves, perhaps gamma waves at 40Hz. Forty steps a second, as it were.
Separation of construction and production
On a high level view, our model is producing while it is being built, to that extent like a brain which has to work at the same time as learning, developing and growing. But on a low level view this is not the case at all. In our model, a module cannot produce until it has been built, until it has gone through however many thread steps have been allocated to its building. And when those steps have been taken, a switch is flipped (as it were) and the module is available for use, for production. As far as we are aware, a brain is not like this.
Determinism
A build is completely determined by the master list. The build is the same every time that list is executed – which is not much like real life. However, it would not be difficult to introduce an element of chance into the processing of the master list, perhaps into the workings of consumption and production of substances.
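By way of example, a small dose of chance could be added along the following lines, Rnd being the standard Visual Basic random number generator, so that no two executions of the same master list come out quite the same:

    ' Scale a recipe by a random factor somewhere between 90% and 110%,
    ' to be applied to per-step consumption or production.
    Public Function Perturb(r As Recipe) As Recipe
        Dim out As Recipe, i As Long, factor As Double
        factor = 0.9 + 0.2 * Rnd()
        For i = 1 To SUBSTANCE_COUNT
            out.Amount(i) = r.Amount(i) * factor
        Next i
        Perturb = out
    End Function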
Conclusions
We have described a building model and we have started to build something in Excel. It may be that such a model, while not attempting to model the real world, the way that a human brain actually grows, does nevertheless illustrate the sort of things that might go wrong in that real world. In particular, how early damage might result in late symptoms.
A model which lies somewhere between the cellular automata and the famous Turing machine on the left and the real world on the right.
Lastly, we continue to assert that while a brain does not do things in the same way as a computer, it still has to address many of the same (information processing) problems, albeit in different clothes. Establishing the parallels is likely to be informative.
Work in progress.
References
Reference 1: https://www.aurea.com/our-acquisitions/artemis/.
Reference 2: Hidden Valley Road – Robert Kolker – 2020.
Reference 3: Prenatal choline and the development of schizophrenia – Robert Freedman, Randal G. Ross – 2015.
Reference 4: Collecting the world: Hans Sloane and the origins of the British Museum – James Delbourgo – 2017. Pages 102-103.
Reference 5: http://psmv4.blogspot.com/2020/04/a-family-with-troubles.html.
Reference 6: http://psmv4.blogspot.com/2020/04/choline.html.
Reference 7: http://psmv3.blogspot.com/2016/09/the-choice-model.html. There are some links to this one too.
Reference 8: https://psmv4.blogspot.com/2020/03/a-very-short-history-of-computing.html. A rather different story, aimed at a different problem, with the current story occupying a different dimension.
Group search key: sre.