Thursday 30 July 2020

Teamwork

Figure 1

An outing prompted by the piece at reference 1 in a newsletter from an outfit called Neuwrite West, with the main course being the paper at reference 2.

With this last being concerned with how global graph theoretical properties of the networks derived from fMRI scans of human brains interact with the performance of the owners of those brains on mainly cognitive tasks, while inside the scanner. Nearly 500 owners, drawn from the Human Connectome Project of reference 5. A paper which I have spent some time with – although not enough to be able to say that I have read it.

With the graph theoretical properties in question being about the extent to which the nodes of the graph can be usefully considered as a collection of communities, about the strength of the connections within those communities and the strength of the connections between them. The existence of local hubs serving connections within communities and connector hubs serving connections between communities. All of which might be encapsulated in the term ‘modularity’. With the wiring cost, for which the total length of all the connections is a proxy, being an important consideration: all that wiring is expensive to build and expensive to service. 
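
Not from the paper, but by way of making 'modularity' concrete: a minimal Python sketch using the networkx library, with a toy graph of two dense communities joined by a single edge.

```python
import networkx as nx
from networkx.algorithms import community

# A toy network: two tight communities joined by one bridging edge.
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 2),   # community A
                  (3, 4), (3, 5), (4, 5),   # community B
                  (2, 3)])                  # the lone between-community edge

# Greedy modularity maximisation - just one of many detection methods.
parts = community.greedy_modularity_communities(G)
print(list(parts))                     # [{0, 1, 2}, {3, 4, 5}]
print(community.modularity(G, parts))  # about 0.36 - well above zero
```

Here nodes 2 and 3 would be the connector hubs, such as they are: the only nodes carrying traffic between the two communities.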

Large computer programs, for what it is worth, are very modular: modularity is the only way to keep complexity under control. Deep modularity too, rather than the one level of modularity under consideration here.

The conclusion of the paper appears to be that, in common with many other complex systems, particularly living ones or the dead ones built by humans, modularity in brains is both present and important. Quite apart from what it is exactly that the individual modules are doing and how well they are doing it. Furthermore, failures in performance can be reliably predicted from failures in modularity. One supposes that such failures in modularity might easily be the product of local damage to the brain, damage to an important connector hub, the sort of damage caused by, for example, strokes, although the present paper does not go into that.

The scanning

The scans used in this work were taken from the Human Connectome Project of references 5, 6 and 7 – with plenty more material waiting to be found out there. 

I think the idea was that each subject was in the scanner for a couple of sessions of a couple of hours each. For some of this time they were doing tasks, tasks based on the batteries of psychological tests developed over the years, adapted for use in the rather confined space available inside a noisy scanner. While keeping the head still, I think by use of a bite bar.

Tasks which were organised, for present purposes, into four categories: working memory, relational, language & maths, social reasoning. Details of this sort of thing are to be found at reference 8. Four categories which involve rather different parts of the brain.

Scans are done a slice at a time, with perhaps 50 slices making up a three dimensional image of a head. It might take two or three seconds to build such an image. These images are grouped into blocks, with the whole of a block being given to one of the four tasks. It may well be that more than one block is given to any one task, which would reduce the amount of information going into any one analysis, but would also provide a check of replicability.

Quite a lot of work has to be done on the raw data before it can be used, work which might be called pre-processing, for example to make allowance for the fact that the data which goes into the image of a head has been collected over a small number of seconds, while one wants to think of it as having been collected at a point in time, a point in a time series. I do not know where the boundary between the scanner and the computer used by the authors of the present paper, reference 2, lies. On the other hand, I don’t think that matters for present purposes.
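
By way of illustration of one such pre-processing step – and emphatically not what the HCP pipelines actually do – a toy Python sketch of slice-timing correction, with all the numbers invented: each slice is resampled onto a common per-volume time grid.

```python
import numpy as np
from scipy.interpolate import interp1d

# Toy slice-timing correction. Each slice of a volume is acquired at a
# slightly different moment; we resample so that every slice reads as if
# sampled at the nominal time of its volume. All numbers are invented.
TR = 2.0                                   # seconds per volume
n_slices, n_vols = 50, 200
slice_offsets = np.linspace(0, TR, n_slices, endpoint=False)

rng = np.random.default_rng(0)
data = rng.standard_normal((n_slices, n_vols))   # fake signal: slice x time

vol_times = np.arange(n_vols) * TR               # nominal acquisition times
corrected = np.empty_like(data)
for s in range(n_slices):
    actual = vol_times + slice_offsets[s]        # when slice s was really sampled
    f = interp1d(actual, data[s], kind="cubic", fill_value="extrapolate")
    corrected[s] = f(vol_times)                  # resample onto the common grid
```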

A box model
 
Figure 2 – click to enlarge for legibility

A good part of the argument rests on a demonstration of a relatively simple model predicting task performance from a small number of statistics derived from the 264-node network, itself derived from the series of scans made while the task was going on. This is suggested by the three green boxes bottom right in the figure above.

While the line of five big blue boxes top right is intended to be suggestive of the huge amount of computation that has to be done to get to those statistics from the scans of an individual engaged in some task. Computation which does not have any regard for exactly what it is that the brain is doing, although to be fair, arriving at the 264 nodes used in this work did have regard to the anatomy of the brain, if not its functions. This arrival is described by Power and his colleagues at reference 3.

The two groups of red boxes are intended to be suggestive of the move from the voxels left – perhaps one or two millimetre cubes – each carrying a signal through time, through a slice of time, to the nodes right, connected by weighted edges, with the weights of those edges being a measure of the correlation in time of the signals at the two endpoints, with the two endpoints of each edge being suggested by the two arrows – a common enough device among IT people. And with those nodes being organised into communities – that is to say, the one level of modularity mentioned above. So the right hand red structure is quite different conceptually from the left hand red structure from which it is derived: a move from a series of images in time to a weighted graph.
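
In Python, and with invented data, that move amounts to something like the following – 264 nodes as in reference 3, everything else made up:

```python
import numpy as np

# From node time series to a weighted graph. Rows are nodes, columns are
# time points; the edge weight between two nodes is the Pearson
# correlation of their signals. Sizes and signals are invented.
rng = np.random.default_rng(1)
n_nodes, n_timepoints = 264, 400
ts = rng.standard_normal((n_nodes, n_timepoints))

W = np.corrcoef(ts)          # 264 x 264 matrix of Pearson correlations
np.fill_diagonal(W, 0)       # no self-connections
# Studies of this sort often then threshold W, keeping only the strongest
# fraction of edges before doing any graph theory on the result.
```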

The set of nodes with lots of edges is sometimes called the rich club, the club from which local hubs are drawn. While the set of nodes with connections to lots of communities is sometimes called the diverse club, the club from which connector hubs are drawn. Typically, few if any nodes are in both clubs.
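
The usual entry tickets to the two clubs are degree – the number of edges at a node – for the rich club, and the participation coefficient of Guimerà & Amaral for the diverse club. A sketch of the latter, assuming the communities have already been found:

```python
import networkx as nx

def participation_coefficient(G, communities):
    """P(i) = 1 - sum over communities s of (k_is / k_i) squared.

    Near 0: all of a node's edges stay within one community (local hub
    material). Near 1: edges spread evenly over many communities
    (connector hub material)."""
    comm_of = {n: c for c, nodes in enumerate(communities) for n in nodes}
    pc = {}
    for n in G:
        k = G.degree(n)
        if k == 0:
            pc[n] = 0.0
            continue
        per_comm = {}
        for nbr in G[n]:
            c = comm_of[nbr]
            per_comm[c] = per_comm.get(c, 0) + 1
        pc[n] = 1.0 - sum((k_s / k) ** 2 for k_s in per_comm.values())
    return pc
```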

Nodes take their positions from the voxels from which they are derived. Positions which can be projected from three dimensional space onto a cortical plane and which are often used in graphics.

These nodes are arranged in a functional network, a network which will vary with both subject and function, in this case task. I have not yet learned anything about this variation, beyond its existence.

Note also that these 264 nodes are standing for around 20 billion neurons – something like 75 million neurons apiece – 100 billion if one includes the rather dense cerebellum at the back. So each node is capable of doing an awful lot of work. There is an awful lot going on under the hood.

With the point of all this being that this relatively simple model – that is to say the three green boxes – built from a small number of Perceptrons (see reference 11), arranged in a small number of layers – so qualifying as ‘deep’ – can predict something about task performance from the distillation of those scans into a very small number of statistics about the modularity of the derived network of nodes. 
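
A minimal sketch of what such a model might look like, using scikit-learn and entirely invented numbers – three modularity statistics in, one performance score out:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# The three green boxes, in miniature: a small multi-layer perceptron
# predicting a task performance score from a handful of modularity
# statistics. Features, weights and subject count are all invented.
rng = np.random.default_rng(2)
n_subjects = 500                              # roughly the HCP sample here
X = rng.standard_normal((n_subjects, 3))      # e.g. modularity Q, mean
                                              # participation, mean degree
y = X @ np.array([0.5, 0.8, -0.2]) + rng.normal(0, 0.1, n_subjects)

model = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print(model.score(X[400:], y[400:]))          # R squared on held-out subjects
```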

Some comments

I am not qualified to comment on whether the conclusions of this paper are warranted, although I do find them both plausible and interesting. And I associate to the weight put by Tononi & Koch on the pairing of, the tension between, integration and differentiation, which figures large in their paper at reference 10.

And digging into the paper and following the various Bing (or Google) trails turned up plenty of interest. The claim, for example, that good ways to modularise networks are intimately linked with minimum length descriptions of paths through those networks, with this last having been an important area of research for getting on for a century. A claim which is expounded in an accessible way at reference 4.
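
For the curious, the map equation of reference 4 makes that link explicit: it scores a candidate partition by the expected description length, in bits, of a random walk over the network, coded with one codebook per community plus an index codebook for hops between them. A rough Python sketch of the formula, taking the walk's visit and exit rates as given rather than computing them from the network:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector, zeros ignored."""
    p = np.array([x for x in p if x > 0], dtype=float)
    return float(-(p * np.log2(p)).sum())

def map_equation(visit, exit_rate, modules):
    """Two-level description length L(M) from reference 4.

    visit[i]      - stationary visit rate of node i under the random walk
    exit_rate[m]  - rate at which the walk leaves module m
    modules       - list of lists of node indices
    """
    q = sum(exit_rate)                        # total between-module hop rate
    index_len = q * entropy([e / q for e in exit_rate]) if q > 0 else 0.0
    module_len = 0.0
    for m, nodes in enumerate(modules):
        p_total = exit_rate[m] + sum(visit[i] for i in nodes)
        probs = [exit_rate[m] / p_total] + [visit[i] / p_total for i in nodes]
        module_len += p_total * entropy(probs)
    return index_len + module_len             # lower is a better partition
```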

I was also reminded of an observation by FIL in connection with a pamphlet about the institutional treatment – that is to say in mental hospitals – of mental disorder in the UK: a very good summary for those who were already well informed. Not so much use for those who were not. In this case, I was at first completely baffled by the introduction to something or other but rather impressed with it after I had done my homework.

But the takeaway for today is teamwork. That the breadth of knowledge needed to produce work of this sort is unlikely to be available in one head. One needs lots of cooks to make this particular broth – and the cooks are going to have to learn how to get on with each other if they want to make serious progress. They will also need to know how to sell themselves to the people who hold the purse strings. But in any event, the role for one-man-bands has shrunk and continues to shrink.

So to properly understand this work and its implications, never mind do the work in the first place, one needs to know about:

Graph theory, with particular reference to the clustering of nodes of large networks into communities. Noting in passing that, for present purposes, a complete graph in which every node is equally connected to every other node is as uninformative as a null graph in which there are nodes but no connections. As so often, one wants something in-between – see the sketch after this list.

Statistics. This paper is, for example, thick with Pearson’s correlation coefficients.

Systems. It probably helps to be familiar with systems theory in general. The theory of behaviour of large and complex systems. Not to say dynamical systems.

Brain scanning. How fMRI scans work – bearing in mind that for perhaps as much as £500,000 you are getting a lot of machinery. What are the problems and limitations? In which connection I was pleased to come across the undated reference 9.

How you map the brain scan of one person onto a standard atlas so that you can work with scans of more than one person at a time, or with scans from the same person at different times.

How you reduce perhaps hundreds of thousands of fMRI voxels into something a bit more tractable, in this case 264 nodes (a number suspiciously close to 256, that is to say two raised to the power of eight, with the eight itself two raised to the power of three – which would have been a very numerological number).

The human connectome project, HCP (references 5, 6 and 7).

Psychological testing (reference 8).
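
On the first of these, the in-between point is easy to demonstrate: a minimal sketch with networkx, showing that even the most plausible split of a complete graph scores no better than zero.

```python
import networkx as nx
from networkx.algorithms import community

# The extremes carry no community structure. Any two-way split of a
# complete graph scores at or below zero, and a null graph has no edges
# to score at all. Only graphs in-between reward a good partition.
complete = nx.complete_graph(10)
halves = [set(range(5)), set(range(5, 10))]
print(community.modularity(complete, halves))   # -1/18, about -0.056
```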

Maybe, in time, things will settle down. The framework within which we talk about all this will be settled and well-documented. There will be consensus about what works and what does not and we will be able to take much more on trust. But I don’t think we have got to that point quite yet.

One also worries about quality control. Given the range of skills going into this work, and that doing stuff is much more fun than checking the work of others, is the quality control there? Mindful here of having, in the past, heard reports of the woeful standard of statistics in some psychological work.

References

Reference 1a: A brief history of our language for the brain: Vocabulary, dictionary, and poetry – Ellie Beam – 2020.

Reference 1b: http://www.neuwritewest.org/blog/a-brief-history-of-our-language-for-the-brain. The location of reference 1a and the source of the snap above.

Reference 2: A mechanistic model of connector hubs, modularity, and cognition – Bertolero, M.A., Yeo, B.T.T., Bassett, D.S. & D’Esposito, M. – 2018.

Reference 3: Functional Network Organization of the Human Brain – Power, J.D. et al. – 2011.

Reference 4: The map equation – M. Rosvall, D. Axelsson and C. T. Bergstrom – 2009. 

Reference 6: WU-Minn HCP 1200 Subjects Data Release Reference Manual – Human Connectome Project – 2017.

Reference 7: The WU-Minn Human Connectome Project: An overview – David C. Van Essen and others – 2013. ‘The Human Connectome Project consortium led by Washington University, University of Minnesota, and Oxford University is undertaking a systematic effort to map macroscopic human brain circuits and their relationship to behavior in a large population of healthy adults’.

Reference 8: NIH Toolbox brochure – Northwestern University and collaborators – 2017.

Reference 9a: Glossary of MRI [and fMRI] Terms – American College of Radiology – 2010?

Reference 10: Consciousness: here, there and everywhere – Tononi & Koch – 2015.
