Figure 1
Introduction
What follows is intended to give a bit of background, a bit of motivation for our habit of using Excel-like arrays of pixels to describe the world. The Excel, that is, from the Microsoft stable; not the exhibition facility in the East End of London, presently a hospital.
Background which arises from an ongoing investigation into what a two dimensional world is missing: what can be done in three dimensions that does not really work in two – but which might be usefully approximated by allowing layers into this essentially two dimensional world, after the way of LWS-R of reference 2 and computer packages like Drawbase of reference 4, last mentioned in the rather different post about layers at reference 8.
I am mindful here of the importance of vision to most vertebrates, in particular to humans. Evolution has invested a lot in vision, in many ways a sense of two dimensions. Our brains are geared to two dimensions. And I believe that it is relevant that the tightly folded cerebral cortex of humans is another two dimensional structure, a sheet just a few millimetres thick but 2,000 square centimetres in area, say a couple of square feet, taking the two hemispheres together.
On the other hand, while there is plenty of roughly two dimensional life about – films of bacteria & algae, sheets & colonies of cells, with the layered stromatolites being an important part of the fossil record of early life – all the bigger plants and animals are organised in three dimensions.
I am also mindful of the layers often used to simplify the world a bit, to structure it into layers of organisation, of complexity. With some layers being more obvious, more natural and less arbitrary than others. In the case of reference 1, from the basic eukaryotic cell up. Physicists would, of course, like to start from somewhere else, as illustrated at Figure 1 above. A figure snipped from the paper by an eminent physicist at reference 6.
With these biological layers being, from the bottom: cell, tissue, organ, organ system (an association of different organs and other anatomical structures that together perform a particular physiological process), organism, population (a group of similar organisms living together), community (the interacting populations of an area), ecosystem, biome (a large, possibly continental, geographic area containing various ecosystems, to which various organisms have adapted) and biosphere (the whole lot).
Ten for once, rather than the magic seven, which last number often crops up in stories of this sort. See, for example, reference 5.
A two dimensional world
Let us suppose that we have, that we are living in a two dimensional world, a world which is specified by a world function W1: R1 × R2 × R3 → S. Where R1 and R2 are the two dimensions of our world, with position in that world specified by two real numbers, that is to say positive or negative numbers, including zero, not necessarily – and not usually – whole numbers. And R3 is the time dimension, with time specified by another real number.
S is a state space, not necessarily finite. But it might be as small as the set made up of zero and one. In any event, S always includes a null value, denoted by φ. Used to describe a place in our world which is empty, where nothing is happening. Which might well be the same as the zero in the minimal case just mentioned.
Figure 2
Note that while W1 may be considered a two dimensional space, it is really more like a special kind of two dimensional surface in a three dimensional space, a bit like a map with the value in S standing in for height above sea level, possibly something like the above. Possibly something far less continuous. But at least not a closed surface, not anything like a sphere or a torus, let alone something more complicated.
There may be lots of rules about the values in S that W1 may take in two dimensional space at any particular time t. This being another way of saying that there is more than just noise, that there is plenty of structure, plenty of redundancy. That there are validation rules: perhaps, in the case that S is a segment of the real line, that the function W1 is continuous almost everywhere. That is to say there may be breaks in the function, but not too many of them. The point being that while the data structure would allow any amount of such wildness, the rules do not permit it. There may also be lots of rules about the values in S that W1 may take in two dimensional space at time t, given that we know what happened before that. Or put another way, about how the system evolves given a starting point.
Persistence is likely: that is to say that things, that structures identified in one moment in time are likely to be there in the next moment. Continuity of a sort.
We suppose such rules to exist, but we do not necessarily know what they are, although we might make guesses, and we leave aside the question, no doubt of interest to philosophers, of in exactly what sense such a function and the associated rules can be said to exist. How are they specified? Where do they come from? Are we in danger of allowing the deity?
Instead, we just suppose our world is bounded in the sense that there is a positive number B (‘B’ for bound, for boundary and for keeping things finite) such that for all x, y and t, if |x| > B or |y| > B, then W1(x, y, t) = φ. There is nothing and nothing happens outside of some large square with side 2B. Going further, nothing much happens anywhere near the boundary of that square, although we do not specify exactly what we mean by anywhere near. But a soft boundary, rather than a hard boundary up against which things are rubbing and jostling, giving rise to boundary effects which we do not need here.
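Putting the pieces together in symbols: W1 : R1 × R2 × R3 → S, with φ ∈ S, and W1(x, y, t) = φ whenever |x| > B or |y| > B. That is to say, at any time t, the non-null part of the world is confined to the square [−B, B] × [−B, B].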
Figure 3
Example 1. An example of such a world might be a family of circles and ellipses drifting about in space and time, perhaps something like the figure above. Circles and ellipses with boundaries and interiors, after the way of simple objects in Microsoft’s Powerpoint. Such a circle might be described by equations of the form:
Figure 4
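By way of a hedged illustration – not the equations of Figure 4, but the same sort of thing – such a world might be coded up along the following lines, with the particular circles, drift velocities and line width all made up for the purpose:

```python
from math import hypot

# A made-up family of drifting circles, by way of illustration of Example 1.
# S is taken to be just {0, 1}, with 0 standing in for the null value φ.
# Each circle is (centre x, centre y, drift in x, drift in y, radius).
CIRCLES = [(-20.0, 10.0, 0.5, -0.2, 5.0),
           ( 15.0, -5.0, -0.3, 0.1, 8.0)]
LINE_WIDTH = 0.2   # how close to the circumference counts as on the circle

def W1(x: float, y: float, t: float) -> int:
    """Return 1 if the point (x, y) lies on one of the circles at time t, else 0."""
    for cx, cy, ux, uy, r in CIRCLES:
        dx = x - (cx + ux * t)           # the centre drifts at constant velocity
        dy = y - (cy + uy * t)
        if abs(hypot(dx, dy) - r) < LINE_WIDTH:
            return 1
    return 0
```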
A digression on equations
Figure 3 shows the world W1 from above, as if it had been embedded in a three dimensional world, as it might appear to some all-knowing alien. A showing in what might be described as the declarative mode.
While Figure 4 represents a shift to the discursive mode, with a series of statements, essentially text, one after the other, which collectively describe the world. With text breaking down in turn into a series of symbols. With the difference between declarative and discursive being something that philosophers can get excited about. See, for example, the once popular book at reference 9.
Figure 5
A shift which is exemplified by the scalable vector graphics of reference 3, in which pictures, or at least diagrams, are reduced to a series of statements in the same XML language as is used all over the Internet. With Figure 5 being taken from reference 3.
Statements which may have the advantages of precision and brevity, but which lack the appeal of the picture. From a picture, for example, it is clear when a line crosses a circle. This is not so clear from inspection of the corresponding equations, even though the crossing is implicit in them. Which is not really a problem, as most of us only see those equations when they have been converted, rendered back into a picture.
However, in order to develop equations we need a view of the world to work with; we are unlikely to conjure up equations from the void. We need to capture, to internalise that world somehow or other – using such tools as are available to us. We need something between Figure 3 and Figure 4. To which end we now move on to a geometry preserving approximation of W1.
Note that equations like those of Figure 4, conic equations in particular, do a very good job of approximating the movements of objects in free fall, say a planet in the solar system or a shell from a mortar. Situations where the ideal points, lines and curves of Euclid are a good description of the real world.
Approximating our two dimensional world
We now want to approximate our two dimensional world at any point in time by an array of data on a computer. Or in a brain, for that matter. That is to say a square array of elements, W2, very much like an Excel worksheet, restricting our view to N rows and N columns, where N is some positive integer, that is to say positive whole number. On a modern desktop computer, N might easily be 10,000, giving us 100 million elements altogether.
We call the elements pixels and each pixel can take one of 256 values, that is to say it can be represented by an eight bit byte. We call this set of values P. Once again, we have a null value, φ.
The meaning of the values that these pixels take depends on what we have in S and what of that we want to capture in W2. They might be 256 different colours, in the ordinary way of pixels. Or they might be the 256 numbers from zero to 255. We might choose, for all kinds of reasons, to classify our numbers into ten bands or bins, coded from one to ten, and not to use the other 246 codes – which takes us back to something like Figure 2. Or we might restrict ourselves to just two values, zero and one. While in a post to come, we will be dividing pixels into two camps, line pixels and space pixels, a division which will take one of our eight bits.
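By way of a sketch – with N, the choice of zero for the null code and the band boundaries all made up for the purpose, and numpy assumed to be available – the array and one possible banding rule might look like this:

```python
import numpy as np

N = 10_000      # rows and columns, as suggested above
NULL = 0        # the code standing in for the null value φ

# W2 as a square array of eight bit pixels, initially all null.
W2 = np.zeros((N, N), dtype=np.uint8)

def to_band(raw: int) -> int:
    """Classify a raw value in 0..255 into one of ten bands coded 1..10,
    leaving the remaining codes unused."""
    if raw == NULL:
        return NULL
    return 1 + (raw * 10) // 256    # 1 for the smallest values, 10 for the largest
```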
Or looking ahead to using the objects of one layer to label those of another, or perhaps back to reference 8, the 256 codes might be used to code for some character set, the sort of thing you see when you fire up the Microsoft accessory called Character Map.
Leaving this last possibility aside, we start our map from W1 to W2 by dividing our large square into N² equal voxels, not to be confused with the small cubes or bricks of space used by things like fMRI scanners. We then map these voxels onto our square array of pixels, with the value of each pixel being something to do with the values in S, possibly a large number of values, that W1 takes over the corresponding voxel – an important and considerable simplification. We suppose a simple map for the moment, with a voxel on the square mapping onto the corresponding pixel in the array. A mapping in which distances between points on the square approximate to some constant times the distances between corresponding pixels on the array, these last defined conventionally in terms of the array coordinates. Note that we do not recognise points within a pixel in the way that we recognise points within a voxel.
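A sketch of this simple, uniform map – assuming the big square runs from −B to +B in both directions and that rows and columns are counted from zero:

```python
def to_pixel(x: float, y: float, B: float, N: int) -> tuple[int, int]:
    """Map a point (x, y) of the big square [-B, B] x [-B, B] onto the
    (row, column) of the pixel whose voxel contains it."""
    side = 2.0 * B / N                   # each voxel is a small square of side 2B/N
    row = int((y + B) // side)
    col = int((x + B) // side)
    # Points lying exactly on the far edges fall into the last row or column.
    return min(row, N - 1), min(col, N - 1)
```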
We might also decide to update this array in time every so many milliseconds, say κ milliseconds. Perhaps 100 milliseconds, that is to say a tenth of a second.
In this way, the continuous, real world of W1, at least that part of the real world which is not null, has been reduced to the discrete, finite world of W2.
Once again, we have lots of rules about exactly how W2 can behave. Rules which in this case we know. We also have history, history of previous images and of how they turned out.
An alternative arrangement would be to update pixels on a continuous basis, as the updates come in, although the seeming need to take snapshots for the purpose of analysis might mean that, despite appearances, we have not added much. In any event we put this possibility aside.
Our problem is to get from W1, for some point in time, to W2. To map W1 onto W2. The domain of W1 is a perfectly ordinary three dimensional Euclidean space. But we have said nothing about S, beyond it including a special value φ, which we have called the null value. There might be lots of rules and regulations – but we do not know, certainly not at the outset, what they are.
Nevertheless, we devise and then execute some process or experiment, possibly involving elaborate and expensive machinery, perhaps one of the scanners just mentioned, perhaps a radio telescope, which computes a raw value in P for every one of those voxels, a value which is mainly a function of the values in S taken by W1 over that voxel. We can’t presently say much about this process except that a voxel over which W1 is mostly φ is apt to deliver φ for the corresponding pixel.
Note that in example 1 above, integrating W1 over a voxel, assuming W1 to be integrable, is likely to give zero, as W1 is zero almost everywhere. So, in this case, our process is something other than integration or averaging. Perhaps a simple maximum over the voxel would serve?
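A sketch of such a maximum, taken over a small grid of sample points within the voxel rather than by integration – with the time fixed and the number of sample points made up for the purpose:

```python
def pixel_value(W1, row: int, col: int, B: float, N: int,
                t: float = 0.0, samples: int = 4) -> int:
    """Raw value for one pixel: the maximum of W1 over a small grid of
    sample points within the corresponding voxel, at time t."""
    side = 2.0 * B / N
    x0 = -B + col * side                 # bottom left corner of the voxel
    y0 = -B + row * side
    best = 0                             # start from the null value φ
    for i in range(samples):
        for j in range(samples):
            x = x0 + (i + 0.5) * side / samples
            y = y0 + (j + 0.5) * side / samples
            best = max(best, W1(x, y, t))
    return best
```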
Our process, in the interests of time, might also sample W1, with the sample being designed to give reasonable coverage of the pixels of W2 in reasonable time, that is to say something less than the κ milliseconds mentioned above.
Figure 6
Which might give a result, for what is supposed to be a straight line, something like the figure above. Which will be familiar to users of Powerpoint, where lines often have this stepped appearance, particularly when close to the vertical or close to the horizontal. Even when one is looking at rather more pixels, perhaps the million – or 1,000 by 1,000 – of a computer screen.
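The stepped appearance is easy to reproduce. A small sketch – the slope of the line and the size of the array made up for the purpose – which, for each column of a coarse array, marks the single pixel nearest to an ideal straight line:

```python
N = 20
grid = [[0] * N for _ in range(N)]

# An ideal straight line y = 0.2x + 3, nearly horizontal.
for col in range(N):
    row = round(0.2 * col + 3.0)
    grid[row][col] = 1

# Printed out, the line appears as a run of short horizontal steps.
for row in reversed(range(N)):
    print(''.join('#' if cell else '.' for cell in grid[row]))
```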
Figure 7
Worse still, a complex line, perhaps the pattern at the right of the figure above, might be reduced to the rather untidy blob of pixels at the left, in which the centre of the spiral is lost. Here, although not in Figure 6, one would probably do better by taking the average, rather than the maximum. Another approach would be to allow the scale of the W2 array to vary according to W1 circumstances – a solution we do not allow here. We require the map between W1 and W2 to be uniform across the big square.
Having got our raw array, we then apply the rules and the history, in a possibly iterative process combining bottom up from W1 with top down from the rules and history of W2. Which last might include some simplifying equations. Which process might include some tidying up of the noise and error which has crept in. All this is wrapped up in some larger algorithm we call A.
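The shape of such an algorithm – a sketch only, with the two passes left as stubs and the names made up for the purpose – might be something like:

```python
import time

def algorithm_A(sense_bottom_up, apply_rules_top_down, kappa_ms: float = 100.0):
    """One update cycle: a bottom up pass delivering the raw array, followed by
    repeated top down passes applying the rules and history of W2, stopping when
    the update interval of kappa milliseconds has been used up."""
    deadline = time.monotonic() + kappa_ms / 1000.0
    W2 = sense_bottom_up()                  # the raw values, one per pixel
    while time.monotonic() < deadline:
        W2 = apply_rules_top_down(W2)       # tidy noise, impose rules and history
    return W2
```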
Figure 8
At some point, algorithm A says enough, and delivers the array W2. Enough might be the update interval, κ milliseconds, introduced above. Clearly there are trade-offs here between short intervals and long intervals, with short intervals being up to date and long intervals being accurate. One might even allow κ to vary a bit, according to circumstances. From where I associate to the famous uncertainty principle, often glossed by saying that you can’t be accurate about both of a pair of complementary quantities – position and momentum, say – at the same time. See reference 7.
All this being summarised in Figure 8 above; expressed as a shift from black to white.
Reference 7 also talks of the observer effect, which says that you cannot observe something without changing it. And while the present observer might not change W1, the top down part of the process of building W2 is an observer effect of sorts. The expectations and desires of the observer have a sometimes important influence on what gets into W2. Or, as is often said in other contexts, it’s all in the eye of the beholder.
W2 is our model of the world W1. The present point of the whole process being that it is W2 that we have to work with. W2 is something we can get hold of, apply algorithms to, compute with. If we have a good process, the behaviour of W2 will be relevant to our own life and wellbeing; we will be able to predict the behaviour of W2 and we will be able to adjust our own behaviour accordingly. Note that to be relevant, W2 will need to include the self; we need to be in the world for that world to be relevant. So in Figure 3 above, one of the circles or ellipses will be distinguished, will be that self.
While a rather different point of the process might be to try and compute, to try and reverse engineer W1 from W2. To try and deduce what the rules and regulations governing W1 might be.
Different again is the business of converting something like W2 into something more conceptual, sometimes called vectorisation – a process which is under the hood of the layer objects of the LWS-R of reference 2. A process which I had thought that Bing thought was French, going by the images offered for the search key ‘vectorisation’, but this turns out to be a confusion with the English spelling, more usually ‘vectorization’. See also the already mentioned reference 3.
Conclusions
Figure 9
Our model of the world, what we have in our brains, is at some remove from that world. It is, necessarily, a massive simplification, rather as the figure above is a massive simplification of a real cell – as can be readily verified by looking at a real one through a microscope. Much more messy.
But our model is what we can know and, hopefully, what we need to know to get along.
References
Reference 1: https://www.bioexplorer.net/10-levels-biological-organization.html/. Ten levels or layers of biological organisation.
Reference 2: https://psmv4.blogspot.com/2020/09/an-updated-introduction-to-lws-r.html.
Reference 3: https://en.wikipedia.org/wiki/Scalable_Vector_Graphics. Something of a whiff of Powerpoint here. But was Microsoft driving the standard or was the standard driving Microsoft?
Reference 4: http://www.drawbase.com/.
Reference 5: https://psmv2.blogspot.com/2015/08/the-freudians-fight-back.html.
Reference 6: More Is Different – Anderson, P. W. – 1972.
Reference 7: https://en.wikipedia.org/wiki/Uncertainty_principle.
Reference 8: http://psmv3.blogspot.com/2017/04/a-ship-of-line.html.
Reference 9: Philosophy in a New Key: A Study in the Symbolism of Reason, Rite, and Art – Langer, S. K. – 1942.
Group search key: sre.