Saturday, 12 May 2012

Roger Scruton on Algorithms, Data Structures and Mental Attribution



This is a data structure in the form of a two-dimensional array of 24-bit integers, processed by the algorithms in your PC.  Any appearance to the contrary is purely a projection of your own mind.     http://www.flickr.com/photos/90664717@N00/145257237/


In Buddhist philosophy, all functioning phenomena are said to exist in three ways, known as the three modes of existential dependence:

  • Causality
  • Structure
  • Mental Designation ('Imputation') or Meaning

Causal dependency can be modelled as algorithms, and compositional/structural dependency can be modelled as data structures, but where does that leave conceptual dependency?
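To make the first two modes concrete, here is a minimal Python sketch (an illustrative toy, not any particular program) of the picture above as it exists to the computer: a two-dimensional array of 24-bit integers (structure) operated on by a procedure that unpacks each integer into colour channels (causality). Nothing in the data or the algorithm says what the picture is of; that designation is supplied by the mind of the viewer.

```python
# A minimal, hypothetical sketch: an 'image' as pure structure plus procedure.
# The data structure: a 2-D array of 24-bit integers, one per pixel,
# with red, green and blue packed into bits 23-16, 15-8 and 7-0.
image = [
    [0xFF0000, 0x00FF00, 0x0000FF],   # three packed 24-bit pixels
    [0xFFFFFF, 0x808080, 0x000000],
]

# The algorithm: a purely causal transformation of bits into channel values.
def unpack(pixel):
    """Split one 24-bit integer into (red, green, blue) components."""
    return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

for row in image:
    for pixel in row:
        r, g, b = unpack(pixel)
        # Note: nowhere in this description do we say what the picture is 'of'.
        print(f"{r:3d} {g:3d} {b:3d}", end="   ")
    print()
```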

According to Buddhist philosophy, the function of the mind cannot be reduced to physical or quasi-physical processes. 

The mind is clear, formless, and knows its object. Its knowing the object constitutes the conceptual dependency, which is fundamental, axiomatic, and cannot be explained in terms of other phenomena, including algorithms and data structures.

The question that separates the Materialist from the Buddhist is whether there is anything left to explain about reality once algorithms and data structures have been factored out.

The Materialist would answer that algorithms and data structures offer a complete explanation of the universe, without any remainder.  The Buddhist would claim that a third factor, mind, is also required.

Computer algorithms cannot interpret their data
In a recent article, 'Brain Drain', philosopher Roger Scruton has given a vivid illustration of the need for this third aspect of reality - mental imputation or designation - in addition to algorithms and data structures.

"...So just what can be proved about people by the close observation of their brains? We can be conceptualised in two ways: as organisms and as objects of personal interaction. The first way employs the concept ‘human being’, and derives our behaviour from a biological science of man. The second way employs the concept ‘person’, which is not the concept of a natural kind, but of an entity that relates to others in a familiar but complex way that we know intuitively but find hard to describe. Through the concept of the person, and the associated notions of freedom, responsibility, reason for action, right, duty, justice and guilt, we gain the description under which human beings are seen, by those who respond to them as they truly are. When we endeavour to understand persons through the half-formed theories of neuroscience we are tempted to pass over their distinctive features in silence, or else to attribute them to some brain-shaped homunculus inside. For we understand people by facing them, by arguing with them, by understanding their reasons, aspirations and plans. All of that involves another language, and another conceptual scheme, from those deployed in the biological sciences. We do not understand brains by facing them, for they have no face.

We should recognise that not all coherent questions about human nature and conduct are scientific questions, concerning the laws governing cause and effect. Most of our questions about persons and their doings are about interpretation: what did he mean by that? What did her words imply? What is signified by the hand of Michelangelo’s David? Those are real questions, which invite disciplined answers. And there are disciplines that attempt to answer them. The law is one such. It involves making reasoned attributions of liability and responsibility, using methods that are not reducible to any explanatory science, and not replaceable by neuroscience, however many advances that science might make. The invention of ‘neurolaw’ is, it seems to me, profoundly dangerous, since it cannot fail to abolish freedom and accountability — not because those things don’t exist, but because they will never crop up in a brain scan.

Suppose a computer is programmed to ‘read’, as we say, a digitally encoded input, which it translates into pixels, causing it to display the picture of a woman on its screen. In order to describe this process we do not need to refer to the woman in the picture. The entire process can be completely described in terms of the hardware that translates digital data into pixels, and the software, or algorithm, which contains the instructions for doing this. There is neither the need nor the right, in this case, to use concepts like those of seeing, thinking, observing, in describing what the computer is doing; nor do we have either the need or the right to describe the thing observed in the picture, as playing any causal role, or any role at all, in the operation of the computer. Of course, we see the woman in the picture. And to us the picture contains information of quite another kind from that encoded in the digitalised instructions for producing it. It conveys information about a woman and how she looks. To describe this kind of information is impossible without describing the content of certain thoughts — thoughts that arise in people when they look at each other face to face.

But how do we move from the one concept of information to the other? How do we explain the emergence of thoughts about something from processes that reside in the transformation of visually encoded data? Cognitive science doesn’t tell us. And computer models of the brain won’t tell us either. They might show how images get encoded in digitalised format and transmitted in that format by neural pathways to the centre where they are ‘interpreted’. But that centre does not in fact interpret – interpreting is a process that we do, in seeing what is there before us. When it comes to the subtle features of the human condition, to the byways of culpability and the secrets of happiness and grief, we need guidance and study if we are to interpret things correctly. That is what the humanities provide, and that is why, when scholars who purport to practise them, add the prefix ‘neuro’ to their studies, we should expect their researches to be nonsense."
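Scruton's example of the computer that 'reads' an encoded input can be sketched in code. The run-length format below is invented purely for illustration (no real image codec is implied): an encoded stream is expanded into a framebuffer of pixels, and the whole process is exhaustively described in causal and structural terms, with no reference to what the resulting picture shows.

```python
# A toy, invented run-length format for illustration only: the stream is a
# sequence of (count, packed 24-bit colour) pairs. No real codec is implied.
encoded = [(4, 0xF5D0C5), (2, 0x7B1E1E), (4, 0xF5D0C5),
           (3, 0xF5D0C5), (4, 0x7B1E1E), (3, 0xF5D0C5)]

WIDTH = 10

def decode(stream, width):
    """Expand (count, colour) pairs into rows of pixels, the 'framebuffer'."""
    flat = []
    for count, colour in stream:
        flat.extend([colour] * count)
    return [flat[i:i + width] for i in range(0, len(flat), width)]

framebuffer = decode(encoded, WIDTH)

# The complete causal story ends here: bytes in, pixels out.
# Whether those pixels are 'a woman', 'cherries' or nothing at all is not a
# fact about the data or the algorithm; it is an interpretation by a viewer.
for row in framebuffer:
    print(" ".join(f"{pixel:06X}" for pixel in row))
```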


---

Pixel art
Pixel art uses the minimum number of pixels needed to give a recognisable object.  Looked at closely, it appears as an 'abstract art' style set of colour blocks.

Looked at from a distance, cherries appear.   But where does the appearance of the shiny cherries and their stalk originate?   From a few dozen pixels, or from your mind?
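As a toy demonstration (using a made-up bitmap, not the cherry image shown here), the sketch below sets a couple of dozen cells in a grid. The program only fills and prints cells; whatever object appears in the pattern is projected by the viewer's mind.

```python
# A made-up pixel-art bitmap: '#' marks a filled cell, '.' an empty one.
# Only a couple of dozen cells are set, yet viewers tend to 'see' an object.
bitmap = [
    "..##.....##..",
    ".#..#...#..#.",
    "#....#.#....#",
    "#.....#.....#",
    "#...........#",
    ".#.........#.",
    "..#.......#..",
    "...#.....#...",
    "....#...#....",
    ".....#.#.....",
    "......#......",
]

for row in bitmap:
    # Render each filled cell as a solid block, each empty cell as blank space.
    print("".join("██" if cell == "#" else "  " for cell in row))
```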

Cherries and pixels

Pixel art long predates computers, and can be found in counted-stitch embroideries, where the minimum configuration of counted stitches is used to evoke the mind's projection of an object.



Pixel embroidery

- Sean Robsville



Related Posts

Buddhism and Process Philosophy
 
The Church-Turing-Deutsch Principle and Buddhist Philosophy
 
Why Beauty Matters - Roger Scruton
 
Algorithmic compression and the three modes of existence
 
How things exist - according to Buddhism and Science


---

2 comments:

  1. How fun have stumbled upon this today, a day on which I listened to Sam Harris interview Jay Garfield (episode title: Do you really have a self?), worked hours building a convolutional neural network, and reflected on a tribute to Sir Roger Scruton I read yesterday.

  2. “…to have…”
