The Singularity – Background

Continuing our search for the Singularity (see the article below), we turn to John von Neumann.

In 1945 von Neumann wrote a paper on computing called “First Draft of a Report on the EDVAC”, which described a computer architecture in which the data and the program are both stored in the computer’s memory, in the same address space. This architecture is the basis of most modern computer designs, and such computers are known as von Neumann Machines. The design is linear, or sequential: in other words, it is like a queue where you have to wait for the person in front of you to be served before your request can be dealt with. Not exactly efficient. The brain, however, does not work this way. Imagine trying to watch a film in which you had to wait a discrete period of time before you could see the next bit – how would you ever make sense of what you were watching?
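To make that concrete, here is a toy sketch of a stored-program machine in Python. It is my own illustration, not von Neumann’s actual design: the instructions and the data sit in one and the same memory array, and a single loop fetches and executes one instruction at a time.

```python
# A toy stored-program machine: instructions and data share one memory
# array, and execution is strictly one instruction at a time.
memory = [
    ("LOAD", 8),     # 0: load memory[8] into the accumulator
    ("ADD", 9),      # 1: add memory[9] to the accumulator
    ("STORE", 10),   # 2: store the accumulator into memory[10]
    ("PRINT", 10),   # 3: print memory[10]
    ("HALT", None),  # 4: stop
    None, None, None,
    2,               # 8: data
    3,               # 9: data
    0,               # 10: the result goes here
]

accumulator = 0
pc = 0  # program counter: which memory cell to fetch next

while True:
    op, addr = memory[pc]  # fetch the next instruction from memory
    pc += 1
    if op == "LOAD":
        accumulator = memory[addr]
    elif op == "ADD":
        accumulator += memory[addr]
    elif op == "STORE":
        memory[addr] = accumulator
    elif op == "PRINT":
        print(memory[addr])  # prints 5
    elif op == "HALT":
        break
```

Notice that there is only ever one “next thing” happening: the machine steps through memory one instruction at a time, exactly like the queue described above.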

The brain is neither sequential nor linear – it works holistically. Many parts fire at the same time, in many different areas of the brain, all at once. You may perceive thought as linear, but it is not. So, although many great advances have been made in Artificial Intelligence, you cannot trigger a Singularity Event by replicating true thought on a current von Neumann Machine. For that we need to think well outside the box.
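As a rough analogy – my own toy example, not a model of the brain – the sketch below runs four stand-in tasks first one after the other, von Neumann style, and then all at once using Python’s standard concurrent.futures thread pool. The task names are invented purely for illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def recognise(feature: str) -> str:
    """Stand-in for one 'region' working on one piece of the scene."""
    time.sleep(0.5)  # pretend the work takes half a second
    return f"{feature} processed"

features = ["edges", "colour", "motion", "sound"]

# Sequential ("queue") style: each task waits for the one before it.
start = time.perf_counter()
sequential = [recognise(f) for f in features]
print(f"sequential: {time.perf_counter() - start:.2f}s")  # roughly 2.0s

# Parallel style: all four tasks run at once and the results are gathered.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(recognise, features))
print(f"parallel:   {time.perf_counter() - start:.2f}s")  # roughly 0.5s
```

The timings are only illustrative, but the shape of the difference is the point: the sequential version waits for each task to finish before starting the next, while the parallel version kicks everything off together and then gathers the results.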

Image courtesy of: https://giphy.com

2 Replies to “The Singularity – Background”

  1. I understand VNMs have a linear structure, but non-VNMs can have parallel processing – are non-VNMs not a viable prospect for this too?

    1. Hi Boris. I agree that non-VNMs (non von Neumann Machines) have long been in development in the form of parallel processing. Unfortunately, parallel processing is fraught with its own difficulties, mainly around programming and data structures. For every problem, the code has to be broken down into many parts so that each processor can work on its own part, and then all the partial results have to be assimilated to produce the final answer. This can lead to unequal-sized tasks, leaving some processors idle while others are still working, and, if the parts are not coordinated correctly, to errors in program execution and incorrect results. In other words, parallel processing can be faster, but it is easy to get wrong. Here I will quote Wikipedia: “Parallel computer systems have difficulties with caches that may store the same value in more than one location, with the possibility of incorrect program execution.” So, despite its limitations, it has so far been quicker and easier to stick with the current linear model, mainly by increasing clock speeds and shrinking chips – unfortunately there are limits to this, limits which we are reaching now. I will look at how to get round this in the next article. There is a rough sketch of the split-and-reassemble pattern below. Many thanks, Liz.
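To illustrate the split-and-reassemble pattern described in the reply above, here is a minimal sketch – my own example, with invented names such as partial_sum and parallel_sum – using Python’s standard ProcessPoolExecutor. The problem (summing a list of numbers) is broken into chunks, each worker processes one chunk, and the partial results are then assimilated into a single answer.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk: list[int]) -> int:
    """One worker's share of the problem."""
    return sum(chunk)

def parallel_sum(data: list[int], workers: int = 4) -> int:
    # 1. Break the problem into one chunk per worker (roughly equal sizes;
    #    badly unbalanced chunks would leave some workers idle).
    size = -(-len(data) // workers)  # ceiling division
    chunks = [data[i:i + size] for i in range(0, len(data), size)]

    # 2. Each worker processes its chunk independently.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(partial_sum, chunks)

    # 3. Assimilate the partial results into one final answer.
    return sum(partials)

if __name__ == "__main__":
    numbers = list(range(1, 1_000_001))
    print(parallel_sum(numbers))  # 500000500000
```

Even in this tiny example the three phases are visible – split, work in parallel, combine – and getting any one of them wrong gives a wrong answer, which is exactly the difficulty the reply describes.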
