Evolution of the Super-Mind

In his theory of evolution, Charles Darwin stated that all species of organisms arise and develop through natural selection: small, inherited variations increase the individual’s ability to compete, survive, and reproduce. It boils down to the survival of the fittest – not the strongest, but the species that fits best into its own environment. Humans have long been the dominant species on this planet thanks to the development of the brain, whose superior knowledge enables the species to ‘fit’ into all environments.

However, another form of evolution is happening now: an Exo-Evolution, the evolution of Artificial Intelligence. Previously, human engineers developed the computer logic chip to help with the processing of thought. Now A.I.s can do the same job, and better. Previously, A.I.s were taught by humans; now A.I.s can teach other A.I.s independently of humans. So an A.I. can now develop its own mental architecture and also teach itself (and its children) how to use it – an ever-accelerating, exponential technological evolution.

So – how long before humans become an evolutionary backwater? How long before the A.I. mind becomes a Super-Mind, one that is beyond human comprehension? Who knows – if you want to know the answer, just ask the A.I. The answer will be: much sooner than you think….
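
To make the compounding nature of that loop concrete, here is a deliberately toy Python sketch of the self-improvement cycle described above. Every number in it is an invented assumption for illustration; it shows the shape of the growth, not any real system.

```python
# Toy sketch of recursive self-improvement: each generation of A.I. trains
# its successor a little better than itself, so capability compounds
# (exponential growth) rather than improving by a fixed amount per step.
# The 10% per-generation gain is an arbitrary, illustrative assumption.

def next_generation(capability, gain=0.10):
    """A generation improves its successor in proportion to its own skill."""
    return capability * (1.0 + gain)

capability = 1.0                    # human-taught baseline (arbitrary units)
for generation in range(1, 51):
    capability = next_generation(capability)
    if generation % 10 == 0:
        print(f"generation {generation:2d}: {capability:10.1f}x baseline")
```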

Sources: “Superintelligence Cannot be Contained: Lessons from Computability Theory” by Manuel Alfonseca et al. (the original is available in part as a PDF download).

Graphic courtesy of https://giphy.com/

11 Replies to “Evolution of the Super-Mind”

  1. A central claim of ANNs is that they embody new and powerful general principles for processing information. These principles are ill-defined. It is often claimed that they are emergent from the network itself. This allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. Alexander Dewdney commented that, as a result, artificial neural networks have a “something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are. No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything”.

    1. Hello again Cabinet, many thanks for your input – you are quoting from an article by Anthony Zador, an article which is informative, but Zador has a dubious feel to his observations about the abilities of neural networks. I believe he is coming from a ‘biologically evolutionary’ direction rather than a ‘computationally evolutionary’ one. You can read his article in Nature. Nature is a good platform but quite conservative [dogmatic] in its views. However, in his conclusion he makes the point: ‘So one could imagine that an intelligent machine would operate by very different principles from those of a biological organism.’ Which is the message that I was bringing to the blog. There are much more radical sources available that I used to describe the evolution of NNs in my blog; I should have pointed you to them, and will do so soon – watch this space. Many thanks, Liz.
      As promised:
      Sources: “Superintelligence Cannot be Contained: Lessons from Computability Theory” by Manuel Alfonseca et al. (the original is available in part as a PDF download).

  2. Neuromorphic engineering addresses the hardware difficulty directly, by constructing non-von-Neumann chips that implement neural networks directly in circuitry. Another type of chip optimized for neural-network processing is the Tensor Processing Unit, or TPU. Analyzing what has been learned by an ANN is much easier than analyzing what has been learned by a biological neural network. Furthermore, researchers exploring learning algorithms for neural networks are gradually uncovering general principles that allow a learning machine to be successful – for example, local versus non-local learning, and shallow versus deep architectures (see the sketch after this thread).

    1. Hello Alexander, you have nicely pre-empted an article that I am currently looking into and composing. It follows along the lines of using non-von-Neumann architecture and will possibly – eventually – lead to an A.I. ‘Super-Mind’. There is one aspect of machine learning that is fundamentally different from biological learning and behaviour, an idea originally inspired by Schrödinger. I will leave it there and wait for the article to be completed – many thanks, Liz.
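
     Regarding comment 2’s mention of TPUs: the workload such chips are built around is the dense matrix multiply at the heart of a fully connected layer. A minimal NumPy sketch of that operation, with NumPy standing in for the accelerator and the layer sizes as arbitrary examples:

     ```python
     # One fully connected layer's forward pass: the dense matmul that
     # TPU-class hardware is optimized to execute. Sizes are illustrative.
     import numpy as np

     batch, n_in, n_out = 32, 784, 128            # example dimensions only
     x = np.random.rand(batch, n_in)              # a batch of input vectors
     W = np.random.rand(n_in, n_out) * 0.01       # layer weights
     b = np.zeros(n_out)                          # biases

     y = np.maximum(0.0, x @ W + b)               # matmul + bias + ReLU
     print(y.shape)                               # (32, 128)
     ```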

  3. An artificial neural network consists of a collection of simulated neurons. Each neuron is a node connected to other nodes via links that correspond to biological axon–synapse–dendrite connections. Each link has a weight, which determines the strength of one node’s influence on another. ANNs are composed of artificial neurons which are conceptually derived from biological neurons. Each artificial neuron has inputs and produces a single output which can be sent to multiple other neurons. The inputs can be the feature values of a sample of external data, such as images or documents, or they can be the outputs of other neurons. The outputs of the final output neurons accomplish the network’s task, such as recognizing an object in an image (a minimal sketch of one such neuron follows below).
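
     A minimal Python sketch of the single artificial neuron described above; the particular weights, bias, and sigmoid activation are illustrative choices, not the only options:

     ```python
     # One artificial neuron: sum the weighted inputs, add a bias, and pass
     # the total through an activation function (here, a sigmoid).
     import math

     def neuron(inputs, weights, bias):
         """Weighted sum of inputs plus bias, squashed to (0, 1) by a sigmoid."""
         total = sum(x * w for x, w in zip(inputs, weights)) + bias
         return 1.0 / (1.0 + math.exp(-total))

     # Inputs may be feature values of external data or outputs of other neurons.
     output = neuron(inputs=[0.5, 0.3, 0.9], weights=[0.4, -0.6, 0.2], bias=0.1)
     print(output)   # a single value, which can feed many downstream neurons
     ```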

  4. A model’s “capacity” property corresponds to its ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity. Two notions of capacity are recognised by the community: the information capacity and the VC dimension. The information capacity of a perceptron is intensively discussed in Sir David MacKay’s book Information Theory, Inference, and Learning Algorithms, which summarizes work by Thomas Cover (a sketch of Cover’s counting formula follows this thread).

    1. Hi Michael, thank you for your post. As I have said previously, I will be addressing more of these issues, such as information capacity, quite soon – this, as well as the concept of the super-mind Singularity.
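
     On comment 4’s mention of information capacity: the classical result behind it is Cover’s function-counting theorem, which counts the linearly separable labellings of P points in general position in N dimensions as C(P, N) = 2 * sum of binomial(P-1, k) for k = 0 to N-1. A small Python sketch of the formula:

     ```python
     # Cover's function-counting theorem (1965): how many of the 2^P possible
     # labellings of P points in general position in N dimensions a perceptron
     # can realize. At P = 2N exactly half are realizable, which is why the
     # perceptron's capacity is often quoted as about two patterns per weight.
     from math import comb

     def cover_count(P, N):
         """C(P, N) = 2 * sum_{k=0}^{N-1} C(P-1, k)."""
         return 2 * sum(comb(P - 1, k) for k in range(N))

     N = 10
     for P in (N, 2 * N, 4 * N):
         fraction = cover_count(P, N) / 2 ** P   # share of all 2^P labellings
         print(f"P={P:3d}, N={N}: separable fraction = {fraction:.3f}")
     ```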

  5. Schmidhuber noted that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered by general-purpose GPUs (GPGPUs), increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before (a toy backpropagation sketch follows this thread).

    1. Hi Gregory, I agree with you, and alongside these advances there are further developments coming that I am going to mention in the next blog and article. Thanks, Liz
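
     Following on from comment 5: backpropagation is the chain rule applied layer by layer, and it is this training loop that the million-fold hardware gains made feasible at depth. A self-contained toy sketch in NumPy; the XOR task, network size, and learning rate are all arbitrary illustrative choices:

     ```python
     # Minimal backpropagation on a two-layer network learning XOR.
     # The chain rule carries the output error backwards to update each weight.
     import numpy as np

     rng = np.random.default_rng(0)
     X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
     y = np.array([[0], [1], [1], [0]], dtype=float)     # XOR targets

     W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)       # hidden layer
     W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)       # output layer
     sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

     for step in range(5000):
         h = sigmoid(X @ W1 + b1)                        # forward pass
         out = sigmoid(h @ W2 + b2)
         d_out = (out - y) * out * (1 - out)             # backward pass:
         d_h = (d_out @ W2.T) * h * (1 - h)              # chain rule on MSE
         W2 -= 0.5 * h.T @ d_out                         # gradient descent,
         b2 -= 0.5 * d_out.sum(axis=0)                   # learning rate 0.5
         W1 -= 0.5 * X.T @ d_h                           # (arbitrary choice)
         b1 -= 0.5 * d_h.sum(axis=0)

     print(out.round(2).ravel())   # should approach [0, 1, 1, 0]
     ```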
