Now, while I have been investigating this topic – I have come across a movement I did not quite anticipate. I thought that machine learning, neural networks and A.I. would be populated mainly by male influences – hence Deus est Machina – which it is. However, I was pleasantly surprised to find that there is a growing movement amongst women's groups that wish to be included within the data science and A.I. community. This is good: there needs to be a balance between the masculine Phallus Dei and the Feminine Way. If A.I. is to be the projection of the male ego – Mind made in the image of Man – then this needs to be fed, nurtured and cultivated by the Pink Evolution. It is the way. You can read the full article if you Click Here.
Graphic courtesy of : https://giphy.com/
52 Replies to “The Soft Machine – The Cultivation of A.I.”
Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as “cat” or “no cat” and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.
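The "learning by considering examples" idea in Maria's excerpt can be sketched with a single artificial neuron. This is a minimal illustration, not a real image classifier: the two-feature data, cluster positions, labels and learning rate are all invented for the demo.

```python
import numpy as np

# A single artificial "neuron" (a logistic unit) learning from labelled
# examples rather than hand-written rules. The data is a toy stand-in
# for image features ("cat" = 1, "no cat" = 0) -- purely illustrative.
rng = np.random.default_rng(0)

# Two fake feature columns; class-1 points cluster around (2, 2),
# class-0 points around (-2, -2).
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.array([1] * 50 + [0] * 50)

w = np.zeros(2)
b = 0.0
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient-descent training: no rules are programmed in --
# the "knowledge" ends up stored in the connection weights.
for _ in range(200):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

On this well-separated toy data the unit learns the boundary almost perfectly; the point is only that nowhere in the code is a "cat rule" written down.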
Hi Maria, thanks for your comment – I agree, telling the difference between a cat and a dog in a rule-based logic-tree program would be difficult (impossible) – does the image have four legs y/n, a tail y/n, fur, forward-pointing eyes, a round nose, whiskers, ears on top of the head – is it a 'cat'? answer: yes – is it a 'dog'? answer: yes. (How you program for 'has fur?' is beyond me.) Neural networks do not work this way – there are no program rules – only connections between neuron units. It is the 'patterns' of these connections that produce the result – so there is only one logic tree: do the 'neuron patterns' represent cat or dog? How the patterns are constructed is simply the result of 'repeated learning' – and 'learning' is beyond 'programming' – Many thanks, Liz.
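The hand-written rule tree the reply pokes fun at would look something like this (the feature names are the ones from the reply; how you would ever compute "has fur" from pixels is exactly the unprogrammable part):

```python
# The kind of hand-written rule tree the reply describes -- and why it
# breaks down: every boolean "feature" below must itself be computed
# somehow, and extracting them from raw pixels is the part nobody can
# write by hand.
def classify(has_four_legs, has_tail, has_fur, has_whiskers, ears_on_top):
    if has_four_legs and has_tail and has_fur:
        if has_whiskers and ears_on_top:
            return "cat"
        return "dog"
    return "neither"

print(classify(True, True, True, True, True))    # "cat"
print(classify(True, True, True, False, False))  # "dog"
```

A neural network replaces this brittle tree with learned connection weights, as the reply says.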
Alternatively, engineers may look for other types of neural networks with more straightforward and convergent training algorithms. CMAC (cerebellar model articulation controller) is one such kind of neural network. CMAC does not require learning rates or randomized initial weights. The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved. Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.
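A hedged sketch of the CMAC idea Olga mentions, in miniature: a 1-D tile coder where each input activates one cell per overlapping tiling, so an update touches only a handful of weights (cost linear in the number of active units). The tiling counts and the equal-share update here are illustrative choices, not the full controller from the literature.

```python
import numpy as np

# Minimal 1-D CMAC-style tile coder (a sketch, not the complete
# cerebellar model articulation controller).
N_TILINGS = 8
N_CELLS = 16          # cells per tiling over the input range [0, 1)

weights = np.zeros((N_TILINGS, N_CELLS))

def active_cells(x):
    # Each tiling is offset by a fraction of a cell width, so the
    # tilings overlap and generalise between nearby inputs.
    return [int((x + t / (N_TILINGS * N_CELLS)) * N_CELLS) % N_CELLS
            for t in range(N_TILINGS)]

def predict(x):
    return sum(weights[t, c] for t, c in enumerate(active_cells(x)))

def train(x, target):
    # Share the error equally across the active cells -- note there is
    # no tunable learning rate and no random initialisation.
    err = target - predict(x)
    for t, c in enumerate(active_cells(x)):
        weights[t, c] += err / N_TILINGS

# A single update reproduces the trained point exactly, which gives a
# feel for the "converges in one step" claim (for one sample, at least).
train(0.3, 2.0)
print(round(predict(0.3), 6))   # 2.0
```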
Hi Olga, now you are going a little ahead of what I know, so let me absorb what you are saying – so the training algorithm is linear with respect to the number of neuron units – O.K., we got that, and now it goes over my head – I will let Bishop do the rest [ Admin Edit ] 'With no learning rates nor randomized initial weights, the boundaries may not be set – so it would be difficult to see how the training process could converge in one step – AI neural nets tend to work in a linear fashion – each neural unit tends towards a limit – that limit is set by the initial boundaries and is a continuous function of the limit of a (possibly infinite) linear expansion: which may be only perceived as non-linear.' [ Bishop ] – Right, O.K. Yes, well, great, yes – Many thanks, Liz.
In 1994, Andre de Carvalho, together with Mike Fairhurst and David Bisset, published experimental results of a multi-layer boolean neural network, also known as a weightless neural network, composed of a 3-layer self-organising feature extraction neural network module (SOFT) followed by a multi-layer classification neural network module (GSN), which were independently trained. Each layer in the feature extraction module extracted features of growing complexity relative to the previous layer. In 1995, Brendan Frey demonstrated that it was possible to train (over two days) a network containing six fully connected layers and several hundred hidden units using the wake-sleep algorithm, co-developed with Peter Dayan and Hinton.
Hi, many thanks for your comment – I'm afraid we haven't got to multilayer networks in this blog yet, but we are getting there. The wake-sleep algorithm is similar to the one Руслана pointed out in our 'Dreaming in Binary' blog. The study is by Elena Agliari et al., where the researchers allowed a neural network time for learning, then allowed the network time to rest or sleep – and possibly dream – in order to assimilate the awakened learning phase. This too is a most interesting article that you can all read if you Click Here. It's on ResearchGate – many thanks, Liz.
Classic examples include principal components analysis and cluster analysis. Feature learning algorithms, also called representation learning algorithms, often attempt to preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. This technique allows reconstruction of the inputs coming from the unknown data-generating distribution, while not being necessarily faithful to configurations that are implausible under that distribution. This replaces manual feature engineering, and allows a machine to both learn the features and use them to perform a specific task. Feature learning can be either supervised or unsupervised. In supervised feature learning, features are learned using labeled input data. Examples include artificial neural networks, multilayer perceptrons, and supervised dictionary learning. In unsupervised feature learning, features are learned with unlabeled input data. Examples include dictionary learning, independent component analysis, autoencoders, and matrix factorization.
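Principal components analysis, the first classic example above, can be sketched in a few lines of plain numpy: project the data onto its directions of greatest variance before any downstream classification. The synthetic data below (a 2-D latent signal mixed into 5 observed features) is invented purely for the demo; a library such as scikit-learn would normally do this job.

```python
import numpy as np

# PCA as a feature-learning / pre-processing step, done by hand.
rng = np.random.default_rng(1)

# 200 samples, 5 observed features driven by only 2 latent factors.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.05 * rng.normal(size=(200, 5))

# Centre the data, then take eigenvectors of the covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
top2 = eigvecs[:, -2:]                   # two strongest components

features = Xc @ top2                     # the learned 2-D representation
explained = eigvals[-2:].sum() / eigvals.sum()
print(f"variance explained by 2 components: {explained:.3f}")
```

Because the data really only has two underlying directions, two components capture nearly all the variance, illustrating why the projection makes a useful compact feature set.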
Hi lukojl, many thanks for your comment – your comment has many layers within it. Curiously enough, this is a related topic that I am currently addressing in the next section on machine learning. In our previous examples on this site we have explored supervised learning – i.e. where the 'answers' are already known and we teach the neural unit to produce the correct output. However, there are learning methods where the outcome cannot be known – this is used mainly in game theory, where the outcome of a game (like chess) is not known in advance. To handle this situation we need another method of learning – this we are calling heuristic learning. Heuristics apply where no outcome is predictable, so learning needs to be supported by heuristic means. This is not the same as 'supervised' learning but a form of 'structured' learning. As soon as I finish this section I will put a link to Heuristics Here. Many thanks, Liz…
As deep learning moves from the lab into the world, research and experience show that artificial neural networks are vulnerable to hacks and deception. By identifying patterns that these systems use to function, attackers can modify inputs to ANNs in such a way that the ANN finds a match that human observers would not recognize. For example, an attacker can make subtle changes to an image such that the ANN finds a match even though the image looks to a human nothing like the search target. Such a manipulation is termed an “adversarial attack.”
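The flavour of such an attack can be shown in miniature, in the spirit of the fast gradient sign method: nudge each input feature slightly in the direction that most changes the model's score, until the classifier flips its answer. The "model" here is just a fixed linear scorer with invented weights, standing in for a trained ANN.

```python
import numpy as np

# A toy adversarial perturbation against a linear classifier.
w = np.array([1.0, -2.0, 0.5])   # pretend these are trained weights
b = -0.1

def predict(x):
    return 1 if x @ w + b > 0 else 0

x = np.array([0.5, 0.1, 0.2])    # original input, classified as 1

# Move each feature a tiny amount *against* the score gradient
# (the gradient of a linear score w.r.t. x is just w).
eps = 0.2
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # prints: 1 0 -- the class flips
```

No feature moved by more than 0.2, yet the decision changed – the linear analogue of an image that "looks the same" to a human but fools the network.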
Hi Cabinet – you make a very valid point, one that I am going to address in a future post. Security services are becoming more reliant on A.I.-based facial recognition to control anti-social behaviour and criminality. Even without a hacker manipulating data, artificial networks are not totally reliable – they make mistakes. This has led to innocent people being accused of anti-social acts based purely on an identification brought forward by an A.I. machine. Unfortunately, the security services are not only becoming more reliant on these machines but appear to believe their A.I. to be perfect – which it is not. This results in cases where citizens have to prove their innocence beyond any doubt against an indictment based purely on a sporadic, mistaken identification from a machine. Be aware – this will only increase in the future – you can read the article here: New York Times – Many thanks, Liz.
If you want to use the photo it would also be good to check with the artist beforehand in case it is subject to copyright. Best wishes. Aaren Reggis Sela
Hi Aaren, I assume you are referring to the Grace Hopper picture; it came, as do all the photos on the page, from Wikipedia – you can see the full permission details by Clicking Here, and you will see I have provided the full Creative Commons licence details as best I can. Thank you for making your comment and for making sure that I am trying to keep out of trouble.
I am a regular visitor, how are you everybody? This paragraph posted at this site is really fastidious. Cecilia Alfons Simmons
Hi Cecilia, I am really well. I am happy you are a regular visitor; there are many other visitors and we are trying to make the site interesting so that many people come back – thank you, Liz.
Lu et al. proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; if the width is smaller than or equal to the input dimension, then the deep neural network is not a universal approximator. This relates to the optimization concepts of training and testing, connected to fitting and generalization, respectively. More specifically, the probabilistic interpretation considers the activation nonlinearity as a cumulative distribution function.
Hi Mirena, many thanks for your comment. I see you refer to a paper on deep learning by Lu et al., who proved that a deep neural network of sufficient width can approximate any Lebesgue integrable function, and that if the width is smaller than or equal to the input dimension the network is not a universal approximator – this appears to set limits on how efficiently neural networks can really function. It is an interesting paper, which can be read here. Hmm, a bit heavy though. It does bring me closer to another topic that I am about to pursue, which is about a specific limitation of A.I. that may raise a few concerns – Many thanks, Liz.
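The flavour of the width result Mirena raises can be poked at numerically: with input dimension 1, a ReLU layer much wider than the input can fit a smooth target very closely. This sketch fits the output weights in closed form with random hidden weights, so it is an illustration of capacity, not a reproduction of the paper's construction or proof.

```python
import numpy as np

# Fit sin(x) with a width-100 random-feature ReLU layer (input dim 1).
rng = np.random.default_rng(2)

x = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)
target = np.sin(x).ravel()

WIDTH = 100
a = rng.normal(size=(1, WIDTH))           # random input weights
c = rng.uniform(-np.pi, np.pi, WIDTH)     # random biases

H = np.maximum(x @ a - c, 0.0)            # ReLU hidden activations
H = np.hstack([H, np.ones((len(x), 1))])  # plus a bias column

# Solve for the output weights in closed form (tiny ridge for stability).
beta = np.linalg.solve(H.T @ H + 1e-6 * np.eye(WIDTH + 1), H.T @ target)

err = np.max(np.abs(H @ beta - target))
print(f"max approximation error: {err:.4f}")
```

With width far above the input dimension the error is tiny; shrinking the width toward the input dimension makes the fit collapse, which is the qualitative shape of the theorem.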
You made some nice points there. I looked on the internet for the subject matter and found most individuals will agree with your site.
Many thanks for your message I am glad you find the subject informative. I do try to make sure my information is all correct and interesting – thank you liz.
Hello! I could have sworn I’ve been to this blog before, but after browsing through some of the posts I realized it’s new to me. Anyways, I’m definitely happy I found it and I’ll be book-marking and checking back frequently!
Hi ronnie, I am glad you found my site and that you are happy with it. There is lots more to come – hope you found the A.I. games interesting – see if you can beat the computer by Clicking Here. It's good fun, and I personally haven't beaten the computer yet. Many thanks, Liz.
Great blog article. Really thank you! Really Great. Elspeth Kit Hobey
Hi and yes, thank you so much – glad you liked the Soft Machine – I always hope for positive responses from readers – Found me from Yandex – Yandex rules yes. Many thanks – Спасибо – Liz.
Asking questions is actually a good thing if you are not understanding something totally, but this article provides fastidious understanding even so. Thomasa Gregorio Carina
Hi Erotik asking questions is always good the only reply I can give is : Ebba Geno Neva – many thanks – liz.
Hiya very nice blog!! Man .. Excellent .. Wonderful .. Ebba Geno Neva
Hi sikis hope you are liking what I am trying to do. I will have more coming soon keep coming back it will be good – liz. Oh yes Ebba Geno Neva to you too …(Don’t worry most people don’t know what we are on about – just takes a g**gle search to explain it !.)
I love it when individuals come together and share thoughts. Great site, stick with it. Fernanda Cly Latta
Hi porno many thanks nice to hear from you again, I am hoping the site will bring minds together – only when thoughts are brought together do we excel and go beyond our own personal limits – thank you Liz.
Some really quality articles on this internet site, saved to bookmarks. Devi Pietro Heyde
Many thanks – hope you are enjoying the articles – liz.
I have been examining many of your stories and I must say, pretty nice stuff. I will definitely bookmark your blog. Tamar Pepe Lais
Many thanks again izle – we do try to make it interesting -liz
Outstanding post, however I was wondering if you could write a little more on this subject? I’d be very grateful if you could elaborate a little bit further.
Hi and thanks for your comment – Have you looked at the link that I placed in the blog ? This will lead you towards more information on the point I was making. Thanks Liz
Great article. Keep writing this kind of info on your blog.
Thank you Geraldo – that is what I am hoping to do – Liz
Simpler models that use task-specific handcrafted features such as Gabor filters and support vector machines (SVMs) were a popular choice in the 1990s and 2000s, because of artificial neural networks' (ANNs) computational cost and a lack of understanding of how the brain wires its biological networks.
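A Gabor filter of the sort mentioned above is just a sinusoid windowed by a Gaussian, hand-tuned to one orientation and frequency; features like these were computed manually and fed to SVMs before deep networks made such engineering unnecessary. A minimal numpy sketch – the parameter values are illustrative, not from any particular paper:

```python
import numpy as np

# Build a hand-crafted Gabor kernel: a Gaussian envelope multiplied
# by a cosine carrier oriented at angle theta.
def gabor_kernel(size=15, theta=0.0, wavelength=5.0, sigma=3.0):
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the carrier's coordinate axis to the chosen orientation.
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    envelope = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

k = gabor_kernel(theta=np.pi / 4)
print(k.shape)   # (15, 15)
```

Convolving an image with a bank of such kernels (varying theta and wavelength) yields the orientation-tuned feature maps that classifiers of that era relied on.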
I solved the issue by updating to the latest version of the Mendeley Desktop and the MS Word Plugin. Thanks Liz.
Hey there, I am so delighted I found your blog. I really found you by error, while I was researching on Yahoo for something else. Nonetheless, I am here now and would just like to say thanks a lot for a tremendous post and an all-round thrilling blog (I also love the theme/design). I don’t have time to go through it all at the moment but I have book-marked it and also included your RSS feeds, so when I have time I will be back to read much more. Please do keep up the excellent work.
Feel free to visit my blog post – CBD for Dogs
Hello, and many thanks for your nice comments – my Spam Bot took your comment out, however I have retrieved it and put it up as a post. I have also replaced the link that the Spam Bot removed, as I am in favour of your blog subject. Please remember, multiple similar posts from different IP addresses will not get past the Spam Bot. However, I am intrigued at what you were searching for that led you to this site. Machine intelligence = CBD (cannabidiol) for dogs? Most strange, but quite an interesting pairing. Thank you, Liz 🙂
Hello there. I found your weblog through msn. This is an extremely neatly written article.
I’ll make sure to bookmark it and come back to read extra of your useful information.
Thank you for the post. I’ll definitely return.
Hi Carmel, many thanks for your kind comments. Liked your article link in the LA Times too. Many thanks, Liz.
Hmm is anyone else experiencing problems with the images on this blog loading?
I’m trying to figure out if it’s a problem on my end or if it’s the blog.
Any feed-back would be greatly appreciated.
Hi delta 8, thanks for your input – yes, I know some of the images load slowly. That's because I am running a small-capacity server that is fast but small, so most of the images are stored off-site and there will be an initial lag until the browser's cache catches up – once the cache has the image, the next time it loads fairly quickly. I have checked the pictures and they all load OK in my browser – the slowest is the Bletchley Park picture by Alex Motoc, but that picture is rather large and I can't change it because it is used by permission 'as is'. Thanks, Liz.
What I don’t realize is how you are not actually a lot more well-liked than you are now. You are so intelligent. You know so much about this subject that you made me consider it from so many various angles. It’s like men and women don’t seem to be interested unless it’s something to do with Lady Gaga! Your own stuff is excellent. Always take care of it!
Thank you Karma – It is also stunning to me why I am not overrun with followers but that is the nature of Blogs. I think both men and women should be equally interested in computer stuff but I am waving the ‘gaga flag’ as many more women feel excluded from AI than men do. Thanks Liz.
Hi there! Do you use Twitter? I’d like to follow you if that would be okay.
I’m definitely enjoying your blog and look forward to new posts.
Hi Jeanette, oh dear, yes I am on Twitter, but my twits are infrequent and really mindless rubbish – you are welcome to look me up on Twitter @lizziegray_net
Blog sites of this nature are the type I like bookmarking as well as going to on a regular basis.
Hi CB Thanks for your message. I hope you learn a little by coming back – thanks Liz.
I was wondering if you ever thought of changing the layout of your site? It’s very well written; I love what you’ve got to say. But maybe you could add a little more in the way of content so people could connect with it better. You’ve got an awful lot of text for only having 1 or 2 images. Maybe you could space it out better?
Hi Lourde, many thanks for your comment – I always appreciate feedback. I haven't really thought much about changing the layout of the site, as I thought it was a nice simple layout. I agree that I do tend to have a lot of text with one picture, but it's a bit like a newspaper, where an article describes a situation (such as the current petrol shortage) and the picture (of an empty forecourt) is only needed once to illustrate the article. However, I will bear this in mind, and my latest article Listed Here does have a few more pictures in it. Many thanks, Liz.