In order to control anti-social behaviour and criminality, security services are becoming more and more reliant on A.I.-based facial recognition. Unfortunately, artificial neural networks are not totally reliable: they make mistakes. They are programmed by humans and, yes, they make mistakes. This situation has now led to innocent people being accused of anti-social acts based purely on an identification brought forward by a (stupid) A.I. machine. Unfortunately, the security services are not only becoming ever more reliant on these machines but appear to believe their A.I. machines to be perfect, which they are not. The result is cases where citizens have to prove their innocence beyond any doubt against an indictment based purely on a sporadic, mistaken identification from a machine. Be aware: this will only increase in the future. You can read the whole sorrowful article here: New York Times. In the very near future, cover yourself. For every breath you take, get an alibi, or you too will be ‘Framed’ 🙂
This explains how you can master a habit such as reading, writing, eating healthily, meditating or exercising. After much practice and repetition, your neural network has the habit in place. Therefore, when you feel as if you are resistant to change, it is not that you are a weak individual; the strength of your neural network just makes you feel as if you cannot change. However, you can change.
Hello Cabinet, nice to see you again. You are quite correct: neural networks learn purely by repetition, and this is how we humans also learn. We learn bad habits, BUT we can change them. We change them by learning GOOD habits: we teach our own neural networks to replace those old bad habits with new good ones. How? With practice and repetition. We learn how to re-learn. Learning is how we become stronger in this moment: with practice, and with good, directed, determined, positive repetition, we can become stronger now than we were when we were young. There is a little sketch of this idea in code below. So, Cabinet, thank you for your interesting comment – Liz.
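A little P.S. for anyone who likes to see the idea actually running. Here is a minimal sketch in plain Python (every name in it is my own invention, not from any particular library) of a single artificial neuron that learns a ‘habit’ by sheer repetition and is then re-trained, by the very same repetition, to a new response:

```python
# A single artificial neuron learning by repetition (the delta rule).
# Its "habit" is simply the response it has been trained to give.

def train(weight, bias, stimulus, target, repetitions, rate=0.1):
    """Nudge the weight and bias toward the target response, over and over."""
    for _ in range(repetitions):
        response = weight * stimulus + bias   # the neuron's current habit
        error = target - response             # how far from the desired habit
        weight += rate * error * stimulus     # a small correction...
        bias += rate * error                  # ...repeated many, many times
    return weight, bias

w, b = 0.0, 0.0
w, b = train(w, b, stimulus=1.0, target=1.0, repetitions=200)   # learn the old habit
print(f"old habit response: {w + b:.2f}")                       # ~ 1.00

w, b = train(w, b, stimulus=1.0, target=-1.0, repetitions=200)  # same cue, new habit
print(f"new habit response: {w + b:.2f}")                       # ~ -1.00
```

Notice that the same repetition which built the old response builds the new one; nothing about the neuron itself had to change.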
Ralph Waldo Emerson said: “What lies behind us and what lies before us are tiny matters compared to what lies within us.” When your emotions and thoughts are habitual, a neural network is formed that regulates your mindset. It literally keeps you in place. The habit becomes effortless over time.
Hi Andrew, Emerson is correct: what lies within us is great indeed. Neural networks within us can form from the habitual sequences we learn as we grow, and these can regulate us towards normality. However, some networks are also formed before we are born. These embryonic networks are the ones that create our capacity to learn and, eventually, are fundamental in the formation of our learning habits. You provide many interesting thoughts. Many thanks – Liz.
Carol Dweck writes: “For twenty years, my research has shown that the view you adopt for yourself profoundly affects the way you lead your life.” Think about it: when you get up in the morning and you’re in a bad mood, or worried about something going on in your life, or feeling overwhelmed by work, that translates to your behavior and overall performance.
Hi Tatyana, I am in agreement with you; what you say has great merit. Sometimes it is not just the view we adopt for ourselves that affects our lives: sometimes it is the views imposed upon us by parents, teachers and society that also affect our lives and our reactions to life. Many times the view of ourselves that we are taught (and learn) to adopt, together with the expectations of others, can also cause distress in our lives. Many thanks – Liz.
Mindsets aren’t just any beliefs. They are beliefs that orient our reactions and tendencies, and they serve a number of cognitive functions. They let us frame situations: they direct our attention to the most important cues, so that we’re not overwhelmed with information. They suggest sensible goals so that we know what we should be trying to achieve. They prime us with reasonable courses of action so that we don’t have to puzzle out what to do. When our mindsets become habitual, they define who we are and who we can become. The mindsets you have developed over time have serious implications for how you live your life. If you are constantly thinking about everything wrong with your life, you are more likely to be stressed than people who choose to focus on the bright side of life.
Hi Corona, thank you. Yes, mindsets are not just any beliefs: they are specific belief functions that offer a limited view of the world, and these functions do not always protect the mind from information. The world is much bigger than we can perceive. Thus, as you say, we produce habits, learned responses that help our perceptions: habits that seek to make the world a much simpler place than it is, and that enable us to understand the world in our own terms. Unfortunately, these simple habits can sometimes give us a negative view of life. Fortunately, these mindset habits are learned (or taught to us by others), and so WE can replace a negatively learned habit by re-programming our own mind with a positive, self-realised habit and mindset: one that is good for us, one we can teach ourselves, and one that lets us see the bright side of life which, despite the times we are in, is always there. Seek and you shall find. Many thanks – Liz.
Deep neural networks (DNN) are increasingly being accelerated on application-specific hardware such as the Google TPU, designed especially for deep learning. Timing speculation is a promising approach to further increase the energy efficiency of DNN accelerators. Architectural exploration for timing speculation requires detailed gate-level timing simulations that can be time-consuming for large DNNs, which execute millions of multiply-and-accumulate (MAC) operations. In this paper we propose FATE, a new methodology for fast and accurate timing simulations of DNN accelerators like the Google TPU. FATE proposes two novel ideas: (i) DelayNet, a DNN-based timing model for MAC units; and (ii) a statistical sampling methodology that reduces the number of MAC operations for which timing simulations are performed. We show that FATE results in between 8X and 58X speed-up in timing simulations, while introducing less than 2% error in classification accuracy estimates. We demonstrate the use of FATE by comparing a conventional DNN accelerator that uses 2’s complement (2C) arithmetic with one that uses signed magnitude representation (SMR). We show that the SMR implementation provides 18% more energy savings for the same classification accuracy than 2C, a result that may be of independent interest. A machine learning (ML) design framework is also proposed for adaptively adjusting clock frequency based on the propagation delay of individual instructions. A random forest model is trained to classify propagation delays in real time, using the current operation type, current operands, and computation history as ML features. The trained model is implemented in Verilog as an additional pipeline stage within a baseline processor. The modified system is experimentally tested at the gate level in 45 nm CMOS technology, exhibiting a speedup of 70% and an energy reduction of 30% with coarse-grained ML classification. A speedup of 89% is demonstrated at finer granularities, with a 15.5% reduction in energy consumption.
Hi Watch Stand, yes, you are quoting from an excellent article by Jeff Zhang. The article is hosted on the well-respected ResearchGate website. It is a little more advanced than my simple little blog, but we will get there eventually. If anyone wants to read the full article, it can be found by Clicking Here. Many thanks for your informative comment – Liz.
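P.S. For the curious, the statistical-sampling idea in the paper can be shown with a toy example. This is my own toy code, not the authors’ tool, and every number in it (the made-up delay model, the 0.9 ns clock, the 1% sample) is invented purely for illustration. The point is that, instead of running a slow gate-level timing simulation for all one million MAC operations, we simulate a small random sample and estimate the timing-violation rate statistically:

```python
# Toy sketch of sampling-based timing estimation (illustrative only).
import math
import random

def slow_timing_sim(a, b):
    """Stand-in for a gate-level simulation: pretend that products with
    more 1-bits exercise longer carry chains and settle more slowly."""
    return 0.5 + 0.05 * bin(abs(a * b)).count("1")  # delay in ns (made up)

random.seed(1)
mac_ops = [(random.randint(-128, 127), random.randint(-128, 127))
           for _ in range(1_000_000)]
clock_period = 0.9                       # ns: a speculative, over-clocked period

sample = random.sample(mac_ops, 10_000)  # simulate 1% instead of 100%
violations = sum(slow_timing_sim(a, b) > clock_period for a, b in sample)
p = violations / len(sample)
stderr = math.sqrt(p * (1 - p) / len(sample))   # binomial standard error
print(f"estimated timing-violation rate: {p:.2%} ± {1.96 * stderr:.2%}")
```

Because the violation rate is estimated from a sample rather than from every single MAC operation, the simulation budget drops by roughly the sampling factor, which is the spirit of the speed-ups the paper reports.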
In recent years, neural network accelerators have been shown to achieve both high energy efficiency and high performance for a broad application scope within the important category of recognition and mining applications. Still, both the energy efficiency and performance of such accelerators remain limited by memory accesses. In this paper, we focus on image applications, arguably the most important category among recognition and mining applications. The state-of-the-art neural networks for these applications are Convolutional Neural Networks (CNN), and they have an important property: weights are shared among many neurons, considerably reducing the neural network memory footprint. This property allows a CNN to be mapped entirely within an SRAM, eliminating all DRAM accesses for weights. By further hoisting this accelerator next to the image sensor, it is possible to eliminate all remaining DRAM accesses, i.e., for inputs and outputs. In this paper, we propose such a CNN accelerator, placed next to a CMOS or CCD sensor. The absence of DRAM accesses, combined with a careful exploitation of the specific data access patterns within CNNs, allows us to design an accelerator which is 60× more energy efficient than the previous state-of-the-art neural network accelerator. We present a full design down to the layout at 65 nm, with a modest footprint of 4.86 mm² and consuming only 320 mW, but still about 30× faster than high-end GPUs.
Thank you Pauline, you are quoting from one of my favourite places, ResearchGate. It is a nice article, a little beyond what we are doing right now, but most interesting. I believe the ‘mining’ applications in the article relate to live facial recognition and to facial / social data mining: indeed, live A.I.-based CCTV data mining. Most interesting; time to get our masks on. If you would like to read the full article you can Click Here. A most impressive find, and thank you – Liz.
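A quick P.S. on why the weight sharing mentioned in the abstract matters so much. Here is a back-of-the-envelope sketch; the layer sizes are numbers I picked for illustration, not figures from the paper:

```python
# Why weight sharing lets a CNN fit in on-chip SRAM: one small filter is
# reused at every image position, so its weights are stored only once.

def conv_weights(in_ch, out_ch, k):
    """Weights in one convolutional layer: a k x k filter per
    (input channel, output channel) pair, shared across all pixels."""
    return in_ch * out_ch * k * k

def dense_weights(in_ch, out_ch, h, w):
    """Weights if every pixel position had its own private connections."""
    return (in_ch * h * w) * (out_ch * h * w)

h = w = 32                                       # feature-map size
shared = conv_weights(in_ch=16, out_ch=16, k=3)
unshared = dense_weights(in_ch=16, out_ch=16, h=h, w=w)

print(f"with sharing:    {shared:,} weights "
      f"({shared * 2 / 1024:.1f} KiB at 16 bits each)")
print(f"without sharing: {unshared:,} weights "
      f"({unshared * 2 / 2**30:.2f} GiB at 16 bits each)")
```

A few kilobytes sit comfortably in SRAM right next to the sensor, whereas half a gigabyte would have to live in DRAM, and every DRAM access costs far more energy than the arithmetic it feeds. That is the whole trick of the paper in one comparison.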
A mindset has been defined as “a mental frame or lens that selectively organizes and encodes information, thereby orienting an individual toward a unique way of understanding an experience and guiding one toward corresponding actions and responses”. Your mindsets (thoughts, beliefs, and expectations) are the lenses through which you perceive the world, and these lenses affect how you live and the choices you make every day.
Hi Jeanne, many thanks for your comment. I agree with you; we need to keep those lenses as clear as we can make them. Thanks – Liz.
Anybody home? 🙂
XEvil 4.0: revolution in automatic CAPTCHA solution
XEvil.Net
Hi, yes, we are all here. XEvil is a very good CAPTCHA solver; we had to dig your comment out of the spam bin 🙂
Have you ever thought about publishing an ebook or guest authoring on other sites? I have a blog centered on the same ideas you discuss and would really like to have you share some stories/information. I know my audience would enjoy your work. If you’re even remotely interested, feel free to shoot me an e-mail. Relaxation
Hi Relaxation, it’s very nice of you to comment. I am currently doing several projects that are taking up most of my time; however, as soon as I get some free space I will endeavour to contact you. Thanks – Liz.