—Andrew Ng (via re-workblog)
Scientists in Germany are using artificial nerve cells to classify different types of data. These silicon ‘neurons’ could recognize handwritten numbers, or distinguish plant species based on their flowers.
An excerpt from Daniel Dennett’s Point of Inquiry interview “Tools for Thinking”
Very relevant: "Take…one out of every million pixels…I can [use the D-Wave to] reconstruct the original object with near-perfect fidelity… This…doesn't work with random objects. If I were to take a completely random image, this will fail. It works somehow because the objects that we care about in video, or pictures, or text, or whatever - they have structure in them and it's somehow tied to the fact that we wrote them down at all… So there's something about the way that we interact with the world that makes it so that the things we care about, we write about, we talk about, these are all compressible in the sense that they don't have a lot of information content in them." —Geordie Rose
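Rose's D-Wave pipeline isn't public, but the idea he describes is essentially compressed sensing: a structured (sparse) signal can be reconstructed from far fewer measurements than its length, while a truly random signal cannot. A minimal sketch in plain NumPy, using a generic recovery algorithm (orthogonal matching pursuit) rather than anything D-Wave-specific; all sizes here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200   # signal length
k = 4     # number of nonzero entries -- the "structure"
m = 60    # number of measurements, far fewer than n

# A k-sparse signal: compressible because almost all entries are zero.
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.normal(size=k)

# Each measurement is a random projection of the whole signal.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual = y.copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        idx.append(j)
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print("reconstruction error:", np.linalg.norm(x_hat - x))
```

On a sparse signal like this, recovery is typically exact up to floating-point error; replace `x` with dense random noise and the same number of measurements is hopeless, which is Rose's point about random images.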
'A "deep learning" computer, fed countless photos so that it could recognize faces in them and sort them into categories, began on its own to create an additional, unplanned category: cat pictures. The Google employees are puzzled as to how their computer system came up with this idea; they did not explicitly program it to do so.'
Delve deeper into deep learning with Expect Labs’ Simon Handley, as he takes us through the inner workings of one fascinating subfield of machine learning.
Pair with this previous video that discusses recent innovations in the field.
Is “Deep Learning” a Revolution in Artificial Intelligence?
Can a new technique known as deep learning revolutionize artificial intelligence as the New York Times suggests?
The technology on which the Times focusses, deep learning, has its roots in a tradition of “neural networks” that goes back to the late nineteen-fifties. At that time, Frank Rosenblatt attempted to build a kind of mechanical brain called the Perceptron, which was billed as “a machine which senses, recognizes, remembers, and responds like the human mind.” The system was capable of categorizing (within certain limits) some basic shapes like triangles and squares. Crowds were amazed by its potential, and even The New Yorker was taken in, suggesting that this “remarkable machine…[was] capable of what amounts to thought.”
But the buzz eventually fizzled; a critical book written in 1969 by Marvin Minsky and his collaborator Seymour Papert showed that Rosenblatt’s original system was painfully limited, literally blind to some simple logical functions like “exclusive-or” (as in, you can have the cake or the pie, but not both). What had become known as the field of “neural networks” all but disappeared.
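Minsky and Papert's point is easy to check directly: the perceptron learning rule converges on any linearly separable function, but no single linear threshold unit can compute XOR, so training never reaches perfect accuracy on it. A small NumPy sketch (the epoch count and zero initialization are arbitrary choices, not anything from Rosenblatt's hardware):

```python
import numpy as np

# The four Boolean inputs, with AND (separable) and XOR (not separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
AND = np.array([0, 0, 0, 1])
XOR = np.array([0, 1, 1, 0])

def train_perceptron(X, y, epochs=100):
    """Classic perceptron rule: nudge weights toward misclassified points."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = int(w @ xi + b > 0)
            w = w + (yi - pred) * xi
            b = b + (yi - pred)
    return w, b

def accuracy(w, b, X, y):
    return np.mean((X @ w + b > 0).astype(int) == y)

w, b = train_perceptron(X, AND)
print("AND accuracy:", accuracy(w, b, X, AND))   # converges: 1.0

w, b = train_perceptron(X, XOR)
print("XOR accuracy:", accuracy(w, b, X, XOR))   # can never reach 1.0
```

Adding a hidden layer fixes this, which is exactly the move that, decades later, grew into the deep networks the article discusses.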
Battling fiercely to acquire deep learning talent are Microsoft, Facebook, and Google—which just spent up to half a billion dollars to buy DeepMind.
A paper in the journal <em>Science</em> explores why Google’s flu tracker overestimated the number of cases
(Phys.org) — Scientific uncertainty is a ‘monster’ that prevents understanding and delays mitigative action in response to climate change, according to The University of Western Australia’s Winthrop Professor Stephan Lewandowsky and international colleagues, who suggest that uncertainty should make …