Not all ducks are the same. Some like shallow water for dabbling, others like the deep stuff for diving, and some like something in-between. So how do you manage a wetland wildlife refuge to mainta…
There are more ways than ever to measure traffic, readership, attention and engagement with our content. But all that means is there are even more things to distract us from the important questions about who we are trying to reach and how.
The primary author of the celebrated Bitcoin paper, and therefore the probable creator of Bitcoin, is most likely Nick Szabo, a blogger and former George Washington University law professor, according to students and researchers at Aston University’s Centre for Forensic Linguistics in the UK.
—Andrew Ng (via re-workblog)
Scientists in Germany are using artificial nerve cells to classify different types of data. These silicon ‘neurons’ could recognize handwritten numbers, or distinguish plant species based on their flowers.
An excerpt from Daniel Dennett’s Point of Inquiry interview “Tools for Thinking”
Very relevant: Take…one out of every million pixels…I can [use the D-Wave to] reconstruct the original object with near-perfect fidelity… This…doesn’t work with random objects. If I were to take a completely random image this will fail. It works somehow because the objects that we care about in video, or pictures, or text, or whatever - they have structure in them and it’s somehow tied to the fact that we wrote them down at all… So there’s something about the way that we interact with the world that makes it so that the things we care about, we write about, we talk about, these are all compressible in the sense that they don’t have a lot of information content in them.—Geordie Rose
'A “deep learning” computer, fed countless photos in order to recognize faces and sort them into categories, began on its own to create an additional, unplanned category: cat pictures. The Google employees are still puzzling over how their computer system came up with this idea; they did not explicitly program it to do so.'
Delve deeper into deep learning with Expect Labs’ Simon Handley, as he takes us through the inner workings of one fascinating subfield of machine learning.
Pair with this previous video that discusses recent innovations in the field.
Is “Deep Learning” a Revolution in Artificial Intelligence?
Can a new technique known as deep learning revolutionize artificial intelligence as the New York Times suggests?
The technology on which the Times focusses, deep learning, has its roots in a tradition of “neural networks” that goes back to the late nineteen-fifties. At that time, Frank Rosenblatt attempted to build a kind of mechanical brain called the Perceptron, which was billed as “a machine which senses, recognizes, remembers, and responds like the human mind.” The system was capable of categorizing (within certain limits) some basic shapes like triangles and squares. Crowds were amazed by its potential, and even The New Yorker was taken in, suggesting that this “remarkable machine…[was] capable of what amounts to thought.”
But the buzz eventually fizzled; a critical book written in 1969 by Marvin Minsky and his collaborator Seymour Papert showed that Rosenblatt’s original system was painfully limited, literally blind to some simple logical functions like “exclusive-or” (as in, you can have the cake or the pie, but not both). What had become known as the field of “neural networks” all but disappeared.
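Minsky and Papert’s point is easy to see in code. A minimal sketch (the function and variable names here are illustrative, not from any particular source): a single-layer perceptron trained with the classic update rule converges on a linearly separable function like AND, but no setting of its weights can ever get XOR right, so training stalls.

```python
def train_perceptron(samples, epochs=100, lr=0.1):
    """Train weights [w1, w2, bias] with the classic perceptron rule."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

def predict(w, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [((x1, x2), x1 & x2) for x1, x2 in inputs]
XOR = [((x1, x2), x1 ^ x2) for x1, x2 in inputs]

w_and = train_perceptron(AND)
w_xor = train_perceptron(XOR)

and_correct = sum(predict(w_and, *x) == y for x, y in AND)
xor_correct = sum(predict(w_xor, *x) == y for x, y in XOR)
print(f"AND: {and_correct}/4 correct")  # converges: AND is linearly separable
print(f"XOR: {xor_correct}/4 correct")  # never reaches 4/4: XOR is not
```

The geometric intuition: a perceptron draws a single straight line through the input space, and no line separates XOR’s positive cases (0,1) and (1,0) from its negative ones (0,0) and (1,1). That is exactly the limitation Minsky and Papert proved, and multi-layer networks — the ancestors of today’s deep learning — were the eventual answer to it.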