Calculating risk
With Deep Learning, you just give the system a lot of data ‘so it can discover by itself what some of the concepts in the world are.’
Andrew Ng (via re-workblog)

slartibartfastibast:

An excerpt from Daniel Dennett’s Point of Inquiry interview “Tools for Thinking”

Very relevant:

Take…one out of every million pixels…I can [use the D-Wave to] reconstruct the original object with near-perfect fidelity… This…doesn’t work with random objects. If I were to take a completely random image this will fail. It works somehow because the objects that we care about in video, or pictures, or text, or whatever - they have structure in them and it’s somehow tied to the fact that we wrote them down at all… So there’s something about the way that we interact with the world that makes it so that the things we care about, we write about, we talk about, these are all compressible in the sense that they don’t have a lot of information content in them.
Geordie Rose
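Rose's observation is essentially the premise of compressed sensing: a sparse, low-information signal can be recovered from far fewer measurements than its length, while a truly random signal cannot. Below is a minimal NumPy sketch of that setup, solved with plain iterative soft-thresholding (ISTA) rather than any D-Wave hardware; the problem sizes and the regularization weight `lam` are illustrative choices, not anything from the interview.

```python
import numpy as np

rng = np.random.default_rng(0)

# A length-200 signal with only 5 nonzero entries (the "structure"),
# observed through just 60 random linear measurements.
n, k, m = 200, 5, 60
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x                                     # m << n observations

# ISTA: minimize 0.5*||A z - y||^2 + lam*||z||_1 by gradient steps
# followed by soft-thresholding, which drives most entries to zero.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / largest singular value^2
z = np.zeros(n)
for _ in range(2000):
    z = z - step * (A.T @ (A @ z - y))
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

rel_error = np.linalg.norm(z - x) / np.linalg.norm(x)
print(rel_error)  # small: the sparse signal is recovered from 60 of 200 samples
```

With a dense (non-sparse) `x`, the same 60 measurements are not enough and the reconstruction fails badly, which is exactly the random-image caveat Rose makes.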

garp:

'A "deep learning" computer that had been fed countless photos so it could recognize faces and sort them into categories began, entirely on its own, to create an additional, unplanned category: cat pictures. The Google employees are still puzzling over how their system came up with the idea; they never explicitly programmed it to do so.'

How Does Deep Learning Work?

expectlabs:

Delve deeper into deep learning with Expect Labs’ Simon Handley, as he takes us through the inner workings of one fascinating subfield of machine learning.

Pair with this previous video that discusses recent innovations in the field.


neurosciencestuff:

Is “Deep Learning” a Revolution in Artificial Intelligence?
Can a new technique known as deep learning revolutionize artificial intelligence as the New York Times suggests?
The technology on which the Times focusses, deep learning, has its roots in a tradition of “neural networks” that goes back to the late nineteen-fifties. At that time, Frank Rosenblatt attempted to build a kind of mechanical brain called the Perceptron, which was billed as “a machine which senses, recognizes, remembers, and responds like the human mind.” The system was capable of categorizing (within certain limits) some basic shapes like triangles and squares. Crowds were amazed by its potential, and even The New Yorker was taken in, suggesting that this “remarkable machine…[was] capable of what amounts to thought.”
But the buzz eventually fizzled; a critical book written in 1969 by Marvin Minsky and his collaborator Seymour Papert showed that Rosenblatt’s original system was painfully limited, literally blind to some simple logical functions like “exclusive-or” (as in, you can have the cake or the pie, but not both). What had become known as the field of “neural networks” all but disappeared.
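The "exclusive-or" limitation is easy to demonstrate: a single linear threshold unit can always learn AND, which is linearly separable, but can never represent XOR. A minimal sketch using the classic perceptron update rule (the training loop here is illustrative, not Rosenblatt's original hardware):

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Train one linear threshold unit with the classic perceptron rule
    and return its predictions on the training inputs."""
    w = np.zeros(X.shape[1] + 1)            # bias followed by weights
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if w[0] + xi @ w[1:] > 0 else 0
            w[0] += target - pred           # nudge bias toward the target
            w[1:] += (target - pred) * xi   # nudge weights toward the target
    return [1 if w[0] + xi @ w[1:] > 0 else 0 for xi in X]

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
AND = [0, 0, 0, 1]   # linearly separable
XOR = [0, 1, 1, 0]   # not linearly separable

print(train_perceptron(X, AND) == AND)  # True: AND is learned exactly
print(train_perceptron(X, XOR) == XOR)  # False: no weights can fit XOR
```

No amount of extra training fixes the XOR case, because no single line through the plane puts (0,1) and (1,0) on one side and (0,0) and (1,1) on the other; that is the geometric core of Minsky and Papert's critique.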
