Neuroscience board - ibm cognitive computing
f*******y
Posts: 421
1
Today, IBM researchers unveiled a new generation of experimental computer
chips designed to emulate the brain's abilities for perception, action and
cognition. The technology could consume many orders of magnitude less power
and space than today's computers.

In a sharp departure from traditional concepts in designing and building
computers, IBM’s first neurosynaptic computing chips recreate the phenomena
between spiking neurons and synapses in biological systems, such as the
brain, through advanced algorithms and silicon circuitry. Its first two
prototype chips have already been fabricated and are currently undergoing
testing.
Called cognitive computers, systems built with these chips won’t be
programmed the same way traditional computers are today. Rather, cognitive
computers are expected to learn through experiences, find correlations,
create hypotheses, and remember (and learn from) the outcomes, mimicking the
brain's structural and synaptic plasticity.
To do this, IBM is combining principles from nanoscience, neuroscience and
supercomputing as part of a multi-year cognitive computing initiative. The
company and its university collaborators also announced they have been
awarded approximately $21 million in new funding from the Defense Advanced
Research Projects Agency (DARPA) for Phase 2 of the Systems of Neuromorphic
Adaptive Plastic Scalable Electronics (SyNAPSE) project.
The goal of SyNAPSE is to create a system that not only analyzes complex
information from multiple sensory modalities at once, but also dynamically
rewires itself as it interacts with its environment – all while rivaling
the brain’s compact size and low power usage. The IBM team has already
successfully completed Phases 0 and 1.
“This is a major initiative to move beyond the von Neumann paradigm that
has been ruling computer architecture for more than half a century,” said
Dharmendra Modha, project leader for IBM Research. “Future applications of
computing will increasingly demand functionality that is not efficiently
delivered by the traditional architecture. These chips are another
significant step in the evolution of computers from calculators to learning
systems, signaling the beginning of a new generation of computers and their
applications in business, science and government.”
Neurosynaptic chips
While they contain no biological elements, IBM’s first cognitive computing
prototype chips use digital silicon circuits inspired by neurobiology to
make up what is referred to as a “neurosynaptic core” with integrated
memory (replicated synapses), computation (replicated neurons) and
communication (replicated axons).
IBM has two working prototype designs. Both cores were fabricated in 45 nm
SOI-CMOS and contain 256 neurons. One core contains 262,144 programmable
synapses and the other contains 65,536 learning synapses. The IBM team has
successfully demonstrated simple applications like navigation, machine
vision, pattern recognition, associative memory and classification.
IBM’s overarching cognitive computing architecture is an on-chip network of
light-weight cores, creating a single integrated system of hardware and
software. This architecture represents a critical shift away from
traditional von Neumann computing to a potentially more power-efficient
architecture that has no set programming, integrates memory with processor,
and mimics the brain’s event-driven, distributed and parallel processing.
IBM’s long-term goal is to build a chip system with ten billion neurons and
a hundred trillion synapses, while consuming merely one kilowatt of power and
occupying less than two liters of volume.
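
To make the numbers above concrete: 262,144 = 1024 x 256, which would be
consistent with 1024 axon inputs feeding a 256-neuron crossbar. Below is a
minimal sketch of such a core, using simple leaky integrate-and-fire neurons
and a square 256 x 256 binary crossbar to keep it small; this is only an
illustration under those assumptions, since IBM's actual neuron equations and
synapse format are not described in the announcement.

import numpy as np

# Hypothetical neurosynaptic core: 256 leaky integrate-and-fire neurons
# driven through a 256 x 256 binary synapse crossbar. All constants are
# invented for illustration; IBM's neuron model was not published here.
N = 256
rng = np.random.default_rng(0)
W = (rng.random((N, N)) < 0.05).astype(float)  # sparse binary crossbar
v = np.zeros(N)                                # membrane potentials
LEAK, THRESHOLD = 0.95, 1.0

def step(spikes_in):
    """Advance the core one tick given a binary input spike vector."""
    global v
    v = LEAK * v + W @ spikes_in   # integrate spikes routed by the crossbar
    fired = v >= THRESHOLD         # event-driven output: which neurons spike
    v[fired] = 0.0                 # reset fired neurons
    return fired.astype(float)

spikes = (rng.random(N) < 0.1).astype(float)   # random initial input
for _ in range(10):
    spikes = step(spikes)                      # feed outputs back as inputs
print(int(spikes.sum()), "neurons fired on the last tick")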
Why cognitive computing
Future chips will be able to ingest information from complex, real-world
environments through multiple sensory modes and act through multiple motor
modes in a coordinated, context-dependent manner.
For example, a cognitive computing system monitoring the world's water
supply could contain a network of sensors and actuators that constantly
record and report metrics such as temperature, pressure, wave height,
acoustics and ocean tide, and issue tsunami warnings based on its decision
making. Similarly, a grocer stocking shelves could use an instrumented glove
that monitors sights, smells, texture and temperature to flag bad or
contaminated produce. Making sense of real-time input flowing at an ever-
dizzying rate would be a Herculean task for today’s computers, but would be
natural for a brain-inspired system.
“Imagine traffic lights that can integrate sights, sounds and smells and
flag unsafe intersections before disaster happens or imagine cognitive co-
processors that turn servers, laptops, tablets, and phones into machines
that can interact better with their environments,” said Modha.
For Phase 2 of SyNAPSE, IBM has assembled a world-class multi-dimensional
team of researchers and collaborators to achieve these ambitious goals. The
team includes Columbia University; Cornell University; University of
California, Merced; and University of Wisconsin, Madison.
IBM has a rich history in artificial intelligence research, going all the way
back to 1956, when IBM performed the world's first large-scale (512-neuron)
cortical simulation. Most recently, IBM Research scientists
created Watson, an analytical computing system that specializes in
understanding natural human language and provides specific answers to
complex questions at rapid speeds. Watson represents a tremendous
breakthrough in computers understanding natural language, “real language”
that is not specially designed or encoded just for computers, but language
that humans use to naturally capture and communicate knowledge.
IBM’s cognitive computing chips were built at its highly advanced chip-
making facility in Fishkill, N.Y. and are currently being tested at its
research labs in Yorktown Heights, N.Y. and San Jose, Calif.
k*****1
Posts: 454
2
Strange, I can't find any details anywhere online, not even on IBM's website.
They're keeping it very mysterious.
If I understand correctly, the chip still uses conventional electronic
components; it just integrates some special circuits to emulate synapses and
neural connections.
What does everyone think?
p*******r
Posts: 4048
3
hype.
I think this direction is interesting though.

【Quoting k*****1】
: Strange, I can't find any details anywhere online, not even on IBM's
: website. They're keeping it very mysterious.
: If I understand correctly, the chip still uses conventional electronic
: components; it just integrates some special circuits to emulate synapses
: and neural connections.
: What does everyone think?

r***y
Posts: 25
4
Their macro architecture, rather than the micro one, is the more interesting
part. From a glimpse of the diagram (and it really is just a glimpse in all of
their videos, LOL), they have single units from all the brain regions involved
in a task (visual cortex, thalamus, BG, etc.) and establish hardware
connections between these units. The single unit could be any currently
successful neuron model (though in my personal opinion, the Izhikevich version
is best suited to this large-scale network approach). The connection strength
('synaptic strength' in their system) is determined by learning (largely
Hebbian, my pure guess). That's why I find the macro architecture the more
important part of their system -- you have to determine the regions involved,
the number and types of neurons in each region (the channels, conductances, or
time constants for the different types), and, most interesting to me, the
connections. The connections are probably sparse, to avoid the
computational-load problem and minimize power consumption. However, how sparse
the connections are, and how the connection map is determined, is the
million-dollar question, and we don't know how they did it (a follow-up of
Brian Wandell's work, especially on visual pattern recognition, might provide
some hints).
So their chips, at least the publicly released version, work as a 'faculty' in
cognitive-science jargon -- each integrates many neurons across regions
(sensory, motor, etc.) for a certain function, and only for that function,
since performing other functions (e.g., auditory rather than visual pattern
recognition) requires integrating neurons from other regions with other types
of connections. It will be very interesting to see how they cope with this
flexibility requirement within the macro architecture of the chip, or how they
integrate chips with different 'faculties' into VLSI to simulate more
cognitive functions. The latter approach, in my opinion, is no different from
canonical AI approaches, except that it uses parallel special-purpose
computing units instead of a CPU.
This reminds me of recent work by two other groups: Eugene Izhikevich/Maxim
Bazhenov/BrainCorp, and Henry Markram. The Izhikevich/Bazhenov approach works
more at the neuronal level: the general units are simulated single neurons
without much functional differentiation, and the different functional roles
are achieved by connections and mappings of neuron clusters, all learned
through experience. Their problem, again my pure guess, is how to move the
architecture into hardware and reduce power consumption, since they need a
very large system (millions of neurons). The Markram approach works more at
the column level ('microcircuits' in their own words), where general column
units can be assembled into circuits. That approach is very biological,
sometimes too biological in my opinion -- too much biological detail is taken
into account, and generalization becomes very hard.
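
Two of the guesses above, sparse connectivity and Hebbian learning of the
connection strengths, are easy to make concrete. A minimal sketch, with an
invented sparsity level and learning rate (nothing here reflects IBM's actual,
unpublished scheme):

import numpy as np

# Sketch of a sparse connection map between two populations plus a plain
# Hebbian update of the connection strengths. Sparsity and learning rate
# are invented numbers, not anything from IBM's system.
rng = np.random.default_rng(1)
n_pre, n_post, sparsity, lr = 100, 100, 0.05, 0.01

mask = rng.random((n_post, n_pre)) < sparsity   # fixed sparse wiring
W = mask * rng.random((n_post, n_pre)) * 0.1    # initial strengths

def hebbian_step(pre_rates, post_rates):
    """Strengthen weights where pre- and post-synaptic activity coincide."""
    global W
    W = W + lr * np.outer(post_rates, pre_rates) * mask  # existing synapses only
    W = np.clip(W, 0.0, 1.0)                             # keep weights bounded

hebbian_step(rng.random(n_pre), rng.random(n_post))
print(f"{mask.mean():.1%} of possible connections exist; "
      f"mean weight {W[mask].mean():.4f}")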

【Quoting f*******y】
: Today, IBM researchers unveiled a new generation of experimental computer
: chips designed to emulate the brain's abilities for perception, action and
: cognition. The technology could consume many orders of magnitude less power
: and space than today's computers.
:
: In a sharp departure from traditional concepts in designing and building
: computers, IBM's first neurosynaptic computing chips recreate the phenomena
: between spiking neurons and synapses in biological systems, such as the
: brain, through advanced algorithms and silicon circuitry. Its first two
: prototype chips have already been fabricated and are currently undergoing
d*****r
Posts: 2583
5
right.
agree.

【Quoting p*******r】
: hype.
: I think this direction is interesting though.

f*******y
Posts: 421
6
so what is the difference from a neural network? using neuron structures to
replace the nodes in a network?

【Quoting r***y】
: Their macro architecture, rather than the micro one, is the more
: interesting part. From a glimpse of the diagram (and it really is just a
: glimpse in all of their videos, LOL), they have single units from all the
: brain regions involved in a task (visual cortex, thalamus, BG, etc.) and
: establish hardware connections between these units. The single unit could
: be any currently successful neuron model (though in my personal opinion,
: the Izhikevich version is best suited to this large-scale network
: approach). The connection strength ('synaptic strength' in their system)
: is determined by learning (largely Hebbian, my pure guess).

r***y
Posts: 25
7
Based on the information released, it looks like they are using each 'circle'
(in the diagram, a set of neurons plus their connections) as a 'functional
unit', a 'faculty' in cognitive-science terms, and then using these 'units' to
construct bigger networks, though the hardware (the 'board') they have
released so far is just one 'circle', specifically for visual pattern
recognition. Looking inside the 'circle', I don't know whether the
neurons/nodes need to be more complicated and specialized than the simple
integrator 'nodes' of simple networks like backpropagation. I guess they have
to use more complicated and specialized ones, but it is hard to tell. Also,
within each 'circle', the connection pattern (their 'synapse pattern') should
play the bigger role in determining the specific 'function' of the 'unit',
since too much specialization of each neuron would mean much more complicated
computation and much more trouble in future integration; that's why I
emphasized their macro architecture in the previous post.
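
For reference, the 'simple integrator node' of backpropagation-style networks
mentioned above is just a stateless weighted sum passed through a squashing
function; this is textbook material, not anything specific to IBM's chip:

import numpy as np

# The classic rate-based unit of backpropagation networks: a stateless
# weighted sum through a sigmoid, in contrast to the stateful spiking
# neurons discussed in this thread.
def integrator_node(inputs, weights, bias):
    z = weights @ inputs + bias          # linear integration
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid activation ("firing rate")

x = np.array([0.2, 0.7, 0.1])
w = np.array([0.5, -0.3, 0.8])
print(integrator_node(x, w, bias=0.1))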

【Quoting f*******y】
: so what is the difference from a neural network? using neuron structures
: to replace the nodes in a network?

r***y
Posts: 25
8
And if you turn the circles and connections into blocks and arrange them
hierarchically, you will see the canonical network we've seen in many papers.
:-)

【Quoting f*******y】
: so what is the difference from a neural network? using neuron structures
: to replace the nodes in a network?

d*****r
Posts: 2583
9
they are still using Carver Mead's way of making silicon neurons.
at the hardware level, just refer to Kwabena Boahen's papers.
I don't think their approach is going anywhere promising:
they aim to reproduce neural (functional) anatomy on silicon,
but before puppeteer (or his previous MIT boss) plots out the
connectome, it will be impossible.
so as puppeteer pointed out, it's just hype.

【Quoting r***y】
: Based on the information released, it looks like they are using each
: 'circle' (in the diagram, a set of neurons plus their connections) as a
: 'functional unit', a 'faculty' in cognitive-science terms, and then using
: these 'units' to construct bigger networks, though the hardware (the
: 'board') they have released so far is just one 'circle', specifically for
: visual pattern recognition. Looking inside the 'circle', I don't know
: whether the neurons/nodes need to be more complicated and specialized
: than the simple integrator 'nodes' of simple networks like
: backpropagation. I guess they have to use more complicated and
: specialized ones, but it is hard to tell.

t*******o
Posts: 424
10
why is it necessary to map out the whole connectome? I feel it's possible if
we know the (most crucial) organizing principles...

【Quoting d*****r】
: they are still using Carver Mead's way of making silicon neurons.
: at the hardware level, just refer to Kwabena Boahen's papers.
: I don't think their approach is going anywhere promising:
: they aim to reproduce neural (functional) anatomy on silicon,
: but before puppeteer (or his previous MIT boss) plots out the
: connectome, it will be impossible.
: so as puppeteer pointed out, it's just hype.

d*****r
Posts: 2583
11
right, it's not necessary to explore the universe, if we know
the organizing principles of the universe...
you can tell puppeteer's previous boss to stop the connectome project, :)
this handsome guy:
http://www.youtube.com/watch?v=HA7GwKXfJB0
still not too late, :)

【Quoting t*******o】
: why is it necessary to map out the whole connectome? I feel it's possible
: if we know the (most crucial) organizing principles...

t*******o
Posts: 424
12
I've known about this thing for a while now; anyway, thanks for the link...
I just meant that the possibility exists, just as you don't need to drop an
iron ball from every tall building in the world to prove the law of free
fall... though this analogy may not be entirely apt.

【Quoting d*****r】
: right, it's not necessary to explore the universe, if we know
: the organizing principles of the universe...
: you can tell puppeteer's previous boss to stop the connectome project, :)
: this handsome guy:
: http://www.youtube.com/watch?v=HA7GwKXfJB0
: still not too late, :)
:
: if

r***y
Posts: 25
13
:-) It's been THE debate ever since the simulation of neural networks started:
whether we need to replicate the structure of the neural network in living
creatures. Canonical AI people said no, we just need to replicate the
functions, which is faster but hard to generalize. Neuroscience people said
yes, we have to, otherwise we couldn't replicate such a complicated, decent,
and flexible network; this keeps the question stuck at the level of the
functioning unit -- protein, synapse, or single neuron? And any connection map
seems too complicated to map out (that's why the connectome is so hot;
everyone expects a lot, maybe too much, from it).
I guess their approach points in a very interesting direction: can we
replicate the biological units and their connections as far as current
knowledge allows, and get a network that functions like the biological one?
The next step would be to integrate these 'basic networks' into large-scale
networks that perform like the human brain. However, this approach has
potential problems from the beginning: (1) Generalization/economy: in the
living brain, neurons participate in multiple functions through different
networks, while this approach replicates many neurons with the same function
in different networks (like the frontal recognition decision-making neuron,
duplicated once for visual and once for auditory pattern recognition). (2)
Redundancy: in living networks most functional circuits are redundant, which
guarantees robustness to small injuries; in their network, any redundancy
makes the power and computation cost much higher than the current design. (3)
Skill learning: in living networks we can learn new skills at any time by
reconnecting currently available components, and I didn't see any capacity of
this kind in their network. I guess this is why we don't see a bright future
for their network, though I still think the direction is worth a try.
d*****r
Posts: 2583
14
I tried to make a "generic silicon neuron", made up of ~20 transistors,
where only by changing the bias of the transistors you can get different
types of input-output functions. Our aim was for this "generic silicon
neuron" to replicate almost all major types of neurons. Then we could
fabricate millions of them.
It turned out to be too hard. I wasted one year... I was too young and
too ambitious then...
At the beginning, when Carver Mead was talking to Max Delbruck at Caltech
about the "silicon neuron" idea, he was just trying something fancy with
silicon. If his best student Misha Mahowald hadn't died so early, we
would have seen something really different today. She collaborated with
Rodney Douglas in Zurich, trying to figure out the "general principle" of
small networks of neurons, at the 10-20 neuron level, say. Well, many years
have passed since then, and we are still very, very far from the "general
principle", so in that sense she was "lucky" to leave early.
Kwabena was the only legacy of Carver Mead's tree; he did almost all the
easier ones, vision and audition, but it already turned out to be extremely
difficult in primary cortex... don't even mention higher levels...
I think the only way to get there is to map out the whole connectome
first. There's no other way.
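
The "generic silicon neuron" idea, one circuit whose bias settings select
different input-output behaviors, has a close software analogue: in the
Izhikevich model mentioned earlier in the thread, the same two update
equations switch between firing classes when four parameters change. A minimal
sketch, with the standard parameter sets from Izhikevich (2003) standing in
for the "biases" (this says nothing about the actual transistor circuit):

# Izhikevich (2003) model: one "generic" update rule; only the four
# (a, b, c, d) "bias" parameters change between neuron classes. The
# parameter sets below are the standard ones from the paper.
PRESETS = {
    "regular spiking":        (0.02, 0.2, -65.0, 8.0),
    "fast spiking":           (0.10, 0.2, -65.0, 2.0),
    "intrinsically bursting": (0.02, 0.2, -55.0, 4.0),
}

def count_spikes(a, b, c, d, current=10.0, steps=1000):
    """Simulate `steps` ms with constant input and count output spikes."""
    v, u = -65.0, b * -65.0      # membrane potential and recovery variable
    spikes = 0
    for _ in range(steps):
        v += 0.04 * v * v + 5.0 * v + 140.0 - u + current  # dt = 1 ms
        u += a * (b * v - u)
        if v >= 30.0:            # spike cutoff
            spikes += 1
            v, u = c, u + d      # reset rules
    return spikes

for name, params in PRESETS.items():
    print(f"{name}: {count_spikes(*params)} spikes/s")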

【Quoting r***y】
: :-) It's been THE debate ever since the simulation of neural networks
: started: whether we need to replicate the structure of the neural network
: in living creatures. Canonical AI people said no, we just need to
: replicate the functions, which is faster but hard to generalize.
: Neuroscience people said yes, we have to, otherwise we couldn't replicate
: such a complicated, decent, and flexible network; this keeps the question
: stuck at the level of the functioning unit -- protein, synapse, or single
: neuron? And any connection map seems too complicated to map out (that's
: why the connectome is so hot; everyone expects a lot, maybe too much,
: from it).

r***y
Posts: 25
15
I guess you learned a lot during that one year -- I can tell from your post.
:-)
Intuitively, I don't agree with your point that 'there is no other way' except
figuring out the whole connectome. Opportunity and Spirit have been roving
around on Mars for years with simple, experience-based learning mechanisms. To
some extent, I still believe a 'function-directed' network can give us insight
into how the living network functions, though it might still be far from our
dream of a 'generalized neuronal unit', or we may even need to take a totally
different route in the future.
Anyway, I believe I need to read more to follow this thread better; it is a
very good one, and very informative.

【Quoting d*****r】
: I tried to make a "generic silicon neuron", made up of ~20 transistors,
: where only by changing the bias of the transistors you can get different
: types of input-output functions. Our aim was for this "generic silicon
: neuron" to replicate almost all major types of neurons. Then we could
: fabricate millions of them.
: It turned out to be too hard. I wasted one year... I was too young and
: too ambitious then...
: At the beginning, when Carver Mead was talking to Max Delbruck at Caltech
: about the "silicon neuron" idea, he was just trying something fancy with
: silicon. If his best student Misha Mahowald hadn't died so early, we

d*****r
Posts: 2583
17
I just want to turn it into engineering, instead of "Einstein" science...
the genome project is engineering, so it's foreseeable; I think the same kind
of project for the brain is necessary, just as important as the periodic
table was to chemistry before chemistry could take off as a serious science.
I still don't think biology is a serious science yet, nor is neuroscience,
because we haven't figured out the "periodic table" yet. In that case,
I am very, very pessimistic about any serious attempt to "build an airplane
before Newton was born".
That's why I think an engineering-type project, like the connectome, is
necessary. We just need to know all the basics first.

【Quoting r***y】
: I guess you learned a lot during that one year -- I can tell from your
: post. :-)
: Intuitively, I don't agree with your point that 'there is no other way'
: except figuring out the whole connectome. Opportunity and Spirit have been
: roving around on Mars for years with simple, experience-based learning
: mechanisms. To some extent, I still believe a 'function-directed' network
: can give us insight into how the living network functions, though it might
: still be far from our dream of a 'generalized neuronal unit', or we may
: even need to take a totally different route in the future.
: Anyway, I believe I need to read more to follow this thread better.

h*i
Posts: 3446
35
Biology is probably already a serious science. However, I don't think
"cognitive computing" is a neuroscience subject, despite this particular
project taking the route of building neurons in silicon.
I think the relationship between cognitive science and neuroscience is like
that between chemistry and physics: each is a subject of its own and can be
developed independently. I believe most people have not realized that, even
the majority of cognitive psychologists.
As such, it is okay to build machines to try to explore the principles of
cognition without figuring out all the details of the neuron connections.
Calling it hype is not fair. Disclaimer: I used to work next door to this
"cognitive computing" guy, but I know nothing about the project, except that
he is a computer-science theory person specializing in information theory. He
has done some good work on caching algorithms for storage devices. So it is
understandable for the neuroscience people to dismiss him and this project,
but the dismissal is misguided, because cognition is simply not a neuroscience
subject matter.

【Quoting d*****r】
: I just want to turn it into engineering, instead of "Einstein" science...
: the genome project is engineering, so it's foreseeable; I think the same
: kind of project for the brain is necessary, just as important as the
: periodic table was to chemistry before chemistry could take off as a
: serious science.
: I still don't think biology is a serious science yet, nor is neuroscience,
: because we haven't figured out the "periodic table" yet. In that case,
: I am very, very pessimistic about any serious attempt to "build an
: airplane before Newton was born".
: That's why I think an engineering-type project, like the connectome, is
: necessary. We just need to know all the basics first.

m******r
Posts: 2
37
I'm just here to show my admiration for your discussion... It is truly
eye-opening and inspiring to me.