>>[Commentator]: Hi. I want to first thank all of you for coming. If you remember a couple
of months ago, we had a speaker here, Aubrey Gilbert, from Berkeley, who was talking about
the brain. And after that talk there was definitely a
lot of interest here and she put me in touch with one of her colleagues. So today we're
really excited to have Bradley Voytek here from Berkeley.
He's, he is finishing up his Ph.D. in a few months, and he's also a neuroscientist, and
he's going to talk to us a little bit more on the computational side of how they do,
you know, some of the work that they do. I'm not gonna try to talk a lot about it, about
his research - I know he's gonna do justice to it.
If you have questions, please try to use the mics so people in other locations can hear
you and also for the YouTube video. So with that, Bradley.
[pause] [talking in audience]
>>Bradley Voytek: Okay. So, thanks for everybody for coming. I'm gonna talk a little bit now
today about some of the research that I do at Berkeley. Like Reza said I'm finishing
up my Ph.D. right now, hopefully here in the next few months.
And some of the research that we do at Berkeley, we do a lot of human research as well as lower
level, like neurobiological type stuff, like I was mentioning before the talk started.
So I'm in the Neuroscience Institute. There's no actual department at Berkeley for a Ph.D.
There's a Neuroscience Institute. And the research that's being done kind of spans a
wide range of different topics. So there are, are some people within the institute
that are studying, like ion channels within single cells and their structures, and then
there are people doing some of the research that I'm doing, which is human level cognitive
types of research. And we run into a lot of problems in the types
of research that we do, so I'm hoping to sort of start a little bit of a dialog here with,
with people who are experienced in some of the issues that may help us in our research.
But before that, I'm gonna talk a little bit about my background, I guess.
I initially as an undergraduate was studying physics at the University of Southern California.
And I was a physics major for two years and worked in an ultra-low temperature physics
lab very briefly, and after a little bit of time in that field, I realized that wasn't
exactly what I wanted to be doing. College kind of happened, and I got more interested
in, in the brain and brain function. And so I ended up switching majors my junior year
to psychology, because at the time, USC didn't have a neuroscience or cognitive neuroscience
type of degree. So in my junior year I switched and started
studying - I took a bunch of classes on AI programming, philosophy of mind, psychology
- sort of a gamut of different topics. And I worked in a laboratory as an undergraduate
and, as an undergraduate already, one of my first projects was to code a bunch of data
that they had, and it was all these old text files that they had acquired over the years;
hundreds of these text files, and then I was supposed to type everything into an Excel
spreadsheet from the, from the text files. And I thought: "Okay. This is kind of ridiculous."
And the guy that was, that I was working for said: "Oh. This should take a couple of weeks."
And I said: "No, no." So I wrote a quick little C program; scraped
all the data out of the text files; and then dumped them into a, a spreadsheet and it took
like a day and a half; and I handed it to the guy and I said: "Okay. I'm done." And
it was like - I may as well have done a magic trick. The guy was like totally blown away
by this, and that's when I realized that the field of neuroscience in general could use
a lot of help. [laughs] And so that low-level type of programming
acumen ended up landing me a job as an RA after I graduated at UCLA in the brain mapping
center. And I was hired as a PET technician. For those of you who don't know what PET is:
that's positron emission tomography, which is a neuroimaging technique where they actually
inject some sort of radioactive isotope into the body and watch the decay. And they put
a person in the scanner and then using this method they can sort of localize function
in different brain areas. And the main radioactive tracer that was being
used at the time is something called fluorodeoxyglucose, which is basically radioactive sugar. With
the idea that if you watch the different rates that the brain takes up sugar - sugar being
the main fuel for the brain and neuronal functioning - you can get an idea of what brain areas
are working to do a certain problem. The issue with that is obviously you're, you're
introducing radiation into a person. And so in order to reduce the overall effective dose
of radiation that a person receives, immediately after the scan we had the people use the restroom,
because this radioactive glucose actually gets cleared out of the bladder.
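To make the dose concern concrete: FDG is labeled with fluorine-18 (the talk doesn't name the isotope; the roughly 110-minute half-life below is a standard figure I'm assuming), and decay alone follows the usual exponential law. A minimal sketch:

```python
F18_HALF_LIFE_MIN = 109.8  # fluorine-18 half-life in minutes (approximate)

def remaining_activity(initial_mbq, minutes):
    """Activity left after `minutes` from radioactive decay alone:
    A(t) = A0 * 2**(-t / t_half). Biological clearance - the tracer
    collecting in the bladder - removes it faster still, which is
    why subjects use the restroom right after the scan."""
    return initial_mbq * 2.0 ** (-minutes / F18_HALF_LIFE_MIN)

# After one half-life, half of a hypothetical 370 MBq dose remains:
print(remaining_activity(370.0, 109.8))  # 185.0
```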
And so when I was first hired I thought: "This is really cool. This is like a very technical
job. I'm gonna be running this scanner." And I assure you that in the job description the
next part was certainly not listed. And that was after the people used the restroom,
I had to go in with a Geiger counter and gloves and make sure that they didn't miss. So basically
I had to go into the restroom and clean up radioactive pee as part of my job.
[laughter] And despite that, I still thought: "Okay.
This is a really cool field." And I ended up applying to Berkeley for a Ph.D. program.
I actually applied to several schools; Berkeley was the only one that even gave me an interview.
And I ended up going to Berkeley and I've been there since fall of 2004. And I'm gonna
talk a little bit about my research, but first, a very basic introduction.
These are MR images of my wife and me, from the MRI scanner at Berkeley. And it's sort of,
it's sort of interesting that, I think people have these different kinds of views of what
neuroscience can and can't do and what it currently is.
First of all, it still kind of wows me if I sit and think about it, that I can actually
look at a scan like this - like this is actually my brain - this is my wife's brain; this is
my brain. And yet that's somehow very blasé that we can do that at this point.
Like people are like: "Oh, yeah. It's a brain and I've seen this, you know, tons and tons
of times." And I think that's pretty surprising still.
Now when I give a talk - some kind of public lecture or something like this - it's a very
interesting topic. It seems that, that for some reason it captures people's attention
- neuroscience does. And so I find it's very easy to give an interesting type of public
lecture that, that can capture people's attention a little bit.
I remember watching a TED talk about statistics. I think that guy had a much harder time making
it interesting, you know, he did. Whereas, here I can, you know, throw up a couple of
slides of like brain pictures and stuff like that, and, you know, general audiences will
go: "Ooh. Neat." So I sort of have to try and not be a little
bit lazy when I'm talking about this stuff. And, and I'm gonna try and talk about a little
bit of the technical side of things and still make it interesting, hopefully.
Now like I said, there's different conceptions about what neuroscience is and isn't. And
there's kind of different conceptions about how the brain works, and everybody has this
- or a lot of people - have some sort of model in their head of what the brain does
and how it works. In psychology this phenomenon where people
think they know how their own psychology and psyche works, that's called folk psychology.
So there's also a little bit of folk in neuroscience. This is an article from the Atlantic, I think
2008, "Is Google Making Us Stupid?" And this guy was writing about the woes of using Internet
searches because you don't remember things as well as you used to.
We were relying on search engines and technologies and things like that. And there might be some
truth to that, but to a certain extent this is, this represents a type of folk psychology.
Now a bigger type of folk psychology that I get a lot when I do more public lectures,
are certain things like the ten percent myth. We only use ten percent of our brains. I hear
that a lot. That's not, not true. If you remove 90 percent of the brain, I guarantee
you're gonna have some problems. There is also the big questions that I get
a lot from the general public, are things like: "Oh, my grandfather or grandmother has
Alzheimer's disease. What can you tell me about that?"
Also like "Are men's brains really different from women's brains?" Or "We were listening
to NPR on the drive down here this morning, and there was a guy talking about left brained
versus right brained." And like these things might have a little
bit to do with the actual neuroscience, or might have a basis in actual scientific fact;
but for the most part, they're, they're, may be useful metaphors, but they don't actually
reflect real things about the brain. But, there is this kind of futuristic sort
of notion about the brain as a computer, and that we're figuring everything out really
quickly and pretty soon we're gonna be able to have - as my friend keeps poking me and
prodding me - specifically saying: "When are we gonna get Google in our heads?" Because
he's sick and tired of having to look things up and not knowing things on the fly.
But this kind of idea of these - like brain-computer interfacing is a really popular topic
right now and, direct neural interfaces is a term that people use. Can you like directly
jack into a computer like Neo - whoa - I know kung fu, right?
That stuff is really hard. And I don't think that's gonna be coming any time soon, so I
just want to kind of set the groundwork right now and say that's not actually what real
neuroscience is like. It's not like this. It's not like soon we're gonna be all jacked
into some sort of like worldwide network of, of the board like, whatever, I don't know
what conceptions people have, you know. You have cool things. This isn't actually what
happens in the lab. It doesn't look this awesome. This isn't what the data actually looks like.
[laughter] We're nowhere near to being at the level of
like Neuromancer or Snow Crash. I think Randall Munroe from xkcd put it really
well in a comic a couple days ago, actually, where he showed a, a movie science montage
where a couple scientists are sitting around, and they are like messing with lasers and
all of this high tech equipment, and at the end they say: "Okay, paint flecks from the
killer's clothing match an anti-matter factory in Belgrade." And they are like they run off
and they capture the guy. The bottom panel is a more accurate reflection of what
we actually do. They start something; they wait; they wait; they wait; and then they
say: "Okay, we've determined that there is neither barium nor radium in this sample.
Probably." [laughs] That's really what we can kind of do.
Yeah? >>Q: When you say: "Not any time soon." Could
you, would you care to quantify that a little bit?
>>Bradley Voytek: Absolutely not. [laughs] [laughter]
I will not play that game. Because, I might say: "Oh. It will be 20 years" and then we'll
figure it out in two years, and I'm gonna look like a jackass. So these kinds of prediction,
prediction problems are always an issue. It's, what was Bill Gates' thing - like a
128 kilobytes is surely enough computing - 640 kilobytes is surely enough for anybody.
I don't - he didn't say it? Okay. Well, all right, okay. But you have all - as a scientist
you hear these cautionary tales all the time, so I'm, I'm gonna try and step back.
I mean afterward, I'd be happy to talk to you about it, actually though.
So I encourage people to ask questions during the talk. I want this to be a little bit more
of a dialog, but, and some of the stuff that I will show - it might actually seem really
impressive - it might seem like we're really close to this kind of stuff - and if I delve
into the methods that are being used in neuroscience to do some of the stuff that we're doing,
like brain computer interfacing and things like that - they're actually ridiculously
simple. And the issue isn't can we or can't we, it's
how are the systems in the brain actually doing what they're doing. We can sort of jack
in a little bit and hijack the signals and not know what the signals are actually doing
or meaning or anything like that; but we can still make use of them.
So it's - a friend of mine who's a grad student, his name is Levecia Kunda at Berkeley, and
we sort of go back and forth a little bit about this, because you can use the signals
to drive an arm or something like that, and that's an engineering problem. It's taking
a signal, an input signal and making it do something useful, but it still doesn't necessarily
tell you anything about the brain. And so we sort of go back and forth about what does
that mean? Yeah, question.
>>Q: Yeah, I read an article recently about speech synthesis from brain signals.
>>Bradley Voytek: Right. >>Q: You know anything about that?
>>Bradley Voytek: Yeah, and that's actually the same general kind of, kind of problem
and solution. The question I think everybody heard, was about synthesizing speech using
brain signals. So I'll show you a little bit of research
that's related to that, towards the end of my talk, and I can address that a little bit
more. So, like I was saying, you have the science
– the, the like movie science kind of view - but in the, well okay, this isn't actually
what real science is like - [laughter]
So I taught anatomy at Berkeley for a couple of semesters and we had a lot of access to
real human brains. I was considering maybe bringing a human brain
here, but I, I ended up opting against it. UCLA had an issue a couple years ago where,
I think they lost a body part or something out of their morgue, and so I'm very nervous
about taking any sort of human specimen off of, off the site now. So I didn't even ask
anybody if I could. I just, I left it there and didn't even bother.
Okay. So this is actually more what real neuroscience looks like.
So I have in this picture on my head an EEG cap. This is recording 64 channels of electroencepha-,
of my electroencephalogram, which is what EEG stands for. And all that is are the aggregate
sum of the, of the potential of hundreds of millions of neurons.
So in the brain you have 100 billion neurons, more or less; it changes depending on the
person and things like that. And each one of those neurons has anywhere on the order
of maybe 1,000 to 10,000 synapses, which are connections.
So you're talking anywhere on the order of 100 trillion to one quadrillion interconnections
within the brain. And this leads people to say things such as: "The brain is the most
complex thing that we know about in the universe," which is one of those other statements that
always seems kind of ridiculous, because our brain is part of our body and if you count
the body as a thing, then inherently the brain within the body is more complex than the brain
by itself. So, anyway, you have these kinds of statements
that people like to use, but, who knows? It doesn't really make a lot of sense. But nevertheless
it is a really complicated problem, and so I can understand using that metaphor.
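For what it's worth, the back-of-the-envelope arithmetic behind those numbers is easy to check: roughly 10^11 neurons times 1,000 to 10,000 synapses each works out to about 100 trillion to one quadrillion connections.

```python
neurons = 100e9                              # ~100 billion neurons, give or take
synapses_low, synapses_high = 1_000, 10_000  # synapses per neuron, rough range

total_low = neurons * synapses_low    # 1e14: 100 trillion
total_high = neurons * synapses_high  # 1e15: one quadrillion
print(f"{total_low:.0e} to {total_high:.0e} synapses")  # 1e+14 to 1e+15 synapses
```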
Yeah. >>Q: Can you get the same number for the chimp?
>>Bradley Voytek: Same numbers for the chimp? >>Q: Of neurons?
>>Bradley Voytek: Of neurons? No, I don't actually know, I don't do any work with monkeys
or anything like that. I don't, I don't study chimp - I would guess it's, I don't know,
maybe one twentieth, one tenth? I don't know. I'm not sure.
[pause] Real life science also; not nearly as glamorous.
These are reviews I had from the same paper that I tried to publish just last year.
First one. Really nice. "The findings clearly merit publication." The second reviewer: "Admittedly,
I find this a very poor state of affairs for a manuscript - and, as a reviewer, also at
least mildly irritating." [laughter]
This is what the peer review process is like. And I have a paper that's out for review right
now, and this is a graph of how I feel about it. On the x axis, it's the number of days
in review. On the y axis it's the number of milliseconds before checking my email in the
morning. And you actually notice that as the number of days in review increases, I actually
start checking my email before I even wake up in the morning. Like I just, I notice I'm
laying there with my phone in my hand, like really anxiously waiting to hear back about
this paper. It's a very frustrating process. [talking in audience]
So our field has sort of an interesting history. This guy on the left is Santiago Ramon y Cajal,
and he's really kind of the father of modern ways of thinking about neuroscience. He won
the Nobel Prize in 1906 for what he called, or what, not he actually, what was called
the Neuron Doctrine. Which is the idea that there are individual cells that make up our
brain and those cells aren't directly connected. This is in contrast to reticular theory which
was that all the cells in the brain are one continuous network that are all interconnected.
And what's interesting is he actually shared the Nobel Prize in 1906 for medicine and physiology
with Camillo Golgi, the guy that was a huge proponent of reticular theory.
And so they both won joint Nobel prizes in the same year for opponent theories.
And apparently this guy, Santiago Ramon y Cajal, was an interesting dude. When he was
like eight years old in the town that he grew up in Spain, he built a homemade cannon and
blew down his town gates and got arrested and was actually placed in jail as an eight
year old for doing this. And there are similar stories. I read his
autobiography, and he has stories kind of like this all growing up. And so he was a
very outspoken, very confident kind of guy, and apparently the 1906 acceptance speech
for the Nobel Prize was interesting. And there is a little blurb on the Nobel website
about this sort of opponent theory thing. But I like to point out: this drawing on the
right is actually a drawing that he made of the neurons in the chick retina - so a little
chick. And it's, it's actually extremely accurate. But right now, currently in modern neuroanatomy,
what's done is people look at a microscope; the microscope simultaneously projects an
image onto a page; and they sort of trace the image either by hand or just sort of take
a photograph, digital photograph of it. However, in 1906, in the late 1900's, or late
1800's or early 1900's, they didn't have this technology. And so in his autobiography he
talks a little bit about this. What he actually would do, would be sit and stare at these
images for hours on end. And then he would walk down to a local Spanish café, and sit
there and drink absinthe. And then once he was sort of nice and good, he would draw from
memory. So like I said again, very interesting dude, but extremely accurate in his drawings.
Here's that drawing again on the right. This is a drawing - the drawing on the left is
from a neuroanatomy textbook that I use when I teach, showing the same exact system; and
you can actually see how very similar everything is. You have all these different kinds of
neurons in these different layers down here, matches down here - you have these sideways
sorts of neurons that are laterally connected here and here. I mean it's extremely accurate
drawing. And the Neuron Doctrine is this idea that
we have all of these individual neurons, and these neurons send off action potentials which
are these sort of - they are thought of as binary on, off. Yes, no types of signals.
They get information and the cell body of the neuron actually integrates that information
and then if there's enough activity it sends an action potential out of the neuron.
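That "integrate incoming activity, then fire if there's enough" description maps onto the textbook leaky integrate-and-fire model. This toy sketch isn't from the talk, and every parameter value is invented for illustration:

```python
def lif_spikes(input_current, dt=0.001, tau=0.02, threshold=1.0):
    """Leaky integrate-and-fire toy neuron: the cell body integrates
    incoming current, leaks back toward rest, and fires an all-or-none
    'action potential' (then resets) when the potential crosses threshold."""
    v = 0.0
    spike_times = []
    for step, current in enumerate(input_current):
        v += dt * (-v / tau + current)  # leak term plus integrated input
        if v >= threshold:              # enough activity -> spike
            spike_times.append(step)
            v = 0.0                     # reset after the action potential
    return spike_times

# Sustained input drives a regular spike train; no input, no spikes.
driven = lif_spikes([80.0] * 1000)
silent = lif_spikes([0.0] * 1000)
print(len(driven), len(silent))
```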
Again, it really doesn't look like this. This is a nice, cool little graphic. Unfortunately,
science that we do isn't nearly this cool. They're thought of as these little transistors
in a lot of ways. Individual neurons that are on off switches. And they send these action
potentials and the action potentials are thought to be the messages that are used to communicate
between different brain areas. And so this led a lot of people early on to
do recordings from individual action potentials; individual neurons; record the action potentials.
And this method in neuroscience is known as cellular electrophysiology. All that means
is that you're recording electrical signal from, from either one cell or a group of cells
at a time. Now, this has extremely good timing. You know
exactly when a signal comes in. You know exactly when a signal goes out. But you're limited
only to at maximum - using current technology - maybe a few hundred neurons at a time.
And usually you're within one single brain area; it's hard to record from multiple brain
areas in an awake, behaving animal at the same time. So, to kind of give a little bit of an analogy,
I don't like the brain as a computer analogy very much, because I don't think it's very
accurate. But to sort of build on that analogy a little bit - cellular electrophysiology
is like having some sort of alien computer, that you know nothing about, you know it has
transistors; you know, you know that's how it works; so I'm not talking about something
like - what was the Independence Day - it's not like an alien computer that Jeff Goldblum
can figure out in 20 seconds and write a virus for.
Like some alien computer that we know nothing about its operating system, programming language,
anything like that. And yet all you can do to figure out how it works, is record from
a few transistors and you can press buttons that have all these alien symbols that you
know nothing about, and see if a transistor fires or not, or opens or closes or whatever;
that's it. That's like trying to figure out the brain
doing single cell neuroscience. It's really hard.
Now in contrast, one of the things, one of the techniques that I use, is macro scale
electrophysiology, in this EEG - scalp EEG - recording electrical signals off the brain.
I'm recording 64 channels off the brain. The skull, unfortunately, is opaque or semi-opaque
to electrical signals. So the electrical signals coming off the brain hit the inside of the
skull and smear out, and so I have no idea really where a signal is coming from within
the brain. But I have really good timing. I know exactly when something's happening.
My timing is only limited by my sampling rate of my analog to digital converter.
So I have really good timing, but really, really poor spatial resolution. Now again,
try and understand how this alien computer works using this, is like holding some sort
of recording electrode - here's my laptop here - like sort of out here with a wall in
the way - and trying to figure out how the brain works - or the computer works based
off of that. The analogy that's used actually for scalp
EEG is: it's like trying to figure out how a car works by measuring the vibrations coming
off of its hood. [pause]
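That trade-off - millisecond timing but a smeared spatial origin - can be sketched numerically. This is a toy simulation added for illustration (the rhythms, mixing weights, and noise level are all invented), in which every "scalp channel" records a weighted blend of every underlying source:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                       # sampling rate (Hz); timing resolution = 1 ms
t = np.arange(0, 1, 1 / fs)

# Two made-up cortical sources with distinct rhythms.
sources = np.vstack([
    np.sin(2 * np.pi * 10 * t),  # a 10 Hz alpha-like rhythm
    np.sin(2 * np.pi * 40 * t),  # a 40 Hz gamma-like rhythm
])

# The skull smears the fields, so each of 64 scalp channels records a
# weighted mixture of *all* sources, plus noise - not any one source.
mixing = rng.uniform(0.2, 1.0, size=(64, 2))
scalp = mixing @ sources + 0.1 * rng.standard_normal((64, t.size))

# Timing is exact to the sample; spatial origin is not recoverable from
# any single channel, since every channel contains both rhythms.
print(scalp.shape)  # (64, 1000)
```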
Actually, so speaking of scalp EEG, you can use these signals a little bit for something,
something interesting. So I have a buddy of mine, Richard Warp and
a colleague down at UC San Diego in the Swartz Center for Computational Neuroscience, UCSD,
who are doing this really strange sort of concert where they're recording EEG from this
guy at UCSD and transmitting the EEG signals to a computer bank in Huddersfield, UK;
while my friend up here is conducting the music from San Francisco, and the signals
coming off of this guy's brain are being used to play music in the UK.
And they're actually doing it through Google Voice Chat. So my buddy in San Diego is hearing
the conductor who's playing the music over Google Voice Chat in San Diego and that's
altering his brain rhythms and then it's playing some sort of instruments in the UK. And it
kind of works; it's interesting, if nothing else.
Another method that's used – as a lot of people know about this in neuroscience and
neuroimaging - is functional magnetic resonance imaging, or FMRI. And this is actually the
magnet - an old magnet - this is the 4 Tesla (4T) scanner at Berkeley that's no longer
in operation. They've closed it down. They now have a 3 Tesla scanner. Scanners are rated
basically by the strength of the magnetic field and that affects the quality of the
signal that you get. And the image on the right is an image activation
map in, in a group of people. So in FMRI analyses you have really good spatial resolution, that's
limited to maybe two or three millimeters on a side throughout the entire brain, but
the temporal resolution is awful, because the FMRI is actually measuring blood flow.
And so the brain activates a certain region and uses up a little bit more blood, and then
a few seconds later you get a rebound as more blood comes in to meet the demand.
That's a very simplified type of explanation, but the point is this takes a few seconds.
And so you know exactly where something is happening, but you have no idea when it happened.
And again, using the alien computer analogy, this is like being able to kind of - you can
really, really precisely measure the electrical flow throughout different components in the
system, where they are, but you have no idea when it happened. It's like summing electrical
flow over a few seconds. And so you have horrible timing, but great spatial resolution.
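The seconds-long lag being described is usually modeled by convolving neural events with a hemodynamic response function. Here's a rough sketch; the double-gamma shape parameters are illustrative stand-ins, not the canonical values any particular analysis package uses:

```python
import math
import numpy as np

def gamma_pdf(t, shape):
    # Gamma density with unit scale: t**(a-1) * exp(-t) / Gamma(a)
    return t ** (shape - 1) * np.exp(-t) / math.gamma(shape)

def hrf(t):
    """Double-gamma hemodynamic response: blood flow peaks roughly
    5-6 seconds after a neural event, then briefly undershoots."""
    return gamma_pdf(t, 6.0) - 0.35 * gamma_pdf(t, 16.0)

dt = 0.1
t = np.arange(0, 30, dt)
stimulus = np.zeros_like(t)
stimulus[0] = 1.0                    # one brief neural event at t = 0

# The measured BOLD signal is the event train convolved with the HRF:
bold = np.convolve(stimulus, hrf(t))[: t.size]
print(f"BOLD signal peaks {t[np.argmax(bold)]:.1f} s after the event")
```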
And so when you see these kinds of things in, in Popular Science you see these kinds
of images a lot on the news and stuff like that - well, this is, I don't know, the love
area of the brain or something like that. Again, not, not, not really very informative.
Not nearly as informative as we'd hope. Now, the other issue with FMRI, and EEG as
well, is that you're averaging activity across many different repetitions of some sort of
stimulus and across a group of people. Now in FMRI, if you want to compare one brain
to another brain, what you have to do is - everybody's brain is a different shape -
actually spatially warp all of these brains to be the same shape, and then
average them together. So you are actually losing spatial information.
So the power of FMRI, knowing where something's happening, is actually reduced because you
have to average across people. There are some people at Berkeley, for instance, there was
a paper a couple years ago where they had two people - two graduate students - did FMRI
on themselves looking at different images. And they would do FMRI over and over and over
again, showing themselves all these different kinds
of images, and use their individual brain anatomy and activation patterns in the different
brain areas as sort of a statistical model to try and figure out what brain area is activated
when you show it a brighter spot over here versus over here; or down here versus over
here. And they did this so many times that they
could build up a rough statistical model of the spatial distribution on the brain of activity
when you saw different things in the visual space. And they used that to reconstruct what
they were probably seeing.
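A stripped-down version of that "which image were they seeing?" logic: build a predicted activation pattern per candidate image, then pick the candidate whose prediction best correlates with the observed pattern. Everything here (voxel counts, noise level, random patterns) is invented for illustration, not taken from the actual study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_images = 200, 5

# Predicted voxel pattern for each candidate image - in the real study
# these come from a model fit to many repeated scans of one subject.
predicted = rng.standard_normal((n_images, n_voxels))

# Observed pattern = the shown image's prediction plus scanner noise.
shown = 3
observed = predicted[shown] + 0.5 * rng.standard_normal(n_voxels)

# Decode by correlation: which candidate's prediction matches best?
scores = [np.corrcoef(observed, pattern)[0, 1] for pattern in predicted]
decoded = int(np.argmax(scores))
print(decoded)  # 3 - the decoder recovers the shown image
```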
So it's, you can do really good single subject analyses as long as you keep it in the native
brain space. Like a person's actual brain shape. But most analyses are done averaging
across people and warping individual brains to this template brain.
Another thing that you can use these MRI scanners for is actually - this is Diffusion Tensor
Imaging. This is a scan of my brain at Berkeley and it actually measures the flow of water
through the brain. And so the brain is, like I said, connected
by these neurons and the neurons are connected to each other through these long, what are
called axons, which are interconnecting fibers. And these axons constrain the flow of water
in the brain - these long fibers. And so based upon these water constraints you can actually
build connectivity maps of what is connected to what within the brain. And this is called
Diffusion Tensor Imaging. In this image the front of the brain is here;
the back of the brain is here; and we're looking down from the top. And all of the green colored
fibers are fibers that are moving from the front to the back. All of the red colored
fibers are the ones moving from left to right, right to left. And all of the blue are moving
in and out of the plane. So you can actually get these really cool
connectivity maps. So if you want to know how strongly are two brain areas connected,
and use that as some sort of weighting, some sort of weight, in determining connectivity
functional communication between brain areas, you can do that too.
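The red/green/blue scheme described above is the standard DTI direction-to-color convention; a minimal sketch:

```python
import numpy as np

def direction_to_rgb(v):
    """Map a principal diffusion direction (the tensor's leading
    eigenvector) to the standard DTI color code: |left-right| -> red,
    |front-back| -> green, |through-plane| -> blue. The sign is
    discarded because a fiber running left-to-right and one running
    right-to-left are the same fiber."""
    v = np.asarray(v, dtype=float)
    return np.abs(v) / np.linalg.norm(v)

print(direction_to_rgb([0.0, 1.0, 0.0]))  # pure green: a front-back fiber
print(direction_to_rgb([1.0, 0.0, 0.0]))  # pure red: a left-right fiber
```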
But this is really kind of the forefront of, of neuroscience right now. This is really
difficult, people are having a hard time incorporating this into functional brain data.
Now, like I said, the scanner at Berkeley, the 4T scanner was recently closed down and
they have built a new 3T scanner. And these scanners actually operate using a superconducting
magnet. And these magnets are actually kept cooled using liquid nitrogen - or rather, liquid helium.
And so when they shut down the scanner, they actually quench, which is release, all of
the liquid helium. They do that in an emergency if something - like if somebody brings metal
into the scanning room and, or at a hospital if somebody's on a metal gurney - this, I
don't think it's ever happened, but this is something you're always warned about. You
bring a metal gurney close enough and it goes flying across the room and pins the person
up against the scanner because the magnet's so strong.
So if that happens they quench the magnet, which is releasing all of the liquid helium
that heats up the magnet and then releases, releases the person, the unfortunate person
if that were to ever happen. But when they closed down the scanner they
quenched the magnet anyway. And we took a video of it. I think it's just kind of cool
so I'm gonna show it to you. [laughter]
>>People on video: Seven, six, five, four, three, two one.
>>Bradley Voytek: This is last year. >>Someone on video: Sylvia get out of the
way! [laughing; lot of cheering] >>Someone on video: Nothing happened.
[applause] >>Voice on video: Die magnet die!
>>Bradley Voytek: That was somebody yelling: "Die, magnet, die!"
[lot of noise on video as the magnet is quenched] This goes on.
At least a half hour to an hour of that. Like it's a ton, a ton of liquid - at this point
gaseous helium. So those are the tools at our disposal.
And one of the big questions within neuroscience is functional localization. So where does
language happen? Where does vision happen? Things like this.
And this is actually - a lot of these things are pretty well known at this point, especially
for some of the lower level functions like vision, somatosensation, which is the sense
of touch, movement, etcetera. And really, I mean, in a way, a lot of these
ideas come out of ideas of phrenology - not The Roots album Phrenology, but phrenology,
which is the idea that you can actually figure out aspects of a person's personality by feeling
bumps on their head. This has led a lot of people, or some people
I should say, to refer to neuroimaging in general as a new phrenology. All you're doing
is you're figuring out where does something happen in the brain. But knowing what is happening
and where something is happening doesn't tell you how it's happening. It doesn't actually
inform you of how the brain is doing these computations or how it's creating things like
memory. Like how do we maintain memory? How do we access memory to do behaviors, things
like that. But, we can again, hijack some of these signals
and use them for certain kinds of purposes. For example, vision works through the eyes.
Obviously. You have signal coming in from the outside world into the eyes, and the eyes
send these fibers back to the midbrain, which is the middle part of the brain; there's one
synapse there, and then from the middle part of the brain the fibers go back to the primary
visual cortex, which is an area in the back of the brain. If you cut any of those fibers,
then you are blind, because you're not getting the optical signal from your eye to the brain. If you
destroy parts of visual cortex in the back of the brain, the person will report that
they are blind. If I hold up fingers and say: "How many fingers am I holding up?" They'll
say: "Oh. I don't know." But if you put that person in a room, in a hallway, for example,
full of obstacles and ask the person to walk down the hallway, they will actually avoid
all the obstacles in the hallway and walk to the other end. And you say: "Why did you
walk that way?" And they say: "What are you saying? I walked down the hallway like you
asked." So this is a neurological phenomenon called blindsight, and this occurs
- it's thought to occur - because damage to the primary visual
cortex, the actual outer parts of the brain, damages your conscious awareness of the visual
input that you're getting; but not the actual lower level communication between incoming
vision and like movement and the body. So you can report that you're blind, but you
won't actually - and you won't be able to read or anything like that - but you can still
navigate a little bit around the world. Now if you destroy the first relay in the
middle part of the brain, where the vision first comes in out of the eyes, then you're
blind. You're actually blind; you can't see anything. Neurons in later parts of the
brain don't respond to visual stimuli. You're cutting the relay
out, if that makes sense. If you record from the middle part of the
brain in an animal, for example - and again, use the recordings of the firing properties
of individual neurons in response to different kinds of visual stimuli in the world around
them - you can actually use those recordings to then reconstruct a rough guess of what
the animal was looking at. And there was actually research at Berkeley that did this a couple
of years ago. And so what you're actually seeing is on the
right side here you're gonna see a video that an animal was seeing; and on the left side
is the researchers' reconstruction of what they guessed the animal was seeing based upon
the firing properties of the groups of neurons that they were recording from.
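As a very rough sketch of the general idea - this is not the actual Berkeley method, and the data here are made up - you can think of it as learning a linear map from firing rates back to pixels:

```python
# Toy sketch of stimulus reconstruction from neural firing rates.
# Hypothetical setup: simulate neurons whose rates are noisy linear
# functions of a pixel stimulus, then learn the inverse mapping by
# ridge regression and use it to "decode" the movie frames.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_neurons, n_frames = 16, 40, 500

# Hypothetical linear encoding: each neuron weights the pixels, plus noise.
encoding = rng.normal(size=(n_pixels, n_neurons))
stimuli = rng.normal(size=(n_frames, n_pixels))      # "movie" frames
rates = stimuli @ encoding + 0.1 * rng.normal(size=(n_frames, n_neurons))

# Ridge-regression decoder: W maps firing rates back to pixel values.
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_neurons),
                    rates.T @ stimuli)

reconstruction = rates @ W
err = np.mean((reconstruction - stimuli) ** 2) / np.var(stimuli)
print(f"normalized reconstruction error: {err:.3f}")
```

The real reconstructions are of course far more sophisticated, but the core logic - fit a decoder on recorded responses, then invert it on new responses - is the same.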
So it starts off, not a whole lot happening. [pause]
Kind of see some similarities. And you can see a guy sort of come in and out of view
here. It's really rough. It's black and white. It's getting general shapes and things like
that and sort of picking up a little bit of motion and shading. But, if you think about
it, this is kind of really impressive. We don't know how the brain is decoding this
information. We don't know how we have this conscious sort of perception of vision and
the world around us, but we can still get some - in certain senses - useful information
out of these different neurons. And this is, this is the sci-fi type level
stuff. Where you think about being able to take a picture of the world around you or
something like that, just by having an electrode implanted in your head and blinking
or something. Who knows?
>>Q: Could you tell what is the resolution
in brain by let's say counting how many cells are there? Because that's something you could
compare to resolution on an image, let's say.
>>Bradley Voytek: Right. So somebody's actually measured - estimated, I should say - the
bit rate of the visual information coming out of the eye, and you can start doing some
analyses and discussions like that, but there's a problem
with that. Actually the next slide will sort of get to the heart of that.
So for every axon, every fiber, coming out of the eye through this
middle part of the brain and connecting to the actual cortex of the brain, the
brain sends ten fibers back to the lower level visual areas. So for every one true signal
you're getting from the outside world, the brain is modulating that with ten of its own
signals back onto itself. So you might be able to get an estimate of the resolution,
but who knows what the brain is doing to its own, like activity patterns? And this leads
into the importance of, oops, the importance of context.
This is kind of like a mini Rorschach Test here. Can anybody tell me what the image is
on the left? The black, the black spot? Any guesses as to what that might be?
>>Voice from audience: [unintelligible]
>>Bradley Voytek: Loch Ness Monster. What is it?
>>Voice from audience: UFO.
>>Bradley Voytek: UFO.
You can't really get any kind of idea of what it is. What about now? What is it?
>>Voice from audience: [unintelligible]
>>Bradley Voytek: It's a car?
>>Voice from audience: [unintelligible]
>>Bradley Voytek: It's a tank? [laughs] Why do you say it's a car?
[pause] Somehow by accessing your memory, you can
take a noisy visual system, very blurry, such as the one you are seeing in the background
here, and you can interpolate that this is probably a building and probably a street,
and therefore this is probably a car.
Okay. I'm supposed to ask the folks at the video conference to mute their microphone,
if possible. Okay. Anyway, so you can take the same image, rotate
it; what is it now? >>Voice in audience: [unintelligible]
>>Bradley Voytek: It's a person. The lower level visual
image, the black image here, hasn't changed. The raw input is roughly the same; I've just
rotated it. And this isn't just important for vision.
This is also important for different, different senses and different modalities.
So I'm gonna play a really grating sound right now, but try and pay attention to what is
being said. [scraping type sound]
Who thinks that they understand what that sentence was? This is a heavily filtered sentence,
by the way. And this is actually from a research paper at Berkeley from Frédéric Theunissen's
lab. This is pretty nice. Yeah? What do you think?
>>Voice in audience: She writes to her mother every day.
>>Bradley Voytek: Awesome. He got it. Very few people get it; statistically, one or
two people in a room of this size will. Most people will only understand about one or two
words. I'm gonna play the same sentence now, unfiltered.
>>Recorded voice: She writes to her brother every day.
>>Bradley Voytek: Close enough. Okay. Now you've heard it once. You know what
the actual answer is. Listen to the sentence again.
[Recorded voice that is distorted: she writes to her brother everyday.]
[talking in background] >>Bradley Voytek: You can't unhear it now.
So when I was first doing this, we were taking all these sentences and we were heavily filtering
them in all these different frequency bands, actually we were filtering them on what was
called the modulation spectrum, trying to figure out what parts of this modulation spectrum
are important for speech comprehension. And I was filtering all these sentences and
everything, and I, I was like, I can't - they're all okay. Like I understand all of
them. That's so weird. So I'm gonna, I'm gonna demonstrate it one
more time with a new sentence. >>Recorded voice: The two children are laughing.
>>Bradley Voytek: Alright? >>Recorded voice that is distorted: the two
children are laughing.
>>Bradley Voytek: So much information has been removed - and I can actually remove even
more, and even more - and it's amazing how well you can still understand
this sentence, despite the paucity of information in it.
Because you have all of this memory information that's sort of bolstering and feeding on top
of the actual low level sensory information. Yeah?
>>Q: So I'm just curious if it would work also the other way. If you tell us the sentence
and then play something modulated what is not that sentence, would people still think
that that's what they hear? >>Bradley Voytek: That's actually a really
good question. I would, I would presume that if I told you
- for example, the sentence actually says: "She writes to her mother every day." Brother
every day. But like, he, he heard mother. If I said she writes to her mother every day,
if it's close enough, you probably would never be able to figure it out.
Because at that point, the sounds are so similar and the sentence, like the word "mother" and
"brother" even though the phoneme for ma and ba are very different; if I remove most of
that information content, I'm still using my understanding of English grammar and syntax
and just experience with individual words; and I know that a word of a certain length
like "mother" and "brother" - they're very close to the same length - they take almost
the same amount of time to say. And if I remove a lot of the information in individual phonemes,
I wouldn't be able to maybe differentiate them in this, in this context if I filter
that. So that's actually an ongoing research question,
though, is how much can you filter and still pick out individual phoneme sounds and use
that to inform the rest of the sentence?
Alright. And - I don't know if people saw this a couple years ago - there is an example
of a sentence that somebody constructed in the written word, where they took all the
middle letters in every word in the sentence and just rearranged them, keeping the first and last letter the
same. And you can easily read it. And so apparently when you're reading, once
you become an experienced reader, you're not really reading, you don't sound words out
anymore when you're reading, you actually sort of pick out whole words at a time and
match the word sort of length and beginning and end to an internal template; and you're
very good at this. So it's more of a template matching rather than sounding everything out
and looking at the individual phonemes, which are the individual letter sounds.
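That jumbled-word demo is easy to reproduce; here is a minimal sketch (the sentence is just an illustration, not the one from the original demo):

```python
# Reproduce the jumbled-word demo: shuffle each word's interior letters,
# keeping the first and last letters fixed. The output stays surprisingly
# readable because we match whole-word templates rather than sounding out.
import random

def jumble(word, rng):
    """Shuffle a word's middle letters; first and last letters stay put."""
    if len(word) <= 3:
        return word  # nothing in the middle to shuffle
    middle = list(word[1:-1])
    rng.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

rng = random.Random(42)
sentence = "according to research it does not matter in what order the letters appear"
print(" ".join(jumble(w, rng) for w in sentence.split()))
```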
>>Q: [unintelligible] information? - unfiltered -
>>Bradley Voytek: So in this case actually the filter that we were using is also getting
rid of phase information - so we're actually scrambling some
of the information that's there and then filtering. So it's, I don't remember the actual algorithm;
it's been about five years since I worked on this algorithm in this case. But you can,
I'm sure, do it and actually remove – [pause]
It's actually a good question. I don't know. Can we talk about it after? 'Cause, yeah.
So again, going back to this idea of functional localization a lot of people may have seen
this. I think Aubrey probably talked about this case before. This is a very famous case
in neuroscience, Phineas Gage. Poor guy was a foreman for a railroad company, and they
were blasting out a big rock face to make room for the railroad, something like that.
For one of the charges they'd drilled a hole and packed it with explosive, and they were
gonna blow it up. It didn't explode. So he went in with a large tamping iron to tamp the
charge down a little bit more, and it blew and shot the tamping iron right through his head.
And the guy - this is in the 1850's, 60's - amazingly survived this. Beforehand the guy
was a family man, he was a foreman of his company, very well respected in his community,
etcetera. Afterward he became kind of a drunk. His personality totally changed, wildly
changed. He became a very different seeming person.
And this is really one of the first cases that started making people think that: "Oh.
Maybe these different brain areas are actually playing different roles in, in behavior."
And another classic case in neuroscience is this case from Broca. This is back in the
1860's. And this guy was trying to disprove the idea of functional localization. And he
had this patient that had come in and the patient was aphasic, which means that they
aren't really putting together words in any kind of meaningful way.
This patient was referred to in the literature as Patient Tan, because apparently one of the
few things he could say reliably was the word "tan." It actually turns out
that the guy also was able to curse pretty well, but apparently having a scientific paper
talking about "Patient Ass" isn't very, very well thought of.
So, I guess in the 1850's it might have been like Patient Rapscallion or Scoundrel or something.
Anyway, so the guy had this issue of being able to put language sounds together; speech
sounds together. And so - actually I'm gonna - here let me – I'm running out of time
here, so I'm gonna skip a few slides. But anyway this patient had damaged a specific
brain area and this guy was trying to disprove ideas of speech, or functional localization
and it turned out when he looked at several different patients who had this same kind
of behavioral pattern, and waited until they died, and then looked at their brains post
mortem, they all had a lesion in a very similar area of their brain.
And so this area of the brain is now known in neuroscience as Broca's area; and it's the
area that's responsible for the motor connections for speech. So being able
to put sentences together with your mouth. And this led a lot of people to do lesion
neuroscience. So you actually make lesions in different brain areas in animals - they
don't actually make them in people - to see what happens. And this is 1950's
neuroscience. This wouldn't fly anymore. [laughs]
Ethical procedures. But anyway, they actually lesioned this cat
in a, bilaterally, so both sides of the brain in the same region. And this led to something
that's called Klüver-Bucy Syndrome. And Klüver-Bucy Syndrome is marked by hyperorality, hypersexuality,
things like that. And so this is a way that a lot of people
started thinking about which brain areas are doing what: actually taking animals,
lesioning different brain areas, and seeing what they can't do anymore.
And so if you have a lesion in your primary motor cortex, you'll be paralyzed. If you
have a lesion in primary visual cortex, you will be blind or have blindsight, which
is that phenomenon where you report being blind but can still navigate. Things like that.
And this is actually a video of my advisor talking to somebody with aphasia. So you can
kind of get a little bit of an experience of what it sounds like.
[pause] Or, not at all. What happened to the sound?
Anyway, okay, well I'll skip it. [pause]
That's interesting. [pause]
The sound was working before, right? Oh well. [pause]
So this idea of using brain lesions to discover certain aspects about brain function, has
been used a lot in neuroscience. And I actually do research with stroke patients who've had
brain damage because of very specific types of stroke.
And I work with these patients behaviorally. For example, I look at the pre-frontal
cortex, which is an area at the very front of the brain that's thought to be
important in attention and memory and things like that.
Now again like I said, if I lesion primary motor cortex, I will be paralyzed. If I lesion
pre-frontal cortex, I, I, - or if a patient rather - has a lesion in pre-frontal cortex,
they will have attention problems and memory problems, but it's not like they can't pay
attention to things anymore; it's not like they don't have memory anymore.
So it's not exactly one to one mapping. These higher level cognitive functions aren't as
cleanly distributed.
And one way that we test attention in human neuroscience - this is an example of a task
that I use - is I ask you to keep your eyes focused on the center cross, here. Don't move
your eyes. And you will see a stream of images on the left or right side of the screen,
and every now and then you see an upside-down triangle on one side or the other. If you
see that upside-down triangle, I'm asking you to consider that a target. Press a button
when you see that upside-down triangle. Only
if it's on this side of the visual field, not if it's over here.
And we actually take advantage of the anatomy of how the visual system is
set up. So if you see an image on the right visual field over here, that actually goes
back to the left side of the brain. And if you see an image on the left over here, that
goes back to the right side of the brain. And so if I look at somebody - and this is
actually a paper that was published back in 2000 out of our lab, that really prompted
me to join the lab that I was in. If you look at a normal, healthy control brain - and I'm
using scalp EEG and I present a triangle to this side of visual space, you'll see activation
within 100 milliseconds in the back side of the brain on the opposite side.
This reflects incoming information from the visual field being processed by the brain
on the opposite side. However, if I look at somebody that has a
brain lesion on that same side in the front, then this increase in visual activity
in the back of the brain actually disappears. And this is known as
a top-down attention deficit. Top-down meaning that there's some sort of template concept
I'm holding in my memory that I should be looking for a certain kind of triangle.
Alright now I have to remember, okay, this is the triangle that I'm looking for, wait
for it, wait for it, wait for it, wait for it. Aha! Once I see that triangle then the
signal is enhanced in the visual cortex in the back of the brain.
However, if I have brain damage on that side, then the signal enhancement goes away. But
what I found interesting about this paper, was that people were still able to do it.
Even with these brain lesions they were still able to actually perform the task. They didn't
do as well as normal, healthy people, but still well above chance.
And so one of the areas of research that I actually specifically focus on is: how are
people who have this kind of brain damage still doing as well as they do?
And there's this idea of recovery or compensation. So if I have damage in the frontal cortex
here, then maybe the side over here is actually picking up the slack or assisting the damaged
cortex. And so one of the studies that's currently
getting rejected at various journals that I'm trying to publish right now, is exactly
looking at this effect. So if I take somebody with a brain lesion
here, and present a stimulus over here, and compare them to a normal control, this is
the distribution of activity in patients compared to controls.
You'll see that blue is more negative, so there's less activity in patients than controls
on that same side. However, if I present a stimulus to the bad side, remember it crosses
to the brain, I actually get an increase in activity in the good side of the brain.
And based upon what we know about the timing of information flow in the brain: within the
first 100 milliseconds of receiving a signal over here to the back of the brain,
it actually then crosses to the other side of the brain. And then that allows this
side of the brain to sort of assist it. Given that, if I actually present noise -
so here let's say I'm presenting stimulus here to the bad side of the brain - the good
side of the brain is helping out after the, after the information crosses. If I can actually
then present noise to this side of the brain, to prevent that information from crossing,
in a different group of patients - oh and I should explain what this image actually
is - then I could actually reduce this behavioral boost that these patients get from this information
crossing. So we can sort of use the timing of, of the
visual information and present noise stimuli at just the right times to actually reduce
performance; reduce this compensation.
These maps here, actually - I forgot to mention; I'm used to talking to neuroscientists
in this kind of context - this is an average brain template, and the color, the more red
these regions are, the more of the subjects that we studied had a lesion in that brain area.
So purple down here means that one subject out of the ten subjects that we studied, had
a lesion there. Green means approximately five subjects had
overlapping lesions here. Red means almost all of our subjects had a
lesion in that spot right there. Does that make sense?
Pretty straightforward, hopefully. [pause]
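In code, a lesion-overlap map like this is just a voxel-wise sum of binary lesion masks. A toy sketch, with made-up masks standing in for real patient data:

```python
# Toy lesion-overlap map: each patient's lesion is a binary mask over
# (hypothetical, tiny) brain voxels; the map is the voxel-wise count of
# how many patients have a lesion at each voxel (the "redness" on the slide).
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 10, 100

# Hypothetical binary lesion masks, one row per patient (True = lesioned).
lesions = rng.random((n_subjects, n_voxels)) < 0.3

# The overlap map: 0 = no patient lesioned here, 10 = all patients were.
overlap = lesions.sum(axis=0)
print("max overlap:", int(overlap.max()))
print("voxels with >= 5 overlapping lesions:", int((overlap >= 5).sum()))
```

Real maps are computed over three-dimensional voxel grids aligned to an average brain template, but the counting logic is exactly this.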
Okay, so I'm actually gonna do a little bit of a warning here and talk about the last
kind of bit of research that - we're running out of time.
Which is intracranial recordings from humans who are, have undergone brain surgery.
So like I said, the signal quality on the scalp, not very good. But if we can implant
electrodes within the head, then we know very well where a signal is coming from.
So I'm actually gonna show a graphic image here and a video in a second of this kind
of brain surgeries, where we do some of these recordings. So look away here for a minute.
I'll let you know when you can look back, if you don't want to look at this.
But this is what it looks like in the operating room. These are pictures that we took during
the operation. This big device mounted here on the right is called a stereotaxic mount;
it keeps everything still so you can put the electrodes in the exact
right locations. And these little numbers are actually numbers
that the neurosurgeon places directly on the cortex.
So these people are having surgery because of intractable epilepsy. Which means that
they have a lot of neuronal firing that's abnormal and they're going into seizure a
lot. And nothing is really helping them, so we go in, have brain surgery and remove the
abnormal firing areas. And in order to figure out exactly where the
bad tissue is, the surgeon actually has to go in and figure out what different brain
areas are spiking; showing this abnormal activity. But to ensure that they're not cutting out
brain areas important for language and movement, they have to also do brain mapping. So you
use a little stimulator during the operation and stimulate different brain areas while
the person is awake, undergoing surgery. And the person will be sitting there talking
and if you stimulate an area that's important for speech, then they might repeat a word.
So if I'm reading a sentence and suddenly, I [stutters] during stimulation, the surgeon
marks that area of the brain directly with a little piece of paper.
Now if you're paying attention, these people were undergoing brain surgery because of abnormal
neuronal firing. The surgeon is going in and electrically stimulating different brain areas.
This can actually cause a seizure during the operation. If that happens, the surgeon reaches
over and grabs a little bottle of ice water, ice saline actually, and squirts the cold
saline directly on the brain. Because if you cool neurons down, they fire
less effectively. So this is the same neurosurgical technique
that's been used for about the last 50 to 60 years. This hasn't changed. The person's
lying there awake for about 45 minutes performing a task while a surgeon is electrically stimulating
their brain. And sometimes the electrodes are placed within
the brain for about a week. So the person kind of gets sewed back up a little bit; they
have these electrodes implanted; and then they're sitting in a hospital room.
And while they're sitting in the hospital room with all these electrodes implanted for
the week, waiting for them to have seizures - the surgeon is waiting for them to have
seizures so they can see exactly where it's coming from - we go in and we talk to the
patients and we say: "Hey. You know, we're doing this research. Do you wanna, do you
wanna help us out?" And so, sorry, oop.
I'm gonna show a video again. Sorry. Very bloody.
I'm gonna show a video of the actual OR. And you'll see the brain pulsating
in the video. [noise from video]
The brain is pulsing because it gets twenty percent of the output of blood from the heart.
So it gets so much blood it actually pulses inside the head.
And this is actually using electrodes. Every dot here represents an electrode on the brain.
We can use these electrodes to actually map activity flow across different brain areas.
So you'll see the person right here is hearing a word, and at 172 milliseconds after
the word comes on, you can see activity - my cursor's not working - right here.
Red represents a lot of activity. Right in the speech comprehension area. And they're
hearing a word and their task is to think of a verb that goes along with the noun. So
if I say "ball" you might say "kick" something like that. "Chair", "sit." Something like
that. So you hear a word then you have to process the word and then you have to speak.
And so you see activity early on. Now here they're processing; the front part
of the brain is thinking about a verb that goes along with it.
[pause] Now they're just about to respond. And now
you'll see all these electrodes in the middle part of the brain here turning red. That's
the motor cortex as the person is making the, sending out the motor command to make the
mouth move. And then you actually see the same brain areas
in the back activate again. Because the person is hearing themself speak the word that they
just said. So we can actually track information flow
across the brain. And we can also use these signals to then track movements. So if we
look at motor cortex and electrode activity over the motor cortex - in the right you're
gonna see reconstruction of movements that people are making on the screen; and then
the green is using the brain activity to try and reconstruct those movements.
You can see it's much noisier; it sort of bounces around a little bit, but it gets the
general, general gist - it follows pretty well.
Anyway, so I have a lot more slides, but I'm running out of time. So if people want to
actually come up and talk afterward or something like that –
I never actually got to talk about the computational issues that we run into when
we're doing the analyses that we're doing. But really quickly: in these kinds of
instances now we're recording from sometimes up to 250 electrodes simultaneously,
and every electrode that we're recording from - [pause]
we can decompose into like 50 to 100 different frequency bands of interest.
And we might record like half an hour of data - so I have like 1800 seconds of data -
at like 3000 samples per second, at 24 bits per sample. And we can actually do what's
called cross-frequency or inter-electrode coupling, so maybe the two to four hertz
frequency in brain area A is driving a signal in brain area B, in like the 50 to 70
hertz range or something like that.
So there's all these different permutations and computational problems and combinations
that we can't get at. Like we have to really limit the kind of research that we can do,
because we just don't have the processing power.
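To put rough numbers on those figures (the byte math here is a back-of-the-envelope assumption, not the lab's actual storage format):

```python
# Back-of-the-envelope numbers for the recording setup described above:
# 250 electrodes, ~50 frequency bands, 30 minutes at 3000 samples/s, 24-bit.
n_electrodes = 250
n_bands = 50
seconds = 1800
sample_rate = 3000
bytes_per_sample = 24 // 8

raw_bytes = n_electrodes * seconds * sample_rate * bytes_per_sample
print(f"raw recording: {raw_bytes / 1e9:.2f} GB")   # ~4 GB per session

# Decompose every electrode into every band: "channels" of analysis.
channels = n_electrodes * n_bands
# Cross-frequency / inter-electrode coupling examines directed pairs of
# (electrode, band) channels, e.g. 2-4 Hz in area A driving 50-70 Hz in B.
directed_pairs = channels * (channels - 1)
print(f"analysis channels: {channels:,}")
print(f"directed coupling pairs: {directed_pairs:,}")
```

Over 150 million candidate coupling pairs per half-hour session is why the analyses have to be limited in practice.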
And really all of this just equals a big head explosion. It's just a really annoying
and frustrating problem. Anyway, so if you want to talk a little bit
more about this afterward, I'd be really happy to, and to answer your question a little bit
more as well; and your question. But before I stop talking, I want to thank
my wife Jessica. She's been really helpful with some of these problems and ideas.
As well as my advisor Dr. Knight, and he's
[unintelligible] at UC Berkeley. And a lot of my collaborators, Ryan Canolty,
Maya Cano, Roby Duncan, Adeen Flinker, Josh Hoffman, Lavi Secundo, Amital Shenhav, and
Avgusta Shestyuk. And NIH, funding agencies.
Anyway, thanks. [applause]