Science in the Hands of Scientists: The Meandering Path Toward Truth


Welcome to Science in the Hands of Scientists.

I'm Carl Zimmer and this afternoon we're going to have a conversation about how science happens.

You know we have to bear in mind science is not something that happens in the abstract.

People do science.

And so scientists are not gods who are impervious to the forces that shape us.

They're human like the rest of us and they deal with all sorts of factors that everybody

else does.

It can be everything from pressures to get a job, to get funding.

It can be internal biases, there can be the thirst for glory.

There are all sorts of things that can shape how scientists decide to do science and what

they choose to do.

And that's what we're going to be talking about this afternoon.

We're going to have an open discussion about the scientific process and how it works from

the classroom to the lab to the journal and then out into the rest of the world.

So let me start now to introduce our panelists.

Our first guest this afternoon is a principal investigator at the Tufts Center for Regenerative and Developmental Biology.

Please welcome Dany Spencer Adams.

Our next guest is the K.D. Irani Professor of Philosophy at City College of New York.

He works on the philosophy of science and the nature of pseudoscience.

Please welcome Massimo Pigliucci.

Our third guest today is a distinguished writer in residence at New York University's Arthur

Carter Journalism Institute.

He's the co-founder of Retraction Watch.

Full disclosure I have written about his work for the New York Times.

So please welcome Ivan Oransky.

And last but certainly not least, our final guest is the editor in chief of Scientific American.

She also oversees the journalists at both Scientific American and the Nature journals.

Please welcome Mariette DiChristina.

So now that I have you all out here, let's just start by talking about how you make a scientist.

And maybe Dany I could start with you.

I mean so you have a lab and you are bringing in students at Tufts who want to be scientists.

So how do you take them from being students to people who are going to do science in the

21st century?

I would describe it mostly as like an internship.

Most of it is just being thrown right in.

Here, do this and there's a lot of supervising, teaching specific things, specific skills,

specific techniques, protocols.

And then there's the other, more meta kind of approach, where you have to teach what you can call scientific thinking, or the scientific method, but really the idea that science is simply one way of learning things.

And it's been a very, very powerful, very good way of learning things, and it's in some ways not human-like.

I don't want to say inhuman.

I don't want to say super human but humans, and I'm very much looking forward to the rest

of my panel to talk about this, but humans have what's called confirmation bias and we

tend to recognize and remember the things that confirm what we already believe.

And I'll just throw this out: think about the newspapers you read.

Most of us, me included, are not looking for new information.

I'm looking for confirmation of what I already believe.

And you can say I knew it.

I knew it.

Yeah I knew that he shouldn't have been elected.

No names.

And I think that one really useful way to think about science as a way of learning is

it's about overcoming confirmation bias.

So it's about phrasing a question in a way that allows you to ignore what you believe

or even say OK, here's my idea.

Now I'm going to try and prove myself wrong.

And I can't think of any other way of learning or thinking where that's what you do where

you say here's my terrific idea and now I'm going to try and kill it.



Learning the intellectual skills of thinking that way and asking questions that way is essential, and controls are all about testing whether what you saw was actually because of the injection you did, or actually because the air conditioner was on, or because it was Thursday.

And I think that is what makes it so powerful: it's about overcoming confirmation bias so that you can see what's there rather than what you were expecting.

Do you find that you have to push them further when they're satisfied that they've done enough to try to prove themselves wrong, and conclude, looks like I'm right?

Yeah, absolutely.

It's crushing, it's ego crushing.

You come in and you're so excited that you're going to get to have wonderful conversations and ask fantastic questions, and you're going to find the answers and you're going to change the world.


First of all it's not like that.

It takes a very long time, but you have to constantly say: alright, how would you disprove this?

How would you disprove that?

What's the proper control?

How do you know that, you know, how are you going to interpret it if you get one of these

three possible results?

And what are you going to do if you get something that you didn't predict at all?

You have to teach that, and the only way to teach it that we know of at this point is simply to do it.

So students come in.


You know when you get them started and then you say OK no, that's wrong and that's wrong

and that's wrong and that's wrong.

And there's a saying: if you're not failing most of the time, you're not working hard enough. And that's a hard thing to get students through.

Absolutely yeah.

And Massimo, you have a long career not only as a philosopher but as a scientist, and have trained your own...


So I was in evolutionary biology.

I tend to agree with most of what you just said.

I would go even further: I'm a little more pessimistic about how the whole thing works.

So ideally, actually, I think that a lab works a lot like a Renaissance workshop, where you go and learn from Michelangelo or Leonardo or whoever it is.


It's a workshop you come in and you learn because you are in constant contact and interaction

with not only the master, that would be the PI, the principal investigator of the lab

but also the people that have been there for a long time so senior students or post-docs

or things like that, right?

So you are immersed in an intellectual environment and you're doing things every day and that's

how you learn how to do science.

But honestly, as much as it's a good idea to be aware of it and to try, I don't think we can individually overcome our confirmation bias.

The way it happens most of the time is precisely because the rest of the people in the lab,

particularly the principal investigator tell you no, that's not going to work.

Why don't you think about it that way.

And of course, who does that for the PI? Who watches the watchmen?

That's peer review outside of the lab. The PI himself or herself has to get grant proposals approved, and those have to go through the process of peer review.

Even if they are approved and the research gets done, then you have to publish the results.

And yes it is ego crushing, I mean it's rejection over and over and over.

And you just have to grow a thick skin and deal with it.

But that's the way you learn it by doing it.

So when you look at the cohort of new scientists coming out, freshly minted Ph.D.s in 2017, what's it like for them starting out now in science as opposed to 1997, or 1987?

Is it the same?

Is it different?

In some respects it's the same: the excitement is still there.

And it's a great time for science, for biology, for physics, for other disciplines, but at the same time you're also very conscious these days that making a career in academia is not easy.

Typically, when we advertise for one position, there are like 150 applicants.


And I guarantee you a lot of them are actually qualified for that position.

So you get into a period of possibly several years where you go from one postdoc to the next, or from one adjunct position to the next.

And if you're lucky, you're actually going to make it, by the time you're in your mid-thirties or so, into the beginning of a tenure-track position.

So things are changing in that sense.

The job market in the sciences has gotten worse over decades.

The real peak period was in the 1960s and early 70s, when you could just go out and get a job.

It hasn't been like that for a while and it's increasingly less so.

But we also have a lot of talent.

We have a lot of people who do it because they're excited about it.

You really have to want to experience the joy of being a scientist in the lab.

You don't do it for the money because it doesn't really pay that well and you don't do it for

the benefits because those come very late in your career.

You have to sacrifice a number of things.

You do it because you love it.

So Ivan and Mariette, I mean you're kind of more on the outside looking in on

this culture of science and I'm curious if you see differences now in what it's like

for scientists to be starting out and launching their careers in science now as opposed to

like when you started being journalists.

I don't know which one of you wants to jump in?

I can start and then Mariette will probably say something smarter and more scientific

than I will, how's that?

If I may, for 10 seconds: I was just struck by what Dany was saying, because I think about confirmation bias a lot.

Everything that you said about scientists and how you really ideally want to wake up

every morning going how can I prove myself wrong?

Well journalists are supposed to do that too.

And I am not going to claim that I do that; I wake up usually and, you know, check...


But you know you you want to wake up and you need an editor you need someone who's saying

well wait a second.

You know yeah you have that document but that document doesn't really say what you think.

So I just want to reflect on that for a second.

I think that there are two, I wouldn't call them changes necessarily, but they've sort of accelerated.

And then I'd be interested to hear what others think about this, but there were sort of two phenomena that I think have accelerated.

One is what I would refer to as a sort of metricization.


So everything is now measured.

And this is true in all walks of life; it's, if you'll forgive me a little bit of not philosophy exactly but political science, a sort of neoliberal idea that we can measure everything.

And so that includes measuring impact of research and that includes something called the impact

factor which is...

Describe how you measure the impact of somebody's study.

How do you know?

So, a lot of it is based on, and I'm not here to defend this but just to say that this is what happens, how often other scientists are citing that work.

So if a tree falls in the forest and no one hears it did it really fall?

If a paper is published and no one cites it does it really matter?

And again I'm not condoning that but that is what matters.

And so you turn that into a number; you turn it into a number of numbers, actually, an ever-growing forest of numbers, where you have something called the impact factor.

So it's the number of times, on average, and I'm simplifying here, that articles in a given journal are cited. And a lot of people, and I'm kind of in this camp a bit, say that this has really had a dramatic effect on science, and not necessarily in a good way, because everyone is pushed to publish in these high-impact-factor journals.
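As an aside not from the panel itself: the simplification Ivan is gesturing at is usually the two-year journal impact factor, computed as citations received in one year to a journal's articles from the previous two years, divided by the number of citable items the journal published in those two years. A minimal sketch (the journal and all numbers here are hypothetical):

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year journal impact factor: citations in year Y to articles
    published in years Y-1 and Y-2, divided by the number of citable
    items the journal published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 480 citations in 2017 to its 2015-2016 articles,
# of which there were 120 citable items.
print(impact_factor(480, 120))  # → 4.0
```

The point of the panel's critique is that this single ratio, a property of the journal, ends up standing in for the quality of every individual paper and researcher in it.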

And so they publish a lot of really good science, but what it also means is that tenure committees, promotion committees, anyone who helps you get ahead or whom you need to get through, peer review committees, you know, Massimo was talking about peer review.

They all are looking for these numbers.

They're not necessarily reading the entire paper anymore which is what you would hope

they would do.

But they're just looking at, well, is it in this journal or that journal, or what have you.


The other thing, very quickly, that I think is happening alongside this and really feeding into it as well is that there was a blip, and now it's looking more and more like a blip: from 1998 to 2003, the NIH budget doubled. And that's not the only budget...

The National Institutes of Health, the main funder of biomedical research in the US.


And this was a good thing for science; it meant that there was more funding, obviously. There was twice as much funding over that period of time.

But that stopped.

And in fact, you could argue, if you account for inflation, that federal funding for the NIH has declined.

And yet all of those graduate students and postdoc trainees were all working somewhere, because labs just staffed up.

It was like some kind of boom; it really was a boom.

Well, over the past, whatever it is, 14 years now, we've got many, many more of them than we had.

And it hasn't been as good, as Massimo pointed out, since, you know, the 60s or early 70s.

And so you had more and more people competing, all needing to get all these numbers.

And lo and behold, we're seeing a lot more competition. A little bit of competition is a really good thing; too much competition can really harm you.


Thanks very much and by the way it's fascinating just to listen so far.

Absolutely wonderful.

So I have two things I would like to add, just reflecting on the ideas of confirmation bias, on increasing competition, on impact factor, in two areas.

First of all, I think a thing that we often don't recognize is publishing's role in this ecosystem, both at the popular level, the Scientific American level, where I'm editor in chief.

One way we can either foster haste in publication or not is how we jump on "this week salt is bad for you; this week it's good for you," you know.

Can we take the longer view, provide context, not be slaves to the embargo system? That's the system where we all agree that a paper is going to come out a period of time from now, and journalists like me and like Ivan get a few days' head notice, and the idea is that we will all publish our stories at the same time, having had some equal access to the experts to do that without haste. But it also comes with a whole host of other challenges, which we may or may not come to later.

So that's that's one aspect.

So mass publications like Scientific American and then the journals themselves.

I am not a journal editor, although I work at a large publishing company, Springer Nature, which publishes around 2,000 journals.

Including some of the biggest ones like Nature.

Including some of the ones with very high impact factor very selective journals like

Nature and the Nature branded titles.

And how do we set standards there that in fact slow down that haste to publish and enable a little more checking and accuracy: reproducibility checklists, checking methods and protocols, supporting the research community so that the pressures people like Massimo are noticing are somewhat eased and slowed down a little bit.

And then I want to set the lens a tiny bit more broadly outside of publishing.

So how do we as a society look at science and treat it?

Do we jump for the latest headline, or do we start with our kids when they're very, very young?


Because researchers like Alison Gopnik will point out that we're all born researchers, and you don't have to believe me: just watch any kid in a high chair dropping different things off to see how they fall and bounce or whatever.

So we're all born experimenting.

But over time in schools we're taught through a series of classes that science is a set

of received wisdom.

And not something that you try and fail at, because, watching my own girls, one now in college, one in high school, every experiment they do in every science class has the right answer.

So this is kind of where mass society is left.

So I encourage us to look through those two lenses: one is the role of publishing in the ecosystem, and the other is the broader role of society at large.

I have to say, as a journalist, when I was starting out there came a point, and it took a little while, for me to actually appreciate how much of the work of a scientist is filling out paperwork to get the money to do science.

And like I just sort of thought like well somehow like the money just sort of like shows

up in a box or something and then the scientist does the science, you know.

I mean, when you read a biography of Einstein, you don't read about him filling out an application to the National Science Foundation.

You want to know how he came up with E = mc².

But I just remember the one time it really hit me: I went to interview a scientist, a leading evolutionary biologist.

And I come into his office and he is like hold on, I can't talk with you just yet.

And he was on a conference call with his co-authors, who all wanted to get this grant, and they were trying to figure out why the essay they wrote to get this money didn't fit on one page.

Like there was something wrong with a font and they couldn't figure out what was wrong

with the font.

Because they had like two or three words spilling off the end of this PDF, and they're like, ahhhh...

If we don't get this all on one page there's no chance we're going to get this money.

There's no chance we're going to be making this great insight...I'm just looking at them

like I, I didn't come here for this.

And I had to wait like 20 minutes for them to fix their friggin font and apply for the grant, and then we could get to the science, and I'm just sitting here wondering.

I mean that's the sort of stuff we tend not to report on.

But unless obviously like really....

I mean that must be a big part of what goes on in terms of like your time and then also

how does that affect what you're going to decide to study?

Yeah it's a very frustrating part of it and it's something that I do wish more people

knew about because your tax dollars put me through school.

You trained me to be in the lab, how to ask a question, how to design an experiment, and I probably spend 50 percent of my time writing the same thing over and over again, changing the font, changing the format, making sure this thing is filled out and that's filled in.

And you know we have a rule never ever update your software within one week of the due date

because it will get messed up.

Don't send from a Mac to a PC, because it'll come back wrong, and it's very frustrating to deal with that.

And the bigger the lab, so the more productive the lab should be and can be and is, the more time the PI probably spends. Sorry, PI stands for principal investigator; that's the big scientist in the lab.

The person who gets all the credit even though the work is done by many people.

60 to 70 percent of their time is spent doing the paperwork, writing the story.

And on top of that, the community, and I think the journalistic community as well, is aware of these issues and is trying things.

Are there ways that we can get around this? For example, the journal Development, which is the big journal in my field, has recently announced that you can submit in any format you want.

And they will deal with it.

They have decided that reviewers are capable of reading a paper even if it's in a different font than the usual one.

That's a really nice thing.

Other journals are working on this too. There are some journals now that are connected up, so that if you don't get into the highest-impact one, you can have your paper submitted to the next one down without doing anything else.

Saving you time that you can spend on science.



So trying to mitigate some of this silliness that we spend our time on.

But even when you spend all this time, I mean, look at the percentage of applications that are accepted and lead to a grant...



I mean, that's the other problem: there are limited funds.

And the competition is intense.

How does that affect the science that you decide to do?

So the big granting agencies: the National Institutes of Health, of which there are 18, I think, and then...

By the way, the more important problem, I think, is the National Science Foundation. So NIH is biomedical; NSF is everything else.

Everything else, and its budget is very small compared to NIH's.



So right now, I think, in my field, at the places where I apply for money, you have to be in the top four or five percent of your peers. People who understand what you're trying to do rank the proposals, and then they decide, and you have to be in the top four percent.

And if you're not, you lose; that's it.

There goes your salary.

And that's hard.

You're competing with people across the entire nation to stay in the top four percent.

It's one thing to be an A student in high school.

It's quite another to be the one A in thousands.

So what these agencies do is put out these things called RFAs, requests for applications, where they're saying: alright, Congress has decided that what we should spend our money on this year is mapping the brain.

And so they'll say, OK, we're going to set aside $50 million for people who want to map the brain, and then they put out these RFAs, and you go to the website and you think, well, OK, I'm a brain person.

I'm going to go and I'm going to find out.

Is there money out there for doing this thing?

Well, it turns out there isn't; it turns out that some congressperson got it put in there that what really matters is this part of the brain.

You know if you're studying the cerebellum you've got a much better chance but if you're

studying the cortex we're just not interested this year.

So yeah you might very well switch.

And so it ends up ultimately being guided by Congress.

It's up to Congress to say, this is what's on the agenda now.

This is what we think is important, and the scientists are becoming more savvy about this.

And there are a lot more scientists now going to Washington, and we now get trained in how to lobby for science, because it loops back around to your passion: are you going to be able to follow your passion in the direction that you, as the trained person, think is the best direction to go, or are you going to have to fit it into something that came down from Congress?

How do you think about how this fierce competition affects the papers that get published and then get reported on?

Do you see that do you see an influence happening there?

A couple of things I'd like to add to what you guys have been talking about for the last few minutes.

So at NSF, in my field, we're talking about a funding rate of five to six percent.

That means that when I was an active PI, I had to write a grant proposal every semester.

And two out of three would probably be rejected, which still was much higher than...

That's a fabulous record.

That's really good.

But that was frustrating.

That was a lot of time. And yes, you're right, you also learn how to lobby.

That's another chunk of time that goes away from the science, doing something that you are not really set up to do to begin with.

But we have to realize that there are a couple of things going on.

First of all this is actually fairly new and also it's not unavoidable because there are

other ways of doing things.

So it's new in the sense that NSF and all the major federal agencies were created after

World War Two.

And that's why Einstein didn't have to do that sort of stuff, or Bohr didn't have to do that sort of stuff: those agencies simply weren't around.

Even after they were created, initially there was a flood of money; after the war there was economic expansion.

And science, especially physics, had a lot of cachet.

And so they were just getting money.

As you said when you introduced me, I'm the K.D. Irani Professor of Philosophy at City College. Well, K.D. Irani was hired, like, four decades ago at City College because Einstein met him at Princeton, and he wrote to City College and said: I like this guy. You should hire him.

And he was hired.

I guarantee you that's not going to happen today.

It's not going to happen.

It turned out to be a good decision, but nonetheless.

So all of these are actually fairly new and it's gotten much much worse over the last

20, 30 years or so.

But there are also some very bizarre practices, typical of the American system, that you don't necessarily find in other countries, even in other Western countries.

So NSF, for instance, has for a long time concentrated more and more money on a smaller and smaller number of laboratories.

They go for the big project.

They go for the big things.

One of the reasons is as you were saying a minute ago because that sells to Congress.

And I can tell you a story that occurred a few years ago that I thought was hilarious.

And at the same time extremely sad.

So I was invited to an NSF retreat with a bunch of other ecologists and biologists.

They told us, look we want to give you big money.

But the way to give you big money is to attach it to some kind of big project because otherwise

Congress is not going to do it.

And so they said you know try to tell us.

Think about it for this entire weekend.

Think about the equivalent in ecology of, let's say, an astronomical observatory, or a satellite, or a particle accelerator.

So let's say that we want to give you not the usual 500,000 or 600,000 dollars but, you know, 30 million dollars.

What would you do with that?

And it was hilarious, because the answer was: I could hire 200 postdocs, because most of the work in ecology is field work.

I don't need an observatory or a satellite.

What I do need, however, is a lot of people in the field doing the actual work, but that isn't the kind of thing that sells a project to Congress.

Other countries do it differently, although unfortunately more and more they're following the American model, because America is capable of exporting a lot of good things, and also bad things, and the rest of the world somehow buys it.

So Canada and most of Europe, for instance, have a different system, where they give smaller grants to a much larger number of labs.

And the way it works is you write much smaller proposals: instead of a 15-page proposal for NSF, you have a three- or four-page proposal, and all you really have to do is maintain a certain publication record.

If you can show that over the last three years you have published a decent number of papers in decent journals, and you now have a new idea you want to pursue, they're going to look at it and go: yeah, sure, good enough. We're going to give you the money, and we'll see how it goes three years down the road.

That's not the case for NSF and things like that.

So there are different ways of structuring the system, and I think part of the problem is that the American system is the one that pumps more money into scientific research than anybody else's, and so we think of science the way it's done in the United States.

But it's not the only model.

So what's your perspective on it?

It may pump more money than many others, but the sad fact about U.S. research is that funding overall has been roughly static since the 1980s, and trending down.

And I actually get quite concerned.

I like science that provides solutions as much as the next person, but we know one of the challenges, and why Nobel Prizes take so long, is that basic research may take decades to become applied.

And surely Einstein, who was mentioned earlier today, and this is something that maybe a lot of folks in the audience have heard already, was certainly never thinking, when he was working on the relativity theories, about them being applied in my iPhone with my GPS, which of course we use every day without thinking.

But even iPhones have a couple of dozen innovations in them that nobody thought would be applied later on.

This is one challenge I see.

Some interesting models I also like are in places in Europe, where I think there's some great leadership on this: funding the researcher rather than the research project, which I think is encouraging.

So how would that work?

So a researcher has demonstrated, through some initial work over time, that they have some interesting lines of inquiry that they will be pursuing, and they get a block of money for maybe five years at a time, instead of one study, then the next study, then the next.

I think that's quite interesting.

I think that's one way to go about it.

Another, for the publication piece of it: you might decide that publication will be OK as long as the methods and the statistical analyses you're going to use are agreed beforehand.

And one of our relatively newer journals, Nature Human Behaviour, is looking at this sort of model as well, so that researchers know, when they've submitted something, that we can do a kind of pre-peer-review, almost, that tells them that when they get through it they will get to publish something.

So, models like that. And I'm also thinking of the Broad Institute of MIT and Harvard, and how they have staff scientists to help with the work and remove some of the admin burden that many of the researchers are shouldering.

You mentioned grants, and listening to people as they're futzing through the grants.

I think the long story short of it is that there are so many elements that challenge

research and we're going to have to take kind of a multi-pronged approach to solving all

of these various problems.

I want to switch gears a little bit.

Ivan, maybe you could tell us a little bit about Retraction Watch.

I want to talk with the panel as a whole about some of the things that you expose as you run this site.


So why, as a journalist, would you say: hey, you know what I'm going to do, I'm going to set up a website dedicated only to retractions of scientific studies?

Like why why would you do that?

That's a great question Carl.

If I had the answer I probably wouldn't have started it.

So Adam Marcus is my co-founder.

He and I launched Retraction Watch not quite seven years ago now; it'll be seven years in August.

And to be honest, the very basic idea was that there were really two guiding principles, or sort of two reasons, why we were interested in doing this.

In particular Adam had been reporting on retractions for a while.

I'd been doing the same but credit where credit is due, he had broken a couple of big stories.

One in particular involved an anesthesiology researcher who ended up going to prison because of scientific fraud.

It's a very rare phenomenon, a very rare event.

But that of course grabbed everyone's attention.

In fact I was at Scientific American at the time and we reported on this after Adam did

and all that.

And so one, that sort of speaks to the fact that often, scientific retractions, maybe

I'll just back up and explain what they are.


Because I think for many people, including me until about seven years ago, it wasn't quite clear what a retraction was.


This is sort of, if you will, and I'm going to mix metaphors, I definitely shouldn't do it with Massimo here, but this is the nuclear option of self-correction, or any kind of correction, in science.

So you're saying not only was there a problem with this paper, this finding, maybe there's an error here, but you shouldn't trust it.

You shouldn't rely on it at all.

You shouldn't cite it, or if you do cite it you should make it very clear that it's been retracted.

It is, again, retracted from the literature, withdrawn from the literature.

Guidelines say it should still sit there in the literature, marked almost with a scarlet letter, in fact often a red stamp that says retracted on it.

And so that's what retraction is and what Adam started seeing with these retractions

from this particular anesthesiologist was wow there's a great big story here.

And as a journalist I mean there are investigative journalists who I admire and just want to

be one one day but really Adam and I think of ourselves as finding stories that are hiding

in plain sight and retractions are a great example of that.

So here's a bunch of retractions.

They seem very arcane.

Why is this paper being retracted?

Oh wait a second this guy's made up all the data for 25 different studies.

So there's that line at the end of a lot of Hollywood movies, you know, no animals were harmed in the making of this film, and so it was sort of like that: no patients were harmed in the making of this study, because there weren't actually any patients in this study.

So that's sort of a different take.

So there was that element.

But the other thing we noticed was the retraction notices themselves.

This is not a very transparent process.

I mean the process is not transparent.

The outcome is not transparent.

So the retraction notices themselves: some of them actually told you what happened, but a lot of them were sort of mealy-mouthed.

They wouldn't really tell you the whole story, or they would actually be wrong.

They would make something sound like it was an honest error when in fact it was fraudulent.

And it turns out, we've now learned, you know, a few years after we started the blog, that A) the number of retractions has dramatically increased; it's far outpaced the growth in the number of papers being published. I think one of the times you and I talked for the Times it was about a study that came out showing that the number of retractions went up tenfold, from about 40 per year in 2000 to 400 per year in 2010, and it's continued to grow.

That's out of about one and a half to two million papers so it's still a very rare event.

And you know the other thing that's happened is that we know that now two thirds of them

are due to misconduct or due to fraud.

So there's the sort of US federal definition of fraud, or misconduct, which is something that has also been exported to other countries, there's a lot of agreement on that, but it's FFP, that's the, what do you call it, the acronym.

So it's falsification, fabrication and plagiarism.

Plagiarism hopefully is self-evident to everyone in the room.

But we can talk later if it isn't.

But falsification is, you did an experiment but it didn't go exactly the way you think it should have gone to prove whatever it is that you're trying to prove.

So you fudged the results a little bit.

Maybe you literally used Photoshop and took out that little band that shouldn't have been there, a little spot.

Oh, I'm sure that was just an error, you know, the lab tech didn't have enough coffee that morning, and so I'll just erase that one.

That was an outlier.

The ultimate in confirmation bias.

That's complete confirmation bias.

I know I'm right.

I'll just doctor the photograph, just so everyone knows.

Right, and then the other half is fabrication where you actually make it up like this guy

Scott Reuben did.

And so that's the sort of triad.

We've seen a lot of other things happening.

Crazy stories about people doing their own peer review.

They actually managed to do the peer review of their own papers.

I'm sure you're all familiar with peer review.

The idea is that it's been done by a peer.

Now I happen to think that I'm peerless so there are no peers, but you actually are supposed

to have someone else do that.

It turns out it's not a reflexive property.

So there have been just about 500 papers retracted because people figured out a way to either

peer review the paper themselves or have someone sort of do it for them so that they could

then peer review their paper and not in the sort of usual way that we all know that happens.

But in a directed way.


Mariette, what's your feeling on, sort of, you know, the issue with retracted papers, and, you know, perhaps reflecting these pressures we're talking about?

Certainly acknowledging all the pressures, and actually I just want to put in, you know, praise for Retraction Watch and for scientists who, you know, let's not forget science is

a process.

I talked a little earlier about how in early school years we sometimes treat it as, you know, a series of received wisdom.

Bits of facts that everybody knows and that are always true.

But actually one of the things that's very wonderful about science is that it tries to

embrace its corrections and self corrections and so on.

And I also thought, listening to your excellent outline Ivan, that a thing we should mention for folks who may not be familiar is, you know, the fact that you can reproduce something

doesn't make it right.

The fact that you can't reproduce it doesn't make it wrong.

There are lots of things that happen in between.

Problems with your reagents, problems with the techniques, with the methods used in the study.
And this is why I talked a little earlier about what publishing outlets like mine can help do. For one thing, in our journals we removed the restrictions we used to have on the length of methods sections.

We were talking about fitting a couple of words on a page; we used to have restrictions, and now online the methods can be much longer and provide more detail, so other researchers can help provide that course correction, or correction, by checking something.

Or providing protocols or providing the raw data which we now increasingly hook to the

figures that are in the papers as well.

All of these help the scientific communities police themselves.

You're quite right, of course there is wrongdoing, and science is ultimately done by humans.

And they are flawed every so often but there are these larger factors at play as well.


And even if people are not committing fraud or misconduct or fabrication, I mean, there's also the issue of, do you have your students challenge themselves?

Or, you know, as I said, like, whoa, I've got the value I want, got to go publish, you know, rather than take...


Known as p-hacking, right exactly.

And so I mean do you see that as a pressure like you know like.

I mean maybe young scientists don't have the luxury of spending another year or two saying

that I don't think that's quite right.

Let me make sure that my results are good.

A couple of comments on some things I've heard here.

One is that another one of the perhaps unique things about science is you don't have to

be right.

How do you mean?

So, science meanders.

I love the title of this salon.

An adjunct to that point is the way scientific studies are reported, the structure of a scientific paper: an introduction, a section that describes how you did it, a

section that describes what you got and then there's a whole section called the discussion

where you put in what you think it means and you may be wrong and that's fine.

That's not the criterion.

The criterion is, did you do the right thing?

Did you answer that?

Did you do the right controls?

Did you use your techniques properly?

And the results, this was described to me when I was a student as the part of the paper that will never change.

You did exactly what you said in the methods and materials section, which is supposed to be where you provide enough detail that someone can replicate it.

And then there's the results.

That's what happened in that situation on those days with those reagents and dadadada.

And it's one of the things I think that makes these papers very boring because, you know

what's the fun part?

The fun part is what does it mean?

What does it tell me?

How do I interpret this?

But it's absolutely critical.

And I'm not the person who's going to figure out how to do this...

But that's the stuff that gets lost when you switch to journalism.

Where you need to have a headline.

Where you need to sell a paper and you have exactly two inches for your headline.

And it's... but the point I wanted to make is that this structure of the paper, that convention, that tradition, is another one of the ways that scientists police themselves.

Is it going to come out right?

You say, here's what I did, and it's open game.

Anybody who wants to and who has the money and equipment can replicate it and disagree

with you, and you have this whole section that's your opinion, and it's separated out, and

it's this is the editorial part.

Here's what happened.

Here's the editorial part.

And and we build on each other.

We don't have one person who's studied, you know... we have Newton and we have these wonderful laws and that's great.

And he was wrong.

He was wrong when you try to then apply them in other circumstances.

But science meanders and science builds.

And of course you end up, you know, you want your grant.

You want your headline.

You want those things but there are so many little places built in to try to keep us honest.

To some extent these are now a question of incentives and incentives can be structured

again differently.

So for instance we talk a lot about replication but of course there's no incentive whatsoever

to replicate somebody else's results.

Why? So explain: why wouldn't scientists want to spend their time trying to replicate what someone else did?

Because it's all about the adventure, right.

You want to be the first one to get there.

Not the second one or the third one or the fourth one.

And you can't publish it.

Usually you don't publish it.

Well why can't you publish it?

I mean

Because most journals especially the high impact journals want the novel stuff, the

sexy stuff, the things that nobody's done before.


Though that's slowly beginning to change.

I mean there are some journals that accept replications, but those are not the ones that you build your career on.

You don't become famous as a scientist by saying, yeah, I checked this other guy's work.

It was correct.


It just doesn't pay to arrive as number two or three or four.

So that's one thing in terms of incentive structure.

There's also the issue that the publish-or-perish culture has arguably always been there, at least since World War Two, but it has gotten worse precisely because there is a large number of scientists, a large number of journals.

How many journals do you guys publish?

At Springer Nature it's about 2,000.

So you alone, Springer Nature, publish 2,000 journals.

That's one publishing house.


So the result of it is, you know, you were saying earlier, if somebody publishes a paper and nobody reads it or cites it, you know, what was the point?

Well, the thing that really opened my eyes is that a few years ago there was a study showing that of all the papers published in major population biology journals, the really top-notch journals, only one third are cited within five years of publication.

That means, let's get this straight, two thirds of papers published in top journals never get cited, period.

That's a lot of waste of money and energy.

The question though is, I'm sorry to interrupt you, but should citations... I mean, obviously there are many alternative metrics as well.

Citation is important.

You know, publishers also do have places, increasingly now, for publishing negative results; Scientific Reports in my own journal group will allow that kind of publication as a help for science.

So incentives is a great point.


So you can change the structure of the of the incentives.

So one of the things that some people, including my colleagues and I, have been doing when we get these 150 applications, for instance from young people, postdocs, graduate students who want a new job: instead of doing the bean counting and saying, OK, how many papers do you have, we say, OK, well, that's nice.

You've published 20 papers in the last five years; now pick your top three.

Not the ones that have been published in the best journals, but the ones you think made an impact, and I'll read them.

And the rest I'm just going to ignore because it's just bean counting.

That I think brings a little bit of quality control back into it, instead of just doing the simple thing...

Rather than just churning out one paper after another and just measuring by the pound.

And then I mean Adam and I have applauded efforts like that because it actually gets

back to let's read the paper and let's understand what he....

And who cares what... you could almost blind yourself to what journal it's in.

With all due respect to Nature.

I mean Springer Nature has every range, top to bottom, in terms of...

I don't mean bottom.

But you know, a range of more selective or perhaps less.

From highly selective to sound science.

But if I can be just sort of dark for a second, just to put things in perspective: you know, we're talking about an American university.

I think in Western Europe you see a lot of movements to do that, and Mariette was talking about some different funding models that are, I would argue, progressive in terms of that.

Certain parts of the world are going in exactly the opposite direction.

So for example in China you get a half salary bonus if you publish in a journal with a certain

impact factor.

It depends on your field and what have you.

You know...

So you get a paper in Nature, you're going to...


Forget it.

I think they'd buy you a palace.

But I'm talking about, you know, a sort of respectable...

OK, respectable.

You get a paper in, just a respectable journal and you're going to get six months of salary.

Is that what you're saying?

Right, and you can keep hitting the pinball, you know.

I don't know if that metaphor works anymore today, but you can keep hitting that, you know, as many times.

I'm sure there's some limit, what have you.

The point is, I mean, I like to think of myself as a fairly, you know, I'm-going-to-do-what's-right, ethical person.

I've got to tell you, if I was offered the opportunity and had to figure out a way to do it, I might sort of...

And even if I thought, well, I'm still doing the right thing.

It's clearly going to do something.

And every country is somewhere on that spectrum, you know; for example in the UK there's that, and it's changed a bit.

So this is maybe a little bit unfair, but there are these sort of evaluation frameworks where departments get more money, or your university gets more money from the government, if you have an average impact factor in a certain number of papers, and this sort of thing.

So I applaud all of the efforts that people are making to get away from that.

I think we just have to sort of look at this globally and understand what's happening.

So before we take some questions, I just want to switch over a bit to focus more on that next step, from the journal to the rest of the world, as stuff spreads out through the news, through the magazines, the newspapers, online and so on.

Are we sort of on the outside of this dynamic?

Do we have nothing to do with it, or are we kind of...

Are we part of the problem?

We can get to the solution later.

Are we part of the problem?

We, who?

We, journalists. We will get confessions from journalists first and then the scientists


Just like the spectrum of anything.

Yeah, sure.

Many of us are click bait chasers.

Many of us are publishing the latest headline.

Many of us are....

So we're going to look for that paper with the flashiest claim.

Yeah, it happens.

I'd be lying if I said it didn't.

How do we approach it, those of us who have the privilege of, you know, in the case of Scientific American, let me put that hat on as editor-in-chief there.

One of the first things I did when I became editor-in-chief of Scientific American half

a dozen years ago was instead of having the scientists write more in that received wisdom

kind of way which I was describing earlier applied to education.

One of the things I should have said earlier, in case people don't know, is that about half, maybe more, of the feature articles in Scientific American are written by scientists.

They're written in collaboration with our journalists so that they're accessible but

they're written by scientists.

About their work.

In the past we used to let the scientists write in kind of a lecture style as if everything

was solved and they would look over the past three or four years and it kind of came out

like here's all the received wisdom.

A thing that I tried to do, and what I'd like to think, I hope, is a productive way to approach it, is: tell us your story as if it's what I did last summer.

What was your question?

How did you try to explore it?

When you made a mistake what happened there?

How did you get past it?

Who did you talk to?

And so I actually invited the researchers, and increasingly we do this at Scientific

American, to speak from a first person point of view about them and their teams and embrace

the fact that science is meandering.

That we do make mistakes, that sometimes we see things in the data because of confirmation bias, or a mistake in a reagent, or a counterfeit, you know, technology that we bought that we didn't know about.

These things happen because it is a product of human endeavor.

But because science is a process and an evidence based one, eventually we right that over time.

Meaning we right in the sense of r-i-g-h-t, rather than write it out.

So I think that's one way we can contribute.

Take the longer view.

Don't just be slavish to the thing that maybe just happened that's the sharp headline but

try to provide the context.

And I think that's one place where at least the journalism world that we participate in,

we can benefit this process.

I mean Carl you asked me about creating Retraction Watch what made us want to do that.

I think one of the answers that I could also give is that we wanted to write about that

process of science.

You know, I have to say, so I was at Reuters Health, I was running Reuters Health at the time.
Reuters I'm sure you all know is a large wire service.

And Reuters survives, like most wire services do, on volume.

And our clients were very clear about how much volume they wanted, and I can give you the numbers but it's somewhat staggering and I don't want to scare people, but how do you create that many stories per week doing something other than writing about every study that comes out?

I'd like to think that we were in some ways writing for a professional, medical audience that understood that, and I like to think we put in the kind of context that you would see in Health News Review, which is a great site that gives ideas about how to cover medical studies in particular.

You know, the sort of triad of coffee, chocolate and red wine.


I think that if no reporter ever wrote a story about chocolate, coffee or red wine again

we'd all be better off.

I like two of those things.

The other one I don't actually drink.

That's OK.

But the point is, I wouldn't say I had whiplash, but I had a little cognitive dissonance, because here I was in my day job running, you know, not a website but a wire service that had to pump out the volume, because we were reporting on single studies, dozens of them every day.

And yet I was sort of trying to think a little bit more globally, and slowing things down, and thinking about the experience of science and what it all meant.

Maybe that was useful actually.

Maybe it sort of connected some dots for me.

But I think it would be great if more journalists did the sort of work that Mariette was describing, which I think this salon is very much about, which is how science really works.

We all need to get excited about what science is doing for us, but how do you really include at least some of that.

Well I just want to touch on that before I let the scientists pass judgment on the journalists.

The thing is, I think when I became a fan of Retraction Watch and then started reading about some of these people that were featured there, you know, I started to see more and more of these longer pieces about some of the more notorious characters.

It was hard to look away, because in the most extreme cases people would have to retract, like, twenty, thirty... what's the record for one person having to retract?

183 actually.

183 papers from one scientist had to be retracted, and you start reading about these people, and they're just sociopaths, and fascinating, and you're like, how did this person go years and years and years lying to everyone around them?

I mean that's... we love reading about that.

Is there a risk that we're going to swap this sort of naive idea of science, scientist as hero who is only about the truth, are we gonna swap that for, you know, scientist as sociopath trying to get the next grant?

I mean is that a risk?

Absolutely it is.

And it's something that, you know, I wouldn't say we thought of on day one, but it hit us, sort of smacked us in the face, quite early.

There's a way that we try and combat that, if you will, both in our own minds and our own work, but also in terms of the public.

So one thing that has happened, and I don't know that it would surprise anyone to learn this, is that a lot of what I would frankly call anti-science advocates have seized upon what we do.

We're very popular with them.

Not all of them, but with some of them and so we're very conscious of that.

One of the things that we've done to try and combat that is to create something, and this gets back to something that Mariette was talking about in terms of trying to correct the record, doing the right thing: we have a "doing the right thing" award that we created actually with Stat, so it's me and Adam and our column there at Stat.

We handed out the first one just about a month ago.

And the idea there is to reward people who do the right thing by correcting the record

at some cost to themselves because, you know, correcting the record is great.

Very important.

But you actually may face consequences if you do that.

And I think we talked about what some of those consequences can be, and we call it, it's "doing the right thing," so it's called the "DIRT" award, which is almost the acronym for it, and sometimes you have to get dirty to clean things up.

We like to market stuff too.

I think that if you approach it and really try to tell those stories as well, I think you can at least over time tell the whole picture.

You know.

So before we turn to some questions, I just want to hear from the scientists about how you feel the journalism world fits into the issues that we've been raising, you know, the competition, the pressures and so on.

You know, I mean, is the way that journalism is covering it making things worse or better?

What's your perspective on it?

The preferred answer of the scientist is I never said that, the journalist made it up.

And sometimes that does happen, or at least there is some major difference between what comes out of my mouth and what comes into print.

More often what happens is the scientist would say, I know I said that, but I'd prefer that I hadn't.
You know.

We're always getting these, you know, people demanding to see our articles in advance.

I just want to check my quotes cause maybe I said something I shouldn't have.

Well, two of the things that are changing the equation: one, as you know, journalism is now in its own interesting situation, and I have friends who are journalists, and it's not easy to find or maintain a job as a journalist; often you're a freelancer.

That means even more pressure, even less time to check stories even more pressure to go

for the sexy stuff and all that.

So that's one side of the equation.

The other side is more and more young scientists especially beginning with PhD students and

post-docs and young investigators on tenure track.

They start actually writing their own stuff.

They write blogs.

You know I write two blogs.

So I communicate directly to the public.

That's interesting, because then I have journalists who come to me and want to talk about something that I published on my blog, not something I published in the scientific literature.

And so that I think adds an interesting... I mean, I don't know where it's going.

At the moment we're kind of welcoming it because I do think that there is a good thing to add

the voices directly of the scientist.

Scientists should be more conscious about talking to the general public.

After all as you were saying not only our education but most of our money for research

and so forth does come directly or indirectly from taxpayers.

And so it's just right that we spend some of the time talking back to those people and

say, hey, you know here's what I'm doing and why it is interesting.

So that's part of being a scientist.

That is part, I think, of being a scientist.

Now, historically of course this was a problem, because your colleagues will look at you and say you're wasting your time.

Either you're wasting your time, or you're not a good enough scientist and that's why you're doing that instead of writing another grant proposal.

I think that's changing.

You know, Max Planck famously said that ideas in science get accepted one funeral at a time, you know, because the old people die.

I think that's true also for other attitudes within science that is the old guard is retiring

and getting out of the way.

And so the new people have grown up with social networks and Twitter and blogs and things

like that and I think that actually is almost an unqualified good at the moment.

We'll see how that evolves for the next decade.

Dany, I'll give you the last word before we start taking questions.

I agree completely.

I think that, so, I don't begin to understand the constraints that a journalist faces.

You know, you have two inches to describe something that took someone six years.

And when you have to go from the boring details that make it science and make it trustworthy

and pick out whoo, this is the wow right?

We call it the gee whiz stuff.

You lose something, and it can come out wrong, and I know that when I've worked with journalists I've asked to see it, to check the facts, not to check that I come off a certain way.

But facts can change when you change the structure of the sentence.

If it takes me twenty words to say something, because I'm trying to be really accurate and really precise and not let it get blown out of proportion, and you have to trim it down to six, then it becomes a sentence where, you know, you used the word proof.

I used the word, I'll say, well, I have evidence that it's not this or that, you know, it takes...
Yeah, yeah, yeah, but you proved it

Right and then that becomes proved.

And now... you have a headline to write, and now people think that scientists prove things.

And that I think is difficult.

I don't know what the answer is except a very deep cultural shift and I hope things like

this festival will encourage it.

But there was a time when you needed to know science to be considered an educated person, and now you need to know the humanities, you know.

You can't say, you know, I don't like Shakespeare.

But I think that you shouldn't be able to avoid learning biology because you're an English major.


I think that we have to go back to a time where even if you do use the word prove, people

understand what that means in the context of science.

They can read that and say, oh, OK, I know what that means; it's just automatic that they will interpret it as, we have a lot of evidence that supports this idea, so we're running with it.

And if that was part of our citizenship, almost, I think that would help a great deal.

I don't know how to make it happen.


So can we...

Well, it's a little hard for me to see, but could we field some questions?

Oh there is an audience!

Hey look at that!

So we do have a microphone to pass out.

All right, so raise your hands high.

How about we start with that person.


How much of this problem in communicating about science might be due to the fact that you have some people, some philosophers, who would argue that scientific truth is a relative truth, simply as good as any other form of truth?

I'm talking about the relativists and deconstructionists.

I'd be interested to hear what the panel would say.

Well, since I'm the resident philosopher on the panel, I'll take that one, right.


So there is an issue with that, but I really believe, and I'm not usually an optimist, but I believe that that's done and over with, pretty much.

These were the so-called science wars, mostly in the 1990s.

There was a small but vocal set of postmodernist philosophers and sociologists, it wasn't just philosophers, who started going around saying, well, science is socially constructed, which of course is true.

We were saying earlier you know several times science is a social enterprise.

It is socially constructed, in a sense.

In another sense of course that social construction is not independent of the outside world.

It's very highly constrained by the outside world.

The response to that sort of extreme version of postmodernism actually came both from scientists, Alan Sokal here at NYU for instance, and especially from philosophers of science.

Philosophers of science were very strong in responding to that sort of extreme criticism of science, and I think actually we are in a good place at the moment between the two disciplines, because there is a good balance where the scientists themselves tend to realize that, yes, there is in fact quite a bit more of the human side to doing science.


There are mistakes, there is fraud.

There's all sorts of stuff.

All right.

Do you have the microphone for someone else?

So my question is about a topic which has been briefly touched upon, which is replicating research,

and how there are no real incentives for that.

Have any of you.

Do any of you have any ideas about how there might be incentives to replicate studies?

Perhaps work it into the system, the way a lot of professors at universities have requirements?

Maybe graduate students could be required to replicate research before moving on to original work?


I'll address that.

I want to qualify the claim that there's no incentive to replicate.


So all results are comparisons.

You're always comparing what you've got to something else.

And in many cases what you're comparing to is what the last guy got.

So you're replicating it as one of your controls.

So it does happen.

What you suggest is a great idea.

And one way of teaching and getting your students kind of up to whatever boundary you're pushing

is to have them replicate what's been done.

That's how you can learn a technique.

That's how you can really understand what the person, your competitor, found.

Do you agree with it, or don't you?

And you can use that as a teaching tool.

So there's ways of sneaking it in.

You can't just publish a paper where you replicate what everybody else did.

But often you can include the replication as that which you are building upon.

So here's where we started.

Yes we agree.

But now look at this.

So I think it's not as clear-cut as "we have no incentive to replicate."

You can use it.

Any other ideas about fixing the replication problem?

Well, if I can, let me consider two quick opposite sides of the same coin.

One is that, and it gets back to something Massimo was saying earlier, I would

actually argue that some people have made their careers, or are starting to, I think things

are changing, not so much by replicating but actually by failing to replicate.

Now that creates its own sort of bias, because if you become known as the person who always

goes out and fails to replicate, then you kind of show up to the saloon with, you know, a

couple of six-shooters.

But there can be a sort of incentive of its own to do that.

And the other, and I'm probably talking out of the other side of my mouth, since I was talking

earlier about how we have too many metrics: well, if we're going to be stuck with metrics,

we could make this into a metric.

I actually called for something like this, at a very high level, a bunch of years ago: a

transparency index, out of a given journal, maybe out of a given lab, out of a given field.

What is the likelihood that something actually holds up?

And if you turn that into a positive, if you say the higher it is the better, then maybe

you'll have something that sticks.

But again, those are not particularly well-formed ideas, to be honest.

What about on the journalism side? What could we do to let

people know that one little crazy-sounding paper in Science, say, may not be the last word?

That it may turn out to be wrong?

How do we build that process of replication

into the journalism about science?

Maybe I'm gonna sound like a broken record at this point, but part of what

we do when we choose what we cover is we celebrate things, in effect, because we've chosen them:

we've curated a story and presented it.

I think one way we do that is we portray the lives of scientists.

We portray what it's like in a lab.

The Times used to have a piece that I loved a lot, which was about the working life of scientists.

I've had many researchers tell me that you know you have no idea how long I spent writing

this software program so I could do this piece of research on cosmology.

It's all of that that we should celebrate as well not just the result that pops out.

So yeah, that's one way we can contribute.

We were talking a little about the discussion section in scientific papers, and how that's really

not the science of it, and yet it's the most interesting part.

So I'm a philosophy student, and a lot of what we do in classes when I'm doing research is

look at the discussion section of science papers, so I'm not really reading the rest

of the experiment, or I am, but only briefly.

I'm looking at what their conclusion was and what they're discussing.

So should we really be taking a lot out of that and writing our own papers off of

it, or should we be looking at the experiment side?


That's a great question.

The discussion section is part of the science.

It's the most educated mind at that point, the person who did the work and got those results.

At that moment it is the most educated guess, and it's often right.

You develop experience.

You develop, you know, you learn as you go.

So it's certainly part of the science and the interpretation is very interesting and

putting it in context is very interesting.

But I would say that you might be able to put a layer of depth into your understanding

of how we got to that if you look at the results, if you look at the part that is just here

is what we've got.

And then you say, you know, as part of the process that you go through:

How many different ways could I have gotten the same thing?

And then you go through, and then you go to the discussion, and you say, well, I might have

gotten it if A was true, but I could also have gotten it if B was true.

And that process forces you not to fall in love with one interpretation.

There's a danger in falling in love with an idea.

And we do.

We fall in love with our ideas.

It's very creative.

And I think that if you were to go and look at the difference between what's in the results

section, where they say this is what happened.

And how that gets turned into a discussion section where there's the philosophical underpinnings

of it where you're thinking about necessity and sufficiency.

And you're thinking about all those very basic intellectual processes that are

behind it.

I think that you might get something different and very interesting comparing the, you know,

"just the facts, ma'am" to "here's how I interpret this and here's what I think is important

about it."

I think that would be an interesting way to look at it.

But there's an additional problem there, right?

So there is a method problem if you will, which is scientific papers are in a sense

a piece of fiction.

Some of them actually are fiction.

But all of them to some extent are in fact a piece of

fiction, in the sense that it really didn't happen like that.

The structure of the paper is incredibly artificial.

There's this introduction.

You're saying, like, I did this and then I did this and then I had this great idea.

First of all, you don't even do that.

You write in third person.

They did that.

We did that.

Right, passive voice.

That's changing fortunately.

So it's like you're supposed to be distancing yourself as much as possible.


So this is a very structured thing.

The introduction, the materials and methods, the results, the conclusion and the discussion.

And they're supposed to have these really different functions, as if you're sort of pretending

that when you write the introduction you really don't know at that point how things

are gonna turn out.

But of course you've written the introduction after you've done the experiments, after you've

done the data analysis.

In fact, I don't know how other people do it, but the way I taught

my students to write their papers was never to start with the introduction.

You always start with the results.

Because you want to have the results in good shape and then you know the diagrams and tables

and all that.

After you've done that, then you write the actual narrative that goes with

the results.

Then maybe you go back a little bit to the introduction and the discussion, and you go back

and forth until everything fits together.

So it's a work of fiction.

Everything is put in a way that is not the way it actually went. I don't

mean fiction in the obviously problematic sense, but it does give the

impression, even to the scientists themselves, that there is a lot more objectivity, a lot

more rigor, a lot more of a sense that it had to be that way, than there really is.

Science is much more fun than that, but it's much more unpredictable.

Imagine if you were actually writing a scientific paper the way in which you write a novel

or, you know, little short stories: oh, we started out this way.

We were having coffee that morning when my graduate student said, you know, damn,

we don't understand this.

That would be much more true to the way things actually happened.

I'm not suggesting that's practical.

But you know, what you need to keep in mind is that when you read a scientific paper,

there is a lot of rearranging of stuff going on.

When I interview somebody for an article, I just kind of set the paper

aside and say, hey, come on, how did this really happen?

And you get a bizarre story that you never would have guessed from reading the paper.

Much more interesting.


I'd just like to thank everybody for attending today.

So thank you all for coming.
