Dr. Bortrum

 

10/02/2002

Peer Review

This past week or so has not been a good time for New Jersey.
Our senior senator withdrew in disgrace from his campaign for
reelection, Lucent stock hit 76 cents a share and Lucent's Bell
Labs' most promising young scientist was fired, judged guilty of
scientific misconduct. The Lucent stock and the two individuals
all were brought down by their peers - shareholders, constituents
and fellow scientists.

Peer review in one form or another is key to maintaining the
integrity of science. When I was at Bell Labs, before a paper
could be submitted for publication it was reviewed by my
management, usually by someone in another area and by our
patent department. When the paper was submitted to a journal, it
was sent out to one or more reviewers to judge whether it
merited publication. I assume these formal peer review
procedures are still in place. After publication, another type of
peer review begins, especially if the paper arouses a reader's
interest. That person may analyze it more thoroughly and even
try to duplicate it. In most cases, the work is confirmed and may
prompt additional experiments expanding on the original work.

Rarely, this sort of informal peer review reveals evidence of
fraud - fabricated data or wrongful manipulation of data. Other
times, a reader may unearth mistakes in experimental procedure
or treatment of data. Often, a paper provokes experiments that
contradict the author's conclusions or the conventional wisdom.
Sometimes this leads to vigorous debate and controversy with no
definitive answers. The field of anthropology seems to be a
fertile ground for such controversy. Consider the current debate
raging over whether Neanderthals were having sex with other
species more closely resembling us Homo sapiens.

Not wishing to be accused of a cover-up, I must tell you of my
first experience with peer review. In doing so, I may help
someone avoid the pitfalls of combining courtship with serious
professional work. In my thesis work at Pitt for my Ph.D., I had
to make calculations on my data using what is known as the
Gibbs-Duhem equation. Here, all we need to know is that the
calculations required graphical integration. In essence, the
calculation involved
plotting x versus y on a graph to get a curve that looked sort of
like a ski jump. The objective was to measure areas under the
ski jump by counting the number of squares on the graph paper.
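
For readers who want to see what counting squares accomplishes
in modern terms, here is a minimal sketch, in Python, using the
trapezoidal rule as a stand-in for the graph paper. This is an
illustration of the idea only, with made-up numbers, not the
procedure or data from my thesis.

    # Estimate the area under a curve from tabulated (x, y) points,
    # the numerical equivalent of counting squares under the "ski jump".
    def area_under_curve(xs, ys):
        area = 0.0
        for i in range(1, len(xs)):
            # Treat each strip between neighboring points as a trapezoid.
            area += 0.5 * (ys[i] + ys[i - 1]) * (xs[i] - xs[i - 1])
        return area

    # Hypothetical x-y values shaped roughly like a ski jump.
    x = [0.0, 0.5, 1.0, 1.5, 2.0]
    y = [0.0, 0.3, 0.9, 2.1, 4.6]
    print(area_under_curve(x, y))   # area under y plotted against x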

I tested my ability to handle this sophisticated task by plotting
data from a paper in the literature and getting the same answers
as the authors of that paper. Confident of my technique and
wanting to make more precise calculations, I decided to plot my
data on a large sheet of graph paper spread out on a drafting
table. One evening, while escorting my wife-to-be home from a
Spanish class, I proposed that we stop at Alumni Hall so she
could help me by reading the numbers while I plotted them. At
the oral defense of my thesis, the only criticism was from the
chemistry department chairman, who spent about 15 minutes
chastising me for using an atomic weight of sulfur taken from an
old handbook. The effect on my work was insignificant.

After I left Pitt, Ph.D. in hand, a graduate student confirmed my
calculations and our paper was published in 1952, the year I
joined Bell Labs. I was there about a year when I got a letter
from my professor saying that a Dr. G. M. Willis in Australia had
read our paper and was concerned that one of the curves seemed
out of line. It took me no more than a few seconds to realize
what had happened. Would you believe that, in the process of
going from the small to the large graph paper and courting my
future wife, instead of plotting x versus y, I had plotted y versus
x! Obviously, when he confirmed my calculations, the graduate
student had merely used my graph and had not plotted the data
from scratch. I made the proper plot, without my wife in
attendance this time, and sent in an erratum to the journal. It
took peer review from Down Under to bring my error to light.
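
(For the mathematically inclined, a note on why the swap
mattered: the area under y plotted against x is the integral of
y dx, while the swapped plot gives the integral of x dy, and
integration by parts says

    ∫ x dy  =  [x·y at the endpoints]  −  ∫ y dx,

so the two areas agree only by coincidence.)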

That was certainly an innocent mistake. In the case of J. Hendrik
Schön, it seems not so innocent. Schön, a 32-year-old
Bell Labs researcher, was widely thought to be a shoo-in for the
Nobel Prize for his work on nanotechnology. In papers
published in Nature and in Science, two of the most prestigious
scientific journals, claims were made of transistors employing
switches of molecular dimensions, even a single molecule. You
can imagine the excitement generated by this work. If transistors
of molecular dimensions were possible, the processing power of
computers could be increased enormously with molecular chips
containing zillions of transistors. Other laboratories invested
major dollars into efforts to catch up with Lucent in developing
this "killer" application. There was a problem, however. These
efforts to reproduce Schön's work came up blank. Schön himself
said that not all his samples worked. It seemed that only a
component material made on a particular machine worked and
these samples were made at his former lab in Europe.

Schön and his co-authors published a lot of papers. In fact,
according to a September 26 New York Times article by
Kenneth Chang, in 2001 Schön published papers at a rate that
averaged a paper every 8 days! This is mind-boggling. My
output of papers over 36 years at Bell Labs was closer to two
papers a year! To publish a paper every 8 days would require a
prodigious output of data, a lot of luck or exceptional skill in
designing experiments, considerable talent at writing and data
analysis, and/or fabrication of data. Sadly, an investigative
committee came to the last conclusion.

As in my case, the formal peer review failed. It remained for
readers of the papers and those who tried to reproduce the work
to blow the whistle and question the veracity of Schön's data. In
particular, it was the similarity of graphs showing purported
"noise" that caught the attention of some scientists. It seems that
the graphs of noise in different papers were strikingly close to
being identical, a surprising result for noise, which by its
nature should vary randomly from one measurement to the next.
Other identical, or nearly identical, graphs were found in other
papers. Finally, a professor at Cornell called Bell Labs'
attention to the similarities and an investigative committee was
quickly formed. The committee's report is now in and Schön has
been fired. The committee found
that none of the co-authors of the papers were at fault. They
seem to have taken the data supplied at face value and were
generally more involved in the theory.

The Times reports that Schön still claims his work is based on
experimental observations but admits he used
mathematical equations to make plots of purported experimental
data. However, he couldn't show his original data, claiming that
he had no lab notebooks and that he had deleted the files from
his computer when the hard disk ran out of space. To me, this
seems outrageous in this era of computer-savvy
individuals who routinely back up data on floppy disks or by
burning CDs. I am also shocked that he didn't have a laboratory
notebook, and I conclude that either the Bell Labs' indoctrination
has gone to pot or he was incredibly naive. When I was at
Bell Labs, we were issued numbered notebooks that were turned
in to our central files area for storage when filled. We were
instructed about the importance of notebook entries not only to
record data but also to have entries witnessed for patent
protection. In addition, we were advised to record instances
where we had witnessed others'' inventions or significant
findings. I'm unaware whether Schön has any patents
issued or pending, but I would be amazed if a molecular transistor
did not generate huge patent interest.

So, how can peer review be strengthened to guard against future
incidents of this nature? I personally don't see how deliberate
fabrication of data can be detected except through the efforts of
those trying to duplicate the work. To me, science shares
something in common with the game of golf. There has to be the
element of trust in the integrity of the players, both in science
and in golf. A TV commercial caught my eye this past week. It
pertains not only to golfing misconduct but also to trust in the
corporate arena. The commercial shows golfers looking for a lost
ball. The errant golfer finally drops a ball, claiming he had just
walked past it. His score, when challenged by his fellow golfers,
rises from a 4 to a 7.

Rest assured that I counted every stroke, even the total whiff that
could have passed for a practice stroke, in my outing a couple
days ago. Except for my 45-foot putt for a par (I paced it off), it
was my worst round in New Jersey this year. Let's hope the
news from New Jersey improves in the weeks to come!

Allen F. Bortrum


