Post by Trades on Apr 18, 2016 12:36:12 GMT -5
I found this interesting...
theweek.com/articles/618141/big-science-broken
Big Science is broken
Pascal-Emmanuel Gobry
Science is broken.
That's the thesis of a must-read article in First Things magazine, in which William A. Wilson accumulates evidence that a lot of published research is false. But that's not even the worst part.
Advocates of the existing scientific research paradigm usually smugly declare that while some published conclusions are surely false, the scientific method has "self-correcting mechanisms" that ensure that, eventually, the truth will prevail. Unfortunately for all of us, Wilson makes a convincing argument that those self-correcting mechanisms are broken.
For starters, there's a "replication crisis" in science. This is particularly true in the field of experimental psychology, where far too many prestigious psychology studies simply can't be reliably replicated. But it's not just psychology. In 2011, the pharmaceutical company Bayer looked at 67 blockbuster drug discovery research findings published in prestigious journals, and found that three-fourths of them weren't right. Another study of cancer research found that only 11 percent of preclinical cancer research could be reproduced. Even in physics, supposedly the hardest and most reliable of all sciences, Wilson points out that "two of the most vaunted physics results of the past few years — the announced discovery of both cosmic inflation and gravitational waves at the BICEP2 experiment in Antarctica, and the supposed discovery of superluminal neutrinos at the Swiss-Italian border — have now been retracted, with far less fanfare than when they were first published."
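The arithmetic behind numbers like these is worth making explicit. In the spirit of John Ioannidis' famous argument that most published findings are false, here is a minimal sketch of why low replication rates are almost baked in (my own illustration; the base rate, power, and significance threshold are assumed values, not figures from any of the studies above):

[code]
# Illustrative only: how a low base rate of true hypotheses plus modest
# statistical power can make a large share of "positive" findings false.
# All numbers below are assumptions for the sake of the example.

prior_true = 0.10   # assumed fraction of tested hypotheses that are actually true
power      = 0.50   # assumed probability a real effect is detected
alpha      = 0.05   # conventional false-positive threshold

true_positives  = prior_true * power        # real effects, detected
false_positives = (1 - prior_true) * alpha  # null effects, "detected" anyway

share_true = true_positives / (true_positives + false_positives)
print(f"{share_true:.0%} of positive findings are true")  # -> 53%
[/code]

Under these assumptions, nearly half of all "discoveries" are false before any bias or fraud enters the picture.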
What explains this? In some cases, human error. Much of the research world exploded in rage and mockery when it emerged that a highly publicized finding by the economists Ken Rogoff and Carmen Reinhart, linking higher public debt to lower growth, rested on an Excel error. Steven Levitt, of Freakonomics fame, largely built his career on a paper arguing that abortion led to lower crime rates 20 years later because the aborted babies were disproportionately future criminals. Two economists went through the painstaking work of recoding Levitt's statistical analysis — and found a basic arithmetic error.
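The Reinhart-Rogoff mistake was mundane: a spreadsheet formula whose range silently left out several rows. A toy illustration of how easily that moves a headline number (the figures below are invented, not their actual data):

[code]
# Toy illustration of a spreadsheet-range bug (invented numbers,
# not Reinhart and Rogoff's actual data).
growth_by_country = [3.1, 2.5, 2.2, 1.8, 2.6, -0.3, 0.4]  # avg growth, percent

intended = sum(growth_by_country) / len(growth_by_country)
buggy = sum(growth_by_country[:5]) / 5  # formula range omitted the last two rows

print(f"intended mean: {intended:.2f}%")  # 1.76%
print(f"buggy mean:    {buggy:.2f}%")     # 2.44%
[/code]

Two silently dropped rows, and the average moves by two-thirds of a point; with the right rows missing, the sign of a conclusion can flip.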
Then there is outright fraud. In a 2011 survey of 2,000 research psychologists, over half admitted to selectively reporting those experiments that gave the result they were after. The survey also concluded that around 10 percent of research psychologists have engaged in outright falsification of data, and more than half have engaged in "less brazen but still fraudulent behavior such as reporting that a result was statistically significant when it was not, or deciding between two different data analysis techniques after looking at the results of each and choosing the more favorable."
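That last practice, reporting whichever of several analyses looks best, has a measurable cost, and a quick simulation shows it. A sketch (the specific setup, three outcome measures tested by t-test, is my assumption for illustration):

[code]
# Simulation: with NO real effect anywhere, analyzing an experiment several
# ways and reporting any "hit" inflates the false-positive rate well past
# the nominal 5%. The setup (three outcomes, t-tests, n=30) is illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trials, reported = 10_000, 0

for _ in range(trials):
    hit = False
    for _outcome in range(3):        # three ways to analyze the "experiment"
        a = rng.normal(size=30)      # both groups drawn from the same
        b = rng.normal(size=30)      # distribution: any "effect" is noise
        if stats.ttest_ind(a, b).pvalue < 0.05:
            hit = True
    if hit:
        reported += 1

print(f"'found an effect' in {reported / trials:.1%} of null experiments")
# Roughly 14% instead of the nominal 5%, with only three tries per experiment.
[/code]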
Then there's everything in between human error and outright fraud: rounding numbers in whichever direction looks better, checking a result less thoroughly when it comes out the way you like, and so forth.
Still, shouldn't the mechanism of independent checking and peer review mean the wheat, eventually, will be sorted from the chaff?
Well, maybe not. There's actually good reason to believe the exact opposite is happening.
The peer review process doesn't work. Most observers of science guffaw at the so-called "Sokal affair," in which the physicist Alan Sokal submitted a gibberish paper to an obscure cultural studies journal, which accepted it. Less famous is a similar hoodwinking of the very prestigious British Medical Journal, to which a paper containing eight major errors was submitted. Not a single one of the 221 scientists who reviewed the paper caught all the errors, and only 30 percent of reviewers recommended that it be rejected. Amazingly, the reviewers who were warned that they were part of a study and that the paper might contain problems found no more flaws than those who were in the dark.
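The zero-for-221 result is less surprising than it sounds once you model it. A back-of-the-envelope sketch (the per-error detection probability is an assumed value, not a figure from the BMJ study):

[code]
# If each reviewer independently spots each planted error with probability p,
# the chance of catching ALL eight is p**8. p = 0.5 is an assumption for
# illustration, not a figure from the BMJ study.
p, n_errors, n_reviewers = 0.5, 8, 221

catch_all = p ** n_errors           # ~0.0039
expected = n_reviewers * catch_all  # ~0.86 reviewers

print(f"P(a reviewer catches all {n_errors} errors) = {catch_all:.2%}")
print(f"expected number among {n_reviewers} reviewers = {expected:.2f}")
[/code]

Even a reviewer who catches half of everything almost never catches it all; the more telling figures are the 30 percent rejection rate and the null effect of forewarning.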
This is serious. In the preclinical cancer study mentioned above, the authors note that "some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis."
This gets into the question of the sociology of science. It's a familiar bromide that "science advances one funeral at a time." The greatest scientific pioneers were mavericks and weirdos. Most valuable scientific work is done by youngsters. Older scientists are more likely to be invested, both emotionally and from a career and prestige perspective, in the regnant paradigm, even though the spirit of science is the challenge of regnant paradigms.
Why, then, is our scientific process structured to reward the old and the prestigious? Government funding bodies and peer review panels are inevitably staffed by the most hallowed (read: out of touch) practitioners in the field. The tenure process ensures that the youngest scientists in a given department must kowtow to their elders' theories to further their careers, or run a significant professional risk. Peer review isn't any good at keeping flawed studies out of major journals, but it can be deadly efficient at silencing heretical views.
All of this suggests that the current system isn't just showing cracks but is actually broken and in need of major reform. There is very good reason to believe that much scientific research published today is false, that there is no good way to sort the wheat from the chaff, and, most importantly, that the way the system is designed ensures this will continue to be the case.
As Wilson writes:

"Even if self-correction does occur and theories move strictly along a lifecycle from less to more accurate, what if the unremitting flood of new, mostly false, results pours in faster? Too fast for the sclerotic, compromised truth-discerning mechanisms of science to operate? The result could be a growing body of true theories completely overwhelmed by an ever-larger thicket of baseless theories, such that the proportion of true scientific beliefs shrinks even while the absolute number of them continues to rise. Borges' Library of Babel contained every true book that could ever be written, but it was useless because it also contained every false book, and both true and false were lost within an ocean of nonsense."
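Wilson's Library of Babel point can be put in numbers. A toy model (both growth rates are invented) in which sound results keep accumulating yet make up an ever-smaller share of the literature:

[code]
# Toy model of Wilson's point: true results grow in number, but false
# results grow faster, so the SHARE of true results shrinks even as their
# COUNT rises. Both rates are invented for illustration.
true_results, false_results = 0, 0

for year in range(1, 31):
    true_results += 100                # steady output of solid findings
    false_results += 100 + 40 * year   # accelerating flood of shaky ones
    if year % 10 == 0:
        share = true_results / (true_results + false_results)
        print(f"year {year}: {true_results} true results, "
              f"{share:.0%} of the literature")
# year 10: 1000 true results, 24% of the literature
# year 20: 2000 true results, 16% of the literature
# year 30: 3000 true results, 12% of the literature
[/code]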
Science, at heart an enterprise for mavericks, has become an enterprise for careerists. It's time to flip the career track for science on its head. Instead of waiting until someone's best years are behind her to award her academic freedom and prestige, abolish the PhD and grant fellowships to the best 22-year-olds, giving them the biggest budgets and the most freedom for the first five or 10 years of their careers. Then, with only a few exceptions, shift them away from research to teaching or some other harmless activity. Only then can we begin to fix Big Science.
This is a big problem, one that can't be solved with a column. But the first step is admitting you have a problem.