Misconceptions About How Science Works

[ 1 ]

A N   E X C E R P T
re. Science in the News

“Magazines, newspapers, and television have little interest in textbook science; only the newest and latest is their grist. What they cover is frontier science, capriciously unreliable and fraught with often unsavory ‘personal-interest’ attributes; but they cover it under the label ‘science,’ which to them, as to the rest of us, connotes objectivity, reliability, and the scientific method. Thereby confusion is worse confounded, especially since the media’s striving to cover all the ‘news’ is not matched by any concern to follow up stories covered in the past, even the very recent past. Thus the public is left with stronger impressions of breakthrough, crisis, or drama than of judicious resolution; with stronger impressions of the dangers of pesticides than of the strict regulation that controls their application; with the notion that room-temperature superconductors are in the works, not with the realization that the surge of progress to higher temperatures came to a halt more than 150° short of room temperature. Just as over other matters, so too in science: the media focus on aberration, not on context, perspective, or what is normal.

 Now on many other subjects the audience understands what is going on and can make conscious or unconscious allowances out of common sense and human experience: because one president lies, not all presidents are thought to; because someone’s neighbor turned out to be a mass murderer, not every neighbor is suspected of being one. But on matters of science, most of the audience lacks the personal experience and technical background to make allowances for the media’s bias toward instancy. Since in any case the distinction between frontier and textbook science is so rarely grasped, the media’s blurring of that distinction must add tellingly to the general confusion.

 Scientists are anything but clear that the media are interested only in the frontier. Being themselves steeped in textbook science, they can hardly imagine what it must be like to lack that background and yet attempt — as journalists must — to comprehend what is happening at the frontier. So those scientists who are reported by the media are often shocked, to some degree or other, by silly errors and lack of context in the ensuing stories; and also by the lack of subsequent continuity of coverage. Those who achieved superconductivity at 100 K and thereby attained instant fame, for example, have also seen how instant is the loss of fame as soon as one stops creating new news: emphatic were their protests when a journalist asked, ‘Superconductivity: Is the Party Over?’ With science now of so much public concern, scientists, like actors, politicians, and rock singers, can appreciate Andy Warhol’s insight that in our fast-track mass society everyone has a turn at being famous, albeit for fifteen minutes only.

 Much public controversy about technical issues could be more soundly productive were it generally realized that what is most new is also the least likely to be true.”

SOURCE:  Scientific Literacy and the Myth of the Scientific Method. By Henry H. Bauer. Champaign, IL: University of Illinois Press, 1992. 114–115.

[ 2 ]

A N   E X C E R P T
re. Government Funding of Goal-Oriented Research

“If society does support potentially revolutionary science, then it cannot know what it will get. Why then even try to support it?

 To evade this uncomfortable question, the scientific community has been able to convince itself and society at large that corollary benefits inevitably flow from advances in scientific understanding. Only minorities — so far — have suggested that this has not always been so; or, even if it has been so in the past, that there is no guarantee it will continue to be so in the future (let alone that the benefits will be in some proportion to the initial expenditure). So we spend billions of dollars on larger and larger atom smashers even while most scientists (other than high-energy physicists) believe that nothing of practical human use remains to be discovered along that direction.

 If it is nevertheless decided to support pure science and to stimulate revolutionary discovery, it remains unclear how that might effectively be done. That the chief mechanism used by the National Science Foundation is clearly inappropriate does not necessarily mean that more appropriate mechanisms can be found. In point of fact, though, there are several more logically sound ones:

 1. So-called starter grants, given to newly qualified scientists solely on the basis of their promise and not for projects that have to be specified beforehand in some detail. (A few such awards are in fact made by Research Corporation and by the Petroleum Research Fund of the American Chemical Society.)

 2. ‘Established investigator’ awards, similarly given to people rather than to projects, the people being selected on the basis of past accomplishment. (A few such awards are made by the Sloan Foundation and the National Institutes of Health.)

 3. Prizes for past accomplishment, which are in a sense ‘established investigator’ awards.

 Naturally, such mechanisms have been used much more frequently by private groups than by governmental bureaucracies. The support of pure science — the search for knowledge — cannot come easily or naturally from government. If totalitarian, the government is tempted to draw its distinction between correct, acceptable knowledge and other knowledge that is to remain taboo. If democratic, the government feels obliged to account for its expenditures, and therefore to hold accountable those who do research under its support; and accountability eschews risk and seeks tangible results. But where concrete results are demanded, research becomes mundane, particularly when results are looked for within the usual lifetime of a research grant, typically no more than a couple of years.

 The faith is widespread that support of pure science inevitably leads to startling discoveries that are not only palatable but even useful. Such faith is entirely compatible with the myth of the scientific method: if it requires only that the method be followed for true knowledge to be gained, then there is no evident reason why the method should not be applied to just those problems or phenomena whose understanding can be foreseen to be desirable and have useful application. Thus the myth of the scientific method leads to the misconception that science can give us the knowledge and applications that we want (or think we want). In support of this misconception, instances such as the development of atomic bombs or the putting of humans on the Moon are typically cited: we often hear, ‘If we can put men on the Moon, surely we can cure cancer ... or produce energy in a nonpolluting way ... or eliminate poverty, et cetera, et cetera.’

 Such declamations fail to comprehend that the atomic bomb and the Moon flights are examples not of pure science but of applied science, of research into the known unknown (and into the rather well-known unknown to boot) to develop specific technology. The understanding — the pure science — already existed of how people could be lifted off Earth and guided precisely to the Moon; it was just a matter of constructing the mechanisms. It had also been understood how energy is produced when uranium atoms fission, and it was rather obvious what sort of devices could in principle harness the energy explosively or gradually (though it was not known whether such devices could actually work in the needed ways); enough was known to put thousands of people to useful work almost immediately on many details.

 By contrast, the causes of cancer remain in all probability still largely in the realm of the unknown unknown — and they were even more so two decades ago when the government (incidentally, against the advice of the best-qualified scientists) allocated prodigious funds (of the order of a billion dollars per year) to finding a cure for cancer. Most disinterested observers now recognize that very little of value has resulted from this direct war on cancer; the most consequential advances in understanding have come instead from outside the war, from research into fundamental molecular biology, which attempts to understand how genes that control growth are switched on and off during development and in later life.

 Goal-oriented research, applied science, makes sense only when there already exists the relevant basic knowledge to show that, and roughly how, the goal can be reached. Otherwise, resources can only be wasted and disappointment is certain. As Erwin Chargaff has pointed out, goal-oriented research without the requisite understanding is exemplified by alchemy, the attempt to transmute less-noble substances into gold. Emperors, kings, and other patrons kept many alchemists employed for centuries but got no gold in return.”

SOURCE:  Scientific Literacy and the Myth of the Scientific Method. By Henry H. Bauer. Champaign, IL: University of Illinois Press, 1992. 119–121.

[ 3 ]

A N   E X C E R P T
re. Comparing Science, Applied Science, and Technology

“Misconceptions abound over the relation between science and technology as well as about each of those endeavors separately. An overwhelming majority of what is publicly discussed under the rubric of science actually has to do with technology: almost everything to do with medicine, for example, or almost anything having to do with pollution.

 Perhaps the most prevalent fallacy about technology mislabels it as applied science. That seems plausible, of course: science is knowledge, and application of it could or should provide food, shelter, tools, and so forth. It just happens, though, that this has not usually been the way of it. True, scientific advances have sometimes sparked new technology: atomic bombs and nuclear power plants and transistors are the standard examples. But instances of technology leading to science are no less dramatic: electric batteries and magnetic fields, photography and cyclotrons, among others.

 One reason for misconceptions about technology is that serious, systematic study of technology began so recently. The philosophy of science has been carried on for centuries, whereas the philosophy of technology has been a recognized specialty for barely a couple of decades. The history of technology, too, has been a distinct field for only a few decades, whereas the history of science can easily be traced into the previous century. That science and technology are quite different sorts of things, and that the relationship between them is anything but simple and one-way, has been made clear just within the last generation, most forcefully perhaps by Derek de Solla Price.

 That technology is not just applied science follows obviously — once one thinks about it — from the historical certainty that significant techniques are ever so much older than anything one could call science: applications of fire to cooking, to extracting metal from ores, to metallurgy; levers, wheels, water levels — devices that made possible the wonders of megalithic construction and the American and Egyptian pyramids; pottery and glass; tools and weapons; tanning and weaving; et cetera, et cetera. Humanity created practical marvels long before the advent of what we call science.

 The making of models to mimic the working of the solar system has been traced back many centuries, beyond times when they could have been based on scientific understanding of astronomy, and our clocks are descendants of those models and not timekeepers designed as such from first scientific principles. The crafts associated with such things, and with lens grinding and the like, were a significant factor in that ferment of the seventeenth century that led to modern science. Steam engines were developed through a series of inspired inventions, not through successive application of scientific insight; rather, these inventions led to scientific insight, to the science of thermodynamics. The lead-acid battery, ubiquitous in cars, planes, and boats, has been dramatically improved over the course of a century through better materials technology, not through advances in electrochemical science.

 So science and technology have grown rather independently of one another — which does not mean, of course, in isolation from one another. They learn from and assist one another, but one ought to be clear that they are in essence different sorts of things — and much flows from the difference.

 Notably, science is universal, whereas technology is particular. For instance, the behavior of gases is the same everywhere: the same equations govern the relations among pressure, volume, temperature, and amount of gas. By contrast, technologies can be vastly different: electricity can be made from nuclear power, or from falling water, or by harnessing the tides, or by burning any number of substances. Light can be made by burning things, or through chemical reactions, or from radioactivity, or from electrically powered devices, or by capturing and feeding fireflies.

 Correspondingly, science has a continuity over time that technology has not. As science proceeds, much is continuous even before and after the so-called scientific revolutions: even as our mode of interpreting changes, perhaps quite drastically, our familiarity with natural phenomena nevertheless grows. But technological revolutions can make sharp breaks with the past, as quite different ways of doing something supplant one another — as the use of candles superseded oil lamps (which had succeeded open fires), to be in turn superseded by gaslight, and then by incandescent electric lights, which have been largely superseded by fluorescents. There may well be, as the jigsaw puzzle would indicate, a certain sequence of discovery that has to be taken in science; but in technology, that is much less the case, if it is indeed the case at all. If and when we make contact with extraterrestrials, we shall probably begin communication by means of the universality of mathematical and scientific laws. Extraterrestrial technology, by contrast, is likely to be entirely different from our own.

 Science, though universal, is intangible: it is knowledge. Technology is inextricably bound up with tangible things (which accounts for its not being universal). Though advances in both science and technology flow from human creativity, they profit from different sorts of creations. Scientists all discover, or try to discover, the same phenomena and laws, whereas technological innovation can be unique. Thus creativity in technology is much more like creativity in art than in science: Wolfgang Amadeus Mozart, Salvador Dali, and Thomas Edison all created things that carry their individual stamp, whereas the Ideal Gas Law incorporates no clues whatsoever about the people who formulated it. Again, the laws of gases and elements and particles were bound to be discovered, perhaps even at about the same time as they indeed were, irrespective of the particular people who were helping to put the jigsaw puzzle together; but cars and telephones did not inevitably become ubiquitous, just as videophones and personal helicopters have not, despite their feasibility. As Erwin Chargaff has noted, it is not men who make science, it is the science that makes the men; whereas technology is intrinsically a human invention.

 A significant consequence is that science cannot but be open and public, whereas technology can choose to keep its knowledge secret. Since there is only the one world to explore, credit and fame go to the explorer or scientist who discovers a given thing first. The best way to ensure that one’s priority is established and recognized is to let as many people as possible know as quickly as possible what one has discovered; science and scientists are preoccupied with publication (moreover, with rapidity of publication). On the other hand, since technological products can be unique, secrets can be kept; so technologists say as little as possible about the crucial technical details of their ventures even after patents have been granted.

 Science and technology have quite different criteria for whether or when something gets done. In science, putting the jigsaw puzzle together progresses best when each new piece is added just as soon as it is seen to be possible: if something is feasible, it will be done and ought to be done. In technology, on the other hand, the criterion is human benefit or utility: many feasible things are not done, and ought not to be.

 Related to this difference is the fact that scientists and technologists look for approval in quite different directions. Scientists are first and foremost members of their scientific community; as they work and try to publish, at the back and front of their minds is always ‘What will they [meaning other scientists] think of this?’ The reputation and worldly advancement of a scientist rests on the verdict of the scientific community. Technologists, by contrast, work to satisfy particular clients or employers, and they may not even have particularly strong associations with their peer technologists.

 Science, as noted earlier, cannot be effectively controlled. It can be impeded or stopped, but it cannot be made to deliver particular knowledge that happens to be wanted. Technology, however, can be put under effective social control, and examples of that are legion. Thus domestic gas was made commercially from coal in Australia many decades ago, and liquid fuels from coal in South Africa, whereas such technology is still not deployed in the United States: economic and political criteria, not technical feasibility, determine technological development. (Of course, that technology can be controlled does not mean that all the effects of a new technology can be foreseen. Quite the opposite. It seems to be a law of technical innovation that every new technology brings some unforeseeable consequences with it.)

 There are some typical practical repercussions of misconceptions about technology. Perhaps the most common follows from the mistaken belief that technology is applied science, for that implies that any advance in scientific knowledge could be harnessed to useful applications. As often remarked, this misconception is cultivated by scientists as much as by anyone, on the presumption that society will support science only if it believes that something of tangible use will result. But that presumption is wrong: we wanted to put people on the Moon out of adventure and competitiveness, not out of utility.

 Another example of the confusion of technology with applied science is the fad that swept the United States, most noticeably during the last decade, for state governments to sponsor cooperative ventures between universities and industries in the belief that the latest science could thereby be quickly translated into new technology and corresponding economic benefit. This stampede, expensive in more ways than one, is based on considerable ignorance: of the distinction between frontier science and textbook science and the consequent fallacy of seeking quick application of the latest scientific work; of the related historical fact that, insofar as technology has drawn on science, it has drawn on textbook science; and of the difference in kind between science and technology, for it is not easy to envisage, let alone put into working order, a mutually beneficial and therefore viable cooperation when the interest of one partner is best served by complete openness while the interest of the other is best served by utter secrecy.”

SOURCE:  Scientific Literacy and the Myth of the Scientific Method. By Henry H. Bauer. Champaign, IL: University of Illinois Press, 1992. 124–128.