§ The Stepansky Medical Encyclopedia


489 entries

A

Abortion, 1880-1930, silencing of women and culpability of men:

“Between 1880 and 1930, then, a paradigm was established in which the focus of abortion prosecutions rested on male parties rather than on women and their abortion providers. While this focus offered women space to negotiate for an abortion, it silenced their voices inside the courtroom. . . . But depicting women as passive objects in legal narratives about an event that they had strong opinions about was crucial to the enforcement of legal statutes that denied women reproductive choice” (Schoen, loc 2056ff.).

Abortion, in Middle Ages:

“the dominance of humoral medicine meant that the remedies provided to a woman without periods and a woman with an unwanted pregnancy were often very similar,” indeed often the same. “Surgical abortions are even harder to detect . . . By far the most common approaches seem to have been herbal remedies (usually taken in drinks) and physical trauma, especially punching in the stomach.” Many remedies were self-administered, “without recourse to medical help” (K. Harvey, 113-114).

Abortion, role of psychiatry in authorizing therapeutic abortions in 1950s and 1960s:

“By taking women’s conflicts seriously, psychiatrists were also among the first to articulate the hardship of forced pregnancy. . . . many psychiatrists supported women’s right to choose abortion without labeling them mentally ill or unstable. . . . Rather than look for indications of mental distress, psychiatrists stressed that patients were mature and mentally stable enough to choose pregnancy termination. . . . Despite the help that psychiatrists could offer women, the system of psychiatric consultation and referral was clearly limited in its ability to assist women in getting access to abortion. Psychiatrists were quickly overwhelmed with patients seeking consultations for therapeutic abortion, and at times such consultations became a charade, which frustrated both the psychiatrists and the women” (Schoen, loc 2427ff.).

Abortion, therapeutic, restrictions on in 1940s & 1950s:

“As obstetrical departments instituted therapeutic abortion committees in the 1940s and 1950s, hospitals voluntarily took on a new role in enforcing the abortion laws and acted as an arm of the state (Reagan, 173). . . . Therapeutic abortion committees helped take legal abortion out of the hands of general practitioners and private, nonhospital-based practice, and place it in the control of hospital-based specialists in obstetrics (177). . . . The medical monitoring of therapeutic abortions is a manifestation of the rise of both conservative medical attitudes toward therapeutic abortion and McCarthyism within medicine” (180).

Abortion and sterilization, common heritage of:

“The lack of state supervision of pregnant women might have been responsible for the fact that eugenic abortion programs never materialized in the United States. The appeal that eugenic abortion held for American reformers suggests, however, the common heritage of sterilization and abortion. Depending on the context in which the surgeries were performed, both sterilization and abortion could either give women greater reproductive control or allow for the control of women’s reproduction” (Schoen, loc. 1937).

Abortions, committees on therapeutic, and authority of hospitals in 1940s & 1950s:

“Physicians and historians believe that hospitals gained authority over the decisions and practices of physicians only recently, during the 1970s and 1980s . . . Yet therapeutic abortion committees show that hospitals gained control over physicians and medical practice at a much earlier date. Therapeutic abortion committees brought physicians’ practices under hospital scrutiny and control over thirty years earlier than generally assumed. . . . By 1954, the Joint Commission on Accreditation of Hospitals (JCAH) issued standards that required consultation for only three operations – all of them concerning reproduction. First-time cesarean sections, curettages or any procedure in which a ‘pregnancy may be interrupted,’ and sterilization required consultation. No other operations required review” (Reagan, 190-191). . . . Abortion was institutionalized in hospitals in two interrelated structures: the therapeutic abortion committee [for middle-class white women] and the septic abortion ward [for lower-class, especially black women injured and infected from illegal abortions] (214).

Actinomycin D, as early anti-tumor drug:

A product of the WWII pharmaceutical industry’s testing of fermentation products for antibiotic properties, it demonstrated significant anti-tumor properties and was widely used in the treatment of pediatric tumors in the 1950s and ’60s (DeVita & Chu, 8044).

Affordable Care Act (2010-2011), opposition to re “socialized medicine,” etc:

“But the AMA was no longer necessary to finance and coordinate the opposition; the case against ‘socialized medicine’ had taken on a life of its own. Conservatives could now even use the original arguments against Medicare to appeal to the elderly on the grounds that ‘socialized medicine’ would threaten Medicare (even though the new program, with its reliance on private insurance, was not as ‘socialized’ as Medicare had been). The death-panel hysteria and the campaign of fear against reform also found receptive ears because of the high anxieties following the financial crisis. The campaign particularly found a home in the larger Tea Party movement, made up entirely of middle-aged and older whites – people who, by and large, already enjoyed protection against health-care costs, believed they had earned it, and did not want to pay for anyone else’s. This was how the American health-policy trap worked. Some of the well-protected literally screamed and shouted at the prospect of change” (Starr, 237).

Affordable Care Act (2010-2011) excise tax on employer health insurance plans:

Via the “Cadillac tax,” as of 2018, plans costing more than $10,200 for individuals and $27,500 for families will be subject to a 40% tax on the amount over those thresholds . . . “The excise tax was a substitute for the more straightforward policy of limiting the tax exclusion of employer contributions [which has been criticized] on both distributive and allocative grounds: it provides the biggest subsidies to high-income employees with the most generous insurance, and it contributes to America’s inflated health spending by obscuring the true costs” (Starr, 258, 223).
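A worked reading of the excise arithmetic as stated above (the thresholds and rate are from the quoted passage; the $12,200 plan cost is a hypothetical figure for illustration only):

\[
\text{tax} = 0.40 \times \max(0,\ \text{plan cost} - \text{threshold})
\]

\[
\text{e.g., an individual plan costing } \$12{,}200: \quad 0.40 \times (\$12{,}200 - \$10{,}200) = \$800.
\]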

Affordable Care Act (2010-2011) liberal/conservative aspects of:

ACA “does, however, uphold the liberal view on the scope of benefits, which cover the full range of medical services in a mainstream insurance policy. Although the law does not spell out the ‘essential health benefits’ in detail (leaving that responsibility to the secretary of HHS), it does specifically provide for preventive services. In this respect, it is also a departure from historic practices. . . . Culminating a long shift in thinking, however, the ACA incorporates preventive care into health insurance and seeks to promote public health through provisions aimed at reducing obesity and smoking and encouraging participation in wellness programs. . . . the law establishes a Prevention and Public Health Investment Fund to support increased training of primary-care providers, scientific research on prevention, public-health education, and other purposes” (Starr, 260-61). . . . “The combined move toward both the conservative position on patient cost-sharing and the liberal position on preventive care illustrates the absence of any single theory of cost containment in the ACA, unless it’s an open-ended empiricism and political pragmatism – proceeding on the available data, testing out alternatives, developing better information, and trying to make more intelligent policy as far as political constraints allow (262). . . . The fate of reform will be uncertain as long as there is no political agreement about the obligation to provide health care for all, and one of the two major parties rejects that idea. According to Gallup surveys, the proportion of Republicans saying ‘it is the responsibility of the federal government to make sure all Americans have health coverage’ fell from 38% in 2007 to 12% in 2012. . . . A commitment to care for all has traditionally been an important moral principle in medicine. ‘Well, don’t obligate yourself to that,’ Justice Scalia said. If that is what his fellow conservatives believe, the United States is a long way from resolving the issue” (295-296).

Affordable Care Act (U.S., 2010-2011), benefits to elderly:

In July 2010, “Only 33% knew that it eliminated co-pays and deductibles for many preventive services under Medicare, 26% that it provided bonus payments to doctors who provide primary care services under Medicare and 14% that it would extend the life of the Medicare Trust Fund (by twelve years according to government estimates). Just half knew the law would fill the ‘donut hole’ in the prescription-drug benefit” (Starr, 275).

AMA, formation of and exclusion of blacks & women:

“For leaders of the AMA, keeping abolition at arm’s length was critical to its existence. . . . Crucial to this process of institution building was an association discourse that included defining the social identity of medical authority in relationship to the subordination of marginal groups – blacks and white women. . . the AMA as a distinctive space for white males (Haynes, 173) . . . AMA succeeded not in spite of slavery but, rather, because of it. Its organization and discursive practice combined not only to secure the widest representation of states in the union, including the slave South, but also consolidated the social identity of medicine as white and male based on the subordination of blacks as well as women (176). . . . At the first convention in 1846, nearly a third of all state delegations came from the slave South (177). . . . One-half of the antebellum national meetings (seven out of fourteen) were held in cities of slaveholding and trading states in the South (178) . . . the discourse of the association circumscribed medical practice as an arena for the privileged few, namely, white males. . . The annual meetings became venues (among others) for inscribing black bodies as different and defending the institution of slavery (179). . . . [After the Civil War] the past practices of the AMA became a means to preserve its future. . . . the leaders of the association placed the future of the organization in the restoration of the antebellum social identity of American medicine. . . the exclusion of women and black doctors was central to its postwar viability (187). . . . The present and future reality of regularly trained black practitioners challenged the social identity of medicine as white and male that was so fundamental to the antebellum association. For an organization struggling to regain the support of Southern practitioners, it equally posed a destabilizing vision of the profession grounded in racial equality. The vote in May 1870 to exclude black and white members of the National Medical Society at the national meeting in Washington, D.C., affirmed Baldwin’s appeal to subordinate racial equality to the preservation of the future of the AMA and American medicine” (193).

AMA, formation of and exclusion of homeopaths:

“The organizational strategy of establishing exclusionary associations represented an effort at boundary work through organizational means, a solution to the epistemological inability to discriminate between competing claims. Legitimate knowledge was to be judged according to who proclaimed it. . . . The criterion for homeopathic expulsion was not determined by the content or nature of their evidence, but rather by the exclusion from the regular professional community. Membership thus served as a solution to the problem of assessment introduced by radical empiricism. It became the mark of a legitimate physician and of a legitimate knower of medical knowledge. . . . The problems associated with the fragmented knowledge base were masked by the public unity of the profession through its professional associations. . . . regulars were able to use their institutional leverage to dampen the effects of government recognition of homeopaths. . . . Though government recognized homeopathy, the organizational strategy by the AMA ensured that this recognition did not translate into increased standing and access to resources without a long protracted fight. As an organizational gatekeeper for resources, the AMA circumvented a cultural, epistemological debate with homeopathy by excluding them from one . . . . epistemological debates can be (and often are) adjudicated, not through specific debates over the merits of some knowledge over others, but through organizations, which set the parameters of the debate, who can engage in it, and whether the debate can occur at all” (Whooley, 2009).

Amenorrhea, modern and ancient views:

Today, amenorrhea is usually a problem limited to the reproductive system that might not even warrant medical intervention. But it had far greater significance in Hippocratic and Galenic medicine. Because women’s bodies produced insufficient heat, they needed an additional method of purgation to expel waste matter, absent which the waste “would continue to accumulate and sooner or later lead to a humoral imbalance,” i.e., disease. “This purgation was menstruation.” In Hippocratic and Galenic thought, absence of menstruation meant that “one of the major purgative systems of the female body was inoperative,” and “any irregularity of menstruation was a serious threat to overall health” (loc 694ff.).

Amphetamine, use of during WWII:

“Thus, by World War II, amphetamine in tablet form was finding commercial success and gaining credibility as a prescription psychiatric medication (the first ‘antidepressant’) . . . The war years did nothing to diminish the drug’s growth in popularity. . . The US military also supplied Benzedrine to servicemen during the war, mainly as 5-mg tablets, for routine use in aviation, as a general medical supply, and in emergency kits” [as did the British military] (Rasmussen IV, 975).

Animalcular theory of disease, in antebellum America:

“In spite of all opposition, the animalcular theory continued to be preferred by a scattering of men to whom no other explanation was satisfactory. Even these men failed to attempt to prove or disprove their ideas by experimentation. Medical quarrels at this time were still settled by the academic means of sporadic observation and logic – a last remnant of the old scholastic thinking. . . . The influence of the animalcular school of thought was not very widespread and, to judge from the decreasing volume of literature, appears to have almost disappeared by the late 1850s” (Allen, 515-16). . . . “The micro-organism theory of disease, which was so generally discredited during the first seventy years of the century, was accepted in the short period between 1875 and 1890” (518).

Anorexia, treatment at Mayo Clinic between WWI & WWII:

Following the research of Mayo nephew John Mayo Berkman, Mayo Clinic physicians defined and treated anorexia nervosa as a general metabolic disorder as determined by basal metabolic rate (BMR). Correlating thyroid insufficiency with a low BMR, they recommended replacement therapy (administration of thyroxin or desiccated thyroid) to increase metabolic activity, coordinated with a steady regimen of increased nourishment (Brumberg, 208-212).

Antidepressants, in relation to stimulants:

“The earliest drugs that are retrospectively regarded as antidepressants, the anti-tuberculous drugs, were clearly similar in nature to stimulants. Although stimulants had been successfully promoted as ‘antidepressants,’ by the 1950s and 1960s a distinction started to be drawn between stimulants, which were regarded as nonspecific, and drugs that were thought to target depression specifically. The anti-tuberculous drugs metamorphosed into antidepressants through the concept of the psychic energizer. It was imipramine, however, that finally established the modern notion of an ‘antidepressant.’ Imipramine had to be regarded as acting on the basis of a disease, because it was difficult to see how any effects that imipramine was known to induce could be useful in depression” (Moncrieff, 2353).

Antipsychiatry movement, of 1960s:

“. . . there was nothing remotely like antipsychiatry before 1960. There couldn’t have been, for the simple reason that until World War II no ‘normal’ people ended up in psychiatric hospitals. There was no outreach of psychiatry into the wider world and no means of incarcerating people without chains in the midst of their communities. . . until World War II the vast majority of those who were committed to asylums were clearly psychotic, severely epileptic with behavioral disturbances, or mentally handicapped” (Healy II, 150).

Antipsychotics, dependency and:

“The serious but manageable problem is that people having these types of difficulties who want to discontinue treatment may be unable to do so without experiencing lengthy and significant discomfort. The potentially unmanageable problem is that it becomes impossible once patients have taken antipsychotics for some time to know where the treatment ends and the disease begins. . . . Similar problems may beset the SSRIs. This is a prospect that the pharmacotherapy establishment cannot view with equanimity. But what of the alternative, which essentially comes down to recognizing that while dependence is a pharmacological issue, addiction is a social one with political implications. The concepts of drug dependence that first took shape in the late 1960s set the stage for the disease models of addiction, which came to dominate in the 1990s. The historical evidence that therapeutic communities might do more for a larger number of addicts than drug treatments was nowhere to be heard when in the late 1990s two new agents, naltrexone and acamprosate, hit the market” (Healy II, 172-73).

Antipsychotics, tardive dyskinesia and:

“. . . the way the antipsychotic story evolved cannot be understood without reference to the peculiar neuroleptic withdrawal syndrome tardive dyskinesia. Efforts to get to grips with this led in the 1990s to the replacement of neuroleptics with antipsychotics. The story also cannot be understood without the realization that psychiatry’s legitimacy was sharply challenged from 1965 to 1975” (Healy II, 174).

Antiseptic cleanliness campaign of 1920s, downside of:

“What Americans could not foresee was that their antiseptic revolution brought risks as well as rewards. As the nation cleaned up, new problems arose. There was now a smaller chance that people would come into contact with dangerous microbes early in life, when the infection was milder and maternal antibodies offered temporary protection. In the case of polio, the result would be more frequent outbreaks and a wider range of victims. Franklin Roosevelt was no longer alone” (Oshinsky II, 49-50).

Antivivisection, and image of traditional family doctor:

“Antivivisection offered a way to resist the penetration of science into medicine. Reluctance to accept the new medicine did not derive from blind opposition to change. The image of the scientist . . . clashed fundamentally with the traditional role of the family doctor. Vivisection became the focus of the struggle to prevent the encroachment of science because in the vivisector’s laboratory scientific medicine came most radically into conflict with the older stereotype of the physician (Turner, 97-98) . . . In an age highly wrought up about pain, vivisection’s infliction of it became a moral evil of demonic dimensions. The admission of vivisection as a legitimate tool of science would dash faith in science as an instrument of moral progress and thereby undermine confidence in moral evolution itself (100) . . . Dangerous enough when confined to the laboratory, scientific materialism now threatened to infiltrate every home in the person of the trusted family doctor. The purity of medicine had to be preserved. This need explains the allegiance of antivivisectionists to the preachments of sanitary medicine long after bacteriology had done to death its claims. The sanitary reform movement had always tended to preach physical cleanliness as an adjunct to moral sanitation. Morality and physical health went hand in hand . . . To fight against ‘impurity,’ ‘pollution,’ and ‘filth’ . . . did not mean merely to build sewers. It was this side of sanitary medicine that really captured the minds of antivivisectionists (103). . . . Medical scientists therefore tried to bolster their image by stressing their professional kinship with ordinary physicians. However unsavory the reputation of physiologists, the public looked upon family doctors as warriors against pain, comforters of the afflicted” (111).

Antivivisection, and rabies-cure enthusiasm in U.S. after 1885:

“Through the rabies-cure enthusiasm, vivisection became a familiar idea presented in a positive light, and it garnered valuable populist legitimation a number of years before an antivivisection movement could gain momentum in the United States. From the outset, scientists among New York’s physicians knew the importance of vivisection and how to argue publicly for its value” (Hansen, 412).

Antivivisection, anti-health/anti-benefit-to-humanity argument among British:

“An animalistic obsession with bodily health could be the only basis for entertaining such an argument, for disease was the divinely ordained consequence of sin and folly, and was to be borne as such. . . . The ubiquity of this kind of argument in antivivisectionist controversy reveals to us a segment of Victorian opinion for whom medical advance, regardless of source, appeared as a tampering with the order of things, an unwarranted intrusion of rather distasteful medical doctrine into public consciousness” (French, 306). “The argument from the divine design of the universe was predicated upon the world view of natural theology and a vague preference for the naive inductivism associated with it. . . . Arguments like these as a rationale for hostility toward experimental medicine provide a link to antiscientific sentiment also expressed by members of the movement. . . such sentiment was based in part upon the belief that the scientists of the seventies and eighties had sacrificed the humility of the natural theological viewpoint for an overweening self-importance that failed to take account of the divinely appointed limitations to science” (317, 318, 355). . . . The movement’s basic line of argument “centered on the rejection of so-called ‘scientific medicine,’ the veneration of a personal, humane style of medicine directed toward the relief of the sufferings of individual patients, and the promotion of a naïve sanitarianism, in which a confusion of dirt, disease, and sin was juxtaposed with a similar confusion of cleanliness, health, and moral restraint. The experimental approach and the belief that the advancement of knowledge was fundamental to the professional obligation seemed to antivivisectionists a sinister and threatening attempt to disjoin material phenomena of bodily function from the moral and religious foundations of which they were the outward manifestations” (343).

Antivivisection, defeat of in U.S. by 1930s:

“The apparent willingness of the medical profession to assume some of the responsibility for going first and the impressive record of medical heroism and martyrdom in the early decades of the twentieth century persuaded many Americans that human experimentation was not the horror that antivivisectionists claimed. Although experiments on orphans continued to be problematic, few Americans shared antivivisectionist concerns about the professional subjects, prisoners, soldiers, and the other adult populations who participated in research. . . . Taking responsibility for the welfare of the subject, obtaining prior consent of the subject or guardian, and being willing to go first continued to be essential to professional definitions of appropriate human experimentation. By the 1930s, the public’s confidence in medical research and admiration for the medical profession in general made the task of defending medical research . . .

Antivivisection, disquiet about professionalization of medicine and:

“Language, in fact, may provide one key to understanding antivivisectionist distrust of medical investigators. Physicians . . . described experiments on sick or dying children and invalids with clinical detachment. The language of these research papers chilled antivivisectionists, who attempted to turn it against their opponents. . . . But, in their extreme way, antivivisectionists articulated a disquiet about the professionalization of American physicians that was shared by other segments of American society. The growing centrality of the hospital and the introduction of a medical reductionist technology played an important role in this professionalization. The increase in the number of lawsuits over consent in surgical procedures, for example, reflects an uneasiness about the doctor-patient relationship in a time in which physicians were realizing powerful therapeutic tools unknown to an earlier generation of physicians. The antivivisectionist partiality to sectarian practitioners, whose reliance on older types of medical explanation was well known; their criticism of compulsory vaccination, which superseded individual autonomy; and their indictment of the physician-turned-scientist reveal the polarity in attitudes between antivivisectionists and medical researchers” (Lederer II, 47-48).

Antivivisection, indictment of experimental medicine of:

“The leaders of the Victorian profession saw in experimental medicine a rigorous and demanding intellectual underpinning for the profession, one guaranteed to expose the incompetent and to awe the laity by its ‘scientific’ cachet and by a revolutionary new therapeutic technology. . . . The antivivisectionist indictment was thus based upon much more than medical support for experiments upon living animals. It was in fact a prescient foreshadowing of current critiques of the impersonality, the ethical insularity, not to mention the hypertrophied scientism, of modern medicine” (French, 411).

Antivivisection, medical triumphs relying on vivisection that marked defeat of:

Mid-1880s: advent of removal of brain tumors (1884) based on David Ferrier’s and others’ work on cerebral localization in animals; use of thyroid tissue from animals to treat myxedema (cretinous condition in middle-aged women); 1901: availability of adrenalin in crystalline form, which dramatically saved victims of heart failure; 1922: Banting’s isolation of insulin to treat diabetes (Turner, 115-116). “The control of tuberculosis, the virtual eradication of yellow fever in the southern United States and of Malta fever in the British armed forces, serum treatment for ‘spotted fever’ (epidemic cerebro-spinal meningitis), public health measures to eliminate typhoid and cholera: All owed much to experiments on animals” (116).

Antyllus (2nd century A.D.):

Greek surgeon who not only described aneurysms but developed an operation that would enjoy a renaissance almost 1,800 years later: make an incision above the artery; dissect out the vessel; tie it closed on either side of the aneurysm; slit open the aneurysm to evacuate blood and any clot trapped inside it; pack the empty sac with wadding: “This left a section of blood vessel permanently out of commission, but the body would soon compensate for the deficiency; besides, it was better than letting the aneurysm burst.” His near-contemporary, Galen, recommended compressing the aneurysm from outside the skin until it disappeared. In 1902, Rudolph Matas used Antyllus’s method to repair an aneurysm of the brachial artery: with ligatures above and below the aneurysm, he cut into the sac and removed the clot inside it, sealed off both ends with silk thread, and sewed up the external wound. His report of the case noted “he had merely been following the example of Antyllus,” with general anesthesia making the operation respectable once more (Morris, loc 1084ff.).

Appetite, association with sexuality among Victorians:

“For the Victorian physician, nonnutritive eating constituted proof of the fact that the adolescent girl was essentially out of control and that the process of sexual maturation could generate voracious and dangerous appetites. . . . an active appetite or an appetite for particular foods was used as a trope for dangerous sexuality (Brumberg, 175). . . . Doctors and patients shared a common conception of meat as a food that stimulated sexual development and activity. . . . Meat eating in excess was linked to adolescent insanity and to nymphomania (176). . . . Meat avoidance was tied to cultural notions of sexuality and decorum as well as to medical ideas about the digestive delicacy of the female stomach. Carnality at table was avoided by many who made sexual purity an axiom” (177). Female discomfort with food [was because] “Food and eating presented obvious difficulties because they implied digestion and defecation, as well as sexuality (178). . . . Food was to be feared because it was connected to gluttony and to physical ugliness” (179).

Asclepian temples (5th century BCE), ritual of “incubation” at:

“sick individuals slept in a special sanctuary where they would either experience the god healing them with a dream or receive a dream in which the god would prescribe a regime of remedy . . . they would work with the priest who interpreted the dreams and oversaw the administration of the prescribed remedy at the therapeutic sites in or near the temple” (Upson-Saia, loc 804-806).

Asperger, Hans, on children with “autistic psychopathy”:

Asperger attributed “sadistic traits” to autistic children, writing of the “primitive spitefulness” and “negativism and seemingly calculated naughtiness of autistic children” who “delighted in malice” (Sheffer, 156). His clinic reports “not only generalized about the children’s ways of relating – to the point where their descriptions are interchangeable – they also generalized about the youths’ disobedience.” Such children were impetuous, impulsive, and indifferent to adult authority (160). Although in his postgraduate thesis of 1944 Asperger “cited eminent figures such as Ernst Kretschmer, Ludwig Klages, and Carl Jung” – and his work has been interpreted in terms of these more mainstream figures – “Asperger framed his work in the ideas of Nazi child psychiatrists and Gemüt in the introduction. And it is these Reich concepts that provided the basis for Asperger’s ultimate definition of autistic psychopathy” (215). In his thesis of 1944, the autistic child “is like an alien, oblivious to the surrounding noise and movement, and inaccessible in his preoccupation.” As such, they were incapable of joining the Volk (220). Between 1938 and 1944, Asperger’s “diagnosis of autistic psychopathy became [so] aligned with his senior associates in Nazi child psychiatry that it would seem to be the result of his immediate circumstances rather than of the evolution of autonomous research and independent thought” (221).

Asperger and Nazis:

“contrary to Asperger’s postwar image, he was far from an insular researcher, secluded in his clinic and immune from Nazi influence. Rather, Asperger was active in his milieu; on any given day, he had multiple points of contact with the regime. . . . One can not escape the fact that Asperger worked within a system of mass killing as a conscious participant, very much tied to his world and to its horrors” (Sheffer, 237).

Asperger’s “curative education” dept. at Vienna Children’s Hospital, diagnostic practices of:

“Within three years, Margarete’s diagnoses had run the gamut: from ‘waywardness’ to ‘manic depressive insanity’ to schizophrenia to menstrual problems to being ‘tentatively educable.’ . . . Deadly in its arbitrariness, Nazi psychiatric diagnosis came down to individuals’ decisions and shifting criteria – in which haphazard, hasty words had enormous impact on children’s lives” (Sheffer, 167-168).

Aspirin, mechanism of:

Irritated or stimulated blood cells secrete arachidonic acid (a slippery substance that makes cells flexible), which in turn produces one particular prostaglandin (a group of hormone-like fatty acids) that causes inflammation and fever: “Think of a line of dominoes falling one on to another . . . When cells are disturbed, they produce arachidonic acid. This leads to the formation of prostaglandins, which in their turn create fever or inflammation, and most likely (Vane reasoned) pain too. Then think of something coming between the first two dominoes and the others, preventing them from knocking over their fellows. That’s what aspirin did. By stopping prostaglandins being made, it halted the resulting fever, inflammation and pain. John Vane had figured out how aspirin worked” (Jeffreys, 230).
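The domino analogy reduces to a simple causal chain (the notation is added here for clarity; the chain itself is Vane’s mechanism as quoted, with aspirin interrupting the second step):

\[
\text{cell disturbance} \;\longrightarrow\; \text{arachidonic acid} \;\xrightarrow{\ \text{blocked by aspirin}\ }\; \text{prostaglandins} \;\longrightarrow\; \text{fever, inflammation, pain}
\]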

Autointoxication theory, Metchnikoff’s:

“If, he reasoned, the large intestine serves as the principal site of entry of the body’s microbes, if the microbes secrete toxins that alter the higher cells of the body, and if senescence results from the destruction of altered cells by phagocytes, then a direct negative correlation should exist between the size of a species’ members’ large intestines and their average life span. . . . Thus was his autointoxication theory couched in terms of the fundamental disharmonious living organism” (Podolsky, 5). M’s cure for autotoxemia, to offset this disharmony and prevent premature death, was to transform the intestinal flora: via ingestion of B. bulgaricus, the milk-souring microbe that produced the most lactic acid and inhibited growth of milk’s own putrefactive flora, the senescence-producing putrefactive organisms in the intestine would be replaced by harmless B. bulgaricus and premature senility would be prevented (5-7). In the 1920s/30s, in America, M’s theory morphed into Rettger and Cheplin’s (Yale bacteriologists) Lactobacillus acidophilus therapy, which replaced M’s bacillus of choice with its close evolutionary relation, and replaced M’s philosophical rationale for his therapy (evolutionary disharmony, alleviated by B. bulgaricus) with the 19th-c. notion of bodily harmony and normality. L. acidophilus, native to the intestinal flora, helped reestablish a bodily balance upset by individual diet; M’s evolutionary rationale for the treatment (preventing premature senility owing to disharmonies resulting in autointoxication) was subverted (12-13).

Avicenna, on relation of medicine to natural philosophy:

Ibn Sina warns doctors against exploring the fundamental teachings of natural philosophy, not because they are irrelevant but because the physician should simply take them for granted. Thus in medicine one does not prove that there are four elements or four bodily humours that arise from them, but the doctor certainly needs to accept this, as so much therapy involves redressing imbalances in the humours. . . . “The doctor should know about the internal senses but can get by with recognizing only three out of the five, since his goal of treating disease does not require him to distinguish between the two kinds of memory or to know at all about the estimative power.” But medicine does have scientific ambitions, viz., to pursue “demonstrative proofs” about the underlying causes of disease on the basis of sensory experience, a task that, for Avicenna, calls for “the method of experience as opposed to mere induction.” In this respect, medicine is similar to zoology, since the study of animals often involves inferences about imperceptible causes (Adamson, 60-61).

B

Baer, William, empathy of:

In the summer of 1900, prior to the establishment of the orthopedic clinic at Hopkins, Baer worked at the orthopedic service at MGH, where many children had to wear plaster casts in the hot weather: “Doctor Baer himself wore a body cast all that summer. His sole object was to try in this way to gain the children’s confidence by showing them that he too was enduring the same discomfort” (Crowe, 130-31).

Banting, at the time he treated Elizabeth Hughes in 1922:

“In treating all of these [private] patients Banting was an inexperienced physician who happened to have supplies of a life-saving substance. He knew next to nothing about the complexities of diabetes, or the intricate principles of dietary balance worked out by the pre-insulin diabetologists” (Bliss, 102). Re Banting’s post-discovery research: “His research and reasoning were pathetically ill-informed, bearing the same relationship to serious physiological inquiry that a farmer’s tinkering with a beat-up Ford would to advanced engineering” (Bliss, 118).

Banting, in 1936:

“Every speech I have ever given has been preceded and followed by hyperacidity, diuresis, diarrhea and diaphoresis” (Bliss, 111).

Bayer, “industrialization of invention” at:

Its early successes “were the results of an evolutionary process in which legal and economic as well as scientific developments played important roles. Major components of this process included the opening of a main scientific laboratory in 1891; the building up of a research infrastructure to support the main laboratory, including a library, a literary department, a patent bureau, a control laboratory, a training laboratory, and an experimental dye house; and the increasing focus of management on the organization of research and on the introduction of related managerial techniques, including labor contract regulations, financial incentives, conferences, regular progress reports, and creation of the role of research administrator” (Lesch III, 41). At IG Farben, the testing period for new drugs averaged one year (43).

Billroth, C. A. Theodor, as an operator:

“On familiar ground, his operative methods were rapid and summary, though always safe; but when, as a pioneer, he invaded an unexplored field, he was deliberate, always prepared by previous animal experiments, and provided with a well laid plan of procedure. Nothing could shake his cool and tranquil self-possession. Hemorrhage, asphyxia, or any of the serious accidents at the operating table were always met and overcome without a flutter of excitement. He was an indefatigable worker. . . . he was a masterful pianist. His friendships with Brahms and Hanslick were based on their common love of the art” (Gerster, 120-121).

Bioethics, birth of between 1966 and 1976:

“The new rules for the laboratory permeated the examining room, circumscribing the discretionary authority of the individual physician. The doctor-patient relationship was molded on the form of the research-subject; in therapy, as in experimentation, formal and informal mechanisms of control and a new language of patients’ rights assumed unprecedented importance. . . . Jonas’s image of the doctor alone in the examining room with the patient (and with God) gave way to the image of an examining room with hospital committee members, lawyers, bioethicists, and accountants virtually crowding out the doctor. . . . In the post-World War II period, a social process that had been underway for some time reached its culmination: the doctor turned into a stranger, and the hospital became a strange institution. Doctors became a group apart from their patient and from their society as well, encapsulated in a very isolated and isolating universe” (Rothman, 107, 108).

Birth Control Movement, racist assumptions re African American women:

“Prejudices about black women’s lack of intelligence frequently reinforced health officials’ belief that funding birth control programs in black communities was a waste of money altogether. Many philanthropists and health professionals believed that African Americans lacked the intellectual capacity to use any form of birth control. . . . Clarence Gamble, Frances Pratt, and many public health officials assumed that black women lacked the intelligence to use anything but the simplest contraceptives. Even sympathetic professionals regularly assumed that African Americans were not interested in birth control and would not avail themselves of the methods because they were too complicated, or that patients would not accept or carry out recommendations. Frequently, such attitudes provided an excuse for the absence of programs for African Americans” (Schoen, ch 1).

Black internships:

In the 1920s, fewer than 100 of over 3,000 internships were open to black physicians. In 1923, only six of 202 black hospitals (i.e., hospitals that treated primarily black patients) had internships (Ward, 61-62). “The internship crisis for African Americans subsided in the mid-1930s as new hospitals for blacks were built and fewer African Americans entered medical school. There was a surplus of internships available to African Americans for the first time in 1936” (65). Only in 1956 did a majority (77 of 129) of Howard and Meharry graduates serve internships in predominantly white (integrated) hospitals (69-70). By the end of the 1950s, there were over 350 board-certified black physicians in all specialties, ca. 1% of the total number of board-certified specialists (78).

Black medical colleges:

Between 1910 and 1947, Howard and Meharry graduated 90% of all black medical school graduates. In 1947, of ca. 25,000 American medical students, only 672 (2.7%) were black, while blacks were ca. 10% of the US population (Ward, 46). A 1949 survey found that U. Michigan, BU, and WMC of Phila had the most liberal policies toward admitting blacks, who comprised slightly more than 2% of their total enrollment that year (50). By 1923, ca. 90 black women had received medical degrees – 39 from Meharry, 25 from Howard, and 12 from WMC of Phila (53). In 1967, the three remaining all-white southern med schools (Duke, Bowman Gray [Wake Forest], and Univ. Georgia) all admitted their first black students. In 1969, for the first time, more black students enrolled in the 99 predominantly white med colleges than at Howard and Meharry (57).

Black medical colleges, debate over whether to support:

“Southern black physicians were caught directly in the middle of this debate. White physicians worked to keep them excluded from hospitals. . . . southern black physicians had to face not only the hostilities of southern whites but also the often paternalistic attitudes of black medical men from outside the Deep South who thought their southern brethren to be less competent than doctors who practiced in Chicago, New York, or Washington. Finally, civil rights organizations and leaders, determined to break down segregation at all costs, maintained that what southern black physicians needed to do was force their way into white hospitals, instead of accepting all-black facilities” (Ward, 174). . . . After the Hill-Burton Act, some black physicians resisted integration out of fear of being hurt economically, of losing their fiefdoms in third-rate black hospitals (Ward, 181-182, citing historian Edward Beardsley and Dietrich Reitzes’s 1959 study).

Blackwell, Elizabeth, grounds for necessity of women physicians in ob/gyn, as set forth in lecture, “On the Medical Education of Women” of 1855:

In the scientific age, “The midwife must give place to the physician. . . Woman therefore must become physician.” Woman doctors would also rescue female patients from “unnatural and monstrous” need to confess intimate symptoms to men or, worse still, to serve as case studies for male medical students. . . . And even though she feared for the ‘moral purity’ of female patients both rich and poor, she laid no blame for this peril at the feet of her male colleagues, who had her ‘utmost confidence and respect.’ In aligning herself with the men, she implicated herself in their misogyny” (Nimura, 196-97).

Blood counts, infrequency of in early 20th c:

“There is little evidence to suggest, however, that the calls for blood tests were quickly heeded. The published literature suggests, instead, that well into the twentieth century blood counts were a distinctly unusual event. . . the occasional allusions to routine use of the blood count paint a picture of a technique talked about a great deal but used very little (Howell VI, 185) . . . . At NY Hospital less than 4 percent of all of the patients admitted in 1900 had a blood count performed. At the PA Hospital that percentage was only slightly higher than 5 percent. But in both hospitals, by 1920, almost one out of every three patients admitted had a blood count done” (187).

Blue Cross, advertising campaigns in late 1930s and 1940s:

“To attract subscribers to the Blue Cross plans, the publicity had to accomplish two tasks: one, to make certain that the hospital, more precisely, the voluntary hospital, was defined by the public as the appropriate setting for the treatment of acute illness; the other, to persuade individuals that a group health insurance policy was a sound personal investment” (Rothman, 24). . . . “By underscoring the unexpected character of health care needs and by emphasizing the notion that illness could strike anyone at any time, Blue Cross set out to create a level of anxiety great enough to attract subscribers without immobilizing or terrorizing them. Blue Cross messages also stressed how expensive hospital stays were, in a shrewd effort to force potential subscribers to calculate whether they would be able to afford the requisite services” (26). . . . “Through the 1950s, New York Blue Cross rarely used a black model in its advertisements, in another effort to make certain that no one confused its members with those who went to public hospitals” (29). . . . “But Blue Cross was never ready to open its membership ranks widely. In these years, the plans generally limited their enrollments to the employed . . . . its public presentation obfuscated the critical division between the employed and the indigent, posing the question of health care delivery as though the voluntary sector could satisfy all needs, and omitting consideration of those at the bottom of the economic ladder. Blue Cross did not take as its responsibility exposing the limitations of public hospitals or the number of citizens, young and old, who were unable to join a voluntary health program. . . These considerations might be thought to lie outside its purview, except for the fact that Blue Cross framed its mission as a test case for democracy. But it was a test case of a biased sort, casting the debate on health care exclusively in terms of how to serve middle-class Americans, with scant attention to the fate of others” (33, 34).

Blue Cross, founding and rationale of in 1930s:

“Group hospitalization schemes were prepayment plans, typically founded by strong administrators of voluntary hospitals or hospital councils, which offered subscribers the opportunity to receive specified hospital service, without cost at the time of need, for a small, ongoing monthly payment” (Stevens II, 183). Real control of the Plans usually lay in the balance of power between the chief executive and a small number of dominant individual board members who took the most interest in Plan affairs. Over half of hospital board members were trustees, not administrators: “. . . although they had to be concerned with the financial health of the hospitals serving their members, the Blue Cross Plans, once they were weaned from hospital underwriting, developed a life, purpose, and interests of their own, independent of the hospitals that begat them. Nothing else could explain the increasing difficulty in settling questions about reimbursement of hospitals” (Cunningham, 64). . . . Eligibility for obstetrics usually began ten to twelve months after initial enrollment. Mental health was almost universally excluded, and coverage for chronic diseases was rare. . . . “Structurally, the Blue Cross schemes were corporations founded by corporations (the voluntary hospitals) which responded to the needs of other corporations (employers). As a result Blue Cross was a ‘community’ scheme but not a ‘social-welfare’ scheme; notably, it excluded the unemployed, the elderly, and the disabled, as well as agricultural, domestic, and other ad hoc or part-time workers who had no affiliations with the organized workplace” (Stevens II, 186). . . . “the new prepayment schemes did not usually cover doctors’ fees, only the hospital’s basic services. They thus proclaimed, in effect, that hospitals were not ‘practicing medicine’ as medical corporations (since the financing excluded medical fees). Thus the hospital remained, formally at least, the practitioners’ workshop, not their employer, and there was no hidden agenda in the plans in favor of group practice or bureaucratic control over medical work” (187). . . . “The organizational genius of Blue Cross was that it was a national movement whose strength derived from its ability to adapt to local conditions. It was, in short, both a radical and a conservative undertaking – radical in providing a new vehicle for hospital financing, but conservative in hewing to traditional relationships between hospitals and physicians at the very time that these were threatened” (188).

Bonner, Thomas, on fate of women in medicine in U.S. vs. Europe in early 20th century:

“The ironic result of the social and scientific changes that had rocked medical education in North America was a diminution of women’s role in it. Across the Atlantic, by contrast, no corresponding decrease in the number of places in the government-run medical schools or sudden raising of premedical requirements or tuition costs to the student affected the women of Europe. . . . It was the laissez-faire structure of American medicine, so long the despair of reformers and the subject of ridicule by Europeans, that made change so difficult and its effects so uneven” (Bonner, 156, 157). “More significant in explaining the failure of the medical women’s movement in America, following its period of greatest triumphs, was almost certainly the prolix and private character of much of the medical, hospital, and licensing structure that served the American profession” (165).

Brewer’s Yeast, as treatment and preventive of pellagra:

“Goldberger’s need to disprove the Thompson-McFadden Commission almost certainly helped delay his demonstrating the curative and preventive power of brewer’s yeast” (Bryan, 242). Goldberger’s time in SC re the field studies “delayed his exploitation of a new animal model: black tongue in dogs. It was not until 1925 that Goldberger and his colleagues reported that dried brewer’s yeast, milk, and meat contained a pellagra-preventative factor or ‘factor P-P’” (245).

Bubonic Plague, as understood in 1900, when it struck San Francisco:

“Medicine at the time largely considered plague a disease of filth, and had few conceptions of how it was transmitted from victim to victim.” Paul-Louis Simond . . . had made a breakthrough discovery four years earlier, yet his finding was not yet widely accepted; he “went searching for what species was capable of transmitting the disease not just from person to person, but from rat to rat . . . he turned his attention to fleas . . . [he] began dunking dead rats from homes of plague victims and gathering fleas from their corpses. When he examined the intestines of the insects under a microscope, he found that the fleas were saturated with plague bacilli” (Randall, 138-139). Simond’s approach (kill the rats that carry infected fleas, the intermediates) was adopted by Rupert Blue in San Francisco in 1907, when, a year after the earthquake of April 1906, infected rats with infected fleas all but took over the city. Blue’s Marine Hospital Service men were capturing 13,000 rats a week (191), but this was inadequate. He effectively deputized the entire population to aid in eliminating rats, and his deputy, Colby Rucker, wrote a primer, “How to Catch Rats,” widely distributed throughout the city (189-192).

C

Cajal, Santiago, “true” understanding of nervous system of:

“In only a few years of searching [in Barcelona, from 1887 on], Cajal had concluded that the nervous system is composed of units: a cell body with dendrites that receive incoming information and propel the nervous impulse down a single departing axon to stimulate the next cell. Finally, he deduced that since there is no continuity of substance between neurons, the electrical impulse between cells must be transmitted by an unknown chemical induction effect. . . . Cajal’s ideas incorporated all that is now felt to be fundamentally true about the basic structure and function of the nervous system. At that time, however, the matter was both imperfectly understood and hotly contested” (Rapport, 104-105, 120-121). Cajal “was modifying not only Golgi’s staining method [via his “double impregnation” staining method (99-100)] but also his theories of neural conduction” (118). . . .[Using embryonic tissue, where myelin is scant and nerve paths less cluttered (101)] “He had seen that axons end, by this time in the cerebellum, and later in the retina and the olfactory bulb. Those endings invariably conformed to the shape of the dendrites of the next cell. There was no fibril of connection. There was a gap” (121). Utilizing only the light microscope, unable actually to see the synaptic cleft in the manner that electron microscopy permitted, Cajal “was correct in his basic description of neuronal anatomy, dynamic polarization, and synaptic transmission” (180).

Cancer genetics, transformation between 1982 & 1993:

“The cloning of ras and retinoblastoma – oncogene and anti-oncogene – was a transformative moment in cancer genetics. In the decade between 1982 and 1993, a horde of other oncogenes and anti-oncogenes (tumor suppressor genes) were swiftly identified in human cancers . . . Retroviruses, the accidental carriers of oncogenes, faded far into the distance. Varmus and Bishop’s theory – that oncogenes were activated cellular genes – was recognized to be widely true for many forms of cancer. And the two-hit hypothesis – that tumor suppressors were genes that needed to be inactivated in both chromosomes – was also found to be widely applicable in cancer. A rather general conceptual framework for carcinogenesis was slowly becoming apparent. The cancer cell was a broken, deranged machine. Oncogenes were its jammed accelerators and inactivated tumor suppressors, its missing brakes” (Mukherjee, 380-381).

Cardiology, as practiced in 1930s:

“Digitalis, quinidine, morphine, and nitroglycerin are the main cardiac drugs available to practitioners in the 1930s that are still used today. There were no antibiotics, potent diuretics, or antihypertensive agents. Congestive heart failure was treated with digitalis and diuretics (mercury compounds or purine derivatives like theophylline), but doctors knew these remedies had significant side effects or had to be administered by injection. Rest was a mainstay of therapy for heart failure and angina pectoris” (Fye 2, 77).

Cardiology, “New,” in Britain:

“ . . . a new clinical perception of the heart. Instead of being interested in the statics and mechanics of the heart’s action and the natural history of valvular disease, what began to matter to clinicians was the heart’s dynamics; what the heart could do. . . . There was thus a shift from pathological anatomy to pathophysiology and from an ontological to a physiological concept of disease. . . . The new cardiology involved a revolution in this perspective. The clinician perceived a patient with multiple indications of a ‘failing’ heart owing to changes in one or more of its muscular properties. The physician’s task was thus to elucidate what these changes were and make a prognostic assessment of the patient in terms of the heart’s capacity to deliver blood. The state of the valves was simply another factor, and often a minimal one, influencing this assessment” (Lawrence, 12).

Case files, and need for standardized forms in hospital:

[loose files introduced for inpatient records between 1880 & 1920] “However, inpatient records continued to be bound by year so that the protection and integrity of the fixed format volumes were continued in the new system. . . . The problem of binding documents of diverse size was responsible, in part, for a drive to develop new standards for records to replace those which had been automatically conferred by the format of the journals and casebooks. . . . once loose files were adopted, the lack of a standardized size for forms caused problems. Documents of various sizes were difficult to keep in place within the file, easy to lose when the file was in transit, and impossible to bind without a great deal of labor to ensure their proper place and positioning. Printed forms of the same size were introduced and fully employed in individual medical record offices to achieve uniformity in recording and filing, which was now undertaken by many people” (Craig II, 28).

Case files, flexible, introduction of after 1900:

“After the turn of the century, casebooks were replaced by flexible case files which contained a variety of different forms and types of documents. Separate activities, tests, and procedures were provided with their own forms, and these were collected together with the clinical history and progress notes to form the case file (Craig I, 63). . . . Between c. 1850 and c. 1950, three major influences had a direct impact on hospitals and their records in both London and Ontario: the involvement of outside groups with the hospital; developments in administrative practices; and developments in hospital medicine (64). . . . [Re #3] The increasingly complex diagnostic and therapeutic environment produced more records and stimulated the development of new series of records and of documents to record the details of procedures, the nature of patient responses, and the administration of departments. The development of hospital medical practice is reflected directly in the clinical case file, which records the participation of new departments, functions, and services in the care of in- and out-patients. Three factors worked in combination to increase the complexity and size of hospital case records: new techniques, the extension of care, and the establishment of standards in the composition of case files. . . . Between 1880 and 1940, the following departments and specialist services were established or extended at the selected hospitals: pathology, dental, photographic, electrical, out-patient, follow-up, x-ray, pharmacy, isolation, dietary, and cardiology” (77).

Case records, absence of narrative account of illness in:

“Despite this bulk, significant data are often left out, most notably a well-drawn narrative account of the illness itself. The usual clinical account . . . screens out the patient’s experiences and the doctor’s view of them. Indeed, the narrative part of the record often seems to remain an aide-memoire for the physician writing it rather than a clarifying note to others who will read it. . . . Concerns about self-protection inevitably change the character of the record and endanger its credibility. Other concerns, even when their purpose is to serve the medical needs of patients, can have a similar effect. For example, when the clinical record is written with a view to establishing claims for third-party billing, justifying the appropriateness of hospital admissions for review boards, or gearing diagnostic conclusions toward diagnostic-related group requirements, its truthfulness can be compromised” (Reiser III, 984).

Cellular immunity of late 1960s and 1970s that followed on discovery of the importance of lymphocytes (re experiments on removal of bursa [in birds] and thymus [in mice]) in acquired immune response:

“Cells of the immune system not only communicated with each other; they produced, especially when aroused, a roar of white noise in the form of molecules passing back and forth. These signals acted like hormones, in the sense that they either stimulated white blood cells to multiply or emphatically suppressed their activity; attracted other cells to the scene of immunological insult or inhibited the usual suspects from crashing that scene; induced profound changes locally or issued a generalized alarm that could affect the entire organism, such as molecules that traveled to the brain and induced fever. In 1974 Stanley Cohen proposed the word ‘cytokine’ as an umbrella term under which all these homeless factors could huddle; he defined them as the family of molecules manufactured and secreted by a variety of cells engaged in immunological or inflammatory responses” (Hall, 196).

Central dilemma facing courts in spiritual healing death cases where defendants mount a mens rea defense, since they “honestly” believed God would save their children:

“on the one hand, sincere Bible-reading parents, believing that God would heal their sick children, did not intend knowingly or recklessly to cause their deaths. On the other hand, the state viewed the parents’ conduct as a deviation from an objective standard of reasonable care, since they did not act in a timely manner to prevent their children’s deaths . . . In light of the difficulties of a subjective religious defense, the state must maintain an objective standard of reasonable parental care, imposing a duty of seeking medical treatment in life-threatening conditions in their children. What the ordinarily prudent parent would do is an issue for a jury to determine” (Hughes, 246-247). In 2001, Randy & Colleen Bates of Colorado’s Church of the Firstborn pled no contest to criminally negligent child abuse causing death. Their 3-year-old daughter Amanda died of diabetes and gangrene; shortly before her death, she was bleeding from every bodily orifice and had a yeast infection and pneumonia (Hughes, 263-264).

Cesarean section, differential success rates among hospitals until 1940:

“In the case of the cesarean, the differential in success rates between city and community hospitals persisted well into the 1920s, due largely to the deficiency of postgraduate medical education. The diffusion of surgical knowledge that allowed the safe practice of cesarean operations in smaller hospitals occurred only with the institution of organized surgical residency programs in American medical schools by the close of the 1930s. . . . Such improvements appear to have increased the success of cesarean operations by 1940” (Ryan, 493).

Chemotherapy (Ehrlich), continuity from synthetic dyestuffs:

“The movement from synthetic dyestuff to pharmaceuticals to chemotherapy to bacterial chemotherapy may be regarded as a series of steps in the diversification of an industrial enterprise and its research establishment. Continuity from dyestuff chemistry was expressed in personnel, in concepts and methods, and in the use of specific products. Hörlein, Mietzsch and Klarer [Bayer/IG Farben] were trained in dyestuffs chemistry and practiced it before moving to pharmaceuticals. On its chemical side, chemotherapeutic research involved methods of synthesis and concepts of specificity and variability taken over from dyestuffs chemistry. Hörlein drew a direct analogy between the natural/synthetic dichotomy in dyestuffs and the natural/synthetic dichotomy in medicinal substances. The azo dyes were tried as medicine, and one of them became Prontosil” (Lesch III, 65).

Child guidance, summaries of Judge Baker staff ca. 1920:

“A spider’s web of complexity ruled these child guidance summaries. Social environment and mental ability were certainly critical elements of ‘direct causation’ . . . Yet family supervision and individual personality modified both the poor environment and the weak intellect. The child guidance message was clear – clients had been right to seek an evaluation, because the conditions that created delinquency, evidenced by the findings in most of the cases, were too complicated for any one worker to diagnose. As child guidance clients were urged to believe, only the combined expertise of a team of professionals [psychiatrist, psychologist, social worker] could supply an accurate representation of the problems, one that would point to the most direct and efficient changes needed to eliminate the delinquency. “These child guidance summaries also conveyed clearly the message that though troublesome, the children were not responsible for their plight” (K. Jones, 88-89).

Childbirth, among slaves in antebellum South:

Complications that called for a doctor included hemorrhaging (e.g., from placenta previa, in which placenta covers opening of the cervix [158-159]), prolonged labor, and cessation of labor (Schwartz, 153). Monetary considerations encouraged slaveholders and physicians to leave routine childbirth to slaves (156). Most docs who attended slaves were GPs who attended no more than 5-10 childbirths a year, which precluded development of expertise in midwifery (157). On masters’ use of physicians re illness of slaves, including as a means of coercion, see Stowe, 138ff.: “. . . the power given to the dollar over human well-being is never more baldly seen than it is in the thousands of small decisions that made slaves’ health a subset of owners’ finances” (139). . . . at the very moment of the summons, the slave patient’s subjectivity was objectified and subsumed into the master’s interests. The summons became a distinct genre of calls for help, one that doctors referred to as a ‘Negro call’” (140).

Childbirth, critique of medicalization of:

“there is more diversity in women’s experiences and in their reactions to both medicalized and ‘natural’ birth than the critique suggests. . . the absence of medical intervention does not necessarily make for a positive, empowering experience; nor does medical intervention necessarily leave women feeling alienated from the experience. . . . the absence of medical intervention during a childbirth does not always produce a sense of efficacy; nor does medical intervention necessarily engender alienation (Fox & Worts, 335). . . . for the couples in this study, there was a very strong relationship between whether or not the woman’s partner was generally supportive and whether or not the woman received medical intervention. . . . Women whose partners were not very supportive usually requested pain relief before making every attempt to get them through the birth without it (337). . . . The dearth of social and emotional support for motherhood may be a major contributor to women’s desire for, and acceptance of, medical pain relief during childbirth. . . . help with pain may be important precisely because women assume personal responsibility for mothering following birth. What is problematic about medical management is not that it offers too much ‘care’ but that it substitutes for more general social support of women in labor and after the birth, and offers instead a very limited kind of help – mostly geared to the baby’s delivery” (338). . . . it is hazardous to adopt the concept [of control] uncritically in discussing a situation as intractable as childbirth. . . ‘control’ meant a number of different things (339) . . . control means different things to different women. . . . women giving birth can experience the situation as being ‘out of control’ for reasons other than the presence or absence of medical intervention; such reasons can include a hospital staff’s failure to respond to requests for pain relief. Thus, more important than control (narrowly defined) seems to be whether a woman’s needs are addressed – however the woman in question defines them (340). . . . An important feature of that [social] context is women’s privatized responsibility for child care and a dearth of social supports for mothering. In turn, the immediate context of most births in North America involves hospitals that reproduce privatized responsibility and offer women only a limited kind of assistance. We argue that because of minimal social support, many women welcome medical intervention, both out of concern for their babies’ immediate welfare and because it promises to leave them in better shape to assume the extensive responsibilities that will confront them as mothers” (343).

Childbirth, move to hospitals in 1920s, 30s & 40s:

“Maternal mortality remained high during this period. To understand why women wanted to go to the hospital to have their babies in the 1920s and 1930s, we must take into account two factors other than proven safety . . . First is the increasing mystification of medical knowledge in the postbacteriological era; second is the declining ability of women’s traditional networks to meet the demands of childbirth. . . . The price of the new science was a growing separation between expert and layperson, between obstetrical specialists and birthing women (Leavitt, 174) . . . . Birth in the hospital encouraged interference because the equipment and staff were readily accessible. Rather than making childbirth safer, physicians in the 1920s and 1930s, according to their own evaluation, were responsible for maintaining unnecessarily high rates of maternal mortality. . . . Approximately 25.4 percent of hospital deliveries were operative. . . . Urban rates of puerperal infection increased particularly during the decade of the 1920s, indicating that hospitals may have increased maternity-related infection risks for women” (182). “Maternal mortality took an esp. sharp plunge in the 1940s, from 37.6 per 10,000 live births to 8.3. That drop was due in part to the increasing incidence of hospital delivery. Three in five women delivered their babies in hospitals in 1940, but by 1950 close to 90 percent did so. . . . Infant mortality also declined, though not as precipitously. Antibiotics and immunization brought down infant deaths from 47 per 1,000 live births in 1940 to 31.3 in 1949” (S. Hartmann, 174-5).

Childbirth, women’s control over throughout U.S. history:

“Women’s ability to control confinement practices for most of American history . . . is unique in the annals of relationships between women and medicine. . . . women in the act of childbirth did not feel or act dependent. They negotiated procedures with their medical attendants from a position of strength originating in their historic dominance over confinement room practices. . . . Although physicians acquired theoretical knowledge about parturition quite early in the profession’s history, the male practitioners did not themselves possess the experiential secrets of the birth room” (Leavitt, 209). . . . until the twentieth century, women were able to keep a strong element of decision-making in their own hands. They called in the experts to perform needed interventions, but they decided themselves which procedures were acceptable. Childbirth remained women’s business as long as it remained in women’s homes (210).

Chiropractic, and science:

“The multiplicity of definitions for science within orthodox medicine made it easier for chiropractors to claim scientific status (S. Martin, 209; cf. Ernst, 546) . . . For chiropractors, scientific knowledge was not acquired by experimental control of variables in a carefully regulated laboratory environment. Instead they examined and treated thousands of patients observed in health and disease. . . . For chiropractors the locus of scientific teaching and research rested not in the laboratory, but at the bedside and in the museum [Martin, 210]. . . . At precisely the moment that elite medical institutions turned away from the museum and embraced the laboratory, chiropractic colleges gloried in their osteological collections [211]. . . . Just as Parisian physicians correlated antemortem and postmortem findings and presented their results as scientific evidence, chiropractors asserted that correlating the clinical status of patients before and after spinal manipulation placed chiropractic on a firm scientific footing. . . . [Chiropractors] argued that the demonstration of a new scientific law that healed the sick was an important contribution to revealing God’s beneficence [212]. . . .The imprecision with which physicians and patients used the term science “provided the intellectual space for chiropractic science to be acceptable, even attractive, to patients. . . . By elaborating a unique conception of science, chiropractors developed an intellectual framework and justification for spinal manipulation that expanded chiropractic beyond an empirical craft and enhanced its professional credibility and stature (226). . . . If the new scientific medicine placed a subtle but distinct wedge of science between doctor and patient, chiropractic science firmly anchored the practitioner to the bedside. The only science chiropractors performed was clinical – observing patients” (227).

Chiropractic, as reshaped in 1970s:

Three forces were involved: (1) deference to medical science collapsed; (2) reforms within chiropractic education; and (3) growing interest in alternative medicine among orthodox scientists (e.g., in psychoimmunology). This included new clinical research methodologies, such as nonexperimental research approaches to the efficacy of medical interventions that “allowed clinical researchers to address the efficacy of therapeutic interventions without making assumptions about the theory behind the therapy” (S. Martin, 223).

Chiropractic, germ theory and:

Bacteriology was widely introduced into chiropractic curricula during the 1920s. “Rather than wholeheartedly accepting the germ theory of disease, however, chiropractors accepted the association between pathogenic bacteria and disease but denied a causal relationship. Instead, they pointed to differences in susceptibility as the central issue in infectious diseases. They used their observational approach to note that not every person exposed to pathogenic bacteria became ill and argued that the difference between those who became sick and those who remained well was the presence or absence of vertebral subluxations. Thus bacteriology could help confirm a diagnosis without explaining the disease” (S. Martin, 218).

Chiropractic “science” (cf. psychoanalysis):

Chiropractors believed “they had established their own form of science, which emphasized observation rather than experimentation, a vitalist rather than mechanistic philosophy, and a mutually supportive rather than antagonistic relationship between science and religion” (Ernst, 546).

Chloroform, as “cause” of Crimean War (Simpson):

British press, looking for a scapegoat, “even singled out Simpson, and announced that he was the cause of it all. If it hadn’t been for his chloroform, soldiers would never have agreed to the possible agonies of field amputation, the usual treatment for every gunshot wound, and the war would have come to a complete halt for lack of men” (Simpson, 216). Further, Dr. John Hall, who oversaw Britain’s medical system in Crimea, was suspicious of chloroform and warned about its dangers (Gabriel, 156). Infection produced a mortality rate of 62% for thigh amputations; a scant improvement on the 70% mortality rate during the Battle of Waterloo (Gabriel, 137). In British army, 4000 men died from battlefield wounds whereas 16,000 died from disease (148). Disease rate per 1000 men per annum was 253.5 for French; 161.3 for British, & 119.3 for Russians. (Compared with 110 in Mexican War; 65 in Civil War, and 16 in WWI [Gabriel, 153]).

Chlorosis, and moral management of girls in 1880s & 90s:

In 1880s, chlorosis as a disease of “capacious girls” (physically and mentally) that embraced the lifestyles of both working girls and college women and was characterized by extreme lethargy, gastric and menstrual disturbances, and blood diminution, etc. By the early 20th century, chlorosis had become for most clinicians (excepting child and adolescence developmentalists like G. Stanley Hall), “simply a less than optimal hematological state detectable by the staining of cells and their analysis under the microscope” (Wailoo I, 50). Yet, “Even in the 1890s, chlorosis was not a stable, well-defined entity. In late Victorian America chlorosis was at once a collection of changing symptoms and part of a medical rationale for the moral supervision of girls. Whatever chlorosis had meant before the 1890s, during this decade the diagnosis became intertwined with the broader goals of uplifting and morally reforming American women, and technologies of hemoglobin analysis were crucial to this agenda. . . . physicians seized upon chlorosis as a reliable disease category and as a clinical rationale for transforming capricious and ill-fed girls into models of regulated, controlled, and well-behaved young women. The monitoring of blood, appetite, and habits conferred scientific legitimacy on moral management” (Wailoo I, 20, 25). . . . Hematological accounts of women and iron gained a wide cultural currency, both within medicine and in wider circles – as some theorists sought to justify a domestic model of American womanhood in the post-World War II decades (42) . . . . I have made no universal claims about what chlorosis ‘really was.’ Rather my analysis has emphasized that it was the interaction of technologies, gender ideals, and a changing culture of medicine – that is, the culture of moral management, the culture of laboratory medicine, the promoters of iron metabolism research, or the advocates of women’s emancipation – that shaped the changing identity of chlorosis. . . . Clearly, there is no single disease called chlorosis. Technologies have played a key role in giving many different identities to this complex phenomenon” (44). By 1930s, emphasis on iron intake and iron metabolism → reconfiguring of chlorosis as “iron deficiency anemia,” which revealed much about “a particular culture of hematological thought that had grown and expanded by the middle of the twentieth century. By knitting together observations about iron metabolism, modern therapeutics, and female physiology, hematologists constructed a chlorosis that reflected the growing chauvinism of their times.” Yet, “iron was in fact, a tool of limited power in the treatment of chlorosis” (63-64).

Cigarettes, and women in early 20th c:

“As Richard Klein has pointed out, the first women to be publicly identified with cigarettes were those who were paid to stage their sexuality: prostitutes, actresses, dancers. The act of lighting a cigarette signaled a certain sexual openness; women who did so violated traditional roles by actively giving themselves pleasure instead of either avoiding it or passively receiving it. After about 1890 or so, when a woman dangled a cigarette, it did not necessarily mean she was a prostitute, but it did suggest that she might be available – although on her own terms” (Tate, 98). . . . By the time the United States entered the war [WWI], then, two contradictory trends were evident: more women were smoking, and they were encountering greater and greater resistance. The war intensified both trends” (104).

Cigarettes, during WWI:

“By freely dispensing cigarettes to soldiers, the YMCA, Salvation Army, and Red Cross transferred some of their own respectability to a once-disreputable product (Tate, 81). . . . By the fall of 1917, private smoke funds were multiplying like amens at a revival (82). . . . Like the Liberty Bonds, the smoke funds provided a litmus test for public demonstrations of patriotism” (89).

Civil War, and smallpox inoculation of black children in the South:

Shortages of smallpox vaccine led to the use of healthy black children and infants as a source of uncontaminated “vaccine matter,” i.e., lymph from pustules, following the Confederacy’s decision to vaccinate the entire army. Lymph obtained from soldiers was often contaminated, and led to syphilis and other infections, hence the use of “dispossessed populations as vaccine intermediaries” (143-145). The practice was also used by the Union, especially in border states like Missouri, where, at St. Louis’s Benton Barracks, Union doctors “viewed formerly enslaved children as the ideal population to harvest lymph.” After the War, Northern USSC doctors like Elisha Harris drew on findings of Southern doctors like Bolton based on use of enslaved children, that is, “the postwar concern to promote epidemiological understandings of effective smallpox vaccination methods united physicians from the North and South. . . . This collaboration meant that Northern doctors carried Southern racism into their analysis” (150).

Civil War, medical impact of:

“There were more ambulances available in 1864, but without effective pain medicines, soldiers faced a horrifying and rattling ride to safety. There were more hospital tents for shelter, but without antibiotics to quell communicable diseases, epidemics spread unchecked. There were better-trained physicians, but without an understanding of antisepsis, surgical operations led to death from infection. While the usual story of Civil War medicine highlights bravery, caring, and organizational innovations, the reality of war for the wounded and sick was more sharply defined by agony, butchery, and loneliness than anything else” (Rutkow, 310). . . .There were no astounding medical breakthroughs during the Civil War. Communicable diseases ran rampant. Wound infections spread unchecked. Surgery, despite the performance of tens of thousands of operations, remained as barbaric and crude in 1865 as it had been in 1861. . . The significant achievement of Civil War-era medicine has to do with its transformation in the administration and organization of military medical care, from an appalling lack of preparation and concern to readiness and sympathy for the patient in a war that was waged on a scale never before known. Discipline was imposed on what had been an undisciplined profession. . . . The lasting medical impact of the war experience was not measured by ingenious innovations or engrossing surgical victories. Instead, it derived from the physicians’ day-to-day caring for sick and injured human beings in the face of scientific ignorance, superstition, and political interference” (Rutkow, 318).

Civil War, modern medicine and:

“There is no denying how important the clinics, laboratories, and model of European science was for the development of American medical education and professionalization in the post-Civil War period. But the medical developments during the Civil War established patterns, attitudes, ideas about disease, and above all a new foundation for the medical sciences. The way medicine was structured during the war, in fact, prepared physicians, the government, and the public for a social transformation in medicine. Very simply the medicine of 1860 was vastly different from the medicine of 1863, and the medicine of 1866 was vastly different again” (Devine, 137; cf. Rutkow, 310ff.).

Civil War, Sanitary Commission, racism of:

Via questionnaires aimed at documenting black soldiers’ physical characteristics, the USSC “created reams of data purporting to show that Black people were innately inferior,” reports “that stabilized the racialist ideology that had justified Black people’s enslavement in the first place.” By positing a medical argument about Black inferiority, the USSC “assigned scientific authority to medical case studies that provided evidence to stabilize the racial order after emancipation.” In so doing, by placing statistics about racial categories in the hands of male doctors, it “obfuscated women’s work,” and denied women a platform in the USSC (Downs, 134-135).

Civil War, surgical impact of:

“The Civil War medical experience made operating surgeons out of a large number of physicians who previously had minimal experience with a scalpel. This extensive hands-on training helped surgery spread throughout the whole of the United States, especially during the 1870s and 1880s, when asepsis and antisepsis became an accepted part of the surgeon’s routine” (Rutkow, 319).

Civil War amputees, perception of in the South:

Stonewall Jackson’s loss of his left arm during the Battle of Chancellorsville (May 1863), and his terming the missing limb “a blessing,” set the tone for society coming to see an empty sleeve as a symbol of worthy sacrifice for the Confederacy, not a marker of shattered masculinity: “no amputation in the Confederacy did more to commend amputees to the public” (B. Miller, 63-64). Yet “throbbing stumps weeping a full brew of pus and blood were hardly an advertisement for the kind of glorious, sanitized war the public wanted to remember” (Jordan, 108). “The public wanted amputees to view their empty sleeves as a fulfilling honor, as something to be congratulated for, so that they could forget about the war (and disabled veterans) and get on with business” (109-110).

Clinical epidemiology, colonizing of health care and:

Rather than becoming marginalized within medicine, clinical epidemiology became an “animal of medicine” (Jonathan Lomas) employed in the interests of the medical profession: “It is something that belongs to medicine and something that medicine has developed and is using in some ways to colonize the rest of health care. . . . If the blame is to be laid anywhere, it may be at the feet of the relatively thoughtful medical leaders of North America who realized that evaluation was the future for the health care system. As such, the tools of clinical epidemiology are going to be central to the policy arguments that will allow individuals to lever funds for their own particular programs and initiatives outside of governments” (Lomas, in J. Daly, 118). . . . If clinical care was the soft underbelly of scientific medicine, then clinical epidemiology proved to be disciplinary armor against attacks. Not surprisingly, then, a major theme of the burgeoning critical literature on evidence-based medicine has been the identification of the ‘new paradigm’ argument of evidence-based medicine as a professional ideology” (J. Daly, 118).

Clinical Research as Social Practice:

“For much of this book, I have been arguing that clinical research is intrinsically a social process. The most routine of clinical investigations involves seemingly endless negotiations: negotiations to persuade participating researchers to abide by a uniform protocol even when there are ‘more interesting’ questions to pursue; negotiations with referring clinicians to send patients to the study; efforts to persuade patients to join such studies; negotiations with editors and coauthors about where to publish, how many tables to publish, and how many authors to credit. In all these activities, researchers have to persuade others – scientists, physicians, and patients – that their questions are worth asking; that their research plans (study design, treatment protocol, outcome measures) and their resources (clinical population, clinical staff, ancillary support) are adequate for the job; that their analyses are sound; and that what they have found should alter or affirm existing practice. All of these efforts entail the use of persuasion and power” (Marks, 240; cf. 243).

Clinical Trials, Social Construction of Results in AIDS Research:

“The point here is that clinical trials do not occur in a vacuum – and when the environment in which trials are conducted and interpreted is so contentious, then these experiments, rather than settling controversies, may instead reflect and propel them. Consider the range of factors and pressures that structured the determination of the ‘meaning’ of ACTG 155 as well as that of its precursor, ACTG 106: the methodological (and jurisdictional) disputes between infectious-disease researchers and biostatisticians; activist demands for access to drugs, plus or versus activist conceptions of ‘good science’; the social construction of hype; the profound need experienced by patients, and the kinds of pragmatic decisions that patients and research subjects make in response to their immediate perceptions of their interests; the marketing strategies of pharmaceutical corporations and the incentive structures to which these companies respond; the complicated role of practicing physicians in interpreting the data produced by clinical trials; the politics of regulation and deregulation; and the distinctive character of regulatory science as practiced by expert advisory bodies. In such an environment – given these stakes – is it any wonder that the interpretation of key trial results is often up for grabs? AIDS trials are not unique in this regard, as studies of cancer trials make clear. But insofar as the participation of knowledge-empowered activists increases the number of claims-makers and alters the distribution of credibility among them, AIDS trials may be particularly inclined toward conflicting readings” (Epstein, 315). . . . “. . . a study’s ‘definitiveness’ is not given but is a negotiated outcome and one that may be actively resisted by some parties to the controversy” (334).

Collective Investigation, in Britain & U.S., 1880-1910:

“The movement for collective investigation, which sought to harvest the experiences of general practitioners for medical science, demonstrates how uninterested such practitioners were in the scientific and social aspiration articulated by medical elites (Marks II, 148). In Britain, “collective investigation failed to bridge the profound gaps between the world of the hungry, scrabbling practitioners and that of the inquiring, flourishing consultant” (164). In the U.S., “the American history of collective investigation is largely a story about the difficulties of realizing medical community. . . . Therapeutic knowledge was a form of private property, jealously guarded. Only where practitioners saw a material advantage from publicity did they participate, as in Parke, Davis’s Therapeutic Gazette” (165).

Commodification of health and illness, beginning in early 20th century:

“ . . . is one of the most important and understudied developments in 20th-century society. . . . In using ‘commodification’ in the context of medical history, I mean to draw attention to the process by which bodily experiences such as pain are assigned value (monetary and otherwise) by physicians, patients, insurance companies, and others. ‘Commodification’ also turns our attention to the ways in which analogous collections of experiences are named, take on conceptual coherence, and are subsumed under the title ‘disease,’ in such a way that they can be called upon by professionals, laypersons, and politicians in bargaining for rights, power, status, or economic position. . . . It is this profound transformation in disease . . . that the story of sickle cell disease chronicles” (Wailoo, III, 18, 106, 126).

Community Hospitals, inegalitarianism and rigidity of in 1950s:

“Despite the equation of science, voluntarism, and democracy, then, and despite belief in the superiority of American innovation and technology over the supposed rigidity of foreign governmental health-care systems, American hospital service was both inegalitarian and structurally rigid. Indeed, in the 1950s insurance coverage divided the middle class clearly from the indigent. The latter’s care was covered by public assistance (in whole or in part), by cross-subsidy within the hospital, or occasionally by free care from hospital endowments or gifts. The postwar building boom allowed hospitals to provide hospital care, wherever possible, to private patients in single rooms, 2-bed rooms, or at most 4-bed rooms – but not for everyone (Stevens II, 253). . . . Hill-Burton allowed for the segregation of patients by race and for the continuation of the multiclass system (254). . . . During the 1950s, although the voluntary hospital became a center for community aspirations writ large, these aspirations were varied and conflicting. Americans wanted unlimited technology, access to it by the whole population, and a service that was affordable to all – all without recourse to a governmental system. It was not possible to achieve all these goals simultaneously. The policy of community, vague though it was, conflicted with social expectations about equity of access, with the cherished prerogatives of local doctors who saw the hospital as their workshop, and with the autonomy of the local voluntary institution as a center for private charity-giving” (255).

Constitutional Medicine, Rockefeller & Macy Foundation support of in 1930s:

“The connection the Rockefeller and Macy foundations drew between Draper’s work in constitutional medicine and the approach of psychosomatically oriented clinicians such as Dunbar and Smith Ely Jelliffe was clear. Draper’s, Dunbar’s, and Jelliffe’s work, along with that of the psychiatrists Adolf Meyer and William A. White, represented the holistic, organismic perspective in psychiatry and clinical medicine, a perspective whose popularity seemed to peak between 1930 and 1940, and one that brought into medical and psychological parlance such phrases as ‘the organism as a whole,’ ‘the organism and its environment,’ and ‘the mind-body unity.’ The research of Draper, Dunbar, Jelliffe, Meyer, and White, ‘organismic, psychobiologic, constitutional, and character-analytic,’ stressed ‘wholes, correlations, configurations and constellations,’ but rarely cause” (Tracy II, 81).

Constitutional Medicine, vogue of in U.S. in 1920s & 30s:

“Its vogue within the academic medical arena seems to have owed much to a particular cohort of aging ‘great clinicians,’ who attempted to keep the whole patient within the doctor’s gaze. [Walter] Alvarez and [Lewellys] Barker, and to some extent [George] Draper, were among the last of the great clinician-scientists, equally comfortable at the laboratory bench and bedside. At once impressed by the great advances of modern science and depressed by the fragmentation of their profession and its view of the sick person, they articulated a constitutional perspective that not only legitimated the clinician’s sharp diagnostic eye and his or her knowledge of human nature but also reiterated the value of modern laboratory analyses in service to the clinician. In some measure, then, the reappearance of constitutional thinking was a response to the general clinician’s loss of power within an increasingly specialized, research-oriented medical profession” (Tracy, 165). . . . the constitutionalists “shared an intellectual indebtedness to systemically and organismically oriented biologists, physiologists, and psychiatrists such as Ludwig Bertalanffy, Walter B. Cannon, William Child, Edwin Conklin, L. J. Henderson, William Ritter, Charles Sherrington, and Adolf Meyer. Indeed, the aim of uniting the biological and social sciences underlay the interdisciplinary constitutional program; it was an aim shared by funding agencies such as the Rockefeller and Macy Foundations. “On a practical level, Draper, Barker, and Alvarez also advocated the coordination of the medical diagnostic team under the supervision of the internist or generalist. They thus proposed a style of medical investigation and practice that shared the same holistic bent as their view of the individual patients. The three internists were critical of the fragmentation of the patient under the reductionistic lens of an increasingly specialized medical profession” (170-171; cf. Tracy II, 58).

Consumer/survivor movement, policy influence from 1990s on:

“. . . the growing influence of consumer/survivor perspectives has largely been a consequence, not a cause, of radical restructuring of the mental health field. . . . Consumer/survivor perspectives entered policy discourse in the wake of policy failures and have flourished in a climate of perpetual crisis and tight budgets. Patient advocates have proved most effective in reshaping the criteria for what constitutes effective treatment: an integrated program of health and social services aimed at recovery and rehabilitation. They have had far less success in addressing the systemic problems in health care and welfare policies that stand in the way of such an integrated approach. In that failure, they are in good company, for no stakeholder group has been able effectively to address those problems” (Tomes III, 727).

Contraception, early laboratory research in the 1920s:

“Laboratory research on contraception indicated important unexplored areas for physiological investigation. Social activists, who had encouraged prominent scientists to become interested in both the social value and the genetic implications of birth control, found these investigators revising the goals of their research. . . . The eugenic motivations underlying these studies, which had initially made them theoretically attractive to biologists, were gradually eroded. Concern with ‘human evolution’ ceded its place to interest in physiological mechanisms” (Borell V, 84). . . . With the rise of Hitler, the genetic arguments for birth control rapidly lost their appeal. By that time the scientific problem of how to achieve effective contraception had entered the professional consciousness. . . . Scientific discussion of birth control permanently altered from a question of justification to a problem of method: How could one achieve reliable and safe contraception? . . . The knowledge and control that [biologists] promised lay in understanding the whole reproductive cycle – not merely in evaluating the toxic effects of presumptive spermicides (85).

Contraception, field trials in the 1930s:

Expansion of testing, funded by NCMH, Clarence Gamble, et al., → establishment of birth control programs in some of poorest regions of N. America, entering into agreement with drug companies to test their products, e.g., foam powder in N. Carolina and Puerto Rico, condoms in Appalachian Mountains, contraceptive jelly in W. Va., and, in the 1950s, birth control pill in Kentucky and Puerto Rico. Their “emphasis on the simplicity of the method, moreover, meant both that research was driven by stereotypes about poor women and that it further reinforced those stereotypes. . . reduction of the birthrate, rather than the improvement of women’s health and self-determination, became the measure of success. In fact, the bias in favor of simpler birth control methods worked as a strong counterforce to the development of health and support services that might have increased women’s reproductive control” (Schoen, 30).

Contraception, racism in testing of new technologies:

“But researchers did introduce racial bias by overwhelmingly apportioning the potential health risks of experimental birth control technologies to women of color, including black American women. The Pill, Norplant, and the Depo-Provera shot were first tested in Mexico, Africa, Brazil, Puerto Rico, and India. Once approved, they were administered to large numbers of girls and women in U.S. venues that are disproportionately and, usually, overwhelmingly African American and Hispanic. . . . they were unwitting participants in a research study. . . . reproductive drug testing makes poor women of color, at home and abroad, bear the brunt of any health risks that emerge” (Montgomery, 279-80).

Cortisone, discovery of anti-inflammatory properties of in 1948 clinical trial:

“The anti-inflammatory properties which quickly made cortisone the archetypical post-war miracle drug had been discovered through Hench’s [Kendall’s rheumatologist colleague who shared the 1950 Nobel Prize with Kendall and Reichstein] clinical intuition, and the fact that such applications [had] not been anticipated by endocrinologists studying Compound E [pre-1947 name of cortisone] in animal models certainly demonstrates the limits of laboratory-based drug development research. Intriguingly, however, the use of 1930s-style whole cortical extracts to boost strength and to combat mental stress and fatigue, the prewar market for the remedy which must have shaped pharmaceutical companies’ expectations of the indications drugs like cortisone would acquire, continued into the 1970s, due mainly to a medical movement promoting the extracts as a cure for hypoglycemia” (Rasmussen III, 321). First injection of cortisone given on 21 September 1948 at Mayo Clinic by Charles Slocumb & Philip Hench, whose biochemist colleague Edward Kendall first isolated “Compound E” (cortisone) in 1936 (Hetenyi & Karsh, 426).

Cosmetic surgery, feminism and:

“The fact that cosmetic surgery was framed as individual self-realization long before feminism’s second wave helps to explain how it could so easily be reframed that way after that wave crested. In this sense the embrace of cosmetic surgery appears to be consistent with the liberal, individualist strand of feminism that emerged after the suffrage battle and surfaced again in the wake of the battle for the ERA. . . . feminism and individualism remain separate and distinct from each other. Our commitment to honoring women’s voices . . . should not obscure our ability to place these voices in context. Many women have chosen to buy into the celebration of the breast (and up a cup size or two), and to some degree popularized feminism has incorporated this practice. Cosmetic surgery may empower individual women by curing their inferiority complexes or, in less technical terms, making them feel better about themselves” (Haiken, 275-76).

Craniotomies (medical abortions), why they persisted into the 1890s:

“Murphy noted that younger physicians readily endorsed abdominal surgery [cesareans], but senior practitioners continued to perform craniotomies because of their reluctance to visit an operating theater for instruction in the new techniques. As in America, outmoded procedures had an afterlife due to the continued practice of unskilled physicians in an era that had no established procedures or standards for continuing education” (Ryan, 490). . . . by 1940, the diffusion of surgical knowledge made possible by organized medical education and board certification rendered craniotomy virtually obsolete” (493).

Crile, George on the scope of his general surgery ca. 1890:

“Although serious accidents resulting in lacerations and contusions necessitating amputations, setting of fractures, excisions of eyes, drainage of hematomas and trephining were daily occurrences, I am surprised in reviewing old records of this time to find that I also dealt with cancer of the breast, removed papillomas, carcinomas, sarcomas and epitheliomas, performed laparotomies and tracheotomies, and every type of gynecological operation, besides operating upon the usual hernias, hemorrhoids, colloid goiters, cystic goiters, which we treated with galvanic puncture. “But it was the cases of shock that haunted me like evil shadows” (Crile, I, 39-40; on the range of contradictory approaches to surgical shock ca. 1896, see 83).

Crile, Grace:

“So perhaps the quality of seeming dependence in a surgeon is a characteristic of a profession habituated to extending a hand and having an instrument placed in it” (Crile, I, 220).

Cushing, Harvey, in letter to Francis Benedict in early 1920s:

“Medicine, alas, nowadays is a circus of many rings and the students may be captured by a trapeze act while something really important is going on in another tent” (Thomson, 212).

Cushing, Harvey, while serving the war effort in France in 1915:

Cushing shocked when he was pressured by the wounded to remove a bullet or missile lodged in some harmless place “so that they might exhibit it to impressionable friends – ‘souvenir surgery,’ Cushing called it, which often led to serious and needless complications” (Thomson, 187).

Cutter incident, liability without fault doctrine arising from:

1956: Bendectin for morning sickness used by 40% of pregnant women in U.S.; National Enquirer story alleging birth defects (source was Melvin Belli); scientifically unfounded as per 27 studies, but Merrell Dow removed drug from market in 1983; 1992: FDA commissioner Kessler “decides” insufficient safety data on silicone breast implants (available for 30 years, used by two million Americans) and imposes ban that → onslaught of litigation, though no data links implants to connective-tissue disease. After court cases, “the companies that made Bendectin and breast implants didn’t become more attentive to making safer products because their products weren’t unsafe; they were just found to be unsafe in court” (Offit, loc 2000).

Cystic Fibrosis, as “transmuted disease” like diabetes:

“Yet the advances always seemed to be mixed blessings, ushering in new challenges such as antibiotic-resistant organisms or extensive after-transplant health problems. . . . From the outset antibacterial therapy was a balancing act for the clinician → alternating drug regimen in late-70s [cf. TB], and clinical research to regulate use of innovative drugs (Wailoo III, 80-83). From mid-80s, lung transplantation led to further reinvention of CF as a pulmonary disease, since increasingly older (= into young adulthood) patients increasingly died from lung problems, with surgery → further transmutation of CF owing to lifelong posttransplant care: “Having a transplant is a chronic illness” (surgeon Thomas Spray, in Wailoo III, 89). In 90s, on the eve of gene therapy, CF patients “in many respects traded one type of deadly disease for another – one that was manageable, but still deadly” (90). “The idea of gene therapy for cystic fibrosis filled an emotional need, but in the end it was a marketing myth, revealing more about the business culture and mainstream ideologies of the 1990s than about the actual potential of genetic technology” (110).

Cystic Fibrosis, complicated nature of, as of 1960s:

“With each discovery came gradual realization that CF was a complex clinical puzzle and that distinguishing between its primary and secondary features would remain a key conceptual challenge. It was at once a disease of malabsorption, a lung disorder, a pancreatic disease, and a malady defined by an overproduction of mucus” (Wailoo III, 69).

D

Da Costa, Jacob:

“The life of a surgeon is a very hard one. . . . and it is small wonder that surgeons as a class are not long lived” (Da Costa, 253-254).

Da Costa-Deaver debate over routine use of blood counts for patients with appendicitis (Howell VI, 207-221):

Da Costa advocated routine use of blood counts; John Deaver championed immediate appendectomy for acute appendicitis. “Da Costa did not feel that one ought to use a low count as a signal not to operate. . . much of Da Costa’s discussion had to do with using blood counts to follow a patient’s recovery after surgery. After an abdominal operation there existed the possibility that not all of the infected material had been removed. If the white count remained high . . . that would be a reason to go back in and clean out more pus, to do more surgery rather than less. . . . So, one wonders, why Deaver’s fervent, sustained opposition to blood counts?” (211). . . . The very success of surgeons such as Deaver could support a strong argument for general practitioners to use blood counts to diagnose appendicitis, as a crutch to make up for their lack of clinical experience and skills. . . . The meaning of the blood count as a technology was thus twofold. On the one hand, it was a mark of scientific progress, a technique using microscopes, creating numbers and graphs, and defining the practice of medicine as scientific. On the other hand, it was a crutch, a means to make up for the absence of other, more subtle clinical skills” (217).

Debate, in Paris Academy of Medicine re microscope:

“The Academy of Medicine was the ultimate ‘old boys’ club. But as we have seen . . . medical journalists could exert a certain amount of power, could have a constraining effect upon what happened in the Academy. The press could and did impose certain rules on the game. Medical journalists could function as gatekeepers, referees, disciplining the ‘scientific tournaments’ within the Academy from their vantage point outside the Academy.”

Deinstitutionalization, misleading nature of the term:

“. . . the term ‘deinstitutionalization’ was misleading. The fall of the asylum was as much a transfer of patients from one institution to another as it was a release of people with mental disabilities into the community. The coming of Medicaid in the United States enabled the states to shift elderly and poor residents to nursing homes, triggering a steep drop in the number of aged patients in state hospitals. During the 1960s, the population in U.S. nursing homes jumped from 470,000 to more than 900,000. By 1985, more than 600,000 nursing home residents were diagnosed as mentally ill, at a cost of $10.5 billion” (Dowbiggin, 181).

Dementia praecox and manic-depressive psychosis, differential diagnosis in early 20th century:

“Psychiatrists were well aware that, especially in difficult cases, the diagnosis turned on the quality of their reaction to the patient – simply, how they felt about him or her. Their reaction to patients they diagnosed manic-depressive was largely favorable. . . Looking at the dementia praecox patient, psychiatrists did not like what they saw: stolidity, catatonia, stupidness, disfiguring stigmata, and, most damning, overwhelming differentness. . . . [among psychotic patients] the men in this group were twice as likely as the women to receive a diagnosis of dementia praecox, to be classed as incurable and destined to deteriorate. Neither social class nor ethnicity came into play in this determination. . . . The most salient characteristics they saw in the manic patient were those associated in other contexts with an unbounded, out-of-control femininity that was at once frightening and alluring. In women this commonly took the form of an appealing ‘eroticism’” (Lunbeck, 146-151).

Depression, clinical trials of in 1970s, women and:

Although psychiatric research appeared to be more scientific, certain research practices went unquestioned: “First, pharmaceutical companies by this time period had begun a practice of designing their own research studies and analyzing their own statistics. Second, clinical trials on depressed patients more and more reinforced the circular definition of depression: depressed patients were those who could be shown to have responded to antidepressants. . . . In this time period, women continued to be clearly predominant in clinical samples that were by then selected based on symptoms. Investigators seemed to expect that they would have greater numbers of women in their samples and did not comment on this other than to say that women were more depressed than men. . . . By the 1970s, researchers generally stated that depression was more common in women than in men. . . . the supposition that depression was a disease of women had become entrenched to the point that studies that were done entirely on women were reported as studies on depression itself” (Hirshbein, 200-01, 202). . . . Estrogen became inextricably tied to the catecholamine hypothesis (despite a lack of convincing evidence) because of researchers’ assumptions that depression was something that tended to happen in women” (207). . . . the Beck Depression Inventory (1961) “did not address the issue of how men at that time (or in the future) might respond to a screening tool that asked multiple questions about feeling states” (211). . . . In Epidemiological Catchment Area (ECA) Study of early 80s, “the researchers used the presence of symptoms of depression in order to screen the population for untreated cases of depression. Researchers and epidemiologists assumed that it was valid to use criteria that had been determined in a hospitalized population in order to test the public for untreated or undiagnosed illness. Further, ECA and other researchers did not ask questions about what in particular was being measured with DSM criteria. Perhaps not surprisingly, when researchers used the criteria for depression (which had been developed in a patient population that was generally two-thirds or more women), they found that more women than men endorsed the symptoms (at a ratio of about two to one)” (213-214). . . . “Thus the connection between women and depression has been a closed circle: researchers have assumed that women are depressed more than men, which means that women have been preferentially diagnosed, treated, and theorized about, leading to further conclusions that women are depressed more than men” (215).

Depression, legitimation of psychiatry and:

“An escalating use of medical resources needs legitimization, and in the field of psychiatry this has come from the opportune discovery of both antidepressants and depression. Depression has been the cornerstone on which the neuroses, carved out and shaped for construction purposes, have been laid in successive layers. . . . Where the field has attempted a disinterested rationality, the results have been confounding rather than illuminative. At present the area is such that it almost seems we have to let whatever dynamics govern the field take us to extremes and back in the hope that successive traverses of the scanner will eventually reveal the shape of what lies beneath the surface” (Healy, 252).

Deprofessionalization theory:

“That health care movements and the growth of consumerism have empowered patients and, in doing so, eroded professional authority are the core claims of deprofessionalization theory” (Halpern, 843).

Diabetes, and also cystic fibrosis, as “transmuted disease”:

“Yet, the diabetic situation after insulin was rather more complicated – a ‘terrible beauty’ had been born. While the insulin developed by Banting, Best, Collip, and Macleod during 1921 and 1922 warded off coma and proved to be a tremendous boon to diabetics, the disease nevertheless remains a scourge, currently causing more renal failure, amputations of lower extremities, and blindness among adult Americans than any other disease. Defying any simple synopsis, the metamorphosis of diabetes wrought by insulin, like a Greek myth of rebirth turned ironic and macabre, has led patients to fates both blessed and baleful. . . . Unlike the concept of a stable and preordained natural history, the notion of transmuted diseases stresses both the malleability of disease entities and the role of human choice: chronic diseases manifest themselves in a particular patient according to the combined effects of physiological processes and human interventions . . . . Indeed, as diabetes has been diverted further and further away from its natural course, the very notion that this disease has an underlying natural history has become outmoded – clearly, a new paradigm of cyclic disease transmutation has taken hold. . . . just as the miraculous therapy of insulin has allowed long-term problems to emerge, other dreams of therapeutic success are also likely to find themselves mired in iatrogenic irony, with complete success proving to be complex and elusive” (Feudtner, 36, 37, 40, 41). Cf. cystic fibrosis (CF) as “transmuted disease”: “Yet the advances always seemed to be mixed blessings, ushering in new challenges such as antibiotic-resistant organisms or extensive after-transplant health problems. . . . From the outset antibacterial therapy was a balancing act for the clinician → alternating drug regimen in late-1970s [cf. TB], and clinical research to regulate use of innovative drugs (Wailoo III, 80-83). From mid-1980s, lung transplantation further reinvented cystic fibrosis as a pulmonary disease, since increasingly older (= into young adulthood) patients increasingly died from lung problems, with surgery → further transmutation of CF owing to lifelong posttransplant care: “Having a transplant is a chronic illness” (surgeon Thomas Spray, in Wailoo III, 89). Cf. also sickle cell disease, where, in the 1980s, bone marrow transplants “amounted to the replacement of one chronic condition (SCD) with another (GVHD) [graft-versus-host disease]” (Wailoo III, 148-149) = “trading one disease for another” (151).

Diabetes, ideals of management:

“For Joslin and other American practitioners throughout the twentieth century, the central ambition of diabetic management – and its chief frustration – has been the want of control. The eagerness with which physicians have pursued this elusive object, and in what directions, have distinguished various styles of managing the disease. Joslin may have declared partial victory at mid-century, but the advance of an alternative clinical style, which was less stringent regarding control and more concerned that patients live ‘normal’ lives, would put his beliefs on the defensive for several decades. Since the 1950s, physicians have debated whether control could prevent long-term complications, and if so, how effectively. In 1993, the published results of a large clinical trial argued that Joslin was essentially right; patients placed on tight-control regimens, compared with those under normal diabetic care, lowered their risk of delayed complications markedly. But this reduction of risk, important though it is, does not settle a more fundamental argument: even if the means of better diabetic control are now ‘at our door,’ the questions of what measures should we take to pursue control, and at what personal consequence to patients’ lives, remain open, beyond the dictates of statistical data” (Feudtner, 143).

Diabetes management, technology and:

Re diabetic women’s experience of pregnancy and motherhood, individual stories “were also shaped by larger aspects of the system of health care and how it changed over time. One such larger aspect was technological innovation, which yielded what can only be thought of as ‘miraculous’ success in improving the health outcomes for mothers and their babies. The questions posed by Priscilla White on behalf of young adult women living with diabetes in 1948 – will I be able to conceive, survive my pregnancy, and deliver a healthy child who will be unlikely to develop diabetes? – had grown relatively outdated by the 1990s” (Feudtner, 166-167).

Diagnosis, cultural and technological relativism of re chlorosis:

“One must conclude that diagnostic tests and drugs have been among many significant factors aggregating to constitute and reconstitute the disease. These technologies themselves have been products of particular cultural moments. The privileging of one technology over another (iron pills over hemacytometers) in the construction of disease cannot be understood apart from broader cultural assumptions about professional identity, patients’ identity, and the nature and location of disease. . . . [The users of technologies] have described diseases that suit contemporary concerns. Theirs have been contingent, and historically bound, portraits of disease, linked to a politics of medical practitioners and to particular problems of womanhood in their time” (Wailoo I, 70-71).

Dieting, explanation of new salience in 1890s:

“Of course, new latitudes developed at the same time, for example, in the area of heterosexual contact. But this was precisely the point. Constraint, including new constraints urged on eating and body shape, was reinvented to match – indeed, to compensate for – new areas of greater freedom” (Stearns, 54). . . . reliance on traditional religious sternness, lost considerable force in the last third of the nineteenth century. . . . This set the stage for expressing a new, compensatory need to maintain moral anxiety and the potential for virtue – in this case, through attacks on fat – even for people who enjoy an escalation of consumerism in other aspects of their lives” (57, cf. 122: dieting after WWII “a moral counterweight in a society of consumer indulgence”). . . . By 1900, the body’s testimony to character was explicitly being extended to control of weight. . . . Here, then, in the association of an eating regimen with antidotes to the generalized problems of consumerist excess, was a crucial source both of the moral qualities and of the intensity of the growing hostility to excessive weight” (60). . . . Dieting as a “vigorous moral counterweight to growing consumer indulgence” (60). . . . The clearest single revision of image generated by the turn-of-the-century diet standards applied to middle-aged women whose full figures now testified not to successful child rearing and maternal maturity but to an inability to maintain proper shape. For women concerned, however unconsciously, about an untraditional interest in sex and reluctant to imitate their own mothers’ level of childbearing, injunctions to discipline the body by keeping slim even into middle age may have seemed a welcome form of compensatory discipline” (64). . . . Diet concern increased after 1920 because of new commercial pressures and new medical knowledge, but also because there was more to atone for by personal denial or self-criticism” (67). . . . fascination of women’s magazines with the subject of diet and the moral derelictions of fat” (100). . . . [after 1920] renewed connections between dieting and moral demonstration, as an economy of abundance increased the need for compensation or guilt regardless of gender (104) . . . [by 1990] the moral content of the war against weight persisted (108) . . . At its base, the need to fight fat remained a matter of demonstrating character and self-control in an age of excess. Here, surely, was the reason that fat was long singled out over other known health problems” (117) . . . the drumbeat of slenderness made the linkage with compensatory morality quite clear” (117) . . . In an indulgent society, reluctant to talk of painful moral obligations lest they distract from consumer pleasures, the moral quality of dieting offered a stern contrast” (121).

Dieting, male, post-WWII:

weight tables and other measures “worked to enforce adherence to what Lasch identifies as fundamental tenets of the consumerist world: submission to experts, loss of the ability to decide on one’s own satisfaction, chronic anxiety. . . . the diet became a key to social order: even as it criticized middle-class lassitude, it painted a very particular portrait of postwar society and disciplined the dieter to fit into that picture. Equally importantly, it opened a way to the truth of the self through the avenues of consumerism” (Berrett, 810). . . . Diet, then, helped to reconcile traditional masculine individualism with the restrained corporate world. Providing a putatively healthful means of regimentation that ensured a middle-class man’s economic success, it connoted adulthood, expertise, possession of the store of knowledge by which such men have commonly reckoned their authority. But diet also mandated the imposition of discipline, submission to experts, and rigid self-control. Workplace lore would be supplanted by the science of nutritional values and exercise; having mastered this language, the male dieter had achieved mastery of himself” (816). . . . Feminists, in contrast, have detailed the numerous social pressures that converge on women’s bodies. Where these men relied on arbitrary personal goals for bodily transformation, women suffer (and cannot escape) constant ‘cultural manipulation,’ as Susan Bordo puts it, toward an unattainable ideal. . . . Their fat is presumed to ‘compensate’ for something, to be ‘about’ something internal – an expression of inner fear, anger, self-doubt – whereas men’s fat, according to these dieters, signified a narrow range of problems that were both external and remediable. (Most often, merely that they had eaten too much)” (819). “The linkage between appetite restraint and work was an important feature of the postwar men’s dieting culture and, again, often translated into corporate reality” (Stearns, 115).

Dieting, physicians and, 1890-1920:

“On the whole, however, a medicalization model does not work well in explaining the initial stages of the campaign against fat and doctors’ growing involvement in it. Rather, it applies to the intensification of efforts from 1920 onward” (Stearns, 44). . . . doctors moved into the diet area somewhat hesitantly; with some individual exceptions, they lagged a full decade behind fashion standards. Nor is there much evidence that health arguments really spurred the initial diet campaign. . . . Doctors themselves, in touting the new wisdom about thinness and beauty, picked up at least as much from the general culture as they contributed to it during the transition years in medical discussions between 1895 and 1920” (45).

Difficult Deliveries, in 1870s:

“By the close of the decade, surgeons had several options to resolve difficult deliveries: they could use forceps in the hopes of extracting a living child, or destroy the child by craniotomy. The cesarean and Porro operations [surgical removal of entire uterus and accessories] represented alternatives for only the most skilled operators” (J. Ryan, 473). . . . In 1879, “Evidence suggests that success in the cesarean operation remained beyond the reach of most physicians due to their lack of education and surgical skill” (478).

Diphtheria, diagnostic program in NYC of 1892-1894:

“While cholera had brought about the establishment of New York City’s bacteriological laboratory, it was diphtheria that established its importance to public health and medical practice. In two short years, from 1892 to the summer of 1894, diphtheria had been transformed. . . . The implementation of the diphtheria diagnostic program also transformed medical and public health practices in the city. The diagnosis of diphtheria was no longer under the sole control of the physician. Once bacteriological examinations were employed, the means for clear definition of the disease was established. The removal of children to hospitals and the barring of them from schools was standardized. Medical inspectors were to rely solely on bacteriological reports and enforce all the health department regulations. Few spoke out against the aggressive enforcement of these regulations in the tenement districts. For people living in these districts, the presence of diphtheria now resulted in greater intervention of public health authorities into their lives” (Hammonds, 84-86).

Diphtheria, prevalence among children in early 20th c:

“Diphtheria, not smallpox, was the dread killer that stalked young children in the early twentieth century. Moreover, it was a disease for which a relatively effective treatment existed. In the early 1890s, scientists in Berlin discovered that horses injected with heat-killed broth cultures of diphtheria could survive repeated inoculations with the live bacilli. Fluid serum extracted from the animals’ blood provided a high degree of protection to humans. An injection of this substance, generally called antitoxin, did not provide complete immunity, but only about one in eight exposed individuals developed symptoms. After 1895 diphtheria antitoxin preparations were available in the United States” (Sealander, 326).

Diphtheria, war-making metaphors in campaign of Diphtheria Prevention Commission (NYC), 1929-1931:

“War-making metaphors also contribute to the stigmatization of illness and disease. . . . now the rhetoric of the campaign cast blame upon parents and physicians for the disease. The very presence of diphtheria became a synonym for neglect. . . . The use of military metaphors in the control of disease also constructs enemies, and in this case the two primary ones were the disease itself and uncooperative physicians. Parents were a secondary enemy” (Hammonds, 206).

Drug company advertising (2000-2006):

“ . . . the economic advertisements from 2000-2006 refer more to direct cost (i.e., ‘costs less,’ ‘price,’ ‘affordable/affordability’) . . . than to indirect costs (‘back-to-work’) and cost-effectiveness (‘value’). . . . Third, the overall use of substantiating information has not increased with time. . . . Our findings suggest the possible influence of external policies on promotional activities for drugs. In particular, the FDAMA Section 114 legislation in 1997 may have contributed to the decrease in economic advertisements in general medical journals post-1997. Section 114 may have precipitated a shift in drug companies’ targeted audience from individual physicians making prescribing decisions at the patient level to formulary committees making coverage decisions at the health plan level (Palmer, Timm & Neumann, 2008). . . . The uncertain nature of much of the substantiating information that comes with less transparent practices, however, tempers these advantages. . . . the field is scattered with examples of biased, unsubstantiated, unavailable, and misleading clinical claims” (753).

DTC (direct-to-consumer advertising), prehistory of:

1920s-1930s: marketing to consumers by promoting the institutional brand of the ethical firm as a whole (E.R. Squibb; Parke, Davis). No mention of specific products or indications; “Institutional advertising, in other words, advertised the concept of ethical pharmaceuticals” (Greene & Herzberg, 795). 1950s & 1960s: “By the mid-1950s, the popular promotion of brand-name prescription drugs through public relations and new-generation institutional advertisements had become a thriving and unregulated gray area of DTC marketing” (796). Boom in industrial public relations, so that drug firms “began to use public relations techniques in new ways that came very close to popular advertising of specific branded products” (e.g., P-D’s Benadryl); Roche insert of Librium ad in issues of Time mailed to doctors for use in waiting rooms (Roche censured by Congress); “indirect-to-consumer advertising” in 50s & 60s accompanied by “energetic exploration of non-advertising marketing through newsreels, article placements, event planning, and other domains of public relations” (797). “Short shorts” offered to newspapers; “featurettes” to radio and TV stations; industry-ghostwritten media coverage via the “backgrounder,” seemingly legitimate news articles about new drugs in popular magazines written by journalists commissioned by the industry’s Medical and Pharmaceutical Information Bureau (MPIB). 1980s: In 1981, the British firm Boots Pharmaceuticals ran general ads for Rufen (prescription ibuprofen) and Merck ran ads for Pneumovax (antipneumococcal vaccine). 1985: FDA issues notice asserting jurisdiction over DTC advertising and indicating prior standards of “fair balance” and “brief summary” provided American consumers with adequate safeguard from misleading claims in general-interest ads. This effectively limited DTC advertising to print media (799). 1997: following hearings convened in 1995, FDA issued a draft guidance on DTC advertising, followed by a final guidance in 1999, that redefined “adequate provision” of risks and benefits to include reference to a toll-free number or website (800). At the same time, FDA continued to insist that company press releases needed to give fair balance to the benefits and risks of drugs. “ . . . one might ask who needs press releases when consumers continually encounter celebrity endorsements, ‘astroturf’ (planned and industry-funded ‘grassroots’ disease awareness programs), friendly (or for-hire) science writers, and the like? . . . We are left in the same strange situation that has prevailed for much of the twentieth century: explicit forms of advertising are carefully monitored and regulated but widely decried, while informal or indirect promotions still flourish with virtually no oversight” (800). “ . . . federal regulatory categories have been inadequate to capture the bewildering profusion of marketing techniques employed by the pharmaceutical industry. ‘Ethical,’ ‘advertising,’ ‘labeling,’ ‘education,’ ‘public relations’: each of these [terms] has meaning, technically, but they are of limited value when companies routinely pursue broader marketing strategies that synergistically combine all of them, often in the same campaign. . . . There is no golden age to return to by stamping out promotion” (801).

E

ECG, mechanical ingenuity of:

“[The electrocardiograph] embodied great mechanical and electrical ingenuity. Its most delicate part was the string galvanometer, initially evolved by Willem Einthoven in 1900-1903, and probably the most sensitive electrical measuring instrument which had been devised by that time; and it used a form of material – the quartz filament – that had first been made less than twenty years before. In order to make visible the tiny movements of the filament, condensing and projecting microscope lenses were used: the lens design was highly mathematical, and modern kinds of glass were required. The light source was a carbon arc, the brightest available point source of light, which had become a reliable device only in the previous two decades. Finally, the movements of the filament, after being projected, were recorded on a photographic plate or film which was coated with a recently developed sensitive emulsion. Had any one of these pieces of technology not been available, the electrocardiograph could not have taken shape. But it was not the ingenuity behind its conception which, in itself, separated the electrocardiograph from existing medical technology. It was the combination in one instrument of so many different and new ideas. Twenty years before, so many of its components remained to be invented that the instrument, taken as a whole, was almost unthinkable” (Burnett, 53-54).

ECG (electrocardiogram; also EKG) and diagnosis:

“Characteristic ECG findings for the diagnostic entities of atrial fibrillation or of heart block were described in the ECG’s first decade. However, the ECG findings were not essential to the definition; both fibrillation and heart block had already been clinically described. In the early twentieth century, ECG users created their first unique diagnostic role in the diagnosis of myocardial infarction. Coronary artery disease was rarely diagnosed during life before 1918, although it was frequently described at autopsy. However, premortem diagnosis of coronary disease increased dramatically following James Herrick’s 1918 description of particular ECG changes in myocardial infarction. The disease of myocardial infarction had become defined, in part, by ECG changes. Yet even at an elite hospital such as the MGH, the ECG was not a routine test for patients with suspected infarction as late as 1935. By 1940, however, long discussions about the diagnostic efficacy of the ECG tracing are found in the case records of patients presenting with chest pain. By around 1950, the ECG had become routinely used, as it is today, in the diagnostic evaluation of patients critically ill for whatever reason” (Howell IV, 285).

ECG and polygraph, relationship of:

“The polygraph and electrocardiograph record fundamentally different signals; this difference may have accounted for much of the difficulty in moving perceptually from one to the other. The polygraph records the mechanical events produced by the beating heart as reflected in the pulse waves of the veins and arteries. It is an amplifier of the mechanical waves that can be seen in the veins of the neck or felt in the arteries of the neck or wrist. The polygraph does not change the type of signal which is being recorded; the mechanical motion of the blood vessel (artery or vein) is reflected – via a series of cups, tubes and levers – in the mechanical motion of the pen on the paper. The electrocardiograph, however, detects electrical signals produced by the heart. Unlike the polygraph, it does change the type of signal used. The electrocardiograph detects an electrical signal far too small to be appreciated by human senses. It changes this electrical signal generated by the heart into a mechanical signal which can be recorded; thus it acts as a transducer. In Einthoven’s experiments, first the capillary electrometer and later the string galvanometer (the key element of the electrocardiograph) acted as transducers, changing electrical signals into mechanical ones. [Thomas] Lewis, however, used the electrocardiograph methodologically as though it were an amplifier of palpable or visible pulse waves rather than a transducer of electrical signals” (Howell V, 94-95). . . . [Lewis and Mackenzie used the electrocardiograph as though it were a superior polygraph and thereby evinced] “an important misconception of the ECG. For, while one could feel the arterial pulses and see the venous pulses recorded by the polygraph, one could neither feel nor see the electrical currents recorded by the electrocardiograph. The a, c, and v waves were within reach of the unaided senses and could be analyzed by the skilled observer without instrumental aids; the P, QRS and T waves were not. Without the electrocardiograph, no amount of training enabled the practitioner to determine the form of the T wave” (96).

Echocardiography, initial dismissive reaction toward:

When famed Harvard cardiologist Paul Dudley White visited Lund University in 1956, Edler demonstrated his technique and White “uttered a polite dismissal: ‘an ingenious method.’ Later in the same year, Edler had the opportunity to present the technique to André F. Cournand, who was visiting Lund after receiving the Nobel Prize in Stockholm for his early work in heart catheterization. . . . Dr. Cournand also showed no interest in this noninvasive technique. . . . Edler felt let down by his international colleagues and was now hesitant to publish” (Singh & Goyal, 435).

ECT (electroconvulsive therapy):

Conflict regarding memory loss in 1970s & 1980s in context of Americans’ preoccupation with “memory and heritage” re fragmentation/fragility of American society: “. . . the memories that former ECT patients complained about the most were those that reflected participation in the American public. . . . While psychiatrists measured treatment outcome and side effects through standardized assessment tools and statistics, disaffected patients countered with individual stories” (Hirshbein II, 158). During this period, ECT researchers did attend to patient feedback after treatment, but “ECT researchers had little use for patient narratives but rather wanted to manage subjective reports with rating scales, assessment instruments, and statistics” (155). By late 80s and early 90s, ECT practitioners were “more vocal about the benefits of the treatment – and less apologetic about its side effects.” This changed over the last decade owing to “articulate and sobering first person accounts of ECT and its memory side effects” [which] “began to challenge the dichotomy between the pro and anti-ECT camps” (159). The “gender component of the debate” = after feminist movement, many women “no longer interested in having psychiatrists interpret their thoughts or feelings (Tomes, 1994). Yet, ECT required women to surrender their own narratives – and sometimes portions of the memory of those narratives – in order to try to recover from depression” (160). . . . “And, as with pro-tobacco insistence on specific evidence, ECT providers’ increasingly stubborn unwillingness to consider the abundant evidence of cognitive side effects of ECT (albeit generated without randomized controlled trials) appears increasingly irrational and/or strongly biased” (161).

Ehrlich, Habilitation (in medicine at Univ. Berlin) of 1885:

“The study of oxygen uptake became the basis of Ehrlich’s 1885 Habilitation . . . Das Sauerstoff-Bedürfniss des Organismus [The Requirement of the Organism for Oxygen]. In this, Ehrlich showed that dyes were transported to the cells in the blood stream as fine particles, in which form they entered the cells. Then, held in environments in which they were sensitive to protoplasmic consumption of oxygen, they underwent reduction [became colorless]. This was manifest as color changes. Thus Ehrlich made maximal use of dyestuff properties by observing the in vivo reduction of colorants with different oxidation-reduction potentials. The infusion of knowledge from a technological pursuit into a scientific discipline was even more apparent in the 1885 dissertation than it was in earlier studies. . . . Ehrlich’s in vivo experimental approach to the study of oxygen uptake by observing color changes in dyes” (Travis II, 395-396).

Ehrlich, surgical metaphor apropos side effects as integral to effective chemotherapy:

“Where would matters end if, in diseases which were not of a vitally dangerous nature, the surgeon were permitted to undertake only such operations as were absolutely free from risk? An irrational restriction of this kind would halt the marvelous development of surgery at a stroke. The surgeon works with a steel knife, the chemotherapist with a chemical one which he uses to separate the morbid from the healthy. It is clear that there can be no difference between the therapeutic ethic of the physician and that of the surgeon. And if such a difference does actually exist, it rests only on the notion that medicine should do no harm, whereas everyone knows that an operation always entails certain risks” (from his The Experimental Chemotherapy of Spirilloses [1910], quoted in Bäumler, 180).

Electrification, role of in surgery:

Beginning in late 1890s, “the problem of ineffective illumination lessened as electric lights were installed in operating rooms. . . . [By early 20th century] A hospital’s employment of technology, particularly for surgeries, is best explained in one word: ‘electrification’” (Rutkow, 172-173).

Electrocardiograph:

“The electrocardiograph is the most impressive bit of apparatus which the wit of man ever devised” (Wilson, 245).

Emotion, orthodox medicine’s use of in explaining successes of alternative medicine:

Emotion was “one of medical orthodoxy’s most important concepts for defining and negotiating its relationship with alternative modes of healing and with its own history . . . orthodox physicians . . . developed new concepts and new rhetorical strategies in rationalizing away the apparent successes of alternative systems: the emphasis on the body’s natural healing powers, the promulgation of the notion of ‘self-limiting’ diseases, and the formulation of a theory of placebo – all were mechanisms that explained alternative knowledge claims in terms of orthodox theory. . . . The central mechanism that allowed psychosomatic clinicians to mediate between orthodoxy and the occult was emotion. Emotion explained millennia of apparent premodern medical successes, naturalized the power of the occult, and marshaled these naturalized powers – in the form of emotion – for orthodox medicine” (Dror, 77-78).

Endocrinology, and psychoanalysis in U.S. in 1920s & 30s:

“In 1922 in Scientific Monthly . . . the well-known science popularizer Edwin E. Slosson stated that ‘a few years ago, psychoanalysis was all the rage. Now endocrinology is coming into fashion. Those who recently were reading Freud and Jung have now taken up with Berman and Harrow.’ This comparison between psychoanalysis and endocrinology might now seem far-fetched but was then perfectly understandable. The glandular juices were seen to exercise a palpable influence over both body and soul” (Nordlund, 90-91) . . . According to Louis Berman, in his best-selling New Creations in Human Beings (1938), “Morbid individuals are therefore perfect objects for regenerative hormone therapy which, according to Berman, actually works in practice, in contrast to psychoanalysis and other alleged pseudo-sciences” (Nordlund, 99).

Endoscopy, routinization of in 1990s:

“What changed the image of endoscopy in the mind of the surgical community and turned arthroscopy, cholecystectomy . . . and numerous other endoscopic surgical techniques into common operative procedures? The introduction of the small medical video camera attachable to the eyepiece of the arthroscope or laparoscope was an initial major step. French surgeons were the first to develop small, sterilizable, high-resolution video cameras that could be attached to a laparoscopic device. With the further addition of halogen high-intensity light sources with fiber-optic connections, surgeons were able to obtain bright, magnified images that could be viewed on a video monitor by all members of the surgical team rather than by just the surgeon alone. This technical development had consequences . . . including suturing and surgical reconstruction done only with videoendoscopic vision” (Lenoir, 29).

Epidemic encephalitis, and development of neurology in NYC:

“By the late 1920s, New York City had emerged as the international center of encephalitis research. Interest in the disease, however, was informed as much by neurologists’ aspiration to reform their specialty and advance their status within the biomedical community as it was by their desire to solve the riddle of this novel infection. New York neurologists hoped to use the disease as a new investigative model that would link their work to the emerging professional field of public health through the bacteriologic laboratory” (Kroker, 109). . . . “Throughout the 1920s, New York neurologists, in concert with city health officials, rallied around the banner of encephalitis in the hope of creating an expansive research project that would establish their field on a new and relevant scientific footing” (121). This led to the establishment of the Association for Research in Nervous and Mental Disease in 1920 and the focus on encephalitis of the New York Academy of Medicine’s section on neurology and psychiatry in an effort to reform how neurology was practiced in NYC’s hospital system (126). By the early 30s, “the diagnosis of epidemic encephalitis nonetheless began to unwind. The lethargy and ocular paralysis that had originally defined the acute version of the disease had been largely displaced by insomnia in adults and behavior disorders in children . . . The universalizing aspiration of the New York City neurologists had failed. Epidemic encephalitis had aroused little sustained interest beyond the rarefied atmosphere of New York, and the disease remained a largely local affair” (141-42).

Eugenic sterilization, new arguments by end of 1950s:

H. Curtis Wood, president of the Human Betterment Assn of America, devised a new argument in terms of the defense of liberty and of medical autonomy. “They [doctors] enjoyed immense prestige and a high, although declining, degree of patient deference, and their vocabulary and expertise enabled them to invoke arguments powerful enough to persuade even the most skeptical patients of sterilization’s merits. Doctors would do so in increasingly large numbers in the late 1960s and early 1970s” (Hansen & King, 184-185).

Eugenics (American), use of religion in promoting:

“ . . . the Christian God pervades eugenics from the start. . . . To win public support, many eugenicists began to couch their message in religious terms, producing eugenic ‘catechisms,’ sponsoring contests for the best eugenic sermons, and conducting ‘Fitter Family’ and ‘Better Baby’ contests based on good moral and mental hygiene” (Bruinius, loc 204, 292). On the other hand, Christian churches, esp. the Roman Catholic Church, were the most powerful lobby opposed to state sterilization bills. Buttressed by a papal denunciation of coerced sterilization, “the Church was the most outspoken defender of what we would now call the rights of the mentally handicapped to freedom from coerced sterilization” (Hansen & King, ch 1).

Eugenics movement, American psychiatry and in early 20th century:

“Uncharitable opinions about specific national immigrant groups and their susceptibility to insanity, crime, and vice were, however, minority attitudes among U.S. psychiatrists. Most thought the real need was not new legislation but better enforcement of the old. Like Canadian psychiatrists, their attitudes toward immigration arose largely if not wholly from their progressive sense of duty to combat the entrenched business ‘interests’ allegedly profiting from unrestricted immigration. Political interference, in their view, only hamstrung the efficient operation of medical inspection and state hospitals, damaging the nation’s public health and financial resources and inflicting incalculable suffering on the insane newcomers deported or denied admission. Psychiatrists’ perceptions thus had more to do with progressive sentiments, occupational concerns, professional self-image and humanitarian considerations than generalized prejudices. They supported whatever legal reforms promised to improve their working conditions and promote reliance on psychiatrists as public health experts. That translated into better screening of foreigners with a predisposition to mental illness, and if psychiatrists exaggerated the immigrant threat, it was for these reasons more than anything else” (Dowbiggin II, 212) . . . At its 1912 annual meeting the AMPA issued a report that claimed, inter alia, that “all questions of fact or opinion involved be governed and decided solely by the teachings of modern psychiatry” (215).

Euthanasia, changing domains of:

“The origins of medical euthanasia, I will argue, lie in a movement of dying from the domain of religion, through that of medicine and finally into the jurisdiction of positive law and public policy. . . . only in the nineteenth century that treatment of the dying, as such, became a medical concern and medically regulated. . . . a new ethic developed in which the physician was expected to remain present at the deathbed. The law of the deathbed had shifted from religion to medicine. . . . Euthanasia now stood for the new task of the medical profession – to assist dying patients in their last hours, short of hastening death” (Lavi, Intro).

Euthanasia, deathbed pain in late 19th c.:

“ . . . the senselessness [literally] of dying led both to the scientific assertion that dying could not be painful and to the technological call that it should not be painful. . . . The use of anesthetics such as chloroform was designed to mechanically reproduce this ideal of natural death in which pain is no longer necessary, and therefore no longer bearable. Pain became senseless precisely because the only sense that it had was given to it by the medical machinery aimed at annihilating it” (Lavi, 73-74).

Euthanasia and eugenics, relationship between:

“one could suggest that the practice of euthanasia on newborns was the missing link between eugenics and euthanasia. What remained a constant in the progression from birth control, to sterilization, to infanticide was not only the will to control birth but, more specifically, to remove from existence certain forms of incurable social suffering. . . . The problem of death control was closely linked to the problem of birth control. . . . The case of the dying, like that of the mentally retarded and the physically handicapped, involved a very specific kind of suffering . . . that was located in the social body as much as the individual’s body” (i.e., the suffering of family, friends, bystanders, and the physician) (Lavi, 109-111).

Evidence- Based Medicine (EBM), and specious distinction between experience and evidence:

“Residents’ encounters with EBM show that pure ‘informal experience’ and ‘formal evidence’ do not really exist. Any consultation of written research is already prestructured by the overall diagnostic or treatment goal and informed by other research and accumulated clinical observations. Similarly, any experience is grounded in the hierarchy of written research evidence, anecdotes, consensus, and hunches of generations of clinicians and basic researchers. ‘Evidence’ and ‘experience’ constitute complementary resources that help residents in learning treatment options and patient management. . . . The quality that guides clinical decision making is not the tradition-bound experience put up as a straw person in the medical and sociological literature, but a mixture of skills and uncertainties grounded in medical knowledge. Medical knowledge acquisition can thus not be reduced to either evidence or experience but inevitably contains a mixture of the two, albeit not necessarily in equal proportions. . . . The proposed problem for which EBM is the solution does not match the reality of learning to doctor. Residents generally do not agonize as much about variability or dehumanizing care as they worry about getting through the residency without killing patients, completely exhausting themselves, accumulating negative evaluations, or getting sued” (Timmermans & Berg, 163-165).

Evidence-Based Medicine (EBM), and art of medical practice:

“EBM itself may pose as great a challenge to the survival of the art of medicine as does the pharmaceutical industry. . . . [Critics of EBM suggest that] the data gleaned from population studies and clinical trials are portrayed as a form of Galenic scholasticism, in which the skills associated with close readings of texts are characterized as of greater clinical value than the actual patient/physician encounter. . . . The patient increasingly has been replaced by the statistic. To the extent that patient narratives have become suspect, the insights that such cases may offer have become devalued as merely anecdotal. . . . EBM provided a mechanism to ensure that academic medicine would finally trump what it viewed as the unscientific nature of medical practice” (Kushner II, 65-67).

Evidence-Based Medicine (EBM), and justice in health research:

“Industry has an obligation to its shareholders to make a profit, which creates significant incentives for introducing bias into the research process. . . . Recent investigations have demonstrated a clear association between funding source and trial outcome, with commercially funded research three to four times more likely to have findings favorable to the sponsor than non-industry-funded research” (Rogers & Ballantyne, 191). . . . Three related factors were responsible for the under-representation in clinical research of women, ethnic minorities, and other groups perceived to be ‘vulnerable,’ such as children and prisoners: explicit protectionist policies of exclusion; practical considerations of research efficiency and cost; and false assumptions about the relevance of sex, gender, and racial differences. . . . These justifications reflect what we call the ‘distortion paradigm,’ which defines the white adult male body as normal and considers female biological processes as distortions of this norm” (192). . . . Given that environmental and behavioral factors are estimated to cause up to 70% of premature death in the United States, a narrow focus on drug development can never maximize population health outcomes” (195).

Evidence-Based medicine (EBM), and patient choice:

“. . . the most accurate and valid evidence will relate to narrowly circumscribed questions, which may be of limited use for patient choice” (W. Rogers, 98). . . . the nomination process for guidelines development reflects professional enthusiasms and existing research rather than patient-identified concerns (99). . . . pharmacological interventions are those for which there is the greatest evidence. This means that non-pharmaceutical interventions such as counseling, physical therapies and lifestyle interventions, which may be of interest to patients, may be excluded from consideration because no evidence exists either for or against their effectiveness. . . . decisions about which outcomes are to be counted as benefits and burdens in reviewing existing research is a crucial part of the process, and one which traditionally has relied upon biomedical markers rather than consequences which are of importance to patients” (100). . . . “The evidence which had the potential to enhance patient choice has been synthesized into a list of instructions, leaving the patient with the options of either accepting or refusing the guideline recommendations. . . . the current practice of implementing EBM through the use of guidelines limits rather than facilitates patient choice” (101).

Evidence-Based Medicine (EBM), and psychiatry:

While EBM directly tackles ethical questions related to the efficacy of psychiatric treatments, it largely ignores other major ethical debates in psychiatry, such as the debate concerning the validity of diagnosis. Instead, EBM assumes that psychiatric diagnoses are valid and focuses attention on which tools do a better or worse job of establishing diagnoses. [DSM behaviorist criteria and quantification through numerical scores on symptom rating scales] obscure the substantive question of whether a given diagnostic entity is a disease requiring medical intervention (Gupta, 284). . . . A final ethical issue in psychiatry that EBM is unlikely to resolve is the question of whether involuntary psychiatric treatment is legitimate. . . . One must introduce ethical considerations lying outside the utilitarian ethics of EBM, such as notions of human rights, to consider whether involuntary treatment is ethical (Gupta, 285). . . . advocates of an evidence-based approach to psychiatry . . . neglect to consider that, at best, EBM’s ethics can only address a limited set of ethical questions. As a result, EBM cannot, at least on its own, provide the ethical substantiation sought by some psychiatrists” (286).

Evidence-Based Medicine (EBM), and tacit knowledge:

“ . . . the interpretation of data provided by a randomized controlled trial, like other forms of scientific and statistical knowledge, moves between subsidiary and focal poles of reasoning. The randomized controlled trial, like clinical intuition, is therefore also premised on the existence of unspecifiable processes. The emphasis on randomized controlled trials is an attempt to elicit clinical information under experimental conditions; yet its usage leads to errors because of the assumption that individual events are unchanging so that they may be correctly approximated with the epidemiological law under investigation. It is in dealing with the contingency inherent in the clinical encounter that clinical skill as a form of tacit knowledge becomes crucial, and is not adequately accounted for by proponents of evidence-based medicine” (Braude, 196).

Evidence-Based Medicine (EBM), and the politics of standards:

“ . . . the politics of standards should not be located solely in the regulatory-political environment from which standards emerge but in the standards themselves. Standards are inherently political because their construction and application transform the practices in which they become embedded. They change positions of actors: altering relations of accountability, emphasizing or deemphasizing preexisting hierarchies, changing expectations of patients. . . . standardization is, paradoxically, a dynamic process of change. The implementation of clinical practice guidelines or novel nomenclatures generates action and creates new forms of life” (Timmermans & Berg, 22-23). . . . standardization is political since it inevitably reorders practices, and such reorderings have consequences that affect the position of actors (through, for example, the distribution of resources and responsibilities). The shifting relations between the health care professions, the disappearance of black hospitals, and the changing legal status of the record in early-twentieth-century U.S. medicine all indicate that it would be a painful mistake to denote standardization as a mere technical issue, not worthy of sociological attention. . . . it is not too exaggerated to state that the introduction of the patient-centered record helped reconfigure the patient in early-twentieth-century U.S. medicine. The creation of the patient-centered record implied an individualization of the poor and the increased valuation of their status as patients through the granting of individual records to them” (53, 54). . . . we want to construe the activity of working with standards as an active act of allowing oneself to be transformed while at the same time transforming the standard. . . . working with guidelines is an active act because of the required proficiency. Rather than a matter for ‘mindless cooks,’ active submission appears to be a highly skillful activity” (73).

Evidence-Based Medicine (EBM), and uncertainty among residents:

“With Light, Katz, and Atkinson we would expect that EBM perpetuates a dogmatic, control-centered form of medicine in residents’ daily clinical practice, validating the power of medicine while accentuating the strengths and weaknesses of its scientific basis. . . . The residents we interviewed, however, noted that the most immediate effect of the increased reliance on guidelines and medical literature was a new source of uncertainty to be managed. Residents not only need to know how to diagnose and treat patients but how to acquire epidemiological research skills as well. . . . First, some residents felt uncomfortable about their ability to search for primary or review articles. . . . Second, residents expressed their discomfort in evaluating the quality of primary articles. . . . Third, residents questioned the rationale behind conducting studies and expressed suspicion about the effects of economic incentives on the quality of medical knowledge. . . . The new research-based uncertainty leads to new forms of managing the uncertainty. . . . Learning how to deal with the specific uncertainty of research led, thus, to a new kind of research-infused skill, an additional dimension of learning to doctor” (Timmermans & Berg, 152, 153).

Evidence-Based Medicine (EBM), overvaluation of RCTs in:

“There is also the problem of clinical epidemiology being concerned with groups. I have yet to find a clinical epidemiologist who openly acknowledges the ambiguity of the probability concept. They see in a scientific study that 90 percent of patients were cured, and then they take it for granted that there is a 90 percent chance that Mr. Brown will be cured. But that is an extremely interesting problem. What is a probability of one person being cured? Either he is cured or he is not cured. What if Mr. Brown is very old, and another study has shown that old people in general have a worse prognosis? How do you combine these probabilities? It is extremely complex, but that is what clinical medicine is all about. . . . [Trials] can run you ragged on the trivial” (Hendrik Wulff, in J. Daly, 108, and more generally 103-109).
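
A schematic illustration (mine, not Wulff’s) of the ambiguity he describes; the 90 percent figure is from the quotation, while the subgroup split and rates below are hypothetical. A trial reports a group frequency, and a second study reports a frequency in a subgroup:

$$P(\text{cure}) = 0.9 \ \text{(trial group)}, \qquad P(\text{cure} \mid \text{old}) = 0.7 \ \text{(second study)}.$$

Both are frequencies over groups, not properties of Mr. Brown. Even combining them requires a model of how the trial population was composed; if, say, 80 percent of trial patients were young (cure rate $p_y$) and 20 percent old (cure rate $p_o$), then

$$0.9 = 0.8\,p_y + 0.2\,p_o,$$

a single equation consistent with many pairs $(p_y, p_o)$; the group statistic underdetermines the prognosis for any one patient assigned to a narrower reference class.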

Evidence-Based Medicine (EBM), values of:

EBM makes two assumptions (Gupta 2003). First, it assumes that by pursuing truth (in other words, true conclusions about medical interventions) we will discover the most effective means of achieving health. . . . Second, and more controversially, EBM assumes that only by pursuing EBM do we maximize the likelihood of arriving at the truth about the effectiveness of medical interventions. In other words, only by pursuing EBM can we arrive at the best means of achieving health. [para] Taken together, the moral value that we should pursue the most effective means of achieving health, and the epistemological assumption that EBM is the most effective means of achieving health, lead to an inescapable moral conclusion: we should practice EBM. . . . This moral obligation constitutes the bedrock of EBM’s ethical commitments – improved health through improved knowledge of the effectiveness of interventions, best achieved via EBM (Gupta, 280). . . . EBM’s approach to decision making gives researchers, first and foremost, the authority to define what constitutes improved health or decreased harm to health. It is researchers who typically choose the outcomes under investigation in medical research, and it is these outcomes that EBM seeks to achieve. This is not to say that deontological and virtue ethics commitments are dismissed; however, the central purpose of EBM practice is not to foster our virtues or enhance our actions or duties towards patients. If the practice of EBM did not improve health outcomes (consequences), there would not necessarily be the urgent imperative – as there currently is – to teach and learn it. [para] EBM is not only consequentialist, it also contains a commitment to utilitarianism. . . . EBM does take a broadly utilitarian ethical perspective, in that it focuses on maximizing good by maximizing particular health outcomes” (281). . . . But even though utility is defined as the satisfaction of patient preferences, this does not capture what is emphasized as good in EBM. If patient preferences were satisfied but EBM led to no improvement in health, or to even worse health (according to researchers or clinicians) than pre-EBM practice, the principle of EBM utility would not be satisfied” (282).

Evidence-Based Medicine (EBM), why it took off in the 1980s:

“With spiraling health care costs, more emancipated patients/consumers, increasing attention to medical practice variations, an information overload, and an overall critical scrutiny of the role of experts and professionals in society, the medical profession felt it had to take unprecedented action to maintain its position as exclusive safe-keeper and wielder of medical knowledge. ‘Unexamined reliance on professional judgment,’ it is argued, will no longer do. ‘More structured support and accountability for such judgment,’ in the form of evidence-based guidelines, is required to ensure the trust in the medical profession” (Timmermans & Berg, 16; cf. J. Daly, 123-127, esp. 124, re effort of clinical epidemiology [limited to medically trained researchers] to distance itself from public health epidemiology [with prominent role of non-MDs]).

Evidence-Based Standards, compatibility with creativity and expertise:

“. . . the generative power of procedural standards thrives on the local expertise the nurses and doctors develop in their interaction with these tools – and vice versa. Standardization does not result in an obedient workforce, with individual health care workers accomplishing their tasks in a rigid, preprogrammed fashion. Quite the contrary, affording skillful and non-predetermined interactions with the procedural standards enhances their generative power. A proper (both effective and desirable) deployment of procedural standards creates a synergy between the staff members’ embodied expertise and the tool’s coordinating activity, in which expertise and coordination mutually reinforce each other” (Timmermans & Berg, 78).

Expert Patient, internet and:

“In addition, the cultural (and spatio-temporal) extension of medical knowledge and advice through globally available information sources heralds the arrival of the so-called ‘expert patient,’ as well as the procuring of medical advice and products by people outside the specific regulatory and ethical provisions found within any one country. The Internet fosters not only a ‘new medical pluralism’ (Cant and Sharma, 1999) but also what Giddens (1991) has called the ‘reskilling’ of lay people in their engagement with, definition and management of health and illness. Crucially, this ‘expert patient’ challenges the power of the physician (Hardey, 1999) and so his or her professional authority. . . . this whole process might be regarded as the socialization of clinical diagnosis” (Webster, 448).

F

Fair information practice, vs patient control:

“ . . . giving control over personal medical information to the patient is impractical and that it is the needs of the medical care system that should be determining. In this view, most decisions about information flow, as well as choices regarding measures to protect patients from harms caused by information disclosure, should be made by health care providers, purchasers, and insurers. A policy of this type is generally based on what are termed ‘fair information practices’ rather than patient control” (Woodward, 339). “Fair information practices are rules of the road for disclosure. . . . Limitation of access to ‘authorized viewers’ may be completely compatible with the elimination of privacy” (340).

Family planning services in 1960s, racial tensions and:

“ . . . the new prominence that family planning gained in the 1960s created tensions between white supporters of the new programs and many African Americans. . . . The low participation of white clients in some programs seemed to bolster the charge of racial genocide regardless of whether segregation resulted from the targeting of black clients for family planning or from the continued refusal of white women to visit integrated services” (Schoen, loc. 959).

Federal health policy, Rosemary Stevens on:

“. . . a constantly negotiated health care system or, more accurately, a series of systems, filled with discoveries and surprises, temporary coalitions, changing rhetoric, new experts, new solutions. Federal policy lurches from one perceived crisis to the next, and consensus is narrow and ever-shifting. Resistance to government in principle is matched by pressures from specific interests for more government intervention, for limited purposes or specific populations. Each case is ‘special.’ This process tends to lead, overall, to an increased federal role by a process of accretion” (Stevens III, 302).

Feminism, problem of appeals to “nature” re women’s health:

“. . . ‘the natural’ still exerts substantial power over us, perhaps in part because of its continued usefulness in environmental issues. Yet we must free ourselves from it as it also inappropriately naturalizes our way of life. For instance, the contemporary American middle-class environment requires little physical exertion and tempts us with ever-bigger portions of food. So, in the absence of conscious control, our bodies become quite different from those of our Stone Age ancestors or Cretan peasants . . . the need to reject appeals to nature as guides to action has long been clear in bioethics: defining states of affairs as ‘natural’ or ‘normal’ implies nothing about how to deal with them. When we learn that African-American women in the United States die more often in childbirth than white women, and that horrifying numbers of Third World women are dying as we speak, nobody concludes that preventive action would be morally intrusive. Yet we tend to be bewitched by the claim that menstruation or pregnancy are natural processes and thus inappropriately dealt with in the medical realm” (Purdy, 254).

Fetal electronic monitoring vs. auscultation of fetal heart with fetoscope, technologies of:

“Auscultation necessitates an embodied human/technology relation in that the fetoscope directly extends the sense of hearing. . . . electronic fetal monitoring entails a largely hermeneutic human/technology relation in that clinicians do not directly sense the fetus but, rather, interpret its condition from visual traces that represent the fetus. . . . machine monitoring does not require bodily contact for appraisal [of] the fetus” (Sandelowski, 170-71).

Finney:

“One must have gone through the experience of having a surgical operation performed upon oneself before getting a really adequate idea of what it means. Much depends on the end of the knife at which one happens to be during the operation, the handle or the blade” (Finney, 341).

Flexner (Abraham) on role of Johns Hopkins:

“In the course of the conversation it finally came out that from his point of view Johns Hopkins was the only school in America that taught medicine at the beginning of the twentieth century, and that Hopkins deserved all the credit for the modernization of medicine in the United States” (Aub & Hapgood, 145-46).

Flexner Report and black medical colleges:

“Neither Flexner nor the white philanthropists who followed his advice believed in making medical schools for blacks the equal of medical schools for whites. While white schools should focus on scientific medicine and research, Howard and Meharry should focus on sanitation and hygiene” (Ward, 26, cf. 33). . . . A greater appreciation of the needs of black communities for professional medical care . . . might have led to a more sympathetic attitude toward some of the other black medical colleges, especially Flint and Leonard [in Raleigh, NC], schools that proved they could produce excellent physicians and were strategically located in regions of needy black populations” (30).

Forceps, which transformed power relationship between midwives and obstetric surgeons in England in 1730s:

“As long as the male practitioner deployed the crotchet or some other craniotomy device, the realm of live deliveries belonged to the midwife; this entailed an inviolable boundary between male and female competencies. As a result, medical men regarded the midwife’s office with a certain implicit respect – whatever the weaknesses of real-life midwives, or some of those midwives. But once the male practitioner could deliver a living child, the boundary was broken; his critical attitude hardened and deepened, and his ambitions were transformed” (A. Wilson, 99). Yet, impact of forceps was limited since forceps practice was limited to emergency work; it did not signal the displacement of the midwife but a “new equilibrium between midwives and male practitioners” (101).

French empiricism, Louis’s numerical method and:

“Louis’s construction of a numerical therapeutic rationale caused a great stir in France and prompted a series of academic debates about its merits during the 1830s. . . Louis’s approach was grounded in early nineteenth-century French hospital medicine. . . . Quantification was believed to override individual clinical judgments, creating the danger that therapeutic decisions would be made by the application of predetermined formulas arrived at through statistical calculations. Such a change in the decision-making process of the medical profession was perceived as a great threat to the physicians’ identity and social standing. However, the appeal of a statistically informed therapy remained somewhat restricted to a small elite of academic Parisian physicians primarily active in institutional environments where more universalizing tendencies flourished naturally. It made less sense in private medical practice contexts where therapeutic specificity was still essential in supporting professional identity” (Risse, 59-60).

French empiricism (Louis) and antebellum American medical reform:

“Just how were the epistemological ideals attached to the Paris School supposed to uplift the regular medical profession in the United States? . . . First, by escaping the reign of rationalistic systems, the regular profession would shed those features of its knowledge and practice that had come most under attack, depriving antiorthodox critics of their targets. Second, by cultivating empirical observation of nature, regular physicians would improve the character of their ideas and practices, with the expectation that such betterment would win public favor. And, third, having undermined the attack on orthodox medicine, regular reformers would go on to use the ideals of empiricism and antirationalism to discredit their irregular competitors, thereby affirming the superiority of regular medicine in ways consonant with American values. . . . The target of critics’ most violent denunciations was orthodox heroic therapy, aggressive depletion by bloodletting and mineral purgatives such as calomel and tartar emetic. . . . Regular physicians took up an empiricist faith akin to that of their assailants and used it to launch a counterattack against them [Thomsonism, Eclecticism, Homeopathy]. . . . The crusade against the spirit of system thus became a crusade against alternative healers and outright quacks as well. The ideal of empiricism gave regular physicians a powerful weapon for assailing their rivals without offending the epistemological sensibilities of the American public” (Harley, 231-32, 234, 237).

French empiricism versus derogatory “empiric”:

American disciples of the Paris School celebrated the newness of French empiricism “as a learned, cultured empiricism, something to be scrupulously distinguished from the empiricism of the host of upstart medical ‘empirics’ who claimed that experience was on their side. Indeed, the long-standing identification of the ‘empiric’ with the charlatan and of empiricism with quackery placed regular physicians in a troubling semantic quandary” (Harley, 245-46).

French resistance to organic explanation of tic behaviors after WWII:

“The fact that the Nazis and their Vichy allies had attempted to eradicate psychoanalytic theory in favor of a reconstituted Nazi psychology informed by eugenic explanations would influence (in a negative way) the French psychiatric community’s attitude toward most organic explanations of tic behaviors for the next half century. To this day psychoanalysis remains strong in France because it portrays itself as a bulwark of defense against detested racist medical practices. Criticisms of psychoanalysis are often labeled as protofascist and anti-Semitic” (Kushner, 150). “By the 1990s, leading French psychoanalysts would take the offensive, decrying organic explanations of Gilles de la Tourette’s syndrome as a construction, almost a conspiracy, of the North American medical establishment” (162).

G

Gate Control Theory of Pain:

Invented by Canadian-born psychologist Ronald Melzack and English physiologist Patrick Wall in the early 1960s, it borrowed the motifs of feedback, information processing, and cybernetics from computer science: “From the mid-1960s to the early 1970s, a new concept, gate control theory, arose to support the liberalization of the new field [of pain medicine]. . . . the uptake of gate control owed less to a ‘cultural spirit’ whisking ideas along and much more to the fact that the theory resonated on multiple levels with the era’s legal battles, cultural critiques, pain relief practices, and liberalizing political commitments.” But “the theory offered no specificity about actual mechanisms operating these gates in any one person,” and it encouraged unconventional approaches to pain therapy (Wailoo II).

Gene Therapy of 1990s, entrepreneurism and:

“ . . . the rise of gene therapy reflected a new kind of entrepreneurism. Over time the gene therapy phenomenon reflected the growing influence of financial speculation and venture capital in clinical research (Wailoo III, 95). . . . Even as they organized their experiments, researchers were also forming startup companies, raising funds for these innovative therapies and seeking a market for them. The new gene doctors, then, blurred the distinctions between scientist and salesman, surgical pioneer and drug innovator, playing a role not just in developing new drugs and techniques but in testing them and in promoting them as well (96). . . . “researchers [were] fully conscious of the financial import of their findings” (99) . . . At the heart of the problem was a potentially dangerous vehicle (the adenovirus) and an increasingly problematic relationship among clinician-researchers, CF patients and their families, entrepreneurial venture capital, and the biotechnology market” (101). . . . “The idea of gene therapy for cystic fibrosis filled an emotional need, but in the end it was a marketing myth, revealing more about the business culture and mainstream ideologies of the 1990s than about the actual potential of genetic technology” (110). . . . Throughout the 1990s it became increasingly clear that genetic medicine was not merely a benevolent enterprise dedicated to curing the sick people of the world but also a growing financial enterprise operating according to the edicts of the marketplace. . . . Clinical experiments were means toward a financial end; one such end was attracting investors with a steady stream of good news, bringing a promising product ever-closer to the market. In this sense the promise of genetic medicine took many of its meanings from the broader business culture of the 1990s and from the strategizing of a new type of researcher/entrepreneur, which was further blurring the lines between innovation for profit and patient care” (168, 169).

Genes, blood type, infectious disease and:

People with the type O blood gene on chromosome 9 are much more susceptible to cholera (people with the AB genotype are the most resistant), but seem to be slightly more resistant to malaria and slightly less likely to get various cancers. “This enhanced survival was probably enough to keep the O version of the gene from disappearing, despite its association with susceptibility to cholera” (Ridley, 141). Finally, Native Americans predominantly had the O version of the gene, which made them maximally susceptible to cholera; but cholera was rare in Africa, and this same gene rendered Africans less susceptible to syphilis (144). In the 1940s, Anthony Allison discovered that people in Kenya with the sickle-cell mutation (one copy of the gene, not two) were largely resistant to malaria (141).

Genes and proteins, relation of:

“We now know that the main purpose of genes is to store the recipe for making proteins. It is proteins that do almost every chemical, structural and regulatory thing that is done in the body: . . . Every single protein in the body is made from a gene by a translation of the genetic code. The same is not quite true in reverse: there are genes that are never translated into protein, such as the ribosomal-RNA gene of chromosome 1, but even that is involved in making other proteins. Garrod’s [1902] conjecture is basically correct: what we inherit from our parents is a gigantic list of recipes for making proteins and for making protein-making machines – and little more” (Ridley, 40).

Genetic counseling/personalized medicine:

“What can be accomplished today is educating trainees on the tenets of genetic counseling, the challenges of interpreting complex genetic data, and more effective ways of communicating data of unknown significance. . . . As medical schools develop curricula for personalized medicine, the key objectives for learning are not the technology but developing the communication skills needed to discuss complex genomic test results.” Re curricula addressing personalized medicine: “Pharmacogenomics has been the major focus area to date; 84% of medical schools in the United Kingdom and 74% of U.S. and Canadian medical schools include pharmacogenomics in their curricula” (Cornetta & Brown, 3).

Gillies, Harold Delf:

“He proved to be a dynamic teacher impressing by paradox, invective, cajolery, and teasing raillery” (Brain, 159).

Golgi, failure to “see” beyond his reticulum theory of neural transmission:

“When he began to investigate fine structure with silver staining, he was not alone in these views. But as more and more data arrived from a variety of laboratories in different disciplines, belief in the reticulum theory began to weaken. Golgi’s own discovery of axonal branching had so validated the reticulum for him – a position solidified by his misunderstanding the receiving function of dendrites – that he began to discount the findings of others. . . . He was intellectually prepared to find a network because he so fervently wished to believe in the holistic nature of cortical function – an idea already firmly denied by Virchow . . .” (Rapport, 115-116). . . . By the time he began to produce his own original research with the silver stain [1873-74], a commitment to the network theory obstructed Golgi’s perspective on how the brain was organized. Sidetracked by confusion over the function of dendrites and the meaning of terminal axonal branching, he was never able to broaden his view (118).

Golgi, reticular view of nervous system of:

“Regardless of what he called it, it is clear that Golgi saw the neuron as composed of a single axon exiting a cell body laden with dendrites. For him, though, dendrites did not wait like tentacles of a jellyfish for stimulation but somehow nurtured the neuronal cell body. Just as clearly, he believed that the fibrils of the cylindraxis he found on axonal branches were all part of a reticular network that connected the nervous system. These tiny fibrils were, Golgi thought, lacework that hooked up every cell to every other” (Rapport, 88). “[Golgi’s] most extraordinary contribution would be the discovery that neurons could be stained by silver nitrate [the black reaction], an insight that very soon paved the way for even greater breakthroughs in neuroscience” (93).

Gonorrhea, prevalence of in early 20th century in US:

“At the beginning of the twentieth century gonorrhea was probably the most frequent disease treated by practicing physicians, and estimates of the number of men who had had gonorrhea at least once varied from 48 to 99 percent. In World War I, among a million men drafted in this country, 4.5 percent had gonorrhea when they were examined” (Dowling 1, 104).

Gout, unchanging treatments for:

“ . . . practically all the beliefs and therapeutic methods prior to our own time . . . had their origin in Greek times and were passed down basically unchanged until the opening of the nineteenth century. With the exception of emetics all these methods appear to have had the sanction of the ‘High Priest of Gout’ Sir Alfred Garrod, who died as recently as 1907. It was with the discovery of the uricosuric substances [which increase excretion of uric acid] that the modern conception of long-term treatment started” (Copeman, ch. 1 “Summary”).

Gymnastics, for U.S. women, 1830-1870:

“ . . . the most important distinctions [re men’s gymnastics] pertaining to the ways feminized gymnastics discourses sought to shape female bodies stemmed from the fact that those texts contextualized the regimens they lauded in relation both to female invalidism and to the resulting necessity for . . . feminine rectitude with respect to health and home. . . . Those activities, they purported, not only ameliorated curvature of the spine but also enlarged the chest and drew back rounded shoulders. . . . all were consistent with a straight-backed and broad-chested feminine form (Chisholm, 744). . . . Overall, therefore, many nineteenth-century gymnastics regimens were represented as techniques for straightening women’s spines, for enlarging their chests, for prompting increased respiration, as well as for invigorating women’s physiological systems in their entirety. . . Nineteenth-century discourses commending gymnastics for U.S. women coordinated postural rectitude and moral rectitude with distinctly utilitarian goals (746). . . . On the other hand, this literature often reflected the prevalent opinion that housework was restorative, while postulating both that it was a coterminous (if not the ultimate) means for maintaining female health and that it was the standard for female fitness. Gymnastics and housework therefore existed in mutual relation: gymnastics for women promised to fortify and to perfect their bodies while making them healthy and fit enough to undertake their household duties; and housework would complete the process by mobilizing and augmenting the healthful effects of gymnastics (748). . . . girls who faithfully participated in gymnastics would attain, embody, and enact postural, moral, and procedural rectitude (749). . . . these discourses positioned gymnastics as switches that closed the circuits that ran between female education and domesticity” (750).

H

Halsted, relationship to residents:

“If one was insecure enough to require reassurance and compliments, one was certainly to be sorely disappointed. By the same token, abject disapproval would be reserved for those in mortal disfavor and would be delivered unemotionally, in a quiet and withering tone, with an expressionless face and unyielding ice blue eyes” (Imber, 201-202).

Halsted, surgical procedures at Hopkins:

“Halsted devised an ether cone, which would allow a greater admixture of air for the patient to breathe. Blood loss was sharply reduced, aseptic technique ruled, anesthetic dose was reduced, and in abdominal operations sterile gauze soaked in sterile saline solution was used to protect the intestines from trauma and drying. Patients were kept warm after surgery, the foot of the bed was raised slightly, and replacement fluids were given by rectum to prevent postoperative shock” (Imber 203-04).

Hansen’s Disease (leprosy), medieval analogue of contemporary understanding:

“Modern clinical medicine has established two basic forms of leprosy – the severe lepromatous form and the milder tuberculoid form – a system resembling the distinction made by Paul of Aegina,” a 7th-century follower of Galen (T. Miller & Nesbitt, 65-66).

Harken, Dwight, use of index finger in early mitral valvuloplasty:

Harken cut off the index finger of his glove “in order to expose his specially prepared fingernail. The nail was allowed to grow long and it was then trimmed in an offset fashion similar to the blade of a carpenter’s knife. Furthermore, tiny notches were cut into the free edge, converting the fingernail to a miniature saw. His was a ‘Swiss Army fingernail,’ which could be used to part the commissure by direct pressure or by cutting” (Collins, 210).

Harvey, William:

How 17th c. physicians integrated his discovery of circulation with Galen: “You’d think phlebotomy, for example, would have gone out of the window along with Galen, but physicians simply crammed Harvey’s theory into existing models. They convinced themselves that because blood constantly circulates, it was of even greater importance than they first thought. So, they assigned to it many of the properties they’d previously ascribed to other humours and bled their patients still more” (Craddock, 58-59).

Heart disease, new paradigm of in late 40s:

“By the late 1940s, a different paradigm of heart disease was emerging, one in which the acute emergency of a heart attack had an extended biological history. Textbook authors increasingly emphasized the central role of long-term arteriosclerosis in the development of heart disease. According to Paul Dudley White, coronary heart disease was virtually synonymous with the effects of diseased coronary arteries . . . The heart remained a pump, but one that hopefully might be spared through prevention rather than salvaged by treatment. . . . The principal hopes for prophylaxis lay in altering the high levels of cholesterol found in the blood of heart attack victims” (Marks, 169).

Heart valves, replacement of by 1980s:

“By the 1980s surgeons had two excellent alternatives for patients suffering from serious valvular disease: the mechanical valve or the bioprosthesis. But the two are not interchangeable. Although mechanical valves can last a lifetime, they introduce foreign materials into the body, increasing the risk of sudden blood clots. To minimize this danger, patients must take anti-coagulant drugs for life. Tissue valves, on the other hand, are unlikely to cause clotting problems but have a maximum lifespan of around twenty years. In practice this means that young patients are generally given a mechanical valve, while the elderly are more likely to receive a bioprosthesis. . . . but today’s clinicians do everything in their power to avoid using devices they spent decades perfecting” owing to the evolution of a “surgical philosophy” deriving from Carpentier’s insight of 1967 that reconstruction of a diseased valve is preferable to its replacement (Morris, loc 2568ff.).

Hemostasis, advances in treatment:

1872: Spencer Wells used small arterial clips to close off blood flow during surgical procedures = intro of bloodless surgery; 1873: Esmarch invented the Esmarch elastic bandage, which served as a battle tourniquet; 1879: Halsted invented the modern hemostat (Gabriel, 136).

Hippocratic Corpus on diet and art of medicine:

“But the reason why the art of medicine became necessary was because sick men did not get well on the same regimen as the healthy. . . . What fairer or more fitting name can be given to such research and discovery than that of medicine, which was founded for the health, preservation and nourishment of man and to rid him of that diet which caused pain, sickness and death. . . . I do not believe anyone would ever have looked for such a science if the same regimen were equally good for the sick and healthy” (On Ancient Medicine, #3 & 5 [Lloyd]).

Hippocratic Corpus on the “Sacred Illness” (epilepsy) and treatment as commerce:

It is “in no way more divine or sacred than other illnesses, but it has its own nature and, just as the other illnesses have a condition from which they arise, this illness also has a natural cause.” People impute sacredness to it simply because “it is nothing like other illnesses. . . . But if it is wondrous, on this count there will be many sacred illnesses and not one” (Upson-Saia, 195). Those who first imputed sacredness to this illness were “people of the sort who are now magicians, purifiers, charlatans, and quacks, who pretended to be deeply god-fearing and to have superior knowledge.” In point of fact, they were, and are, “impious and believe that the gods neither exist nor have any power” (197); “people in need of a livelihood contrive and fabricate strategies of all sorts for this illness and other things, attributing to a god the cause of each type of condition” (197).

Homeopathy, as continuation of classical approach to therapeutics:

“it presumed to follow nature, the natural world not constructed on the basis of perennial tensions between opposites locked in constant battle, but one in which similars could interfere, inhibit, or even cancel each other out. Moreover, homeopathy also viewed the human organism as indivisible, reacting as a whole through the total sum of clinical symptoms and signs. Likewise, health remained a dynamic balance ever to be maintained or restored. Patient uniqueness was an expression of individual constitution and susceptibility. . . . Homeopathic patients were treated as individuals, not just merely viewed as examples of known diseases. Their complaints were to be carefully elicited, and special treatment plans were provided on the basis of unique symptomatic combinations” (Risse, 61).

Hormone, in relation to “internal secretions”:

“Following the discovery of secretin [1902], the potent chemical secreted by the intestinal mucosa that triggers the release of pancreatic juice, physiologists quickly became fascinated by the idea that chemicals as well as nerves can provoke physiological events. The new term ‘hormone’ was introduced to specify chemicals serving this function. ‘Hormone’ implied a chemical derived from animal tissues, which had specific physiological effects; the term ‘internal secretion,’ on the other hand, suggested an entity whose absence resulted in disease, a hypothetical, rather than a demonstrated entity. Hormones could be isolated in the laboratory and studied by recognized physiological methods. The existence of internal secretions, although implied by clinical observations, could not be proved with the same rigor” (Borell IV, 4-5).

Hormone therapy, in psychiatry of the 1930s:

“In the 1930s . . . psychiatrists began enthusiastically trying out hormone treatment for women patients with mental illness. Psychiatrists believed that it was obvious that since women experienced mental illness and appeared to have stresses around their life transitions . . . that hormone disruptions were the cause and that administration of hormones was the treatment of choice. . . . psychiatrists were particularly enthusiastic about treatment of a specific disorder: involutional melancholia . . . In the psychiatric model of illness around the climacteric, mental illness was due to a decline in hormones, esp. estrogen, and the treatment was hormone replacement. The diagnosis of involutional melancholia died in the late 1970s and early 1980s with the rise of the diagnosis of major depressive disorder. . . . In 1940, NY psychiatrist Herbert Ripley and endocrinologist Ephraim Short collaborated with pathologist George Papanicolaou on the treatment of involutional melancholia with estrogen” (Hirshbein III, 164-65).

Hospital reorganization after WWI, marginalization of women physicians in:

“The success of the American base hospitals left an indelible mark on the institutional values and organization of postwar American hospitals. At least until after WWII, surgeons emerged as the authoritative figures in hospitals. . . . anyone who had not been in on ‘the action’ during the war was far less likely to prevail during postwar planning for hospital reorganization. Since women were excluded from the Army Medical Reserves, and few had been involved with the overseas military hospitals, they largely stood on the sidelines. The ACS’s preferred mechanism for hospital reform, internal reorganization of the medical staff, often placed power in the hands of those physicians and surgeons most unsympathetic to the older, patient-centered values associated with general practitioners, including the majority of women physicians. The gender-based professional niche – in which women physicians were explicitly assigned to the treatment of women patients – disappeared from hospital organization charts” (More, 118, 119).

House Call, Francis Peabody on:

“When the general practitioner goes into the home of a patient, he may know the whole background of the family life from past experience; but even when he comes as a stranger, he has every opportunity to find out what manner of man his patient is, and what kind of circumstances make his life. He gets a hint of financial anxiety or of domestic incompatibility; he may find himself confronted by a querulous, exacting, self-centered patient, or by a gentle invalid overawed by a dominating family; and as he appreciates how these circumstances are reacting on the patient he dispenses sympathy, encouragement or discipline. What is spoken of as a ‘clinical picture’ is not just a photograph of a man sick in bed; it is an impressionistic painting of the patient surrounded by his home, his work, his relations, his friends, his joys, sorrows, hopes and fears.”

How did debates end, and how did this one end?

“Broca rephrased the original question to suit the needs of the microscopists, echoing a point Velpeau had made earlier. The question asked was: Is the microscope useful for diagnosing cancer? The consensus among the surgeons participating in the debate was ‘no.’ But if the question was restated as ‘is the microscope useful for pathology?’ then the answer was ‘yes.’ For Broca, the utility of the instrument was confirmed . . .” (La Berge II, 448).

Humanities education in 1980s, versus George Canby Robinson’s psychosomatic program at Hopkins in the mid-30s:

“Eighties education in the humanities was not, however, a major intellectual complement to training in scientific medicine. It was promoted primarily as an addendum in ‘values clarification’ and ‘attitude shaping.’ The contrast with Robinson’s project was striking. Humanities education in the ‘personhood’ of patients was presented largely as an afterthought to education about the human biological organism, whereas Robinson’s central aim had been to help students understand patients psychobiologically as ‘total individuals’ so that they could incorporate social and psychological information in proper medical management” (T. Brown III, 153).

Humoral model, pagan nature of, responses to:

Paracelsus replaced the humoral model with a “truly Christian medical framework” that took off in the 1550s = chemical medicine (Chemical Physick), in which the cosmos was composed of three substances: mercury (transformative), sulphur (binding), and salt (stabilizing). In this model, ill health resulted from dysfunctional chemical processes in the body. A more radical departure from the humoral model was the idea that diseases were specific entities that would afflict everyone the same way. Among the most influential proponents of the Paracelsian chemical model was Flemish physician Jean Baptist van Helmont. “. . . the centrality of Christian doctrine to this new body of medical thought meant that some people saw chemical practitioners as more accessible and more charitable than their Galenic counterparts. Chemical physicians were also some of the first to criticize the practice of bloodletting” (J. Evans & Read, loc 235-252).

Humoral theory of immunity, consequences of victory of in 1890s:

“But the failure of the cellularist doctrine to gain adherents in the scientific community meant also that many approachable problems in cellular immunology were neglected as being ‘uninteresting’ in the humoralist context of the times. . . . one might reasonably have expected slow but substantial progress in cellular immunology over the next 40 to 50 years. Thus, instead of endless searches for circulating antibody associated with tuberculosis and tuberculin reaction, and contact dermatitis, histopathological studies and their resultant conclusion might have been obtained many decades earlier rather than awaiting the important descriptions of Gell and Hinde, Turk, and Waksman in the 1950s. Such studies might have pointed up much earlier the importance of the lymphocyte in immunological phenomena” (Silverstein, 55).

Humoral theory of immunity, victory of:

“The most telling blow to the cellular theory of immunity came in 1890 with the discovery by von Behring and Kitasato that immunity to diphtheria and tetanus is due to antibodies against the exotoxins of these bacteria. When, shortly thereafter, it was demonstrated that passive transfer of immune serum would protect the naïve recipient from diphtheria, with no obvious intercession by any cellular elements, the humoralists felt that they had been vindicated, and Koch felt free to proclaim the demise of the phagocytic theory at a congress in 1891. The discovery of antibodies against these exotoxins, and even against toxins of nonbacterial origin such as ricin and abrin, supported the earlier view that most infectious diseases were toxic in nature; thus, it could be claimed that protection was due in large part to humoral antitoxic antibodies” (Silverstein, 49-50).

Hypothermia, as adjunct to heart-lung machine from 50s on:

“The DeWall oxygenator’s output was lower than that of the heart, so patients were slowly being starved of oxygen while they were connected to the device, and surgeons had only a little window of time in which to operate. Hypothermia offered a way of prolonging it. . . . Another new use for hypothermia was discovered by Norman Shumway . . . In 1959 he showed that cooling the heart locally with saline at 4°C made it safe to operate for as long as an hour. Topical or general hypothermia – or a combination of the two – eventually gained general acceptance and remain common techniques today” (Morris, loc 2016ff.).

Hysteria, deconstruction of into “its constituent symptomatological parts”:

“With increases in general medical knowledge and advances in diagnostic techniques, many cases of hysteria were now believed to involve physical diseases, such as epilepsy, syphilis, multiple sclerosis, and cranial injury. At the same time that the diagnosis was losing ground to ascertainably organic ailments, it was being redefined by new and more nuanced psychiatric classifications. Most important in this regard were the psychoses described by Kahlbaum, Hecker, Kraepelin, and Bleuler and a series of theories about the psychoneuroses, such as Janet’s psychasthenia, Babinski’s pithiatism, and Freud’s anxiety neurosis. The large majority of these changes took place during 1895-1910 . . . In large measure, it is the process of the atomization of the diagnosis of a century ago and its reconstitution in many new places and under a multitude of different names that has created the historical illusion of a disappearance of the disorder itself” (Micale II, 525-26).

Hysteria, in relation to syphilis:

“. . . during the final third of the nineteenth century patients with advanced syphilis were intermixed with general institutionalized psychiatric populations, which also included many cases diagnosed as severely hysterical. Equally relevant are the clinical similarities between the two maladies. In the nineteenth-century medical literature on hysteria, acute paralytic disturbances are among the most common symptoms. The onset of general paresis, like hysteria, may be characterized by convulsive seizures, double vision, loss of pain sensation in scattered areas of the body, and sensory ataxias, as well as exaggerated emotional behaviors. The situation was further complicated by the fact that hysterical symptoms, especially monoplegias, hemiplegias, and paraplegias, often appear in conjunction with syphilis, especially at the outset of the secondary stage of infection. . . . it appears likely that a not-insignificant number of individuals included a century ago in the French medical literature as hysterical were in fact afflicted with ‘the great imitator’ in its advanced stages” (Micale II, 509).

I

Imminent transformation, promise of in genetically transmitted diseases:

“In the history of disease – and, as we have seen, in the histories of TSD, CF, and SCD – this vision of standing ‘on the threshold’ has emerged repeatedly, nurtured by innovators and entrepreneurs and intertwined with the hopes, ideals, and fears of different patient groups and medical communities in ways that are uplifting but also deeply troubling. The dream of imminent transformation is an integral part of the ideology of the modern medical sciences, of which genetic medicine is a part. . . . The recent histories of Tay-Sachs disease, cystic fibrosis, and sickle cell disease unearth often hidden dimensions of the promise of breakthrough medicine: the conflict between the cultural allure of the idea of imminent transformation and the realities of what can be delivered to patients; the unintended consequences of experimentation and innovation; and the ways in which scientists, business interests, and social commentators capitalize on this cultural motif in order to draw attention to their respective enterprises, even when their ventures are speculative in the extreme” (Wailoo III, 166, 167).

Informed consent, bureaucratic rationale of:

“Informed consent has provided a wonderfully portable and efficient means for approving the acceptance of medical care in a highly segmented and specialized medical system. The power to give informed consent today moves along with patients through medical institutions. It is portable. Specialized physicians who meet patients along their paths through the sites of care can make agreements about procedures quickly and serially. . . . Approval for care in mid-century medicine, although unburdened by formal procedures, may actually have been more cautious. Care had ideally to be coordinated through existing medical relationships with a single physician. Such arrangements posed a challenge to elaborate networks of specialized practitioners working in segmented institutional niches. . . . Informed consent narrowed decision-making powers efficiently to the individual patient. Any physician could be involved in a given instance of consent. In addition, the patient’s family could often be efficiently dropped from the group needed to approve care. The older paternalistic model, in contrast, required coordination with the patient’s family” (Crenner II, 228-229, cf. 248).

Insanity defense, 19th century, and delegitimizing of women:

“ . . . in ascribing their crimes to individual mental illness the legal system, the medical profession, and the press colluded in a process of delegitimizing both the defendant and her crime. Acquitting a woman on grounds of insanity may have saved her . . . but it also stripped her crime of meaning (Ainsley, 47). . . . The insanity acquittal turned the women concerned from agents who made decisions about their situations and then acted upon those decisions . . . into helpless victims of their emotions and bodies [puerperal mania, etc.], and transformed threatening and dangerous women into patients who could be confined, subdued and cured of their illness . . .” (48).

Instruments, ambivalence toward in nineteenth and early twentieth centuries:

“At the same time that diagnostic instruments promised to place diagnosis on a sounder foundation, they threatened the professional ethos that valued artistry accrued over a lifetime. The physiologists who designed these instruments promoted objective over subjective data, quantitative over qualitative information, and precision over vagueness. In so doing, they stripped diagnosis of much of its mystique and suggested a new set of priorities for medical thinking. Ideally, the instruments would serve as an objective arbiter of the physician’s diagnostic skills and a predictable, standardized reference for the student learning diagnosis” (J. J. H. Evans, 787). . . . Calling hypertension a ‘sphygmomanometric disease’ alerted physicians to a perceived danger that the instrument could falsify reality and mislead doctors by arbitrarily making disease out of a healthy process. In other words, doctors questioned the validity of obtaining a numerical value for blood pressure and using that number to determine normality and abnormality. Instruments could distort diagnosis. The finger, they argued, was as accurate a gauge of blood pressure as was medically necessary” (797).

Instruments, surgical, symbolic dimension of:

“Objects, in general, contain symbolic messages in addition to allowing for certain functions. This is true for surgical instruments, which can be designed to invoke associations with modern technology and science or to symbolize solidity and prestige. Instrument designs can also embody group identity, for example, when surgeons adopted ‘distinctive, all-metal (usually steel) instrumentation . . . while physicians continued to use tools with wooden, brass or ivory parts’ – a step that ‘emphasized that surgery was different, and independent, from medical specialties,’ as Ghislaine Lawrence remarks. The naming of instruments after their surgical inventor is another element of this symbolic dimension of surgical objects” (Schlich II, 246).

Insulin, as flawed miracle:

“Call insulin, then, a precious but flawed miracle. Herein lies a central paradox of assessing remarkable health care interventions: in proportion to which they offer incredible benefits in certain regards, medical technologies often make life more onerous or complicated in other regards. We typically purchase enhanced control in one realm at the expense of diminished control in another. Examples of this phenomenon of “problem exchange” abound in technologically sophisticated hospitals, including the children’s hospital in which I work.” [Examples]

Involutional Melancholia:

“Not only was involutional melancholia more common in women because they went through menopause, but menopause appeared itself to be a cause of illness (Hirshbein IV, 722). . . . In 1922, Edward Strecker and Baldwin Keyes, both of the Pennsylvania Hospital Department for Mental and Nervous Diseases, argued that the difference between menopause and involutional psychosis was only in degree rather than in kind. . . . Since menopause appeared to precipitate mental problems, psychiatrists concluded that treatment with hormones would reverse the disease process. In 1932, for example, Karl Bowman and Lauretta Bender tested the hypothesis that involutional melancholia would improve with ovarian hormone . . .” (724). . . . In 1937, two different groups, one at the Boston State Hospital and one at a hospital in St. Louis, experimented with an injectable estrogen preparation marketed under the trade name Theelin” (725). . . . In the 1940s and ’50s, shock therapies replaced hormone therapy as treatment of choice for involutional melancholia (731-35): “Here again, the diagnosis of involutional melancholia made sense to describe a coherent patient population, one that responded to the specific treatment of electroshock” (734-35). . . . American psychiatrists by this time period [midcentury] generally assumed that old age could (and usually did) result in a serious mental illness that might be explained through psychoanalytic investigation and/or treated with shock therapy” (735). FADING OF THE DIAGNOSIS: In the ’50s and ’60s, “psychiatrists used a broad concept of depression as a way of describing a condition for which medication therapies were effective. All patients who looked depressed and who responded to medications were lumped together, including patients who had previously carried a diagnosis of involutional melancholia (737). . . . Advocates for depression research emphasized that depressive symptoms had become less common in older patients and were becoming more of a concern for younger patients” (737). . . . “With the older classification systems – within which involutional melancholia had played a significant role – psychiatrists grouped patients by their different prognoses and compared results among institutions. But with the 1960s’ focus on medication effects, the older divisions made less sense. Instead, researchers and practitioners focused on symptoms. Since involutional melancholia was defined primarily by the stage of life of the patient, it did not match other categories. . . . patients who improved after medication were studied to determine which symptoms were most significant to describe depression. In the grouping of patients in medication trials, patients’ ages and menstrual statuses became less important than their responses to treatment” (738).

J

Jackson, re training of assistants:

Give them more credit than they deserve to render them “full of enthusiasm” and with enough ego “to think he has done it all himself” (155). “. . . nobody with me is ever allowed to develop an inferiority complex” (160). All this in the interest of training surgeons “ready to spread widely the gospel of safe bronchoscopy” (156). (Cf. da Costa on Keen as a teacher [da Costa, 324-25])

Jackson, re choice of specialty:

Why bronchoscopy? “Probably one factor was the call of the great beyond. That is to say, looking at the larynx thousands of times created a yearning to cross the border line into the mysterious, dimly visible, deeper air passages” (Jackson, 204).

Jacobi (Mary Putnam) on hysteria:

In her Essays on Hysteria, she suggested that “weakened nerve tissue (whether debilitated by constitutional weakness, illness, or overwork) might be overtaxed by relatively ordinary sensations, failing to store force or to take in nourishment. She therefore sketched out a narrative of the onset of hysteria, a narrative located in the ‘cortex of the hysterical brain’: storage power becomes deficient; centripetal (sensory) impressions are stored in sensory centers rather than the cortex; centrifugal activities (voluntary acts) decline; sensory centers fail to discharge stored materials and become hyperexcitable; sensory centers inhibit brain activity. Or, as a later physician would put it, hysterics suffer from reminiscences. . . . Putnam Jacobi’s understanding of hysteria placed the disease in the nervous system and the brain. This theory of hysteria suggested a therapy that was both physical and mental, that respected the hysteric’s consciousness and potential strength” (Wells, 180, 181).

Jaundice, use of saffron in early modern remedies:

Saffron (orange threads of the saffron crocus) and barberry featured in many recipes because of their yellow color, following from the belief that the physical appearance of a plant was a clue to its curative properties, i.e., like-follows-like. One remedy involved combining “sixteen long knotted earthworms (collected in the evening) sliced and seethed in ale mixed with cloves and saffron.” Other recipes involved powder of blackened egg shells given in white wine with saffron; or dried “fresh dung of a Goose” or “sheep’s dung mixed with the bark of a barberry tree infused in ale and white wine” (J. Evans & Read, 85-86).

Johnson & Johnson, founding of:

“Like the inventor of Listerine, Robert Wood Johnson first became aware of antisepsis when he attended Lister’s lecture at the International Medical Conference in Philadelphia. Inspired by what he had heard that day, Johnson joined forces with his two brothers James and Edward, and founded a company to manufacture the first sterile surgical dressings and sutures mass-produced according to Lister’s method. They named it Johnson & Johnson” (Fitzharris, 231).

K

Keen to Cushing:

nice expression of quandary of the book editor “when e.g. a chapter of 20 pp. (assigned) reaches him expanded to 70 & one of 80 pp. to 196!” (Fulton, 269). Cf. Minot, on his student’s papers: “Look here, you can do better than that! Take it home and read it to your wife, and you will see how to make it read smoother and easier. I am sure that you can cut out some of the words to shorten and sweeten the whole thing” (Rackemann, 249, 247).

Koch, Robert, major contributions from 1880:

“Koch’s innovations and contributions to the emerging science of bacteriology during the next few years were remarkable. In studying anthrax he had developed the hanging drop culture technique and taken the first photomicrographs of bacteria. He went on to invent the method of steam sterilization. He worked in partnership with Ernst Abbe at Carl Zeiss, the famous lens makers, and he was the first to use oil immersion lenses and the new substage optical condenser developed by Abbe. Koch showed that staphylococci and streptococci . . . were causes of wound infections. He developed solid culture media that for the first time allowed the culture of pure strains of bacteria” (Daniel, 78).

Kraepelin, dementia praecox and manic-depressive disorder re hysteria:

“Kraepelin, in formulating his ideas about dementia praecox and manic-depressive disorder, drew on both the mid-century medical writing about hysterical insanity (especially that of Griesinger) and the recent French and German literature on hysteria. . . . [in the classic chapter on d. p. in the 6th edition of his textbook (1899)] “we find that the clinical descriptions and pictorial representations are much alike. . . the hebephrenic and catatonic [but not paranoid] forms of the disorder have clear clinical parallels with hysteria” (Micale II, 513).

L

Laboratory, nineteenth century development of:

“Laboratory development during the nineteenth century can be thought of as a chain of links that began with the laboratory devoted to basic research; was followed by the clinical laboratory, which split its efforts between research and patient care; and ended with the ward laboratory, the workshop next to the patient, where the knowledge and methods perfected in the other laboratories were most practically applied. . . . The first [publicly financed] laboratories opened were mainly devoted to detecting diphtheria. . . . In 1893, New York City founded the first laboratory devoted to diphtheria diagnosis” (Reiser, 140-142).

Laboratory science, in British medicine:

Labs entered hospital medicine more slowly than public health medicine: “Elite doctors resisted the introduction of new laboratory technique into their practice for the same reason as they resisted specialization: because they feared that such techniques would tend to undermine the personal and individualized social relations around which they built their highly lucrative private practices (18) . . . administrative interests lay behind the introduction of laboratories into increasingly large areas of personal as well as public health practice from the beginning of the twentieth century. . . . In this guise, medical laboratories clearly served administrative interests, being used to discipline and standardize general practice with the aim of creating a routinized system of mass medicine” (20). These “interests” meant that in clinical teaching, concentration on idiosyncrasies of individual patients would be replaced “by more routinized and standardized forms of knowledge that were more in keeping with the administrative demands of state medicine” (21). Efforts to reform medical education in the years before WWI (i.e., creating full-time professorships and severing the connection between clinical teaching and elite private practice) “were intimately bound up with proposals for a closer integration of the laboratory sciences into hospital practice, and especially into the practical training of prospective doctors” (21). The “importation of laboratory science” benefited the reform campaign in “contingent ways”: (1) preclinical lab sciences already harbored a body of professional full-time academics; (2) much of the science conducted by academics was “at least in keeping with, and often informed by, administrative interests”; and (3) the lab sciences “provided a model of how the work of the hospitals might itself be reorganized in the interests of greater efficiency” (22; cf. 27 on “contingent outcome”). . . . “the overall effect of the academicization of clinical medicine and the expansion of clinical research, within the new organizational structures of the NHS, was to encourage the growth of more administratively inflected forms of medical knowledge and practice throughout the system. As a class, the new breed of consultants were less concerned than their predecessors to be seen as individual medical virtuosi, and more inclined to identify themselves with the most technically sophisticated and specialized forms of diagnosis and treatment” (25-26). In the case of British medicine, “laboratory science actually developed as an instrument of scientific management. This confluence of science and management lay at the heart of the transformation in British medicine. The world of medicine was remade, not because of science, but through a logic of efficiency that science could be mobilized to confirm” (29).

Larrey, and development of modern casualty handling system:

“Medical personnel staffed the aid stations close to the lines. After the ambulance corps brought casualties to these collection points, those wounded requiring further treatment were sent along predesignated routes to the general hospitals farther to the rear. Everything in the modern casualty evacuation systems was included in Larrey’s organization, and he may be genuinely credited with creating the first modern casualty handling system in the West” (Gabriel, 145).

Laryngology in the 1860s:

“The extent of the treatment by laryngologists of diseases of the throat in the ‘sixties – and indeed for the next quarter of a century – was limited to the opening of abscesses, the removal of tonsils, and the endo-laryngeal removal of polypi and other small tumors from the larynx. Caustic, astringent, or sedative solutions were applied to the larynx with a camel’s-hair brush, or syringed or sprayed into it, or astringent or sedative powders were puffed in by an insufflator. Functional or hysterical loss of voice was treated by applying the galvanic current to the vocal cords with a special instrument. The diagnosis between simple chronic laryngitis, syphilis, tuberculosis of the larynx, and malignant disease . . . was always difficult and sometimes impossible, even for an experienced laryngologist. . . . Since the local anesthetic properties of cocaine were discovered only in 1884 the manual dexterity of the early laryngologists and especially of Morell Mackenzie – whose technical skill was outstanding – had to be well-nigh miraculous” (Stevenson[2], 38, 39).

Leechcraft, popularity in England by 1830s:

“In 1836 John Pereira wrote, ‘The consumption of leeches must be enormous.’ He pointed out that, collectively, the four largest London dealers imported, on average, 600,000 leeches monthly or 7,200,000 a year. Most of these imports came from France, Germany, Silesia, and Poland” (Carter, 41).

Leeches, contemporary medical use of:

Leech’s “anesthetic and anticoagulant have yet to be bettered by science,” and it has been characterized by Roy Sawyer [founder of Biopharm] as a “living pharmacy” (George, 30). “Today leeches are frequently used in reattachment operations, skin grafts, and reconstructive plastic surgery, since they are so good at keeping the patient’s blood flowing in the damaged area, which helps the veins to knit together again. The active anticoagulant in leech saliva, a protein named hirudin, can be produced separately, and Biopharm [in Wales; George, 26-28] has isolated and resynthesized many other active and useful compounds from the medicinal leech and other species (including the terrifying giant Amazon leech, which is the length of your forearm and stabs its prey with a six-inch needle proboscis). . . . Perhaps even more surprisingly, leeches also relieve symptoms of osteoarthritis when applied to the knees, because of anti-inflammatory and other compounds in their saliva that are still not fully understood. The use of leeches has fewer side effects than the standard treatment of non-steroidal anti-inflammatory drugs, and they are more effective in relieving pain and stiffness than the best topically applied medication” (Poole; cf. George, ch. 2).

Leprosy, as Holy Disease in 4th-c. Byzantine Empire:

Christian writers stressed that “leprosy could be interpreted as a trial to perfect good people rather than as punishment meted out to sinners” (T. Miller & Nesbitt, 41). In his Oration XIV on behalf of the poor, Gregory of Nazianzos argued lepers were blessed by God rather than cursed because of sin: “Leprosy was preparing its victims for paradise, while good health and beauty might be leading those it favored to eternal damnation” (42). For Gregory, leprosy was the “Holy Disease” that marked those destined for heaven rather than those stained with sin (41, 104). His oration contributed to a basic change in popular response to leprosy, as “no Byzantine intellectual, ecclesiastic, or political figure thereafter ever suggested that Christians should ban the victims of leprosy from churches, cities, or public places” (43). “In both the Byzantine East and the Latin West, leprosy had this strange, ambivalent connotation: it might represent sin, or punishment, or even the devil, but it might also symbolize virtue and the divine” (106).

Leprosy, as transgressing boundaries of life and death:

“Leprosy was capricious. It seemed to resist explanation in terms of geographical and somatic boundaries, or else to practice some inexplicable boundary behavior of its own devising. In this way the new micro-bacterial world of the late nineteenth century was rediscovering in its own terms the age-old boundary between life and death.” In the mid-1880s, German leprologist Eduard Arning, investigating leprosy in Hawaii, claimed that the bacillus “seems to multiply in the bodies of dead lepers months after they had been buried. Modern science could thus be seen to augment the traditional idea of the leprous body as a prototypical corpse, and leprosy as a disease that operated across the divide between life and death” (Edmond, 81).

Leprosy, basic causes per scholastic classification of 14th-16th centuries:

Six “Non-naturals”: (1) ENVIRONMENT (bad air, exhaled by lepers or polluted by vapors; corrupt air emanating from corpses, etc.); (2) FOOD AND DRINK (varies among authors, but esp. no fish with milk in the same meal, no meat of hares or donkeys, no bad food that upsets the balance among humors, esp. black bile [Galen]); (3) REPLETION & EVACUATION/MENSTRUATION (need for balance between fullness and emptiness to retain humoral balance, esp. re black bile; includes need for timely evacuation of corrupt portions. Concern with harmful excess overshadowed concern with deficiency [Avicenna]; grave threat of a weakening of the “expulsive faculty,” which [caused] “retention of melancholic superfluities,” as in hemorrhoids; amenorrhea); (4) EXERTION AND REST: SEX: esp. in 13th and 14th c., less pervasive in medical writings; excessive sex depleted the body’s precious fluids and overheated the blood; intercourse with a leprous woman was singled out as a cause; (5) DIVISIONS AND DISTINCTIONS: “Leprosy in general was explained by the conjunction of a disproportionately cold and dry complexion with an excess of ‘unnatural’ black bile,” an excess due to “one of the four humors, which had become overheated and burned into ash.” The offending humor [determined] the Salernitans’ fourfold scheme of appearances that resembled the typical features of certain animals: when burned, yellow bile caused a leonine form; blood caused fox mange [hair loss owing to mite infestation]; black bile caused elephantia; and phlegm, tyria, named for a snake (Demaitre, loc 1091-93). Bernard de Gordon of Montpellier “extended the humoral typology from etiology to a register of distinctive features. . . . Each type was distinguished in definition, appearance, and course by the character of the responsible humor, which, in turn, was embodied in the eponymous animal” (loc 1095-1096).

Leprosy, semiotic ambiguity and classificatory confusion of observable signs in Middle Ages:

“Even language contributed obstacles to the development and teaching of a reliable semiology. . . . It was difficult to teach the recognition of a disease when supposedly different names for symptoms were as imprecise and fluid as, for example, ‘glands’ . . . which also referred to buboes, goiters, and the neck growths of scrofula.” A further linguistic problem was “the frequent disjunction between depictive impressionism and anatomical precision, which may be sampled in various descriptions of ocular symptoms” (e.g., a darkening of the eyes, tending to reddishness; a reddening of the eyes; or a disturbance of the white of the eyes associated with lividness or darkening). “Subsequent authors failed to integrate these traits, nor did they correlate them with the classical descriptions of Galen or Aretaeus” (Demaitre, loc 1329-1334). Learned practitioners “faced a triple challenge of distinguishing – in one decision – between congruent and disparate phenomena, between equivocal and unequivocal signs, and between potential and confirmed conditions. The distinction was all the more challenging because so many signs [e.g., 30 or more] needed to be weighed.” In examples from Marseilles and Ganges, the examiners “hung their decision on a ‘reading’ of the unequivocal and equivocal signs, without much indication about the content of these categories.” Plus, a sign could be unequivocal but insufficient to impute leprosy, e.g., disfigurement of the nose and mouth but not the “corruption” of the “entire form” (loc 1413-1418, 1423).

Lethal dosing, differentiation from physician-assisted suicide and legal acceptance of:

How could lethal dosing “foreseeably cause the death of the patient without implicating the physician who administered the lethal dose? The answer lies in . . . a transformation in the treatment of the dying from pain relief to pain management. . . . a manifestation of the expanding rule of technique [i.e., medical practice] in the treatment of the pain of dying patients” (Lavi, 136). Under the aegis of technology-driven pain management (e.g., the morphine drip), the “subjective intent of physicians is not central to the practice of lethal dosing any more than their subjective state of mind is relevant in performing an operation. . . . The administration of pain relief is based neither on the patient’s request for pain medication nor on the medical judgment that such medication is necessary at a given time. Rather, the timetable [viz., as to when the patient will die] is replaced by the continuous drip, which now sets the pace for human action. . . for the medical profession, lethal dosing is a duty grounded not in morality but in ‘technicality’” (140-141).

Listerism, development of invasive operations prior to:

“But Listerism and the possibility of surgical invasion of the body cavities was a construction after the fact. By the 1860s surgeons, notably the ovariotomists, had begun to enter the abdomen. By harnessing a technology of cleanliness – the isolation of the operating field and chemical purification – into an integrated system, they were able to perform internal surgery. John Erichsen in 1881 saw the origin of this not in Listerism but in obstetrics. . . . By the 1870s, non-Listerians, such as Lawson Tait, had a technology at their disposal which allowed them to remove gallbladders safely. The germ theory and the creation of Lister as its prophet sanctioned an enterprise that was well underway” (Lawrence II, 26).

Lobotomy, at McLean Hospital in 1940s, re selection of patients:

“The question of identifying which patients might benefit from psychosurgery thus was resolved through a set of filters applied in combination, each of which had been derived from a separate base of experience. From their basic medical training, the doctors at McLean learned how to sort the patients on the basis of current physiology and general classifications of disease. Their more specialized background in psychiatry yielded additional filters concerning the treatment of mental disorders. And the years on the wards at McLean provided an opportunity to observe patients who had undergone the latest innovations in treatment; at the same time, it led to an intimate acquaintance with the everyday affairs of persons who inhabited a very narrow section of the social strata. The decision to operate was formed in the intersection of these various filters, which, added together, demarcated similarities among those patients who did well with the procedure and those who did not. If Mrs. Smith could be sited within the appropriate clinical frame, then her physician proceeded with confidence. It did not matter whether the patient’s ‘emotional tone’ was due to a hormonal imbalance, the delusion that aliens were trying to abduct her, the real fear that her husband might sue for divorce, or a distaste for institutional life – or all of the above. Her case seemed similar enough to others who had gone before; Mrs. Smith had what a lobotomy was expected to cure” (Pressman, 311-312).

Lobotomy, at McLean Hospital in 1940s, re the multiple clinical frames within which treatment decisions were made:

“ . . . the clinical frame for psychosurgery was not defined in reference to a single idealized clinical type but was varied in accordance with the region of the hospital’s clinical ecology in which the patient happened to be located. Tellingly, within each of these different sectors the procedure was used for different functions. Experience had taught the McLean staff that, for the kinds of female patients they referred to as involutionals and manic-depressives, their mental disturbance was likely to clear up on its own and was unlikely to lead to very long-term institutionalization. As there was no immediate pressure, the use of the operation was thus limited to those patients for whom even longer periods of hospitalization were unlikely to yield further hope of discharge. In the case of male schizophrenics and female catatonic schizophrenics, however, an opposite tactic appears. Such patients were operated on after only a moderate waiting period had elapsed, in the hope of preventing them from slipping into the pool of chronic schizophrenics which, in the hospital staff’s experience, was considered certain doom. The most extensive selection was of female paranoid schizophrenics, who presented the worst clinical scenario . . . the strategy here was to operate early and often” (Pressman, 299).

Lymphangitis, when abscesses are not incised and drained in time:

“But then the bacteria defeat the inflammatory cells and pus forms in the inflammation . . . The increasing quantity of pus pushes the surrounding tissue away, and the body attempts to halt that process by forming connective or scar tissue. . . . But, because blood can no longer flow to the pus, the immune system cannot combat it. . . . If the swelling fluctuates [push down on one side and the other bulges out], the infection is matured and ready to be cut open. . . . If you do not drain the abscess in time, the bacteria will ultimately break through the abscess wall and be released into the surrounding tissue, causing cellulitis in the subcutaneous tissue, where minuscule lymph vessels criss-cross. . . . In these vessels, lymphangitis develops, an infection that follows the course of the lymph vessels, which travel to the lymph nodes” (Van de Laar, ch 14).

M

Mackenzie, James, on his reputed “second sight”:

“It is not second sight which I possess, but only sight. My eyes were opened because I had real need to use them” (Wilson, 230).

Mackenzie, James, as exponent of the “new cardiology”:

“For Mackenzie, what mattered about the heart clinically was what he called the reserve force of its muscle, physiologically speaking its capacity to maintain the correct degree of tonicity, contractility, etc., to produce a normal circulation. . . From this point of view, valvular disease or hypertension were only important ‘as a source of embarrassment to the heart muscle.’ Heart failure, therefore, was the inability of the heart to deliver blood to the tissues. This account, the forward pressure theory of heart failure, considered diminished cardiac output as the defining feature of failure, not back pressure owing to the accumulation of blood behind a damaged valve” (Lawrence, 15).

Mackenzie, James, on harm done by invention of stethoscope:

“No man ever used a stethoscope with a higher degree of expertness than this man who declared that the stethoscope had ‘not only for one hundred years hampered the progress of knowledge of heart affections, but had done more harm than good, in that many people had had the tenor of their lives altered, had been forbidden to undertake duties for which they were perfectly competent, and had been subjected to unnecessary treatment because of its ‘findings’” (Wilson, 103-04).

Mackenzie, James, publication of first book, The Study of the Pulse, in 1902, which “went forth unperceived by the giants [of his profession]” (Wilson, 181):

“A book differs from a paper or article in this respect – that it does not require the consent of an editor before it can see the light of day. And the publishers of medical works are far less under the influence of the ‘prevailing opinion’ than are usually the editors of medical journals. The latter are doctors themselves, and are usually obsessed by the idea that they must, at all costs, maintain their reputation for wisdom and gravity. Publishers, on the other hand, are men of the marketplace, with an instinct for business. They continually take risks in the matter of new and unorthodox books, which do them great credit. Mackenzie, like many another man, reached the public whom he sought to influence, behind the backs of the ‘Press’” (Wilson, 179-80).

Male Codes of Honor, Medicine and:

In 19th c., physicians “engaged in intra-professional regulation in a relatively informal way, according to an honor code that physicians, as gentlemen, were supposed naturally to understand” . . . the BMA had a Central Ethical Committee to assure gentlemanly behavior among its own membership. . . The committee did not have the power to expel offenders from the profession and even hesitated to revoke association membership in most cases. . . the committee acted much as the committee of a social club of “good old boys” in addressing violations of the recognized, masculine code (72, 73). . . . Why was there no dramatic increase in the number of female physicians in the first three-quarters of the 20th century? “. . . one hitherto unappreciated factor has been the exclusionary role played by the male honor culture in the multiple formal and informal settings where professional sociability controlled behavior, expectations, and opportunities. . . . In view of the fact that a willingness to engage in violence was regarded as a warrant for the truthfulness (or at least sincerity) of public statements, it is likely that in such situations educated middle-class women would feel, at the minimum, discomfort, and at most, a thoroughgoing disenfranchisement. . . [even where actual violence, e.g., dueling, had died out] “‘satisfaction,’ ‘making amends,’ the mediation of ‘differences,’ the premium put on deft but ‘frank’ interchange, all remained important aspects of middle- and upper-class professional sociability . . . Might this not have stilled the voices of women, who did not know or care to adopt the gestural repertory of assertiveness? Further down the hierarchy . . . smoking, drinking, and profanity, which were salient expressions of male exclusivity if not aggression” (Nye, 75-76).

Malingering, medicalizing/psychologizing of during WWI:

“If malingering were cast as a psychological condition it could serve as a means both of asserting medical authority and of gaining professional autonomy in the face of the military’s employment of doctors as mere medical police. Doctors might thereby extricate themselves from positions which, vis-à-vis the military, paralleled those of malingerers in their relation to medical officers. In effect, through psychologization, the malingerer’s use of his body in the struggle against military power, as well as the medical profession’s relations with the military, could be depoliticized, or at least removed from the space of medical jurisprudence” (Cooter, 132). . . . [But] the practical effect of modernity’s embrace of malingering is not widely apparent. . . The court-martialling of malingerers continued apace during the war. From the military’s point of view, the medicalization of malingering could only serve to encourage it and run up demands on war pensions (136). . . . rank-and-file doctors at the front . . . tended to respond to malingering not as a treatable pathology, but in terms of personal cowardice . . . among rank-and-file medical officers, as opposed to the elite psychologically-inclined ones, the encounter with malingerers might be demedicalized. . . . [GPs, resentful of the state’s use of doctors as salaried servants and third-party medical detectives] took up what might well be described as an anti-modernist position, at the heart of which was the trope of the sacred voluntary relationship between doctors and patients (138, 139). . . . [Among GPs] psychological modernism was not embraced, and neither were any standardizing routines. For the most part, malingering was managed paternalistically on an individual informed basis, essentially in reaction to the managerial forces of modernity, and in disdain of psychological expertise. If malingering in modernity signifies rebellion against industrial disciplines and routines, then GPs and malingerers had much in common” (140).

Man-midwives, ascendancy in mid-18th c. England:

“As long as women’s traditional collective culture remained intact across all social classes, childbirth retained this leveling quality. And surely this was why the man-midwife was so attractive to those wealthy and literate women who by about 1750 had collectively constructed a new female culture. . . . The midwife, by her very presence – whatever her actual deportment – served as a tangible reminder that ladies were mere women. But the man-midwife offered proof of their superior social status: who but ladies could afford the 10 guineas that William Hunter charged for deliveries?” (A. Wilson, 191).

Manufactured medicine and British commerce in 18th c.:

In Britain’s overseas colonies, there was the “staggering mortality that greeted newcomers, planters and African captives both. The brutality, demographic turnover, malnutrition, overwork, warfare, and anxiety of the eighteenth-century Caribbean fueled the medicine trade to that region and informed large concerns about manpower.” Thus, the colonial demand for manufactured medicines, which “struck a chord with plantation owners, bureaucrats, and surgeons, who sought treatments for the populations they oversaw.” Esp. in second half of 18th c., “a kind of bulk, portable medicine appeared increasingly necessary and convenient . . . Manufactured medicine offered a convenient solution to the omnipresent challenge of manpower by the logic that certain treatments could work on anybody irrespective of external characteristics or internal complexion. . . . manpower concerns introduced manufactured medicines to issues of political economy, such as improvement, balance of trade, and white supremacy, that shaped the routines of the British empire” (Dorner, Intro, passim).

Martin, on period of his training (1877-1880):

“We were in medical school during the days when the ‘art of medicine’ was practiced to the exclusion of the ‘science of medicine’; we were approaching the development of the ‘science,’ which was more and more to share with the ‘art’; and then the time arrived when we began to speak of the ‘science and art of practice’ rather than the ‘art’ or the ‘art and science of practice’” (Martin, 67).

Mayo Clinic, reliance on surgery in early 20th c.:

Led by William Mayo, an abdominal surgeon, the Clinic emerged “as a national exemplar of the lucrative cottage industry in abdominal surgery. . . patients were but minor characters in this drama. Hundreds of the newest operations . . . were either invented or refined at the Mayo Clinic in the 1910s . . . Mayos had become well known for innovations in gallbladder surgery, but they also perfected other novel abdominal operations, and William Mayo himself built an undisputed expertise in matters related to the spleen” (Wailoo I, 90).

Medical knowledge, role of 19th-c. women in creation of:

“In these cases of neurasthenia and puerperal insanity . . . the patients and their families and friends were participating in defining normal and abnormal behavior, physical illness or wellness. Physicians were taking their cues from patients (and families/friends), and the nonprofessional public was also learning from physicians about scientific definitions of health and illness. . . . Steeped in the belief structure of their time and place, physicians learned to call ‘abnormal’ what patients described as part of their illness: the inability to manage her household, the tendency to untidiness of person, the propensity for foul language. . . . In addition to the physical dangers physicians believed to be associated with every childbirth, they learned from their patients and their patients’ families that women could also suffer from numerous signs of temporary insanity shortly following delivery” (Theriot, 354-55; Theriot II, 23-24).

Medical Practice, at mid-19th century:

“Bleeding still lingered, however, though increasingly in the practice of older men and in less cosmopolitan areas. Mercury . . . still figured omnipresently in the practice of most physicians; even infants and small children endured the symptoms of mercury poisoning until well after the Civil War. Purges were still routinely administered in spring and fall to facilitate physiological adjustment to the changing seasons. The purposely induced blisters and excoriated running sores so familiar to physicians and patients at the beginning of the century were gradually replaced by mustard plasters or turpentine applications, but the ancient concept of counter-irritation still rationalized their use. . . . To most physicians at mid-century, one disease could still shade into another; illness was still in many ways a place along a spectrum of physiological possibilities – not some categorical entity capable of afflicting almost anyone with the same patterned symptoms as the more devoted advocates of French medicine contended. Holistic definitions of sickness as a general state of the organism were consistent with social attitudes toward need and dependence, in that both included moral as well as material elements. In both, the interplay of individual and environment could bring about health or disease, prosperity or poverty” (Rosenberg II, 92-93).

Medical technology, and physician authority:

As per Barnes (1999), patients’ search for “greater technical surety creates a major problem for the maintenance of an expert-lay relation. The more we base this relationship on technical knowledge rather than trust, the more the relationship between the two begins to break down or, more precisely, questions about our status as moral actors are redefined in terms of questions of a purely technical matter. Rather than ‘healers,’ medics are becoming ‘technologists of the body.’ . . . The arrival of the ‘expert patient’ simply gives a further twist to the technical ratchet . . . the ways in which these technologies are defined, given meaning, and challenged by lay actors is opening the medical ‘black bag’ and loosening rather than strengthening the control the doctor has over items found within it” (451, 452).

Medical technology, the worried well, and the re-visioning of disease and health:

By pushing back the boundaries of abnormality to find disease at earlier and earlier stages, “medical technology has created patients without symptoms, the ‘worried well,’ who occupy what we might call a therapeutic limbo, adding new forms of ambiguity and risk for both physicians and those subject to their gaze. Consequently, these new forms of screening and classification that are to be found throughout most advanced health systems create a situation where the control they engender may well be much less than some of the theoretical models of surveillance culture might suggest” (Webster, 445). . . . STS shows that the ‘technical’ is always socially shaped, and that together socio-technologies can be re-invented, reconfigured in different contexts and take on more or less degrees of ‘mutability’ and ‘mobility’ (Mol and Law, 1994; Latour, 2000). . . . in regard to the meaning of ‘illness’ or disease, or indeed ‘health,’ contemporary technoscience . . . is by virtue of this very complexity unlikely to be able to offer medical diagnoses that derive from pathological cause to symptomatic effect. Instead, high-tech medicine generates forms of diagnosis that are more likely to depend on the language of risk and probabilities than the language of causality. . . . In the context of informatics, the arrival of a multiplicity of virtual (Internet-based) information sources through which people (whether patients or not) can obtain publicly available as well as commercial advice has also created further opportunity for uncertainty and risk. These developments in both genetics and informatics, dependent as they are on ever-sophisticated socio-technologies, weaken the epistemological and professional authority of medical science and practice (447-48). . . . The third sense in which we can say that contemporary health technologies are qualitatively distinct from the past is that they deconstruct the physical body as the locus for health and illness. . . . Developments in genetics and informatics can be seen to redefine the boundaries of the body while working to dematerialize the body – not only for lay people but physicians too. The physical sense we have of the link between person, body and health may well be disrupted by the advent of cybernetic medicine with supercomputers being used to produce models of in silico organs, organ systems and eventually the ‘Virtual Human’” (449).

Medicine, as Art (Pressman):

“ . . . medicine is an art not because it is watered-down or incomplete science; rather, the challenge facing physicians is very different from that which faces laboratory scientists, and constantly draws upon a unique combination of perceptual talents. And, like architecture, medicine is formed in the junction of technical expertise and human values, a domain in which decisions must meet physical as well as human standards. Even in current times, then, physicians necessarily engage in the medical arts – as understood in their positive connotations, that is” (Pressman, 315).

Menopause, feminist vs. medical conceptions of:

“Feminists in the 1980s and 90s, by criticizing medicine’s construction of women as weak and vulnerable, themselves construct women who do not actively resist this medicine as passive. Because these modernist feminist critiques conceive of knowledge as fixed, their arguments coalesce around contests about the ‘real’ menopause. As a result the shifting focus of knowledge about menopause from constructions based on concepts of femininity to those based on the relationship of menopause to prevention of ill health is not brought into modernist feminist explanations of menopause and health care at menopause. Rather, associations of menopause with chronic diseases are deployed as further examples of the exploitation of ‘menopausal and postmenopausal women’: that this particular construction of menopause may have implications for contemporary medical practice is not considered outside the framework of ‘exploitation’” (Murtagh & Hepworth, 284). . . . By the late 1990s articles in professional academic medical journals constructed a triadic argument about menopause and decision-making: menopause is a marker for prevention; hormone replacement therapy, the key preventive pharmaceutical, brings with it both risks and benefits; women must make an ‘informed decision,’ with the assistance of their medical practitioner and based on an assessment of their personal risk profile. . . . As a concept upon which to base decision-making, individual choice fails to account for differences of power in society; it fails to account for social inequality and social difference. Indeed, it is itself a strategy of power which reinforces these inequalities by bringing a moral force to their existence (Murtagh & Hepworth, 2003b). . . . In this way the construction of menopause as a risk factor acts as a technology of power [i.e., for physicians]. This is one which is supported and perpetuated in the discourses available through health professionals and mass media. The menopausal woman as a consumer of health information and health advice is left between a rock and a hard place: there exists a moral imperative to choose but the grounds upon which you might do so (the biomedical evidence) are continually shifting. Defining menopause through discourses of risk and prevention of disease thereby limits the possibilities for choice outside a medical framework” (285-86). Health narratives [of middle-aged women re menopause] also tackle the important issue of “health choice and risk” (cf. Greene, Thompson & Griffiths, 280): “This raises questions around: what is informed choice, who is responsible for that informing and how much information is ‘enough’? Are these circumstances where a health professional should persuade rather than aim to inform? How is ‘risk’ defined and communicated? Such narratives raise the complex issues of power, empowerment, knowledge and understanding within the patient/practitioner relationship.” Furthermore, decisions re HRT “are rarely made upon purely scientific or medical grounds, they also reflect present concerns and embodied experiences such as the loss of confidence and self-esteem associated with weight gain” (284).

Menstrual blood, properties of according to Pliny, still being read in early modern Europe:

Its “startling properties” included “the ability to make a dog mad should he taste it, kill whole fields of crops, or to drive bees from their hives. It could also blunt knives, make iron go rusty, dull a mirror, and had the power to make an unwitting suitor fall in love if some of the woman’s powdered blood was slipped into his drink.” But it did have a beneficial property, whereby “it is said that the root of Peony being given with Castor, and smeared over with a menstruous cloth, cureth the falling sickness” (Evans & Read, 46).

Mental hygiene movements, definitions of mental health before and after Great Depression:

“During the 1920s, mental hygienists, inspired by Meyer’s psychobiology, had defined mental health in social terms as adjustment. The ability to hold down a job and to make one’s own living were seen as prime characteristics of adjustment and mental health. During the Depression years, it became much more difficult to maintain that individuals were out of work as a sole consequence of their lack of mental health. Incorporating psychoanalytic ideas, mental hygienists redefined adjustment as inner or emotional adjustment. Gainful employment was no longer considered essential for mental health; it was now seen as useful as an outlet for the expression of pent-up emotions which could, if need arose, be replaced by other activities. Employment was now valued because it was instrumental in providing a sense of belonging – a sense which could be attained in many other ways as well” (Pols II, 379).

Midwives, effort to control and regulate by public health nurses employed by Sheppard-Towner in 1920s:

“By the 1920s, then, the movement to control midwifery was fraught with cultural and racial tension: white, native-born professionals versus foreign-born women and women of color” (Munch, 115). . . . In states where midwives attended a high proportion of births, S-T funds consequently helped to pay for surveys of midwives’ practices, classes for untrained midwives, and inspections of licensed midwives (116). . . . “But supervision of midwives went so far beyond these legitimate requirements that it proved the most intrusive of S-T initiatives and the one that pushed hardest toward professional cultural hegemony” (116). Intrusiveness extended to the midwife’s personal cleanliness and appearance (116-17).

Midwives, identification with abortion by obstetricians in early 20th c:

“In turn-of-the-century Chicago, specialists in obstetrics won increased state supervision and restriction of the city’s midwives in 1896, 1908, and 1915. In each instance, the identification of midwives as abortionists facilitated the passage of new rules controlling midwives’ practices. . . . When reporters and city officials connected midwives and abortion to contemporary anxieties about the sexual vulnerability and independence of single women, politicians at every level acted to control midwives. Reformers, reporters, and politicians all reworked the midwife story and put their own slant on it, yet they followed the medical line that linked midwives to abortion and urged their regulation as the solution. Controlling midwives seemed to be the answer to an array of perceived social problems. Meanwhile, the role of physicians in performing abortions was overlooked. Stigmatizing midwives as abortionists was only one weapon used by specialists in their political campaign to suppress midwifery” (Reagan, 119).

Military surgery, at beginning & end of 19th c:

“Most military surgeons at the turn of the eighteenth century had not yet mastered ligature or the tourniquet; thus, amputation, the most common surgical procedure performed on the wounded, remained a traumatic and risky business. By the end of the nineteenth century, however, both ligature and tourniquet applications were normal practice, as was the use of the hemostat and surgical clip. Cautery was finally banished from the surgeon’s kit” (Gabriel, 132).

Miltown, and relocation of psychiatry in general practice:

“ . . . with the advent of Miltown, Americans consulted the doctors they were most likely to see for routine problems – family practitioners, internists, pediatricians, and obstetrician-gynecologists. . . . The biochemical revolution also fueled the diversification of psychiatric practice by transferring it from the specialist’s office to the generalist’s prescription pad. . . . By 1960, nearly three-quarters of all doctors in the United States prescribed meprobamate. . . . Miltown’s success, fomented by the prescription practices of nonpsychiatrists, forged a new patient-doctor relationship in which Americans increasingly came to regard mental health – first anxiety, then depression, attention deficit disorder, bipolar disorder, and so on – as grounds for routine medical consultation and pharmacological intervention” (Tone, 90, 91). Miltown ads were aimed at GPs, “emphasizing the ubiquity of anxiety even among otherwise healthy people . . .” Early Miltown ads “promoted the tranquilizer to treat ‘mental stress’ or ‘tension’ in ‘the average patient in everyday practice’ and deployed psychosomatic reasoning to suggest the drug for allergies, arthritis, asthma, and other problems. . . . the ads translated psychodynamic psychiatry’s obsession with anxiety into simple life problems that physicians confronted every day in their offices. . . If generalists could not psychoanalyze their troubled patients, they could at least ease worries with a pill . . .” (Herzberg, 34, 35).

Miltown, psychoanalysis and:

“[popular magazine articles] suggest that Miltown and the tranquilizers offered a treatment for the crisis of masculinity . . . tranquilizers answered the questions [Do you think a wife’s place is in the home?] with the precision of science, restoring a 1950s version of marital love while returning the mother to her rightful place in the home and in bed as if a conjugal-strength Mickey Finn. . . . tranquilizers promised to restore a man’s mastery of his own home and his sense of tranquility with it (Metzl, 107). . . . In a historical sense, tranquilizers thus became treatments for clearly psychoanalytic problems . . . Not only, however, did the notion of a bodily anxiety ablate the possibility that symptoms could return from the repressed – it ablated the repressed altogether. . . biological psychiatry treated the symptoms and deconstructed the diagnostic system that defined them (108). . . . Popular print sources thus suggest that the rejection of psychoanalysis, and the embrace of Miltown, in magazines such as Newsweek, Time, and Cosmopolitan, was based on something more than one clinical model replacing another. Also at stake, and even more so in these articles, was the embrace of a new model for talking about a specifically gendered perception of ‘cultural’ problems. . . . [it] presented a new means of justifying what Rosalind Minsky might call patriarchal unrest. . . . Chemical change then brought housewives in from the cold and rendered psychotic mothers suddenly able to perform their motherly duties. . . Offering a replacement for psychoanalysis both as a clinical mode of treatment and as a discursive system, biology thus posited a ready response to the social crisis implied in Newsweek’s search for sanity. . . . Psychoanalysis offered a beautifully conceptualized diagnosis without hope of immediate alleviation. In psychopharmacology, however, Newsweek found its cure” (111).

Minot:

“The results of treatment are often very difficult to evaluate – and for many reasons. One of them is the overenthusiasm of the doctor – particularly of the so-called specialist; the other is the modern tendency to forget the value of the simplest clinical methods – a complete and careful history and a complete and careful physical examination, with attention to the patient as a whole human being. Clinical vision should be sharpened and not dulled by the light of science” (from the introduction to his course on applied physiology for 3rd-year Harvard students, 1918; Rackemann, 117-18; cf. 204, 244, 249).

Modern bodies, creation of in 19th c. women:

“Nineteenth-century case studies indicated a transition from this prescientific point of view to a more objective, more modern sense of the body – for patients and for doctors. . . . at the same time patients were influencing the emergence of disease concepts in the doctor-patient dialogue, physicians were instructing patients about medicalized bodies. . . . The doctor-patient dialogue in which biosocial-psychological illnesses were diagnosed and treated was also a site for the creation of a discrete, manageable body (Theriot, 361). . . . In the taking of the history, in the solicitation of symptoms, in the questioning about menstrual irregularities, doctors contributed to the dissociation of body and self. Based on a growing clinical knowledge . . . physicians instructed their patients on the ‘norms’ of the body even as patients’ reporting became part of the data establishing those norms. Patients and their families and friends learned to pay attention to and report changes in appetite, bowel functioning, and menstrual flow; and they learned that any pain was a significant sign of something. . . They were learning a new way to distance themselves from the body, to observe the body, ‘tell’ the body, and act on the body. . . . they were learning to regard the body in a way previous generations of women had not: as discrete, objectifiable, observable; as having a knowable rule-observing interior. As they negotiated about granting speculum exams, wearing pessaries, ending lactation, submitting to restrictions of diet and activity, they were learning that the lived-body was also a fact-to-be-reported and an entity-to-be-examined. Its operation was analyzable and understandable” (362).

Mustard gas experiments of WWII, among Allies, effects on soldiers:

Injury depended on the type of test (gas chamber, field test with aerial spray) and duration of exposure: “Some men developed severe eye injuries and damage to lungs. Most frequently, men had burns and blistering on the skin, which sometimes left scars.” Blisters usually occurred where men sweated, “especially the face, hands, underarms, buttocks, and genitals” (S. Smith [II], loc 922). Many of the men in mustard gas experiments in U.S. and Canada “were in agony for days, weeks, and even months. Soldiers needed time to recover not only from the blisters and oozing sores on their bodies, but also from systemic poisoning” (loc 943).

Myocardial infarction, why Britain’s “new cardiologists” did not describe it:

[Lewis and other British physicians] “had trained themselves to see precise functional relationships between the muscular chambers, but when it came to signals that the heart was failing, they were interested in the performance of the whole myocardium, not in any anatomically distinct part of it. Infarction, like a damaged valve, was not of importance of itself. Herrick, on the other hand, was not educated in this outlook. He used the ECG as a transducer. He was not only interested in the rhythm of the heart but, through the shape of the QRS complex, in its anatomical, not simply its functional, architecture” (Lawrence, 28).

N

Narration, and clinical specificity:

“Picking up on the narrative aspects of therapy is not something that needs to be left to the wisdom of therapists. At least to some extent, it can be standardized and taught. The problem rather is in seeing who might have an interest in fostering such an approach, as the possibilities for making money out of something that in principle could never be shown to be specifically effective would at present seem to be rather limited” (Healy, 253).

Narrative, of “good patient”:

“. . . generic categories can impose restrictive conventions on what is not only literature but is also a means of organizing experience. These conventions are reinforced by the cultural preference for the ‘good patient’ (Buckwalter, 2007). Whereas any exaggeration of illness . . . is viewed as problematically inauthentic or even malingering, people who are ill or disabled are generally rewarded for an exaggeration of health or at least good nature, the cheerful stoicism expected of the ‘good patient’” (Garden, 127).

Narrative storytelling, physician transformation and:

Conscious employment of a narrative approach “might represent a sincere attempt on the physician’s part to develop over time into a certain sort of person – a healing sort of person – for whom the primary focus of attention is outward, toward the experience and suffering of the patient, and not inward, toward the physician’s own preconceived agenda. . . . The notion that one is trying over time to develop into a special sort of person and that one is willing to open oneself to being changed by experiencing the suffering of others proves finally that the physician has accepted a suitably humble status in the power hierarchy of the physician-patient relationship. . . . The ‘narrative physician’ knows that sometimes objective detachment is both necessary and comforting to the patient but sometimes a compassionate vulnerability is required” (Brody, 88-89).

Narratives, patients’, constructed nature of:

“There are, however, tensions between the lived experience of illness and disability and the conventions that shape the representations of those experiences in published narratives. Narrative places particular constraints on embodied experience. . . . The limits and order of narrative exert pressure on the shape of published accounts of illness and disability and, in particular, their resolutions or endings. . . . the generic happy ending is predominant in book-length illness narratives” (Garden, 122, 123). “The lesson for medical professionals is that a literal veracity, a one-to-one mapping of life onto representation, cannot be expected from illness stories. . . . What Being John Malkovich suggests is that even strictly ‘inauthentic’ stories are richly ‘authentic’ in their very idiosyncrasy; that even when they cannot be mined for facts, stories can be understood as meaning-making processes” (DasGupta, 448). . . . “This more complicated vision of illness narratives supports the notion of the other’s unfinalizability – the impossibility of ever saying the last word, telling the final story – as well as the potential in true dialogue for always asking, telling, and hearing more, a potential that must be imagined and therefore understood by the listener” (DasGupta, 456).

Narratives, patients’, limitations of:

“The operation and influence of both personal motivational bias and existing meta-narratives are crucial qualifications to our understanding of patient narratives because otherwise it appears as though people are ‘just telling their stories.’ In fact, they may be telling stories that they feel motivated to tell in order to be perceived as ‘good’ patients or commendable individuals; or they may end up availing themselves of idealized, culturally accessible plots that represent how things should be (a kind of societal wish fulfillment) as opposed to how they actually perceive things to be” (Shapiro, 69). . . . Within the academy, transgressive, boundary-violating, defiant counter-narratives are championed [but] “Contestation and opposition do not automatically constitute more valid criteria for ‘truth,’ reliability, authenticity or trustworthiness than other authorial stances. A brutal, unremittingly ugly narrative is not necessarily a more ‘real’ narrative than a transformative one” (70). . . . “Patients’ narratives themselves are not simplistically one thing or another – not entirely an act of rebellion against confining prevailing norms, nor an exercise in crafting a positive image for posterity. All stories necessarily contain elements of both authenticity and inauthenticity, are always partly trustworthy and partly untrustworthy, to some degree are unavoidably self-representations and performances. However, as consumers and necessarily evaluators of narrative, the unconscious biases and predilections of clinicians and scholars, whether in one direction or another, may diminish their capacity to complicate and fully appreciate the stories they hear and read” (70).

Naturopathy, position on bacteriology of:

For naturopaths, germs “were effect rather than cause, agents that established themselves in the body only after it had already begun to deteriorate ‘because of our unnatural mode of living.’ . . . Whenever anything (clogging of tissue spaces, impermeable clothing) interfered with nature’s processes of elimination of waste through kidneys, bowels, lungs, and skin, sickness was sure to result. Then, and only then, did germs appear, drawn to the feast of putrid fluids pooling in unpurified tissue. In short, naturopathic etiology was hygeiotherapy’s physical Puritanism reborn in the age of bacteriology” (Whorton, 205-207).

Nazi antismoking campaign:

“Gender images blurred with other associations, including stereotypes of race and class. Smoking becomes associated not just with sexual depravity and licentiousness, but with communism and Judaism. Jewish and communist women were said to be especially likely to smoke, and to foist their filthy habit on others. . . Smoking was associated with jazz, with swing dancing, with rebellion, with Africa, with degenerate blacks, Jews, and Gypsies, with many of the other fears that inspired the Nazi retreat into a paranoid, xenophobic fortress of purity, cleanliness, and muscular macho health fanaticism” (Proctor, 218-19). “Hitler’s personal aversion was only one of several factors in the Nazi war on tobacco. The more important concern . . . was the productive and reproductive performance of the German Volk. Tobacco, like alcohol, was said to be sapping the strength of the German people – at work, at school, in sports, in the bedroom and the birthing clinic, and on the field of battle . . . What we find is a merger of the earlier moral critique with an increasingly medical critique. The moral element is not lost but is in fact strengthened through the incorporation of the Nazi-era rhetoric of bodily purity, racial hygiene, performance at work, and the ‘duty to be healthy.’ Gesundheit über Alles is one of the hallmarks of Nazi ideology” (221-22).

Nazi child psychiatry:

By 1936, Nazi child psychiatry, led by Leipzig’s Paul Schröder (with whom Asperger did an internship in 1934) and his student Hans Heinze (whom Asperger also met and idolized), had replaced politically neutral, therapeutic “curative education.” They developed diagnoses, esp. revolving around Gemüt, for children lacking community connectedness, conditions that resembled and preceded Asperger’s definition of autistic psychopathy. “For Nazi thinkers, Gemüt referred to one’s fundamental capacity to form deep bonds with other people. . . . Nazi child psychiatrists aimed not to cultivate youths’ Gemüt as an end in itself, but as a way to strengthen the community. . . . Gemüt was instrumentalized, an individualist means to a collectivist end” (Sheffer, 69-71). Asperger attributed “sadistic traits” to autistic children, writing of the “primitive spitefulness” and “negativism and seemingly calculated naughtiness of autistic children” who “delighted in malice” (156). Their “wickedness and cruelty speak clearly to poverty of Gemüt” (219).

Nazi expulsion of Jewish medical students and Nazification of curricula, postwar consequences of:

“This qualitative decline, which was to have grave import for the future of medicine in Germany after 1945, was predicated on additional factors not traceable to anti-Semitism. To begin with, the problem had a quantitative dimension in that whatever solid medical teaching had been left . . . was pressed into fewer hours than had been the case before 1933. This progressive curtailment of legitimate medical content commenced with the emergence of peacetime Nazi preoccupations for students and after 1939 led to martial supertasks. . . early in the regime precious time was deducted from needed study periods, especially before the premedical and final medical examinations, by the labor service imposed through the Nazi-coordinated student self-government. . . As well, in 1934, mandatory SA service, including physically demanding sports training and days-long stays in exercise camps, cut deeply into the study time of every male student, but especially medical ones because of their daily commuting between university and clinics” (Kater, 172). . . . “This system produced ramshackle physicians, whom even the wounded soldiers did not trust. . . . specialism was discouraged or neglected so as to manufacture physicians with minimal general training in assembly-line fashion; on the other hand, the half-baked generalists were suddenly required to do most surgery in the field, a subject on which they had not had sufficient instruction” (173). . . . “traditionally valid medical subjects were either shortened or abandoned to make room for useless fields like Rassenkunde, useless particularly in war” (174).

Nazi treatment of women physicians:

Female medical students, like male, were concentrated in Medizinische Fachschaften (medical work units) to spread Nazi propaganda. Women also had other nonmedical extracurricular tasks: from 1940, three months a year of hospital service; from 1941, orderly duties in a hospital for six months before admittance to med school; from 1940 on, obligatory “field service” (usually at harvest time) via physical labor or factory work. After all this, female docs worked mainly in the conquered East, trying to prevent trachoma or teaching Polish-German mothers proper baby care; they also had “race-hygienic, eugenic tasks” performed under the SS (Kater, 102-104).

Nervousness/Insanity as unfeminine behavior in 19th c:

“ . . . the common characteristic of the symptoms was the unfeminine nature of the behavior or feeling. Insane and nervous women were described as antimaternal, selfish, willful, violent, erotic – all of these inappropriate in terms of nineteenth-century definitions of womanhood. . . Case studies indicate that women patients and their families and friends were as responsible as physicians for linking unfeminine behavior with insanity and nervousness” (Theriot II, 17). . . . “Husbands brought in wives for a variety of unwomanly offenses. Women who disagreed too vocally, lost interest in personal appearance, or neglected their children were brought to physicians by husbands who saw this behavior as insane or nervous” (18).

Neurasthenia, Jewish variant of ca. 1900 Boston:

“Ethnic identity infiltrated so deeply into the definitions of nervousness that Cabot and many of his Boston peers made the extraordinary identification of a separate Jewish racial variant of nervousness. . . ‘racial neurasthenia,’ ‘Hebraic debility,’ ‘jew-neurasthenia,’ and ‘Jewish psychoneurosis’” (Crenner II, 164). . . . “Nervous predisposition was a serviceable conduit for the influence of racially determined differences on unitary physical disease. . . similar connections between race and nervous temperament occurred to physicians contemplating putative racial differences in syphilis” (169-70). . . . “The nervous system provided a useful link between fixed disease and the fluid effects of racial circumstances or vaguely defined inherent differences in ‘racial character,’ ‘sexual energy,’ or ‘moral tendencies.’ Race and nervousness, in fact, occupied somewhat analogous roles in the application of objective, scientific analysis to medicine. . . Whether understood as inherent or imposed, both racial and nervous characteristics often took the role of mediators between the natural biology of human disease and the contingent effects of personal habits and social conditions” (172). . . . “Boston’s Protestant physicians demonstrated a frustration with Jewish nervousness that served to highlight their contrasting satisfaction in handling physical disorders. The difficulties of evaluating nervousness might be redeemed by diagnostic perseverance, so that by acknowledging racial propensities to unreliability, the physician could transcend these obstacles to find the source of ‘real pain.’ In a case of diabetes, for example, a physician might reassert the clarity of medical control even in a setting where nervousness seemed to predominate. Racial hierarchies in reliability assisted in this process. . . . A patient’s racial identity situated him or her within a hierarchy of reliability in reporting symptoms” (178-179).

Neurologic practice, Wm. Hammond’s:

“‘Cerebral hyperaemia’ [excess blood circulating in brain], in particular, furnished the rationale for a huge portion of the neurologist’s practice. In this respect it was very similar to ‘neurasthenia’ [a diagnosis that Hammond rejected]” (Blustein II, 153). Insomnia was the crucial symptom of cerebral hyperaemia, which fell to persons who overstrained their brains: “The only direct evidence Hammond had to define the pathology he suggested for cerebral hyperaemia was ‘the redness of the face, and throbbing of the cephalic arteries’ and ‘the sensation of fullness of the head.’ Hammond also cited the ‘persistent insomnia always present’ as evidence that cerebral circulation was disturbed, but here the argument was dangerously close to circularity” (157).

Neurologists vs. neurosurgeons, debates re medical identity, 1920-1950:

“The neurosurgeons believed therapy to be the critical organizing principle of specialization, while the neurologists, arguing that both organic and functional disorders ought to belong to neurology, based specialization on a focus on the nervous system broadly defined (Gavrus, 59-60). . . . the self-fashioning of neurologists and neurosurgeons had a distinct performative dimension, and the language used was calibrated specifically for this purpose (61) . . . . these exchanges played out first as performances to an audience. This oral genesis provides insight into the manner in which the doctors’ rhetoric worked to persuade by means of entertainment while creating and perpetuating particular narratives of professional identity. . . . For instance, the attempt to invoke a fundamental difference in temperament between physicians and surgeons by locating the neurologist’s supposedly reflective nature in a historical past in which physicians were characterized as contemplative learned gentlemen while simultaneously dismissing the neurosurgeon as an unthoughtful man of action allowed neurologists to frame a response to the neurosurgeons’ challenge of therapeutic superiority [cf. 79-81, where the temperamental difference is cast in terms of Shapin’s and C. Lawrence’s use of the term “repertoires”]. This was a rhetorical strategy deployed in rousing speeches to which the professional audience could contribute communally. . . . the critical role that rhetoric and performance may play in the fashioning of medical identity” (62, 75, 79, 81-83). E.g., the private correspondence between Wilder Penfield and British clinical neurologist Francis Walshe (65-68). E.g., the way all published documents began as oratory (performances) followed by entertainment (“When the sessions filled with clinical and scientific papers were over, the doctors literally put on costumes to sing and dance . . . the content of their plays . . . blurred the edge between the entertaining and the professional”) (82). Beginning in the late 1930s, “the neurosurgeons’ prominence on the medical scene eclipsed that of their clinical neurologist colleagues” (77), but the loss of authority led to a counterattack in the 40s, when neurologists appealed to Fulton’s notion of “dynamic neurology” and attacked neurosurgeons’ claims of therapeutic superiority, invoking new drugs (Prostigmin for myasthenia gravis) and claiming that previous surgical treatment approaches (e.g., of glioma or hemorrhage) would give way to medical management (79). Fundamental change came in the late 40s & early 50s, when the debate shifted from antagonism toward neurosurgery to internal organization (via the newly formed American Academy of Neurology and its new journal, Neurology) and the development of a new revisionist narrative in which the neurology of the past had never achieved professional autonomy, which now fell to the younger neurologists (86), who were bolstered by the new neurological service under the VA (87). “In order to argue that neurology was undergoing a period of progress, [Pearce] Bailey reread the past in a way that suggested a break with neurology’s present focus on therapy. This narrative does not, however, reflect the past faithfully. Early neurologists such as Dana were deeply invested in therapy, as were the physicians who established the Neurological Institute of New York in 1909” (87, 90).

Neurology, late nineteenth century, impact of European psychotherapeutic ideas in America:

“Perhaps the greatest effect . . . was not as a source of hypnotic experiments or explanations for traumatic neuroses but as a source of effective treatments. . . Those who found such [per Bernheim] suggestive therapeutics deceptive and perhaps unethical could turn to Dejerine or Dubois, who advocated using moral appeals and reasoning to persuade patients to get better. And, of course, there was Freud and his ‘psycho-analysis.’ After Janet’s visit in 1906 and Freud’s in 1909, competition between the advocates of these various approaches intensified” (Brown, 9). “The split among neurologists over psychotherapeutics in general and over psychoanalysis in particular widened during the second decade of the twentieth century.” Jelliffe’s Journal of Nervous and Mental Disease was so given to psa articles that in 1913, a group of “organic” neurologists rebuffed Jelliffe and founded the Archives of Psychiatry and Neurology, after which event “organically and psychologically oriented neurologists continued to grow further apart” (Brown, 10).

Neurology, state of after WWII:

“Medical neurology in the postwar decades would no longer be a primary care specialty as it had been for some 75 years previously. Neurologists would leave the ‘nervous’ patients to psychiatrists, surgery to the neurosurgeons, and the ongoing care of other patients to internists or family physicians. They would confine themselves to consulting, teaching, and research. In these roles they would flourish, particularly under the auspices of the National Institutes of Health and of private foundations such as the March of Dimes and the Muscular Dystrophy Association. “Yet something was lost in this process as well. The project integrating the knowledge of mind, brain, and behavior under the rubric of ‘neurology’ was set aside, at least temporarily. To the extent that neurologists continued to strive for a scientific, material understanding of psychological phenomena, their efforts were directed almost exclusively along reductionist lines. In a parallel fashion, psychiatrists who maintained an interest in organic bases (or correlates) of mental disturbance bypassed sophisticated neurological theory in favor of the empirically based use of psychoactive drugs or crude surgical interventions” (Blustein III, 112-113).

Nightingale, Florence, and germ theory:

Nightingale “initially dismissed Koch’s theory and continued to advocate for sanitary reform. . . eventually and grudgingly accepted the theory, but with reservations . . . she believed it overemphasized the agency and power of germs over the problem of unhealthy sanitary conditions.” Even after she came to believe in germs in the late 1880s, “she questioned their origin. She contended that filth produced germs and that sanitary measures were thus needed to prevent their spread. Koch’s research in India suggested the reverse. . . Koch and Nightingale disagreed about what came first” (Down, 107-108). “She remained intellectually committed to miasma theory and questioned the existence of microbes, but she did play an important role in establishing a set of practices that contributed to the field of epidemiology . . . her method and subjects of analysis mattered more than her position on any given theory” (111). [Down doesn’t grasp the significance of paradigm change (Kuhn) in reconceptualizing and reordering a “set of practices.” Practices do not exist in vacuo; they necessarily fall back on one theory (miasmatic theory) or another (germ theory). – PS]

Nocebo effect:

“Placebo side effects occur when expectations of healing produce sickness, however minor; a positive expectation has a negative outcome. For example, a rash that occurs following administration of a placebo remedy may be a placebo side effect. . . . In the nocebo phenomenon, however, the subject expects sickness to be the outcome, i.e., the expectation is a negative one. . . . [In one study] 80 percent of hospitalized patients given sugar water and told that it was an emetic subsequently vomited” (Hahn, 57).

Non-sectarian health guides of post-Revolutionary America:

Elite physicians were not intent on eliminating lay practitioners (=self-help). This was not the overriding issue before the 1830s. “Rather, beginning in the 1770s, they sought to induct the population into a learned medical ethos in which self-help occupied an integral but newly auxiliary role.” This reflected the determination of elite physicians to raise professional standards, as well as the republicanism of the Revolutionary era, “which mandated a well-educated citizenry to ensure social health and national survival” (L. Murphy, 5-6).

Nurse practitioner, opposition to within nursing:

“The primary point of reference for the nurse practitioner is arguably not truly nursing (no matter how much nurses insist it is) but, instead, medicine and economics. Nurse practitioners have recurringly been promoted and have presented themselves as a cheap albeit excellent alternative to physicians. Medicine and economics, not nursing ideals, remain the ‘gold standard’ against which the nurse practitioner is promoted and judged. . . . In medical and public policy discourse, the identity of the nurse practitioner as a nurse is elided. The nurse practitioner is a ‘mid-level’ provider in a vertical hierarchy still dominated by the physician. The nurse practitioner is a ‘physician extender’: the Hamburger Helper of health care” (Sandelowski, 190, 191).

Nurse-midwifery, as “sectarian profession” vis-à-vis nursing:

“It blended the practices of nursing and midwifery during the Progressive Era campaign to eliminate traditional African-American and immigrant midwives, decrease the role of general practitioners in births, and replace both with obstetrician-gynecologists. Public health nurses and social reformers first denounced midwives as ignorant and dangerous and then proposed educating nurses in midwifery because, as nurses, they would maintain asepsis and seek obstetrical consultation more effectively than traditional midwives or general practitioners, while meeting the needs of women who either had no access to physician care or preferred midwives because of cultural tradition” (Dawley, 149). After a brief period (1944-1952) when nurse-midwifery was a section of the National Org. of Public Health Nursing, the nurse-midwives formed their own American College of Nurse-Midwifery, which first met in 1955 (151). . . . Over time these changes [more in clinical practice; more state regulation as nurse practitioners from the ’60s on] intensified division in primary professional identity between those who thought of themselves as midwives with a background in nursing and those who identified as nurses who also had an education in midwifery. In terms of practice there were two distinct groups – those who practiced clinical nurse-midwifery and those who used their midwifery to improve maternity care for women while working in traditional nursing roles (153). Even though Carolyn Conant Van Blarcom, Clara Noyes, and Mary Breckinridge accepted the English title “Certified Midwife” when arguing for educating American nurses in midwifery, “almost from the beginning, [American] nurses who were educated in midwifery preferred the title ‘nurse-midwife’” (154). Debate over professional identity reopened in 1972, when the assignment of writing regulations for the new nurse-midwife act fell to the State Dept. of Health Office of Nursing Manpower, albeit naming ACNM certification as the qualification for nurse-midwife licensure (156ff.).

Nurses, early twentieth century registration of:

“When the issue of control moved out of the sickroom and hospital corridors into the legislative arena, the battle over nursing became more visible and vituperative. . . . The laws differed from state to state and changed over time, but most were permissive rather than mandatory and did not cover ‘all who nursed for hire.’ They differentiated nurses more by education than by practice; only those wanting to be labeled as ‘registered nurses’ needed to sit for the examination and/or graduate from approved nursing schools. . . . Qualifications for taking the state examination and control over the examining boards focused the real question: legal acknowledgment of nursing’s right to determine its own occupational future. Few physicians were willing to concede to nursing, through state mandate, what they were unwilling to give up in practice” (Reverby, 124). . . . Opposition was voiced by physicians and “also was voiced by nurses who had graduated from smaller schools. . . . Even when registration was passed, the administration of the laws and the nursing boards proved to be as weak as most of the laws themselves” (127). . . . “The professional nursing associations’ focus on registration and education reforms often brought them into conflict with their own constituencies and physicians, and into precarious alliances with the larger hospitals and their associations. Although legislation slowly upgraded standards in the schools, it did not achieve the goal of legitimating the professional nursing associations’ right to regulate nursing or facilitate the creation of a united occupation” (128).

Nurses, trained, identity of in early 20th century:

“They constructed their role as that of educated, supportive allies of physicians at the bedsides of sick patients. But they also saw their role as occurring in spaces where they would be the only women with the uncontested authority to engage with medical content. Nurses welcomed the military analogies because they referred not only to the male medical head but also to their own legitimate power to search for relevant pieces of medical data, to negotiate new meanings about a particular patient’s symptoms, and to create new ideas about the significance of a particular clinical situation. Nursing leaders, that is, did not simply work with accepted social configurations. Rather, they deliberately manipulated carefully constructed representations to meet their own ends” (D’Antonio, 50). . . . [By the early 1920s], nurses “had already assumed the knowledge and the expertise needed for vigilant clinical observation and intervention: for recognizing, understanding, and responding to the meaning of discrete physiological and psychological signs and symptoms . . . They claimed, in fact, the very knowledge about practice and process that, as historian John Harley Warner has argued, was precisely what physicians – before the dawn of scientific medicine – took as their exclusive domain” (52).

Nursing, authority of obstetrical nurses by 1940:

“By 1940, nurses who worked in specific labor and delivery, postpartum, and nursery units began to develop a clinical expertise and familiarity with the routine that freed them to pay attention to patients’ needs for comfort beyond the strict scientific application of medical treatments. In the process of smoothing the path for the advance of obstetrics, nurses had begun to develop a vision no longer confined to the field of medicine. While cooperative with medicine, nursing was moving beyond a physician definition of the function of nurses. . . . because of their relationships with patients, nurses were privileged to a particular kind of knowledge that, while derived from and complementary to medical knowledge, was different” (Rinker, 119-120).

Nursing, failure to achieve collective power:

“The particular female hierarchy within nursing further restricted the nurses’ collective power. . . . But nursing, from its very beginnings, created a female hierarchy in which sisterhood was difficult to achieve when different class-based assumptions about behavior and work collided. Commonalities of the gendered experience could not become the basis for unity as long as hierarchical filial relations, not equal sisterhood, underlay nurses’ lives. . . . But nursing had neither the financial nor the cultural power to create the separate women’s institutions that provided so much of the basis for women’s reform and rights efforts. . . . Nursing remained bounded by its ideology and its material conditions” (Reverby, 200-201). . . . “The tradition of obligation made it very difficult for nurses to speak about rights at all, or to articulate a vision of caring that acknowledged the need for the right to determine duty” (203).

Nursing, failure to professionalize through education through 1920s:

“But the [hospital] work load continued to become both heavier and more complex, while administrations remained reluctant to commit hospital resources to the educational program. Further, hopes that more science could be added to the curriculum, or scientific methods used to evaluate procedures, were not fulfilled” (Reverby, 157). . . . “But once the methods and tools of the efficiency experts were introduced, they could be used, not to upgrade nursing, but to subdivide and increase the work. Once again, as in the nineteenth century, the benefit from a seemingly positive structural nursing reform accrued primarily to the hospitals” (158).

Nursing, fetal monitoring and “tacit knowledge”:

“Accordingly, nurses saw electronic fetal monitoring as proving the value of the tacit knowledge and difficult-to-express processes they used to appraise patients. In addition, electronic fetal monitoring not only moved obstetric nurses closer to obstetricians; it also moved obstetric nursing, especially nursing in labor and delivery, up the hierarchy of nursing specialties. . . . Electronic fetal monitoring tied obstetric nursing more closely to the prestige and drama of critical care nursing, represented iconically by the cardiac monitor. Electronic fetal monitoring moved the labor and delivery nurses closer to the critical care nurse, who had achieved an elite status among nurses” (Sandelowski, 156).

Nursing, impact of cardiac and vital function monitors on nurse identity in 1960s:

The “new machinery” → new vision of nurses as machine monitors and as monitors themselves, esp. ICU nurses → new technological self-understanding and new collegiality with docs: “While the new machinery of care fostered inequality among nurses, it stimulated collegiality, collaboration, and a ‘more equal’ relationship between nurses and physicians, if only because both nurses and physicians were often equally unfamiliar with it and harnessing its benefits required that nurses be allowed to diagnose and treat emergent life-threatening conditions” (Sandelowski, 127-8). Hildegard Peplau, among others, expressed concern about automation in nursing, because the essence of nursing resided in the interpersonal (as opposed to technical) relationship between nurses and their patients (130).

Nursing, in Civil War:

“Military surgeons’ reception of women at the front was beclouded by internecine struggle within the medical profession over the standardization of medical training and practice” (Schultz, 371). . . . “the opposition among surgeons even to the idea of female hospital attendants in the first year of the war was fairly common” (376). . . . “few women had the training their military superiors desired. In the first two years of the war, a status-anxious and poorly supplied corps of hospital surgeons was ripe to find a medically untrained and militarily naïve group of women a nuisance” (373). . . . “conflicts over bureaucratic inhumanity, morality, and corruption were pervasive. Nurses found no greater opportunity to individualize the suffering of their patients and thereby demonstrate their moral superiority to surgeons than in conflicts involving food” (382). . . . “Thus the two ways in which nurses individualized the suffering of their patients – in focusing on specific personal details and in narrowing the focus of their sphere of action – were meaningfully related: the former to give substance to the lives snatched cruelly and randomly by death, the latter to salvage personal effectiveness in an often hostile military bureaucracy” (384). . . . [Civil War nurses] also began to realize the professional implications of gender difference. . . . And they began to see an ethical conundrum in a system that equated medical authority with professionalism. “Nurses observed that surgeons’ insistence on maintaining hospital protocol sometimes resulted in poor patient care. If the power and authority granted to surgeons permitted lapses in the standard of practice – through corruption, bureaucratic inefficiency, and depersonalization – then the concept of professionalism itself, through which all surgeons were accorded power and authority, lacked the essential humanitarian ingredient in medicine of putting the patient first. . . . women rejected the values that formed the basis of medical authority as defined by surgeons” (388-89).

Nursing, racism in hospital employment in late 1920s:

“The nation that drew lines around water fountains and waiting rooms, buses, schools, museums, bathrooms, and sidewalks had also extended them to hospital wards. Black nurses could work only in Black hospitals, and in the late 1920s, America registered 210 Black hospitals, compared with 6,807 white ones” (Smilios, loc 492).

Nursing, tension between proceduralism and “true nursing” beginning in 1930s:

Isabel Stewart in the 1920s: nurses were abandoning “health nursing” in favor of more “dramatic” sick nursing in hospitals (Sandelowski, 105). In hospitals, transfer of bedside nursing to a new category of technical nurses or nurse technicians, with the resulting paradox: “this technical nurse was not to perform the procedures that were crowding out traditional bedside nursing but, instead, bedside nursing itself. . . . the technical nurse now performed true nursing functions (or bedside care), while the professional nurse performed technical functions (or the execution of complex medical tasks and administration). True nursing did not necessarily require a true nurse” (106).

Nursing, transformation of during Great Depression:

“ . . . the change to graduate staffing was a rapid fait accompli. Nearly 60 percent of all the hospital beds in the country in the late 1920s were in hospitals with nursing schools; of these 73 percent had no graduate staff nurses and only 15 percent had four or more. ‘Attendants’ predominated in the staff positions in the hospitals without schools. Between 1929 and 1940, however, the number of nursing schools dropped by 574. While the remaining schools often became larger, there was an explosive expansion in the number of graduates working in staff positions. . . . the numbers rose from only 4,000 in 1929 to 28,000 in 1937 to over 100,000 by 1941. By 1941, there were more staff nurses than private-duty nurses and students combined. . . . Blue Cross financing and reimbursement . . . helped to solve some of the expense dilemmas. Hospitals could bury the cost of nursing in the hospital bill and pass it on to the insurer for reimbursement” (Reverby, 188). Only in 1936 did the joint boards of the national nursing associations finally approve licensure for subsidiary nursing workers (aides), a year after NY state passed the first mandatory licensure law for all nursing personnel (193). . . . In 1942, the Natl. Nursing Council for War Service, in conjunction with the PHS and the American Red Cross, began to initiate more training programs for practical nurses and to approve training of aides (196).

O

Obesity, Post-WWII prosperity and:

“At the root of the condemnation of fatness and overeating was concern about the effects of America’s growing prosperity and technology. In the largest sense, the panic about fat was a reaction to modernization. . . . Post-War prosperity, with all the changes it wrought, was precisely what made medical and national leaders lead the campaign against overweight” (Seid, 128). . . . “There seemed to be a fear that if the new prosperity were misused, a debacle might ensue like that which followed the prosperous twenties. . . . concern that the backbone of the whole nation might be weakening. This caused special panic as the Cold War got under way. Competition with Russia heated up, and keeping Americans morally and physically fit, especially American children, seemed essential if that battle was to be won” (130).

Obesity, psychogenic explanations of in 1950s:

“Overweight people did not have a problem with their body machinery – their metabolism, glands, or even genes, but with their appetites.” According to the NY Times Magazine (1959), 90% of all obesity was caused by psychogenic problems (Seid, 123). . . . “Body weight had become solely a question of personal habits, of diet and exercise – that is, of psychic health and personal self-control” (125). . . . “The overfed body was, strangely, an empty body, one plagued with emotional needs that food would never satisfy. In this construct, food was abstracted beyond even a scientific quantity of calories and nutrients. It had been abstracted into an emotional symbol. . . . The psychoanalytic model also reinforced powerfully the connection between sexual maladjustment and fatness. . . . Appetite for food became inextricably tangled up with the appetite for sex. But the sexual instinct came to be regarded as more powerful, more important, and more respectable than the instinct to please the palate or fill the stomach” (126).

Obstetrical authority, consolidation of between 1880-1920:

“It was mediated in large part by the development of new medical technology and, perhaps more important, by the perception of the effectiveness of this technology by both medical professionals and the laity” (Leavitt II, 230-31). . . . “At the end of the 19th century the available procedures (in the order in which physicians would have considered them) included a high forceps operation, or internal version with forceps application on the after-coming head; symphysiotomy, the surgical separation of the pubic bones; pubiotomy (also called hebotomy), the cutting of the pubic bone to increase the conjugate diameters; cesarean section, the delivery of the fetus through an incision in the abdominal wall; or craniotomy, the reduction by various operations of the size of the fetal head so that it would fit through the pelvic opening” (233). . . . Technical criteria for selecting between craniotomy and cesarean section occupied physicians’ attention for years and caused heated arguments at medical society meetings (236). . . . “In favor of their [physicians’] autonomy and authority was their exclusive control of the technical information necessary to both decision making and execution of certain procedures. Some of the objective information, including pelvic measurements and estimation of the size of the fetus, although open to interpretation, remained in the hands of medical observers alone” (241). . . . [Yet] such control over information did not necessarily lead to their ultimate power in decision making . . . extramedical factors that they felt they could not control . . . [re surgical intervention] parturients had to be convinced to move to hospitals . . . ethical and religious considerations . . . “. . . medicine itself remained shackled. Women were losing their traditional birth-room powers, and physicians were taking up only part of the slack. Who or what stepped into the breach? . . . Church and husband, both more active birth-room participants in this period” (242). . . . “The husband, as a rule, also spoke of the necessity of saving his wife’s life first. The priest, in the case of Catholic families, spoke for the ‘innocent’ fetus over against the ‘guilty’ mother. In the midst of this debate, obstetricians tried to establish medicine’s authority to cast the ‘swing’ vote – indeed, to make medicine’s vote the most important one” (244). . . . “The new technology made it possible to consider fetal life as a viable option and provided physicians with the opportunity to wrest decision-making power away from its traditional place within the family and establish it in their own domain. It was in these high-risk cases requiring surgical intervention that physicians found their first commanding voice in the birthing room; they later learned to use their new authority in all obstetric cases” (245). Craniotomy came to be associated with the old ways, cesarean section with the new. . . . “Physicians justified their actions on medical grounds and in social terms, making social value comparisons between the birthing woman and the fetus. In so doing, they revealed their own biases about women’s role as childbearers” (246). . . . “physicians claimed their right to make medical decisions using the whole range of considerations that had characterized collective decision making in the past. . . . Actually, however, the physicians relied just as heavily on their own social ideas as they gained medical authority. They made decisions on both social and medical grounds . . . 
[they superseded collective decision making with a more unitary professional one] even while they used all the traditional modes of integrating medical with nonmedical considerations” (247).

Obstetrical authority, reasons for triumph over midwifery in 1920s:

“It is certain that the relevant health conditions were not improving in those areas where the midwife was first being superseded. . . . Rather, the obstetricians triumphed because, before the public health programs became firmly established in the public mind, the obstetrician gained tremendous advantages from other sources. Immigration decreased significantly during the war and was afterwards reduced legally to a small fraction of the numbers experienced just before the war . . . concurrently the economic problem [of competition] per se was greatly reduced. This did not occur simply because of the ‘prosperity’ of the 1920s . . . rather the secular trend towards limitation of family size accelerated to include nearly the entire population. . . . With the limitation of births, it is possible that pregnancy and anticipated delivery seemed sufficiently rare to be generally equated with major operations and worthy of great expense. The other secular shift . . . was a new, general demand for improved obstetrics. . . . Also responsible was a growing public demand from women, who were becoming increasingly self-conscious about their own welfare, and who were still infected with the reforming zeal of the Progressive Era . . .” (Kobrin, 362-3).

Obstetrical nurses, in relation to midwives:

“A major reason obstetric nurses were embraced by scientific medicine while lay midwives were not is that nurses were always careful to recognize and protect the physician’s authority even when fulfilling duties that were properly defined as medical obligations. The obstetric nurses accepted a dependent role, sanctioned by the authority of scientific medicine, that midwives refused. By assuming the delivery duty without the decision-making power to determine how the birth should be managed, nurses became firmly entrenched as subordinates in the medical sphere” (Rinker, 111-112). . . . “By giving priority to science and the physician over individual attention to the patient the nurse compromised the traditional woman-to-woman connection that had made her so valuable a missionary of the gospel of good obstetrics. Once the public had been convinced to accept medical attendance at birth, the womanly attributes of the nurse were redirected to support physicians and medical procedures rather than the patient who was giving birth” (116).

Obstetrical nursing, medical model and:

“In the developing professional nursing, documenting the contributions of nurses to medical practice was a first step necessary for the acknowledgment of nurses as skilled professionals. . . expertise in the application of scientific medical knowledge was foundational for the future acceptance of the profession as a legitimate partner with medicine” (Rinker, 113). . . . [In the hospital] “Because the anesthetized, restless patient could give no warning of an impending birth, making precipitous delivery very likely, the nurse who managed such a process successfully was obviously an accomplished partner in scientific birth” (116).

Obstetrical nursing, nurses role in home deliveries, pre-1940s:

“The nurse who assisted at an operative delivery in the home faced a task of daunting proportions. A nurse’s tactful ability to secure the patient’s cooperation was as crucial as her scientific training because she was required to convert the patient’s private home into an aseptic field for scientific birth. Building a trusting relationship with her patient was an essential first step as the nurse was responsible for convincing the mother that it was necessary to scrub all floors, walls, and furniture and otherwise rearrange the family’s furnishings for birth. She must ensure that all equipment was sterile and at hand. After the environment was cleansed the nurse was expected to wash thoroughly her patient also, both internally and externally. A warm bath, soapsuds enema, and in the early years an antiseptic douche were followed by additional scrubbing of the skin from the breasts to the knees, including the perineum, the buttocks, and the thighs. Perineal shaving, begun in the early decades of the 20th century, continued, despite its demonstrated futility, well into the 1980s. Following the cleansing, the mother-to-be was placed in an elaborately prepared bed for her labor. The nurse’s responsibility then was to remain with her patient and send for the doctor when delivery was imminent” (Rinker, 110).

Operating room, transformation of after WWII:

“Our hypothesis is that by the post-Second World War period, spaces for science prevailed over earlier architectural references, such as the theatre and classroom. . . . The post-war enshrinement of the operating room as a space of experimental science can thus be understood within the framework of surgery’s broad aspirations of being a science” (Adams & Schlich, 304). . . . “the growing resemblance of operating rooms to laboratories registers the common aim of surgeons and scientists to control life processes. The ideal scientific laboratory is a place that creates conditions that allow the investigator to control life phenomena at will. Likewise, surgery is a ‘technology of control’ (308). Increase of control by design can be traced in the three types of surgical arrangements at the Royal Victoria Hospital [that] roughly correspond to the Victorian operating theater, the interwar surgical suite, and the post-war operating room (or ‘OR’)” (309). . . . “The principle of control is even more central in the third type of surgical environment, the completely isolated, mechanically illuminated, international-style surgical environment with even smaller, more laboratory-like rooms (323). . . . Rather than a performance in itself, surgery by the 1950s was less of a spectacle and more concerned with replicability, reliability, and control. Material evidence of this change is in the smallness, the exclusivity of the space, and a strict code of behavior, modeled on the scientific laboratory” (324).

Osler as teacher:

“But he encouraged us to read widely outside of medicine in addition to the reading of medical books and journals. It was through his enthusiastic recommendation that I became acquainted with Burton’s Anatomy of Melancholy, Sir Thomas Browne’s Religio Medici, Boswell’s Life of Samuel Johnson, Montaigne’s Essays, Plutarch’s Lives, and Jowett’s translation of Plato’s Dialogues. Under Dr. Osler’s influence I became an omnivorous reader and explored many different fields” (Barker, 86).

Osteopaths and M.D.s, early legal battles between:

“What constituted the practice of medicine became the primary legal point at issue in most of the state courts . . . DOs maintained that it meant the practice of administering drugs – and nothing more.” Prior to 1904, all state high courts except Alabama & Nebraska concurred with Kentucky that the term medicine should be narrowly interpreted (Gevitz, 46). “By 1920 all graduates of approved osteopathic colleges had received instruction equivalent in length (4 years) to that of MDs” (60, 78).

Osteopathy, germ theory and:

At first osteopaths interpreted germ theory from “a Still-like perspective by emphasizing individual immunity over bacterial virulence: germs could obtain an infection-producing foothold only in tissues that had already been weakened by an osteopathic lesion. And even then, it was maintained, osteopathic manipulation of the lesion would overcome the infection by stimulating the production of more antibodies” (Whorton, 159-160).

P

Pacemakers, implantable, and dominance of heart surgeons:

“Thoracic surgeons devised the standard implantable procedure and held a near-monopoly on the implantation for many years. The dominance of surgeons was inevitable given the design of the pacemaker, a device that required the surgical opening of both the abdomen and the chest.” Even after cardiologists began implanting pacemakers in catheterization labs, “surgeons retained their dominant position until the 1980s.”

Palliative care, critical care nurses, surgeons and:

“Jezewski suggested that critical care nurses are ‘culture brokers,’ bridging the complex cultural environment of the critical care unit between the physicians, patients, and families” (Buchman, 667). . . . We suggest that the surgeon is ideally trained to organize and sustain the rescue attempt. We suggest that the surgeon is poorly positioned to abort the rescue attempt when it has failed. The covenant between the surgeon and the patient as social and, at the end of life, spiritual beings demands comfort and dignity. Repeated fruitless attempts at physiological rescue delay and even deny these covenantal obligations” (671).

Paracelsus, and physician as aiding body’s “internal alchemist”:

“For Paracelsus, the internal alchemist [read: the role of DNA] also had the job of separating what was useful from what was not in the body. Since the influence of the greater world, the astra, also contained disease, the internal alchemist needed to separate what was poisonous from what was beneficial to the body. . . . So the physician needed to understand the processes by which alchemists worked. He or she needed to know what calcination and sublimation were and needed to understand distillation and fermentation, because these processes were the means by which all chemical natures were completed . . . Knowing these things allowed the physician to come to the aid of the body’s ‘internal alchemist’ . . . The two, the body and the physician, then worked together, the ‘internal alchemist’ receiving help from the physician-alchemist, who supplied, by laboratory means, what a specific part of the body required [i.e., by making and applying medicines] . . .” (B. Mora, 41).

Paracelsus, chemical (nonhumoral) basis of his “new medicine”:

“Paracelsus also rejected the notion that all things in the immediate physical world were composed of earth, air, fire and water. In his view, there were some things that preceded even them: the cosmological wombs, as he called them, of Sulphur, Salt, and Mercury. These were known as the ‘first three’ . . . The knowledge of powers – or how things knew what to do, and what they could be expected to do when applied as medicine – was, therefore, chemical at its root. This is where the new medicine staked out a new frontier. The human being was a divinely created, spiritually infused, chemical apparatus. The body worked in an alchemical way. Its parts possessed their own ‘inner alchemist’ that ‘knew’ how to separate what was useful to the body from what was not” (B. Mora, 32).

Paracelsus, Luther and:

“All the same, Paracelsus’s theological philosophy, while hardly itself scientific, enables science, where Luther’s prohibits it. Indeed, it is not easy to separate Paracelsus’s ‘science’ from his theology, and to the man himself there could never be any such division: his chemical cosmos was comprehensible only within the framework of his distinctive and idiosyncratic Christianity, for God’s work was visible everywhere in nature” (P. Ball, 105). . . . “Luther’s notion that faith was all did not sit comfortably with Paracelsus’s practical mission of healing” (118).

Paracelsus, magical remedies of:

“Paracelsus was first of all a physician, and he regarded his own medical remedies as magical. But that was, in his view, the opposite of superstitious; the doctor systematically concentrated and manipulated the invisible, magical forces and ‘virtues’ of nature. And Paracelsus sought to embed this ‘new medicine’ within a comprehensive system of (devoutly Christian) natural philosophy, from which the doctor’s art emerged naturally. In this much at least, his aim was no different from that of contemporary science: it all has to fit together” (P. Ball, loc 264).

Paris School, why antebellum Americans flocked to it rather than to London:

“The single element in American accounts of Paris yet consistently missing from accounts of London was enthusiasm about free access to medical facilities and instruction.” . . . that the official course of instruction in the hospital wards and lecture halls of Paris came free of cost to foreigners was a point “that American observers made persistently and emphatically. . . . It was access more than scientific brilliance that chiefly decided Americans on the French over the British capital” re access to cadavers and “to instruction from the living body at the bedside” (Warner I, 70-71).

Pathology, in relation to bacteriology:

“The chief preoccupation of most pathologists between World Wars I and II . . . was their relationship with bacteriology. Just as physiology spent more time and energy working out its relationship with biochemistry than it did with pathology or other basic science disciplines, so too did pathology have to deal with the problem of its daughter science, bacteriology. In fact, many investigators in the 1910s and 1920s considered bacteriology the experimental wellspring of pathology. . . . By the 1930s it was clear, especially with the explosion of viral research, that microbiology was too large to be contained within pathology. In that decade, an increasing number of medical school deans engineered the two disciplines’ fission, so that bacteriology might stand on its own, with everything that implied in terms of careers and patrons” (Maulitz, 230, 231).

Patient, viewpoint of in medical history:

“Thus the patient’s point of view remains enigmatic. On the one hand, there is a call to consider the patient in the history of medicine as an important partner, voice, subject, object or whatever you like to name it, with the ultimate aim of rewriting the history of medicine according to the patient’s view. On the other hand, we have statements that the patient has actually disappeared from the medical narrative or is merely a by-product of medicine. A full debate between these two positions – that the patient’s view can be unearthed from the sources against the statement that the patient is a construct of the medical gaze – has, to my knowledge, never taken place” (Condrau, 529). Contra Porter, patient history as “history from below” doesn’t work because: “it over-emphasizes polarity. Porter’s patients stand against doctors – there is not much room for the social or local environment. The family or, lo and behold, other occupational groups such as nurses and midwives play only a minor role in such an account. . . . most histories from below were driven by a political interest [e.g., women’s history derived from the feminist movement; black history derived from the civil rights movement]. A comparable political background for patients is not easy to unearth” (533, 534).

Patient-centered record, and emergence of the modern patient as Subject:

“The topic to be addressed here is the embodying of the patient: the production of a patient with a body whose characteristics are the effect of the interrelation of the patient with a growing number of professionals and investigative probes, and with a medical record which becomes more and more significant as a gravitational node in these interrelations” (Berg & Harterink, 14). . . . “The patient-centered record might be seen to perform the patient as a Subject: a bounded, coherent and unified self, with a history that forms a whole, and an inner core that is unique to this person and constitutive of who she or he is. . . . The patient record concretely attires the patient with many of the characteristics of liberal subjectivity (Hayles, 1999): coherence, boundedness, centeredness, unique historicity, self-determination” (Berg & Harterink, 29).

Patient-centered record, and re-historicization of the patient temporally and physiologically:

“ . . . the patient-centered record became a crucial actor in the performance of a new mode of embodiment (Berg & Bowker, 1997). . . . Partly mirroring the compartmentalization of the hospital organization, a compartmentalized yet unified body emerged here, in which organs or organ functions are each allotted a separate section in the file, or a separate portion of a preformatted form. In the interrelation of proliferating techniques, patient-centered record, doctors, nurses, and the patient, a body emerged whose dimensions do not map the everyday sites and events on the ward or in the clinic. This body extends in an anatomical/pathophysiological space and time which is traveled by blood cells and growing tumors, and which is explored through urinalysis and endoscopies” (Berg & Harterink, 23-24). . . . “the new space-time that emerged between the covers of the record was a novel phenomenon. The embodiment of patients was loosened from the workings of the day-to-day life networks that permeated the hospital walls; the space-times that doctors now traveled in were less and less measured by moral worth and social standing. Rather, studying the X-rays and other forms, doctors could enter the space of a tumor that grows, or a fracture that heals” (24). . . . [This temporality is exemplified in the graph,] “transcriptions that transform events occurring in the space-time of a hospital ward or laboratory into repetitive phenomena, occurring in and linking across a linear time that is lifted out of the ward’s time zones. . . . They establish a historical continuity in a double sense. The graph’s grid first extends reassuringly into both the past and the future: the grid’s basic structure is unbounded and completely regular. . . . In addition, a historical continuity is produced because the graph is accessible at any moment. The specific tracing can be rescrutinized at any later time; it can be compared to other processes, of other physiological entities, or of the same individual later in time” (25-26). . . . “Compared to the series of disjointed, brief narratives in a casebook, the patient-centered record affords a physiological historization of the body in myriad ways” (27). . . . “Processing the body in medical practice [now] had also become embodying the patient-as-process. . . . The patient-centered record performs a historized body; it invests the patient’s body with a linear, accessible and continuous history. . . . Recurrent tests now subjected patients to daily routines . . . Likewise, therapies began to perform the body-as-process” (28).

Peabody, Francis, appreciation of GP of:

“Dr. Peabody, unlike Dr. Cabot, felt strongly that the general practitioner was more important today than ever. “Never,” he says, “was the public in need of wise, broadly trained advisers so much as it needs them today to guide them through the complicated maze of modern medicine.” He was opposed to the kind of group practice where many different specialists see the patient but no one doctor understands or is responsible for the whole patient. . . . On every important subject on which Dr. Peabody worked – typhoid, cardiac dyspnea, etc. – he wrote at least one paper especially to present the subject to the general practitioner” (Williams, 480).

Pediatrics, historical role of women in:

“Since the late nineteenth century, women physicians had enjoyed public acceptance as physicians for children. Prior to the end of the century, most children’s hospitals were women’s and infant’s hospitals rather than centers dedicated exclusively to the care of children. Several such institutions were founded by women, including the Blackwells’ New York Infirmary for Women and Children (established in 1857), Dr. Marie Zakrzewska’s New England Hospital for Women and Children (1862), Dr. Mary Thompson’s Chicago Hospital for Women and Children (1865), the Children’s Hospital of San Francisco (1875), and the Babies Hospital of the City of NY (1887). In the early twentieth century, widespread recognition of women physicians’ child health work in voluntary societies and municipal agencies, the still shaky status of pediatrics as an academic specialty, and the unstinting support of several well-placed male academic pediatricians helped ease the way for women to make inroads in the new field” (More, 170-71).

Penicillin, Fleming’s limited laboratory use of:

“The discovery of penicillin had provided him with a most valuable reagent in his main routine occupation, the production of vaccines [in Almroth Wright’s Inoculation Dept. at St. Mary’s Hospital], and he had been quick to apply it in this way. . . . Penicillin favoured the selective culturing of the acne, influenza, and whooping cough bacilli. Fleming directed Craddock to this work in preference to his attempts with Ridley to purify penicillin. Crude penicillin was, in fact, perfectly satisfactory for the selective culture of these three organisms in the production of vaccines, and so it was itself produced for this purpose in weekly batches. . . . When it became quite obvious that penicillin was proving itself as a systemic antibacterial agent of unparalleled power, Fleming changed his stance. He reported the results of his own in vitro tests on the Oxford material, found them superior to the sulphonamides, and began to predict that, if penicillin could be synthesized, it would supersede them. And, by quoting the two predictions he had made, one in his 1929 paper, and one in the paper for the British Dental Journal, he was able to claim that he had always been aware of the potential therapeutic value of penicillin. Though it would seem that these two predictions referred to a possible local use of penicillin, many subsequent writers have interpreted them as referring to its systemic injection. This view seems to be unsupported, not only by the actual wording of Fleming’s pronouncements, but by the course of events” (Macfarlane, 254, 255).

Penicillin, Florey’s genius in pulling together the Oxford Unit:

“Florey was not only a hard worker and a clever scientist, he was a great organizer. He had the ability to recognize and to use the relevant special talents of his colleagues and assistants, and he had a very special quality of his own, the ability to inspire the confidence and enthusiasm of a group of experts so that they became a very effective team under his leadership. In 1939, with almost no money to fund such research and with the shadows of war darkening the whole of Europe, Florey decided to gamble all his resources and those of his department on penicillin, a dark horse at best, and quite possibly a non-starter” (Macfarlane, 170).

Penicillin, impact on medicine:

“Penicillin had a dramatic effect on infection and mortality resulting from bacterial infections, such as staphylococcal and puerperal sepsis, pneumococcal pneumonia, otitis media and bacterial meningitis. It also had a big impact on minor diseases, such as impetigo, which as a result is rarely seen nowadays. . . . Just as importantly, the advent of penicillin allowed for major advances to be made in surgery, allowing for organ transplantation, cardiac surgery and the efficient management of severe burns” (Wainwright, 87). When penicillin first became available for civilian use, “it was used for pneumonia, hemolytic streptococcal infections, and for staphylococcal infections.” By late 1943, it was in general civilian use: “It was such a thrill to be able to save the life of someone with meningitis or bacterial endocarditis and to bring syphilis to a halt in a few days” (Beeson, 171).

Penicillin, outcome of WWII and:

“Although the Germans were well aware that the Allies had penicillin they never succeeded in producing it on a large scale, a fact which was a major contributory factor in their final defeat. There was some debate as to the legality of preventing penicillin-producing cultures from reaching the enemy, since it could be argued that in International Law it was illegal to distinguish between friendly and enemy wounded. Despite this, the military potential of the new drug was fully recognized and, perhaps not surprisingly, cultures of Fleming’s mould were not dispatched to Nazi scientists and doctors” (Wainwright, 65).

Pernicious anemia, fragmented identity among vying specialists, 1900-1925:

The period was marked by “professional tensions and commitments concerning the hierarchy among specialties [abdominal surgeons, hematologists, gastroenterologists, neurologists] . . . The disease possessed a clinically fragmented organic identity that reinforced specialization and the search for organic causation . . . specialists brought their preferred order to a fragmented organic and therapeutic situation. How they thought about disease depended upon technologies deployed within the hospital and on the social relations of hospital practice” (Wailoo I, 161).

Pernicious anemia, liver extract (Eli Lilly), and corporate pharma in 1930s:

“A key development during this era was the way the patient’s response to Eli Lilly’s liver extract was interpreted as a kind of bioassay, a diagnostic technology in its own right retrospectively constituting ‘the disease.’ Rather than curing all of the patients ordinarily diagnosed with pernicious anemia, this disease became, by definition, that entity which was cured by a new consumable and de facto diagnostic technology – liver extract.” . . . [In his 1956 autobiography, Boston hematologist Roger Lee] “reflected on how academic medicine’s commercial ties had transformed medical writers into spokesmen for the drug industry” (Wailoo I, 141).

Personality, early twentieth century psychiatric focus on:

“Around psychopathy, then, psychiatrists began to constitute ‘personality’ in its modern form, as at once a possession, something a person ‘has’ or displays, and an object of analysis. . . . Early-twentieth-century psychiatry’s focus on the personality, adumbrated first around psychopathy, was an important means by which the discipline effected the shift from the necessarily limited psychiatry of the abnormal to a psychiatry of normality. . . . The term’s turn-of-the-century usage anticipated and facilitated the discipline’s adoption of the psychiatry of adjustment, a psychiatry applicable to everyone” (Lunbeck, 68, 69).

Philadelphia, Antebellum Southern Medical Students in:

“ . . . leading figures of the southern medical establishment . . . most of whom had been trained in Philadelphia, were not threatened by that city’s preeminence – quite the contrary” (Kilbride, 707). . . . “Philadelphia’s schools [Jefferson & Penn] were respected as the finest the nation could offer. . . . William Penn’s city . . . initiated students into a genteel culture, where they absorbed the elements of medical thought and practice and became part of a national community of professional gentlemen” (708). . . . “most informed medical men considered the city’s greatest advantage to be the availability of hands-on instruction in clinics and hospitals” (709). . . . Philadelphians’ aversion to radical reform “was rooted in the close ties between Phila. bluebloods and their southern peers, a relationship that gave the city a decidedly southern and conservative cast that reinforced its allure in the eyes of prospective southern physicians” (710, 714-15). . . . “The city was not a center of abolitionist sedition. In fact southern men were heartened by the obvious disdain with which Philadelphians viewed antislavery activists” (711, 712-13). . . . 244 southern students left Penn and Jefferson after the execution of John Brown (717); in 1858-60, 60% of Jefferson’s students were from the South; “Despite the vituperation of southern medical educators, the young men returned and were not alone in choosing a Philadelphia medical school: the next year 34% of the University of Pennsylvania’s class was southern, and 48 percent of Jefferson’s. Though the latter signaled a significant decrease from 69 percent the year before – the high point for southerners in Phila. schools – their numbers were remarkably strong given the publicity generated by their withdrawal. Phila’s conservatives had good cause to feel they had preserved the trust of the South” (719).

Physiological therapy, gap between experience of, and intended effects of, ca 1900:

“But a gap opened toward the end of the nineteenth century between a patient’s experience of medical treatment and the treatment’s intended effects. The targets sought out by newer twentieth-century therapeutics often had little connection to their immediately perceptible effects” (Crenner II, 102). . . . “What changed late in the nineteenth century was the availability of routine methods for extracting information about physiological effects: with blood counts and hemoglobinometry, chemical urinalysis, microscopy, serology, and x-rays. . . . Drug therapy increasingly sought to produce interior changes in a patient’s body that were as concrete as surgical effects” (103). . . . “Feeling better was not accurate evidence of therapeutic effect. . . . [Whereas] A blood test, like an appendix on a napkin, showed both the hidden target of the treatment and its demonstrable therapeutic effects” (105). . . . “Physiological therapeutics picked out interior targets for medical therapy in a manner that made a previous reliance on the patient’s perception of therapy seem less legitimate” (109).

Physiological therapy, patient’s attitudes toward:

“For the patients who accepted the premises, physiological treatments did offer a proof of control that could be perceived as a service in its own right. . . . Some patients not only identified an independent value in the doctor’s control over disease, but seemed capable of sharing it vicariously” (Crenner II, 118). . . . “The concrete facts of physiologic monitoring offered a shared territory lying between the patient’s unimpeachable, if inaccessible, claims about symptoms and the physician’s assumed expertise” (123).

Physiology (experimental) and medicine, relationship of:

“By the end of the [19th] century, as physiologists and physicians had developed distinct goals, their instruments reflected different requirements. This conflict over purpose had been evident to those who attempted to apply the new instrumentation to medicine even in the 1870s. In the 1880s and 1890s, physiologists had become increasingly concerned with their own questions, and the disjunction between pure and applied experimental medicine had become more apparent. By the time of [William] Porter’s push for experimental physiology, physiology had become a discipline to train the mind in exact reasoning rather than the royal road to either diagnostic precision or effective therapy. By 1900, physiology as a discipline had become removed from clinical medicine, even though physiology as an intellectual pursuit was becoming intimately tied to clinical concerns” (Borell, 310-11). . . . “the eventually successful campaign for the separation of physiology from anatomy and medicine in the nineteenth century was being followed by a reconsideration at the turn of the twentieth century of the mutual benefits enjoyed by each intersecting discipline. Physiology itself was about to be threatened by the separation from its domain of fields like biochemistry and endocrinology. Moreover, physiologists were beginning to discover that the more interesting problems yet to be solved lay in the common ground between clinical medicine, experimental physiology, and physiological chemistry” (313).

Physiology (experimental) and medicine, relationship of, in Britain re vivisection:

“Opposition to the practices of medical science reached far outside of the clinical sphere. It was also a result of the application of methods of animal experimentation that proved inseparable from laboratory research. Strong feelings toward these procedures existed both among members of the general public and within sections of the medical profession in Britain. . . . The inclusion of scientific principles into medicine was also problematic because concern existed that clinicians would begin to see their responsibility as normalizing a deviant physiological process rather than caring for a sick human being” (I. Miller, 346). . . . “Experimental physiology was therefore inherently fraught with problems of internal ethics and public accountability. . . . Accordingly, antivivisection sentiment, including that from within the medical profession, exerted a powerful inhibitory influence on the adoption of the ideals and technologies of physiological exploration, more so in the British clinic than in other countries” (347). . . . “Fears of human experimentation were certainly deeply immersed within vivisection controversies. It appeared perfectly plausible to many contemporaries that the human patient might eventually fall victim to the cruel, experimental urges of the modern medical man, particularly if the ethos of laboratory science was allowed to intrude too far into the British clinical experience” (353, cites Lederer, Subjected to Science). . . . [Re gastric analysis] Modern procedures were therefore not altogether rejected either, as an over-simplistic science versus intuition dichotomy might suggest. “It was not uncommon for British practitioners to argue that modern forms of gastric analysis should be restricted until a later date when their accuracy and usefulness was more certain. . . a postponement of the introduction of medical science until questions related to clinical value were firmly settled” (354-55). . . . “patient discomfort seems to have been the leading factor in reducing the British physician’s motivation to abandon familiar methods. It was an aspect that held the strongest cultural resonance due to its potential association with apparently needless exercises in medical experimentation and brutality” (356). . . . “enthusiasm of those who were initially eager to use gastroscopic methods was often dampened by accidental, and sometimes fatal, perforations of the gullet or stomach” (357). . . . “Fears of the cruelty and pain of the laboratory being directly transferred into the clinical setting appeared to be turning into reality within the controversy surrounding the suffragette hunger strike, which took place in British prisons from July 1909, and is likely to have contributed toward the wariness of both doctor and patient to engage with laboratory technologies” (359). . . . [Re its use in force-feeding of suffragettes], “Representations of the stomach tube as instrument of human torture therefore constituted a climax in debates regarding the extent to which technologies accrued from scientific medicine might be utilized for scientific purposes, or for torture, at the expense of questions related to the patient’s health” (371).

Placebo:

“Placebos may also be procedures, diagnostic or therapeutic endeavors that the physician ‘knows’ bring no pharmacological benefit. By most definitions a placebo must be given by a physician who believes that the drug prescribed is ‘inactive’ by pharmacologic standards. This leaves us in the awkward position of claiming that a drug which is a placebo for one physician may not be for another. . . . One person’s placebo is another’s active agent” (Spiro, 44). . . . “The physician may give a placebo as (1) a gift to relieve pain or to treat a complaint that seems to have no objective explanation; (2) a challenge to prove that the patient is wrong (‘See, if a sugar pill has helped you, it is all in your mind!’); and finally (3) ransom to get rid of a demanding patient too difficult to deal with. Placebos benefit patients, regardless of the mood in which the doctor prescribes them – and that benefit is one of their wonders” (46).

Placebo, physician as:

“When physicians listen to their patients as carefully as they now look at them, placebos will prove unnecessary because physicians will have learned again that they can help many patients through themselves. The placebo is powerless without the physician” (Spiro, 52). . . . “But words from the physician can be placebos. If words can exalt the healthy, reassurance and comfort may mobilize ‘healing’ in the sick, even if the process cannot be measured” (53).

Placebo, psychiatry, alternative medicine, and:

“Psychiatrists vigorously deny any resemblance of their craft to alternative medicine, yet sometimes the doctors who give placebos live in the same world as psychiatrists, with the same respect for the mind and its power over the body, the same respect for listening” (Spiro, 51). . . . Suggestion should come out of the therapeutic alliance of patient and physician.

Placebos, Richard Cabot on:

“The placebo, Cabot claimed, was nothing more than a lie about therapy, and it was thus unacceptable in practice. As a treatment devoid of specific effects, the placebo placed the doctor’s control over treatment at stake without its usual justification in technical knowledge about the diseased body (Crenner II, 127). . . . the use of placebos would risk separating individual therapeutic influence over patients from its material justification. If therapeutic authority was grounded in special technical knowledge about the diseased body, then the use of placebos seemed a willful misrepresentation of this authority” (128). . . . for Cabot, it [placebo] was nothing but a deception. But placebos were more potent and complex mixtures than was allowed in these critiques, as the psychodynamic interpretations of Houston and others would suggest. Isolating the placebo so cleanly from its manifold meanings and associations was not easy” (133).

Plastic Surgery, late eighteenth century advances:

use of skin flap (free flap) described; Chopart performed lip reconstruction using a neck flap (1791) → In 1818, J.C. Carpue (Britain) reconstructs nose of an army officer using a flap of skin from the forehead (the “Indian Rhinoplasty,” based on a 1794 account Carpue read in Gentleman’s Magazine) (Bennett, 152-3).

Plastic Surgery, mid-nineteenth century advances (rotation and pedicled flaps, free skin grafts), 1860s:

Baronio (Italian) experiment of free grafts from one site to another on flank of sheep; American Civil War: more than 30 reconstructive procedures on eyelids, nose, cheek, lips & palate, using rotational flaps, oral prostheses and intermaxillary wiring; Gordon Buck, at NY Hospital during Civil War, reconstructed faces of patients with war injuries, incl. Carlton Burgan, who lost his nose, cheek, and orbital floor to an overdose of calomel (Crumley, 9-10); 1869: Reverdin (Paris) reports taking small piece of epidermis (“free skin graft”) to heal patient with traumatic skin loss of forearm (Bennett, 154); also develops, via experiments with subperiosteal [beneath dense fibrous membrane covering the surface of bones] resection, his eponymous cleft palate repair (Chambers & Ray, 473) → 1874: “Thiersch described the split-skin graft, taken with a razor, of the type currently used today” (Rowe, 344) → René Le Fort reports his research (via cadaver heads) on bony displacements and patterns of fracture following injuries to middle third of facial skeleton (345-46)

Pneumonia, private physicians’ opposition to serum for:

“Despite laboratory advances in the 1930s that made pneumonia serum technically easier to use, physicians remained skeptical about the wisdom of using serum therapy in community practice. . . . Practitioners’ fears of serum therapy were ultimately rooted in the economics of medical practice. Using serum remained a tricky business not only because of the potential danger to the patient, but because of the potential injury to the physician’s reputation, should the patient react badly to serum or the expensive treatment fail. Serum therapy for pneumonia might be rational therapeutics but the prudent physician might do best to avoid it” (Marks, 67).

Polio and the “New Public Health”:

“ . . . polio epidemics appeared during an era of transition in the American public health movement. Promoters of the New Public Health urged the public to accept the germ theory, but the popular and professional association between dirt and disease lingered. . . . In their anti-polio campaigns American health officials and private physicians turned to the laboratory for therapeutic and diagnostic help. But at the peak of public hysteria they also relied on tried and true methods of disinfection and fumigation. Despite the measures’ contradictions to tenets of the New Public Health, relying on sanitary regulation was partly the result of the laboratory’s impotence in dealing with and explaining polio” (Rogers, 18, 19). “The emphasis on sanitation, then, offered both the public and the medical profession a way to define and explain the epidemic. Polio conceived as a dirt disease could resolve questions of responsibility for the spread of polio; sanitation became a means of protection and prediction. . . . Germs might be everywhere, but public health work tended to divide cases into guilty carriers [i.e., Eastern European immigrant families] and innocent victims” (46-47).

Polio vaccine, ideological divide between proponents of live and killed vaccine:

“The majority of virologists favored the live-vaccine option and regarded killed vaccine as, at best, a stopgap. The most obvious advantage of live over killed vaccine was that it was administered orally, whereas killed vaccine had to be injected, not just once but three times, at intervals, to protect against all three types of poliovirus; mass immunization would be much simpler if the necessity – and fear – of the needle could be removed. Then there was the question of the length of immunity conferred by the killed vaccine, which had yet to be established; if it lasted only one to three years before a booster injection was required, that would create problems for public health authorities. Live vaccine, by contrast, should provide long-term, if not lifelong, immunity, just as the disease did – since the whole point of it was to infect with the disease but in such a mild form as to render it harmless” (Gould, 134). “With live vaccine, not only was distribution far simpler . . . but incomplete coverage mattered less, since the vaccine-induced polio would spread of its own accord, and that would be beneficial so long as there was no danger of a reversion to virulence” (181).

Politics of Pain, 1945-1960:

From 1945 to 1960, the number of veterans receiving disability payments and pensions climbed from just over half a million to three million. “Here was the crux of the politics of pain. For liberals looking to the New Deal as a continuing model of governance, the veteran’s complaints could become a platform on which broader commitments could be built. But the topic split the Right, with the doctors and veterans squared off for an ideological fight” (Wailoo II).

Polygraph, Mackenzie’s:

“Mackenzie . . . indicated in 1909 that he never wanted his name to be linked with the polygraph . . . [which] in Mackenzie’s opinion, was a research tool, not designed to be of use to the general practitioner, who was expected to understand the significance of the heart sounds and changes in rhythm without applying the polygraph to all of his patients. . . . The polygraph provided a diagrammatic representation of the variations of the simultaneous pulsing cycle in the arteries, veins, and heart. The instrument became a symbol for twentieth-century American and European heart specialists, thus realizing Mackenzie’s worst fears for its impact on medical practice” (Davis II, 129).

Professionalism, in late nineteenth and early twentieth centuries:

complexion of the professions as a status category changed from “status professionalism” (i.e., “gentlemanly professionalism”) to “occupational professionalism.” “For the first time in the latter half of the nineteenth century, the professional stratum became definable by the now-familiar matrix of tasks involving specialized knowledge, requirements of high levels of formal training, tests for competent performance, regulation by professional associations, and licensing by the state. At the same time, many of the precapitalist legitimations of the professions continue, in a transformed way, during this period of capitalist expansion” (Brint, 30-31).

Professionalism, role of universities in:

“In America, the universities became the central arbiters of professional status during the era of collective mobility [late 19th- and early 20th-century]. Aspiring occupations acquired professional status primarily by gaining a place in the regular curricular offerings of universities. They did so by persuading the universities that the tasks involved in their work required training in a formal body of knowledge, knowledge that was, furthermore, relevant to the performance of important services for individual clients or organizational employers. . . . The key to acceptance as a profession during the period of ‘collective mobility’ was a successful claim to testable expertise on the basis of formal knowledge, combined with the successful claim to social status arising from the conviction of an occupation’s ‘respectability’ and social importance. These are rather vague criteria, and therefore it is not surprising that formal university-level training became the authoritative guide to the boundaries of the professional world. For all intents and purposes, it is the universities that define the professions” (Brint, 34, 35).

Professionalism, shift from social trustee to expert professionalism beginning in the 1960s:

“In particular the last three decades show a double movement – away from antimarket elements in professional organization and ideology, and toward a more exclusive emphasis on bonafide formal knowledge as the critical element in the constitution of professions. In so far as this is true, the professions show signs of consolidating around a narrower and more exclusive base.” “Beginning in the 1960s, social trustee professionalism fell under increasing attack for its apparent lack of correspondence to the organizational realities of professional life. At the same time, a variety of forces – from the increasing importance of income as a status element in the general population to the population explosion within the professions – favored the rise of expert professionalism to a position of greater significance” (Brint, 39).

Professionalization, 19th c., in relation to biomedical innovation/science:

“. . . attempting to gauge with exactitude the extent to which nineteenth-century physicians were, by present standards, scientific, proves an unproductive task. Patients judged the profession by the criteria of their age, an authority which was incapable of distinguishing in any absolute sense the relative scientific merit of, for example, a phrenologist or his opponent. Given this limitation, ‘valid’ science becomes irrelevant to the attainment of status, while to pursue diligently its antecedents adds little to an understanding of the past. What is of paramount importance, however, is the manner in which physicians used, not the content, but the rhetoric of science. In an analysis of the deployment of science by physicians it will become apparent that the nineteenth-century profession, though outwardly demonstrating increasing homogeneity, must be resolved into a series of distinct and frequently competing subgroups. Each of these fragments invoked a definition of biomedical knowledge designed to accord with its particular aspirations. In effect, science, mirroring the profession itself, must be seen not as a fixed entity but as a collage of discrete and malleable constituents” (Shortt, 60). . . . “The terminology of science, then, had entered the substratum of nineteenth-century British thought at a level quite divorced from its practical achievements. This subtle invasion made a significant contribution to medical professionalization: physicians portrayed themselves as exemplars of science to a public receptive to the idiom in which these claims were phrased. Under the guise of an objective explanation of natural phenomena, science became a code-word for a methodology, a designation for specialized expertise, and a vehicle for social mobility. . . . [sufficient evidence suggests that] American physicians were not unlike their British counterparts: specific subgroups within the profession fashioned their own definitions of science in their search for acceptable socioeconomic stature” (63-64).

Professionalization of psychiatry:

“Between approximately 1880 and 1917 psychiatry shed its former status as a beleaguered specialty isolated from the rest of the medical profession. The rise of medical interest in psychotherapy, the founding of the mental hygiene movement in 1908, the creation of new psychiatric societies and journals, the rapid expansion of a body of abstract knowledge – all developments in which [Adolf] Meyer was intimately involved – testify to the increasing professionalization of psychiatry during those years. The integration of psychiatry into the medical curriculum was a crucial step in this process. Thus the founding in 1908 of the Department of Psychiatry at Johns Hopkins was a decisive event, signaling as it did the recognition by the outstanding medical school of the day of the importance of psychiatry as an academic specialty” (Leys, 453).

Psychiatric disorders, and single neurotransmitters systems and DSM III:

“During the 1970s the major psychiatric disorders became defined as disorders of single neurotransmitter systems and their receptors, with depression being a catecholamine disorder, anxiety a 5HT disorder, dementia a cholinergic disorder, and schizophrenia a dopamine disorder. The evidence to support any of these proposals was never there, but this language powerfully supported psychiatry’s transition from a discipline that understood itself in dimensional terms to one that concerned itself with categorical ones. This legitimized the rise of biological psychiatry, which in turn fostered a neo-Kraepelinian approach to diagnosis and classification, as embodied in DSM-III” (Healy, 163).

Psychiatric Nursing, pre-1940s:

“Nurses in psychiatric services prior to 1940 must have been greatly confused about their roles. Were they custodians? Were they surrogate mothers? Were they informers conveying descriptions of ‘bad’ behavior to physicians? Were they companions and/or friends of patients? Were they mother-like housekeepers keeping the wards clean and in order for Daddy (the physician)? Were they jailors preventing escapes, restraining patients, punishing misdeeds? Were they habit trainers? Were they protectors of the public, family, other patients? Were they occupational therapists or hydrotherapists? The nurse role was diffuse and the phenomenological focus for practice was unclear if not elusive” (Peplau, 22).

Psychiatric Nursing, role of Norristown in promoting:

“One of the great friends and promoters of psychiatric nursing was Dr. Noyes, superintendent at Norristown State Hospital in Pennsylvania. In 1930, when I was a senior student nurse, my classmates and I spent four half-days, over a period of four weeks, at Norristown. . . . In 1962, when I gave a clinical workshop at Norristown, Dr. Noyes, then retired and in a wheelchair, attended some of the classes” (Peplau, 22).

Psychodynamic Psychiatry, dominance from 40s through the 60s:

“Psychodynamic psychiatry from the 1940s to the 1960s was virtually synonymous with the psychotherapies and, to a lesser extent, milieu and other psychosocial therapies. . . . Yet within the psychodynamic synthesis lay elements that would hasten its eventual decline. Whereas most medical specialties had committed themselves to biological and physiological research that would illuminate organ function and pathology, to the eventual development of therapies that might be evaluated by randomized clinical trials, and to an emerging hospital-based technology, psychiatrists all but abandoned research into brain pathology. Consequently, the gap between psychiatry and medicine widened precipitously. . . That psychotherapy could not be defended in strictly medical terms meant that psychiatry was vulnerable to challenges from other professional groups, particularly clinical psychology. By the close of the decade of the 1960s, psychodynamic and psychoanalytic psychiatry had begun to lose the paramount position it had enjoyed since World War II” (Grob, 213).

Psychopharmacology, criticism of benzodiazepines beginning in late 60s:

“Because the chemical properties of benzodiazepines remain unchanged, this shift can be explained only by understanding these drugs as historical artifacts whose political viability tells us much about the world that determines their worth. . . . The diethylstilbestrol (synthetic estrogen) scandal, the thalidomide tragedy, and nagging doubts about the safety of oral contraceptives had burst the bubble of confidence in pharmaceutical panacea. There were also the illicit drugs, such as marijuana and LSD, that were being widely used by a rebellious youth” (Tone, 378).

Psychopharmacology, discovery of chemical transmission and:

“The third area of scientific development that contributed to the advent of modern psychopharmacology was the discovery of chemical transmission at neural junctions in the central nervous system. . . . These [1904-1914] observations suggested a possible physiological significance of acetylcholine, particularly as a parasympathetic neurotransmitter. At the time, however, acetylcholine had not been shown to be normally present in animal tissue. Conclusive evidence that the effects of parasympathetic nerve stimulation are mediated by liberation of a chemical transmitter that resembled acetylcholine in all respects examined was provided in the early 1920s by Loewi, who showed that after stimulating the vagus nerve [which slows heart rate] to the frog heart, fluid from the heart inhibited a test heart. In 1929 the presence of naturally occurring acetylcholine in animal tissue was first reported. Conclusive evidence that sympathetic nerves released a chemical transmitter having the properties of adrenaline was provided by Cannon and Bacq in 1931. . . . By the end of the 1950s the weight of evidence in support of chemical transmission in the CNS was sufficient to convince even strong early advocates of the electrical hypothesis (Eccles, 1959). With the discovery of chemical transmission in the CNS, identification of the central neurotransmitters became a major focus of research. By 1960 numerous substances, including acetylcholine, norepinephrine, dopamine, and serotonin, were considered likely brain neurotransmitters” (Baumeister & Hawkins, 205). . . . “in each case, the development in understanding the mechanisms of action of psychotropic drugs depended critically on the discovery of central neurotransmitters. At the same time, the evolving understanding of drug mechanisms of action helped to clarify the physiological significance of neurotransmitters. For example, reserpine depletes monoamines and produces immobility in animals. This immobility is reversed by L-dopa, suggesting a physiologic function for dopamine in movement. . . . The discovery of chemical transmission at neuronal junctions in the CNS was a foundational event in the rise of psychopharmacology. . . . Without the knowledge that junctional transmission in the brain was chemical, along with the corollary idea that disturbances in neurotransmission underlie mental illness, psychopharmacology would have remained purely empirical” (206).

Psychopharmacology, marketing and creation of psychiatric disorder:

“Although there are clearly psychobiological inputs to many psychiatric disorders, we are at present in a state where companies can not only seek to find the key to the lock but can dictate a great deal of the shape of the lock to which a key must fit. . . . In purist circles there will be great resistance to the idea that one can create reality by marketing efforts. It can be argued, however, that it was ever thus. . . . Indeed in psychiatry, at least, a good case can be made that astute marketing extends right into the heart of academe itself. In psychopharmacology, the ideas that have caught on have done so because their originators have had a talent for coining a pithy title to describe a phenomenon – such as Type I and Type II schizophrenia or the amine hypothesis of depression. . . any consideration of the development of psychopharmacology makes it quite clear that good marketing of such ideas can capture a field, either before the evidence is in on an issue or even in the face of considerable contradictory evidence” (Healy, 212, 213).

Psychopharmacology, not paradigm-shift from psychoanalysis:

“Not only did the somatogenic perspective have many strong adherents during the zenith of the psychoanalytic movement, it predated the psychogenic perspective by more than a century (Baumeister & Hawkins, 201) . . . . the paradigmatic status of the psychoanalytic perspective is questionable. . . . Although the psychoanalytic perspective [1950-1970] was certainly a powerful force, it was by no means universally accepted. . . . scientific research in psychiatry, even during the heyday of psychoanalysis, was overwhelmingly somatic in nature. . . In the middle of the twentieth century two clearly defined competing schools existed in psychiatry: psychogenic and somatogenic. In the 1970s and 1980s the somatogenic perspective rose to a level of dominance indicative of a true paradigm (Healy, 1987). However, the advent of psychopharmacology was not a revolution in the Kuhnian sense. Rather it was a normal science outgrowth of the somatogenic school” (207).

Psychosomatic sensibility, demise of beginning in the 1970s:

“As the seventies unfolded, however, the ground under [George] Engel began to shift. . . Most significantly, psychiatry and internal medicine underwent dizzying and dramatic shifts. In psychiatry, the seventies were marked by the rapid decline of psychoanalysis, the rise of the neurosciences, and the general advance of an aggressive new ‘biological psychiatry.’ . . . Departments of medicine felt themselves reeling in ‘future shock’ as they struggled with unsettling changes in size, subspecialty fragmentation, geographic dispersion, and administrative balkanization. Tied up with these changes were further transformations: the displacement of physician-investigators by Ph.D.-trained biomedical scientists, the refocusing of research from human subjects and disease processes to ‘basic’ and increasingly molecular events, the alteration of study designs from selected patient cases to biostatistically rigorous clinical trials” (T. Brown II, 28). . . . “Engel was also denied the opportunity to retreat to the ‘safe haven’ of psychosomatic medicine, because that field, too, was undergoing disconcerting changes. From Engel’s point of view, the problems of psychosomatic research – already evident in the sixties – deepened in the seventies as animal ‘models,’ ‘stress’ studies, and psychoendocrine bench research took over a larger and larger portion of the field and tended to displace earlier, psychoanalytically grounded clinical studies” (29). In 1979, Engel stepped down from leadership of the Rochester Liaison program, which “had been a bridge connecting dynamic psychiatry to patient-centered internal medicine via extensive teaching and respected research. Now both fields pulled apart in their own, rapidly changing directions, and Engel was left without the scientific research legitimacy that had been crucial to his mainstream success. What primarily remained was teaching and the attempt to assure succession” (31-32).

Psychosomatic trend in Internal Medicine after WWI:

“ . . . by 1919 many American physicians had joined the ‘Barker line’ in Internal Medicine, that is, they had become interested in the role of emotion in the etiology and treatment of physical illness. The intensifying buzz from psychiatrists, plus experience with soldiers during World War I and with veterans in the immediate postwar period, served to underscore the role of emotional factors. The full extent of psychogenic disorder, already suspected by some before the war, was brought vividly home, and its occurrence was generalized to the entire civilian population. What happened after 1919? The simple answer is that interest in the relationship between emotions and disease markedly increased in the twenties and thirties” (T. Brown, 14). The psychosomatic turn came to fruition when psychoanalysis and “American corporate philanthropy” were added to this development in the 1930s.

Psychotherapeutic Approach, after WWII:

“The psychotherapeutic approach became dominant in American psychiatry after World War II. Up until then American psychiatry, in broad orientation, had been more community oriented than European psychiatry. It was more prepared to see disorders such as neurasthenia, alcoholism, and substance abuse as being within the psychiatric remit. In terms of theoretical models it was poised between the disease model of Emil Kraepelin and the psychosocial model of Adolf Meyer. World War II led to an influx of clinicians into psychiatry, whose experience in the management of combat reactions led to a decisive switch in orientation from a disease model to Meyer-ian and analytic approaches. This coincided with an expansion in the number of psychiatric departments, professorships, and residency programs. These positions all went to analysts. The expansion happened at just the time that effective somatic treatments were about to appear, setting the stage for a series of conflicts between an analytic establishment and a new breed of somaticists, conflicts that have spanned thirty years” (Healy, 222-223).

Psychotropic advertising, and male anxiety:

“When read psychoanalytically, their ‘success’ depends on a troubling slippage between transference and countertransference, conscious and unconscious. . . . beneath the surface, the message of the advertisements has little to do with women at all. Instead, a discourse insisting to be about women is instead an oversimplified discussion of the anxiety of men as doctors and doctors as men looking at women. . . . Anxiety is also the inquietude in the doctor, made uneasy by the threat that these symptoms [re women/marriage] come to represent. . . . psychotropic advertisements thus promote the message that male doctors – so construed by looking at the ads – can react to various forms of anxiety by the act of writing a prescription. Prescription writing is, in this system of not entirely chemical imbalance and rebalance, presented as a relationally gendered, if not entirely stable, form of power” (Metzl, 158-159).

Psychotropics, popular negative images of after 1980:

“ . . . popular assessment of tranquilizers came to focus on these negative images partly because these were key elements in other contemporary cultural currents: growing reaction against the ‘medicalization’ of society, growing distrust of big business and big science, and concern for the rights of women, consumers, and patients. Minor tranquilizers, with their hidden dangers, their widespread prescription (especially to women), and enormous sales figures, seemed to epitomize many painful defects in American society. . . . Much public discourse pushed aside the positive (or even neutral) assessments of tranquilizers between 1960 and 1980 simply because the dark side of prescription drugs was so much more useful rhetorically” (Speaker, 361-362, 371-372, 376). . . . “two-thirds of prescriptions written for mood-modifying drugs were for female patients. For many feminists, as for consumer advocates, tranquilizer prescription was a prime example of economic and social exploitation” (373).

Psychotropics, shift in advertising from mid-70s:

“Beginning in the mid-1970s, science replaces sentiments as the principal means by which effective treatment is applied. . . . Gone is any reference to the collaboration or alliance with the patient, the facilitation of psychotherapy, and so on. . . . Stressing the power and potency of psychotropic drugs is accompanied by a change in the psychiatrist’s image of himself, vis-à-vis his patient. Earlier advertisements suggested medication increased the potency of the personality of the psychiatrist, his own ability, his therapeutic qualities. During this period there is a dislocation of potency to the drugs themselves, which can then be prescribed by the knowledgeable psychiatrist” (Neill, 337).

PTSD, as following neither explicitly monothetic nor polythetic diagnostic rules:

“Like the polythetic classifications described in DSM-III-R, they include members that share no single overt feature, but unlike these classifications, their unity is not imposed by convention, from the outside. Rather, cases are connected by something that is intrinsic to the classification but that does not appear on the official symptom list: a submerged feature . . . In the case of PTSD, the submerged feature is the traumatic memory. The traumatic memory is not merely hidden below the surface of the DSM text; it is also polymorphous, showing itself in diverse presences (creating an equivalence between outbursts of anger and sleeplessness) and signifying absences (products of symptomatic avoidances). It is the traumatic memory’s protean character that gives cases the appearance, on the surface, of sharing only a family resemblance. This is what Ribot, Charcot, Janet, and Freud wanted to call attention to when they compared the traumatic memory to a mental parasite. Lawrence Kolb evokes a similar image when he suggests that PTSD ‘is to psychiatry as syphilis was to medicine,’ that is, that both maladies mimic other disorders through their heterogeneous symptomatology (Kolb 1989:811)” (A. Young, 119).

PTSD, difference from traumatic neurosis:

“From the discovery of traumatic neurosis at the end of the nineteenth century to the 1960s, the psychological reaction was not explained solely as the consequence of the event. The personality of the victims themselves was always questioned. During World War I, especially in the German and Austrian armies, soldiers with traumatic neurosis were systematically suspected of simulation or of weakness and lack of moral fiber (Brunner 200). . . . The pathology was not considered to be the consequence of an event outside the range of human experience, because war experiences were supposed to be part of human experience in general. Even within psychoanalysis, traumatic neurosis was viewed with what I call a ‘clinical practice of suspicion’” (Rechtman, 914). . . . [Introduction of PTSD in 1980 in DSM-III was a consequence] less of a clinical discovery than of a kind of revolution in the American mentality. In fact, the battle for women’s rights in the 1960s, and the return of thousands of Vietnam veterans, created a very specific socio-political context where the evidence of trauma became a way to access a new political condition” (914).

PTSD, invention in nineteenth century:

Conventional genealogy of traumatic memory “represent[s] the traumatic memory as a found object, a thing indifferent to history. Research into this memory and the associated pathogenic secret is portrayed as a process of discovery. I have argued for something else: the traumatic memory is a man-made object. It originates in the scientific and clinical discourse of the nineteenth century; before that time, there is unhappiness, despair, and disturbing recollections, but no traumatic memory, in the sense that we know it today” (A. Young, 141).

PTSD, second [i.e., softer, more variable] conception of from mid-1990s:

“The surveys in the mid-1990s did not support the initial assumption that ‘almost anyone’ exposed to an event outside the range of human experience developed PTSD. Some do, many others do not. The specificity of traumatic responses was also addressed. . . . In spite of empirical findings, it seems a new way has been found to preserve both PTSD and the prominent influence of the external event. . . Such notions as risk factors, vulnerability, or resilience belong to the second concept of PTSD; they could not fit the first version. . . . Issues like comorbidity, degrees of exposure, protective factors and recovery are logical consequences of this later concept of PTSD; they are not relevant in the original” (Rechtman, 915).

Public Health, persistence of environmental focus in age of bacteriology re healthy typhoid carriers:

“Health officers, because of growing numbers of healthy carriers among them, developed guidelines that encompassed both particular and general factors. Health officials needed to control disease and found ways to do this without isolating all who tested positive in the laboratory. Toward this end they judged housing conditions, sanitary facilities, and the individual’s tractability as they determined the proper handling of typhoid bacilli-carrying healthy people” (Leavitt III, 625) . . . . According to Rosenau, “The sick [with typhoid] should be isolated in homes or hospitals; the healthy carriers could be allowed to walk about on the city streets. The laboratory thus did not define the full scope of the public health problem or its solution” (626). . . . Chapin, Rosenau, and other public health officials advocated and practiced differential treatment. Public health workers sought control of carriers in ways that acknowledged their health, their place in the community, and the near impossibility of constraining them all, considerations well beyond the laboratory findings (627). . . . Early twentieth-century proponents of bacteriology could no more isolate disease from its environmental and social context than could their predecessors who were driven by the filth theory of disease. . . . But the laboratory could not provide all the answers to health officers trying to protect the public from contagious diseases. The real world impinged on their deliberations, and of necessity the health officers in the field adopted a broad approach to disease control, one that in many respects resembled earlier practices. . . . Science, when applied on the streets and in the tenements of urban America, became tainted by the values, limitations, and commitments of that world” (629).

Public Health, schism between public & private doctors in early 20th c:

“Especially significant was the break, in the early twentieth century, between doctors in private or hospital practice and those working in government. The former, represented by . . . the American Medical Association, claimed authority over the domain of patient care and vehemently opposed any initiative to provide clinical services in the public sector. The notable exception was care for expectant mothers and their infants . . . Local health departments were also left with responsibility for a few necessary but unglamorous categories of care that the medical profession disdained to provide: services for the indigent, treatment of sexually transmitted diseases, and control of once-epidemic but rapidly dwindling contagions such as tuberculosis and smallpox. . . . the growth of local public health was halting and uneven: as late as World War II, one-third of the U.S. population lived in an area that was not served by a full-time health official” (Colgrove, III, Intro).

Public Health Nursing, prestige of in late nineteenth and early twentieth centuries:

“Ironically, at a time when public-health medicine was increasingly seen as the backwater of the medical profession, public-health nurses, because their practice involved autonomy from medical and hospital control and the provision of cross-class care and education, were perceived as the elite in nursing. At the same time, public health remained the smallest of nursing’s practice fields” (Reverby, 110). This kind of nursing grew slowly because it required an endowed charity large enough to support the nurse’s entire salary, but after 1900, with increased immigration and government concern with health, “both voluntary and government-supported public-health agencies began to grow” (109).

Puerperal Insanity, late nineteenth century:

“Whether on a conscious or unconscious level, women who suffered from puerperal insanity were rebelling against the constraints of gender. The symptoms clearly indicate that rebellion. . . . women suffering from puerperal insanity were not acting like women at all. . . . Rebelling against cultural notions of ‘true womanhood’ was the one thing tying together the various symptoms of puerperal insanity. Physicians . . . made these rebellious symptoms legitimate by defining them within a medical framework . . . with a name: puerperal insanity. That naming was the result not only of the general ideas of the culture and the specific professionalization struggles of physicians, but also was related to doctors’ new relationship with women patients: as birth attendants. . . . The medicalization of pregnancy, birth and lactation provided a kind of permission for women to express rebellion and desperation in the particular symptoms of puerperal insanity. . . . Women played out their rebellion against the male physician, and doctors translated that rebellion into an acceptable medical category. But doctors also ‘cured’ the rebellion with their treatment and systematically silenced women in their case study reporting. In both cases, women were unequal partners in the construction of the disease” (Theriot III, 81-82).

Q

Quota system, in NYC:

After WWI, Harvard, Yale, Columbia, and Cornell led the establishment of an undergraduate quota system to limit the percentage of Jews, with medical schools following suit. Only NYU Medical School had no formal quota system; its average class was over 50% Jewish (it trained Albert Sabin and Jonas Salk), and many of its graduates were accepted for internships in its medical and surgical divisions at Bellevue (along with Mount Sinai) (Oshinsky, 196-198; Oshinsky II, 140).

R

Rabies, ethical dilemmas raised by:

“ . . . there is no way to be sure that even the bites of a certifiably rabid animal will lead to rabies in the victim. . . . The mortality rate of ‘declared’ or symptomatic rabies is effectively 100 percent, but the threat of death from the bites of a rabid animal is vastly less. . . . Pasteur himself estimated that only 15 to 20 percent of people bitten by rabid dogs would eventually die of rabies if they would not or could not submit to his treatment. . . . There is simply no way to be sure that the rabies vaccine is even potentially beneficial to the vaccinated individual. In this crucial respect, Pasteur’s ‘treatment’ was unlike ordinary therapeutic measures, undertaken for the immediate sake of a person already suffering from a disease. When he vaccinated asymptomatic victims of animal bites, Pasteur was subjecting them to a painful and inherently risky series of injections even as he knew that many and probably most of them would escape the disease anyhow” (Geison, 230).

Racism, in Phila during 1793 Yellow Fever epidemic:

Phila’s black population of 2,100 initially believed immune to the plague, so untrained blacks persuaded to take service in infected homes, whence they were blamed for exorbitant wages and for spreading the infection and preying on the diseased. The black community “had never been dearly cherished by the city,” yet elders of African Society (founded by Jones and Allen in 1787) decided on 5 September that blacks should do what they could to help diseased white citizens → Absalom Jones and Richard Allen calling on Mayor Clarkson to offer assistance, thereby becoming “the first Philadelphians prepared to accept the plague and overcome it.” “The African Society, intended for the relief of destitute Negroes, suddenly assumed the most onerous, the most disgusting burdens of demoralized whites. . . . Afterward, of course, the city shunned them as infected, vilified them as predatory. No conduct, however heroic, could expiate the original sin of dark skin” (Powell, 94-101).

Radiotherapy, in opposition to “experimental” or “laboratory” medicine:

“Indeed, the mode of introduction of radiotherapy into medicine was diametrically opposed to the paradigm offered by ‘experimental’ medicine. Clinical application preceded rather than followed theoretical understanding and laboratory experimentation; the basic sciences found themselves in the position of having to explain already-existing clinical observations. This is the reverse of many significant innovations of the early twentieth century – for example, the discovery of insulin. Yet despite its ‘unscientific’ foundation, radiotherapy found a permanent place in medicine. What accounts for the emergence of radiotherapy against the tide of ‘rational’ therapeutics? . . . Radiotherapy made its entry into medicine through the portal of empiricism, and its use was stimulated by its associations with scientific discovery, its affirmation of the clinicians’ role in the assessment of new therapies, and its restoration of the physician’s threatened role as healer. Most important, it showed that new knowledge in medicine could come as well from the clinic as from the laboratory. It was only after a preparatory phase of enthusiastic empiricism that it entered the laboratory, where its physical principles and biological basis could be elucidated, permitting safe and rational use in patients” (Hayter, 677, 688).

Railway spine, in nineteenth century Britain:

“It is argued here that it is more accurate to see nineteenth-century medicine as broadly united on the somatic nature of these disorders, but divided over the acceptability of an organic theorization which permitted the direct action of what were normally identified as the non-organic factors of ‘mind,’ ‘emotion,’ ‘moral,’ or ‘emotional shock’ upon the functions of the body, and which located that interaction in the brain and nervous system” (217-218). In Britain, the surgical tradition of theorizing a somatic basis of apparently nonsomatic disorders went back to Abercrombie and Brodie in the 1820s & 30s (218-19). “Differences of medical opinion arose not from a nonsomatic challenge to the established somatic interpretation but from a theorization of the disorder that sought its origin in pathological lesions rather than the organic processes proposed by The Lancet. This model of railway spine is associated most influentially with the work of the surgeon, John E. Erichsen” (219). Erichsen’s “opponent,” Herbert Page, surgeon to the London & North Western Railway, was equally an organicist, but he argued for the influence of nonorganic emotional and ideational phenomena as organic phenomena – railway spine demonstrated the action of nonorganic influences through the physical matter of the body (221).

Railway spine, in relation to functional neuroses and shell shock:

“ . . . the notion that railway spine was a functional disorder tended to emerge as a useful tool in resolving conflicts over the responsibility of railways for the victims of accidents. This contributed significantly to legitimating the concept of the functional nervous disorder. During World War I, the utility of a functional interpretation of shell shock in resolving issues of the responsibility of the state for traumatized soldiers would play a parallel and more dramatic role in legitimating the concept of the functional nervous disorder” (Brown III, 328).

Rape and pregnancy, in Middle Ages:

“The idea that some women are ‘asking for it’ clearly has a long history. Doubts about female honesty were reinforced by medical ideas about conception, which suggested that a woman had to release seed in order to become pregnant, and that this seed was only released when she experienced sexual pleasure. This led to considerable debate about whether a woman could fall pregnant as a result of rape; did a pregnancy prove that the encounter was actually consensual?” (K. Harvey, 277-278).

Re sanitarium regimen:

“In hindsight, most ingredients of the therapeutic program must be judged ineffective; none had proven worth . . . Common sense and clinical wisdom suggest that rest did help acutely ill, feverish, or exhausted patients. Rest, however, has never been scientifically tested in the absence of effective chemotherapy and, with it, has proved unnecessary” (Ott, 320).

Reflex theory, and nineteenth-century women’s diseases:

“Reflex theory explained neurasthenia, puerperal insanity, adolescent insanity, hysteria, insanity related to menopause, and other conditions as well. The reflex theory was a concept designed to account ‘scientifically’ for symptoms grouped together by patients and patients’ families. Physicians argued among themselves about the direction of the reflex action – predictably, with neurologists maintaining the nerves as the source and gynecologists asserting the primacy of the reproductive organs – but regardless of specialty or nonspecialization, nineteenth-century physicians accepted the patient-grouping of physical and psychological symptoms as codependent and fashioned disease concepts to account for patients’ complaints. . . . The reflex theory, along with the diseases it scientifically legitimized, came out of a doctor-patient-family relationship new to the nineteenth century, one based on the patient’s willingness to tell all and show all, the doctor’s ability to listen sympathetically and intervene in some way, and the patient’s and family’s acquiescence to some degree of medical control” (Theriot, 356). . . . “The medical literature indicates that physicians were sympathetic listeners who managed to assert authority and control even while they often collaborated with patients and patients’ families and friends over treatment. . . . Often the physician’s recommendations had to do with asserting authority not only over patients, but over patients’ families and friends as well” (359).

Religious Exemption Laws for Provision of Medical Care to Children:

Between 1982 and 2003, criminal charges filed in 58 cases of withholding medical care from children based on parental religious views; 55 involved fatalities (Merrick, 280). Re deaths of Christian Science children from untreated diabetes (diabetic ketoacidosis), bacterial meningitis, pneumonia, etc.: “In three of the four criminal cases that have been overturned on appeal, each dismissal was based upon a violation of the parents’ rights to due process; however, the courts have agreed that religious exemption laws are not a defense of criminal neglect” (269) . . . Christian Science’s defense under the U.S. Constitution may be a moot point. We are all afforded the right to practice or not practice the religious beliefs we choose; however, in the [Establishment Clause of the] First Amendment, the government is specifically proscribed from ‘recognizing’ any specific religion, therefore the majority of religious exemption laws may be unconstitutional. In Lemon v. Kurtzman (1971), the Supreme Court held that legislative intent must be secular and that a law must neither inhibit nor advance religion: “The spiritual exemptions fail to meet the first standard because they do not have a secular legislative interest. They were enacted specifically to provide prosecutorial relief from child abuse charges for parents who withhold medical care from their children based on the parents’ religious belief” (Merrick, 286; Hughes, 259, citing Justice Stanley Mosk, CA Chief Justice, in Walker). In Prince v. Massachusetts (1944) the U.S. Supreme Court majority ruled, “The right to practice religion freely does not include liberty to expose the community or the child to communicable disease or the latter to ill health or death. . . . parents may be free to become martyrs themselves. But it does not follow they are free . . . to make martyrs of their children before they have reached the age of full and legal [maturity]” (Hickey & Lyckholm, 269, 270; Merrick, 283). Per Beauchamp and Childress, “The argument can be made that a child’s life is more important than a set of beliefs or morals. These are not the child’s morals. It has often been argued, and most recently by Jeffrey Spike, who said, ‘The children are being raised in that community, yes, and by true believers, yes, but they deserve a chance to survive until they can judge for themselves whether to adhere to those beliefs’” (272). “Regardless of an adult’s theological convictions, however, he should not be free to withhold the healing power of medicine from a child because it is not in the child’s best interest to do so. The Christian Science Churches in Canada and Great Britain have accommodated to this policy. The Jehovah’s Witnesses in the US have accommodated to blood transfusions for their children. Members of churches that eschew conventional medical care for their children should also be required to accommodate. To do otherwise constitutes discrimination against their children as a class and perpetuates inequities in the healthcare system. . . . The goal of public policy should be to protect both the health of vulnerable children and the public at large. The spiritual exemptions create a legal landscape of confusion and contradiction. They create a burden for sick kids who could be healed through medicine, a burden for parents who have no direction about when withholding medical care becomes potentially criminal behavior and a burden for third parties who are at risk of becoming ill with vaccine-preventable diseases” (Merrick, 297-98).
With James Dwyer, she advocates a class action lawsuit filed in federal court on behalf of children from whom medical care might be withheld as a result of the exemptions. Despite constraints, this approach “has the greatest likelihood for success” (298).

Reproductive Genetic Technologies, perspectives on suffering and:

Three groups in U.S.: the first group, dominated by nonreligious and liberal religious respondents, espouses “the dominant public discourse about suffering,” i.e., it is not to be tolerated but conquered through technology. The second group, largely liberal Protestants and Jews, “seems to understand why there might be a debate but clearly falls on the side of relieving suffering through RGTs . . .” These religious groups “do not have the strong counternarrative identified by theologians. . . . [The third group of] largely conservative Protestants (evangelicals, fundamentalists, and Pentecostals) tends to see a legitimate debate about whether we should end suffering through RGTs. . . many come down on the side of not relieving suffering through RGTs at all” for three reasons: (1) they do not want to eliminate suffering by ending the life of the sufferer; (2) they do not want to imply that those who have genetic qualities that lead to suffering are less valuable to society; (3) they see a purpose in suffering (J. H. Evans, 1068-1070 & passim). . . . Re RGTs, “despite attempts of religious leaders to claim that all people have equal value, most of the respondents who find meaning in suffering ultimately feel compelled to justify the utility to others of the sufferer (e.g., the pedagogical function of suffering for others), because in this country we value people ultimately in terms of their utility” (1072).

Reproductive Physiology, in the 1910s:

“Despite the prolonged attempts to separate physiology from anatomy during the nineteenth century, understanding of the reproductive process required the insights and techniques of both disciplines, as Marshall’s text clearly showed. Reproductive physiology had been eliminated from consideration by this separation. Only with a clear analysis of the estrus and menstrual cycles, for instance, was it possible to detect changes resulting from ovariectomy. During the 1910s physiologists would gradually learn how to monitor histological changes in the uterus and vagina. When the normal phases of the estrus cycle were determined, as they were by about 1920, it would at last be possible to test the potency of sex gland extracts following deprivation of these glands by surgery” (Borell IV, 17). After about 1920, “This separation of physiological theory and moral imperative was synchronized with endocrinology’s emergence as a science. Both of these events coincided with the increasing influence of psychological and sociological theory within the professions, as well as public acceptance of birth control” (21). . . . This convergence of [theoretical and practical] goals occurred just as the gonadal hormones were being isolated. The 1920s, therefore, mark an important juncture in the congruence of interests represented by theoretical science on the one hand and expressed social needs on the other” (28).

Rockefeller Foundation, holistic biomedical philosophy of:

“In short, the RF’s vision of biomedicine was not an exemplar of the reductionistic ideal, but a holistic model in which matters of psyche were placed on a par with those of soma; the laboratory was to be intimately linked to issues that arose in the clinic and the broader society; and experimental method was itself broadly construed. The laboratory was essential to the Rockefeller enterprise – but it was understood on terms quite different from today’s vision of bioscience” (Pressman, 190). . . . “In important ways, the triumphant model of laboratory-based medical research that prevails today was built not on top of the RF’s holistic model, but over it” (192). . . . By the time Alan Gregg took over control of the RF Medical Sciences program in the 1930s, a distinctly American school of psychiatry had emerged. As articulated by Adolf Meyer, the grand architect of the model, and put into practice by Thomas Salmon, director of the Army neuropsychiatry program in World War I, a new medical specialty [psychiatry] arose that joined the urban neurologist and the asylum alienist in a common enterprise; this was defined not around a reductionistic model of mental illness but rather by a holistic model of mental disorder in which problems of psyche and soma were functionally defined in terms of a citizen’s performance in society, and thus brought together (200) . . . “In sum, at the Rockefeller Foundation, the new science of man was not reductionistic molecular biology, but psychobiology and psychosomatic medicine – the holistic study of the entire organism, based on many disciplines ranging from anatomy to psychology and anthropology” (201-202).

Roe v. Wade (1973), elimination of therapeutic abortion in wake of:

“From a feminist perspective, Roe v. Wade seems to have limited women’s right to unrestricted elective abortion by placing abortion into a therapeutic framework so that the procedure requires medical justification. In fact, it is the assault on the medical necessity of abortion that has reduced women’s access to the procedure. It has been difficult to get abortion recognized as a legitimate therapeutic procedure that should be included in regular health care coverage. Its exclusion from Medicaid coverage, for instance, is possible because abortion continues to be perceived as an elective rather than as a medical procedure – regardless of the therapeutic framework provided by Roe” (Schoen, loc 2681ff.).

S

Salpêtrière, in time of Charcot:

a society-in-miniature of ca. 5,000, with 45 major and 60 smaller buildings, “basically an immense psycho-geriatric institution providing refuge for the aged and a retreat for the mentally ill”; stable population, with annual turnover in 70s & 80s that never exceeded 30% of overall population (Micale, 715); a working institution (sewing of all sorts; shoe-making; center for hospital laundry for all Paris); “The old, monstrous conglomerations of patient, prisoner and prostitute had been sorted out, but other highly disparate groups were still housed together – the old with the young, epileptic patients with hysterics, retarded people with alcoholics” (717); fluid boundary between staff and patients; wide range of recreational activities; its own marchand de vin; “ . . . the keynotes of everyday life here were not morbid silence, forced regimentation and imposed uniformity. The sobering aspects of existence were real enough, but there was also a more dynamic side to life at the hospital. The basic material necessities were provided. The physical setting was open, clean, and pleasant. All patients could receive visitors; the majority of them could walk freely through the complex . . . To stress just the negative, manipulative aspects of institutional life [per Foucault et al.], or merely to recite a catalogue of horror stories, may be ideologically gratifying in some quarters, but it is a substantial distortion of the truth” (719-20); in final quarter of 19th c., “the Salpêtrière literally opened its gates to the city at large . . . . and people, both medical and lay, seem to have come in their [droves]” (721); contra the literature on the development of the modern hospital, the Salpêtrière of Charcot’s day represented a different reality: “During this period [from roughly 1870 to 1910], an incomparably more public conception of the hospital appeared. The old ideal of institutional isolation was replaced by that of societal integration. A new, dynamic, two-way relationship developed between the Salpêtrière and the external city: popular interest in the hospital increased greatly, as the hospital itself took on the characteristics of the new urban civilization” (722).

Schachner on “country doctors” as pioneers of ovariotomy:

“It is singular and well worthy of emphasis that, although the idea of extirpation as the only cure for ovarian tumors was suggested and discussed in various countries for a century, it remained for a relatively obscure country doctor in the backwoods of a young and newly developed country to put the idea into execution. And strange to say, what occurred in America was repeated in England, Germany, France, and Italy, namely, it was some obscure, or relatively obscure, practitioner in these countries who was the first to adopt the idea and successfully carry out the procedure. The surgeons of note in these countries, practically to a man, either ignored or actively condemned the step” (Schachner, 178; also 191, 201).

Scientific racism, black women’s sexuality and:

“But whites ascribed black women’s sexual availability not to their powerlessness but to a key tenet of scientific racism: Blacks were unable to control their powerful sexual drives, which were frequently compared to those of rutting animals” (Montgomery, 66).

Sectarianism, Medical, in 19th century America:

“Only in 1806, when Samuel Thomson began to market ‘family rights’ to his botanical system of domestic practice (patented in 1813), were some Americans recruited to a true medical sect. During the 1820s and 1830s Thomsonianism’s following and strength grew markedly. Eclecticism – a parallel system of botanical healing in which professional medical practitioners supplanted the self-help care of the early Thomsonian plan – began to flourish in the 1840s and 1850s. During the same two decades homeopathy, first introduced to America in the mid-1820s, became the country’s most prominent medical sect. Hydropathy, or the water cure, the other major sect to emerge during the antebellum period, became well entrenched from the late 1840s . . . What all sectarians did share was the proclaimed objective of overturning medical orthodoxy. It was their common goal of destroying the established medical order that gave them definition as a group.” (Warner III, 236, 237).

Sects, Fates of Different Medical:

“In general, medical revitalization movements (except in the hypothetical situation of a major social transformation) must invariably adapt or accommodate themselves to what Wallace terms “special interest groups” – in this case organized allopathic medicine, health policy decision makers, corporate and government elites, and client groups – if they are to survive and prosper. In this process of adaptation some heterodox systems (e.g., homeopathy and eclecticism) became absorbed into regular medicine; some (e.g., naturopathy) suffered steady decline; and some underwent structured subordination to regular medicine as specialized auxiliary practices (e.g., pharmacy and midwifery). . . . the evolutionary path that a successful medical revitalization movement takes depends on a variety of historical and social structural conditions, both external and internal to the movement” (Baer, 179) . . . Ironically, at the same time that osteopathy was assimilating allopathic practices, biomedicine evolved into a capital-intensive, specialized endeavor that left a niche for osteopathic physicians, with their emphasis on primary care. Chiropractic, which appeared on the scene somewhat later than osteopathy, filled the need for the nonsurgical treatment of musculoskeletal disorders historically ignored by biomedicine and also by an increasing number of osteopathic physicians” (190).

Seige cycle:

“Drug career cycles generally encompass three phases: first, an expanding use of the drugs, accompanied by high expectations; then, rising criticism and disappointment; and finally, contracting use and limited application. These phases need not be sequential; they often overlap. . . . Because it was Max Seige who first pointed out the cyclical nature of the careers of psychotropic drugs, we have named this general dynamic a ‘Seige cycle’ (Snelders, Kaplan & Pieters, 97).

Serum therapy, achievements of:

“Although serums had failed to affect the course of typhoid fever and certain other infections, they had saved the lives of many persons with tetanus and meningococcal meningitis and had scored major triumphs in the treatment of diphtheria and pneumococcal pneumonia, triumphs that raised the hopes of doctors and the public for additional successes in the future” (Dowling, 54).

Sex hormones, clinical trials of:

“Sex hormones may best be portrayed as drugs looking for diseases. . . . I suggest that clinical trials function not merely as testing procedures for the selection of drug profiles but also as major devices in bringing the relevant groups of actors together. . . . clinical trials played an important role in establishing the relationships between laboratory scientists, pharmaceutical entrepreneurs, and clinicians, with laboratory scientists as intermediaries between the pharmaceutical company and medical profession (20). . . . a conceptualization of drugs in terms of ready-made laboratory products is not adequate to understand the process of drug development. Instead, a conceptualization of drugs in terms of artifacts created in networks of different groups of actors seems more adequate. . . drugs must be considered as the embodiment of interests that become mutually defined through social networks. The production and marketing of female sex hormones as ‘scientific drugs’ matched the needs of both gynecologists and pharmaceutical entrepreneurs to establish their scientific status . . . Moreover, the gynecological clinic provided the pharmaceutical industry with an available and established clientele, with diseases that could be subjected to hormonal treatment. The promotion of female sex hormones fitted seamlessly into already existing institutional structures formulated earlier in the century as part of the professionalization of medicine and the rationalized organization of service delivery (21).

Shell shock, constructivist understanding (per Ian Hacking) of:

“If we define shell shock as everything constituted by wartime understandings of the term, we propose to write a history of something which is not, or is not just, psychological ‘trauma.’ This shell shock is related to ‘war trauma’ . . . but the two are not synonymous. Viewing shell shock simply as one name for a universal psychological reaction to warfare leaves out too much. During the war, shell shock was understood in many different ways: as a psychological reaction to war, as a type of concussion, or as a physiological response to prolonged fear. . . . The ‘historical’ definition of shell shock, on the other hand, takes the etiological ambiguity of the disorder as its defining feature, and follows through all the consequences of this change in subject matter” (Loughran, 14-15).

Shell shock, failure of pathoanatomical explanation of:

“Certain clinical features of shell shock cases, however, did not fit Mott’s narrowly somatic theory. Notably, most cases did not occur in the presence of exploding shells or carbon monoxide, and symptoms frequently emerged at some distance from the scene of combat. What this meant was that Mott’s theory could explain only a small percentage of the observed cases. These cases came to be called ‘concussional shock’” (Brown III, 330-31). . . . The success of analytic therapies contributed to legitimating the functional nervous disorder, thereby extending the boundary of medical explanation to cover the behavior of soldiers not covered by Mott’s notion of concussional shock. . . . The success of analytic treatments, then, as limited as they were, provided a rationale for transferring the management of many men who would not fight, from the hands of military justice to those of medicine” (338).

Shell shock, possible histories of:

“I would argue instead that if one story arc begins with shell shock in the First World War and leads to PTSD (for now), then there is another which leads to MTBI [mild traumatic brain injury caused by high-velocity explosions in Iraq and Afghanistan]. Both exist alongside and crisscross with several other story arcs, but these can only be plotted if shell shock is allowed to remain as amorphous and ambiguous as its wartime incarnations. . . Another example . . . can be seen if we consider physiological theories of the war neuroses. In the later years of the war, physiological theories were extremely widespread . . . In these theories, shell shock was interpreted as a physiological malfunction caused by prolonged emotion and best treated through measures such as rest, isolation, and medicinal remedies. . . . The physiology of emotion as outlined by influential figures such as the Harvard physiologist Walter B. Cannon was explicitly evolutionary . . . The evolutionary paradigm shared by British physicians, psychologists, and psychiatrists therefore enabled physiological elements to be incorporated into a range of theories and bridged apparently divergent explanations of shell shock. This reframing again suggests that there was no straightforward transition to a psychological understanding of the war neuroses: psychology, physiology, and biology were all inseparably blended in many theories” (Loughran, 16-19). . . . As historians, it is important that we capture contemporary understandings of shell shock, not least so that we are able to explore the possibilities of all its different histories as fully as possible” (25).

Shell shock, and British psychological medicine:

“The groundwork had been laid for the acceptance of psychological paradigms before shell-shock burst onto the psychiatric scene. . . . In the first 18 months of the war, the interplay of two factors shaped British medical responses to shell-shock: the framework of explanation inherited from pre-war psychological medicine [e.g., railway spine], and attempts to comprehend the particular influence of industrial warfare in producing neuroses. Both militated against the ascription of a purely physical aetiology . . . doctors did not uniformly insist on a physical origin for shell-shock in the early months of the war. Physical explanations were barely put forward at all. When they were, it was in the context of widespread acceptance of the functional nature of most cases. . . . physical theories were seen as an alternative and under-explored possibility” (Loughran II, 83). Only in 1916 were separate physical (Frederick Mott) and psychological theories of shell shock put forward: “These developments were in fact two sides of the same coin: Psychological explanations could not be defined in opposition to physical theories until the latter had been formulated in detail. The mid-point of the war did not witness the triumphal march of psychology so much as the emergence of distinct groups from the welter of medical opinion” (86). . . . After 1917, the bulk of the British medical literature focused on the emotional and psychological origins of the war neuroses [though] the assertion that symptoms sometimes resulted from invisible injury to the central nervous system continued to be made until the end of the war and beyond, although most believed that physical shell shock accounted for only a minority of cases” (87).

Shepherd on relationship of gynecological to general abdominal surgery:

“It should be recalled that many of the ‘firsts’ of abdominal operations were achieved when a confident diagnosis of ovarian cyst had been made but at operation a courageous surgeon dealt with something quite unexpected (e.g., removal of uterus on discovery of a uterine tumor, cholecystostomy [drainage of diseased gall bladder], splenectomy)” (Shepherd, 43).

Sickle-Cell Anemia, political activism and demand for pain relief in late 1960s:

“Because the malady seemed to have broad implications for understanding the African American condition, many scholars felt compelled to extend their particular analyses of the disease . . . to touch upon questions of sweeping social significance – racial identity, black inheritance, and the problem of pain. . . . by the late 1960s, such discoveries and the problem of pain relief were being challenged by calls for more immediate, tangible, therapeutic results. Scientific understanding [at the level of individual hemoglobin cells], it seems, was not enough. . . . The pathology came to highlight not only the existence of pain, but also the problems inherent in pursuing a scientific understanding without also having immediate strategies for pain relief” (Wailoo III, 162).

Sims on chloroform:

Following near death of Empress Eugénie in 1860 during surgery for vesico-vaginal fistula, Sims called the drug “delicious and dangerous” and wrote: “I am done with chloroform, and will never again operate on any patient under its influence, and believe it ought to be banished from use” (in Simpson, 244).

Simpson as publishing success:

Simpson’s Obstetric Memoirs and Contributions (1855) sold 10,000 copies in U.S. within two months of publication (Simpson, 217)

Smallpox inoculation, American independence and:

“Overcoming the dreaded disease was broadly understood to be a collective struggle and a crucial obstacle to be overcome as the war for independence stretched across the thirteen states before the victory at Yorktown in 1781. . . . Washington’s order seemed to confirm that their government – whether local, state, or national – should protect public health by providing broad access to inoculation” (Wehrman, 220).

Smallpox inoculation, Colonial Americans’ specious claim to have discovered:

“The discovery and implementation of smallpox inoculation from a folk practice to the medical triumph of the eighteenth century was certainly a global and transatlantic process, but Americans in the 1750s and 1770s cobbled together a shared history about the discovery of mankind’s greatest medical procedure, turning it into an all-American cure. . . . Americans used their claim to have invented inoculation to celebrate American achievement and ultimately to rationalize a revolution” (Wehrman, 126).

Smallpox vaccination, slavery and:

Slaves in American South were not immunized against smallpox “because this might leave marks on their bodies that could discourage buyers” when brought to auction. Similarly, greying beards and hair of aging slaves were “blacked” on the auction block to convey youthful fitness for labor (Montgomery, 64-65).

Social Work, feminist critique of early leaders, flawed nature of:

“ . . . there would appear to be something ahistorical and unrealistic in their implied expectation that historical actors can dramatically transcend the ideological and material forces of their own era. . . . Social work leaders, women and men alike – in family services, the settlement movement, and public administration – continued to act from the premise that women best served themselves, their families, society, and posterity by remaining at home to care for their children. It was not an unrealistic principle for an era in which access to effective contraception was denied to most women and, indeed, was illegal in most states, and when the alternative to home management, especially for immigrant and working-class women, was working under extraordinary conditions of exploitation, low wages, and arduous hours. For such women, work outside the home meant not personal autonomy but severe exploitation in deadening jobs. Economic security, if not liberation, lay in a good husband and provider who worked steadily at fair wages. These were views shared across a broad spectrum of American life . . . Social feminists [of the period] sought protective legislation for women and children, less from motives of maternalism than from a sound assessment of social problems and their probable alleviation or resolution. Advanced social feminists, moreover, actively sought the empowerment of those women who did work outside the home for whatever reasons” (Chambers, 20-21). The female campus leaders at the women’s colleges, the “unconventional women,” were those who entered public life and took up social causes; their career paths led to the settlement house, social work, and reform politics (Horowitz, 197-198).

Social Work, professionalization via psychoanalytic theory in the 1920s:

“ . . . the social workers of the twenties increasingly saw the need to professionalize in order to legitimate themselves with respect to clients and to gain the support of state legislatures, foundations, philanthropists, and other sources of prestige, funds, and institutional power. The key . . . was an identifiable body of knowledge that focused social workers’ practical activities in ways that would be nonthreatening to those patrons. [para] The solution was psychoanalytic theory. Social work professionalized in the form of the psychiatric social worker or, more generally, the caseworker armed with psychoanalytic theory” (Ehrenreich, 60).

Social Work, professionalization, mixed implication for female clientele:

“Thus, some recent gender scholarship understands professionalism as a tentative source of power for social workers, and this power has mixed implications for its female clientele. The profession’s embrace of a scientific framework, and the respect that society afforded social workers as a result of this change, permitted this predominantly female profession to push issues affecting low-income women and children to the forefront of public concern. Yet, according to some gender scholars, professionalism simultaneously created greater distance between workers and clients while granting workers the authority to diminish clients’ representations of themselves and suppress clients’ attempts to solve their own problems” (Abrams & Curran, 434).

Social Work, re illegitimate pregnancy in 1940s:

“Aided by the popularization of psychiatry during World War II, psychoanalytic ideas held undisputed sway over social work theory on illegitimacy” (Kunzel, 148) . . . Beginning in the 1940s, social workers joined psychiatrists and psychologists in viewing out-of-wedlock pregnancy as the unmarried mother’s attempt to ease a larger unresolved psychic conflict” (149) . . . The psychiatric understanding of unmarried motherhood offered social workers a discourse of illegitimacy cast in an esoteric language appropriate to the professional, filled with medical terms and cloaked in the legitimizing mantle of science” (151) . . . Professional aspirations of social workers help explain “the rise of the neurotic unmarried mother. . . . Psychiatric explanations gave social workers a way to comprehend the illicit sexual behavior of young white women of the middle class as something other than willful promiscuity” (153). “Yet, psychoanalytic concepts seem rarely to have found their way from conference programs to case practice” (Kunzel, 148). . . . “The wartime and postwar years witnessed the construction of white out-of-wedlock pregnancy as a symptom of individual pathology and the simultaneous reconceptualization of black illegitimacy as a symptom of cultural pathology. . . . Mounting fears over black illegitimacy expressed larger anxieties about race relations that crystallized and intensified during and after the war. . . . the wartime race riots of the summer of 1943 (Kunzel, 163, 164). . . . The new constructions of white and black illegitimacy – making the white unmarried mother a subject of psychiatric analysis and the black unmarried mother a subject of national social policy – ultimately worked to undermine social workers’ claim to expertise. By the 1950s, social workers found themselves on the lowest rung of a professional ladder that they had helped construct, jockeying for position with professionals whose status was much more secure than their own. Authority in the field of illegitimacy passed from social workers, most of whom were women, to psychiatrists and policymakers, most of whom were men” (169) . . . professionalization promised a gender-neutrality it did not deliver. By the 1950s, the rules of professional hierarchy, which social workers had used to gain status over evangelical women, now dictated social workers’ subordination to the new experts” (170).

Social Work, “Freudian” response to backlash against ADC after WWII:

“The profession’s portrayal of welfare recipients as victims of psychologically abusive pasts stood in contrast to a hostile popular and legislative discourse that cast ADC recipients as unscrupulous chiselers and immoral cheats” (Curran, 366). . . . In line with the profession’s psychiatric outlook, many postwar social work scholars claimed that psychopathology plagued ADC recipients. . . . Psychiatric disorders topped the list of psychosocial issues faced by ‘multiproblem’ families, the term postwar social workers coined to describe long-term ADC users (372). . . . social workers’ psychodynamic explanations for ADC use allowed professionals to portray ADC recipients as victims of their own inner worlds. . . . These psychological accounts simultaneously bolstered social work’s claim to expertise in the welfare arena. . . . Psychiatrically minded social workers further displayed their welfare-related expertise by sculpting a specific psychological model of poverty” (373, 374). However, whereas unwed middle-class mothers typically suffered from intrapsychic conflict (neurosis), ADC recipients suffered from character disorders (374-77). . . . The solution to increasing ADC rolls, social workers claimed, was professional casework, understood psychodynamically, i.e., the casework relationship as an “emotionally corrective experience”; casework as parent; etc.; advocacy of integration of professional casework services into the ADC program, in which “a psychological viewpoint underscored the profession’s policy agenda” as they sought federal funding for casework services in ADC programs (378). The 1956 & 1962 Social Security amendments provided federal matching grants for social services, including casework counseling, but the funds were never fully appropriated (379-380).

Spanish flu (pandemic influenza of 1918-1919), characteristics of virus:

“The characteristic of the influenza virus that makes it so dangerous and gives rise to epidemic after epidemic is its extreme mutability. It perpetually is changing the nature of its outer surface, which antibodies, the body’s most important defense system, must zero in on to be effective. The body’s defenses against flu are always becoming obsolescent, and periodically become obsolete, which means world-wide pandemic” (Crosby, chap 2).

Spanish flu (pandemic influenza), impact on AEF from September through November, 1918:

“influenza seriously depleted troop levels, consumed scarce transportation and medical resources, probably contributed to the problem of stragglers during the Meuse-Argonne campaign, damaged troop morale, and made many people very sick and weak for a long time.” Unlike American training camps, which recorded all soldiers who reported ill, the AEF in France recorded only those hospitalized, and many soldiers “went to great lengths to avoid going to army hospitals . . . underreporting hid the severity of the situation from the AEF commanders” (Byerly II, 111-112).

Specialization, and general practice in 1920s:

In 1920, the AMA Council on Medical Education and Hospitals organized 15 specialist committees to develop undergraduate medical curricula and programs to mark specialist skills: “The plight of general practice and its relationships with specialties raised other definitional problems for internal medicine in the 1920s. General practice was becoming a field defined functionally by exclusion and educationally by inferiority. Without a separate committee of investigation in the AMA reports in the early 1920s, general practice could have been defined as something left over after each special field had carved out its domain; how it was to be defined in terms of education was moot. University-based physicians tended to see general practice as a relatively comprehensive medical education that preceded specialization by a select few; in short, they saw general practice as a preliminary, inferior, or lesser brand of education, with specialization representing the intellectual elite” (Stevens IV, 596).

Specialization, and research, sickle cell anemia’s contribution to in 1950s:

“Specialists in pediatrics, radiology, cardiology, and ophthalmology also saw people with the malady as important research opportunities. These sufferers became valued, at least in part, because of the way they legitimated research agendas. Sickle cell patients could be studied by geneticists, pediatricians, hematologists, and pathologists as they struggled to develop the insights of molecular biology in their own fields” (Wailoo II, 116-117). . . . in venturing into the Diggs-Kraus [sickle cell] clinic [at Univ. of Tennessee], patients also unknowingly entered a new economic system. Their pain, their infections, their complaints, and the other characteristics of their disorder were becoming increasingly useful – in raising community consciousness, in mobilizing resources, in building institutions, and in creating research programs” (127-128).

Specialization, divorce between educational and organizational implications:

“Why was there the divorce within the medical profession between the organizational and the educational implications of specialized medicine? Such a divorce there certainly was by the 1940s. The specialties had come of age educationally, as the burgeoning process of residencies and specialist certification testified. Yet specialization was still being discussed as if new forms of organization and financing were irrelevant. The identification of national health insurance with other New Deal welfare programs was clearly one prevailing fact; health care financing moved into the congressional limelight, while specialist regulation did not. Professional self-interest was another factor. . . More broadly, there was the question of a profession’s responsibility for self-regulation. It was not clear in the growing confusion of health services of the 1930s and 1940s how far professional dicta should also govern the way in which medical practice is organized; this was to be an increasingly important issue in the 1950s and 1960s (Stevens, 288-89).

Specialization, fast track during WWII:

“The period of [postgraduate] medical training was shortened by what they called the nine-nine-nine program, nine months internship, nine months assistant residency, and nine months of senior residency. At the end of 27 months you were eligible for military service [as a specialist]” (Beeson, 172).

Speculum, appeal to doctors:

“Part of the attraction of the new technology, however, was that the physician could greatly reduce tactile contact with the patients. One of the inventors of the speculum, gynecologist James Marion Sims, asserted that a significant component of his motivation for experimenting with the new technology was simple distaste: ‘If there was anything I hated,’ he wrote in 1884, ‘it was investigating the organs of the female pelvis’” (Maines, 58-59).

Sphygmomanometry v. finger palpation re blood pressure:

Sphygmomanometric results were given in numerical form instead of general, descriptive terms. Proponents looked with disdain on the ‘vague and varying phrases and similes with which each clinician attempts to express his palpation impression.’ The number that the sphygmomanometer generated could be graphed and correlated with other findings like pulse rate, temperature, and symptomatology, thus making patterns visible that would be difficult or impossible to discern from subjective assessments. Such graphs also served to make diagnosis more uniform and therapeutics more rational. Blood pressure graphs could be transmitted from physician to physician and therefore served as useful teaching and consulting tools. Sphygmomanometer results, then, not only gave specific, numerical values, but they also provided a template with which to organize clinical data so that hourly and daily trends in patient condition could be examined. The finger simply could not do this (Evans, 797-98).

Splenic anemia (1900-1930s):

“The encounter between turn-of-the-century surgeons and the spleen provides us, then, with a glimpse of how these specialists used techniques and technology to construct a new identity for an unknown organ and a new disease and to fashion a new identity for themselves” (48). . . . The construction of splenic anemia depended in part on the use of hematological tools by surgeons, and its movement through time depended upon evolving debates about the proper use and interpretation of such tools” (51). . . . By embracing hematological observation and statistical analysis, surgeons self-consciously fashioned themselves as surgeon-scientists. They used these tools to show that they could regulate themselves, legitimate their practice, and put splenectomy on a more rational basis. Careful blood analysis separated the overzealous craftsman from the scientific surgeon, according to Warbasse” (55). . . . For this generation of practitioners, technologies of blood counting using hemacytometers [they took blood counts before and after splenectomy to show that the operation produced a surge in the body’s red blood cell production] and techniques of abdominal surgery legitimized this leading sector of scientific medicine” (57). From 1908 to 1928, Mayo Clinic surgeons performed some 500 splenectomies for an expanding range of diseases: “Proof of the existence of splenic anemia depended upon the availability of patients for splenic surgery, surgical autonomy in diagnosis and therapy, and a favorable increase in the blood count and the patient’s health after the splenectomy. . . . [William] Mayo knew that splenic anemia could exist only if surgeons remained free to engage in splenectomy and blood analysis” (61). . . . one eventual by-product of the disciplining of the surgeon [by 1915] was the decline of splenic anemia as a legitimate disease. As surgeons were brought into a more corporate and bureaucratic practice, they released their grasp on the spleen, handed over the practice to laboratory-based pathologists and a growing number of hematological specialists, and thereby abandoned one of the disease concepts that had helped to endorse their modern identity (68) . . . In truth, splenic anemia was an indefensible outpost – defended only by surgical autonomy and by the rhetoric of postoperative hematological and statistical evaluation” (69).

Standardization in public sphere, growth of:

“At the turn of the twentieth century, public health expanded to include endemic diseases like tuberculosis (refs) and sexually transmitted diseases (refs). Large programs were created to prevent and treat these diseases, and each issued administrative imperatives for more standardized practices of classification, prevention, diagnosis, and, increasingly as we shall see, treatment. . . During this [interwar] period, the public domain of medicine also reached into therapeutics. With the development of effective therapies like Salvarsan for syphilis and radium therapy for cancer, public health programs now frequently included a therapeutic component. . . . Like the programs to prevent TB, cancer-care programs deployed new technologies that were complex and sometimes dangerous, notably the use of x-rays and radium, and that required guidelines and protocols for their safe and effective use (refs)” (Weisz et al., 697, 698). . . . In 1931, the American College of Surgeons published two sets of guidelines, one for organizing cancer services in hospitals, the other a manual of fracture care. In 1938, the American Academy of Pediatrics produced practice guidelines for immunizations of children: “A number of features characterized these early international and American efforts at guideline and protocol development. First, a variety of groups were involved: public health agencies at various levels, state and national governments, private associations devoted to specific diseases, and medical societies of all sorts . . . Second, although creating guidelines for practice occasionally was possible, the result frequently was the standardization of categories, measurements, and instruments to produce reliable knowledge. Third, the attempt and failure to produce effective procedural guidelines sometimes led to the standardization of design and terminology” (702). . . . The pressure for guidelines thus has many sources: the recognition of and unhappiness with the variability of competence based on training and credentials; the need for protocols for the proper functioning of new research techniques and complex therapeutic technologies and procedures; and the demands for public accountability and standardized organizational procedures generated by the inclusion of more and more health care practices under the jurisdiction of large-scale public organizations” (715-716).

Steinach, Eugen, and the “puberty-gland”:

“Steinach argued that the generative cells of the testis produced spermatozoa alone and had nothing to do with the internal secretions responsible for sexual maturation. . . The interstitial cells (or Leydig cells) had, however, survived in the grafted glands and even proliferated beyond their usual numbers. It was these cells, according to Steinach, that were responsible for the internal secretory function of the testes. . . . the interstitial cells constituted the unquestionable source of the internal secretions that induced the somatic and psychobehavioral changes of puberty; he christened them collectively the ‘puberty-gland’ (Pubertäts-drüse)” (Sengoopta, 461). According to Steinach, “hermaphroditism and homosexuality were ultimately caused by the lack of sexual differentiation in the gonads, which led to the simultaneous production of male and female secretions. . . . Sexual identity and orientation, then, were produced largely by the glands and were not inherent to a person’s psyche” (464). Steinach moved from guinea pigs to men, enlisting urologist Robert Lichtenstern to remove the testicles of homosexuals and replace them with “normal” glands in five homosexual men, in hopes of “curing” their homosexuality (465-67). Initial heterosexualization proved a “fleeting phenomenon and dissipated quite rapidly”; only Magnus Hirschfeld proved a “consistent supporter during this episode” (468). . . . “Rejuvenation came to be seen as Steinach’s greatest achievement (and, later, his greatest folly), while his ‘cure’ for homosexuality was relegated to obscurity in the company of his innovative animal experiments” (469). . . . What Steinach took away with one hand, however, he gave with the other. His research on rejuvenation . . . and his ‘cure’ of homosexuality had the same fundamental goal: the restitution of masculinity, as defined traditionally by science and society. True males were strong, energetic, active, courageous, creative, libidinous, and heterosexual. That these were glandular phenomena rather than immutable givens was, in the final analysis, no reason to cease believing in the categories of ‘male’ and ‘female’ or in the normativeness of heterosexuality” (471-72).

Sterilization, elective, in NC, re the minority of women who petitioned the state Eugenics Board for elective sterilization:

“For some, sterilization was a clear infringement of their reproductive autonomy. For others, it was a procedure very much desired. . . . Their actions were not simply, then, reactions to hardship. Rather, they grew out of women’s understanding of themselves as sexual beings and of their desire to shape their sexual, social, and economic lives. Women’s choice of elective sterilization, however, was conditioned by their lack of other alternatives. . . . women’s lack of access to decent health and contraceptive services, their poverty, their race, and their gender all significantly influenced their decisions to seek sterilization. . . . Women did not base their desire for sterilization on abstract notions of reproductive rights, although some voiced the opinion that they had a right to enjoy sexual relations without the constant threat of pregnancy” (Schoen, 137).

Sterilization, involuntary, in U.S.:

When the German eugenic sterilization program began in January 1934, the U.S. already performed sterilizations routinely in 17 states. By 1941, sterilization had been forced on 70,000-100,000 American women, 8,831 in CA alone. By 1983, when blacks were 12% of the population, 43% of women sterilized in federally funded family planning programs were African Americans (Montgomery, 280-81).

Sterilization, involuntary, minimization of eugenic arguments in Board decisions:

“No witness [before N.C. Eugenics Boards] wasted many words on the question of whether the sterilization candidates in fact had a hereditary condition. Rather than focus on eugenic theory, all based their arguments against the procedure in terms of broader economic and social concerns.” Eugenic Boards and families alike “shared the same language and understanding of the nature of eugenic sterilization cases. All knew that money and sex, rather than eugenics, lay at the heart of the matter” (Schoen, 97-98).

Sterilization, involuntary, regulation of sexuality and:

Not only parents and social workers, but “Eugenics Board members, too, valued the sterilization of sexually aggressive men. . . Such arguments rested on the assumption that the danger to society did not lie either in the attack itself or in the physical or psychological harm it caused its victims, but rather in the possibility that it might cause pregnancy. Sterilization of course offered no protection against rape, incest, or any other sexual act. By preventing pregnancy, however, it made illicit and unwanted sexuality invisible. . . . Justifying the sterilization of female victims of incest, authorities blamed girls for their sexual experiences, marked them as sexual deviants, and undermined their future sexual credibility. Sterilizing boys accused of incest, by contrast, permitted them to continue their behavior. . . . social workers and board members indeed expected sterilization to cause changes in patients’ potency and their level of pleasure” (Schoen, 102, 103).

Sterilization, race and, after WWII:

In N.C., fears of rising cost of ADC program led to “significant shift in the racial composition of those targeted for eugenic sterilization. . . . It seemed especially pressing to save funds considering the ‘prevalence of illegitimacy among the lower-class Negro population’ . . . Black women were also much more likely to be separated, divorced, or widowed, which affected the number of black women in need of ADC. The public association between ADC and black female recipients was thus particularly close [which] reinforced racist stereotypes about the hypersexual black woman – the Jezebel character” (Schoen, 108, 109).

Stethoscope, resistance to in 19th c. America:

“Therefore, multiple reasons explain the time course of the general adoption of the stethoscope by practitioners in America, which include lack of formal medical education, relative lack of clinical bedside training for those going to medical school, complexity of interpretation of auscultatory information, lack of specific treatment for diseases that might be discovered by its use, hesitancy of the physician and the patient to have an instrument placed between them, lack of continuing education opportunities for physicians after leaving medical school, and lack of time and money for physicians to obtain advanced training. A main source for learning new clinical techniques was the medical journal, but even if it mentioned auscultation, it contained little practical information on how to use the stethoscope, and without practical bedside experience with patients, recognition and interpretation of the various complex sounds could not be achieved. To learn the use of the stethoscope, there needed to be educational exposure, as well as preceptorship bedside training, but those opportunities did not exist outside of the large urban academic locations and were likely not felt to be necessary by rural physicians” (Reinhart, 77-78).

Stevens on AMA’s new conservatism after 1920:

“Seen from the point of view of the general practitioner who had been trained in a proprietary or university school at the end of the nineteenth century and who had begun practice at a time when there was little question of the profession’s monopoly of the whole health field, the physician was indeed threatened. He was threatened by increased specialization. He was threatened by the hospital as a potential center for a medical elite. He was threatened by the public’s growing interest in the standards of medical care. And all these threats were heightened whenever the government, particularly the federal government, hinted that it might lend them aid” (Stevens, 146-47).

Streptococci, hemolytic, varieties of:

“These bacteriological findings [of the 1920s] fitted in well with clinical observations, because the course and complications of scarlet fever, streptococcal sore throat, erysipelas, puerperal sepsis, and infections in various areas from which hemolytic streptococci were cultured, such as the middle ear, the meninges, and the lungs, were much alike. Doctors finally understood that a number of different types of streptococci produce these infections and that the organ that becomes involved depends mainly on how the streptococcus enters the body, by way of the upper respiratory passages, through the skin, or into the uterus of the recently delivered woman” (Dowling, 62). Hemolytic streptococci, acting in conjunction with an allergic reaction, also cause rheumatic fever and nephritis (65-68).

Subluxation, chiropractic theory of:

Believed to impinge on spinal nerves, thereby blocking the flow of the ‘innate intelligence’ (according to straights) or causing disease in some other way (according to mixers). “In fact, subluxations have never been proven to constitute a relevant entity. Critics have repeatedly pointed out that even severe nerve root compression does not cause organic disease.” Today [2008] 88% of U.S. chiropractors believe the subluxation contributes to over 60% of all visceral ailments, and 90% think chiropractic should therefore not be limited to musculoskeletal conditions (Ernst, 548).

Sulfa drugs, status in 1940:

“One form of sulfa or another was now standard therapy against pneumonia, childbed fever, erysipelas, strep blood infections, and the most common kinds of meningitis. Sulfa stopped urinary tract infections, trachoma, chancroid, mastoiditis, otitis media, and a host of less common or less dangerous strep infections. Newer sulfas worked well against gonorrhea. Strep sore throat could be treated with some success; sulfa was especially good at preventing recurrences. Sulfa drugs were increasingly helpful against staphylococcal infections and dysentery. They did not work well, however, against bacterial diseases like typhoid fever, tuberculosis, anthrax, or cholera” (Hager, 244-245).

Sullivan, Harry Stack, equivocation about military screening of homosexuals during WWII:

“Sullivan recommended rejecting homosexual men without a direct reference to their homosexuality. He seemed to be saying that examiners must reject homosexuals because it was required, but that a psychiatric examination is not at all dependable in detecting an examinee’s sexual preference. This contradicted his 1940 statement on the possibility that the screening would increase psychiatry’s ‘prestige.’ He hoped to show the usefulness of psychiatry in the national defense. When it came to the issue of homosexuality, however, he had to say that the screening itself was not dependable. He was in a double bind” (Wake, 483). Sullivan’s screening work in 1940-41 “was perhaps the first major failure in Sullivan’s professional life as a psychiatrist. . . . Not only had the screening process he advocated failed, but it also accentuated the preconception that homosexuals were unfit to serve. . . . The failure was, in part, a consequence of the gap between clinical practice and the making of public policy. Sullivan’s screening work illuminates how the tentativeness of public discussions about homosexuality circumscribed the possibility of creating non-homophobic policies during the war. What was missing from the psychiatric community before the war was a full-fledged public determination to accept or at least protect homosexuals. This was an important factor contributing to the discriminatory screening” (493).

Surgery, and the patient’s person:

“In Goffman’s opinion, the participants regard the patient-body like a sacred object, irrespective of its owner’s status. But this ‘sacredness’ of the body cannot be located in itself, nor solely in the medical perspective on it. It is rooted in its relation to the person inhabiting it: Its character is that of a loan with a temporary status as an object. The patient-body is, as it were, a ‘virtual participant’ (Hirschauer, 306). . . . Distance from the body of everyday life is also distance from disorderly affects. They shall not disturb the surgeons any more when they are making an incision – ‘with a blend of arrogance and innocence’ – into an anonymous, dumb and disfigured body. . . . the symbolic function of sterility procedures is primarily related not to the boundaries of the operating theatre but to those of the patient. So, although the blood is not free from bacteria, blood-stained instruments are treated as ‘sterile’ because ‘being clean’ is to be understood relative to the patient-body. It differentiates anything belonging to this body from foreign germs (306). . . . Sterility procedures reconstruct the injured skin as a zone-like boundary between persons. They restore this territory for somebody who cannot defend it, and thereby constitute an intensification of the ‘etiquette of touch’ described by Young. . . . the ritual dimension of surgery is due not to sacralization but to a protection against desacralization in the course of transforming patients into objects that are as ‘natural’ as possible” (307).

Surgery, as craft:

“Thus, the practice of operating appears to be a versatile craft. It resembles building or carpentry in the way bones are sawed, drilled, chiseled and screwed together; tailoring, where skin and tissue of different consistency are cut apart and sewn together; the work of sailors, when various knots are tied; and a butcher’s trade, when muscles and innards are carved up” (Hirschauer, 300). . . surgery and experimental physiology are both technologies of control, and the specific spaces they used were shaped by this fact . . . in many ways, experimental physiology is surgical in character and origin. Like surgeons, physiologists tried to reach control by active surgical intervention into living bodies. The practice of performing animal experiments actually came from surgery (Schlich II, 238) . . . . Surgery and experimental physiology have in common that they both require a controlled violation of the integrity of the body (239). . . . Just as in human surgery, antisepsis and asepsis opened up a range of activities to physiologists (238-39). . . . [In addition to control] the two fields converged also because surgeons during the nineteenth and early twentieth centuries increasingly adopted ideals from laboratory science, which, as we will see, was eventually reflected in the set-up and the design of the operating room. . . . The laboratory revolution coincided chronologically with the rise of surgery. By the end of the nineteenth century, surgery and laboratory science were both generally regarded as the epitome of progress in scientific medicine. Surgeons saw their new operations, such as tonsillectomy, appendectomy and, in particular, thyroidectomy . . . as based on science” (240) . . . In the late nineteenth century, a new generation of surgeons began to reconstruct the functions of internal organs. They were functionally oriented and looked to experimental physiology for the scientific basis of their practice. The paradigmatic technology of this switch in perspective was organ transplantation, which emerged from this confluence of surgery and laboratory science between 1880 and 1920. . . . Transplant surgery began with the Swiss surgeon Theodor Kocher in the early 1880s. . . . Tellingly, Kocher’s surgical removal of the thyroid gland in humans came to be understood as an inadvertently performed physiological experiment, a ‘vivisection humaine.’ Subsequently, physiologists as well as surgeons in almost all European countries started to perform animal experiments on thyroid removal” (241). . . . asepsis can be seen as the extension of the control achieved in the bacteriology laboratory to the operating room. With the spread of aseptic surgery, the designers of operating rooms subjected surgical spaces to the same precautions that governed a laboratory for bacterial research. In both places, these precautions served to literally control microscopic life forms and keep them out of vulnerable places. In the case of the laboratory, such a vulnerable place was the pure culture of bacteria. In the operating room, it was the surgical wound” (244). Cf. Adams & Schlich, 313: “The paradigmatic technology of this switch in perspective was organ transplantation, which emerged from this confluence of surgery and laboratory science between 1880 and 1920. With transplant surgery, surgeons turned away from a purely local and structural approach, took up experimental research methods, and started to look at the body from a systemic and functional point of view. Experimental physiology replaced pathological anatomy as a reference science.”

Surgery, reproductive theory of female insanity in nineteenth century:

“ . . . both neurologists and gynecologists believed heredity was the root cause of insanity, although both treated nervous and insane women as if their female bodies were defective. The most dramatic examples of this treatment philosophy were ‘local’ treatments and sexual surgery. If the symptoms of nervous and mental illness were unwomanly behavior and feelings, and if the causes were rooted in the female body, then the cures must produce some change in the woman patient’s reproductive organs to change the woman’s behavior. Women patients and their families and friends were vocal advocates of this line of reasoning (Theriot II, 21). . . . More common than family and friends influencing treatment, countless women patients also demanded or requested surgery to relieve their nervous symptoms. . . . Whether to relieve emotional or behavioral symptoms or to end their childbearing potential, many women sought operations. Physicians reported that these operations cured many cases of nervous and mental illness, even among women in asylums” (22).

Surgical dissection, and creation of ‘the body’:

“Dissection aims to present organs in the isolating style of an anatomic atlas. The drawings show neatly separated organs; in the patient-body this state must first be produced by isolating them with the knife. Surgeons call this ‘exposition’ or ‘making anatomy’ (Anatomie herstellen). Whereas, to a layperson, this procedure increasingly disfigures ‘the body’ – as it is known from everyday life – for the surgeon exposition creates ‘the body’ – as it is known from the anatomy book” (Hirschauer, 301; cf. Selzer, Mortal Lessons, 95). . . . “One ‘leafs through’ the three-dimensional patient-body to find the two-dimensional structure of anatomical pictures. Section after section, the proper anatomy of the ideal body is engraved on these layers. . . . The anatomical body is the result of a sculptured practice. [para] “So the relationship between the patient-body and anatomical body is reflexive: they are models for one another. On the one hand, the concrete body is a didactic model, from which the abstract body is learnt; on the other hand, the anatomical body is an aesthetic model, which is an example to the patient-body and its treatment [PS: which guides the surgeon in understanding the patient-body before him and implementing a surgical form of treatment]. Focused on the issue of representation, this means: in surgery ‘transference happens not only between pictures, but between pictures and ‘natural objects.’ This results in a reflexivity of similitude: ‘there is no ultimate reference point. Representational relations are symmetric and can be reversed and extended without limit. In this harmonious circularity, surgical practice takes place. It explores a body which it must already presuppose and take for granted, and it standardizes itself by means of its decontextualized anatomical knowledge” (312, quoting Lynch & Woolgar, “Sociological Orientations to Representational Practice in Science,” Human Studies, 11 [1988], who are referring to Foucault’s This is not a Pipe [1982]).

Sydenham, Thomas, Contributions of:

“What is the lasting value of Sydenham’s writings? He rid the Pharmacopoeia of many dangerous and obnoxious remedies . . . His pioneering of quinine was of immense benefit in fever-ridden England, and countless lives were saved by his cooling regimen in the treatment of smallpox. He exhibited iron, either in the form of steel filings or as a syrup, in the treatment of hysteria and chlorosis; and in a pain-racked age he wisely realized the value of opium which he gave in the form of liquid laudanum, replacing the solid pill commonly used in his day. . . Sydenham often dispensed with drugs altogether, and prescribed such simple remedies as fresh air, exercise, a moderate diet and the purgative waters of Barnet or Lewisham. But his reputation does not depend so much upon the many sensible and effective remedies he helped to introduce, as upon the general clinical principles which guided his own practice of medicine and illustrated his writings. Sydenham’s revival of the Hippocratic method of studying the natural history of diseases by making a series of accurate and detailed observations set the clinical pattern of future progress, so aptly summarized by Locke . . .” (Dewhurst, 94-95).

Sydenham, Thomas, Therapies in mid-seventeenth century England:

“Sydenham’s therapies consisted of a carefully regulated diet, fresh air in the sick room, abundant liquids, cooling drinks for fever, iron for anemia, mercurial inunctions for syphilis, horseback riding for patients with tuberculosis to provide fresh air and exercise, his own laudanum preparation for heart ailments, powdered deer horn (a form of lime) for dysentery, and cinchona bark for malaria” (Bloch). Re disease causation, Sydenham moved away from humoral theory, with its emphasis on bleeding and purging, to understanding of disease as coming from outside the body. His observational/empirical approach, with the naturalistic classification it led to, moved away from individual (humoral) predilections for diseases (location, time of year, alignment of the stars) to individual diseases as multi-symptomatic entities, each with its own naturalistic history of stages (qua Locke), with each disease conceptually and empirically distinguishable from other diseases. It’s not that Sydenham rejected causation per se, but that he rejected it as anchorage for understanding disease and arriving at effective treatments. Rather than beginning with first principles (Aristotelian forms, the four elements, Galenic humors, etc.), he began with observation of a disease in all its singularity, charted its natural history, and then repeated such detailed observation on multiple cases, ending up with the type of low-level generalizations that best suited medicine’s curative potential. In this manner, medicine could be empirical without denying its rationalistic foundations (King, 1-6).

Syphilis, treatment of by private physicians during 1930s:

“When patients turned to private practitioners they often found themselves being treated by doctors inadequately trained to diagnose and treat venereal diseases effectively. Most physicians did not consider the sexually transmitted diseases to be a particularly interesting or status-oriented clinical specialty. According to some public health officials, syphilis was treated like the ‘illegitimate child’ of medicine, tossed between dermatology and urology depending upon the manifest symptoms. Syphilis and gonorrhea were given minimal time in the average medical school curriculum” (Brandt, 132). . . . Physicians and public health officials in the 1930s went on the attack against the moralistic precepts of social hygiene, an attack that Thomas Parran was to lead. Doctors urged that the time had come to reject euphemism, reduce moralism, and address the venereal problem on the level of science and medicine” (136).

Syphilis and professionalization of twentieth-century psychiatry:

“Syphilis offered psychiatrists and social workers a tantalizing but perfectly legitimate entrée into the domestic realm, and, as such, underwrote their conviction that in the future science would exempt no human activity, however intimate, from expert scrutiny. . . . But syphilology had helped make it possible for psychiatrists to conceptualize sex as a discrete, nonpsychological issue, and to imagine that in implementing a policelike investigatory strategy they were acting in the name of science, not morality. Syphilology brought psychiatry into medicine by means of the disease paradigm to which it conformed. Just as important, however, it gave psychiatrists license to bring sex into psychiatry” (Lunbeck, 52, 53-54).

T

Tait’s contributions to abdominal surgery:

“While his work on gallbladder disease was perhaps the most important, it must be emphasized that he exerted his greatest influence by advancing three general ideas. First of all he insisted on early diagnosis and urgent operation for acute abdominal conditions, whether associated with bleeding, obstruction, or infection. Secondly, he overcame the reluctance of his contemporaries to operate in the presence of acute peritonitis. Thirdly, he preached the value of exploratory laparotomy in chronic conditions in which diagnosis was uncertain. Today these ideas are commonplace and accepted without argument; in Tait’s time they were revolutionary” (Shepherd, 73).

Tay-Sachs, Dor Yeshorim and:

Solution of Orthodox Jews opposed to prenatal screening and abortion: screening of adolescents, who then knew whether or not they could marry a given prospective partner. For them, it was a “subtle adjustment(s) to the rituals associated with adolescence” that “could indeed support and strengthen the institution of arranged marriage among Orthodox Jews” (Wailoo & Pemberton, 40). It was “a kind of religiously sanctioned genetic matchmaking service” that would eliminate Tay-Sachs from the Orthodox community in accordance with strict Jewish law (43).

TB, as “master disease” of early twentieth century:

“ . . . not because it was on the rise but because it served other compelling agendas. First, the TB crusade helped to popularize the legitimacy of the new germ theory of disease . . . Unlike sexually transmitted diseases, which were on the rise . . . the TB problem could be openly discussed in public without offending social mores. Finally, TB served well as a vehicle for pushing a wide range of societal reforms aimed at easing the dislocations of urbanization and industrialization” (Tomes II, 631).

TB, protean nature of:

“Tuberculosis was not a single disease caused by a common germ. This strange disease was the leviathan that dominated every medical discipline and department throughout the world. In every field of medicine, whole subcultures had evolved to fight against it, each in its way a distinct theatre of war. For the ear, nose and throat surgeons it was the disease that infested the throat and the voice box, slowly infiltrating, ulcerating and destroying the delicate membranes of speech, making it an agony even to whisper. For the abdominal surgeon it was the disease that so often followed the throat infection, whether as a primary invader that ulcerated, scarred and destroyed the bowel, ruining its digestive ability and leading to progressive emaciation, or, secondary to the well-known lung disease, as the late complication of unrelievable diarrhea and vomiting, so often the harbinger of a fatal outcome. For the gynecologist it was the insidious infiltration that destroyed the fallopian tubes and ovaries, engulfing the fine tissues of motherhood in a deformed and distorted mass of inflammation and thick cheesy pus, robbing the young woman, should she survive, of her opportunity for a family. For the orthopedic surgeon it was the common cause of suppurating arthritis of childhood, the slow incurable torment that destroyed the hip joints, the shoulders, the elbows, knees, invading the long bones of the legs and arms, to cripple children, or to collapse the tender spines of millions of young people into the characteristic hunchbacks. “From meningitis to infections within the eye or the ear, from lupus of the skin to the manifold horrors of consumption in the lungs, tuberculosis was the protean and omnipresent menace. Not a country, a race, or a people in the world escaped its thrall” (F. Ryan, 308-309).

TB, role of sanitariums in regulating:

“If sanitariums were ineffective, and perhaps even unnecessary, why do they figure so prominently in American tuberculosis histories and resonate so strongly in family memories? One of the strongest justifications for sanitariums was that they met the basic psychological need of a community to isolate its sick and diseased. The sanitarium was a modern and humane version of the pesthouse. . . . After 1905 few people wanted to be around a consumptive if it was not absolutely necessary, and a voluntary system of removal accorded with prevailing mores: if sanitariums could be perceived to be both morally and aesthetically attractive, then the infected would want to go there as a civic and familial duty, and the community could feel it had done a service to the sick” (Ott, 150). Decisions to enter sanitariums were based on money, family relationships, and chances of recovery (Bates, 258-61).

TB, state policies ca. 1900:

“The ultimate aim of state policies and programs was surveillance that could generate regulations and control tuberculosis. Mapping, reporting, and restrictions upon various behaviors characterized state management of the disease. If the essence of individual prevention and recovery was self-control, the foundation of state activity was legislative control. The fledgling bureaucratic state, with the guidance of health and social work professionals, reduced the complex social and biological issues of tuberculosis to a few simple points: public spitting and free sputum examinations; unsanitary housing; and compulsory reporting of ‘cases.’ . . . tuberculosis ended up looking more like social control than disease prevention” (Ott, 133-34).

Therapeutic “success,” values and:

“Just as the notion of therapeutic ‘success’ cannot be defined without regard for people’s underlying preferences and ambitions, deciding what therapeutic course to pursue requires consideration of these same values and goals. When we have thoughtfully asked what, over the span of our lives, would really constitute ‘success,’ we begin to escape the seductive entrapments of wondrous technology, with its monomaniacal focus on the prolongation of life, and start the process of answering how to steer our course through the myriad options that medical care poses. . . . For chronically ill patients, reconsidering what hopes, beliefs, and values underwrite the goals of care – and then reconsidering these goals themselves – can lead to breakthroughs in realizing that certain medical tests or treatments make no sense . . . while other treatment options may be crucial in order to accomplish the clarified goals of, say, enhanced quality of life” (Feudtner, 215, 216-17).

Therapeutics, ca. 1800:

“The body could be seen, that is – as in some ways it had been since classical antiquity – as a kind of stewpot, or chemico-vital reaction, proceeding calmly only if all its elements remained appropriately balanced. . . . Drugs had to be seen as adjusting the body’s internal equilibrium; in addition, the drug’s action had, if possible, to alter these visible products of the body’s otherwise inscrutable internal state. . . . The effectiveness of the system hinged to a significant extent on the fact that all the weapons in the physician’s normal armamentarium worked: ‘worked,’ that is, by providing visible and predictable physiological effects; purges purged; emetics vomited; opium soothed pain and moderated diarrhea. Bleeding, too, seemed obviously to alter the body’s internal balance. . . . This same explanatory framework illuminates as well the extraordinary vogue of mercury in early nineteenth-century therapeutics. If employed for a sufficient length of time and in sufficient quantity, mercury induced a series of progressively severe and cumulative physiological effects: first, diarrhea, and ultimately, full-blown symptoms of mercury poisoning. . . . Recovery must, of course, often have coincided with the administration of a particular drug and thus provided an inevitable post hoc endorsement of its effectiveness. A physician could typically describe a case of pleurisy as having been ‘suddenly relieved by profuse perspiration’ elicited by the camphor he had prescribed. Fevers seemed, in fact, often to be healed by the purging effects of mercury or antimony” (Rosenberg, 6-9).

Therapeutics, second third of nineteenth century:

“ . . . older modes of therapeutics did not die, but . . . were employed less routinely, and drugs were used in generally smaller doses. Dosage levels decreased markedly in the second third of the century, and bleeding, especially, sank into disuse. . . . Mercury, on the other hand, still figured in the practice of most physicians; even infants and small children endured the discomfort of mercury poisoning until well after the Civil War. Purges were still administered routinely in spring and fall to facilitate the body’s adjustment to the changing seasons. The blisters and excoriated running sores so familiar to physicians and patients at the beginning of the century were gradually replaced by mustard plasters or turpentine applications, but the ancient concept of counterirritation still rationalized their use. . . . It seemed to many physicians almost criminal to ignore their responsibility to regulate the secretions, even in ailments whose natural course was toward either death or recovery. Hence the continued vogue of cathartics and diuretics (though emetics, like bleeding, faded in popularity as the century progressed)” (Rosenberg, 18).

Therapeutics, transition in 1870s & 1880s:

“It was in the late 1860s and the 1870s that medical therapeutics in America in some respects began to look more like the practice of the early twentieth century than that of the early nineteenth. This gradual transformation involved no sharp break with past practices. Such new remedies as chloral hydrate, the bromides, and the salicylates came into common use, as did hypodermic injections and a revived enthusiasm for the therapeutic uses of electricity, but new drugs and techniques seldom fully replaced older ones. . . . A portion of the physician’s exhibition of activity in controlling disease was transferred from therapeutic intervention to a self-consciously active vigilance. Treatment was gradually redirected toward altering specific physiological processes rather than changing the general equilibrium of the body. . . . In general, physicians used fewer drugs but aimed them more narrowly at manipulating specific physiological processes. . . during the 1870s and 1880s many physicians became preoccupied with temperature. . . Many physicians became absorbed with normalizing high temperature. . . The therapies commonly employed in the 1870s and 1880s gave physicians more sensitive control of many physiological processes than they had possessed early in the century. Salicylic acid and subcutaneous injections of morphia mitigated pain; chloral hydrate and the bromides produced sleep; and aconite slowed the pulse. Furthermore, the physician could monitor bodily processes and identify deviations from the norm not only by measurements of temperature, respiration, and pulse and by such techniques as sphygmographic tracings, but also by quantifying, for example, chemical constituents of urine” (Warner II, 100-102).

Tonsillectomy, rise of after 1910:

“The elevated status of surgery thus contributed toward the heightened interest in tonsillectomy. Perhaps the most important development in promoting the widespread expansion of tonsillectomy was the appearance of a new paradigm, namely, focal infection theory. . . . Increasingly, clinicians began to argue that circumscribed and confined infections could lead to systemic disease in any part of the body. Edward C. Rosenow of the Mayo Clinic . . . was one of the most important advocates of the belief that many diseases were the result of the dissemination of pathogens through the bloodstream from a local focus.” His experimental work was echoed by Frank Billings & Henry Cotton (Grob II, 387-88). After 1930, focal infection theory was abandoned as grounds for tonsillectomy, and its abandonment was accompanied by growing skepticism about the efficacy of the procedure in general (399-401). “Parental enthusiasm for tonsillectomy was largely driven by medical advice that began in the 1920s and persisted among practitioners in subsequent decades despite growing conservatism among elite practitioners. . . . The rapid spread of voluntary health insurance plans after World War II also played a role” (406). . . . “The manner in which medical practice was structured clearly assisted in the persistence of tonsillectomy. Tonsillectomy rates were in part a function of geographical location. There were much higher rates in urban than rural areas, largely because the latter tended to have fewer physicians and hospital facilities. The experience of World War II also hastened medical specialization, and increasingly these specialties . . . tended to be surgical in nature” (412-13). Slow and uneven decline of tonsillectomy began in 1965. Why so slow? “Clearly, the absence of an agreed-upon method to determine efficacy played a role in the persistence of tonsillectomy. . . . Hence physicians, whatever their specialty, could find data that purportedly supported the manner in which they practiced” (416).

Tourette Syndrome, contingent nature of etiological claims:

“These changing claims of the etiology of Tourette syndrome were due less to compelling and robust scientific findings than to the dynamics of the political culture of medicine. The difference between French and Anglo-North American medical views about Tourette syndrome reminds us that medical research and clinical findings, in themselves, often are insufficiently persuasive in the face of long-held assumptions about the etiology and treatment of syndromes. . . . The rise and fall of each successive explanation for and treatment of Tourette syndrome has been as much a story of the power of a shared set of beliefs of a professional faction as it has been a vindication of either rigorous scientific testing or carefully analyzed clinical results. Often, as in the case of Margaret Mahler’s psychoanalysis, the failure of an intervention to achieve amelioration of symptoms was projected onto the afflicted and their families, while the assumptions of the theory itself remained unquestioned. This was no less true of a number of interventions that assumed that involuntary tics and vocalizations were organic in origin” (Kushner, 218, 219).

Tourette Syndrome, resistance of Salpêtrière group to medical/infectious explanation of:

“Given the persistence of the connection between both Sydenham’s chorea and convulsive tic behaviors with prior rheumatic disease, why did tic behaviors continue to be categorized as pathologically separate from other choreas? The answer to this question is tied, in large part, to the triumph of the psychologizing [viz, the fixation on psychological theories of degeneration] that increasingly affected late nineteenth-century thinking and developed into psychoanalytic categories by the 1920s. . . . The assumption that motor and vocal tics were psychopathological and vaguely hereditary developed such paradigmatic force that practitioners felt obligated to explain away rather than to engage in analysis of any contrary evidence” (Kushner, 39, 40).

Trauma, rediscovery of in the 1980s:

“The concept of Post-Traumatic Stress Disorder created a standardized model of how victims respond to trauma, a ‘single syndrome that appears to be the final, common pathway in response to severe stress.’ . . . Military psychiatry, instead of languishing in an obscure medical ghetto, became part of a burgeoning sociomedical movement that aimed to bring into the open society’s ‘collective secret’ and finally to reverse decades of willful ignorance of traumatic acts and denial of post-traumatic suffering. In 1985, the International Society for Traumatic Stress Studies was formed; soon afterwards The Journal of Traumatic Stress began to appear” (Shephard, 385). . . . [re PTSD] “The fundamental objection, however, is that these hypotheses do not explain why some people are affected by traumatic events and others are not. There is still no ‘Grand Unified Theory that can explain in a single sentence or equation’ the cause of PTSD (389). . . . over the last decade, much of the theoretical underpinning of Post-Traumatic Stress Disorder has also been unraveling. . . . [PTSD researchers] overlooked the central fact that not everyone does suffer in the wake of trauma. It has taken psychiatry two decades painstakingly to rediscover this basic truth; in 1995 two highly regarded PTSD researchers announced, with great earnestness, that ‘PTSD is not an inevitable consequence of trauma.’ Any front-line medical officer on the Western Front, never mind William Brown in 1919 or T. A. Ross in 1941, could have told them that. . . . Thus PTSD, if there is such a thing, is not an extreme form of the normal reaction to stress but something qualitatively quite different. In the face of this evidence, the simple uniform model of ‘trauma’ adumbrated in 1980 has been fragmenting into different categories – acute and chronic PTSD, simple and complex PTSD, even male and female PTSD. At the same time, the old issue of ‘predisposition’ or ‘vulnerability,’ anathema in the 1970s, has resurfaced” (390-91).

Typhoid Fever:

“. . . until the 1940s typhoid was recognized as one of the most frustrating diseases to manage. The patients were highly uncomfortable and often miserably ill for many weeks, and the doctor could do little but stand by, watching for complications . . .” Serum therapy, beginning in 1893, was unsuccessful. Following pathologists’ finding of ulcers in the intestines of typhoid patients, nineteenth-century treatment focused on healing these ulcers via “intestinal antiseptics” that had no effect. The milk diet, introduced in 1870 with the goal of lessening food intake to decrease the work of the intestines, became popular (Dowling, 43-44).

Typhoid fever, as “chief summer disease”:

[In 1918] “It is no surprise that Edsall [in teaching Harvard students] was still using typhoid fever as his base line in the infectious diseases. A thorough understanding of typhoid gave a doctor a good grounding in those days when typhoid was still the chief summer disease. It was beginning to drop off sharply, however, with the purification of water supplies” (Aub & Hapgood, 152).

Typhoid Fever, Spanish-American War, in U.S. and Cuba:

It raged in Army training camps, especially in Chickamauga, Georgia (almost 10% of 80,000 contracted it), and “was scandalous because typhoid was known to be a water-borne bacterial disease transmitted through the oral-fecal route, that is, poor sanitation.” Unlike yellow fever, attacks did not induce immunity and lasted weeks rather than a single week. According to William Gorgas, who fell victim, “typhoid fever was the ‘signature event’ of the war, with more than twenty thousand cases and almost 1,600 deaths . . . Final death statistics of 345 deaths from wounds and 2,485 deaths of disease presented a 1:7 ratio of combat deaths to disease deaths, far worse than the 1:2 ratio of the Civil War” (Byerly, 77-79).

U

Unnecessary treatment, according to physicians surveyed by Johns Hopkins team in 2018:

According to the 2,100 doctors who responded anonymously, “on average they believed 21% of everything done in medicine is unnecessary . . . the doctors in that survey estimated that 22% of prescription medications, 25% of medical tests, and 11% of procedures are unnecessary” (Makary, 4).

Urinalysis, in early twentieth century:

“The urinalysis may have started as a diagnostic test, but between 1900 and 1925 it also became a way of documenting and understanding a person’s changing clinical course. . . . It was the first fairly common test that involved taking a part of the living patient away, going into the laboratory, and studying that specimen with microscopes and test tubes. Moreover, the urinalysis was multidimensional: it provided a variety of results, including those related to color, appearance, and albumin, among others. [The urinalysis] appears to have been the first [technology-based tool] to be used in a systematic attempt to follow patients over time” (Howell VI, 93) . . . . Specifically, urinalysis served as a model for the continuous monitoring of patients and as a substrate for the application of routinized, standardized procedures to medical examinations. From 1900 to 1925, this very old test helped physicians become familiar with the idea of doing multiple, systematic, standardized tests on their patients” (102).

V

Vaccination, early twentieth century American opposition to:

“Criticism of vaccinations during this period must be viewed as part of a broader social criticism of the ‘unholy alliance’ between law enforcement personnel and physicians in the enforcement of vaccination that was ‘trespassing’ on the vaccinated person’s body through the sanction of a legal decree while social reforms that would reduce poverty and improve sanitation were neglected in favor of a ‘magic bullet’ solution. . . . the vaccination – imprinted mainly on the bodies of children at the command of the state – constitutes an excellent case study for the understanding of the politicization of the body. These contests over citizens’ bodies helped to shape a classed identity. Vulnerable bodies, such as those of immigrants and the working class, became an object of surveillance and monitoring by the state. . . . Vaccine was produced from the arm of a child who had not been immunized earlier, and the material was then transferred to other children. Thus public health officials used children as a sort of walking laboratory for manufacturing vaccine. We should also remember that at the outset of the twentieth century there was no laboratory capable of establishing whether a child was immunized against smallpox. The procedure was accompanied by pain and, at times, other side effects due to contamination of the wound site. Such complications, and especially the scarification process, were an important component in the rhetoric of vaccination opponents” (Davidovitch, 23, 24; cf. 27 re role of vaccinations in “the emerging public health paradigm”).

Venereal disease, control of in WWI, tension between the two strands of Progressivism:

“On the issue of [chemical] prophylaxis the alliance of reformers committed to a strict moral order and those committed to a technocratic order came unhinged. The military imperative of efficiency dictated that prophylaxis become the centerpiece of the Medical Department’s antivenereal campaign, scientific exigencies dismissing moral claims” (Brandt, 113). . . . “Although chemical prophylaxis was hardly acceptable to many members of the social hygiene movement, the provision of [self-administered] packets to the men seemed to be the straw which broke the moralists’ backs. . . . The battle against prophylaxis was, for the duration of the war, a lost cause. The demands for an efficient military force were too intense, and faith in the ability of soldiers to refrain from sexual contacts did not run high among military officers, in spite of their official pronouncements . . . Only an estimated 30 percent of the men who fought in France maintained the officially prescribed continence while overseas. . . . Without exception, medical officers attributed their ability to control sexually transmitted diseases to the prophylactic stations” (114-115; also 120-121).

Venereal-disease anxiety, in occupied Germany at end of WWII:

“Sexual guilt became a major mental health problem among American troops preparing to leave postwar Europe. . . . Servicemen diagnosed with this malady believed they had contracted a sexually transmitted disease and complained of physical symptoms, but the true source of their discomfort was psychological” (Pfau, chap 3, paras 59-60).

Vital Principle, role of in transplant surgery:

Although the vital principle is nonexistent, “it nevertheless managed to influence the popularization of the tooth transplant, the revival of the skin graft and the resurrection of blood transfusion [in early 19th c., via, e.g., the accoucheur James Blundell]. Though it has since been debunked, vitalism nevertheless inspired transplant surgeons to, for the first time, formulate their profession’s most noble aim: to preserve human life” (Chaddock, 167-170, quoted at 170).

W

Wandering Jew, in late nineteenth century French psychiatry:

“Once the association of vagabondage [“traveling insanity”] with ‘ambulatory’ pathology had been made, it was only natural to go on to apply this latter model to the westward migration of the Jews after the Russian pogroms – that modern literalization of the legend of the Wandering Jew” (Goldstein, 539). The Wandering Jew entered French psychiatric literature in 1887; Charcot went beyond the trope with a patient who was not only afflicted with the traveling neurosis but was also a Jew (540). In his medical thesis of 1893, Charcot’s student Henri Meige took it one step further with a clinical study of Jewish nervousness in relation to the Wandering Jew in legend and iconography (541ff.): “Meige’s variant of it could be seen to serve the cause of a refurbished, modern anti-Semitism. The restless wanderings of the Jews, he seemed to say, had not been caused supernaturally, as punishment for their role as Christ-killers, but rather naturally, by their strong propensity to nervous illness” (543).

Wangensteen, radical cancer surgery performed by:

“Wangensteen only too willingly took on cases that other surgeons had abandoned as hopeless. He believed timidity had no place in this field – that when in doubt, it was better to take more than less, for cancer cells could be lurking anywhere. Among the operations he sanctioned for the worst malignancies were the hemicorporectomy, which basically involved cutting a person in half, discarding both legs in their entirety; and the eviscerectomy, in which the surgeon removed the bladder, the reproductive organs, the lymph nodes, the spleen, the rectum, a kidney, and all but about a foot of the colon” (Miller, 40).

Waterloo, Battle of, as medical disaster:

As late as 11 days after the 9-hr. battle, casualties were still waiting for treatment. The British had dismantled their medical service a year earlier and had no medical support at all – no wagons set aside as ambulances, no litter bearers. British & French had 56,700 casualties between them. Wellington’s entire army of 60,000 had only 273 medical officers in toto, with no system of triage (unlike Larrey’s) and regimental hospitals designed to handle only 60 casualties (Gabriel, 150-151).

Willis, Thomas, rejection of Cartesianism and pioneer of iatrochemistry and “translational research”:

“Just as the biological psychiatry of the 20th century refuted Freud’s principles, Thomas Willis overthrew the mediaeval concepts about brain function 350 years ago. Willis’s works, particularly Cerebri anatome, had a great influence in Europe and contributed to the weakening of the prevailing Galenian movement. . . . Willis rejected Cartesian ideas and recognized the cerebral cortex, not the ventricles, as the substrate of cognition. He recognized the brain as an alembic in which brain disturbances are caused by distillation problems (chemical disorders). . . . His multidisciplinary research done for clinical purposes set a precedent for current translational research” (Arraez-Aybar, et al.). As an anatomist, Willis replaced the technique of previous anatomists of extracting brain slices with removal of the entire brain intact from cadavers and slicing it from the bottom up; this permitted study of a less deformed organ. “Although Willis regarded Galen and Hippocrates as the founding fathers of medicine, he held the view that the Classical anatomists not only lacked sufficient anatomical detail but that they were also affected by a flawed, pagan-based belief system” (Arraez-Aybar, et al., citing Dewhurst [II]).

Woman’s Hospital of New York State, founding of modern gynecology at:

“One of the ultimately most controversial surgeries, yet one of the most common during the first decades at the hospital, was the splitting of the cervix. . . . [there was] a great expenditure of energy and attention spent on the cervix during these midcentury years . . . (136) . . . The Irish predominated as patients at Bellevue Hospital as well as at the Woman’s Hospital of the State of New York (139). . . . Many Woman’s Hospital patients suffered lacerated perineums or recto-vaginal and vesico-vaginal fistulas. Also common was uterine prolapse, often caused by multiple pregnancies (140). . . . the theoretical underpinning of Sims’s and Emmet’s surgeries in the first years was a belief that pain in menstruation often emanated from a blocked or obstructed cervix that would not admit the menstrual flow. . . . Symptoms and disease were thought to radiate from the newly glimpsed locus, the cervix uteri. . . . Incisions on the cervix were to Sims a prime example of the absolute utility of surgery over other therapies (144). . . . In the first years of the Woman’s Hospital, the surgeries on the cervix were more radical, involving incision on both sides of the external os per surgery. Results from the surgery were less than perfect (145). . . . As they wrote articles later, they applied newfound symptomatology and diagnoses to cases they had earlier acted upon within a somewhat different nosological framework. Dysmenorrhea, for instance, was a catch-all symptom suggesting need for the surgery. Anteflexion later was identified as the key symptom indicating need for the surgery; and finally endometritis, in the 1870s” (147). . . . The surgery developed ahead of a scientific rationale for it (149). . . . endometritis was added as a diagnosis to uterine and cervical flexure in the 1860s. . . . Both endometritis and metritis were diagnoses for visually inaccessible regions of the uterus” (150).

World War I, German versus British treatment of disabled veterans:

“ . . . disabled [German] veterans subscribed to the restorative value of work. In contrast to the British Legion, which argued that the severely disabled should be exempt from work, German veterans’ organizations agreed that employment was the best remedy for disability. . . . In contrast to Great Britain, where many severely disabled ex-servicemen joined the long-term jobless, unemployment among the badly incapacitated remained low during the Weimar Republic. . . . Unlike in Britain, where a disabled man’s masculinity was measured by his stoicism, German veterans proved their manhood through their labor. . . . Deprived of their fellow citizens’ gratitude, they [German disabled veterans] turned to the welfare bureaucracy to provide the Fatherland’s thanks. Unlike philanthropists, who spoke readily of men ‘who had given their best for the Fatherland,’ civil servants were not accustomed to extol their clients’ sacrifices” (Cohen, 158, 161, 167). “Despite the right to participate in advisory councils and jobs in welfare offices, codetermination proved elusive for the [German] war-disabled. . . . By the mid-1920s, disabled veterans had lost faith in their fellow citizens’ goodwill . . . Weimar’s victims proved susceptible to the Nazi appeal” (169ff.).

World War I, heart surgery during:

In his compilation, French surgeon Pierre Duval found that 23 of 26 patients survived, a success rate accounted for partly by the increasingly sophisticated use of radiography, including stereoscopic radiography (the combining of two images to produce a three-dimensional image) and fluoroscopy (moving images) (Morris).

World War I, impact on specialization:

“Techniques and specialties were improved, particularly in psychiatry, orthopedics, and plastic surgery, and the phrase ‘physical medicine and reconstruction’ came into general use for the first time – heralding the mixed blessing of machine therapy in the 1920s. . . . Physicians from all parts of the country were brought together, evaluated, sent through medical camps, and commissioned in a large, highly organized medical system based on hospitals. The younger generation of physicians, affected by the specialization, teamwork, and organizational efficiency of wartime medicine, might be expected to return to peacetime conditions with a different view of medicine from that of their older, pre-Flexner colleagues” (Stevens, 140).

World War I, medicalization of sexuality during:

“ . . . the debate on sexuality and sexual behavior during the war became increasingly part of a medical discourse. . . . Doctors were among the chief beneficiaries of this process. In Germany and the United States especially, they now came to be regarded and consulted as experts in the fields of sex education and sexuality more generally. In effect, then, the social upheavals of the war, and especially the concerns it raised about the management of industrial production, human reproduction and military efficiency, proved instrumental in hastening the medicalization of sexuality and, at least to an extent, in making possible new ways of rationalizing sexual activity” (Sauerteig, 181).

World War I, segregation of Medical Corps during:

“Gorgas and his colleagues did not recruit African American physicians and barred women physicians from the army entirely. The army awarded medical commissions to only 360 of the 3,000 to 4,000 African American physicians, surgeons, and dentists in the country, and required some of them to serve as rank-and-file infantry privates instead of in their fields of expertise. . . . [The OTSG] established a segregated training camp for them at Fort Des Moines, Iowa . . . African American dental and medical officers treated only Black troops, often with inferior equipment and supplies” (Byerly I, 247-248).

World War I, tubed pedicle flap as greatest surgical legacy:

“For many years, surgeons had used open pedicle flaps to bring skin and subcutaneous tissue from a distant donor site to the recipient area. The raw undersurface of the flap exuded blood and serum, requiring frequent dressings in addition to allowing infection to become established. Fibrosis and contraction were inevitable, and thrombosis of the nutrient blood vessels was common. Many of these problems were solved by the tubing of the pedicle, which protected the flap from infection and thrombosis, in addition to preventing shrinkage and contraction. In 1916 and 1917, three surgeons independently developed and used the tubed pedicle flap: V.P. Filatov (Ukraine), Hugo Ganzer (Berlin), and Harold Gillies (England), who perfected the technique” (Brain, 161).

World War I, “curative workshops,” role in rehabilitation of:

“the most crucial part of making Walter Reed and Letterman rehabilitation hospitals was the creation of ‘curative workshops.’” Early in the war, these were little more than “maintenance sheds, places where carpenters and automobile repairmen worked . . . This singular act of turning the working quarters of army post carpenters and mechanics into places of medical treatment demonstrates the degree to which rehabilitative medicine became insinuated into regular hospital practice. . . . Hospital maintenance . . . would become places for medical cures, bringing normal, everyday labor under the umbrella of medicine” (Linker, 91, 92). At Walter Reed, six new wards were keyed to specific work activities: commercial dept., electrical dept., laboratory for artificial limbs, automobile shop, etc. (92). This attests to “how vocational training became medicalized and a regular part of health care delivery during the war” (93).

World War II, and birth of open-heart surgery:

American Dwight Harken, stationed in England at the 160th General Hospital (Cotswolds), operating out of a Quonset hut, removed shell fragments and other foreign bodies from the hearts of wounded soldiers, relying on fluoroscopy to locate fragments, endotracheal intubation, whole-blood transfusions, and penicillin. The surgeries, which required cutting into bleeding hearts, had to be performed in under a minute [elsewhere, under three minutes] (Jeffrey, 44-45). Harken had removed bullets from hearts before the war, but during the war he was observed by leading London surgeons; his breakthrough came on 18 Feb, when he successfully operated on Leroy Rohrbach (Morris, ch 1). Over a period of 10 months, Harken removed 78 “missiles” (bullets) within or proximate to the great vessels and extracted 56 foreign bodies from the heart, 13 of them from the heart chambers. He operated on 134 patients without a single death. He “routinely opened the pleura at a time when most surgeons were obsessed with extrapleural approaches” (Symbas & Justicz, 790).

World War II, lasting medical contributions deriving from:

“Lasting contributions included vaccines against influenza, typhus, and cholera, new drug treatments of malaria, the development of the insecticide DDT, and the separation of human blood plasma into therapeutically useful constituents (albumin, globulins, and clotting factors) for the treatment of shock and control of bleeding. Probably the most important contribution of the program was the development of methods of mass-producing penicillin. . . . During World War II, the army death rate from disease was 0.6 per thousand, compared with 14.1 per thousand during World War I. . . . medical science during World War II brilliantly succeeded at what it was asked to do. The success of the war on disease stands as an important corrective to the widespread misperception that American medical science was immature prior to the postwar expansion of the National Institutes of Health” (Ludmerer, 132-33).

World War II, psychiatric management of fear during:

“ . . . psychiatrists argued that certain groups and environmental characteristics rendered people less vulnerable to fear. Indisputably, the most important of these were loyalty to one’s comrades and confidence in one’s leaders. . . . Foreigners (especially Jews) were accused of emotional weakness. . . . Finally, psychiatrists never tired of implying that men who collapsed under the strain of war were ‘feminine’ or ‘latent homosexuals’. . . . the psychiatric profession dedicated itself to delineating a philosophy of the emotions which would render fear reactions ethically illegitimate and would enable people to build up their ‘will power’ so that they would not succumb to fright. These psychiatrists did not eschew moral pronouncements; they encapsulated them” (Bourke, 230, 231).

World War II, repression of war neuroses by American psychiatry after:

“During the war there was a brief period of time when psychiatrists related war trauma to actual battle experiences to be relived and abreacted in psychotherapy. However, when the war was over psychoanalytically oriented psychiatrists displaced the trauma of individual soldiers onto a drama of a threat to the nation’s masculinity – stirringly embodied in the traumatized, crippled, or mutilated veteran. American psychiatrists displaced this threat in particular on the American mothers. Psychoanalysis rose in unprecedented popularity in American culture after World War II by initially acknowledging and treating war trauma and subsequently repressing and displacing this concern with a grand narrative about the threat mothers posed to the masculinity of the nation” (Pols, 254). . . . “psychiatry immediately jumped on the band-wagon of the reconstruction of American society along the lines of an imagined bucolic past by presenting themselves as those who could provide the guidelines for raising a well-adjusted, strong, and virile post-war generation. Because of these new concerns, psychiatry did not form an exception to the wall of mis-recognition and repression of trauma that faced veterans. . . . After the war, their concern with the predispositions for mental breakdown became dominant again. Psychotherapeutic approaches to repeat and abreact war trauma were replaced by a general concern of a threat to the masculinity of the nation. According to post-war psychiatrists, American mothers (‘smothers’) had raised a generation of weaklings that would inevitably break down in battle” (262). [The problem of why so many Americans had been rejected for service on psychological grounds or broke down during the war was addressed by psychiatrists after the war in terms of] “the seeming lack of virile and tough masculinity in American men, and therefore, in the nation as a whole. And, not surprisingly, the feminized home-front was blamed. . . . Instead of helping a few traumatized soldiers, psychiatrists set themselves the task of aiding the reconstruction of the post-war family: a far more daunting task, keeping its significance as the Cold War increased in strength” (264). . . . With respect to the memories that haunted him [the veteran], psychology and psychiatry provided a compelling modernist discourse by erasing the traumatic pain of the individual as individual, in contrast with the larger historical process in which he participated and to which he traced his trauma [via narrative fetishism]. This erasure of trauma or its repression was first accomplished in discourse – that pain is the responsibility of the individual, and placed right there, detached from any historical, political or social process. Second, the redemptive practice of psychotherapy attempted to repair and remove the psychological damage by functionally erasing it (267). . . . This strategy of replacing the working-through of actual mental wounds caused by past experiences by a mythological scheme of an eternal conflict closely resembles Freud’s abandonment of the seduction hypothesis by putting the Oedipus complex in its place. . . . At the end of the war, history was erased to make room for the eternal and universal structure of the family. . . . In this sense psychoanalysis in America was transformed not in the exploration or recognition of trauma but more in its displacement” (268).

World War II, separation of medical science and practice after:

“Medical care is only aided by science; it is not a science itself. The practice of medicine consists of applying scientific medical knowledge to individual patients with unique symptoms and complaints. . . . In the 1950s, when editors speak of ‘scientific medicine,’ they are thus proud of the availability of ‘scientifically’ generated medical knowledge and technology. The notion of ‘scientific medical practice’ is seldom used, and when it is it rarely means anything other than this presence of scientific knowledge” (Berg, 441, 442). . . . In this early postwar discourse, then, medical practice is seen as the artful application of medical science (444). . . . The origins of the problems of medical practice lie in the existence of restraints external to the practice of medicine itself: the fight is against corruption seeping in at the margins. Once these restraints are removed, once governmental control is diminished and more high-quality physicians are trained, the practitioners’ art can thrive undisturbed” (445-46). . . . [Only in late 1960s and 1970s, with the works of Alvan Feinstein and Lawrence Weed, does a different rendering of medical practice emerge:] “Here, medical practice is not primarily the application of a science located elsewhere: now, the practice of medicine itself is a scientific activity (449). . . . inadequacies in medical practice reflect the lack of “’the scientific qualities of valid evidence, logical analyses, and demonstrable proofs’” . . . the solution lies in a new, standardized type of medical file, i.e., the ‘problem-oriented record,’ that will permit the physician to act scientifically (450). . . . “Standardization now includes medical procedures and is seen as a fundamental prerequisite to medical practice as a scientific activity in itself. It is a sine qua non for the full-blown development of this new science” (451). . . . In the 1970s and 1980s, a new discourse arose in which the ‘scientific character’ of medical practice became a thoroughly individualized notion. . . . the scientific status of medical practice was redefined as a feature of the physician’s mind” (452). . . . Within this cognitivist reconfiguration of the shape of medical practice, we see two different views competing for attention. Based on the metaphor of the information processing computer, one approach views the physician as mentally manipulating nonquantitative symbols: the physician follows the hypothetico-deductive method in solving a patient’s problem. The other approach views the physician as a calculating computer, arguing that physicians intuitively combine probabilities” (456).

World War II, use of Atabrine (antimalarial) during:

GIs feared it would leave them sterile and impotent after the War: “A drug that turned white skin yellow and whose other side effects were unknown, Atabrine heightened servicemen’s sense of exile from ‘civilization’ and their estrangement from national goals. . . . Both widespread resistance to the administration of Atabrine and the resulting epidemic of malaria demonstrated a strong shared desire to evade military service. Some men tried to catch malaria, because they hoped for a discharge or transfer. Once hospitalized, patients often prolonged their stay by secretly spitting out the Atabrine pills they were required to swallow” (Pfau, ch 5, paras 32-34).

X

X-rays, ambiguities as legal evidence:

“At issue was . . . the shifting border between judgment and mechanization, between the possibility (or necessity) of human intervention and the routinized, automatic functioning of the technology. . . . Of all the audiences who addressed the medico-legal concept of evidence, perhaps the most active (and distressed) was the assembly of clinical surgeons, who saw in the new X-ray photography a potential legal weapon that could be turned against them in malpractice suits. . . . Above all, critics challenged the vulnerability of the image to changes in the relative location of the camera, the X-ray tube, and the object under investigation” (Daston & Galison, 110).

X-rays, shift from illustrative evidence to substantive evidence in 1920s:

“[courts] treated X-ray evidence as an exception to the doctrine, necessitated by the unavailability of direct eyewitness testimony. . . . In short, the courts treated X-ray images as substantive evidence of the conditions revealed by them, while still discounting regular photographs by admitting them only as illustrations” (Golan, 488). . . . the X-ray admissibility procedure was gradually recognized as being applicable to regular photography too” (489). . . . The basis on which X-ray images were admitted was extended to include not only the observation powers of the verifying witness, but also the reliability of the mechanical process that produced them. The essential relationship underlying the doctrine of illustrative evidence – the association between the visual evidence and the witness whose perceptions and knowledge it purported to represent – was severed” (489). . . . The alternative approach came to be known as the ‘silent witness’ doctrine.

Y

Yellow Fever, and theory of racial immunity in early 19th c. New Orleans:

“With acclimation [acquired immunity] serving as the difference between obscurity and wealth for white people, Black people’s alleged natural immunity increasingly had the opposite effect. It was used to justify slavery’s permanence and expansion. . . . recasting racial slavery as a health imperative provided a veneer of rationality, even respectability, to its violence” (Olivarius, 67-68; 80-84). “Black mortality, especially of the enslaved, was undercounted to an extraordinary, even conspiratorial, extent” (84).

Yellow Fever, as vehicle for physician enrichment in New Orleans:

“ . . . some physicians saw medicine as a capitalist endeavor rather than a civic calling and yellow fever as ‘a money making scheme’ as good as any other. With a constant supply of sick people, doctoring was an ideal mechanism to acquire capital that would eventually be invested in a plantation, the region’s primary engine for wealth creation. . . . for many doctors, medicine became less about protecting the health of individuals and more about financing the doctor-turned-planter’s investments in the cotton and sugar economies. . . . epidemic disease was simply good for business” (Olivarius, 106, 109). In alignment with planter interests, physicians never diagnosed yellow fever before July 1, to forestall the economic contraction that followed the beginning of yellow fever season. But after July 1, they were “incentivized to diagnose ‘every little fever’ as yellow fever” (103).

Yellow Fever and malaria, different courses of:

“Although yellow fever and malaria were both transmitted by mosquitos, the diseases were quite different. Yellow fever was an acute, short-term disease and was often fatal. Malaria was rarely fatal in adults but could kill children who did not yet have fully developed immune systems. Most commonly, malaria made people feverish and weak and generated high hospitalization rates as a chronic, long-term illness. . . . Malaria is caused by Plasmodium parasites that reproduce in the mosquito gut and are then transmitted to humans by the bites of female mosquitos. The parasites take up residence in the human bloodstream and feed upon and destroy red blood cells, causing a cycle of fevers and chills every two days. . . . Malaria victims could recover but continued to harbor Plasmodia in their blood and could experience multiple episodes of active disease. . . . Yellow fever’s Aedes aegypti were urban and domestic creatures, confining themselves to inhabited areas, whereas malaria Anopheles were country cousins, preferring swamps and forests, making them much more difficult to find and destroy” (Byerly, 168).

Yellow Fever epidemic of 1878 and Congressional failure to pass national public health legislation:

“The investigation of the yellow fever epidemic in 1878 and the struggle over national health legislation during the fall and winter of 1878-79 were governed by political and personal considerations to which the issue of public health was altogether secondary. . . . In its political and constitutional aspects, the struggle found the southern proponents of a strong federal health agency in the role of nationalists contending against northern defenders of states’ rights. To the leaders of the American Public Health Association, the ongoing progress of the sanitary movement depended upon maintaining the state’s implied police power to preserve and protect the public health” (Ellis, loc 1717-1728).

Yellow Fever epidemic of 1878 and National Board of Health (1879-1883):

“It was the professional duty of each public health official to maintain the equilibrium between public health and commercial prosperity that was most beneficial to his locality; yet the National Board of Health’s rude imposition of a uniform quarantine code rendered such local flexibility impossible and eliminated this aspect of the local professional’s role. The nationalistic attitude of the federal board was irreconcilably at odds with the localistic orientation of coastal public health authorities, both in the North and South” (Humphreys, 74). . . . “Public health officials of interior states and municipalities generally found that the National Board’s activities substantially bolstered their inadequate defenses against yellow fever . . . Southern coastal authorities mostly viewed the National Board in negative terms, for its work appeared to nullify rather than strengthen their ability to perform their professional duties” (76).

#

“Regular” Medicine, pre-nineteenth century:

myth of: “The basic presupposition of ‘regular medicine’ itself poses problems. What many physicians from the Renaissance to the Enlightenment were pleased to call ‘medical truth’ was itself a ramshackle edifice of internally inconsistent Classical learning, strengthened or weakened by later accretions. The assertion of the existence of some pure, perfect and pristine medical truth-system was itself largely an ideological construct, a myth advanced by a medical profession that was itself not an age-old, adamantine institution but a relatively new body attempting to establish corporate existence and privileges” (Porter, Intro). [Porter indulges in polemical overstatement. “Regular medicine,” right through the Civil War, did indeed sustain a multiplicity of treatment approaches and remedies, but the theoretical system shared by trained physicians was always squarely Galenic, revolving around the notion of humoral imbalance and the need to restore balance. This was the essence of “regular medicine,” not the plethora of contradictory treatment approaches that could be undertaken to restore humoral balance. See, for example, Evans & Reed, on the nature of the imbalances ascribed to specific symptoms and illnesses. Later accretions to Galen (e.g., Avicenna) did not alter the underlying humoral assumptions of the model – PES].

“Surprise” bills:

“About half of these bills are for lab work, facility charges, or imaging tests. The other half of surprise bills are generated by doctors working behind the scenes, such as pathologists and radiologists, who may be outside of your insurance network . . . The same could be true for your emergency room physician, or the lab that processes your blood tests, or the anesthesiologist who puts you to sleep” (Makary, 23).

“White negroes,” nineteenth century medical/scientific understanding of:

“most scientists took a dim view of the white Negro. They perceived in him or her a variety of challenges to the primacy, status, and security of the white race and, with it, to nascent American scientific culture. This is because the scientific culture of the nineteenth century still relied upon taxonomy, or categorization. . . . It would not do to have categories, schemes, and classifications of people called into question by such unpredicted changes as Negroes who inexplicably and illogically insisted upon turning white. They could not be explained by theory, so they contradicted and threatened the classification scheme and, therefore, the social order” (Washington, 137-38).