HIV, Cancer and A History of Early Diagnosis-Early Intervention

 

Last month, two stories about ‘functional’ cures for HIV raised questions about the value of early diagnosis and intervention.[i] The first concerned an infant in the US, the second an organized trial of 70 individuals (with 14 being ‘functionally cured’ for an average of seven years). In each case the condition was caught very early and, after standard treatment, a number of individuals have been able to come off drugs, with a strengthened immune system able to control their levels of infection.[ii] Whilst recognizing the rarity of these cases, the UK National AIDS Trust nevertheless felt it only proper to reiterate how “th[ese stories] just underline the importance of people being tested and diagnosed early”.[iii]

 

Although there may be ‘disease control’ reasons for early diagnosis in infectious conditions, in the context of these stories, it seems to me that the early diagnosis argument refers to individual benefits.[iv] Having your infection caught early, in this sense, might mean having a greater chance of being “cured”, if only functionally so.[v]

 

However, the equation that ‘early diagnosis = better treatment outcomes’ is a common trope in contemporary biomedicine, and one with a long history in public health campaigns. Although it also has roots in discussions of mental health and neuroses, perhaps the most prominent example of this attitude can be seen in approaches to cancer from the early twentieth century.

 

Early Diagnosis in Cancer and Beyond: From a Clinical to Public Health Tradition in the US and Britain

 

During the late nineteenth and early twentieth century, American physicians developed models of cancer development which stressed that the disease resulted from the spread of an initial localized lesion throughout the body.[vi] According to Ilana Löwy, physicians soon combined this model with independent surgical observations that removal of smaller, less invasive tumours had greater success rates than removal of larger, deeper ones. As such, a new “cancer schemata” emerged. Doctors increasingly understood small tumours to be early, localized lesions, ones whose early diagnosis and removal would cure – or even prevent – cancer.[vii] With this framework in place, professional, governmental, and lay bodies (such as the National Society for the Control of Cancer and the National Cancer Institute) devoted enormous resources to designing public education campaigns that encouraged self-inspection, frequent medical check-ups, and early surgical and radiotherapeutic interventions.[viii] In Britain, concerns with cancer emerged at a similar juncture in time – around the 1920s – but fears over patient intelligence and professional authority curbed public education until after WWII.[ix] Instead, efforts to combat the condition early focused more on professional education, rationalized service organization, and distribution of precious therapeutic material.[x]

 

By the mid-twentieth century, similar attitudes towards early diagnosis and intervention could be seen emerging in the treatment of other conditions. Natural histories for diseases like hypertension or diabetes were gradually being re-written during this period in light of changing research methods, with some clinicians and researchers divorcing pathology from symptom, and placing diagnosis in the realm of laboratory-based biochemistry or physiology.[xi] The process, moreover, was encouraged in many respects by the creation of successful therapeutics, with pharmaceutical companies sponsoring detection drives and funding research into presymptomatic treatment in order to expand their potential markets.[xii] Today, there is barely a condition I can think of for which charities and the profession would suggest that treatment is as effective when the disease is caught ‘late’ as when it is caught early.

 

Contesting Benefits: Categories, Tests and Uncertainty in Early Intervention Public Health in Britain

 

This process, however, has not been an inexorable one. Concerns over the efficacy of early diagnosis and intervention have been raised since the 1950s and 1960s. Firstly, there have been concerns over how systems of categorization have needlessly pathologized certain individuals. Even in the 1940s, pathologists involved with pap-smears indicated that there had been labeling problems. “The pap-smear isn’t diagnosing cancer”, one pathologist recalled, “it’s diagnosing a precursor. Why did they call it a cancer then? Because no one would pay attention if they called it dysplasia”.[xiii] As Charles Rosenberg has made clear, however, the bureaucratic systems of contemporary medical infrastructure ensure that labels can have material impacts.[xiv] By changing the label from dysplasia (a lesion that might potentially be cancerous) to “carcinoma-in-situ” (a pre-cancerous lump waiting to be expressed), doctors hoped to convince individuals to take ‘preventive action’, even though there was every chance that the lesion found could be harmless. Despite the voices of dissenters becoming louder in the decades following the 1960s, the political need for an optimistic view on cancer ensured that such voices were overwhelmed.[xv] Recent British recognition of the problems in breast cancer screening, however, appears to have given official acknowledgement to the problems of categorization in a related area of screening.[xvi]

 

Secondly, even where categorization has been unproblematic, there have long been questions about the sensitivity and reliability of screening tests, as well as which disease markers should be looked for. In Britain, epidemiologists such as Archie Cochrane raised such concerns in professional circles, arguing that the high rates of false negatives and false positives in a number of conditions cast doubt upon the efficacy of such projects.[xvii] His arguments were buttressed by other doctors making related suggestions, in particular that screening not only unduly worried patients, but also wasted money better spent elsewhere twice over: the first waste occurred in performing the test, the second when later, more thorough, diagnostic checks overturned initial findings.[xviii] According to these objectors, then, no matter how sound in theory, early diagnosis and intervention would be a harmful waste of money if enacted en masse. Cheap, accurate and acceptable tests were a necessity for screening in any condition.

 

Finally, from the mid-century onwards there have also been objections to the ethics of screening where doctors either lack effective therapies, have doubts over existing treatments, or see the crudeness of therapy as being starkly at odds with the sophistication of diagnosis.[xix] Such uncertainty, moreover, has been compounded where concerns have existed over diagnostic thresholds. In diabetes, for instance, doctors agreed very early on that symptoms (most prominently, constant thirst, hunger and urination, and in extreme cases, coma) were not necessary for diagnosis when raised levels of glucose were present in the urine and, more importantly, in the blood.[xx] However, even into the late 1970s, no-one could agree on the criteria at which pathological processes began and the idiosyncrasies of “normal variance” ended.[xxi] Clinicians were happy to confer diagnoses at the extreme ends of hyperglycaemia, but the problem of the ‘borderline’ patient persisted until WHO guidelines were offered in the 1980s.[xxii] Thus, with no convincing evidence to support starting treatment in such cases, diabetes screening was generally neglected, criticized or limited into the 1990s.[xxiii] Similarly, using treatment for those ‘not-yet’ diabetic was considered unethical, largely because no symptoms existed, and doctors could not be sure that drugs prevented worsening diabetic complications, even in symptomatic cases.[xxiv] Although the profession today agrees that treatment prevents complications, health organizations such as Diabetes UK nonetheless still emphasize ‘lifestyle change’ and health education over therapeutics in the treatment of “impaired glucose tolerance” (a ‘risk state’ for diabetes).[xxv]

 

A Service-Based Prevention: Being Critical, Not Simply Criticizing

 

This hidden conflict, then, brings us back round to HIV and why the history of the early diagnosis and intervention paradigm matters. If current results are anything to go by, then clearly for a minority of patients very early HIV treatment might prove effective. However, prevention, as the saying goes, is better than cure – condoms, a more supportive sexual culture, clean needles and drug prevention programmes are better options than early drug treatment in the long run.

 

Furthermore, broadening out from HIV, it is clear that in a number of conditions prevention has either become service-based (various cancers) or the emphasis has shifted from ‘primary prevention’ (promoting health and preventing onset) to ‘secondary prevention’ (ensuring effective surveillance and therapy to prevent complications, as in diabetes). Part of this is – as my current thesis looks to show – to do with the shifting metrics of assessment used by health services and governments. For a range of philosophical and practical reasons, it is easier to trace “rates of consultation” than to measure “cases prevented”.

 

Whatever your position on the focus of our health service, however, I think it is worth remaining critical about a strictly ‘early diagnosis’, service-based approach to prevention. As highlighted above, there might be innumerable problems with existing attempts to screen and treat in an ‘early’ and ‘preventive’ manner. Of course, every case has to be considered on its own merits. Not all screening is uncertain, just as early intervention in some diseases clearly works. Likewise, not all individuals will weigh the risks of knowing, or the available treatment options, in the same way. Nonetheless, if we do not ask the questions, we cannot hope for more certain answers, and it is only through such questioning that we will be better placed to make more informed individual and collective decisions.

 

Martin Moore, University of Warwick.

Image Courtesy of the Wellcome Trust.

 


[i] James Gallagher, ‘Analysis: A cure for HIV’, BBC News Online. Accessed on 16/03/2013: http://www.bbc.co.uk/news/health-21653463.

[ii] BBC, ‘US HIV baby “cured” by early drug treatment’, BBC News Online. Accessed on 16/03/2013: http://www.bbc.co.uk/news/world-us-canada-21651225; James Gallagher, ‘Early HIV drugs “functionally cure about one in 10”’, BBC News Online. Accessed on 16/03/2013: http://www.bbc.co.uk/news/health-21783945.

[iii] James Gallagher, ‘Early HIV drugs’.

[iv] For better or worse, both British and American societies have liberal traditions of personal responsibility in health care, and have even forcibly detained those individuals who have refused to co-operate with this expectation. Early diagnosis in this sense could be seen as merely another extension of the responsibility argument – the sooner someone is diagnosed, the sooner they can undertake preventive action. The most famous instance of institutionalization has to be Typhoid Mary: Judith Walzer Leavitt, Typhoid Mary: captive to the public’s health, (Boston: Beacon Press, 1996).

[v] Being functionally cured means that the body is able to deal with the infection without drugs, but that the infection is still present.

[vi] Ilana Löwy, Preventive Strikes: Women, Precancer, and Prophylactic Surgery, (Baltimore: Johns Hopkins University Press, 2010), pp.1-2.

[vii] Löwy, Preventive Strikes, p.2.

[viii] David Cantor, ‘Introduction: Cancer Control and Prevention in the Twentieth Century’, Bulletin of the History of Medicine, 81:1, pp.5-9.

[ix] Elizabeth Toon, ‘“Cancer as the General Population Knows It”: Knowledge, Fear and Lay Education in 1950s Britain’, Bulletin of the History of Medicine, 81:1, pp.116-138.

[x] Rosa M. Medina Domenech and Claudia Castañeda, ‘Redefining Cancer During the Interwar Period: British Medical Officers of Health, State Policy, Managerialism and Public Health’, American Journal of Public Health, 97:9, pp.1563-1571.

[xi] For a philosophical discussion of the symptom/pathology division: Georges Canguilhem [translated by Carolyn Fawcett], The Normal and the Pathological, 5th Edition, (New York: Zone Books, 2007 [1947]); for historical discussions, see David Armstrong, Political Anatomy of the Body: Medical Knowledge in Britain in the Twentieth Century, (Oxford: Oxford University Press, 1983); Carsten Timmermann, ‘A matter of degree: the normalization of hypertension, c.1940-2000’, in Waltraud Ernst (ed), Histories of the Normal and the Abnormal: Social and cultural histories of norms and normativity, (Routledge: London, 2006), pp.245-261.

[xii] Jeremy Greene, Prescribing By Numbers: Drugs and the Definition of Disease, (Baltimore: Johns Hopkins University Press, 2007).

[xiii] Linda Bryder, ‘Debates about cervical screening: an historical overview’, Journal of Epidemiology and Community Health, 62:4, p.284.

[xiv] Charles Rosenberg, ‘What is Disease? In Memory of Owsei Temkin’, Bulletin of the History of Medicine, 77:3, pp.491-505.

[xv] Bryder, ‘Debates about cervical screening’, pp.284-7.

[xvi] Stephen Adams, ‘Breast cancer screening “harming thousands”’, The Telegraph Online. Accessed on 16/03/2013: http://www.telegraph.co.uk/health/healthnews/9641609/Breast-cancer-screening-harming-thousands.html.

[xvii] A.L. Cochrane, ‘A Medical Scientist’s View of Screening’, Public Health, 81:5, pp.207-213.

[xviii] See papers in, for instance, G. Teeling-Smith (ed), Surveillance and Early Diagnosis in General Practice: Proceedings of a Colloquium held at Magdalen College, Oxford, (London: Office of Health Economics, 1966).

[xix] Ilana Löwy, in her recent work on cancer prevention, herself expressed surprise at the contrast between the innovation involved in oncogenetic testing and the lack of parity in the two available interventions – intensive medical surveillance (frequent mammography, clinical investigation and ultrasonography) or prophylactic removal of the ovaries (and in some clinics – though not the one visited – mastectomy). Löwy, Preventive Strikes, p.1.

[xx] Textbooks in the 1920s and 1930s suggested that diagnosis was often made once patients presented with symptoms. Urine tests would subsequently find glucose, and then blood testing found hyperglycaemia (glucose above ‘normal’ levels; in Lawrence, this was above 200mg/100ml after a meal and above 130mg/100ml if fasting). Lawrence accepts, however, that routine testing of urine would frequently reveal glucose before symptoms: R.D. Lawrence, The Diabetic Life: Its Control by Diet and Insulin, A Concise Practical Manual For Practitioners and Patients, (London: J.A. Churchill, 1925), p.15, pp.18-25.

[xxi] This was made most starkly by Harry Keen in his attempts to discover such criteria through longitudinal studies: Harry Keen, ‘Diabetes Detection’, in G. Teeling-Smith (ed), Surveillance and Early Diagnosis in General Practice: Proceedings of a Colloquium held at Magdalen College, Oxford, (Office of Health Economics: London, 1966), pp.19-24; also: W.G. Oakley, D.A. Pyke and K.W. Taylor, Diabetes and its Management, 3rd Edition (Blackwell Scientific Publications: Oxford, 1978), p.44.

[xxii] Of course, the WHO guidelines were not a magic fix, but for those adhering to them they confirmed that borderline cases – now classed as having “impaired glucose tolerance” – should be closely observed as an ‘at risk’ group. This was predicated on work done by British researchers Harry Keen and R.J. Jarrett: BMJ, ‘Impaired Glucose Tolerance and diabetes – WHO criteria’, BMJ, Vol.281, (No.6254, 1980), p.1512.

[xxiii] If practiced, only those considered to be of the “highest risk” – the overweight and the over-50s – were screened, whilst thresholds were set at very high levels so that borderline cases could be excused, or simply placed under continued surveillance: D. Crombie, ‘Testing in a Family Practice’, Journal of the College of General Practitioners, 7:3, pp.379-385; also: Keen, ‘Diabetes Detection’, pp.19-24; c.f.: R.J. Donaldson, ‘Multiple Screening Clinics’, Public Health, 81:5, pp.218-221.

[xxiv] Malins, Clinical Diabetes, pp.70-71; pp.458-459; J. Stowers, ‘Treatment of Subclinical Diabetes’, Proceedings of the Royal Society of Medicine, Vol.59, (No.11, Part 2, 1966), pp.1177-1180. Epidemiological evidence only came into widespread acceptance during the 1980s, and professional consensus was not reached until the early 1990s: for instance, D. Hadden, ‘Managing the newly diagnosed maturity-onset patient’, in R. Tattersall and E. Gale (eds), Diabetes: Clinical Management, (Churchill Livingstone: Edinburgh, 1990), p.43.

[xxv] See the pages under “prediabetes” on the Diabetes UK website: Diabetes UK, ‘Prediabetes’, Diabetes UK Online. Accessed on 16/03/2013: http://www.diabetes.org.uk/Guide-to-diabetes/Introduction-to-diabetes/What_is_diabetes/Prediabetes/Managing-prediabetes/.
