So when the radiologist found an odd, bicycle-spoke-like pattern on the film — not even a lump — and sent me for a biopsy, I wasn’t worried. After all, who got breast cancer at 35? 

It turns out I did. Recalling the fear, confusion, anger and grief of that time is still painful. My only solace was that the system worked precisely as it should: the mammogram caught my tumor early, and I was treated with a lumpectomy and six weeks of radiation; I was going to survive. 

By coincidence, just a week after my diagnosis, a panel convened by the National Institutes of Health made headlines when it declined to recommend universal screening for women in their 40s; evidence simply didn’t show it significantly decreased breast-cancer deaths in that age group. What’s more, because of their denser breast tissue, younger women were subject to disproportionate false positives — leading to unnecessary biopsies and worry — as well as false negatives, in which cancer was missed entirely. 

Those conclusions hit me like a sucker punch. “I am the person whose life is officially not worth saving,” I wrote angrily. When the American Cancer Society as well as the newer Susan G. Komen foundation rejected the panel’s findings, saying mammography was still the best tool to decrease breast-cancer mortality, friends across the country called to congratulate me as if I’d scored a personal victory. I considered myself a loud-and-proud example of the benefits of early detection. 

Sixteen years later, my thinking has changed. As study after study revealed the limits of screening — and the dangers of overtreatment — a thought niggled at my consciousness. How much had my mammogram really mattered? Would the outcome have been the same had I bumped into the cancer on my own years later? It’s hard to argue with a good result. After all, I am alive and grateful to be here. But I’ve watched friends whose breast cancers were detected “early” die anyway. I’ve sweated out what blessedly turned out to be false alarms with many others. 

A survey of three decades of screening, published in November in The New England Journal of Medicine, found that mammography’s impact is decidedly mixed: it does reduce, by a small percentage, the number of women who are told they have late-stage cancer, but it is far more likely to result in overdiagnosis and unnecessary treatment, including surgery, weeks of radiation and potentially toxic drugs. And yet, mammography remains an unquestioned pillar of the pink-ribbon awareness movement. Just about everywhere I go — the supermarket, the dry cleaner, the gym, the gas pump, the movie theater, the airport, the florist, the bank, the mall — I see posters proclaiming that “early detection is the best protection” and “mammograms save lives.” But how many lives, exactly, are being “saved,” under what circumstances and at what cost? Raising the public profile of breast cancer, a disease once spoken of only in whispers, was at one time critically important, as was emphasizing the benefits of screening. But there are unintended consequences to ever-greater “awareness” — and they, too, affect women’s health. 

Breast cancer in your breast doesn’t kill you; the disease becomes deadly when it metastasizes, spreading to other organs or the bones. Early detection is based on the theory, dating back to the late 19th century, that the disease progresses consistently, beginning with a single rogue cell, growing sequentially and at some invariable point making a lethal leap. Curing it, then, was assumed to be a matter of finding and cutting out a tumor before that metastasis happened. 

The thing is, there was no evidence that the size of a tumor necessarily predicted whether it had spread. According to Robert Aronowitz, a professor of history and sociology of science at the University of Pennsylvania and the author of “Unnatural History: Breast Cancer and American Society,” physicians endorsed the idea anyway, partly out of wishful thinking, desperate to “do something” to stop a scourge against which they felt helpless. So in 1913, a group of them banded together, forming an organization (which eventually became the American Cancer Society) and alerting women, in a precursor of today’s mammography campaigns, that surviving cancer was within their power. By the late 1930s, they had mobilized a successful “Women’s Field Army” of more than 100,000 volunteers, dressed in khaki, who went door to door raising money for “the cause” and educating neighbors to seek immediate medical attention for “suspicious symptoms,” like lumps or irregular bleeding. 

The campaign worked — sort of. More people did subsequently go to their doctors. More cancers were detected, more operations were performed and more patients survived their initial treatments. But the rates of women dying of breast cancer hardly budged. All those increased diagnoses were not translating into “saved lives.” That should have been a sign that some aspect of the early-detection theory was amiss. Instead, surgeons believed they just needed to find the disease even sooner. 

Mammography promised to do just that. The first trials, begun in 1963, found that screening healthy women along with giving them clinical exams reduced breast-cancer death rates by about 25 percent. Although the decrease was almost entirely among women in their 50s, it seemed only logical that, eventually, screening younger (that is, finding cancer earlier) would yield even more impressive results. Cancer might even be cured. 

That hopeful scenario could be realized, though, only if women underwent annual mammography, and by the early 1980s it was estimated that fewer than 20 percent of those eligible did. Nancy Brinker founded the Komen foundation in 1982 to boost those numbers, convinced that early detection and awareness of breast cancer could have saved her sister, Susan, who died of the disease at 36. Three years later, National Breast Cancer Awareness Month was born. The khaki-clad “soldiers” of the 1930s were soon displaced by millions of pink-garbed racers “for the cure” as well as legions of pink consumer products: pink buckets of chicken, pink yogurt lids, pink vacuum cleaners, pink dog leashes. Yet the message was essentially the same: breast cancer was a fearsome fate, but the good news was that through vigilance and early detection, surviving was within women’s control. 

By the turn of the new century, the pink ribbon was inescapable, and about 70 percent of women over 40 were undergoing screening. The annual mammogram had become a near-sacred rite, so precious that in 2009, when another federally financed independent task force reiterated that for most women, screening should be started at age 50 and conducted every two years, the reaction was not relief but fury. After years of bombardment by early-detection campaigns (consider: “If you haven’t had a mammogram, you need more than your breasts examined”), women, surveys showed, seemed to think screening didn’t just find breast cancer but actually prevented it. 

At the time, the debate in Congress over health care reform was at its peak. Rather than engaging in discussion about how to maximize the benefits of screening while minimizing its harms, Republicans seized on the panel’s recommendations as an attempt at health care rationing. The Obama administration was accused of indifference to the lives of America’s mothers, daughters, sisters and wives. Secretary Kathleen Sebelius of the Department of Health and Human Services immediately backpedaled, issuing a statement that the administration’s policies on screening “remain unchanged.” 

Even as American women embraced mammography, researchers’ understanding of breast cancer — including the role of early detection — was shifting. The disease, it has become clear, does not always behave in a uniform way. It’s not even one disease. There are at least four genetically distinct breast cancers. They may have different causes and definitely respond differently to treatment. Two related subtypes, luminal A and luminal B, involve tumors that feed on estrogen; they may respond to a five-year course of pills like tamoxifen or aromatase inhibitors, which block cells’ access to that hormone or reduce its levels. In addition, a third type of cancer, called HER2-positive, produces too much of a protein called human epidermal growth factor receptor 2; it may be treatable with a targeted immunotherapy called Herceptin. The final type, basal-like cancer (often called “triple negative” because its growth is not fueled by the most common biomarkers for breast cancer — estrogen, progesterone and HER2), is the most aggressive, accounting for up to 20 percent of breast cancers. More prevalent among young and African-American women, it is genetically closer to ovarian cancer. Within those classifications, there are, doubtless, further distinctions, subtypes that may someday yield a wider variety of drugs that can isolate specific tumor characteristics, allowing for more effective treatment. But that is still years away. 

Those early mammography trials were conducted before variations in cancer were recognized — before Herceptin, before hormonal therapy, even before the widespread use of chemotherapy. Improved treatment has offset some of the advantage of screening, though how much remains contentious. There has been about a 25 percent drop in breast-cancer death rates since 1990, and some researchers argue that treatment — not mammograms — may be chiefly responsible for that decline. They point to a study of three pairs of European countries with similar health care services and levels of risk: In each pair, mammograms were introduced in one country 10 to 15 years earlier than in the other. Yet the mortality data are virtually identical. Mammography didn’t seem to affect outcomes. In the United States, some researchers credit screening with a death-rate reduction of 15 percent — which holds steady even when screening is reduced to every other year. Gilbert Welch, a professor of medicine at the Dartmouth Institute for Health Policy and Clinical Practice and co-author of last November’s New England Journal of Medicine study of screening-induced overtreatment, estimates that only 3 to 13 percent of women whose cancer was detected by mammograms actually benefited from the test.
