“There are two ways of spreading light: to be the candle or the mirror that reflects it.” - Edith Wharton

Thursday, January 22, 2009

Found in Translation

By the time I pull up to the farmhouse in the Spanish region of Navarre, the other students have already arrived. Waiting for lunch, we nervously pretend that we understand what’s going on. A blond woman whose name I can’t pronounce points to a bottle as she pours each student a glass and says, with exaggerated clarity, “Ardoa” (“wine”).

It’s the first word I learn at this barnetegi, or Basque-language immersion school, and its ordinariness comes as a relief. Until now, as an American living in Spain, the only Basque words I’d managed to glean have to do with violence and power: ertzaintza (“police”), kale borroka (“street violence”), etarra (“terrorist” or “freedom fighter,” depending on your point of view). The media use the words every time an arrest, a demonstration, or an assassination—there was one in September—takes place. Indeed, most of what anyone in or outside Spain hears about Basques has to do with ETA, the radical group that has killed more than 800 people in the past 40 years in its quest for an independent Basque homeland.

“In the rest of Spain, they only talk about us as a problem—the Basque problem,” says Amaia Marin, who is attending an intermediate course at the barnetegi. She is Basque but doesn’t know Euskera, the Basque language, because it was not taught in schools when she was growing up. She doesn’t see herself as particularly political and thinks of independence as little more than “a pretty dream.” But she was driven to learn Euskera, she says, because “being Basque is the thing I’m most proud of in my life.”

Thousands of years old, Euskera has no links to any other known tongue, living or dead. That alone makes it the clearest sign of Basque identity. Franco sought to suppress the language during his dictatorship. In Spain today, when Basques enjoy greater autonomy than at any time since the 19th century, and when Spanish conservatives see that autonomy as a threat to national unity, the language remains a political issue. In some parts of Basque country, a town meeting held in Spanish is reason for protest, even vandalism; meanwhile, new laws that require 2,000 large businesses to offer their services in both Euskera and Spanish have triggered strong opposition outside Basque country. But for people like Amaia, Euskera is a way out of the Basque problem, a way to be Basque regardless of politics.

The number of Euskera speakers has risen in recent years, from 657,000 in 2001 to 775,000 (out of a total Basque population of about 3 million) in 2006. This growth can be attributed largely to schools—parents can choose how much Euskera training their children get, and the majority favor some education in their ancestral tongue. Yet 100,000 of those speakers have learned the language as adults. In my group, Celia and Maite are here because their employer—Microsoft—is expanding into Basque country, while Italian-born Nicoletta, Chilean Paula, and Spanish Dani are married to Basques and have signed up because their children are learning Euskera in school.

In class, we struggle through introductions and learn to count to 10. (Actually, we learn to count to 100, but I’m so flustered by the compound words that I falter at the double digits.) We learn directions and body parts, including zakila (“penis”) and alua (“vagina”)—the Basques are nothing if not frank. In between lessons, we go for coffee in Bakaiku, a pretty mountain village west of Pamplona, where the stone houses are adorned with fat geraniums. It’s an idyllic place if you ignore the pro-independence graffiti and posters that spring up every night—one of which gives me my first thrill of comprehension: Euskal Herria Aurrera, “Forward With the Basque Country.”

No one teaches politics at the barnetegi, but subtle messages slip in among the vocabulary words heavy with x’s and z’s. A geography lesson shows the Basque provinces without any boundary delineating the three in France from the four in Spain. And we get an explanation of the ancient fueros, laws that granted the Basques certain rights and privileges in exchange for loyalty to the kingdom of Castile.

But the rise in Euskera has not necessarily fed secessionist impulses. “The fact that you learn the language doesn’t mean that you’ll vote for a nationalist party,” says Xabier Monasterio, the director of pedagogy at the Gabriel Aresti school, which runs this barnetegi. Support for the conservative Basque Nationalist Party has stayed relatively constant over the past 10 years. So has the percentage of Basques who want independence from Spain (35 percent) over the past two decades.

“I love teaching Euskera, because it is the language of our people,” Monasterio says. “It’s who we are. It’s not better or worse than any other language—it’s just ours.” I think about his words during the last class, which we spend singing Basque songs. There are hymns to innocent love and odes to sailors, but the most moving song consists of just the words gurea da—“it’s ours”—repeated over and over. To the tune of “Hava Nagila.”

------------------------------------------------------------
Author: Lisa Abend
Original Source: Atlantic
Date Published: January/February 2009
Web Source: http://www.theatlantic.com/doc/200901/basque
Date Accessed Online: 2009-01-22


Tuesday, January 20, 2009

How should Obama reform health care?

In every industrialized nation, the movement to reform health care has begun with stories about cruelty. The Canadians had stories like the 1946 Toronto Globe and Mail report of a woman in labor who was refused help by three successive physicians, apparently because of her inability to pay. In Australia, a 1954 letter published in the Sydney Morning Herald sought help for a young woman who had lung disease. She couldn’t afford to refill her oxygen tank, and had been forced to ration her intake “to a point where she is on the borderline of death.” In Britain, George Bernard Shaw was at a London hospital visiting an eminent physician when an assistant came in to report that a sick man had arrived requesting treatment. “Is he worth it?” the physician asked. It was the normality of the question that shocked Shaw and prompted his scathing and influential 1906 play, “The Doctor’s Dilemma.” The British health system, he charged, was “a conspiracy to exploit popular credulity and human suffering.”

In the United States, our stories are like the one that appeared in the Times before Christmas. Starla Darling, pregnant and due for delivery, had just taken maternity leave from her factory job at Archway & Mother’s Cookie Company, in Ashland, Ohio, when she received a letter informing her that the company was going out of business. In three days, the letter said, she and almost three hundred co-workers would be laid off, and would lose their health-insurance coverage. The company was self-insured, so the employees didn’t have the option of paying for the insurance themselves—their insurance plan was being terminated.

“When I heard that I was losing my insurance, I was scared,” Darling told the Times. Her husband had been laid off from his job, too. “I remember that the bill for my son’s delivery in 2005 was about $9,000, and I knew I would never be able to pay that by myself.” So she prevailed on her midwife to induce labor while she still had insurance coverage. During labor, Darling began bleeding profusely, and needed a Cesarean section. Mother and baby pulled through. But the insurer denied Darling’s claim for coverage. The couple ended up owing more than seventeen thousand dollars.

The stories become unconscionable in any society that purports to serve the needs of ordinary people, and, at some alchemical point, they combine with opportunity and leadership to produce change. Britain reached this point and enacted universal health-care coverage in 1945, Canada in 1966, Australia in 1974. The United States may finally be there now. In 2007, fifty-seven million Americans had difficulty paying their medical bills, up fourteen million from 2003. On average, they had two thousand dollars in medical debt and had been contacted by a collection agency at least once. In part because of underpayment, half of American hospitals operated at a loss in 2007. Today, large numbers of employers are limiting or dropping insurance coverage in order to stay afloat, or simply going under—even hospitals themselves.

Yet wherever the prospect of universal health insurance has been considered, it has been widely attacked as a Bolshevik fantasy—a coercive system to be imposed upon people by benighted socialist master planners. People fear the unintended consequences of drastic change, the blunt force of government. However terrible the system may seem, we all know that it could be worse—especially for those who already have dependable coverage and access to good doctors and hospitals.

Many would-be reformers hold that “true” reform must simply override those fears. They believe that a new system will be far better for most people, and that those who would hang on to the old do so out of either lack of imagination or narrow self-interest. On the left, then, single-payer enthusiasts argue that the only coherent solution is to end private health insurance and replace it with a national insurance program. And, on the right, the free marketeers argue that the only coherent solution is to end public insurance and employer-controlled health benefits so that we can all buy our own coverage and put market forces to work.

Neither side can stand the other. But both reserve special contempt for the pragmatists, who would build around the mess we have. The country has this one chance, the idealist maintains, to sweep away our inhumane, wasteful patchwork system and replace it with something new and more rational. So we should prepare for a bold overhaul, just as every other Western democracy has. True reform requires transformation at a stroke. But is this really the way it has occurred in other countries? The answer is no. And the reality of how health reform has come about elsewhere is both surprising and instructive.

No example is more striking than that of Great Britain, which has the most socialized health system in the industrialized world. Established on July 5, 1948, the National Health Service owns the vast majority of the country’s hospitals, blood banks, and ambulance operations, employs most specialist physicians as salaried government workers, and has made medical care available to every resident for free. The system is so thoroughly government-controlled that, across the Atlantic, we imagine it had to have been imposed by fiat, by the coercion of ideological planners bending the system to their will.

But look at the news report in the Times of London on July 6, 1948, headlined “FIRST DAY OF HEALTH SERVICE.” You might expect descriptions of bureaucratic shock troops walking into hospitals, insurance-company executives and doctors protesting in the streets, patients standing outside chemist shops worrying about whether they can get their prescriptions filled. Instead, there was only a four-paragraph notice between an item on the King and Queen’s return from a holiday in Scotland and one on currency problems in Germany.

The beginning of the new national health service “was taking place smoothly,” the report said. No major problems were noted by the 2,751 hospitals involved or by patients arriving to see their family doctors. Ninety per cent of the British Medical Association’s members signed up with the program voluntarily—and found that they had a larger and steadier income by doing so. The greatest difficulty, it turned out, was the unexpected pent-up demand for everything from basic dental care to pediatric visits for hundreds of thousands of people who had been going without.

The program proved successful and lasting, historians say, precisely because it was not the result of an ideologue’s master plan. Instead, the N.H.S. was a pragmatic outgrowth of circumstances peculiar to Britain immediately after the Second World War. The single most important moment that determined what Britain’s health-care system would look like was not any policymaker’s meeting in 1945 but the country’s declaration of war on Germany, on September 3, 1939.

As tensions between the two countries mounted, Britain’s ministers realized that they would have to prepare not only for land and sea combat but also for air attacks on cities on an unprecedented scale. And so, in the days before war was declared, the British government oversaw an immense evacuation; three and a half million people moved out of the cities and into the countryside. The government had to arrange transport and lodging for those in need, along with supervision, food, and schooling for hundreds of thousands of children whose parents had stayed behind to join in the war effort. It also had to insure that medical services were in place—both in the receiving regions, whose populations had exploded, and in the cities, where up to two million war-injured civilians and returning servicemen were anticipated.

As a matter of wartime necessity, the government began a national Emergency Medical Service to supplement the local services. Within a period of months, sometimes weeks, it built or expanded hundreds of hospitals. It conducted a survey of the existing hospitals and discovered that essential services were either missing or severely inadequate—laboratories, X-ray facilities, ambulances, care for fractures and burns and head injuries. The Ministry of Health was forced to upgrade and, ultimately, to operate these services itself.

The war compelled the government to provide free hospital treatment for civilian casualties, as well as for combatants. In London and other cities, the government asked local hospitals to transfer some of the sick to private hospitals in the outer suburbs in order to make room for victims of the war. As a result, the government wound up paying for a large fraction of the private hospitals’ costs. Likewise, doctors received government salaries for the portion of their time that was devoted to the new wartime medical service. When the Blitz came, in September, 1940, vast numbers of private hospitals and clinics were destroyed, further increasing the government’s share of medical costs. The private hospitals and doctors whose doors were still open had far fewer paying patients and were close to financial ruin.

Churchill’s government intended the program to be temporary. But the war destroyed the status quo for patients, doctors, and hospitals alike. Moreover, the new system proved better than the old. Despite the ravages of war, the health of the population had improved. The medical and social services had reduced infant and adult mortality rates. Even the dental care was better. By the end of 1944, when the wartime medical service began to demobilize, the country’s citizens did not want to see it go. The private hospitals didn’t, either; they had come to depend on those government payments.

By 1945, when the National Health Service was proposed, it had become evident that a national system of health coverage was not only necessary but also largely already in place—with nationally run hospitals, salaried doctors, and free care for everyone. So, while the ideal of universal coverage was spurred by those horror stories, the particular system that emerged in Britain was not the product of socialist ideology or a deliberate policy process in which all the theoretical options were weighed. It was, instead, an almost conservative creation: a program that built on a tested, practical means of providing adequate health care for everyone, while protecting the existing services that people depended upon every day. No other major country has adopted the British system—not because it didn’t work but because other countries came to universalize health care under entirely different circumstances.

In France, in the winter of 1945, President de Gaulle was likewise weighing how to insure that his nation’s population had decent health care after the devastation of war. But the system that he inherited upon liberation had no significant public insurance or hospital sector. Seventy-five per cent of the population paid cash for private medical care, and many people had become too destitute to afford heat, let alone medications or hospital visits.

Long before the war, large manufacturers and unions had organized collective insurance funds for their employees, financed through a self-imposed payroll tax, rather than a set premium. This was virtually the only insurance system in place, and it became the scaffolding for French health care. With an almost impossible range of crises on its hands—food shortages, destroyed power plants, a quarter of the population living as refugees—the de Gaulle government had neither the time nor the capacity to create an entirely new health-care system. So it built on what it had, expanding the existing payroll-tax-funded, private insurance system to cover all wage earners, their families, and retirees. The self-employed were added in the nineteen-sixties. And the remainder of uninsured residents were finally included in 2000.

Today, Sécurité Sociale provides payroll-tax-financed insurance to all French residents, primarily through a hundred and forty-four independent, not-for-profit, local insurance funds. The French health-care system has among the highest public-satisfaction levels of any major Western country; and, compared with Americans, the French have a higher life expectancy, lower infant mortality, more physicians, and lower costs. In 2000, the World Health Organization ranked it the best health-care system in the world. (The United States was ranked thirty-seventh.)

Switzerland, because of its wartime neutrality, escaped the damage that drove health-care reform elsewhere. Instead, most of its citizens came to rely on private commercial health-insurance coverage. When problems with coverage gaps and inconsistencies finally led the nation to pass its universal-coverage law, in 1994, it had no experience with public insurance. So the country—you get the picture now—built on what it already had. It required every resident to purchase private health insurance and provided subsidies to limit the cost to no more than about ten per cent of an individual’s income.

Every industrialized nation in the world except the United States has a national system that guarantees affordable health care for all its citizens. Nearly all have been popular and successful. But each has taken a drastically different form, and the reason has rarely been ideology. Rather, each country has built on its own history, however imperfect, unusual, and untidy.

Social scientists have a name for this pattern of evolution based on past experience. They call it “path-dependence.” In the battles between Betamax and VHS video recorders, Mac and P.C. computers, the QWERTY typewriter keyboard and alternative designs, they found that small, early events played a far more critical role in the market outcome than did the question of which design was better. Paul Krugman received a Nobel Prize in Economics in part for showing that trade patterns and the geographic location of industrial production are also path-dependent. The first firms to get established in a given industry, he pointed out, attract suppliers, skilled labor, specialized financing, and physical infrastructure. This entrenches local advantages that lead other firms producing similar goods to set up business in the same area—even if prices, taxes, and competition are stiffer. “The long shadow cast by history over location is apparent at all scales, from the smallest to the largest—from the cluster of costume jewelry firms in Providence to the concentration of 60 million people in the Northeast Corridor,” Krugman wrote in 1991.

With path-dependent processes, the outcome is unpredictable at the start. Small, often random events early in the process are “remembered,” continuing to have influence later. And, as you go along, the range of future possibilities gets narrower. It becomes more and more unlikely that you can simply shift from one path to another, even if you are locked in on a path that has a lower payoff than an alternate one.
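The dynamics described above—early random events that get “remembered,” with the range of possibilities narrowing over time—can be illustrated with a toy model. What follows is a minimal, hypothetical sketch (not from the article) of a Polya-urn process, a standard way of modeling increasing returns: each newcomer chooses between two equally good designs with probability proportional to each design’s installed base, so the design that happens to pull ahead early tends to lock in.

```python
import random

def polya_urn(steps, seed):
    """Polya-urn model of increasing returns: each newcomer adopts
    design "A" or "B" with probability proportional to its current
    installed base, and the chosen base then grows by one."""
    rng = random.Random(seed)
    counts = {"A": 1, "B": 1}  # the two designs start out exactly even
    for _ in range(steps):
        total = counts["A"] + counts["B"]
        if rng.random() < counts["A"] / total:
            counts["A"] += 1
        else:
            counts["B"] += 1
    return counts["A"] / (counts["A"] + counts["B"])  # A's final share

# Re-running history with different random seeds produces very
# different, but internally stable, outcomes: the early coin flips
# are "remembered," even though neither design is better.
shares = [round(polya_urn(10_000, seed), 2) for seed in range(5)]
print(shares)
```

Run it a few times with different seeds and the same mechanism yields winners-take-most in some histories and near-even splits in others—a small-scale version of Betamax versus VHS, or of why one country’s health system bears little resemblance to another’s.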

The political scientist Paul Pierson observed that this sounds a lot like politics, and not just economics. When a social policy entails major setup costs and large numbers of people who must devote time and resources to developing expertise, early choices become difficult to reverse. And if the choices involve what economists call “increasing returns”—where the benefits of a policy increase as more people organize their activities around it—those early decisions become self-reinforcing. America’s transportation system developed this way. The century-old decision to base it on gasoline-powered automobiles led to a gigantic manufacturing capacity, along with roads, repair facilities, and fuelling stations that now make it exceedingly difficult to do things differently.

There’s a similar explanation for our employment-based health-care system. Like Switzerland, America made it through the war without damage to its domestic infrastructure. Unlike Switzerland, we sent much of our workforce abroad to fight. This led the Roosevelt Administration to impose national wage controls to prevent inflationary increases in labor costs. Employers who wanted to compete for workers could, however, offer commercial health insurance. That spurred our distinctive reliance on private insurance obtained through one’s place of employment—a source of troubles (for employers and the unemployed alike) that we’ve struggled with for six decades.

Some people regard the path-dependence of our policies as evidence of weak leadership; we have, they charge, allowed our choices to be constrained by history and by vested interests. But that’s too simple. The reality is that leaders are held responsible for the hazards of change as well as for the benefits. And the history of master-planned transformation isn’t exactly inspiring. The familiar horror story is Mao’s Great Leap Forward, where the collectivization of farming caused some thirty million deaths from famine. But, to take an example from our own era, consider Defense Secretary Donald Rumsfeld’s disastrous reinvention of modern military operations for the 2003 invasion of Iraq, in which he insisted on deploying far fewer ground troops than were needed. Or consider a health-care example: the 2003 prescription-drug program for America’s elderly.

This legislation aimed to expand the Medicare insurance program in order to provide drug coverage for some ten million elderly Americans who lacked it, averaging fifteen hundred dollars per person annually. The White House, congressional Republicans, and the pharmaceutical industry opposed providing this coverage through the existing Medicare public-insurance program. Instead, they created an entirely new, market-oriented program that offered the elderly an online choice of competing, partially subsidized commercial drug-insurance plans. It was, in theory, a reasonable approach. But it meant that twenty-five million Americans got new drug plans, and that all sixty thousand retail pharmacies in the United States had to establish contracts and billing systems for those plans.

On January 1, 2006, the program went into effect nationwide. The result was chaos. There had been little realistic consideration of how millions of elderly people with cognitive difficulties, chronic illness, or limited English would manage to select the right plan for themselves. Even the savviest struggled to figure out how to navigate the choices: insurance companies offered 1,429 prescription-drug plans across the country. People arrived at their pharmacy only to discover that they needed an insurance card that hadn’t come, or that they hadn’t received pre-authorization for their drugs, or had switched to a plan that didn’t cover the drugs they took. Tens of thousands were unable to get their prescriptions filled, many for essential drugs like insulin, inhalers, and blood-pressure medications. The result was a public-health crisis in thirty-seven states, which had to provide emergency pharmacy payments for the frail. We will never know how many were harmed, but it is likely that the program killed people.

This is the trouble with the lure of the ideal. Over and over in the health-reform debate, one hears serious policy analysts say that the only genuine solution is to replace our health-care system (with a single-payer system, a free-market system, or whatever); anything else is a missed opportunity. But this is a siren song.

Yes, American health care is an appallingly patched-together ship, with rotting timbers, water leaking in, mercenaries on board, and fifteen per cent of the passengers thrown over the rails just to keep it afloat. But hundreds of millions of people depend on it. The system provides more than thirty-five million hospital stays a year, sixty-four million surgical procedures, nine hundred million office visits, three and a half billion prescriptions. It represents a sixth of our economy. There is no dry-docking health care for a few months, or even for an afternoon, while we rebuild it. Grand plans admit no possibility of mistakes or failures, or the chance to learn from them. If we get things wrong, people will die. This doesn’t mean that ambitious reform is beyond us. But we have to start with what we have.

That kind of constraint isn’t unique to the health-care system. A century ago, the modern phone system was built on a structure that came to be called the P.S.T.N., the Public Switched Telephone Network. This automated system connects our phone calls twenty-four hours a day, and over time it has had to be upgraded. But you can’t turn off the phone system and do a reboot. It’s too critical to too many. So engineers have had to add on one patch after another.

The P.S.T.N. is probably the shaggiest, most convoluted system around; it contains tens of millions of lines of software code. Given a chance for a do-over, no self-respecting engineer would create anything remotely like it. Yet this jerry-rigged system has provided us with 911 emergency service, voice mail, instant global connectivity, mobile-phone lines, and the transformation from analog to digital communication. It has also been fantastically reliable, designed to have no more than two hours of total downtime every forty years. As a system that can’t be turned off, the P.S.T.N. may be the ultimate in path-dependence. But that hasn’t prevented dramatic change. The structure may not have undergone revolution; the way it functions has. The P.S.T.N. has made the twenty-first century possible.

So accepting the path-dependent nature of our health-care system—recognizing that we had better build on what we’ve got—doesn’t mean that we have to curtail our ambitions. The overarching goal of health-care reform is to establish a system that has three basic attributes. It should leave no one uncovered—medical debt must disappear as a cause of personal bankruptcy in America. It should no longer be an economic catastrophe for employers. And it should hold doctors, nurses, hospitals, drug and device companies, and insurers collectively responsible for making care better, safer, and less costly.

We cannot swap out our old system for a new one that will accomplish all this. But we can build a new system on the old one. On the start date for our new health-care system—on, say, January 1, 2011—there need be no noticeable change for the vast majority of Americans who have dependable coverage and decent health care. But we can construct a kind of lifeboat alongside it for those who have been left out or dumped out, a rescue program for people like Starla Darling.

In designing this program, we’ll inevitably want to build on the institutions we already have. That precept sounds as if it would severely limit our choices. But our health-care system has been a hodgepodge for so long that we actually have experience with all kinds of systems. The truth is that American health care has been more flotilla than ship. Our veterans’ health-care system is a program of twelve hundred government-run hospitals and other medical facilities all across the country (just like Britain’s). We could open it up to other people. We could give people a chance to join Medicare, our government insurance program (much like Canada’s). Or we could provide people with coverage through the benefits program that federal workers already have, a system of private-insurance choices (like Switzerland’s).

These are all established programs, each with advantages and disadvantages. The veterans’ system has low costs, one of the nation’s best information-technology systems for health care, and quality of care that (despite what you’ve heard) has, in recent years, come to exceed the private sector’s on numerous measures. But it has a tightly limited choice of clinicians—you can’t go to see any doctor you want, and the nearest facility may be far away from where you live. Medicare allows you to go to almost any private doctor or hospital you like, and has been enormously popular among its beneficiaries, but it costs about a third more per person and has had a hard time getting doctors and hospitals to improve the quality and safety of their care. Federal workers are entitled to a range of subsidized private-insurance choices, but insurance companies have done even less than Medicare to contain costs and most have done little to improve health care (although there are some striking exceptions).

Any of the programs could allow us to offer a starting group of Americans—the uninsured under twenty-five years of age, say—the chance to join within weeks. With time and experience, the programs could be made available to everyone who lacks coverage. The current discussion between the Obama Administration and congressional leaders seems to center on opening up the federal workers’ insurance options and Medicare (or the equivalent) this way, with subsidized premiums for those with low incomes. The costs have to be dealt with. The leading proposals would try to hold down health-care spending in various ways (by, for example, requiring better management of patients with expensive chronic diseases); employers would have to pay some additional amount in taxes if they didn’t provide health insurance for their employees. There’s nothing easy about any of this. But, if we accept it, we’ll all have a lifeboat when we need one.

It won’t necessarily be clear what the final system will look like. Maybe employers will continue to slough off benefits, and that lifeboat will grow to become the entire system. Or maybe employers will decide to strengthen their benefits programs to attract employees, and American health care will emerge as a mixture of the new and the old. We could have Medicare for retirees, the V.A. for veterans, employer-organized insurance for some workers, federally organized insurance for others. The system will undoubtedly be messier than anything an idealist would devise. But the results would almost certainly be better.

Massachusetts, where I live and work, recently became the first state to adopt a system of universal health coverage for its residents. It didn’t organize a government takeover of the state’s hospitals or insurance companies, or force people into a new system of state-run clinics. It built on what existed. On July 1, 2007, the state began offering an online choice of four private insurance plans for people without health coverage. The cost is zero for the poor; for the rest, it is limited to no more than about eight per cent of income. The vast majority of families, who had insurance through work, didn’t notice a thing when the program was launched. But those who had no coverage had to enroll in a plan or incur a tax penalty.

The results have been remarkable. After a year, 97.4 per cent of Massachusetts residents had coverage, and the remaining gap continues to close. Despite the requirement that individuals buy insurance and that employers either provide coverage or pay a tax, the program has remained extremely popular. Repeated surveys have found that at least two-thirds of the state’s residents support the reform.

The Massachusetts plan didn’t do anything about medical costs, however, and, with layoffs accelerating, more people require subsidized care than the state predicted. Insurance premiums continue to rise here, just as they do elsewhere in the country. Many residents also complain that eight per cent of their income is too much to pay for health insurance, even though, on average, premiums amount to twice that much. The experience has shown national policymakers that they will have to be serious about reducing costs.

For all that, the majority of state residents would not go back to the old system. I’m among them. For years, about one in ten of my patients—I specialize in cancer surgery—had no insurance. Even though I’d waive my fee, they struggled to pay for their tests, medications, and hospital stay.

I once took care of a nineteen-year-old college student who had maxed out her insurance coverage. She had a treatable but metastatic cancer. But neither she nor her parents could afford the radiation therapy that she required. I made calls to find state programs, charities—anything that could help her—to no avail. She put off the treatment for almost a year because she didn’t want to force her parents to take out a second mortgage on their home. But eventually they had to choose between their daughter and their life’s savings.

For the past year, I haven’t had a single Massachusetts patient who has had to ask how much the necessary tests will cost; not one who has told me he needed to put off his cancer operation until he found a job that provided insurance coverage. And that’s a remarkable change: a glimpse of American health care without the routine cruelty.

It will be no utopia. People will still face co-payments and premiums. There may still be agonizing disputes over coverage for non-standard treatments. Whatever the system’s contours, we will still find it exasperating, even disappointing. We’re not going to get perfection. But we can have transformation—which is to say, a health-care system that works. And there are ways to get there that start from where we are.

------------------------------------------------------------
Author: Atul Gawande
Original Source: New Yorker
Date Published: January 26, 2009
Web Source: http://www.newyorker.com/reporting/2009/01/26/090126fa_fact_gawande
Date Accessed Online: 2009-01-20
