April 29, 2009

Medical Moral Hazards

A new report by the Institute of Medicine (IOM) calls for doctors to stop taking gifts from pharmaceutical companies, since such gifts could create all sorts of conflicts of interest. Here's an excerpt from the NYT:
"Drug companies spend billions of dollars wooing doctors — more than they spend on research or consumer advertising. Much of this money is spent on giving doctors free drug samples, free food, free medical refresher courses and payments for marketing lectures. The institute’s report recommends that nearly all of these efforts end.
The largest drug makers agreed last year to stop giving doctors pens, pads and other gifts of small value, but company executives have defended other marketing tactics as valuable to both doctors and patients. Medical device and biotechnology companies have yet to swear off free trips or even pens.
A 2007 survey found that more than three-quarters of doctors accepted free drug samples and free food, more than a third got financial help for medical refresher courses and more than a quarter were paid for giving marketing lectures and enrolling patients in clinical trials."
I don't see how samples, pens, and paper would create a conflict of interest, i.e., an incentive for doctors to go against their professional judgment. But perhaps much larger payments (and trips) would. There's also the issue of physicians consulting for--and investing in--the drug and biotechnology companies that benefit from their own prescriptions. Strangely, this is not addressed in the report. But it is addressed in this recent assessment from the American Medical Student Association, which claims it has become a real problem, particularly at Harvard. Excerpts from the article:
"In a first-year pharmacology class at Harvard Medical School, Matt Zerden grew wary as the professor promoted the benefits of cholesterol drugs and seemed to belittle a student who asked about side effects.
Mr. Zerden later discovered something by searching online that he began sharing with his classmates. The professor was not only a full-time member of the Harvard Medical faculty, but a paid consultant to 10 drug companies, including five makers of cholesterol treatment.

Mr. Zerden’s minor stir four years ago has lately grown into a full-blown movement by more than 200 Harvard Medical School students and sympathetic faculty, intent on exposing and curtailing the industry influence in their classrooms and laboratories, as well as in Harvard’s 17 affiliated teaching hospitals and institutes.
The students argue, for example, that Harvard should be embarrassed by the F grade it recently received from the American Medical Student Association, a national group that rates how well medical schools monitor and control drug industry money.
Harvard Medical School’s peers received much higher grades, ranging from the A for the University of Pennsylvania, to B’s received by Stanford, Columbia and New York University, to the C for Yale."
The question, of course, is where to draw the line. One might argue, as the IOM seems to, that any payment or gift is inappropriate, since it creates a habit--a slippery slope--that may gradually grow into a serious moral hazard as physicians begin taking more liberties down the line. In any case, if state legislatures and medical associations establish clear guidelines on what is and is not appropriate, then doctors and medical schools might draw lines that make sense to nearly everyone. It's a hard job, but it's one ethicists can surely step up to.

Update (Dec. 7, 2010): Harvard Medical School strengthened its conflict of interest (COI) policy a few months ago:
"Among many provisions, the new policy includes a streamlined central system for reporting faculty financial interests with industry; requires the public disclosure of certain faculty financial interests; bans faculty from accepting corporate gifts, including travel and meals; and ends faculty participation in industry speakers bureaus, making it one of the most stringent of any medical college in the country. In addition, faculty disclosures will be made available to the public on the Harvard Catalyst website. "
 As of this posting, I could not find faculty disclosures at the link above.

April 26, 2009

Employee Free Choice Act

The WSJ's Thomas Frank makes a strong case for the EFCA, which would strengthen the NLRA (or Wagner Act). The NLRA has been continually gutted since its passage in 1935, first by the Supreme Court's 1938 Mackay decision (see here for a defense), which held that companies could permanently replace striking workers, and then by the Taft-Hartley Act of 1947. The EFCA would make it easier for workers to begin collective bargaining and to form unions. But it looks like the GOP is set on blocking it with a Senate filibuster. Here's some info on the bill. And an excerpt from Frank's article:

"After massive lobbying both by labor and by business, it appears that the Employee Free Choice Act (EFCA), which, as it now stands, would allow workers to organize in many cases merely by signing cards instead of holding elections, will not have the 60 votes required to get past a Republican filibuster in the Senate.

Now, to be pro-labor is to resign yourself to years of failures and defeats, with few tea parties along the way for consolation. Even so, the setback on EFCA has to be a bitter one. Union members worked hard to elect Barack Obama and the Democratic Congress, as they did to put Jimmy Carter and Bill Clinton in the White House. And now, just as in those previous two periods of Democratic governance, labor's friends are having trouble enacting basic labor-law reforms.

To understand why we need new rules governing unionization, look no further than yesterday's New York Times, where Steven Greenhouse told the story of a Louisville, Ky., hospital whose nurses tried to form a union but failed after they were reportedly threatened with losing their benefits among other things.

Such practices are commonplace and well-documented by Human Rights Watch and others. But labor's case never seemed to hit home. Instead, conservatives have carried the day, playing on lurid stereotypes to hint that intimidation by unions is the real worry and that EFCA spells the end of secret ballots in the workplace and hence of democracy itself."

Of course, there are cases of large companies--REI, Starbucks, Whole Foods, and Costco, for example--that treat workers well without much need for collective bargaining. Still, it would seem constitutionally unjust to actively oppose legislation that would protect this basic right. Even Adam Smith recognized the importance of unions in keeping workers from being exploited:
"The masters [i.e., employers], being fewer in number, can combine much more easily; and the law, besides, authorises, or at least does not prohibit their combinations, while it prohibits those of the workmen. We have no acts of parliament against combining to lower the price of work; but many against combining to raise it. In all such disputes the masters can hold out much longer. A landlord, a farmer, a master manufacturer, or merchant, though they did not employ a single workman, could generally live a year or two upon the stocks which they have already acquired. Many workmen could not subsist a week, few could subsist a month, and scarce any a year without employment. In the long-run the workman may be as necessary to his master as his master is to him, but the necessity is not so immediate." (The Wealth of Nations, volume I, ch. 8, paragraph 12).
We also find the seed of the concept of alienation in Smith (later more fully developed by Marx) when he argues that unions are necessary to keep labor from becoming so divided into ever simpler (and cheaper) tasks that workers would become "as stupid and ignorant as it is possible for a human creature to become."

This site provides more information on the EFCA, including a tool for sending an email to one's congressperson and senators urging them to support the bill.

April 23, 2009

My Forthcoming Chronicle Review Piece [and Sternberg's Current One]

The Chronicle of Higher Education is publishing an article of mine on the challenges of teaching and disseminating ethics in business school. Watch for it. It should be out sometime next month in the Chronicle Review section. I'd post an excerpt, but I'm bound by copyright until 30 days after publication.

However, the current issue contains an interesting and related article by Robert Sternberg, Tufts Dean of Arts & Sciences, offering a "new model for teaching ethical behavior." While it might not strike everyone as completely new, it's a decent attempt to define what counts as ethical thinking and behaving.

Sternberg uses his background in psychology to describe what he takes to be the eight steps commonly undertaken in "ethical behavior." As the title suggests, this isn't merely a decision-making tool but a model for teaching students to grasp the process of ethical thinking itself. It's essentially a map of the kind of thinking one must go through to act ethically. Unfortunately, if one fails to recognize that a situation raises an ethical issue in the first place, the steps are useless. But as Sternberg rightly suggests, his model--if accurate--should at least help us better understand why so many smart and well-educated people continually fail to think through them.

You can read it via subscription. Here's an excerpt:

"In 1970, Bibb Latané and John Darley opened up a new field of research on bystander intervention. They showed that, contrary to expectations, bystanders intervene when someone is in trouble only in very limited circumstances. For example, if they think that someone else might intervene, bystanders tend to stay out of the situation. Latané and Darley even showed that divinity students who were about to lecture on the parable of the good Samaritan were no more likely than other bystanders to help a person in distress.

Drawing in part on Latané and Darley's model of bystander intervention, I've constructed a model of ethical behavior that applies to a variety of ethical problems. The model's basic premise is that ethical behavior is far harder to display than one would expect simply on the basis of what we learn from parents, school, and religious training. To intervene, to do good, individuals must go through a series of steps, and unless all of the steps are completed, people are not likely to behave ethically, regardless of the ethics training or moral education they have received and the level of other types of relevant skills they might possess, such as critical or creative thinking."

April 21, 2009

A New Corporate Social Contract

That's what the Center for American Progress' Senior Fellow Matt Miller argues for. Here's the lead:
"Given the mess into which Wall Street’s poor stewardship has sunk the US, the phrase “financial industry statesman” will be seen by the American public as a laughable oxymoron for some time. But there is a real risk that the justified hit to Wall Street’s reputation will taint the standing of business more generally unless non-financial leaders wake up and take unconventional action. The perils of timidity at this moment are high. If business as a whole (and not just finance) is discredited by today’s meltdown, the drive to renew American capitalism could give rise to steps that burden the US economy for years.

To avoid this fate, far-sighted business leaders need to weigh in now on three subjects on which they have been notably absent: executive pay; the need for an updated “social contract” that fits 21st-century realities; and a strategy to make service jobs that cannot be offshored a path to the middle class. These are no longer political questions that can be left to Washington trade associations or viewed as a distraction from the “real work” of running one’s business, because failure to address them will fuel a backlash that affects every company’s licence to operate. Let us take them in turn."

It's nice to hear this argument return. It's one I made myself, in somewhat more depth, by comparing utilitarian and contractarian arguments on outsourcing in a piece that appeared a few years ago in EJBO. Hopefully more voices will keep joining this chorus.

April 20, 2009

Proof of Pudding

The WashPost reports that Chrysler turned down new government bailout money, opting for more expensive private loans in order to avoid the government's caps on executive compensation.

Unfortunately, this report is rather flimsy, full of vague hearsay and unnamed sources whose claims are denied by the company. But if it turns out to be substantiated (and it looks like it could be if Chrysler goes forward with the more expensive loan offers), it will be quite damning. For it wouldn't merely be proof of greed; it would also violate the promissory agreement to uphold the interests of shareholders--unless, of course, the company could plausibly argue that it would otherwise lose irreplaceable key executives. But that argument isn't very convincing, since the global economic crisis has significantly limited the number of positions where one could stand to "earn" more.

And it would substantiate what the WSJ's Thomas Frank said about the terms of the original bailout plans under Bush and Secretary Paulson (which I commented on in February):
"If the federal bank bailout were to involve a real crackdown on executive compensation, the Bush administration reportedly feared, it might have driven banks away from taking the deal altogether. Bankers would prefer global disaster to a pay cut, in other words, and this obscene calculation needed to be taken into account. Public outrage was apparently nothing by comparison."
Old habits die hard.

Does Government as Majority Shareholder Create a Conflict of Interest?

Yesterday's NYT had a story about how the administration is considering converting the preferred stock it holds in banks (in effect, loans) into common stock. This would reduce the banks' debt load and make them more solvent (at least on paper) by "recapitalizing" them.

But there seems to be some fear that this could create a conflict of interest for the government, as the article concludes with this cryptic statement:
"The Treasury would also become a major shareholder, and perhaps even the controlling shareholder, in some financial institutions. That could lead to increasingly difficult conflicts of interest for the government, as policy makers juggle broad economic objectives with the narrower responsibility to maximize the value of their bank shares on behalf of taxpayers. Those are exactly the kinds of conflicts that Treasury and Fed officials were trying to avoid when they first began injecting capital into banks last fall."
The only potential conflict of interest I can imagine here is that the government-as-shareholder might refrain from voting in any way that could give a single bank an advantage over its competitors, since doing so would stifle competition--even when such a vote would maximize the value of its shares.

Still, I don't see how a common shareholder, even a majority-holding one, could effect much change without being an activist shareholder. Such bully-shareholders are often liabilities. And the government, because of its interest in stability, would be that much less likely to be one.

This smells suspiciously like unfounded paranoia about a socialist bogeyman.

April 19, 2009

Is the Brain Really Un-Green?

Here's a provocative NYT Mag piece arguing that we have a biological bias against so-called "green thinking." It seems to me, though, that myriad past cultures have succeeded in maintaining a sustainable, harmonious relationship with their environment--many Native American tribes such as the Hopi, Kwakiutl, and Iroquois, for example. These seem like counterexamples to the bio-reductive thesis. I would argue that the current culture of materialism and consumption is far more to blame than any genetically determined un-green predisposition.

In any case, here is an interesting, if characteristically bleak, passage:

"Cognitive psychologists now broadly accept that we have different systems for processing risks. One system works analytically, often involving a careful consideration of costs and benefits. The other experiences risk as a feeling: a primitive and urgent reaction to danger, usually based on a personal experience, that can prove invaluable when (for example) we wake at night to the smell of smoke.

There are some unfortunate implications here. In analytical mode, we are not always adept at long-term thinking; experiments have shown a frequent dislike for delayed benefits, so we undervalue promised future outcomes. (Given a choice, we usually take $10 now as opposed to, say, $20 two years from now.) Environmentally speaking, this means we are far less likely to make lifestyle changes in order to ensure a safer future climate. Letting emotions determine how we assess risk presents its own problems. Almost certainly, we underestimate the danger of rising sea levels or epic droughts or other events that we’ve never experienced and seem far away in time and place.

Worse, Weber’s research seems to help establish that we have a “finite pool of worry,” which means we’re unable to maintain our fear of climate change when a different problem — a plunging stock market, a personal emergency — comes along. We simply move one fear into the worry bin and one fear out. Weber described what she calls a “single-action bias.” Prompted by a distressing emotional signal, we buy a more efficient furnace or insulate our attic or vote for a green candidate — a single action that effectively diminishes global warming as a motivating factor. And that leaves us where we started."
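To put a rough number on the discounting described in the excerpt: preferring $10 today over $20 two years from now implies an annual discount rate of more than 40 percent. Here's a minimal sketch of that arithmetic, assuming simple exponential discounting (my assumption; the quoted passage doesn't specify a model):

# Implied annual discount rate for preferring $10 now over $20 in two years,
# assuming exponential discounting: 10 >= 20 / (1 + r)**2.
now_amount, later_amount, years = 10.0, 20.0, 2
r = (later_amount / now_amount) ** (1 / years) - 1
print(f"implied annual discount rate: at least {r:.1%}")  # roughly 41.4%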

Let's start thinking about solutions. I'd say we should begin by spending a whole lot more on education and by economically incentivizing green behavior.

April 8, 2009

Back From the Mouldering Archival Depths

Is a biting APA book review I wrote a few years ago (in New York Review style) on the ethics of publishing books by public intellectuals. It's on Peter Singer's One World: The Ethics of Globalization and Richard Posner's Public Intellectuals: A Study of Decline. Starts on p. 195: The Intellectual's New Clothes.

All APA newsletters are now archived in PDFs here.

New Projects

This summer, the Chronicle Review will be publishing a new article of mine on the ironic challenge of teaching ethics in business school. It's a little like Edmundson's classic Harper's piece on the liberal arts, but drawn from my experiences in business school.

Which leads me to my other current project. Many are wondering why there is a dearth of ethics in business. The short answer is that business is about maximizing profit, which can lead to greed and corruption. But the longer answers--about why ethics is lacking in business school curricula and academic research--might lead to real solutions. The New York Times had an interesting piece on this recently. It quoted Rakesh Khurana, a professor at Harvard Business School, who recently published an excellent book describing the problem from a socio-historical perspective.

He doesn't provide much of a solution, but he does argue that management needs an ethos (beyond maximizing profit), akin to that of medicine and law, to justify its presence in higher education. My own view is that ethical business creates social value. I have an edited book appearing on this in the next month or so. I'll post the blurb here as soon as it's online.

So I am now preparing an article for the Academy of Management Learning & Education (AMLE), which has accepted the proposal, arguing that one of the reasons ethical curricula are lacking in business schools is that ethical research is almost entirely absent from the top management journals. And that's essentially because those journals are exclusively empirical. The AMLE is a rare exception in that it does publish a priori arguments, but it has been more of a second-tier journal (although this is changing, as its latest impact-factor ranking puts it in 7th place).

Yet the AMLE's mission statement explicitly acknowledges only what is commonly referred to in the business literature as qualitative or quantitative research, both of which are purely empirical approaches. According to this seven-year study of the 1990s, the journals widely considered to be at the top of the field are almost all entirely empirical:

Academy of Management Journal

Academy of Management Review

Administrative Science Quarterly

Journal of Applied Psychology

Organizational Behavior and Human Decision Processes

Strategic Management Journal

Personnel Psychology

And the main theory journal among them (AMR) states that it will publish only "testable knowledge-based claims." Unfortunately, that excludes almost everything that counts as ethics, which is essentially a conceptual, a priori discipline, akin to mathematics, law, and philosophy. We wouldn't require, for example, that theses on the nature of justice or logic be empirically testable. But that doesn't mean they don't count as knowledge-based.

Until top management journals open their pages to a priori arguments on the ethical nature and mission of business, there will be a dearth of ethics in business schools. For the top business schools, which are a model to the rest, are interested in hiring academics who publish in the top journals. Thus those journals have a responsibility as gatekeepers to get the ball rolling.

In the article, I'll offer suggestions on what we might consider the moral mission of business to be. The working title is: "Beyond Empiricism: Restoring the Ethical Promise of Management Education."

I'll be giving a paper at the Eastern Academy of Management (a teaching case on capping executive compensation) in Hartford, May 13-16. So if any readers will be attending, it might be fun to meet while we're there. I'll hopefully have some copies of my book by then, too. Post a comment below or email me at friedlandj@gmail.com.

April 3, 2009

An Alternative View of the Puppetmasters

David Brooks says it wasn't so much greed as stupidity:

Jerry Z. Muller wrote an indispensable version of the stupidity narrative in an essay called “Our Epistemological Depression” in The American magazine. What’s new about this crisis, he writes, is the central role of “opacity and pseudo-objectivity.” Banks got too big to manage. Instruments got too complex to understand. Too many people were good at math but ignorant of history.

The greed narrative leads to the conclusion that government should aggressively restructure the financial sector. The stupidity narrative is suspicious of that sort of radicalism. We’d just be trading the hubris of Wall Street for the hubris of Washington. The stupidity narrative suggests we should preserve the essential market structures, but make them more transparent, straightforward and comprehensible. Instead of rushing off to nationalize the banks, we should nurture and recapitalize what’s left of functioning markets.

Both schools agree on one thing, however. Both believe that banks are too big. Both narratives suggest we should return to the day when banks were focused institutions — when savings banks, insurance companies, brokerages and investment banks lived separate lives.

We can agree on that reform. Still, one has to choose a guiding theory. To my mind, we didn’t get into this crisis because inbred oligarchs grabbed power. We got into it because arrogant traders around the world were playing a high-stakes game they didn’t understand.
I don't think anyone is arguing for permanent nationalization of the banks. Sweden, for example, nationalized its banks only as a temporary transition out of crisis. Regardless, it does seem that a substantial amount of willful blindness was involved here. And what made that possible was the promise of great wealth. Kenneth Goodpaster describes this classic phenomenon, which he calls "teleopathy"--the unbalanced pursuit of purpose--in his book Conscience and Corporate Culture. It comprises three elements:
1. Fixation or singleness of purpose under stress
2. Rationalization
3. Detachment
So Brooks is right that part of the problem was a kind of stupidity or lack of awareness. But a root cause of that blindness was also an attitude of moral detachment continually reinforced by the professional culture--and indeed by the myriad deliberate daily choices of its members in high finance to look the other way when their consciences should have been ringing the ethical alarm bells. Unfortunately, years of reinforced teleopathy must have reduced those bells to little more than a faint whisper in many an ear.

Interestingly, one can detect a slight feeling of discomfort in the young interviewer asking Jim Cramer for advice in the damning segments recently aired by Jon Stewart. Cramer seems to sense it a little, saying deferentially that he would never say such things on television. So he seems more desensitized than the rather "green" interviewer seeking counsel, which would make sense, since Cramer is much older and thus more fully molded by the teleopathic environment he was evidently weaned on during his years running a hedge fund.

April 2, 2009

Robin Hood in Reverse

Joseph Stiglitz just published a devastating critique of Geithner's bailout proposal in the NYT. A highlight:
What the Obama administration is doing is far worse than nationalization: it is ersatz capitalism, the privatizing of gains and the socializing of losses. It is a “partnership” in which one partner robs the other. And such partnerships — with the private sector in control — have perverse incentives, worse even than the ones that got us into the mess.
And another:

In theory, the administration’s plan is based on letting the market determine the prices of the banks’ “toxic assets” — including outstanding house loans and securities based on those loans. The reality, though, is that the market will not be pricing the toxic assets themselves, but options on those assets.

The two have little to do with each other. The government plan in effect involves insuring almost all losses. Since the private investors are spared most losses, then they primarily “value” their potential gains. This is exactly the same as being given an option.

Consider an asset that has a 50-50 chance of being worth either zero or $200 in a year’s time. The average “value” of the asset is $100. Ignoring interest, this is what the asset would sell for in a competitive market. It is what the asset is “worth.” Under the plan by Treasury Secretary Timothy Geithner, the government would provide about 92 percent of the money to buy the asset but would stand to receive only 50 percent of any gains, and would absorb almost all of the losses. Some partnership!
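To make the asymmetry in Stiglitz's example concrete, here's a minimal back-of-the-envelope sketch. It assumes, since the excerpt doesn't spell out the details, that the government's 92 percent stake is effectively a non-recourse loan, so the private investor's downside is capped at its $8 while any upside over the $100 purchase price is split 50/50:

# Back-of-the-envelope sketch of the payoffs in Stiglitz's example, assuming
# the government's 92% is a non-recourse loan and upside is split 50/50.
purchase_price = 100.0
private_stake = 0.08 * purchase_price    # $8 put up by the private investor
gov_stake = 0.92 * purchase_price        # $92 put up by the government

def payoffs(asset_value):
    if asset_value > purchase_price:     # upside: gains split 50/50
        gain = asset_value - purchase_price
        return 0.5 * gain, 0.5 * gain
    loss = purchase_price - asset_value  # downside: investor loses at most $8
    return -min(loss, private_stake), -(loss - min(loss, private_stake))

for value in (200.0, 0.0):               # the two equally likely outcomes
    p, g = payoffs(value)
    print(f"asset worth ${value:.0f}: private {p:+.0f}, government {g:+.0f}")

exp_private = sum(0.5 * payoffs(v)[0] for v in (200.0, 0.0))
exp_gov = sum(0.5 * payoffs(v)[1] for v in (200.0, 0.0))
print(f"expected: private {exp_private:+.0f} on ${private_stake:.0f}, "
      f"government {exp_gov:+.0f} on ${gov_stake:.0f}")

On those assumptions, the private investor expects roughly +$21 on an $8 stake, while the government expects about -$21 on its $92--privatized gains and socialized losses in miniature.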
Interestingly, Stiglitz seems to agree with Matt Taibbi's more colorful and candid assessment in Rolling Stone. A highlight:
The mistake most people make in looking at the financial crisis is thinking of it in terms of money, a habit that might lead you to look at the unfolding mess as a huge bonus-killing downer for the Wall Street class. But if you look at it in purely Machiavellian terms, what you see is a colossal power grab that threatens to turn the federal government into a kind of giant Enron — a huge, impenetrable black box filled with self-dealing insiders whose scheme is the securing of individual profits at the expense of an ocean of unwitting involuntary shareholders, previously known as taxpayers.
And in case you missed it, Jon Stewart comes to similar conclusions. No laughing matter. And a deliciously poignant skewering of Jim Cramer, host of CNBC's Mad Money. A fitting title indeed.