Sunday, May 1, 2011

The Good and the Goods

The old joke in higher-education circles is that public colleges and universities have evolved from “state-funded” to “state-supported” to “state-sponsored,” and finally to “state-located.” The joke used to be a mild exaggeration, but in the last decade it has become a deadly accurate representation of our situation. It’s no joke: the portion of the operating budget that comes from state allocations has shrunk so dramatically that at most public universities it now constitutes only a minor fraction of the whole.
Whence comes this massive de-funding of higher education? The reasons are many and varied, but at least part of the underlying cause rests in a fundamental equivocation about the term “good.” We use this term in a number of ways: it can be used adjectivally to describe the merit of a person or thing, as in “Abraham Lincoln was a good man,” or “that was a good apple,” or it can be used as a noun. When used as a noun, it can have two senses. When, for example, economists speak of “goods and services,” by “goods” they intend some tangible item of value that a consumer might wish to purchase. In this sense of the term, a flat-screen TV or a pound of hamburger or an SUV all qualify as “goods,” that is, items that may or may not be purchased depending on the needs or wants of particular consumers. But we also employ the noun “good” in a sense that is closer to its adjectival meaning, namely, to mean “a good thing,” something that is broadly agreed upon as beneficial. Safe drinking water is a “good,” as is clean air, and literacy. Implicit in this categorization is a human community that agrees on the “goodness” of these goods, and indeed, to make that more explicit, one often uses the term “public good.”
The difference between the two senses of “good” is significant. A consumer good is good only insofar as a particular consumer values it. I am free to choose whether or not a carton of chicken livers is good, and I vote on their goodness by dipping into my disposable income and purchasing them—or not. A public good, by contrast, is something whose goodness is a matter of broad consensus, in which all members of a community have a stake, and for which all members of a society pay through taxation.
For over a century, beginning in the later nineteenth century, as colleges and universities began to spring up in the wake of the Morrill Act, and extending through the 1970s, the citizens of the United States clearly adhered to the notion that public higher education was a public good. Taxpayers subsidized state colleges and universities in order to keep tuition low and encourage widespread attendance. The value to a burgeoning economic power of an educated citizenry was patently obvious. In the Cold War environment of the 1950s and 60s, public higher education received another boost as our leaders looked to education as the pathway to achieve and maintain intellectual, and especially scientific, superiority over the Communist bloc.
Since the tax-cutting 1980s, by contrast, we have seen a rapid dissolution of that consensus, and in the last decade the process has only accelerated. As cash-strapped state governments try to deal with budget shortfalls, they are forced to engage in a triage of funding priorities. In all 50 states, higher education is a discretionary budget item, and therefore a prime target for cuts in expenditures. As the burden of paying for higher education has increasingly fallen on students and their families, we have seen a concomitant shift in the perception of what kind of “good” higher education is. Whereas K-12 education is still widely perceived as (and enshrined in budgets as) a “public good,” postsecondary education has been relegated to the role of a consumer good: a luxury that should be paid for by the user, who alone benefits from it.
Once higher education takes its place amid the array of consumer goods, it lies defenseless against the logic of the market. Educators find themselves thrust into the odd role of touting their “product” in terms of its tangible benefit to the “consumer,” whether career advancement, income potential, status, or some other marketable quality. Public colleges and universities vie with for-profit corporations to operate in this marketplace, and for both, the financial bottom line is always in focus.
Even those who would defend higher education as a public good find themselves making their arguments on the basis of a narrow view of its social utility, expressing themselves in terms of workforce development, economic stimulus, and global competitiveness. The utilitarianism that underwrote the Cold War-era valorization of higher education has simply evolved into a new, market-driven utilitarianism.
And here, I think, is where we have gone terribly wrong. For the “good-ness” of higher education has never lain merely in its economic benefit to the individual, nor in its ability to fill a workforce, as important as those considerations are. Rather, the enterprise of higher education has always been fundamentally concerned with nurturing a certain kind of citizen who will contribute to a certain kind of society, one characterized by civil and rational discourse and a commitment to justice and progress. When we reduce education to a commodity we invite its fruits to be employed to serve any end whatsoever. America leads the world in exporting technical prowess; people come from around the world to fill their minds with techne, and we gladly sell it to them. But we do not export the ethical and humanistic frameworks within which technical prowess is inevitably exercised.
Unless and until we recapture and promote a coherent account of why higher education is a public good, we will remain trapped in a world of pure consumerism as the dust of death settles over the once-bright vision of the value of what we do in our colleges and universities.

Thursday, December 30, 2010

Sketchy Metrics

‘Tis the season. No, not that season. I mean the season when high school seniors and their parents are getting down to business about choosing a college or university. Many of those “you’re accepted” letters have arrived, and next comes the sorting process. If you don’t think higher ed is a competitive marketplace, just spend a little time looking at all the things colleges do to tout their excellence and entice students to enroll. When I was young, only private colleges did much in the way of marketing, but now that states are disinvesting in higher education at a breakneck pace, public universities (if that term still has any meaning) are out there selling themselves as aggressively as anyone.
For some time now, the most popular ranking of colleges and universities has been that published annually by U.S. News & World Report. On the basis of a set of weighted criteria, the Powers That Be at USNWR tell us where our institutions “rank” against our supposed peers. The U.S. News rankings have been amply criticized by any number of educational authorities, and I won’t rehash the critiques here except to say that if we wish to know whether students have powerful and effective learning experiences at an institution, we will find no answers at all in these rankings. That notwithstanding, many colleges and universities fall all over themselves to move up in the rankings, even adopting policies and processes explicitly conceived to score points in U.S. News.
But all this folderol about rankings and measurement does raise an important question: how do we measure the quality of the teaching and learning that takes place in our institutions? Or should we measure it at all? Or can we measure it? And by the way: what constitutes success in our work with students? Higher-education administrators today find themselves swimming in a vast pool of numbers. We inform, or perhaps mollify, our stakeholders by showing them our “metrics,” by which we mean some plausibly quantifiable measure of our progress toward our goals. [n.b.: I am told by some mathematicians that this is a very poor use of the term “metric,” but it has become the coin of the realm.] The list of these measurements is well-known to those of us in the biz; it includes retention (usually the percentage of first-year students who return for a second year), persistence (a broader measure of continuing enrollment), graduation rates (four-, five-, and six-year!), grade point averages, and so on.
What do these common metrics really measure? After all, we tend to represent them to our constituents as measures of “student success.” How would most of us define student success? The minimal definition would include achieving one’s academic goals, whether that means a degree, preparation for transfer, obtaining a set of skills, or any number of possible ends. A more expansive definition might include deep learning, the ability to think critically and challenge one’s own assumptions and those of others, preparation for engaged citizenship, and so on.
Once we reflect a bit on learning and success, we quickly come to see that the most widely-valued metrics offer, at best, very indirect indicators. And some of them have, at bottom, nothing whatever to do with teaching, learning, or the fulfillment of student goals and aspirations. First-to-second year retention rates tell us only that a student who can “fog the mirror” and has not been suspended for academic or conduct reasons has returned for a second year. Graduation rates tell us that someone has managed to maintain a 2.0 g.p.a. and complete all formal requirements for a degree. A grade point average tells us that a student has maintained a given quantified level of performance in meeting course criteria. None of these measures tells us one whit about whether meaningful, deep, transformative learning took place during a student’s college career. They do not touch upon, except in the vaguest and most indirect manner, any of the central expectations most of us would harbor for high-quality teaching and learning.
Our accustomed metrics for institutional quality are the bluntest of instruments. It is hard to imagine that an automobile company would measure its quality without reference to how well the car performs in relation to direct criteria of automotive quality. Imagine Mercedes Benz arguing for its quality on the basis of how quickly cars move through the manufacturing process, or how much it spends on parts, or how many units are produced per year. But much of higher education—oddly enough in an effort to make itself analogous to business enterprises—moves along quite nicely doing precisely such a thing.
What is the answer? Learning is a qualitative enterprise. A university must demonstrate its success on the basis of a qualitative measurement of learning. Learning can be measured against a set of clearly articulated outcomes. Can learning be measured perfectly? No, of course not. Anyone who has taught has a healthy respect for the ineffable mystery of teaching and learning. Anyone who has learned knows that frequently the “penny drops” only years, or decades, after the experience. But we must be careful not to lapse into the perfectionist fallacy, namely, the notion that if a thing cannot be done perfectly it should not be done at all.
We should certainly not award quality points to colleges and universities for the mere act of recruiting bright, well-prepared students who can cruise through a college curriculum in a timely manner. To return to Mercedes: imagine the company receiving autos 95% assembled from a subcontractor, adding some chrome and a paint job, and then claiming full credit for a successful “manufacturing process.” This is certainly the ordained path to the top of the U.S. News rankings; look at the top ten national universities and you will find institutions whose “inputs” are so well assembled that they need only a little buffing to get through the process. If we really want to measure our success, we should insist without compromise on the quality question: what does the learning curve look like for our students? How far do we take them by providing deep, engaged learning experiences? Only when we begin to frame our questions in this way will we generate measurements adequate to the mission of American higher education.

Sunday, December 5, 2010

What to Whom


Funny, the things that stick in one’s mind. My last little reverie got me back to thinking of those student days: chewing on a pencil while trying to unravel the grammar and syntax of a Greek or Latin or Hebrew sentence. For some reason one of the few phrases I remember from my early Greek classes, from Book One of Plato’s Republic, was a description of the function of a physician. The Greek ran something like "he tisin ti apodidousa techne." This would translate roughly as "the art of giving what to whom." The physician is one skilled in knowing how to give to each person what is fitting. This notion comes up in the beginning of a long conversation exploring the definition of justice, or more precisely, of the just person.
It strikes me that the work of higher education is increasingly becoming "the art of giving what to whom." That is to say: we in higher education find ourselves in daily conversations about appropriate levels of academic and social support, appropriate pedagogical approaches, appropriate recruiting, and so on. Put differently, our work confronts us with daily questions of justice. How can educators act justly toward our students? Policymakers, legislators, and trustees all talk about justice at some remove, and one often hears the demand for better access to higher education for people traditionally excluded from its benefits. But the real questions of justice must be answered in the quotidian realities of life in the academy. The push for broader access has brought about a notable shift in student demographics. Students who are the first in their families to attend college (aka "first-gen"), students from under-represented minority groups, students from low-income backgrounds, students who arrive from the foster care system, and students of non-traditional age all require attention to their very particular needs if they are to have a chance to flourish in their college studies. It will simply not do, as a matter of justice, for professors to hurl out their erudition indifferently, letting it stick to whom it will and slide off the rest. Nor can those who offer support outside the classroom be insensitive to the particularities of students.
If American higher education is to fulfill its immense promise and continue to be an engine of social improvement, it must, I think, come to a clear understanding of itself as an agent of justice, of providing what is fitting and appropriate to each and every student. To say this is not to commit the higher-ed enterprise to any political agenda of either the left or the right; it is rather to call those of us who work in colleges and universities to a more circumspect vision of our shared mission and calling.

Thursday, December 2, 2010

The Place of Learning

When I was an undergraduate at the University of Washington, my favorite place to study was the Graduate Reading Room in the Henry Suzzallo Library. It was, and remains, a place that inspires feelings of awe and even reverence: a huge space framed by soaring Gothic arches and a vaulted ceiling, stained glass windows, wonderful stonework, and an almost palpable silence. One felt tiny in there, and had a sense that even daring to cough or chancing to drop a pencil constituted some kind of offense against the gods of learning. It may be too much, but only by a little, to say that this space was imbued with a sense of the sacred. The many long hours I spent there, conjugating Greek verbs or writing history papers in longhand, strike me even now—more than thirty years later—as having been glimpses of the eternal carved out of space and time.
It is, of course, no accident that learning should have been identified with a profound sense of place, and of sacred place at that. After all, the universities of the Middle Ages grew directly from schools that were attached to great cathedrals, where lectio divina, the deep reading of the Bible, was the order of the day. As scholars began to gather in university towns, those towns became in turn places of intellectual pilgrimage. Great libraries, repositories of written wisdom, emerged to supply the raw materials for the “schoolmen” to ply their craft. Cities like Oxford, Cambridge, Paris, and Bologna became the places of learning, for if one wished to gain access to the riches of the preserved literary past, as well as to the fruits of the best minds of the day, one had to travel there.
Today, of course, the context of learning has changed profoundly. A substantial portion not only of the world’s information, but of its best scholarship, can be readily found floating through the ether and easily brought to one’s desktop, thanks to the prodigious achievements of information technology. One can roll out of bed in Kuala Lumpur, boot up a computer, and take a rigorous course, complete with personal feedback, from an eminent scholar at M.I.T. or Harvard or Duke. Both information and scholarship are well on the way to being ubiquitous. The awed sense of place that I experienced as a student—and which has been replicated over the years in the Bodleian Library in Oxford and the Bibliothèque Nationale in Paris—may quickly become as unfamiliar as writing with quill pens.
So what will become of the notion of a place of learning? Has place utterly lost its place to a new environment of scholarly ubiquity? Some might argue that learning is inherently communitarian and collaborative, and that therefore we need shared physical space to foster the kinds of relationships implied by the educational enterprise. But those who regularly teach online—and who do it well—tell me that in many ways the opportunity for community building increases in that environment. I remember a discussion of “distance learning” in which someone said, “We’ve had distance learning for hundreds of years: it’s called the lecture. Technology can enable us to create ‘proximity learning.’”
If one no longer needs to visit a physical place or space to gain access to higher learning, or to a scholarly community, if privilege has shifted to virtual space, what of the college campus? Why should someone come to Greeley, Colorado, or Austin, Texas, or Cambridge, Massachusetts? I think this question will be asked with increasing frequency. And I suspect that the answer has to do with yet another emerging paradigm: we know now, much better than we once did, that effective learning is connected to life. The purpose, the end, of learning, is not mere learning, but living. The physical space of a campus, in a city, is no longer the exclusive repository of information, nor the only place to find scholarly expertise. But it is the place of engagement, the place where the privileges and obligations of educated citizenship can play out as a community of students reaches out and learns in, learns from, and contributes to the life of a larger community. It is a place where one encounters “the Other,” not merely as a text revealing different perspectives, but in the flesh. It is the place where one is forced to reckon with the implications of knowledge for how one lives life.

Sunday, November 28, 2010

Mimesis

As dean of a new college (founded in 2009) within The University of Northern Colorado, I naturally let my mind drift to questions of what it might mean to do something truly new in higher education. Surprisingly perhaps, given that colleges and universities are chock-full of highly intelligent and creative people, the structures and alignments of American higher ed have changed very little in the last century. One would think that with rapidly changing demographics among students, stunningly rapid advances in technology, decreasing public support, and the shrinking and flattening of our world, institutions of higher learning would be engines of innovation, spawning new structures and processes.

Not so. The way professors go about their business of teaching, scholarship, hiring, promoting and tenuring colleagues, and so on remains substantially unchanged since before the turn of the last century. This is so, I believe, because the spirit of imitation rather than innovation has been, and continues to be, a prominent if tacit principle driving our work. The organization of university faculty into disciplines, with corresponding national and international professional societies, was driven in the late nineteenth century by a conscious effort to ape the success of the professions (law and medicine, for example) in restricting entry to them. As Louis Menand has pointed out in his marvelous new book The Marketplace of Ideas, the professionalization of academic disciplines around the turn of the century, and to this day, is less about controlling the production of knowledge than about controlling the production of knowledge producers. The current disciplinary configuration of colleges and universities is a rather late, self-consciously constructed, and profoundly self-interested piece of mimesis, though I suspect most academics think of it as just the way things have always been in higher education.

There have long been those who have harbored suspicions about this project, and have decried the pernicious effects of an undergraduate education dictated by the hegemony of professionalized disciplines. The great--and varied--efforts in the early and mid-twentieth century of Harvard, Princeton, and Chicago to create core curricula that scumbled the lines between disciplines mark noteworthy examples of such resistance. But their very notoriety bears witness to the fact that their efforts failed to change the direction of American higher education.

Calls for a more interdisciplinary approach to undergraduate education began to increase dramatically in the 1970s, and the term "interdisciplinary studies" has become something of a commonplace; indeed, to this day it seems to have a vaguely "sexy" ring. But despite decades of such discourse, the exaggerated valorizing of disciplinarity has not gone away, even as evidence of its inadequacy mounts. Indeed, the disciplines themselves have begun to become "interdisciplinary"; a trip to just about any professional conference will reveal that scholars are continually borrowing new methods and insights from other disciplines just to keep their work fresh. There are, then, signs of a kind of self-subversion afoot.

Much else in higher education rests on a foundation of imitation rather than critical thinking. Think, for example, of that bane of all deans: student credit hour production. What is that exactly? Does it mean anything at all? The student credit hour is a fictive construct from the turn of the century, aimed at demonstrating that education can mimic the model of industrial production by cranking out a definable widget efficiently, so that public stakeholders could grasp onto some hopeful simulacrum of a "product" for their money. In fact, it measures almost nothing. Yet academic administrators haggle over the vaporous SCH; programs and careers can turn on it. Whatever it is.

One of my highest hopes for University College is that we can escape the gravitational pull of mimesis--however it expresses itself and regardless of how much others are captive to it--and allow our creativity, our collective wisdom, and our knowledge of how students learn best to dictate our practices.