Tuesday, March 2, 2010

Learning to Live with Professionalization

In his new book, The Marketplace of Ideas, Louis Menand -- Harvard English professor and New Yorker contributor -- confronts four of the most contentious issues in higher education in separate essays: core curriculum reform, the declining influence of humanities departments, interdisciplinarity, and the predominance of liberal political beliefs among professors. All are worth reading, even if they are rather obviously focused on the navel-gazing preoccupations of humanities types rather than the academic community generally. The first essay, on the creation of general education curriculum guidelines for all undergraduate students, was the most interesting.

Menand calls general education "the public face of liberal education," where universities attempt to distill their beliefs about what it means to educate a college student into a set of core requirements for all students. Some of the most interesting parts of this essay involve the history of core curriculum development and the establishment of undergraduate education as a distinct goal for colleges. One revolutionary change arrived when Charles William Eliot became president of Harvard in 1869. Prior to his tenure, students could enroll in Harvard's law and medical schools with few prerequisites; not even a BA was required. Eliot pushed through a new framework requiring undergraduate degrees for enrollment in these schools, which both professionalized advanced learning and emphasized a new philosophy of undergraduate education:
The collegiate ideal, [Eliot] explained in his Atlantic Monthly article, is "the enthusiastic study of subjects for the love of them without any ulterior objects." College is about knowledge for its own sake -- hence the free elective system, which let students roam across the curriculum without being shackled to the requirements of a major. And this is the system we have inherited: liberalization first, then professionalization. The two types of education are kept separate.
Today, I'm not so sure this is the case. Some of the recent statistics that Menand cites show a trend away from disinterested learning and toward vocational training. Consider the fields in which bachelor's degrees are conferred: business (22%), education (10%), and health sciences (7%). Those three alone account for nearly 40% of the total; throw in engineering students and close to half of all undergraduate degrees are already, to a great extent, "professionalized."

Despite being a liberal arts brat (and BA) myself, I don't see this as, prima facie, a bad thing. After all, leaders in every field rarely pass up a chance to remind us that the U.S. is in dire need of more engineers, nurses, and teachers. (Sorry, marketing majors, we've already got plenty of you.) The popularity of these fields -- and universities' willingness to offer the majors -- is really a response to technological change and the demand for an increasingly professionalized workforce.

But at the same time, the huge shift away from the liberal arts and sciences betrays a lack of intellectual curiosity, a lack of self-confidence, or both. If it's the former, then most students no longer have passion for study without "ulterior objects." And if it's a lack of self-confidence we're seeing, it's manifested in smart students' fear that majoring in something non-professional will undermine their employment prospects or -- if they're really unsure of themselves -- their future productivity. Which is nonsense. The skills and intellectual abilities needed to succeed in the liberal arts (analytical writing, rigorous objective and subjective thought, comfort with abstraction) are necessary and often sufficient for success in all fields. Business majors in particular should read carefully: you can be a bright philosophy major and still become a successful businessperson after graduation.

Which brings us back to the core curriculum idea. Undergraduate cores are dominated by the liberal arts and sciences, which house the general knowledge that all so-called "educated" persons should command, and which demand and bolster the reasoning skills most useful in private, professional, and civic life. Menand, an academic himself, seems to imply at various points in his book that the liberal arts are really most useful for educating academics, the "producers of knowledge." (While Menand is intelligent enough to acknowledge the value of undergrads and non-academics, others are not so open-minded. Witness former Columbia provost Jonathan Cole's pitiful performance on Charlie Rose last night, where he all but scoffed at the idea that undergraduate education is a valuable goal for universities.) But knowledge is also produced in vast quantities outside of academia, so even professionalized majors like education, nursing, and business should require the same liberal arts core as other concentrations. That way, these more specialized, vocational students can still develop the abstract reasoning and quantitative skills that will let them innovate in their fields.

Speaking to Menand and Jonathan Cole, I would say, "Academics don't produce knowledge, smart people produce knowledge." Universities should be more than factories for producing new professors; they should equip all students with the intellectual self-confidence to handle any profession they might choose. Insofar as professionalized undergraduate programs stand apart from the liberal arts, they muddy the message that mental abilities matter more than a specific stock of knowledge. Requiring these vocational programs to participate in the same liberal arts core as everyone else would signal that the core is essential to a purposeful undergraduate experience, and it would provide a major lift to the "public face" of higher education.

Wednesday, February 10, 2010

Shred the Depositor Safety Net?

Mike over at Rortybomb addresses the thankfully-never-going-to-happen proposal, floated by some, to eliminate bank deposit insurance. If only depositors had to worry more about the viability of the banks holding their money, the thinking goes, banks would be more severely disciplined and less inclined to gamble on riskier investments. Besides ignoring the fact that deposit insurance is an essential protection against bank runs in a system of fractional reserve banking, the proposal also stinks of the typical policy solution from libertarians and market fundamentalists: shift the risk onto us, we can handle it.

Problem is, that's not practical for 99% of individual depositors. Being the generous debater and financial engineer that he is, Mike imagines what kind of mathematical gymnastics we ordinary depositors would have to start employing to keep our money safe:
I know the simple way you do it, some techniques that I’ve had some training in: You place out the payment structures using monte-carlo simulations with lognormal random walks; you take a metric of correlation in the market, perhaps in a gaussian copula structure and use that to run correlations at each step between the instruments; you take the distribution you generate and apply a “value-at-risk” logic to it, looking at some piece of the tail distribution.
I love it when earnestness and sarcasm intertwine. Seriously, this is how a high-powered quant analysis would work, by making some basic assumptions about bank cash flows using historical data, possibly applying data on default correlations between asset classes, and then looking for maximum losses in the tails of a simulated distribution. Thank god you got that Applied Mathematics degree; your life savings may yet survive intact.
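Just to make the flavor of that exercise concrete, here is a minimal sketch of the kind of Monte Carlo / value-at-risk calculation Mike is describing. Everything in it -- the deposit amounts, drifts, volatilities, correlation matrix, and the crude default rule -- is invented purely for illustration; a real analysis would need actual bank cash-flow and default data, and would still run headlong into the forecasting problem discussed below.

```python
import numpy as np

# Toy parameters -- entirely made up for illustration, not real bank data.
np.random.seed(42)
deposits = np.array([50_000.0, 30_000.0, 20_000.0])  # my money at three hypothetical banks
mu = np.array([0.02, 0.01, 0.015])                   # assumed annual drift of each bank's asset value
sigma = np.array([0.15, 0.25, 0.20])                 # assumed annual volatility
corr = np.array([[1.0, 0.6, 0.4],                    # assumed asset correlations (the "copula" step)
                 [0.6, 1.0, 0.5],
                 [0.4, 0.5, 1.0]])

n_sims, horizon = 100_000, 1.0
chol = np.linalg.cholesky(corr)

# Correlated lognormal random walks for each bank's asset value over one year.
z = np.random.standard_normal((n_sims, 3)) @ chol.T
asset_value = np.exp((mu - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z)

# Crude default rule: a bank "fails" if its assets drop below 85% of par,
# and its uninsured depositors then lose a flat 40% of their balance.
defaulted = asset_value < 0.85
losses = (defaulted * deposits * 0.40).sum(axis=1)

# 99% value-at-risk: the loss exceeded in only 1% of the simulated scenarios.
var_99 = np.percentile(losses, 99)
print(f"99% one-year VaR on my deposits: ${var_99:,.0f}")
```

The output is a single tail-loss number that looks reassuringly precise and is exactly as good as the made-up assumptions feeding it -- which is rather the point.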

This intensively computational analysis, though, is subject to the same problem that bedevils all financiers/economists/forecasters/astrologers. You can't know the goddamn future, no matter how nifty your models are or how rich your data is! (And Mike Konczal certainly knows this, but he's not trying to divine, just to show how such a problem might, with trepidation and book-learnin', be approached.) Felix Salmon has already deconstructed the validity of the Gaussian copula in finance, and Joe Nocera took time a year ago to investigate the dubious risk management value of VaR models. So to those financial sophisticates who would tear up the deposit insurance contract: you don't even have the firepower to foresee the black swans out there, so why should everyone else be expected to?

The notion that we should exorcise deposit insurance from our banking system is, I think, symptomatic of the tired policy solutions emanating from a bloc of thinkers itching to shift more risk onto consumers in the name of "incentives." Yes, economic incentives are powerful, but what about those faced by financial institutions themselves? Might those be a better target for reform? We could stand to bear some amount of market inefficiency in the form of a depositor guaranty if we lessened the chances of it being used. Say, with regulation of banks' leverage, size, and scale of interconnectedness. These are the things that make the financial system inherently unstable and susceptible to magnificent collapse.

Conservatives like David Brooks who justify limited government intervention, in all matters, by pointing to uncertainty are on to something. We should be humble in the face of uncertainty and acknowledge that this uncertainty is universal. Quants aren't immune to it (though they often mistake it for "risk" and feel they've pinned it down), and neither are investors, depositors, or regulators. We know that banks will fail in the future, and that at times the system itself will buckle, but we don't know much more than that. The best we can do is limit the damage when it happens. Let's do that by pushing banks to raise new equity, not by transforming depositors into equity investors.

Sunday, January 31, 2010

The Siren Song of Digital Mediocrity

I just finished Jaron Lanier's You Are Not A Gadget, and found its cultural hypothesis interesting and applicable to a whole bunch of undercurrents moving through the real and digital worlds. The book is swollen with ideas that range from creatively insightful to verging-on-batshit-crazy, but I'll pull out two that I particularly liked (and that aren't in the latter category). First is the notion that technology users in general -- and internet/software users in particular -- should be mindful that our tech-centric culture is increasingly vulnerable to "lock-in," a process whereby the sweet web tools and gizmos we rely on are handicapped by the skills and choices of the programmers who must, necessarily, create them. Put another way, the complexity of modern software may lead designers and programmers to make arbitrary choices that, because ideas can quickly go viral and generate large network effects, become locked in and go on to limit design flexibility in the future.

Lanier offers the example of MIDI, an interface used to synchronize electronic music components. MIDI was originally created to sync up multiple synthesizers, and it left out note phrasing, timbre, and all of the ineffable elements of performance that make music so unique to each musician. A guy just wanted to connect some synths, so all he needed was the equivalent of "note on" and "note off" (a raw MIDI note message is sketched after the quote below). Well, MIDI caught on in a huge and international way, and became the standard for electronic music. And while some artists have used music technology to create some very bodacious stuff (I'm partial to Aphex Twin myself), they're limited in their expression by the arbitrary-yet-intentional programming of the original author. This larger idea has cultural as well as philosophical significance; here's Lanier early on, on page 10:
Lock-in removes ideas that do not fit into the winning digital representation scheme, but it also reduces or narrows the ideas it immortalizes, by cutting away the unfathomable penumbra of meaning that distinguishes a word in natural language from a command in a computer program.
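To see how little of a performance survives that narrowing in the MIDI case, here's a rough sketch of the raw bytes in a note event. The helper functions are mine, purely for illustration; real code would lean on a proper MIDI library.

```python
# A MIDI channel-voice "note on" is three bytes: status, pitch, velocity.
# These helpers exist only to show the format; they are not from any library.

def note_on(channel: int, pitch: int, velocity: int) -> bytes:
    """0x90 | channel, then pitch (0-127) and velocity (0-127)."""
    return bytes([0x90 | (channel & 0x0F), pitch & 0x7F, velocity & 0x7F])

def note_off(channel: int, pitch: int) -> bytes:
    """0x80 | channel, pitch, and a release velocity (usually ignored)."""
    return bytes([0x80 | (channel & 0x0F), pitch & 0x7F, 0])

# Middle C, played fairly hard, on channel 0 -- and that is the whole story
# the protocol can tell about the note. Phrasing, timbre, the shape of the
# attack: none of it has a home in these six bytes.
message = note_on(0, 60, 100) + note_off(0, 60)
print(message.hex(" "))  # 90 3c 64 80 3c 00
```

Channel, pitch, velocity: that's the entire vocabulary for a note, and it's the vocabulary nearly every piece of music software has inherited since.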
For those digital companies looking to build market share (and maximize the network effect), the best choice is to simplify and homogenize our interfacing with their software. So all topics we don't understand fall into their respective Wikipedia entries; all movies must somehow be shoehorned into Netflix's five-star rating system; and our personas too often stand upon a rigidly-defined profile and database of connections on Facebook. This is not to say that these digital services and others can't be or aren't great -- they are! Yet they are only models of a particular idea or relationship, and they can be shoddy and inadequate. For all the attention (mine included) that Facebook siphons from other activities, it's really one of the least innovative services out there and grants very little room to paint your own little corner of the site.

You would imagine, then, that Facebook's banality would give free license to those would-be competitors who can dream up something better and more respectful of the individual's digital "right" to flexible self-definition. Things haven't turned out that way. Yes, MySpace is out there, but even people with MySpace pages had better damn well have a complementary Facebook profile lest their friends and family think they're antisocial weirdos. The unique ability of the Internet to lock in and propagate mediocrity brings me to the second, more important and overarching point that I drew from Lanier's book: the "wisdom of crowds" ideology.

Mr. Lanier addresses this issue within the context of the Web 2.0 leadership. In a nutshell, a major part of this new technological religion (which Lanier frequently refers to as "cybernetic totalism," although I'm not sure how much sex appeal and staying power that has) is the worship of the "hive mind" -- the collective intelligence represented by our individual minds all connected in cyberspace -- over the works of individual minds. It almost recalls that most horrible word of corporate jargon, synergy. As if, by combining the efforts of individual humans, we will somehow transcend our own humanity. This is cult-ish stuff, but since technology is cool and useful we tend to give it a free pass (sort of like we did for finance over the last 25 years). Besides, we only catch glimpses of it in oblique ways. "Information/content wants to be free." (Tell that to those who have to create it.) "The Web is an idealization of democratic values." (Maybe, but that brings with it all of the pandering to the selfish wants of the lowest common denominator that political democracy has. Based on Web usage, that probably means we should be focusing most of our innovative efforts on porn and piracy.)

The hive mind's elevation of collective intelligence is visible both in the world of software and of wetware. Online, the most dominant service is Google's search engine, whose algorithm ranks the results for a given word or phrase largely according to how often -- and by whom -- each page is linked to. Sometimes this is useful, but it (a) does not attempt to understand the semantic intent of the searcher, assuming instead that the hive will take care of that itself, and (b) subjects itself to lock-in. On the latter point, Wikipedia is the easy example. As more and more people use Wikipedia and link to it, the Wikipedia entry for a given search phrase invariably becomes the top-ranked result. Google co-founder Larry Page has said that the engineering challenge of search is essentially the most important one to solve, because the best tool we can create is one that points us to the correct answer for every possible question. This rings true, but a search algorithm that brings me straight to Wikipedia every time signals complacency in the engineering pursuit that Google most prides itself on.
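For a feel of how mechanical that link-counting really is, here is a toy version of the PageRank-style calculation that underlies link-based ranking. It's a deliberately simplified illustration -- four made-up pages and the textbook damping factor -- not Google's actual algorithm, which layers many other signals on top.

```python
import numpy as np

# A toy web: each page and the pages it links *to*. A page's rank is driven
# entirely by the ranks of the pages linking to it -- there is no notion of
# what the searcher actually means.
out_links = {
    "blog":      ["wikipedia", "news"],
    "news":      ["wikipedia"],
    "forum":     ["wikipedia", "blog"],
    "wikipedia": ["news"],
}

pages = list(out_links)
idx = {p: i for i, p in enumerate(pages)}
n = len(pages)

# Column-stochastic transition matrix: M[i, j] is the chance that a surfer
# sitting on page j clicks through to page i.
M = np.zeros((n, n))
for src, targets in out_links.items():
    for dst in targets:
        M[idx[dst], idx[src]] = 1.0 / len(targets)

# Power iteration with the usual 0.85 damping factor.
damping, rank = 0.85, np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - damping) / n + damping * M @ rank

for page, score in sorted(zip(pages, rank), key=lambda pair: -pair[1]):
    print(f"{page:<10} {score:.3f}")
```

The page everyone links to floats to the top regardless of what the searcher meant -- which is exactly the Wikipedia effect described above.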

In the real world, the wisdom-of-crowds fetish has ushered in the denigration of authors and editors. So goes the increasingly conventional wisdom: Newspapers are dead. I'll just search for news online. Books are dead. I'll get my information from other sources. Albums are dead. I'll download the single I'm looking for and that's it. This kind of thinking assumes that our culture's institutions survive and progress on the wheels of some abstract motor of crowdthink, rather than through the vision and willpower of individuals. This is a reversal of the previous synergy idea: the pieces are all that matter, not the whole presentation, and those pieces have many (and free) substitutes. But there is professional editorial and artistic value in the whole, and dissecting it not only removes the authors' intentional form, but often undermines the economic viability of the work for its creators. It is the training, experience, intuition, wisdom, and creativity of individual people across disciplines that have given us the rich cultural mosaic that many now see fit to push off into digital obsolescence. If the hive-minders think we can replace these people and their work with the crap that makes up 95% of YouTube and Facebook, they need to unplug for a bit and reconsider.

(Actually, the digital world itself offers one of the best examples of a mensch who reshapes the world through individual creativity. Steve Jobs' repeated successes have prompted discussion about the usefulness, at times, of closed culture in innovation.)

So those are two of Jaron Lanier's (many) ideas that I found most exciting in his new book. First, be wary of lock-in, and understand that the software tools everyone seems to use aren't necessarily the best designed or even that good. Consequently, we as users need to demand (and create!) better tools with more flexibility wherever we can, rather than simply debasing our own standards. Second, be wary of the dehumanizing "wisdom of crowds" ideology, which neglects the value of the individual experience and effort that goes into our cultural artifacts and institutions. Together, these two principles seem to cry out, Authorship Matters! That is true of software engineers and newspaper editors alike, and both should be held to a high standard. And we should be looking for digital design methods that foster community and interoperability without devaluing creative and/or professional work.

I recommend checking out Jaron Lanier's book for the other interesting things he has to say, as well. At times it feels like he could have used a wise editor of his own to rein in his scatterbrained tendencies, but perhaps, like some other mad geniuses out there, that is simply an integral part of his charm and value.

Wednesday, January 20, 2010

The Return of the Pay Wall

Today the New York Times announced that it will reconstitute a pay wall for its online content, something it tried unsuccessfully to do years ago. Here's the plan:
Starting in early 2011, visitors to NYTimes.com will get a certain number of articles free every month before being asked to pay a flat fee for unlimited access. Subscribers to the newspaper’s print edition will receive full access to the site without extra charge.
According to the Times, the pay wall that existed from 2005 to 2007 attracted 210,000 subscribers paying $50 a year -- roughly $10 million in annual revenue for a paper with a monthly readership of 17 million, and a vanishingly small share of readers willing to pay for full access. Then again, the original pay wall applied only to certain content, notably the Op-Ed pages. Perhaps readers will be more willing to pony up for the full assortment of articles and archives, but that stuff is free now, and Felix Salmon has already broken down the economics to show why the article gate (which closes after the ambiguous "certain number of articles") is an unwise proposal. The Times, at least, is giving itself a year to figure out the logistics.
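Mechanically, the metered gate the Times describes is simple enough to sketch. Since the paper hasn't said how many free articles readers will get, the quota below is just a placeholder, and the whole thing is an illustration rather than anything the Times has published.

```python
# Hypothetical sketch of the metered pay wall described above.
FREE_ARTICLES_PER_MONTH = 10  # placeholder -- the Times hasn't announced the real number

def can_read(is_print_subscriber: bool, is_digital_subscriber: bool,
             articles_read_this_month: int) -> bool:
    """Print and paying digital subscribers always get through; everyone else
    gets a monthly allotment of free articles before hitting the wall."""
    if is_print_subscriber or is_digital_subscriber:
        return True
    return articles_read_this_month < FREE_ARTICLES_PER_MONTH

print(can_read(False, False, 3))    # True  -- still inside the free quota
print(can_read(False, False, 12))   # False -- time to pay for unlimited access
print(can_read(True, False, 500))   # True  -- print subscribers ride free
```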

For those who don't follow the ongoing discussion about online news, the problem is not that people have stopped reading the news -- 17 million unique visitors is material -- but that online advertising brings in only a small fraction of what print advertising does. Readers are just too finicky online, and don't page through the news the way their hard-copy counterparts do. So newspapers will have to start getting compensated by their online readers somehow, or else significantly scale back their reporting efforts. The Wall Street Journal and Financial Times both have pay walls in place already, and seem to be doing okay. But theirs are niche audiences of business readers with similar interests, incomes, and levels of sophistication. The Times' audience is assuredly more diverse and harder to target ads to.

I share the conviction of others I've read today that a pay wall as it is currently imagined will probably drive large numbers of readers to other news sources that are free online, or to bloggers who post longer excerpts from the source articles. Even if these alternatives are of lesser quality than the Times, the substitution is understandable in some ways. A pile of newspaper and magazine subscriptions is tedious to keep track of, and the purchase differs from our experience with other types of media. When we buy a book, movie, CD, or iTunes mp3, we are purchasing an experience that can be repeated or at least drawn out. But buying news is closer to the transient experience of cable TV, magazines, and the internet: we are paying for a constant stream of new content with a relatively short shelf life, to be experienced and then discarded. As such, people are less inclined to make impulse buys. That (n+1)th article which suddenly requires a subscription would have to feel awfully valuable for me to buy it.

Since most online content has this transient feel, viewers and readers might be more willing to pay if the payment mechanism were closer to a utility bill, like cable and internet service. Rather than taking an a la carte approach, media providers could create or support media "utility companies" that allow for monthly budgeting, bundling, and billing. For instance, maybe I decide to budget $60 per month for major newspapers, and that grants me access to any article on the NYT, WSJ, or FT websites as well as on my tablet reader. Then I put up another $20 for magazines, and I'm given my choice of articles from 10-20 magazines that month. And so on, up through TV networks' full series, movies on demand, etc. But all of it would be allocated and paid for through a single entity. The goal is to pay for the convenience of constantly refreshed net content, like we pay to have things piped into our TVs and cable modems -- with an all-in-one budget -- instead of managing dozens of individual subscriptions.
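Here's a back-of-the-envelope sketch of what the bookkeeping for such a "media utility" account might look like. The categories, prices, and provider bundles are just the hypotheticals from the paragraph above -- no such service exists.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Bundle:
    monthly_price: float
    providers: set[str]

@dataclass
class MediaUtilityAccount:
    """One account, one bill, a few coarse budget lines."""
    bundles: dict[str, Bundle] = field(default_factory=dict)

    def subscribe(self, category: str, price: float, providers: set[str]) -> None:
        self.bundles[category] = Bundle(price, providers)

    def can_read(self, provider: str) -> bool:
        # Access follows the category, not the individual publication.
        return any(provider in b.providers for b in self.bundles.values())

    def monthly_bill(self) -> float:
        return sum(b.monthly_price for b in self.bundles.values())

me = MediaUtilityAccount()
me.subscribe("newspapers", 60.0, {"NYT", "WSJ", "FT"})
me.subscribe("magazines", 20.0, {"New Yorker", "Atlantic", "Harper's"})

print(me.can_read("NYT"))        # True  -- covered by the newspaper bundle
print(me.can_read("Netflix"))    # False -- not in any bundle I pay for
print(f"One bill: ${me.monthly_bill():.0f}/month")
```

The point isn't the code; it's the shape of the deal: one bill, a few budget lines, and access that tracks the category rather than the individual subscription.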

Such an approach would require cooperation between and within industries, and already we have seen coordinated efforts to build new viewing and payment platforms in the cases of Hulu, iTunes, and the ongoing effort to make DVD purchases portable. Tablet readers and similar viewing devices will also make the digital reading experience much more pleasurable and comparable to print. The point, then, is that a combination of a new payment mechanism and technology (devices) can force a rethink for consumers about how they consume digital media. (The iTunes example shows that with enough convenience and quality consistency, people will gladly pay for what they used to expect for free online.) Newspapers like the Times should work with competitors and technologists to build a platform that improves and redefines the experience of digital content, smoothing readers' transition to paying for it.

Wednesday, January 13, 2010

Do The Right Thing, Google

It looks like Google may have found a reason to backslide (for ungated, try NYT) on one of its (only) stupid commitments: providing search in China at the expense of censoring some searches the Chinese government deems too morally offensive for consumption. Like Tiananmen Square.

Google's servers came under heavy cyber attack from Chinese sources, including efforts to hack the Gmail accounts of human rights activists in the country. Additionally, attempts were made to steal information from 34 other companies, many of which hail from Silicon Valley. Now Google is saying it will desist from censoring content on its China affiliate, www.google.cn, and may exit the country altogether. Unsurprisingly, some of this news was itself censored in China.

This newfound backbone toward China is admirable, even if it comes after an initial cave-in to Chinese censors in 2006. But the backbone is also necessary for business. As Google has told numerous people, its business model becomes untenable the instant people no longer trust the company with oodles of personal data. For Chinese activists, the dissemination of such information could easily lead to their arrest and imprisonment.

But even if there is a sensible economic reason for leaving China, that fact should not erode the superior value of the moral reason. Here's a quote from the initial Times article:

“The consequences of not playing the China market could be very big for any company, but particularly for an Internet company that makes its money from advertising,” said David B. Yoffie, a Harvard Business School professor. Mr. Yoffie said advertising played an even bigger role in the Internet in China than it did in the United States.
And the WSJ also makes sure to give a sense of China's strategic importance:
Google's revenue in China is relatively small, with analysts estimating only a few percentage points of Google's nearly $22 billion in 2008 revenue came from the nation. But the country's massive number of Internet users has made it strategically important for Google, as it tried to extend its dominance in search and search advertising around the globe.
No surprises here: China has a billion-plus people, who will be increasingly linked in to the web and in need of search services (although Baidu currently has a massive market share there). That is, there are a billion-plus potential advertising consumers to be tapped. So what. It is about time for the worldwide business class to decelerate the growing wave of Sinophilia. We know, guys, you're enamored with the country's explosive growth rate -- assuming the numbers aren't too manufactured -- and deep market for future goods and services. But forget illiberal democracy; China has a straight-up authoritarian regime that imprisons human rights proponents and is happy to endorse genocidal/authoritarian states like Sudan if it means access to natural resources. (In all fairness, so does the United States.) If non-Chinese companies choose not to bless the Chinese government with everything necessary to build its rich-authoritarian economic utopia, so be it.

There will be consequences to a Google withdrawal. Chinese citizens will lose access to the most efficient aggregator of information the world has ever known. But Google -- unlike most international companies -- had to break its hallowed "Don't Be Evil" mission statement and sell part of its idealistic soul to operate there. It's about time China stops getting everything it wants by virtue of its economic potential and starts giving a little. Perhaps Google-like exits by other companies -- such as those that came under assault last week -- will further couch personal and economic choice arguments within the higher aspiration of human rights. The more China grows, the more inextricable these aims become.