Words of Wisdom

"Evolutionary biology is not a story-telling exercise, and the goal of population genetics is not to be inspiring, but to be explanatory."

-Michael Lynch. 2007. Proc. Natl. Acad. Sci. USA. 104:8597-8604.


Cycling

mi (km) travelled: 4,969 (7,950).

mi (km) since last repair: 333 (532).

-----

Busted spoke (rear wheel): 4,636 mi
Snapped left pedal and replaced both: 4,057 mi
Routine replacement of brake pads: 3,272 mi
Routine replacement of both tires/tubes: 3,265 mi
Busted spoke (rear wheel): 2,200 mi
Flat tire when hit by car (front): 1,990 mi
Flat tire (front): 937 mi
Flat tire (rear): 183 mi

Wednesday
Feb 16, 2011

Book Club: Dungeons & Desktops...

I realize that this may be a bit of a nerdy book to discuss on my blog, but I hope that I can generalize some of the themes such that they're of interest to most readers. When I was a kid, I got really into video games - I think most children of the 80s were, having grown up around the likes of Atari and Nintendo. While TV game consoles were all the rage, a smaller and in some ways more dedicated group of gamers flocked to the personal computer. These machines retailed for exorbitant prices by today's standards (a Macintosh purchased in 1984 cost a whopping $5,165 in 2008 dollars, while the 1983 IBM PC ran $10,600 when similarly adjusted!), and software designed for the PC tended to be a bit more adult and cerebral than what you'd find on consoles marketed squarely at kids.

In Dungeons & Desktops (D&D; 2008; A.K. Peters), Matt Barton attempts to trace the history of computer role-playing games, or RPGs for short - a genre that was arguably very influential in the history of interactive entertainment, and one that has undergone a lot of changes over the years.

A bit of background: A reductive definition of an RPG may be the attempt to create a simplified system of stats-based mechanics in order to simulate some particular scenario. In order for it to be 'role-playing', however, a further extension to the definition may be that the simulation must have some degree of persistence, such that its actors progress and develop over its course. Though the most popular RPG in the public mindset is probably the classic Dungeons & Dragons, where the object of simulation is heroic fantasy, more rudimentary RPGs actually date back to the 1940s, when they consisted of sports simulations and table-top war games. The excellent, if a bit dry, media studies book Digital Play makes a point of noting that by pioneering the idea of representing everyday events with numbers, RPGs paved the way for computer games in general (what is a computer game if not representing scenarios with numbers, hidden or otherwise?). Along with text adventures, RPGs were among the earliest games to make it big on home computers.
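To make that definition a little more concrete, here's a toy sketch of my own in Python (not anything from the book or any actual game) showing what 'stats-based mechanics plus persistence and progression' boils down to: an encounter reduced to numbers, and a character that develops between encounters.

```python
# Toy illustration of the RPG definition above: stats-based mechanics
# plus persistent progression. My own sketch, not code from the book.
import random
from dataclasses import dataclass

@dataclass
class Character:
    name: str
    level: int = 1
    max_hp: int = 20
    hp: int = 20
    attack: int = 4
    xp: int = 0

    def strike(self, other: "Character") -> None:
        """Resolve one attack with a simple randomized damage roll."""
        damage = random.randint(1, self.attack)
        other.hp -= damage
        print(f"{self.name} hits {other.name} for {damage} damage")

    def gain_xp(self, amount: int) -> None:
        """Persistence/progression: accumulated XP eventually raises stats."""
        self.xp += amount
        while self.xp >= 10 * self.level:  # arbitrary leveling threshold
            self.xp -= 10 * self.level
            self.level += 1
            self.max_hp += 5
            self.attack += 1
            print(f"{self.name} reaches level {self.level}!")

hero = Character("Avatar")
goblin = Character("Goblin", max_hp=8, hp=8, attack=2)
while goblin.hp > 0:  # a whole encounter, reduced to numbers
    hero.strike(goblin)
hero.gain_xp(12)      # the hero persists and develops between fights
```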

Back to the book: Instead of a general narrative about the genre featuring a few key examples, the author chooses to tell the history of 'CRPGs' using a rogues gallery approach. He seriously tries to write a small blurb on every game for which he can get information. It's really a labor of love, and Barton faces a challenge common to all works attempting a historical catalog: What's the best way to structure the narrative? By year, by sub-genre? D&D chooses a hybrid approach, wherein CRPGs are divided into five broad, mostly cohesive 'ages' corresponding to general time-frames. Popular series and sub-genres are pooled together within each age. So we read about the Ultima games of the Silver Age or the 'Gold Box' games of the Golden Age, etc. I'll have more to say on this in a moment.

For people who aren't familiar with these classics, it's really quite difficult to describe how immersive and even somewhat crazy they were. These old CRPGs were often shockingly complex, had brutal learning curves, sometimes took hundreds of hours to complete, and as mentioned, required expensive equipment to play. But for those of us who had the time and/or inclination, they were amazingly satisfying and intellectually stimulating - typically requiring a fair bit of strategizing to pass their myriad challenges. A lot of folks dismiss videogames as brainless time-wasters, but I can trace much of my love of problem solving as well as reading and writing back to childhood evenings spent solving these games. Unfortunately, my ability to play CRPGs essentially died when I went off to college, but reading about all of these titles, both those that I loved and many that I missed, brought back more than a bit of nostalgia.

The relative weight given to each game or series is much influenced by Barton's own personal opinions, which is understandable both because, again, it is a labor of love, and because it adds a lot of personality to the narrative (it also helps that most of the author's opinions are in line with my own!). However, in terms of assessing various titles' influence on the industry and genre as a whole, some games probably deserved a bit more exposure than they received - especially failures such as the final two Ultima games. While the rogues gallery style of presenting a short write-up of almost every game tugs at my heart-strings, hammering out a few paragraphs describing how mediocre a particular obscure title was is less informative than explaining the significance of the genre's important milestones.

My one minor complaint about the book concerns its choice to group games into series within ages. Unfortunately, the most important determinant of any game's direction is most certainly what was successful on the market during the time it was developed. Sub-genres tend to appear in waves - Dungeon Master clones, Diablo clones, etc. In the early parts of the book, when the 'ages' span only a few years (e.g., 1981-1984), the practice of grouping all games in a series together makes sense; they tend to be very similar. However, in the later 'ages', which are all characterized by very rapid advancements in hardware, such a system becomes unsatisfying. To give an example to those 'in the know', Ultima VIII (1994) and IX (1999) are discussed back-to-back within the 'platinum age', skipping over a plethora of important developments in the industry during the intervening years. These are vastly different titles created under entirely different circumstances - the correct historical context is important in understanding why both games were, for the most part, utterly disappointing.

In all fairness, this is a more minor complaint than it seems: It's highly unlikely that anyone crazy enough to buy this book isn't already intimately familiar with most of the more recent titles being presented, and thus writing for the lay audience is probably unnecessary. Furthermore, Barton does an excellent job of speculating on the causes of some of the trends in the industry, especially during the early years, where most of the book's narrative takes place.

At the risk of spoiling the conclusion, I will say that D&D ends on a rather sad note: There's really no denying that the old days of painstakingly saving princesses on CRT monitors are behind us. The designers of these massive adventures learned a lot over the years, streamlining interfaces and reducing frustration, for example. But at the same time, as gaming has moved beyond the niche of a few dedicated users, more titles have begun 'playing to the lowest common denominator' and as such, very few people care for the types of patient, tactical, strategic gameplay that these classics typically embraced.

In all honesty, there's no way that I could find the time to play many of these types of games today: I got most of it out of my system during my (pre)adolescence. But for a busy guy like me, I can't think of a better way to relive some of those childhood memories than reading this book.

Friday
Feb 11, 2011

Thoughts on Peer-Review...

Peer review is often touted as one of the quintessential reasons why scientific knowledge is more reliable than knowledge obtained elsewhere: Experts in the field are required to go over submitted manuscripts and vet both the quality of the data and the arguments being presented within. But how is peer review performed in practice? Essentially, as a member of a scientific field, you are expected to review manuscripts that are sent to you by the editors of scientific publications, pro bono. All in all, I think that most scientists would agree that this is a worthwhile endeavor for the greater good of the field.

That being said, in some discussions I've had with fellow scientists, I think that many feel there is a bit of a 'free rider' problem when it comes to peer review. Though peer review is designed to weed out those manuscripts whose work and interpretations do not fit the standards of legitimate science, one would expect most manuscripts to be in good enough shape by submission time that the reviewer's task involves judging only the content itself. Unfortunately, this is not always the case.

Various factors lead to incomplete manuscripts being submitted. A principal offender is likely the ability to list 'submitted' manuscripts as achievements in grant and scholarship applications. As the deadlines for such applications loom, there's strong pressure to get any 'nearly' complete manuscript into the hands of a journal. Assuming that such manuscripts are not rejected outright, they greatly increase the amount of work for a reviewer, because in addition to evaluating the science itself, you may now be trying to parse poorly written sentences and incorrectly labeled figures.

Oddly, perhaps, there are no 'classes' where you learn how to properly peer-review manuscripts. I suppose it's one of those things that you're supposed to learn from your various mentors - though in my experience, standards for acceptance/rejection vary widely between different people (which is not a bad thing if you value a diversity of opinions). However, I've never encountered anyone who thought it was okay to simply reject a manuscript outright because it was incomprehensible as written - so the reviewer is left with the tedious reading of obscure references and external sources in order to understand what is being argued.

It is my impression that the number of papers being published is outpacing the number of new Ph.D.s actively working in academic scientific disciplines; thus the pool of reviewers per manuscript is shrinking. I worry that at some point either we're going to have to work hard to make sure that manuscripts are of excellent presentational quality prior to submission, or the hammer is going to fall and more papers will be rejected outright for even minor 'preparational' offenses. Scientists are generally busy enough already.

Tuesday
Feb 8, 2011

Book Club: The Moral Landscape...

Out of the 'Gnu Atheists', I often find Sam Harris the most difficult to read. This isn't because of his attacks on religion (not that they bother me, and Richard Dawkins can be much more offensive in his perceived arrogance), or his writing style (which is quite readable). Rather, it's because Harris has a tendency to challenge deeply held convictions in a no-nonsense style, without a lot of buildup or discussion. This has given his writings against religion (The End of Faith, Letter to a Christian Nation) a bit of an air of 'preaching to the choir'; but he has also challenged some of the tenets of the most militant atheists. For instance, Harris has argued in the past that not all prophets were charlatans, and that there are certain states of mind (induced, e.g., by drugs or punishing meditation regimes) that can legitimately fool the mind into thinking it is having a transcendental experience.

Nevertheless, The Moral Landscape (ML; 2010; Free Press) is probably one of the single most contentious books I've read that wasn't just a bunch of bullshit about aliens or intelligent design creationism. It really covers only a single topic from a number of different angles: Harris argues that the classic is-ought problem, originally formulated by Hume, is untenable. Wikipedia explains the is-ought problem as follows:

"Hume... noted that many writers make claims about what ought to be on the basis of statements about what is. However, Hume found that there seems to be a significant difference between descriptive statements (about what is) and prescriptive or normative statements (about what ought to be), and it is not obvious how we can get from making descriptive statements to prescriptive."

Thus we cannot derive what ought to be from what is. For example, while evolution may pit individuals and species in an endless struggle for existence, this says nothing about how humans should organize their societies.

Harris points out that the is-ought problem is itself self-defeating: It is not possible to arrive at what 'is' without first defining an 'ought'. For instance, scientific knowledge values evidence, reason, Occam's razor, etc. in defining what is. But valuing scientific knowledge over, say, revealed 'wisdom' is a subjective judgement itself. In typical Nietzschean defeatism, there is no completely objective definition of what 'is'. Thus, if we can accept that we're already making a moral judgement in order to invoke the is-ought rule, there's no reason to hold it up as some sort of universal law.

Alright, with that in place, Harris makes another contentious claim: Morality is not arbitrary, as moral relativists often allege. Rather, morality (must) concern itself with the well-being of conscious creatures. In all societies, no matter how moral norms vary, they are concerned with well-being - though some norms place more importance on well-being in a supposed 'after-life'. Different beliefs about the world lead to different customs and norms, but ultimately what we're concerned with is human and animal well-being (this took me a little bit of thinking, but I believe that Harris is ultimately correct).

The author points out that there may be many ways to organize a society that roughly equally maximize[1] the overall well-being of its inhabitants, which is where the concept of a landscape comes in. Nevertheless, there are certainly worse ways to organize societies. Some seem rather obvious (or perhaps not, depending on your point of view), and yet moral relativism seems oblivious. An interesting example from Harris himself:

In 1947, when the United Nations was attempting to formulate a universal declaration of human rights, the American Anthropological Association stepped forward and said, it can't be done. This would be to merely foist one provincial notion of human rights on the rest of humanity. Any notion of human rights is the product of culture, and declaring a universal conception of human rights is an intellectually illegitimate thing to do. This was the best our social sciences could do with the crematory of Auschwitz still smoking.

But we can make a more compelling case by pointing out that some people's incorrect mystical beliefs lead to their not taking their children to doctors when they are ill, for example. Thus suffering is increased for no legitimate reason.

If this all sounds a bit overwhelming, Harris concedes the obvious point that the problem of determining the available 'peaks' on the moral landscape is as daunting as any other faced in academia. As in any field, answers can only come step-by-step from research, and a big part of this could come from studies of how the brain actually works. Understanding human biases and cognitive impairments will go a long way toward understanding why people behave in ways that seem antithetical to fostering a more harmonious society. I'm not doing justice to a large part of the book, so if you're interested, read it yourself.

Much like the reviewers of The Moral Landscape I've read, I find it difficult not to cringe at some of the ideas being presented here. I'm not disturbed by Harris's arguments against the is-ought problem, or his fundamental thesis that there are better and worse ways to organize society. However, I think we all recoil when even the idea of a dystopian 'perfect society' is raised. Regardless, I don't think that this is what Sam Harris is advocating (he says as much in his book). Rather, I think he's more concerned with pointing out the more obvious broken, non-well-being-maximizing societies that exist right now, and questioning our intuition that it's not our place to tell some maniacal dictator how to run his or her (alright, his) junta.

 

[1] Harris does spend some time discussing the economics literature on what various definitions of 'maximize' may mean. It's not always as simple as it sounds: in discussions of wealth, for instance, we can envision a situation in which the richest person in society gets richer without affecting those less fortunate. This may maximize total population wealth, but it may not maximize overall stability or happiness, as the gain is entirely concentrated in one person's hands.
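To put toy numbers on that footnote (my own invented figures, not from the book), here's a quick sketch: total wealth goes up while no one is made worse off, yet a standard inequality measure (the Gini coefficient) also goes up.

```python
# Toy illustration of footnote [1]: a change can raise total wealth
# without hurting anyone, while still worsening inequality.
# All numbers invented for illustration; nothing here is from the book.

def gini(wealths):
    """Gini coefficient via mean absolute difference; 0 = perfect equality."""
    n = len(wealths)
    mean = sum(wealths) / n
    diff_sum = sum(abs(a - b) for a in wealths for b in wealths)
    return diff_sum / (2 * n * n * mean)

baseline = [10, 20, 30, 40, 100]  # a five-person society
windfall = [10, 20, 30, 40, 200]  # richest person gains; no one else changes

print(sum(baseline), round(gini(baseline), 3))  # 200 0.4
print(sum(windfall), round(gini(windfall), 3))  # 300 0.533
```

Total wealth rises from 200 to 300 with no one worse off, but the Gini coefficient rises from 0.4 to 0.533 - 'maximizing' wealth and 'maximizing' equality (or stability, or happiness) are simply different objectives.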

Saturday
Feb 5, 2011

Sacrifices of The Academic Life...

As I approach 30 I've begun to think about things that previously didn't bother me. You know, stuff like: What do I want to do with my life? Where do I want to live? Do I ever plan to start a family? I can only imagine that such mental meanderings are typical at this point.

However, I have begun to spend more time thinking about one issue in particular: The large number of sacrifices that one has to make in order to pursue a career in academia. Two recently published articles, one in The Economist (The Disposable Academic) and a more recent opinion editorial in Science Careers (Falling Off the Ladder), really hit home. The supply of Ph.D.s in most fields vastly exceeds the extremely limited demand, and yet the number of Ph.D.s graduating seems to be perpetually increasing. Thus, competition for a very small pool of 'dream' jobs is extremely fierce, leading to workloads that routinely blow the minds of people outside the field: You work from 9 to 7 on weekdays, continue working in the evenings, and routinely put in full days on weekends?

The defense of such an arrangement is always the same: It doesn't 'feel' like work if you love what you do! For a long time I was able to take comfort in this very mantra (and I must admit that it's still somewhat comforting nowadays as well), but I've really begun to think about whether it's healthy. I love science of course, but surely being economically destitute, and having absolutely no certainty about future job prospects - even to the point of having no good idea when I'll even be able to apply for prospective jobs - does not contribute to life satisfaction and mental well-being.

If I am on a career path toward academia, at least I can say that I'm doing pretty well. I've got a decent number of publications, a fairly robust set of skills, and an idea for a 'niche' that I can carve out for myself in terms of research projects. I guess that as the pressure builds and the duties grow, these types of thoughts eventually must bubble to the surface. There are career options other than that of a lab P.I. (Principal Investigator) - I wonder how many people go through a Ph.D. knowing that they'd rather do something other than academia? Or does everyone want to be a P.I. at first?

Tuesday
Feb 1, 2011

Book Club: How Markets Fail...

I've been quite interested in learning more about economics lately. I'm not sure what brought on this desire, but it probably has something to do with various things I've read in the field of game theory-as-applied-to-evolution, and how such things often apply equally well to human decision making. However, I must say that I'm not really interested in simple economic principles and idealized, steady-state models. Rather, I'm more interested in work looking into how human beings actually behave, which is to say, not always rationally.

Along this vein, John Cassidy's How Markets Fail: The Logic of Economic Calamities (HMF; Penguin; 2009) caught my eye. Using the recent financial crisis as a backdrop, the author compares two categories of economic theory. The first, 'Utopian Economics', encompasses the mathematically based, steady-state (i.e., normal market operating condition) models that make a common set of assumptions about human behavior: Namely, that people act rationally in order to maximize their own self-interest. Such theories are the bread-and-butter of the classical economics literature and much of the textbook pro-free-market argument. The second category, which the author gives the much more reasonable-sounding name 'Reality-Based Economics', involves models of risk that attempt to account for how individual behavior may not be perfectly rational, for reasons ranging from cognitive biases to asymmetrical access to information. Such models should also attempt to quantify the cost of externalities ignored elsewhere.

Cassidy's book covers a lot of material, starting with a lengthy history of modern American libertarianism, followed by an equally lengthy exploration of research into why certain classical economics principles have failed, and culminating with a detailed overview of the housing bubble that led to the near collapse of the financial market in 2007. The crux of the author's argument is that there are many aspects of the reality of day-to-day financial transactions, both large and small, that lead to poor assessment of value vs. risk. In the small scale of the individual, this leads to waste and debt, but in the hands of large-scale, too-big-to-fail financial institutions, these issues can lead to cascades causing an entire financial system to collapse. A sampling of these issues follows:

Moral Hazard: An unfortunate consequence of American economic culture is that government-run corporations are unpopular. However, instead of completely privatizing such institutions (e.g., Fannie Mae, Freddie Mac), the government has let them be run for profit, but with implied government backing - i.e., if you fail, we'll back you up. Thus investors have tended to over-value these corporations' investment opportunities, secure in the knowledge that their money is protected, creating huge problems due to the undervaluation of risk.

Perverse Incentives: In the 1980s, it was decided that corporate CEOs should receive some percentage of their compensation as stock options. This was thought to ensure that corporate leaders always acted with the long-term stability of the company in mind. However, in modern corporations such stock options dwarf base salaries, creating incentives to boost short-term profits, often by taking on increased risk. As anybody who followed the Enron scandal knows, prior to the company's meltdown some folks made off with hundreds of millions of dollars based on questionable business practices.

Poor Information: A large part of the world financial crisis concerned investments in 'toxic assets' such as collateralized debt obligations (CDOs). One problem with these investments was their relative infancy, and the resulting inability of financial institutions to properly model the risk associated with trading in them. Banks make money by using sophisticated models to assess the relative risk of loaning money to individuals and institutions. As the author discusses, the statistics behind these models were ill-equipped to deal with these new, unexplored sources of risk.
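For a rough sense of the kind of calculation involved, here's a textbook-style sketch of expected-loss arithmetic. The identity EL = PD × LGD × EAD (probability of default × loss given default × exposure at default) is standard credit-risk material, but the numbers are my own and the example is not from the book:

```python
# Textbook expected-loss sketch: EL = PD x LGD x EAD.
# The inputs below are invented for illustration; the point (per Cassidy)
# is that for novel instruments like CDOs, PD and LGD were estimated from
# short, benign histories, so the models were only as good as these
# poorly constrained numbers.

def expected_loss(pd: float, lgd: float, ead: float) -> float:
    """Expected loss on a loan: default probability x severity x exposure."""
    return pd * lgd * ead

# A $200,000 mortgage with a 2% annual default probability, where the bank
# expects to recover 60% of its exposure in a default (so LGD = 40%):
el = expected_loss(pd=0.02, lgd=0.40, ead=200_000)
print(f"Expected annual loss: ${el:,.0f}")  # Expected annual loss: $1,600
```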

The small highlights I've made above are arguably overly reductive, and really don't do justice to all of the details covered in this book. Furthermore, I'm not in an excellent position to evaluate the book's arguments, but the reviews I've read seem to indicate that they're not easily dismissed as ridiculous (it was a 'Book of the Year' in The Economist, for instance). All I can say is that Cassidy's arguments, and some of his suggestions - none particularly radical - caused me to think seriously about the nature of risk and investment, the role of government, and the power of financial institutions to maintain a stable market. HMF is not a socialist screed - far from it - but it does ask us to consider the possibility that 'Rational Irrationality' is a normal part of individual and group behavior, and that we must take this into account in formulating our societal institutions.

 

P.S. For anyone interested, I'm really enjoying NPR's Planet Money podcast, which frequently discusses a lot of the issues raised in the book.