Why ‘Ghostbusters 3’ Might Be Awesome

Published: The Good Men Project (September 27, 2014)

If a female-led Ghostbusters 3 becomes a reality, what could that mean for the future of female comedy writers, actors, and directors?

 

Recently, Bill Murray made entertainment headlines by listing his picks for the new line-up to spearhead the long-awaited Ghostbusters 3. The names included Melissa McCarthy, Kristen Wiig, Linda Cardellini, and Emma Stone.

As you may have noticed, each of these comedians and/or comically-gifted actresses is female.

Since this isn’t the first time the prospect of an all-woman Ghostbusters movie has been bandied about, it makes sense that the feminist dimension of the subject hasn’t received much attention from the blogosphere. Now that there are specific names being considered, however, it is important for film aficionados and feminists alike to explore what implications having an ensemble of talented actresses “take over” such a mega-blockbuster franchise would have for American pop culture.

 

  1. It could be as big as “Frozen” – and that’s not necessarily a good thing.

Frozen wasn’t groundbreaking because it was the first animated feature from a major American studio about non-traditional female protagonists (i.e., not a damsel in distress and/or a prize to be won by a male secondary character). Although such movies have been far less common than their socially regressive counterparts, the occasional Mulan or Coraline has popped up every few years or so, netting respectable profits and modest fan bases in the process.

What distinguished Frozen from its predecessors is that it wasn’t just successful… it was hugely successful. Worldwide it was the highest grossing film of 2013, the highest grossing animated feature of all time, the fifth highest grossing film of all time, and one of only nineteen movies to earn more than $1 billion. More important, though, was that Frozen was embraced by the zeitgeist. Its victory for Best Animated Feature at the Oscars was a given; merchandise based on its characters was everywhere; the smash hit song “Let It Go” became the earworm of the year. The movie wasn’t just popular; it was as universally well-liked as any single work of art can realistically become. To the extent that these things can be objectively measured, one can safely say that Frozen was a very, very good movie.

Barring a truly horrendous marketing campaign or a stroke of freakish bad luck, the chances are that Ghostbusters 3 would also be a significant box office success. It’s close to axiomatic at this point that long-awaited sequels to nostalgia-tinged beloved franchises tend to make a mint. This financial success, however, definitely doesn’t guarantee cultural success. Blockbusters like Star Wars: Episode I – The Phantom Menace and Indiana Jones and the Kingdom of the Crystal Skull may have been smash hits at the box office, but the stench of notoriety lingers over both due to widely-held complaints about their quality, many of them well-founded (Jar Jar Binks and nuking the fridge, anyone?). This brings me to my second point…

 

  2. It would be the first installment in an iconic franchise marketed on the basis of its talented female comic leads. In other words, it would need to be funny.

Even if an all-female Ghostbusters 3 doesn’t match Frozen dollar for dollar, it would almost certainly be one of the year’s biggest hits in terms of revenue. Nevertheless, it would be a disaster if the movie were deemed a qualitative failure.

It’s a well-known show business fact that women have a much harder time making it in comedy than men. While comedians like Tina Fey and Amy Poehler have done a great deal to change that, the reality is that no blockbuster anywhere near the level of a Ghostbusters movie has ever staked its fortunes on the public’s acceptance of an all-female cast (after adjusting for inflation, the highest grossing comedy with a female lead in American history is My Big Fat Greek Wedding, which currently ranks #152; by contrast, Ghostbusters is #32). If the movie were a success, Ghostbusters 3 could be a forward stride for female comedians comparable to what Frozen was for non-traditional female leads in animated films. If it were deemed a failure – whether because of latent sexism among moviegoers, a poor script, unflattering comparisons to the brilliant original (and its stronger-than-average follow-up), or any combination of the three – it would be at best a wasted opportunity, at worst a setback.

In short, Ghostbusters 3 would not only risk soiling the good name of a cinematic national treasure (again, see The Phantom Menace and Kingdom of the Crystal Skull), but could turn a potential milestone in the history of women in comedy and cinema into an embarrassing footnote. That’s a good reason why…

 

  3. It’s significant that the movie would be partially helmed by men… and necessary that the women have an equally powerful creative voice.

The first two Ghostbusters movies were directed by Ivan Reitman, written by Harold Ramis and Dan Aykroyd, and starred Ramis, Aykroyd, and Bill Murray. Even if Murray doesn’t get his casting wishes on a Ghostbusters 3, the chances are that the surviving members of the original Ghostbusters team would have a strong hand in every step of the creative process – in particular Aykroyd, who has remained outspoken in his unbridled enthusiasm for a possible third film even after all these years.

This is as it should be. For one thing, the franchise is their baby and, speaking strictly from the standpoint of artistic ethics, it would be unconscionable for anyone to insist that it be wrested away from them to fulfill a sociological agenda. More significantly, though, their presence would demonstrate a symbolic passing of the torch (or should I say proton pack?) from one generation of comedians to another. As long as the third Ghostbusters movie lived up to the standard of the first two films, the collaboration between its male creators and anointed female successors would be akin to David Letterman hand-picking Stephen Colbert as his successor on The Late Show. It would be a poignant demonstration that the legacy of top-notch, special-effects-driven comedy transcends gender lines and knows only one criterion: funny.

For this to work, however, the voices of the female stars would need to be as evident in the final product as those of its director and writers. Even though Murray wasn’t a credited writer on either of the first two Ghostbusters films, his distinct sensibility was as clearly stamped on the dialogue delivered by Peter Venkman as Ramis’s dry nerdiness was on the character of Egon Spengler and Aykroyd’s boyish glee was on Ray Stantz. Similarly, it would be necessary for audiences to recognize the individual comic voices of Wiig, McCarthy, Cardellini, Stone, and/or any of the other female comedians who star in the movie. Without that recognition, their casting could risk coming across as patronizing – which would be an unfortunate note on which to pick up (and possibly end) the franchise.

Although the possibility of a female-led Ghostbusters 3 remains purely speculative at this point (far more auspicious signs than this one have come and gone without bearing fruit), the larger points that come to the fore when discussing this hypothetical movie hold true regardless. For all of the progress made in the entertainment industry over the past several decades, plenty of sexism, racism, and other forms of prejudice still define the kinds of movies we see and the types of characters we expect to see in them. When changes are made, it is because the same people who achieved their first success by taking risks with their art summon the courage to do so in a way that is socially meaningful as well as entertaining.

This is the lesson that Hollywood will hopefully learn from Murray’s musings about Ghostbusters 3, even if they don’t ultimately bear fruit. We live in an era when reboots and long-delayed sequels are crass disappointments far more often than not, in no small part because they so often lack the verve and originality of their predecessors. For any Ghostbusters sequel to buck that trend, it would have to recapture what made the first two movies so funny and entertaining while bringing something new to the table in the process. Having Ghostbusters 3 shatter one of cinema’s most enduring glass ceilings seems like as good a way to do that as any.

 

Anita Sarkeesian & The Self-Dehumanization of a Generation

Published: The Good Men Project (September 24, 2014; republished on October 15, 2014)

The backlash against statements by gaming commentator Anita Sarkeesian has writer Matthew Rozsa questioning mob mentality in the digital world.

 

Before discussing the Anita Sarkeesian controversy – better known these days as “GamerGate” – I’d like to submit the following quote for your consideration, courtesy of the acclaimed 1976 dark comedy Network:

“… this is no longer a nation of independent individuals. It’s a nation of some 200-odd million transistorized, deodorized, whiter-than-white steel-belted bodies, totally unnecessary as human beings, and as replaceable as piston rods…”

For the time being, let’s just add 100 million souls to that outdated population estimate and tuck the rest of the quote into some mental pocket. It’s time to move on to Sarkeesian and her online series, Tropes vs. Women in Video Games.

When I first heard about the firestorm that had been ignited in response to these videos – ranging from the predictable misogynistic slurs and efforts (some successful) at hacking her personal information to multiple rape and death threats, at least one of which is currently under investigation by federal authorities – I was understandably intrigued. While it goes without saying (at least among decent people) that this borderline psychotic harassment is reprehensible, I couldn’t help but wonder:

What manner of tinder could spark such a conflagration of hatred against a single human being?

Understandably, I expected Sarkeesian’s videos to be provocative, perhaps even groundbreaking. You can only imagine my disappointment when I was instead treated to a series of strikingly unremarkable observations and opinions.

Make no mistake about it: Sarkeesian does a fantastic job of creating entertaining and persuasive video essays. From a production standpoint alone, she is as skillful a practitioner of the format as anyone online today. Her arguments are presented with a matter-of-fact intelligence that allows viewers to focus on the material rather than the pundit’s eccentricities (a common pitfall for many Internet commentators). Judged against the many other “commentary videos” one can find these days, Sarkeesian’s certainly rank among the most thoughtful and well-researched around.

The problem is that I was expecting Sarkeesian to rock the boat, to hawk some proverbial phlegm in the faces of gamers everywhere – not because I harbor any malice toward gamers, but simply because there was no other rational explanation for all the hubbub. Quite to the contrary, Tropes vs. Women in Video Games covered territory that virtually anyone who has lived around video games already knows to be true: The prevalence of the “Damsel in Distress” trope in popular video game franchises like Mario and Zelda, the egregious sexualization of and violence toward women in many modern games, the paltry number of well-developed female characters compared to male ones, etc. Granted, she performs her analytical deconstruction not as a “gamer” but as a student of narrative theory. In particular, she draws on conventional feminist understandings of gender paradigms to discuss how sexist archetypes have been embedded in various entertainment media. Even there, however, her points of reference are standard fare to anyone who has taken an undergraduate course on gender theory or women’s history.

In light of the fact that Sarkeesian’s arguments, though well-informed and persuasive, are fairly rote, we are left with two disturbing implications about why there has been an emotionally violent backlash against her. The first and most obvious is that misogyny is alive and well in the gaming community, the protests of her more vociferous detractors notwithstanding. Sarkeesian isn’t even the first woman to be prominently abused by sexist gamers; earlier this year Zoe Quinn, an independent game developer, was hounded by gamers after her ex-boyfriend publicly smeared her for alleged sexual indiscretions. However, given that the misogyny issue has already been covered in depth by authors like Amanda Marcotte of The Daily Beast, Adi Robertson of The Verge, and Chris Tognotti of Bustle, I’ll refer you to their articles and instead focus on the other sinister undercurrent.

We can start by looking at the hysteria. While the venomous misogyny at play against Sarkeesian is certainly disturbing enough in its own right, saying that it’s disproportionate to the actual scope of the “problem” would be a gross understatement; it is, in fact, an overreaction to a problem that doesn’t even exist. For all of the sound and fury expended against Sarkeesian, her videos have failed to pose any kind of existential threat to the video games these players so fervently embrace. At most, her detractors complain about excessive “political correctness” and refer to gaming’s feminist critics as “social justice warriors” – claims that, notably, don’t contest the accuracy of Sarkeesian’s observations, but instead try to shift attention to the notion that she’s guilty of being too strident. More often, though, these critics simply spew bile and foam at the mouth, replete with the same sexualized slurs and violent language that have so long marred the gaming community… all of it aimed at a single voice among millions online, one offering observations that it’s hard to believe any but the densest gamers have failed to at least notice in the past.

Again, it must be reiterated: The obvious culprit here, whether they admit it or not, is a desire among these predominantly male critics to protect a gender-based privilege they have long enjoyed in the content of their gaming. As women make up a larger and larger percentage of the gaming community (some polls suggest they’re already a majority), this outburst of misogyny deserves attention and concern. At the same time, there is a deeper phenomenon at play in how this coded hate speech has manifested itself. More specifically, the gamers who hate Sarkeesian do so for the same reason a class of young children will isolate a single classmate for persecution – at some point an irrational and intense emotion began to sweep through the group like a wave, with the ones still riding it betraying an unwillingness to exercise the minimal thought necessary to not be a buoy.

This may not seem like a big deal, but it’s enough to make thousands upon thousands of men invest weeks of their time and energy in writing blog and message board posts that exist for no other reason than to attack Sarkeesian’s character, while thousands more participate in and/or sympathize with campaigns that have no goal besides harassing her. It is downright ominous for reasons that go far beyond video game culture and women’s rights, because what we’re seeing isn’t a movement in any proper sense of the term.

It’s an angry mob.

The mob mentality driving the anti-Sarkeesian movement is turning its participants not into temporary boors, but into permanent drones. To better understand what I mean, I refer first to the definition of “mob mentality” provided by Tamara Avant of South University – Savannah:

“When people are part of a group, they often experience deindividuation, or a loss of self-awareness. When people deindividuate, they are less likely to follow normal restraints and inhibitions and more likely to lose their sense of individual identity. Groups can generate a sense of emotional excitement, which can lead to the provocation of behaviors that a person would not typically engage in if alone.”

Of course, angry mobs have been around for at least as long as historians have existed to record them, and they have as often amassed over frivolous causes as they have over valid ones (at least insofar as any reason for creating a mob can be “valid”). That said, there is a uniquely prolonged and artificial nature to the anti-Sarkeesian phenomenon, which can be mainly attributed to the fact that its locus is online. Before the Internet allowed instantaneous worldwide communication, there were two types of mobs: Those that materialized in the “real” world of physical space and time, and those that used the somewhat less “real” medium of pen-and-paper (angry letters, newspaper crusades, etc.)… although even the latter was still constrained in some form by physical reality, be it quantities of ink and looseleaf or the time it took to convey messages. While the inextricable link between discernible space-time and human connectivity was first loosened by radio and television, the Internet has practically abolished it altogether. Millions of people are now plugged into a world of 0s and 1s, hypnotically consuming its content and (unlike with radio and television) directly interacting with the medium itself.

The inevitable result has been the rise of a generation that too often views the digital world not as a means to an end, but as a self-contained end in its own right. While the flame of an angry, mindless mob will burn out in due course when confined by the limits of the physical world, a similar mob that makes its home in cyberspace can survive for previously unimaginable periods of time. Instead of the inevitable cooling process, its members’ violent emotions grow hotter and hotter. Likewise, instead of slowly regaining perspective by coming to their senses as individuals, they make a habit of mimicking each other’s words, opinions, and actions until each one does little more than make a minor contribution to the monotone angry drone produced by the whole.

In the past, this sort of thing would happen all the time on a small scale. Now it happens among such large numbers, over such a massive geographical and chronological scale, and with the cover of anonymity offering relief to the cowardice so integral to the character of those who participate in such mobs, that we have reached a deeply troubling point in our cultural history – one in which the act of being a drone in a mob is mistaken for having an idea and a cause.

This brings us to the closing lines of the Network monologue from the introduction:

“The whole world is becoming humanoid, creatures that look human but aren’t. The whole world, not just us: We’re just the most advanced country, so we’re getting there first. The whole world’s people are becoming mass-produced, programmed, numbered, insensate things…”

While Network was a parable on how the specific medium of television was eroding human individuality, its message serves as an unsettlingly good fit for the anti-Sarkeesian backlash. It may not be the exact same type of dehumanization predicted in movies like Network, or in books like 1984 or Fahrenheit 451; then again, what we are seeing here is a phenomenon in which thousands of people across the globe have maintained a prolonged state of intense irrational emotion in the name of a cause as lacking in real-world consequence as the characters in the video games they play. While television gradually created a global culture in which independent thought is subordinated to the fiscal agenda of the large corporations who control the networks, the Internet has created an environment in which the kinds of mob behavior that would die down under normal circumstances are artificially prolonged… and, indeed, become more intense over time.

None of this is being said as a criticism of video games. Like other popular forms of escapism, video games can soar to the heights of great art, intellectually and emotionally challenge their players, and offer psychologically healthy catharsis and relaxation for their users. Similarly, the fact that the medium is rife with sexism is less a reflection on video games themselves than on the culture that produces them. If the past serves as precedent, games will eventually join cinema and television by providing less gender-biased content alongside their traditional fare. The fact that the organizers of the Game Developers Choice Awards have honored Sarkeesian with the Ambassador Award (which goes to individuals who help video games “advance to a better place”) is as hopeful a sign as any.

In light of the hostility that has been directed toward male defenders of Sarkeesian (many of whom are derisively called “White Knights,” a term that misogynists use in the same way white supremacists refer to non-bigots as “traitors to their race”), I think it’s necessary to end this article with the following statement:

I’m supporting Sarkeesian in this controversy not only because her observations about sexism in video games are absolutely right, but because no one – male or female – deserves to be harassed by an emotionally out-of-control mob. This position is born not out of ideology, but out of a quality I fear the GamerGate drones are beginning to lose: Simple human decency.

Al Gore is the single-issue candidate we need

Published: Salon (July 19, 2014)

Maybe he wouldn’t win, but Al Gore could still make climate change one of the biggest stories of 2016

With Republican pundits speculating on the possibility of a third Mitt Romney bid for the White House, I think it’s appropriate to mention another two-time presidential candidate whose moment has come in 2016 — Al Gore.

Allow me to explain.

I have never met Gore, nor am I connected with anyone who has a professional interest in seeing a renaissance for Gore’s political career. Similarly, I am not writing this article in my capacity as a political columnist, graduate student or local Pennsylvania politician, but as a concerned citizen — not only of the United States, but of the world. Like President Obama, who made news this week by pointing out that the climate change crisis threatens every aspect of America’s future, I want to make sure my children will grow up in a strong country, one that is safe and secure on a healthy planet. And America needs Al Gore to make a bid for the White House because of his unique credibility on anthropogenic global warming.

As the EPA explains on its website, a failure to reduce greenhouse gases in our atmosphere will have a devastating effect on “our food supply, water resources, infrastructure, ecosystems, and even our own health.” In addition, as former Navy Rear Admiral David Titley explained in a recent op-ed in the Pittsburgh Post-Gazette, the confluence of violently unpredictable changes in our weather patterns and a drastic reduction in vital resources will destabilize the international political scene, as the countries that stand to gain or lose the most from climate change will be compelled to overhaul their economic and foreign policies accordingly. As Titley somberly put it, “Climate change is an accelerating threat to national security.”

Yet even though a recent survey of more than 12,000 peer-reviewed climate science papers found that 97 percent of climate scientists agree that global warming is man-made, a CBS News poll last May found that only 49 percent of Americans accept that climate change has been caused by human activity, with 33 percent attributing it mainly to natural patterns, 11 percent claiming it doesn’t exist, and 6 percent either saying that they don’t know or that it is caused by both. Moreover, climate change has long struggled to be taken seriously as a major national priority, a problem reinforced last month when a Bloomberg National Poll found only 5 percent of Americans ranked it as the most important issue facing the country today (placing it seventh).

The good news is that, as Berkeley psychology professor Michael Ranney demonstrated in a 2012 study, people can change their minds when the dynamics of climate change are broken down for them in a straightforward and easily digestible manner. To quote snippets of the 400-word explanation that Ranney found was most persuasive:

Since the industrial age began around the year 1750, atmospheric carbon dioxide has increased by 40% and methane has increased by 150%. Such increases cause extra infrared light absorption, further heating Earth above its typical temperature range (even as energy from the sun stays basically the same). In other words, energy that gets to Earth has an even harder time leaving it, causing Earth’s average temperature to increase – producing global climate change…

(a) Earth absorbs most of the sunlight it receives; (b) Earth then emits the absorbed light’s energy as infrared light; (c) greenhouse gases absorb a lot of the infrared light before it can leave our atmosphere; (d) being absorbed slows the rate at which energy escapes to space; and (e) the slower passage of energy heats up the atmosphere, water, and ground.
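For readers who want that chain of steps in equation form, it compresses into the standard zero-dimensional energy-balance model taught in introductory climate science (a worked illustration of my own, not part of Ranney’s text): at equilibrium, the sunlight Earth absorbs must equal the infrared energy that escapes to space,

$$(1 - \alpha)\,\frac{S_0}{4} \;=\; \epsilon\,\sigma\,T_s^{4}$$

where S₀ ≈ 1361 W/m² is the solar constant, α ≈ 0.3 is the fraction of sunlight Earth reflects, σ = 5.67 × 10⁻⁸ W m⁻² K⁻⁴ is the Stefan-Boltzmann constant, and ε ≈ 0.61 is the effective emissivity, a measure of how easily infrared light escapes the atmosphere. Plugging in those values and solving gives a surface temperature T_s of roughly 288 K (about 15°C). Adding greenhouse gases lowers ε, and the equation can only rebalance at a higher T_s: steps (c) through (e) rendered as arithmetic.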

Unfortunately, the simple science has been obscured in our political debate. While special interest groups can make some headway by lobbying, no weapon comes remotely close to the potency of a high-profile presidential campaign when it comes to mobilizing large sections of the population and transforming public opinion. Even an Academy Award-winning movie that became part of our pop culture zeitgeist — I’m referring, of course, to Gore’s iconic documentary “An Inconvenient Truth” — had a limited effect because it was viewed as the pet project of a supporting character in the ongoing American story. For better or worse, we live in a society that is over-saturated with issues and advocates; as a result, anyone who is not an active main character on today’s political stage quickly finds his or her cause lost in the noise or, at best, championed only by a static niche of activists and casual policy junkies. The people running for president, however — and in particular someone like Gore, who has the unique distinction of having won the popular vote in a general election, even if he lost the war — are never just supporting characters.

This brings me to the critical detail of a hypothetical Gore candidacy: It would have to be a single-issue campaign. In part this is a fail-safe measure; while a strong case can be made that Gore would make an excellent president (a premise with which a plurality of American voters agreed in 2000), the primary objective would not be to promote Gore the man, but to guarantee due attention is paid to the threat of climate change. While other campaigns on both sides would continue the practice of focusing on several issues in the name of advancing a name brand (i.e., the individual candidate), Gore would have the advantage of representing not his own cause, but the cause of creating an environmentally sustainable future. Indeed, he wouldn’t have to actually win in the primaries to achieve his goal. As long as he consistently received a large enough percentage of the primary vote to be considered a “major player,” he would (a) keep climate change in the national headlines; and (b) force the other candidates to prioritize climate change in the hope of winning over his supporters.

I don’t want to oversell what a Gore candidacy can accomplish to save our planet. Obviously it would be a game-changer if he were elected, but should the Democrats instead nominate, say, Hillary Clinton or Joe Biden, Gore could force them to take a hardline stand on the issue. Even though most Democrats agree that global warming needs to be addressed, it is usually prioritized below other matters like the economy or foreign policy. This is no doubt because it is viewed as a distant threat rather than an immediate one — a perspective that may be a luxury for baby boomers but, alas, not for the millennials who will inherit the ecological disaster they leave behind.

Gore’s goal should be to force them to commit to a proactive and emphatic position on this matter, making the fight against climate change one of their top priorities, similar to what Ross Perot did for both parties on balancing the budget in 1992, what Eugene McCarthy did for Democrats in mobilizing opposition to the Vietnam War in 1968, or what President John Tyler did in pressuring the (still Jacksonian) Democrats to nominate a candidate who would annex Texas in 1844.

Although there have been plenty of single-issue candidates in the past, few have had Gore’s eminence or name recognition. As such, this approach — if executed correctly, especially from a PR standpoint — could come across as refreshingly novel, helping Gore stand out from the pack. This is where the argument that Gore has a civic duty to run comes into play: If he truly believes that we are running out of time to effectively address man-made climate change, then he must appreciate the importance of elevating the issue in our national debate.

While most people associate Gore with the tragedy of the 2000 presidential election, his greatest political campaign occurred more than a decade earlier, when he ran against the likes of Michael Dukakis, Dick Gephardt, Paul Simon and Jesse Jackson for the 1988 Democratic presidential nomination. Aside from Jackson, Gore was the only Democratic candidate in that race who associated himself with a clear cause, not only calling attention to the urgency of addressing global warming but striving to make it one of the central issues of the election — to no avail. As he later recalled, “I made hundreds of speeches about the greenhouse effect, the ozone problem, that were almost never reported at all. There were several occasions where I prepared the ground in advance, released advanced texts, chose the place for the speech with symbolic care — and then nothing, nothing.”

Thanks in no small part to Gore’s own efforts, public awareness of this important issue has dramatically increased in the twenty-six years since that first campaign. While Gore would have probably had a better chance of beating George H. W. Bush than any of the other Democratic aspirants (his reputation as a Southern centrist made him the least vulnerable to the Bush team’s dirty tactics, which were ultimately successful against Dukakis, the eventual nominee), he simply lacked the fame and clout to force global warming onto the national radar. Today he is a former vice president, a Nobel Prize and Academy Award-winner and an elder statesman; his name and reputation alone will make him a major contender as soon as he announces his candidacy (something true of no other Democrat in 2016 except for Clinton).

I know that I am asking a lot of him. Of the four Americans who were denied the presidency despite winning the popular vote, he is one of only two to have never made another bid for the White House (Andrew Jackson and Grover Cleveland both ran again — and, it’s worth noting, won). The other one, Samuel Tilden, was satisfied knowing that he would famously “receive from posterity the credit of having been elected to the highest position in the gift of the people, without any of the cares and responsibilities of the office.” While only Gore knows for certain why he has retired from electoral politics, I would imagine Tilden’s reasoning at least factors into Gore’s rationalization of his decision … to say nothing of his legacy in history.

Under normal circumstances, I would agree. As Gore knows better than anyone else, however, we are running out of time to address global warming, and no weapon would be as effective in fighting it as a Gore presidential candidacy. If ever a man and a moment have met, Gore is that man and the 2016 presidential election is his moment.

Robin Williams Shaped By Depression’s Double-Edged Sword

Published: The Morning Call (September 10, 2014)

The New York Times, which is capable of producing great insights, puzzlingly expressed surprise at Robin Williams’ legendary work ethic in its obituary. “Given his well-publicized troubles with depression, addiction, alcoholism and a significant heart surgery in 2009,” it commented, “Mr. Williams should have had a résumé filled with mysterious gaps. Instead, he worked nonstop.”

While I can’t speak to the effect that Parkinson’s disease, substance addictions and heart surgery could have had on Williams’ career, I’m not the least bit surprised that a severe depressive would also have an indefatigable work ethic.

Our prevailing cultural image of the depressive is that of the perpetually listless sad sack (emphasis on “perpetually”), haunted by inner demons and barely capable of crawling out of bed to trudge through the daily grind. That trope certainly applies to a great number of depressives, but it fails to capture the complete experience for many others, including myself.

Though it may seem counterintuitive, the same mental illness that can rob its victims of hope, energy and any genuine enjoyment of life can also be the fuel that propels them through seemingly successful careers and/or personal lives.

To understand why, it is necessary to turn to a much older passage from The New York Times — a profile of Mark Twain published in 1905 (appropriately titled “A Humorist’s Confession”) in which the septuagenarian explained that he had never worked a day in his life: “What I have done I have done, because it has been play. If it had been work I shouldn’t have done it. … The work that is really a man’s own work is play and not work at all.”

Even those unfamiliar with Twain’s biography have probably guessed by now that he, like Williams, grappled with lifelong depression.

While the sources of depression are myriad and mysterious, the relief that can be offered by playing at one’s true work is undeniable. That is because a depressive who has found his or her own work is engaging in an activity wholly distinct from making a living or escapist play.

It defines their very reason for existing. It inspires their passion, absorbs and replenishes their energy, and brings to fruition the very best of their minds, bodies and souls. This can take the form of a career, a recreational activity, an unpaid job, or services done only for a few other people in a household; its only absolute and universal quality is that, when a person has been matched with his or her true work, it just feels right.

Depression, by contrast, makes everything feel wrong. The problem is deeper than the simple state of being unhappy; it is a toxic philosophy of self-abnegation that embeds itself into your DNA.

Pleasures large and small “taste like ashes,” to crib a line from Lars von Trier’s “Melancholia”; you can only perceive your own qualities as deserving of pity or contempt. The future — not only your own, but that of other individuals you care about and/or of larger causes — seems bleak, and life itself almost farcically cruel. Other people appear ineffably alien, to the point that criticism and hostility inspire the anguish of a worst nightmare that has been validated, while respect and love are doubted as the misunderstandings of those who don’t truly know the real you.

Personally, I am extremely fortunate to have two developing career tracks that feel like play to me — as a Ph.D. student in American history and as a political columnist.

I know many other depressives who have had similar experiences. None of them has entirely shaken off the depression that mars their psyches, but all of them understand what it means to have a strong play ethic. Through their depression, they have become what Twain called “the great players of the world.”

Although there are plenty of nondepressives who play at their true work with as much diligence and enthusiastic abandon as their depressive counterparts, a nondepressive who plays does so as one dimension of his or her life — perhaps the most existentially rewarding one, to be sure, but still a single dimension. A depressive who has found his or her true work, on the other hand, develops a great play ethic because the alternative is not feeling alive at all.