Is Culture Stuck?

Maybe the movies are dying? And maybe also books, music and television are dying? Maybe it's the end of "legacy media" as we know it, and culture has fragmented into so many pieces that only sub-culture remains?

Or maybe culture is not stuck? Maybe the cultural guardians of the newslettersphere just don't like, or are not aware of, where culture is happening today?

People are reading fewer books, we hear, and young people especially are doing it wrong. People are watching fewer movies, especially in the theater. Music, many say, is dead; it's just the recycling of old styles and tropes, there's no innovation.

Also, vaudeville is dying. Wood block printing is in the red. And cave paintings are just graffiti now.

Shouldn't I be taking this more seriously? After all, I love books and movies and music, and art in general. I'm even trying to write fiction! Who am I kidding?

Will "mass culture" ever exist again?

Is this dire refrain something different from the role cultural critics have historically played, that is, articulating all of the ways the culture sucks now?

Is the novel, which has a cultural history of around two thousand years, or four hundred years, depending on what you think a novel is, actually dying? What do we even mean by that? In the US, hundreds of millions of books are sold every year. There are ups and downs, but the trend is up overall in the past twenty years. While people are reading physical books less than they used to, audio and digital books are making up some of the slack.

What about the movies? Are they dying? The pandemic took a monster bite out of the box office; the last couple years saw ticket sales on par with the early 2000s, which is a big downturn.

But what are "movies?" Major motion pictures are expensive, theatrical versions of "moving image art." This is why they are called movies in the first place. They are images that move. And there are more of them, in more forms, than at any other time in the last hundred years.

The cinematic experience of watching movies on a giant screen in the dark with a hundred strangers is not dead; it's just a shrinking part of the moving image pie. But the pie itself is getting bigger and bigger all the time. Film lovers and filmmakers have lamented the death of film, as opposed to video, but this technological change has barely registered for the vast majority of the audience.

At the same time, the switch to digital video formats over the last thirty years has led to an explosion of moving image art. Not just shorts and feature narrative films and documentaries, but a universe of new forms and styles that have barely even been recognized, let alone categorized, let alone appreciated by the cultural gatekeepers. It is, of course, the cultural gatekeepers who are dying—that much is certain.

Is that the real complaint?

I'm getting old, too. The other day I tripped and fell on concrete at a football game and severely bruised my right hand, making it hard to do a lot of normal activities. I feel myself getting older—a fall like that would have been nothing thirty years ago, and it might be everything in another thirty years. It's palpable. And hard to deny.

Yet, of course, like everyone, I'd love to deny it as long as possible. Naturally, the things I like and that have been important to me in my life are The Important Things That Must Be Preserved. Which is ridiculous; also, it's the usual stance of the old.

Is culture stuck, or are the critics stuck?

I love going to the movies and, sure, I wish the movies were much, much better most of the time. I'm bothered that young people will rarely watch any films from before the 1970s, and only one or two films from even that decade, sure. I think Hollywood executives have made a cascading series of disastrous decisions for a long time, and I'm happy to tell you about them.

But the moving image is in its infancy. The online video that pundits love to dismiss is taking the medium in new directions every day. It's not cinema—it's something new. It's already mass culture. And it's thriving.

Hollywood Sings the Blues

Ask anyone who closely observes the American film industry, and they'll tell you it's collapsing like an Antarctic ice shelf. Hollywood has not recovered from the box office catastrophe of the pandemic, nor from the subsequent historic strikes of 2023. But these realities, which are now in the past, unearthed deeper, more existential problems, for which no one (so far) has any solutions.

Well, no good solutions. The heads of the majors have been cancelling projects and firing people, in the hope that bottom-line discipline will put them back on track for "business as usual." It hasn't worked yet, but maybe we should give it time? Or maybe there's no going back?

What would that mean, anyway? In the minds of the executives at the conglomerates that run Hollywood, "going back" would surely mean "back to 2019," when the modern superhero movie project had its apotheosis in Avengers: Endgame, a world-bestriding success that surely meant there was no end to audience lust for much, much more of the same.

But maybe "going back" would mean somehow returning to the era before every studio started pouring billions into streaming services with no plan for making a profit, and subsequently losing their fucking shirts while, at the same time, they were giving people dozens of new reasons to skip the trip to the theater?

Wouldn't going back really mean returning to the moment before Netflix destroyed the television business model that had supported an entire industry for fifty years? The destruction of which led inevitably to the strikes, which the media spun as somehow about "artificial intelligence," a misnamed technology years away from creating the kinds of threats imagined by the WGA and SAG-AFTRA. The destruction of which led directly to the panicked spending by the majors on their own streaming services, which cannibalized their own businesses and cost them billions.

From the mid-80s, Hollywood enjoyed two decades of annual record box office. The increases were uneven, but they were always in the right direction. But in the early aughts, this changed. 2005 saw the first drop in twenty years, with returns down almost 6 percent (according to Box Office Mojo). Then, the returns began to seesaw every couple of years. Up a few percentage points, down a few percentage points, until the pandemic year of 2020, which saw an 81 percent drop. Things have trended up since then—how could they not—but 2024, even if it beats 2023 (and it looks like it may not quite get there), only gets us back to annual grosses in 2005 territory, a devastating result. Imagine that: two decades later, and the box office is right back where it was in 2005.

This is the bottom line, and looking at the stats, it's easy to put the blame on the pandemic, which certainly demolished consumer behavior vis-à-vis the movies. But something else was already going on.

What was going on was a different kind of business-as-usual for Hollywood. Three trends that had their roots in the 1990s flourished into the 2000s to shake the entertainment industry to its core, and two of them originated in the technology sector. No surprise there, since the rise of the personal computer and then the Internet has been the greatest cultural shift in our lifetime.

First, a new television golden age was brewing. The logic of capitalism tells us that corporations must make more and more money year over year, or they die like a shark that has stopped moving, and sink to the bottom to be stripped to the bone by opportunistic scavengers. For the movies, this meant the rise of the blockbuster in the 1970s and 80s and the gradual abandonment of mid-budget dramas for grown-ups (which became more acute as multi-national conglomerates began buying movie studios for some reason—it's always been a terrible business). Television writers saw an opportunity, and captured that adult audience, perhaps once and for all. With more mature, complex stories unfolding on the small screen, the movies (of course) doubled down on the younger audience that wants mostly spectacle. (A smaller-scale version of this played out in the late 50s and early 60s, as well.)

Second, online digital media companies began savagely disrupting older media business models. The first salvo nearly obliterated the music business as it had been for decades, and it was forced to adapt along with the artists who had come up through that old system. The next attack was aimed at the television business, and has succeeded to date in almost entirely uprooting one of the most successful media business models in history, the ad-based, calendar-based broadcast and cable TV business. Alongside these specific disruptions, the digital online space has created new forms of media that have further cannibalized older businesses that depend on people's attention. Video games, social media and online video have diverted younger generations almost completely from taking up the TV and movie habits of their immediate forebears. Crucially, traditional Hollywood understands fuck all about technology and—still—behaves as if the new forms that dominate younger people's leisure time and attention are merely passing fads. They might now be realizing that they are very wrong about that, but it may be far too late.

Third, by the late 90s, the technical barriers to entry of the filmmaking profession crumbled dramatically, thanks to new consumer technologies. Digital video cameras became a viable tool for filmmakers looking to make their own work without the expense of film cameras and processing, and the difficulties of raising money for a traditional film shoot. The Dogme 95 filmmakers were among the first to insist that cheap video could be a viable medium for independents; by the start of the new century, there was an explosion of video-shot documentaries and even feature films. New software and faster computers and increased storage capacity meant that non-linear editing, which only a few years earlier was the sole domain of highly-paid studio professionals, was now available to the masses. Hollywood soon got on board the video train and, as soon as the quality improved enough, mostly abandoned celluloid film after more than 100 years. The democratizing effect of these newly accessible technologies, much like the supposed democratizing effect of the Internet that was touted so much in those days, turned out to be overstated. But the barriers have continued to crumble, nonetheless, and now almost everyone on the planet has an extremely sophisticated film studio in their pocket. Rather than remaking Hollywood, however, these 21st century creators have innovated their own spaces online, spaces in which modern youth spend far more time than they do watching movies or television shows.

And yet, as always, the rumors of Hollywood's death have been greatly exaggerated. As it always has, the movie business will reinvent itself by necessity, as it did with sound, when television came along, when the studio system collapsed (and was helped to collapse by the Paramount Decree), when home video emerged, when indie films briefly ate Hollywood's lunch in the nineties, and when the Internet pulled people in new directions. It may feel different now, the crisis may feel more existential, and there are many incredibly talented movie and TV workers who are suffering at the moment, and who will continue to suffer for a bit. Then, the wheel will turn, and a new cycle will commence.

It will not be the takeover of AI. As William Goldman said about Hollywood, "Nobody knows anything," but I think we can relax about that one. At least for another round or two.

What Is Independent Film?

The definition of "independent film" is both straightforward and woefully inadequate. When speaking of the American film industry, an independent (or "indie") film is one that is produced by a company that is not a "major studio." The term "major studio" is not generic; it refers to the configuration of big Hollywood studios in a given historical era.

For example, in the Hollywood "Golden Age," the so-called "Big Five" studios were Paramount, Warner Bros., RKO, MGM, and 20th Century Fox. (These entities, like those of today, had also absorbed previous studios. For example, MGM was created by the merging of Metro Pictures, Goldwyn Pictures and Mayer Pictures.)

At that time, in addition to the Big Five, there was also the Little Three: Universal, United Artists and Columbia.

Disney, at that time, was an independent production company!

Of the former eight Big/Little studios, RKO is long gone (its library largely held by Warner); MGM and United Artists are owned by Amazon; Columbia is owned by Sony; and 20th Century Fox is owned by Disney. Universal, while still operating as a stand-alone studio, is owned by Comcast.

Today, the Big Five are Universal, Paramount, Warner Bros., Disney and Sony.

So, going by the classic definition, an indie film is one that was not produced by any of those five studios (or by whichever entities were considered the Big Five, Big Eight or Big Six of a particular time).

The original Star Wars (1977), by this definition, is one of the most successful independent films of all time. After all, while it was famously distributed by 20th Century Fox, it was produced by an independent company called Lucasfilm. But this is a good example of the complexities of categorizing films in this way. For many, a film that was independently produced, but distributed by one of the Big Five, cannot be fairly considered an independent film.

Everything Everywhere All at Once, the Daniels' Oscar-winning mind-bender, is an example of what's called an indie film. It was produced by four independent production companies and distributed by A24, a beloved producer/distributor of many well-known indie films, and became that company's first $100M+ hit.

Everything Everywhere had a budget reported to be in the range of $15M to $25M. Now, film budgets are notoriously difficult to suss out, since the reported figures are nearly always wrong. It's safe to assume that the budget was at least $25M, and whether or not that figure includes marketing is anybody's guess. Further, one of the production companies, AGBO, is the shingle of Joe and Anthony Russo, who are also co-directors of Avengers: Endgame and three other Marvel movies (to date), as well as directors and producers of broadcast television hits.

Is it reasonable to call anything the Russo Brothers produce "independent?" Maybe, maybe not.

Films, of course, are produced at a wide variety of budgets. These budgetary levels are a bit confusing, because different groups have named and defined them in many different ways. For example, the Screen Actors Guild (SAG), a union that has formal agreements allowing members to work on feature films of various budgets, breaks them down like this:

Theatrical - a film with a budget over $2M

Low Budget Theatrical - a budget under $2M

Moderate Low Budget - under $700K

Ultra Low Budget - under $300K

Micro-Budget - under $20K

And, for IATSE, the union for the majority of film artists and craftspeople, there is the tier system:

Tier 1 - under $6M

Tier 2 - $6M to $10M

Tier 3 - $10M to $14.2M (14.2? Why?)

There is also an unofficial Tier 0, which is just a colloquialism meaning really low budget (under $6M).

You will notice that none of these budget levels apply to Everything Everywhere. It is not a low-budget film, in terms of being able to pay union members less than their standard rate. However, Avengers: Endgame had a reported budget of $400M. Compared to that film, Everything Everywhere is definitely "low-budget" and definitely "indie."
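For readers who like to see the thresholds laid out, here is a minimal, purely illustrative sketch of how the categories quoted above nest. The function names are mine, and the dollar figures are simply the SAG and IATSE numbers listed earlier; this is a toy, not anyone's official calculator.

```python
def sag_category(budget):
    """Map a feature budget (USD) to the SAG agreement categories quoted above."""
    if budget < 20_000:
        return "Micro-Budget"
    if budget < 300_000:
        return "Ultra Low Budget"
    if budget < 700_000:
        return "Moderate Low Budget"
    if budget < 2_000_000:
        return "Low Budget Theatrical"
    return "Theatrical"  # standard rates apply above $2M

def iatse_tier(budget):
    """Map a budget to the IATSE tiers quoted above."""
    if budget < 6_000_000:
        return "Tier 1"
    if budget <= 10_000_000:
        return "Tier 2"
    if budget <= 14_200_000:
        return "Tier 3"
    return "above the tier system"

# Assuming the ~$25M figure discussed above:
print(sag_category(25_000_000))  # Theatrical
print(iatse_tier(25_000_000))    # above the tier system
```

Run against that assumed ~$25M figure, Everything Everywhere lands in plain "Theatrical" territory and above the IATSE tiers, which is the point: it's "indie" by provenance, not by budget category.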

But, then, what do you call it when compared to a film like Sean Baker's Tangerine (2015), which was shot on iPhones with a budget of $100K? Or the 2007 mumblecore film Hannah Takes the Stairs, directed by Joe Swanberg, with a $60K budget? Or the feature films of YouTubers like Joel Haver, whose 2024 YouTube feature, Anyone But Me, cost $4K? (It's one of the 12 feature films Haver is making and releasing in 2024.)

Or For Lovers Only (2010), by the Polish Brothers (Twin Falls Idaho), made for around $500, which went on to make $500K in streaming rentals (with no theatrical release).

This is not a competition for most indie, or something, but the range of budgets that so-called indie movies are made for suggests that the term may need some revisiting.

Like indie music, indie film is not only about how much it costs and who actually makes it; the term also connotes certain artistic conditions and modes. A24, with its catalog of relatively risk-taking and idiosyncratic auteurist movies, is the current aesthetic embodiment of "indie film," for many fans. Yet its "independence" from mainstream Hollywood is arguable—Everything Everywhere won the Oscars for Best Picture, Director, Screenplay and Editing, as well as three acting Oscars.

The reason any of this matters will be obscure to many movie fans; but for truly independent filmmakers, it's an existential matter.

The bigger film festivals, which theoretically exist to promote the very best in independent filmmaking, today will often refuse even to program narrative features that fall into any of SAG's lower-budget categories.

There have been countless brilliant feature films made for less than $1M, and even far less money than that, but the conventional routes to market—to an audience of any size—have shrunk to next to nothing for even IATSE's Tier 1 films. The big festivals—essentially the only way for smaller films to find distribution, or even a screening—are less interested than ever before in most of the truly independent films and filmmakers out there.

The American film industry in 2024 is in dire straits. The rate of production in Los Angeles is at record lows. Employment across the industry is cratering. The Big Five make fewer and fewer theatrical films all the time, and what they do release are films almost universally considered to be candy-colored junk.

Many point to this ongoing collapse as more evidence of a stagnant culture. That may be the case, although it's certainly not the first time the Hollywood studios were making crap because they didn't know what else to do. The difference today is that there's no path for the true independents to shake up the system, as they once could, and little to no chance for the passionate artists at the bottom to rise.

Functional Magic and Subsequent Word Nerdery

My buddy’s climate-art non-profit (climart nonprof? clart nip? canp?), Functional Magic, has lately been making use of the various talents of my wife and me. He and I…“collabbed”…on a blog post about political art.

I just noticed that the dumb word “collab,” short for “collaboration,” if spelled and pronounced “colab,” would also make colloquial sense—it would mean “we do our laboratory work together,” which has a bit of a different sense for American English ears, I think, although it’s etymologically identical and boils down to the same thing. But I like the idea that we’re all at work in our little laboratories and then sometimes we “lab” together.

Slasher Disaster

Instead of just continuing to watch the hilarious Girls5Eva, as planned, today, while riding my exercise bike, I chose for some reason to watch a TV series called Slasher that I suddenly remembered existed. It began in 2016 and continues to this day, for some reason. I watched only the very first episode, so perhaps it radically improves somewhere down the line.

I do like watching horror while exercising, as it gets the blood up, so to speak, more than other genres. And I have watched some true garbage.

I haven't thought too much about my preference for movies over television. I guess I do have such a preference, but I have also seen great television that's better than most movies, so it can't be too pronounced. Still, I acknowledge that this preference exists.

It's not fair to judge a show by its pilot. Slasher is a pretty good concept for a show. And Canada has always been good at horror. But, Jesus, what a piece of shit this episode was.

Slasher is an anthology show, season by season. Each season is a new story, a new killer, blah blah blah. There are a lot of possibilities in this premise.

But, by 2016, was a generic, unconvincing, expurgated, badly executed, unimaginative approach enough? Evidently—a sixth season is on the way.

Yeah, yeah, I get that television is supposed to be gallingly mediocre, dude. And, yeah, I'm judging this shit by the first episode. But that's not entirely unfair. Are you even going to bother to try?

Now I feel like I have to watch one more episode to be fair, but I know what it's going to be like. Maybe I'll skip to season two and see if a new setting improves it.

Return of Garage Kubrick

Cinema is commonly discussed as a "collaborative medium." What else could you call a kind of art-making that requires, in some cases, thousands of people to work in coordination over a number of years before the resulting art is finished and released? Even an independent film—even a "no-budget" "DIY" film—nearly always requires a division of labor among a handful of collaborators.

Film is not unique in this; but the romantic notion of the Artist generally identifies a single creator. The painter, working mostly alone; or a sculptor; or the ink-stained wretch of a writer. The notion persists even in more plainly group efforts such as dance or music-making. In film, we have the venerable "auteur theory" to inject the notion of the singular creative genius into the plural activity of filmmaking; a single unifying vision that corrals and subjugates the collective action to its command, thereby deserving of the title Author.

The power of that supposedly individual vision to so subjugate the work of a battalion of artists is generally directly related to the perceived quality of the final product, deserved or not. Directors get the lion's share of credit, although few of them, in the history of film, have ever had such complete control.

But this is the dream. The dream of the auteur is that it might be possible to truly create, single-handedly, the precise vision in their mind's eye, without the need to bend other artists to their will. Traditional filmmaking renders this dream ludicrous. But the promise of AI filmmaking is the realization of the dream; ultimately, the ability to create an entire movie by yourself.

In the last month I have rewatched both Miyazaki and Studio Ghibli's Spirited Away (2001) and Docter and Pixar's Inside Out (2015), both of them masterpieces of animation. In great animation it is possible to glimpse the dream of the solo-auteur—a purity of vision unconstrained by the base realism of live-action cinema—even though, of course, animated films are among the most expensive and human-resource intensive of all.

But it is possible now to imagine a time in the not-too-distant future when anyone with the vision and storytelling chops will be freed from the need for collaborators entirely. This creator is someone I've long imagined, perhaps even before I read the memorable phrase "Garage Kubrick" in some long-ago essay. The idea of the obsessive auteur, finally able to perfectly realize her vision, alone, perhaps in her garage.

The term appeared—as I've just now rediscovered via Google—in a piece from October 1999 in Wired magazine written by no less a futurist than William Gibson, who had toyed with the phrase in relation to a character he subsequently deleted from the book he was writing at the time. He was struggling a little to understand such a character, but then he attended, with his daughter, a private screening of early works of digital cinema at Hollywood's Chateau Marmont, and the vision for the character became a little clearer.

The Garage Kubrick is a stone auteur, an adolescent near-future Orson Welles, plugged into some unthinkable (but affordable) node of consumer tech in his parents' garage. The Garage Kubrick is single-handedly making a feature in there, some sort of apparently live-action epic that may or may not involve motion capture. That may or may not involve human actors, but which will seem to.

The Garage Kubrick is a control freak to an extent impossible any further back along the technological timeline. He is making, literally, a one-man movie; he is his film's author to the degree that I had always assumed any auteur would want to be.

And he will not, consequently, come out of the garage. His parents, worried at first, have gone into denial. He is simply in there, making his film. Doing it the way my friend assumed Stanley Kubrick would have done it if he'd had the tech wherewithal.

As always, Gibson was prescient: his vision is about to come true.

In A Violent Nature

Slasher movies are one of the most enduring horror subgenres, which is another way of saying they are among the most popular. Although they take many specific forms and, in the development of the genre, had many precursors (Hitchcock, monster movies, Italian giallo thrillers, the "proto-slashers" of the 60s and 70s), the basic formula was codified in the late seventies and early eighties. John Carpenter's Halloween is frequently cited as the film that pulled together the disparate threads and wove them into the tapestry of tropes we have known since.

The killer is a man who has been psychologically twisted beyond the human by childhood trauma. The nature of the formative trauma is typically dime store Freud: the future killer was subjected to some kind of sexual shock or abuse perpetrated by or on a female family member; or was otherwise abused or bullied as an "outsider" (often due to some physical deformity or mental illness); or in some other way has childishly misunderstood and conflated sexuality and violence, and connected those acts with their own sense of abandonment (by a female family member) or experience of abuse (at the hands of a caregiver or peer). The result is a severe psychosis that leads him, as an adult, to violently murder any person who reminds him of his trauma or anyone foolish enough to stand in the way of his revenge transference, in which, by brutally murdering a random person, he enacts a fantasy of revenge against his own abusers or neglecters.

In spite of this generally clear throughline of pop psychological motivation, the killer is often described as a force of Pure Evil, that is, a post-human monster incapable of remediation or remorse, who can only be neutralized, but never cured.

The victims of the killer are mostly nubile teenagers or twenty-somethings, who are on the cusp of adulthood and who mostly fail an unstated purity test by having sex or consuming drugs and alcohol. Because the killer's mental break is founded on a loss of innocence—or childhood purity—he attempts, fruitlessly, to return to his own innocence by violently punishing those who would cross that line by their own choice. Ironically, adult characters who scold the teenagers for their loose morals are also routinely killed, suggesting that the killer is psychologically trapped within a mother/whore dichotomy he cannot escape.

This particular fixation means slasher movies are among the most conservative of cautionary tales. The central fear that animates them is the fear of lost innocence or, stated perhaps more productively, the fear of sex. The "rules" for surviving a slasher killer are well known—indeed they were clearly explained in the 1996 meta-slasher, Scream: sex = death. (Although Wes Craven's other famed slasher franchise, A Nightmare On Elm Street, interestingly, subverts many of the accepted tropes of the genre.)

One of the chief criticisms of the precursors, proto-slashers and slasher movies has been that the filmmakers deliberately create viewer identification with the killer. One famous precursor, Michael Powell's 1960 Peeping Tom, does this by having the mostly unseen killer film his victims' deaths with a movie camera, so that the visual point of view is quite literally both the killer's and our own as viewers. Another precursor, Alfred Hitchcock's 1960 Psycho, is more subtle about it, but pulls off a bravura realignment of audience sympathy when, after our protagonist is murdered in the shower, we are made to root for Norman Bates, as he desperately attempts to cover up the killing—perpetrated, supposedly, by his mother.

Slasher movies, as they were born as a genre, have used similar techniques. In Halloween (1978), future killer Michael Myers is just a boy, abandoned by his older sister on the night in question, when she prefers to gallop upstairs to have sex with her boyfriend, rather than take young Michael out for trick-or-treating. He dons his clown mask and we see directly through the eyeholes as he removes a large knife from a kitchen drawer and heads upstairs where he stabs his topless sister to death. Moments later, he stands traumatized on the front lawn, mask removed, still clutching the bloody knife, as his absent parents return from their evening out.

Most slasher movies have continued this tradition, also exemplified by the well-known voyeuristic perspective shots in the Friday the 13th series. Although the first film famously subverts our expectations about the identity of the killer, it introduces a moving camera frame, sneaking slowly through the dark woods, peering in cabin windows (often at naked girls) and sneaking up on unsuspecting victims from behind. Again, we are the killer—we are the bereft, the abused, the traumatized, and we take our revenge.

We are also trapped in the crossed-wires psychopathy, and we know it—we, too, seek out the beautiful, young, naked bodies (nearly always in a heavy-breathing male gaze), then we trade one taboo for another by brutally destroying them, often seen, at least partially, from the killer's direct perspective. At this point in film history, this is all cliche upon cliche upon cliche. Even Scream's winking acknowledgment of the profusion of tropes is an old cliche now.

All of this is what makes the newly released Canadian slasher, In A Violent Nature, written and directed by Chris Nash, so interesting for fans of the genre. (And an interesting example of a heavily split critic/audience score on Rotten Tomatoes and other online measures—viewers who don't watch many movies don't get it.)

The story is generic slasher boilerplate—it's basically a Friday the 13th movie and the killer is basically Jason. You've got your stock horny youngsters, including the stoner, the jealous boyfriend, the lesbians and the Final Girl. They head out to the cabin in the woods, near a lake. On the way, they steal a gold locket they find in the woods—which turns out to be the maternal talisman keeping the killer buried. They tell a scary story (which naturally is the backstory) around a campfire. There's a crusading forest ranger and a jerkoff hick poacher. The kids smoke and drink and—maybe they have sex? It's unclear.

Because nearly all of that is in the background. It happens largely off-screen or in the periphery of the frame, until it doesn't. Instead, the film centers the killer—literally. He doesn't speak. He shows no emotion. He just claws himself out of the earth shortly after the locket is taken and goes on the hunt. We watch him, from behind, often in full frame, as he trudges through the beautiful woods. Many of the shots recall Gus Van Sant's death trilogy (Gerry, Elephant and Last Days from 2002-2005) in that we simply follow this character around for most of the film. This is not "slow cinema," though, with single shots that last forever. But the effect is the opposite of the typical slasher voyeurism. It's the opposite of the forced psychological identification, the shared POV. Instead, the film is something of a pop treatise on the nature of evil—and the evil of nature.

We are distanced enough from both the killer and the victims for most of the film that, in our detachment, we can see the situation more clearly. There is no question of psychology, or human motive. He wants his locket back, but he doesn't even know it—it's just pure animal instinct. When the talisman was stolen, a circuit closed in the animal brain and—almost like a machine—it did what was in its nature to do. Which turns out to be to perpetrate some of the most gruesome, disgusting killings you will ever see in a movie. This extremity serves to drive the point home: you cannot escape nature.

There is a purity there that is beyond revenge, beyond meaning. And it's terrifying.

A Certain Tendency of American Festival Short Films

I volunteered over the weekend for the 20th annual Boulder International Film Festival. As a result, I was only able to take in three programs as a viewer, including one at which I was also providing video documentation of the surrounding event. One of those was a documentary called The Arc of Oblivion (Ian Cheney, 2023); the other two were programs of short films.

On Friday evening, I watched the festival's Teen Film Competition with my son; it was this event for which I also provided videography. The program consisted of short films across several categories: Comedy, Documentary, Drama, Thriller, Experimental and Animation. If the categories seem like they could overlap somewhat, it's understandable, since the purpose of the event was to showcase a quantity of films that would offer a broader opportunity to a larger group of young people than might be included in a conventional shorts program.

And indeed, Friday's program included three films in each category, for a total of 18 short films. The Shorts 2 program, which I attended with my wife on Sunday at the Boulder Theater, showed only five films. Those five were longer than most of the teen films, and the screening included two Q&A segments, so both programs came in at approximately 90 minutes. It is notable that four of the five films in Shorts 2 were shortlisted for the Academy Awards this year; none of them made the cut.

I mention the quantity of films at the Teen Showcase with more than a touch of irony. We know, of course, what quantity is often contrasted with—and it was quality I was thinking about when I woke up this morning. Specifically, I remembered François Truffaut's sarcastic mockery of "the Tradition of Quality" in French films of the 1940s and early fifties in his seminal essay for Cahiers du Cinéma, "A Certain Tendency of the French Cinema." In this essay, Truffaut makes his initial argument for what came to be called "auteur theory," and the piece is generally considered the intellectual gauntlet-throwing of the French New Wave.

It was helpful to think of Truffaut as I considered just what irritated me about most of the short films in the Shorts 2 program—all made by "professional" filmmakers—as opposed to the work created by the kids honored in the Teen Showcase, "amateurs" all.

Why does anyone make short films? Two answers seem to me the most important. One, people make a short film to act as a "calling card" for their talents, in the hope that they will be allowed to make a feature film based on the strength of their short. And two, people make short films because they lack the resources to make a feature film (they suppose), and feel compelled to make a film. Into the second category fall most student films—a person is interested in filmmaking, enough to attend a class or even a film school, and they are required to make short films as part of the curriculum.

Shorts are in this sense invaluable learning tools, and a kind of rite of passage for filmmakers. The prevailing wisdom is that, other than film festivals, there is essentially no market for shorts. Yes, a few of them might enter collections to play on public television or the internet, but that's thought of less as a market than as the best possible outcome: a small audience, and little to no compensation. (Little attention is given in this context to the explosion of short work on sites like YouTube.)

I would hazard a guess that most film directors and producers actually never make a short film before making a feature; but a significant number do go that route. And what kind of short film do they make?

Judging by the Shorts 2 program—and every short film program I have ever seen, and most of the shorts I have seen individually—they make a vastly overpriced, overproduced, overlong pastiche of feature film aesthetics calculated to be familiar and stir shallow political or emotional identification among an often gray-haired crowd that seeks comfort, rather than anything resembling living art.

So you get a wildly expensive Wes Anderson pastiche, stretched five times longer than it should have been for no reason other than that money was available and the Oscars like longer short films with "high production value," even though the length exhausts the concept. And you get self-important real-world political drama in miniature, but not so miniature that the filmmakers don't knock you over the head with their context and timeliness five or six times too many—the Oscars like social relevance. Or, perhaps, another topical "issue" story based on real life, that nevertheless abandons much of what actually happened in real life to pump excess drama and sentiment into a situation to the point it loses most of its authenticity. But, look! the producers must have worked hard because they filmed in a difficult location, say LAX, which automatically makes it authentic, right?

Or, worst of all, they make a nearly unendurably long, cliche-ridden "inside baseball" show business "satire" about the stupidity and crassness of studio notes ruining a fledgling auteur's Important Film. The film as shown, of course, is terrible to begin with, utter garbage, so it should have received devastating notes—resulting in an insufferably smug, largely unfunny reel assembled to show off the producer or director's access to the Warner Bros. backlot.

This is the Tradition of Quality writ small and cynical, for people who have little interest in the film they are actually making, but hope to win an award so they get to make a terrible feature film.

(You may have noticed that I gave four somewhat vague examples. That's because the fifth film was a French-Canadian comedy that was appropriately scaled as a short film, self-contained and actually funny, made by a real filmmaker who understood the aesthetic possibilities of the form and wasn't just trying to score a more prestigious gig. It is very worth noting how the robust public film funding program in Canada continues to give us great work after many decades, work that is largely unencumbered by commercial considerations.)

I haven't named names here, and kept the descriptions vague, because I am acutely aware of how difficult it is to make any film, and I generally have a great deal of sympathy for filmmakers and all the artists in general who do this demanding work. But it's very disappointing to see such compromised work celebrated, when it would have been better for the money to have been spent in almost any other way. Perhaps some of the filmmakers involved will make better films in the future.

As for the kids at the teen showcase, I know they will. Their films pulsed with the joy and excitement of artistic discovery, having not yet been castrated by the Tradition of Quality's misbegotten commercial calculations. In fact, the work seen there gives me a great deal of hope for (and pride in) this generation of young people, who, having cut their teeth on YouTube, have a far more sophisticated sense of filmmaking than, perhaps, any generation since the film school brats of the 1970s.

This is clear in their fearless experimentalism, their crackerjack comic editing, their sense of place and complex emotion, their sense of fun and satire. There was a silent film parody, an immigrant/workplace comedy, layered documentaries about failed ambition and resilience after devastating injury. A notable drama about academic burnout and parental pressure, followed by ambitious thrillers that—if they didn't entirely work—reflected serious cinematic problem-solving and complex storytelling. Experimental work full of imagination, visual invention, and legible emotion, followed by delightful straight-to-the-heart animation.

One of my favorites was a conceptually brilliant mock documentary about the statistically most "average" student at the school. This was not the only film to make use of technical tropes straight from online video to hilarious effect; the YouTube generation is almost singlehandedly tearing down and rebuilding moving image art while the grownups are chasing boring awards and tired cliches. Another film I loved was a moving personal essay by a Diné (Navajo) boy about his connection to his ancestral land, beautifully filmed on the Navajo Nation, effortlessly recontextualizing Tsé Biiʼ Ndzisgaii ("Monument Valley"), the home of the most famous of John Ford's Westerns.

The news out of Hollywood is dire, as it has been, like clockwork, every five to ten years for the past century. Maybe it's even worse now—competition from online video, streaming, social media, lengthy strikes, artificial intelligence—but I'm not worried. The kids are all right and, if they just keep doing their thing, the future of the medium looks bright.

Makin' Art

Today I discovered a little lightning bolt icon in the Squarespace interface. Clicking it led me to a new AI writing helper—as I suspected/feared it would. But not to worry—as you can see from the previous post, which I allowed the AI to write, mostly, this tool is really going to come in handy.

As has the current Photoshop Beta, with “generative fill.” That’s a technical term for asking a robot to add stuff to your Photoshop for you. I’ve been making art. Here’s some of it.

This is tremendously fun and time-wasting. And, obviously, brilliant shit. It kind of reminds me of some work I’ve seen where people find framed art at Goodwill and add new elements to it. Except it’s my photographs with some weird magical junk layered on. I’m going to be so rich.

In which I get help writing this post and need to make only one small edit

As I compose this post, I am utilizing AI technology to enhance my writing process. With the assistance of AI, I can seamlessly generate ideas, structure my thoughts, and enhance the overall quality of my writing. The AI algorithm analyzes vast amounts of data and provides me with suggestions and alternative word choices to ensure that my message is clear and concise. It also helps me identify grammatical errors and enhances the readability of my content. By leveraging AI in my writing, I am able to streamline my workflow, saving valuable time and delivering a more polished piece of writing. Fuckin’ A!

Horrible Bosses

There are few things worse than a bad boss. Most of us have had one or two experiences with such a person. But, not for nothing—my own experiences pale in comparison to the everyday absurdities visited upon professional women by the gormless fragility of so many male middle managers. A woman I know, who has worked in tech for decades, says that even under generally good working conditions she has had to express requests and solutions through a sympathetic male coworker just so they would be heard. This is among peers, folks.

And when it’s bad, it’s stupid. Stupid, for example, in the way that a proactive task she took on six months ago—and which got her chastised and called “arrogant” by her boss at the time, so she stopped doing it and ate crow—is now the unfinished requirement holding up the project, and she better jump on it. He has no memory of what he said back then, but now another man has told him the task must be done.

She endures frustrations like this daily. My solution, if I were in her shoes, would be direct action—reminding my boss that I tried to do the task six months ago and was called arrogant. A solution that would be risky for me, and impossible for a woman. Supposing, for a moment, that the boss was culturally misogynist, to pick an explanation at random, a man could push back if justified; a woman who pushed back might get a warning about her future employment.

This kind of thing is so widespread, so “just the way it is,” that most of us fail to ever even notice when it happens. Certainly we don’t do anything. Another reason it’s great to have a penis.

Old Man Yells At Robot

I found myself yelling at a robot today. For the nth time this month I had to call the internet company. As often happens, I knew I had an issue the robot couldn't handle and I knew I needed to speak to a fellow human.

I keep saying robot; I'm talking about a telephone customer service bot. I can say things to it and it's meant to understand and respond appropriately.  When I get on the phone with the bot, she says, "Hello, Nathan," in a tone that reminds me of the withering sarcasm of my high school friend, who has been expert at mocking and then dismissing me for about thirty years.

This was the second time I got on the phone with the bot and she spoke to me like that. "Hello, Nathan." It sounds very much like she is trying to sharply draw my attention to the impressive fact that she already knows my name.

I immediately ask to be connected to a person. I say something like, "Person."

She says, a little sadly, "Oh, I didn't quite catch that."

"Human being."

This time she hears me, and begins steering me deeper into the menu. What did I want to talk to a human about? Connectivity issues? Bill payment? Adding services?

What I wanted to talk to a human about was how the chat bot that contacted me earlier about getting the internet cable—which for six weeks has sprawled across my backyard, coiling once in the middle of a walkway, before jacking into the house—buried underground had some of its facts wrong. The chat bot believed I have a dog and a sprinkler system, which I do not. Two things which might complicate a cable burying job.

I tried to quickly explain. The bot got confused. I asked for a human. She told me I needed to tell her the topic. Connectivity issues? Bill payment? Adding services?

Next, I tried some things that had worked in getting through phone trees in the past. First, I hit the zero key a bazillion times, like I wanted "the operator." The bot ignored this, just kept trying to talk to me. Next, I told her a whole story, but slurred my words like I was a 73-year-old hobo. At this point, past bots have given up and sent me to a person.

But she misheard me, deciding I had connectivity issues that we needed to troubleshoot.

"I understand that you're having difficulty with your connection. In a moment, I will reboot your router—"

"No! Nononononono! No, fuck, you stupid bot. Stop!"

She kept right on talking me through the procedure, but sometime after I had screamed STOP! at full volume ten or twelve times, the line clicked and changed and I was dropped into the real person queue.

I spoke to a person and resolved the issue. I didn't say anything, but I wondered whether they listen to me freaking out on the line, sometimes, just for fun. I would.

Strike! just in time to welcome our robot overlords

I support the WGA members who are now on strike. They have chosen a dangerous moment to do so, with robots waiting in the wings to take their jobs, studio executives must be thinking to themselves. But, then, any worthy executive knows the bots aren't quite there yet.

The writers aren't striking over AI, but they're also looking for some rules on AI in the negotiations.

The strike is about writers responding to a decade-worth of producers' schemes to avoid paying them. The usual.

The dispute is unfolding as a new technology has appeared on the scene with the potential to take workers' jobs. What's unusual is that we're talking about writers, here. It's a scenario previously known only to science fiction, that a kind of "artificial intelligence" can perform most writing tasks relatively easily. While we haven't had an entire movie written by an AI yet, we're getting close.

Here's a movie written and directed by AI.

I provide the links just for reference. I am in no way endorsing the idea that you should watch this five-minute-or-something movie "written and directed by" an AI. It's easily as terrible as that implies; so, no worse than most student films. But there's little to recommend it, although the bizarre camera moves that the AI supposedly asked for are rather amusing. Could this be an elaborate joke? Well-executed, if so, but not nearly funny enough.

The short film is an argument between three family members as they watch a news report. The top story is that rogue AI has taken over the world and humanity is at an end.

From the article: "According to [a producer], while AI still has ways to go in terms of creativity, it can indeed supplement human efforts by reducing months’ worth of writing into a couple of minutes." Emphasis mine.

The producers of the short film collaborated with the AI—it was a back and forth that generated many ideas; the team picked the ones they liked the best. That's not the same thing as "a robot wrote and directed the whole thing" but it's as near as makes no difference as it relates to the future job of the writer.

This is a real of-the-moment problem for the WGA. On the one hand, the Guild is due an updated contract and what they're asking for is reasonable. On the other hand, if the strike goes on for any length of time—as historically these strikes have done—there is nothing stopping producers from collaborating with AI and moving forward thinking that maybe, this time, they really, truly don't need the goddamn writers anymore. Halle-fucking-lujah. A ChatGPT-like bot trained on a massive written corpus including screenplays could absolutely shit out a workable screenplay with a little help from a producer friend, potentially turning months into minutes. Now.

There is a moment when I am trying to think about AI where my thought processes kind of brown out. My brain gets fuzzed with activity and I can no longer find my way through. I know nothing in that moment.

The subject of the nothing that I know is What happens next? Once AI can be used to create entire photorealistic movies—from where we are today, for those looking closely at developments, just a small hop away, a matter of a handful of years—then what? It's one thing to think Then what? for the movie business, but my mind keeps moving to a larger scale. This generation of AI—the large language models, the chatbots—is not sentient. But that's seeming increasingly beside the point. ChatGPT doesn't need to achieve self-consciousness to end the writing profession as we know it.

That's where my mind goes opaque. Supposing it's able to do that, really, what won't it do? There is a growing chorus of worry, like a black cloud on the horizon.

Metaverses from the Abstract

I’ve been slowly reading this book over the last while. Finished it recently when I acknowledged that much of it is skimmable. What it lacks in grandeur it repays by being dense and information rich in parts while still painting a complex big picture of the coming metaverse revolution, or what have you.

For me, it’s a great reminder of all the practical reasons why a true metaverse is further away than some tech lords might wish. Not least among these reasons is that a small handful of huge corporations have laid claim to the economic lanes—the payment rails, as the crypto bros say. Perhaps it’s not a coincidence that Neal Stephenson, who coined the term “metaverse” in his 1992 cyberpunk classic, Snow Crash, was already lecturing about data privacy in 1999’s Cryptonomicon.

Still, I read the book because I am one of the people who believe some version of the metaverse as envisioned by science fiction will eventually emerge. I want to think about it a bit in preparation to tackle another writing project. We already have a number of proto-metaverses in the many online games people play all over the world. The power and processing speeds needed for a worldwide persistent 3D virtual environment and economy will require even greater incentives for clean energy and quantum computing. At the same time, we’re likely past the point of being able to turn back some of the environmental consequences of burning fossil fuels. If the surface of our planet becomes increasingly uninhabitable in parts of the world, yet we have found sources of clean power, we may rely on metaverse tech to help us live as social humans in a sometimes inhospitable environment.

Although they are different categories, I drop my interest in/research on AI into the same bucket. Why?

I see fans of the current LLM AI bots and related tech on Reddit talking about the singularity; I don’t believe that based on any of the chat tech I’ve seen, in spite of its impressive imitative abilities. But when I think about a source of boundless energy [fusion] + exponential compute [quantum computing] = x, and then about introducing x to LLM-type massive collections, full of words and images and sounds…

I see some potential there.

The slow train to Futureland

Time drags, but there are so many little tasks it also crystallizes and fragments and may even shatter. We’re moving to Boulder in a couple months. A shitload of stuff is happening very quickly and also not quickly enough. I want badly to already be sitting in a serene moment, having “settled in” in our new home, but that scene is a dreamy, soft-focus pink far away over the horizon in Futureland.

I want to feel like I’m getting some work done, so I’ve generally been focusing on discrete tasks, like decluttering for the realtor and some preliminary packing. Getting quotes from moving companies. Estimating box requirements. We have two months before we will really be moving so now there’s a lot of anxious winnowing. Then, I also try to do my other work, like reading and revising my novel, doing my other self-assigned reading, other creative pursuits. I have a small editing project on the side, purely for fun, that I’m keeping cued up.

Then a lot of the rest of the time is driving around, doing dishes and other chores, playing with the kids and schlepping the kids. I can’t seriously pack quite yet because that would fill the house up with boxes right when the realtor is trying to show the house for a “coming soon” period. Sort of like a soft opening. We’re going to Boulder for the next week; we could even have an offer by the time we get back. These all feel like momentous things happening in slow motion, and I’m in shock, just watching as a big red fireball rolls up the street, like it’s a nineties disaster movie.

But it’s no disaster, it’s just a simple twist of fate. A lifestyle change, a scene change. When I finally do sit in that serene moment, there will be mountains out the window.

Brainwashed

Nursing a nasty cold this week, I took time yesterday to watch Nina Menkes’s lecture documentary, Brainwashed: Sex-Camera-Power. I have followed the development of this lecture for some years. It came out of work she’d done in her CalArts classroom, became a presentation she toured with for some time, and is now a documentary (available for download).

At its core, the thoughtful lecture discusses the “male gaze,” a term coined by academic Laura Mulvey in her 1973 essay, “Visual Pleasure and Narrative Cinema,” which names the overwhelming point of view of film culture: the heterosexual male point of view. Men are the action-taking subjects of film narratives, while women are the static (fetish) objects of men’s action. Menkes expands on this idea by claiming and demonstrating that even the way films are shot reinforces the ideologies of power, typically, gendered power. Men are dominant, women are submissive, and we see this recapitulated ad nauseam in cinema’s dominant visual strategies.

A simple example, from the days of classic Hollywood, would be the way that male characters are lit in “3D light,” as Menkes says, meaning, they are lit realistically with light and shadow as if they were in a real place doing real things. Female characters, on the other hand, have traditionally been lit in “2D,” a look that flattens the image, reducing shadows and focusing on glamor and beauty rather than even attempting to place the character in the same “real” context as the male.

Among the effects of more than a hundred years of this approach, says Menkes, is that we culturally too often see women as body parts, without agency, and men as integrated actors upon the world. She connects this to the fact that the film industry is one of the absolute worst offenders in every measure of gender equity of any sector (apparently measurably worse than coal mining), to the MeToo movement, and to rape culture. In effect, she’s saying that boys and girls (and men and women) receive misogynistic messages about women from our films and TV that contribute to attitudes and behaviors that are illegal and dangerous.

On its own, no reasonably observant and educated person, apart from those with ideologies allergic to obvious facts, could really disagree with the basic premise. Our art, high and low, reflects and molds the attitudes and prejudices of the culture. Although we have by many, many measures made tremendous social progress in the last century, we still carry a deep-seated misogyny that harms women every day. Men and women are both susceptible—and we often need to be shown how this works because it’s so ingrained we don’t notice it.

So show us Menkes does. In clip after clip, she shows how a misogynist ideology informs so much of what we see and, to my way of thinking, it’s undeniable. I can see myself showing this film in a film class because, particularly for undergraduates, it’s an incredibly useful explication of cinematic point of view both in terms of camera technique and ideology. This is a tough subject to teach because, at first blush, film can feel deceptively naturalistic; that is, something that happens on film, we tend to believe, has “really” happened, which is great when trying to engage an audience but makes it harder to see how the artifice prejudices us.

I have two critiques, or questions. I say the film is useful for undergraduates. For students who have not really grappled with these truths, it may be hugely eye-opening and valuable. I also see an audience among the type of film fan that attends festivals but is unconnected to the industry, the film-literate audience. Also, perhaps filmmakers.

For advanced students, who are already aware of most of this (though they have likely not experienced such a fine curation of examples), there isn’t much more to get out of it. Menkes provides some counter-examples of ways to challenge the visual power system, as do plenty of female filmmakers (and filmmakers of color). She also insists that she’s not here to shame anyone for filming sexuality. She is simply challenging filmmakers to break out of these dominant patterns.

The first question I have is: What should heterosexual male filmmakers do? What is being asked of them?

Perhaps, first of all, to be aware of the messages they are sending? To not simply make films cynically and without examination? To be better at their jobs? Fair enough.

But this will not necessarily yield the differences Menkes underlines. Martin Scorsese, whose Raging Bull is analyzed in the film, surely understands and could articulate (at length) the argument she’s making. He knows what he’s doing when he crafts a scene. Jake LaMotta’s point of view is that of a violent misogynistic thug. I suspect that Scorsese makes for a fine example precisely because he’s so good, because his visual narrative is so legible. We can see clearly—and hear, in one of Menkes’s more fascinating observations—how the narrative treats the Cathy Moriarty character.

It was interesting, but it didn’t suggest to me that Scorsese should have done something else. The crafting of the scene seemed appropriate to his subject—who is a pig. There are, of course, plenty of lesser examples in which there is much less thought put into what is filmed. Your typical slasher movie, for example.

A heterosexual male point of view is not inherently immoral, nor is the male gaze. I think the problem is when that point of view is presumed to be the “standard.” And the solution is that way more women and others need to be given the chance to direct. This seems like a different matter altogether, really, like another order of importance. I don’t think we even know what the “female gaze” might be, because we’ve had so few examples.

Perhaps there’s no ask, or maybe the ask is simply to become aware of what you’re doing as a filmmaker. It’s hard to avoid the sense, though, that for Menkes and some of the other participants, the male perspective need never be seen again, so sick of it are they. This is understandable, perhaps, but surely the long-term goal, if there is one, is parity. This is where everyone—especially men in power—can take action by hiring women and making room for their stories and perspectives, actively.

The other issue for me is a conundrum. In a patriarchal, misogynist culture, isn’t our art going to naturally reflect that? Must we attempt to make art that somehow creates better people? Is that possible? Should we expect a popular form like the movies to be any better than we are?

Will AI kill the camera?

Preposterous!

I’m reading about AI today and clicking the links in the article. Trying out generating AI video. The use case is painfully boring, generic corporate video. (They suggest adding an AI avatar “talking head” to a PowerPoint presentation.)

But it’s not difficult to extrapolate from this. I thought of my old Teletrivia videos. Now I could create a fake host, with their own voice, and animate the video using a script and some tweaks. I could say the script out loud and then cause it to be. Given the capabilities we’re already seeing from the new generative AI tools just a few months after release, what’s realistic when we look ahead five years, say?

If I trained an AI using a high-resolution corpus of every movie in existence, matched to its script, how long before I could create an entire movie just by writing a screenplay?

I can already conjure beautiful, photorealistic images of people, places and things that don’t exist with simple text description. What happens when we run applications that can do that in real time on quantum computers powered by nuclear fusion?

Mushroom Zombies in the Post-Apocalypse

I finally played through and finished the PS4 game, The Last of Us. I played it on Easy because I wanted to get through the story quickly—so I could check out the upcoming HBO series. I have little pride about such things. I don’t have a burning need to be good at video games; I have a need to experience gaming in a way that’s not too frustrating for someone who historically has little time for it.

I am fascinated by video games—it’s a part of my larger media interest; beyond strictly film and TV, I mean. That’s where I started—TV, then movies—developing a passion for the moving image arts. Video games, of course, are among them.

My history with gaming began later than it did for some of my generational cohort. We did not have an Atari 2600, unlike every other family we knew. We did get an Atari 7800 several years later, a system doomed to fail by arriving around the same time as the Nintendo Entertainment System. Which we also did not have.

My original Nintendo memories were made over the summer of 1988, much of which I spent at my girlfriend’s house playing Mario and Zelda. But after the disappointing 7800, I didn’t have my own console until I bought myself a Playstation ten years later. I did occasionally play computer games growing up—text-based adventures and early side-scrollers. And my younger brother was given a Nintendo 64, which I sometimes played.

But I came to gaming later than many and I’ve never been particularly good at it. I played a lot when I first moved to California—Playstation and Dreamcast. But in the near twenty-five years since, I’ve consistently dabbled but only occasionally worked all the way through a game. Most of those were Calls of Duty or Grand Theft Autos. Although the latter is my favorite game series, I’m not sure I’ve completed more than one or two of them.

That’s not why I play. I’ve always had the most fun playing for the sheer ridiculous freedom and fantasy of it, not for stats or likes. I don’t do much online gaming, in part because I don’t have the time to get good enough to play against others. I am trying now to play more often and create more opportunities to play with my kids.

But I finally got through The Last of Us, a game that I enjoyed a lot, even on Easy mode. It’s such a lauded title and I’ve heard from friends who played it, but I found it a little underwhelming. Not because of Easy mode. It’s just that it’s really good—for a game.

The Last of Us is known for its story. The game mechanics are superb, the art is gorgeous and the characters are strong. The story is good and the ending is powerful—again, for a game.

One of my big curiosities is when—or whether—we’ll get games that can rival other forms of narrative. Like movies, books, comics. Apart from its being exceptional for its medium, the story of The Last of Us is something we’ve seen before about a hundred thousand times. This in no way makes it unworthy as a game—and the gameplay is where the focus should be, perhaps. But how will the series avoid that sense of being a show based on a video game that was already based on every post-apocalyptic zombie show or movie you’ve ever seen? It’s a kind of generic recursion, almost.

I know, it’s all about money and recognizable IP. I’m going to watch the show. I’ll report back.

Another Weird Biopic

After musing yesterday about the past year’s crop of biopics that used the hoary genre somewhat subversively or at least differently, I watched another one.

Weird: The Al Yankovic Story is a fake biopic of a real person. It parodies the genre thoroughly in form, except that most of the factual information is made up. It’s a mostly hilarious movie—soon to be a stoner classic—that incorporates every biopic cliche you can name. Daniel Radcliffe is a delightful Al, in a performance at least as credible as, if not more credible than, Rami Malek’s in the awful Bohemian Rhapsody, which is strange because Radcliffe’s Al is fake. Evan Rachel Wood is also great as Madonna, and super sexy doing it. Naturally, there are billions of funny cameos and many more people playing celebrities. And tons of great Weird Al music.

The main weakness of the film is the directing. It’s competent, in that the film is still a lot of fun, but it underserves the script by being boring and sometimes too slack. And it really underserves Radcliffe, who seems to be having a great time, by failing to match his energy.

But the central conceit of making a biopic of a real person while lying about nearly all the personal details is a masterstroke.