AI, Journalism, and the Future of News (w/ Julian Sanchez)

March 02, 2024 00:51:08
ReImagining Liberty



Show Notes

Both the short and long term impact of AI technologies is unknown, but it’s almost certain to be significant. It will destroy some industries, accelerate others, and revolutionize still more. And, it seems, no one has a lukewarm opinion about AI. You’re either excited about its prospects, or convinced it’s nothing more than intellectual property theft, or the inevitable end of the market for human creativity.

Worries are particularly acute about what this all means for journalism, and those worries are worth taking seriously, given the importance of quality journalism to a free society and a functioning democracy.

My guest today, writer Julian Sanchez, has worked as a journalist and policy analyst, and thought quite a lot about these issues. He joins me for a conversation about AI, the state of content creation, and the future of journalism as a profession.

Produced by Landry Ayres. Podcast art by Sergio R. M. Duarte. Music by Kevin MacLeod.


Episode Transcript

[00:00:03] Speaker A: Welcome to ReImagining Liberty, a show about the emancipatory and cosmopolitan case for radical social, political, and economic freedom. I'm Aaron Ross Powell. Both the short and long term impact of AI technologies is unknown, but it's almost certain to be significant. It will destroy some industries, accelerate others, and revolutionize still more. And it seems no one has a lukewarm opinion about AI. You're either excited about its prospects or convinced it's nothing more than intellectual property theft or the inevitable end of the market for human creativity. Worries are particularly acute about what this means for journalism, and those worries are worth taking seriously, given the importance of quality journalism to a free society and a functioning democracy. My guest today, writer Julian Sanchez, has worked as a journalist and policy analyst and thought quite a lot about these issues. He joins me for a conversation about AI, the state of content creation, and the future of journalism as a profession. If you enjoy ReImagining Liberty, I encourage you to subscribe to my free newsletter, where I write frequently about the kinds of issues we discuss on the show. And if you want to support my work, you can become a member and get early access to all new episodes. Learn more by heading to reimaginingliberty.com. With that, let's turn to my conversation with Julian. Recently, I saw a poll someone had made on the social media site Threads asking the question: do you think that artificial intelligence and related technologies will be a net benefit or a net harm to society, just in general? Maybe we start there. [00:01:57] Speaker B: What do you think? I think potentially a net benefit. But I think, as with a lot of these technologies, as arguably with print, there is probably a long lead time where it's going to be on net more disruptive than helpful.
I mean, I think you can make the case that in its first century or two, print was a pretty big problem on net. I think you can tie the early modern witch panics pretty tightly to print, to books like the Malleus Maleficarum and the spread of sort of the viral meme of the idea of witchcraft. Obviously, and depending on your theology, maybe this is good or bad, but we saw centuries of religious warfare that are intimately connected to the new capability to have widely dispersed scripture and more diverse interpretations of scripture. And I think analogously, we will eventually, I think, probably find ways in which AI is going to be massively beneficial, but that's going to be disruptive, I think, in a lot of ways, first, because I think a lot of people are going to find malign applications for it that are easier to rapidly deploy than the benign ones. And also just because I think it's going to take a long time to figure out what the role of human beings is when a lot of cognitive work can be done by automated systems, work that currently is sort of aspirational for humans. I mean, it used to be the dream of automation for a long time was, well, once the machines can do the drudge work and the manual labor, humans will at last be free to write poetry and novels and make paintings and create sculptures. And there's a plausible sort of dystopian future where now that the computers can take care of making the paintings and writing the novels, humans are free to clean the sewers. [00:04:28] Speaker A: That seems like it has an element of snobbishness in its concern. And this is something that I notice in the critiques of this technology, which is that nobody complains about automation in other domains. Most of us are not buying artisanal bread, right? The bread that we buy was manufactured, was made in a factory at scale.
It's using techniques that bakers developed over centuries, millennia, but it's machines just cranking out another loaf of bread that looks exactly the same, not inventing new ways. There's no heart and soul in the loaves at the grocery store, but no one really complains about that. There isn't this widespread movement on social media to yell at anyone who mentions that they went grocery shopping. [00:05:24] Speaker B: Right? No, I agree. Look, I'm voicing this critique just to sort of have it out there more than to endorse it. But I think this goes back to the idea of what is the ideal of what humans are going to be doing in the future. And so for a long time, the answer to folks who complained that, well, the machine looms are going to throw a lot of weavers out of work, and farm machinery is going to throw a lot of agricultural workers out of work was, well, in the short term, that may be painful, but in the long run, what that means is manual labor will be done by machines, and the jobs that are left to humans are going to be more attractive, are going to be closer to the kind of thing that you would like to spend your time doing. And I think the reason you see this kind of pushback from artists is that it's harder to make that kind of argument. If your aspiration was, well, I want to be a composer and I want to be an artist, then it's not that, well, then there's something else better you can do, even if the machine takes your old job. This is the kind of thing human beings used to imagine they would be free to do with their time when scarcity was reduced. So I'm sympathetic to that, and in particular, when you think about the amplification of a kind of winner-take-all dynamic. So in the near term, it might be the case that the very highest level of artistic achievement is not something AI can match, right? ChatGPT is not going to give you output on par with N.K. Jemisin or David Foster Wallace or James Joyce. It's going to give you kind of passable copy.
And the same thing with art, it's probably not going to do anything terribly original, you know. It often will have a library of a certain number of styles and will often give you some fairly confused responses to a prompt. So the top whatever 1% of artists, writers and composers probably don't have anything to worry about. But as it turns out, that's not, by definition, most of the people in those spheres, and to some extent, being bad at things is kind of a prerequisite to being good at them. People who are great composers often start out doing less rarefied kinds of writing or composition or playing or painting. And so I understand people's anxiety that the kind of work that a writer or a painter or a composer might do while they're honing their craft is increasingly, unless you get a kind of mass desire for, let's say, the artisanal, human-produced version, going to be hard to justify economically. If you're saying, look, well, I don't need a great original work of art, what I need is an illustration for this brochure or a jingle for this advertisement, and you have expert systems that can do that essentially for free, in particular in regions where there isn't a kind of demand on the consumer side for "we want the human version of this," that's unsettling, and again, unsettling in a way it maybe isn't when people hear, you know, gosh, maybe humans having to physically plow the fields isn't going to be a thing anymore. So I understand the concern. I understand why people view it differently, even though I think probably what we're going to see and what we're already ending up seeing is something more like a kind of collaborative relationship. [00:09:42] Speaker A: I think that last point is right, because it feels less like this will replace people outright, and more like this will raise the floor for a lot of people's abilities. But the human touch allows you to go above and beyond that.
So you don't publish the article that you wrote with ChatGPT, but you use it as something to refine your ideas with, or to get you starter prose to then build off of, or to help you organize your massive notes into an outline, and so on. But I guess the thing that is striking to me in a lot of the objections to it is that you can imagine analogous situations that, again, people don't really complain about in the same way. So they treat these models, the work that they do, and the kinds of things they create as qualitatively different somehow from what came before, but then ignore the analogous situations. So the bread baking is one of these. It seems to me that to say there just is far more artistry in being the person who writes poetry than the person who dedicates their career to the craft of baking bread is basically to express a preference, that I happen to like poetry more than bread, but I think it would be hard to justify the claim that there's less craft and artistry in one versus the other. But on producing stuff that is cheaper, we have more examples. As journalism has seen, the market for journalists has declined, in part because you had this wave of 20-somethings who were willing to crank out content for content farms at vanishingly small salaries because they'd live 20 of them in a Brooklyn walk-up. And that took jobs away. The content creator job, or just entry into the marketplace, or internationalizing content creation, means that there are now people overseas who will make you a logo for much less, because the cost of living in Moldova is much lower than it is in Manhattan. But we don't see a similar argument that it is wrong for you as a company that wants a logo to hire a designer in Moldova versus the much more expensive person in Manhattan. We don't tend to see a "we need to stop the kids from entering journalism because they're bringing our salaries down, devaluing our content," and so on.
And so I guess that's what I kind of keep coming back to: it's not clear. It seems to be that the objection is, well, this can do it at scale, right? But scale, you know, there are lots of things that can happen at scale. So that's a quantitative difference. And then part of it seems to be that if you, Julian, want a picture for your wall, you have some sort of moral responsibility to hire a human artist to make that picture versus an AI, and that if you can't afford a human artist, your moral responsibility is to not have the picture versus to get it at a price that you can afford, which might be free, or the $10 a month Midjourney membership, or whatever. And so I guess at the broad level, that's my hang-up with a lot of this: it seems like the arguments that are made against this tech, if we took them seriously, would also apply in a lot of domains where they're not typically applied, and where people don't seem to feel the same degree of rage. [00:13:46] Speaker B: I mean, the wrinkle here, and I don't know if this is necessarily a very good argument, but of course an argument people raise, is: look, these models are all trained on vast amounts of data. So this is in some sense uncompensated exploitation of the labor of all the artists whose work is fed into this. Although you could also say, well, there's plenty of work that's now in the public domain, so you could do quite a bit of training without at least exploiting the labor of any currently living person who has a legally recognized right in that work. But no, I don't find that compelling. I think the arguments here are really reasoned backwards from anxiety about a world in which it's not economically viable to be an artist or a writer, or at least it's not economically viable for more than some very small number of people to pursue that work.
And we'll see again whether that turns out to be the case, or whether it's that the kinds of things that it's viable to get hired to do alter somewhat as certain tasks are taken over by AI. I think this is somewhat analogous to what we see in complaints about big tech related to journalism, right? I think it's arguable, but I certainly think it's the case, that the sort of nosediving of journalism as an industry is bad for humanity, that functioning democracies need people doing journalistic work, and the fact that it's increasingly not viable to underwrite that work is a bad thing. And I think what you see as a result of that is people kind of casting about for a model that's viable. So you will hear people say, I was at a conference just a few weeks ago, sponsored by the Knight Foundation, where a writer named Cory Doctorow was on stage talking about how Google and other search engines that make ad revenue off news sites are stealing from newspapers and news sites when they sell ads on search results for them. Now, I think that's pretty hard to defend when you sort of think about it. We don't think they're stealing from Donald Trump whenever people search for Donald Trump. But what is motivating the attempt to find a wrong in need of compensation is the absolutely justifiable, I think, reaction that, gosh, if it is no longer economically sustainable for the New York Times and the Washington Post, or at least some institution doing that kind of work, to support itself and finance the labor that it takes to do good investigative journalism, that bodes very ill for democracy, quite apart from whether it makes a lot of journalists who'd like to earn a salary unhappy.
I think in a sense the idea that that's unacceptable then drives you to say, well, then there must be a model that we can justify that makes it tenable again. And so we can convince ourselves that Google search that makes money off news results must be stealing, and so they owe money that's got to be paid. Again, it's not that the moral argument is in itself credible. It's the sense that something has to be true that enables us to make it viable to do journalism. [00:17:57] Speaker A: So why does the argument, or the strategy for argumentation against this technology or its widespread use or its unencumbered use, flow in that direction? Why is it that, say, artists who are upset about the potential impact this has on their livelihoods begin with arguments about intellectual property or the nature of creativity or the human touch, versus just coming out and saying: look, I work in an industry that I think this is going to demolish. My livelihood depends on that industry persisting, potentially growing, and so on. And so therefore this technology is bad because it will hurt my livelihood, or therefore this technology is bad because it will lead to a decline in the kind of journalism that is necessary for a democracy to function well. Why hide the ultimate concern versus just leading with the ultimate concern? [00:19:11] Speaker B: I mean, I think I do see some people indeed leading with that concern. So it's not that nobody is saying that. But also, in a sense, this is the old bootleggers and Baptists syndrome, right? An appeal to your interests is always less generally persuasive than an appeal to a principle. So if you say, well, this will reduce or impair my ability to make a living, one might rightly say, well, you know, new technology often shakes up industries in ways that make people have to look for other work, or lots of things might make it harder for you to earn a living.
And we don't think that's usually a justification for saying we're going to shut down a technology that's made that harder, if it's seen as just a matter of "I personally am the one affected by this." Whereas if you can make a kind of broader argument out of the undesirability of it, that's more appealing to a lot of people. And again, in the case of journalism at least, I think there is a good argument about the general sustainability. As of yet, at least, there's a lot of kinds of reporting AI can't do, because it doesn't have legs and can't just do a lot of things autonomously. Maybe that will change at some point, but for the near future, there are lots of kinds of reporting that are difficult for AI systems to do. But we have a model for financing journalism that's based on selling the writing, not selling the labor that goes into it. Of course, what we have to sell is the output, and we have this sort of IP problem of, well, we recognize copyright in the particular expression of, let's say, a news report or a news analysis, but not the underlying ideas. So if it becomes essentially costless to reproduce all the information effectively in a news story without actually infringing copyright, without having any string of, let's say, four words that are exactly the same outside of a direct quotation, well, it becomes very hard on really any model to sell that content if you also have to pay the overhead of the legwork that went into reporting it. And I think, again, apart from the displeasure this causes to journalists who like to earn a salary, there is a good society-wide case that if it is not possible to underwrite that kind of reporting, that is bad for us collectively. I think one problem is we're at a kind of nadir of trust in media. So people are a lot less open to that argument, I think, probably, than they ought to be. But also it's very hard to find a solution that makes a lot of sense, for very good reasons.
People are not particularly sanguine about the idea of saying, all right, well, if the market can't functionally underwrite journalism anymore, the government ought to do it. We sort of understand, I think, for pretty well-trod reasons, why that's an unattractive solution, at least as a primary solution. It might be that in a competitive market, having some news outlets that are subsidized isn't that bad an idea. But I don't think anyone likes very much the idea that most or all news outlets would require government largesse to function. It would be, in practice, impossible to preserve the necessary independence under that schema. And so then the problem is, well, what is your alternative funding model? And so, casting about for alternatives, one answer is: all right, if the problem to some extent seems to be rooted in tech, and the tech companies have a lot of money, then finding an argument that puts them on the hook for paying for the process seems attractive, I think, to a lot of people. I think the reality, though, is that it's just going to be very hard, even sort of at a policy level, to prevent, without in a sense creating equally bad problems, the sort of harvesting and copying of news content that, you know, is already sort of happening in kind of human form with sort of content sweatshops, and seems obviously on the horizon at automated scale. [00:24:23] Speaker A: So that last point gets to the question I wanted to ask, which is basically: how new is this? And is the wrong thing being blamed for the malaise in journalism? And I should say I'm very much in agreement that having a robust ecosystem of journalism and a robust culture of journalism is really important. I mean, both of us used to work for a think tank, and our work was ultimately parasitic upon, well, first academic research that we would draw on, and then journalism. We basically would take those two.
[00:25:03] Speaker B: I was in fact a journalist for a decade before I became a think tanker. Right. [00:25:07] Speaker A: So you were producing and then consuming and reworking. And so much of content is just taking and remixing and thinking about and analyzing that on-the-ground reporting. But it's long been the case that hard reporting is not economically viable by itself. So newspapers didn't make money by selling subscriptions to hard reporting. They made money by selling classified ads. And for the people who did buy subscriptions, most of a newspaper's draw was the opinion section, the sports section, the entertainment section, et cetera. That's what readers subscribed for. If it was just hard reporting, they wouldn't be subscribing in the first place. None of that is new. As you mentioned, there have long been these kinds of content farms that take existing journalistic pieces and lightly rewrite them to publish on various fringey and scammy blogs and so on. None of that is new. And as for this collapse in journalism, I remember seeing somewhere recently that, of the kind of major national newspapers, the New York Times is the only one that's profitable. But the newspaper economic model has been collapsing for a while, and AI hasn't even really begun yet. These nightmare scenarios haven't hit yet, right? [00:26:39] Speaker B: No, of course, journalism's problems in a sense long predate and have to do with factors unrelated to AI. I mean, it's essentially about the loss of the ability to leverage a kind of monopoly on the distribution of large amounts of pulped wood to lots of households, and to leverage that and the attention that came with it for an advertising model. But that said, the industry is already on the ropes, and it's not hard to imagine an additional blow at this point being kind of lethal, to the extent it's not already, given the layoffs we've seen in recent weeks.
[00:27:30] Speaker A: But I guess, is there a worry that all the attention is on the problems that these models and technologies might represent rather than the cultural ones? The idea that, you know, we can save journalism if we can just get OpenAI to stop consuming our content and then regurgitating it, or we can save journalism if we just stop Google from indexing our websites, versus these kinds of cultural problems. So there's the cultural problem of declining trust in the institutions, and it's hard for the institutions to then make a case for their continued relevance and necessity, and therefore reasons why you should pay for them in one way or another, if people don't trust them. But there are also long-running problems of taste. Like, you can tell people that the hard news is like the vegetables, right? And you can tell people that eating their vegetables is good for them, but they prefer the sweets of the op-ed pages. And if we focus our attention on "big tech has lots of money and big tech is doing this thing, so if we can blame this thing for the problem, maybe we can extract some money from big tech," and let that take the eye off the ball of the cultural and consumer preferences, then we're at best kind of kicking the collapsing can down the road a bit. [00:29:05] Speaker B: I think that's right. Although in a sense, AI would pose a threat even if you sort of fixed the preferences, or even if, I don't know, people became more sort of civically responsible and understood it's sort of better for them to consume fewer listicles and more good long-form reporting. You would still have the problem of the sort of free riding of expert systems that can replicate that without violating intellectual property as currently conceived. One answer is a radical rethinking of how intellectual property works, so that that kind of derivative production would be understood as infringing.
So that, in a sense, putting an article through an AI that synthesizes it and does a rewrite is viewed as creation of a derivative work, in a way that the current law probably wouldn't consider it to be. But you run into, I think, again, a lot of problems there with identifying when that has happened. The convenient thing about copyright, to some extent, as it currently operates, is that it is in a sense based on surfaces, right? Whether one piece of music infringes on another does in part depend on whether the composer of the second did actually hear the first piece of music. But when you ask, well, are they different enough? It has to do with whether they are essentially different enough to an ordinary listener that they sound like different pieces. It's not a mathematical or music-theoretical definition. It's a kind of ordinary person's response standard. But that at least is in some sense transparent and based on, I guess, surface features of the work, and not on interrogating the process that went into composition and what internally happened in someone's brain as they were transforming and synthesizing their influences. I will say one thing that we've seen as a kind of response, at different levels, let's say, of cultural production, is the very familiar move that we see on things like YouTube and Twitch to, in a sense, sort of monetize the parasocial relationship with the audience more than the actual content. So you may have sponsors and you may have ads that run on your YouTube video, but the way a lot of that stuff works, and the way, frankly, a lot of Substackers now are operating, is less about trying to get people to pay sort of upfront for the content, or even trying to monetize the content via ads, and more about trying to create a sense of a relationship to the human being behind the work, and a sense that you're supporting a person whose work you appreciate and not merely consuming the product.
And to some extent, I think that leads to some potentially somewhat dubious incentives to create kind of the illusion of a real social relationship when that doesn't really exist. But we're seeing large enterprises sort of emulate that model as well. So in the same way that a YouTuber might have a level of Patreon sponsorship where the content is free, but also if you're a sponsor, you get to do a Zoom chat with them, or you get to be on their Discord and have some kind of communication, maybe even play a game of some kind with them if they're a game creator, we're seeing other industries shift toward something very similar, where you have the Atlantic and other publications, and the New York Times, increasingly moving toward the idea that, well, the prestigious publication is going to create a kind of aura of desirability around a group of people, and then what you can sell access to is a physical event where you're going to go and interact with those people. Actually, a few years ago, my partner and I, we love doing crosswords together, got to do a little event with Will Shortz, where a bunch of crossword fans paid some amount to go on a Zoom chat and do a little workshop in crossword constructing. So that was fun, and probably not the kind of thing that's going to fund the totality of the New York Times. But I think we're seeing a shift in emphasis, again among both freelance creators and institutions, toward trying to monetize a sense of relationship to a human, which, again, at least in the short term, is probably not something AI can copy as well. The question is whether, again, that's enough to fund the operations of the New York Times.
[00:35:07] Speaker A: I worry, too, that, and I've seen this in fiction writer communities, where for quite a while it used to be you were the writer: you wrote the book, you sent it to your agent, your agent put it in front of an editor at a major publishing house, they published it and marketed it, and maybe you had to go and do some signings somewhere, but your job was just to put words on a page. But that has shifted, where even the major publishing houses now basically expect the author to do most, if not all, of the marketing. And you're supposed to be active on social media and building these parasocial relationships in order to sell books. And that's not just time consuming, it's time that you're not spending putting prose on a page in your typewriter. But it basically means that success now selects for features of the author that aren't how good of a book they can write, but instead, like, maybe how engaging of a personality they are, how good they are at crafting these parasocial relationships, and so on. And is there a similar thing that happens in journalism? It seems like there's a tension between the objectivity of hard news. [00:36:22] Speaker B: Yeah. [00:36:23] Speaker A: And being a personality, especially if the way that you find success is to inject enough of your personality into your work that people think, I'm not just reading this breaking news story about Trump's legal troubles because it's information packed, but because I like the journalist who wrote it. That would seem to create incentives to just make your journalism more personality-filled. But that might come at the expense of the kind of hard objectivity. Influencers are not generally the people you go to for hard news, right? [00:37:07] Speaker B: No.
More broadly speaking, it would not be desirable if all the people who were able to have successful careers as novelists or journalists were people who were skilled at making engaging TikToks, or even, frankly, people who are compelling on a stage at an event you might pay to see. And also, yeah, I think there is something to be said for the idea that the decline in trust in journalism is probably, well, I think I could go on for quite a long time about what I think the causes of that are. But I think one factor is the rise of social media sort of breaking the fiction of the journalist as a kind of tabula rasa conveyor. It's not that journalists obviously didn't have personal opinions about the things they covered before; they of course always did. But in the sort of social media era, and I think one reason a lot of newspapers got very skittish and tried to kind of clamp down on the kind of content it was acceptable for reporters to be producing on social media, was that when that's all visible, when it's clear that, well, in fact, the reporter does have a personal opinion, whether or not they're good at being fair, and I don't know if you want to say objective, but accurate and balanced in their coverage, it makes it, I think, harder for people to disconnect that and trust that, well, as a professional, that's not going to be coloring their coverage. And to some extent, maybe that is healthy. It is not a terrible thing to be conscious that everything comes from a human's perspective, even if they adopt the conventions of journalism, talking about "this reporter" instead of using the first person pronoun. A certain amount of that is healthy. But the kind of nihilism about journalism that a lot of the public seems to have fallen into, I think, is less so, and indeed not justified by the underlying facts. So, yeah, I mean, all of that said, to some extent that's not new either.
I mean, it has always been the case that an author or musician who was strikingly attractive had certain advantages unrelated to the quality of the underlying product. And there are probably a lot of brilliant songwriters who are not particularly photogenic, who might like to have their own careers, but have wound up writing music for fit and attractive people with a passable voice and less songwriting talent. So that is not entirely a novelty. But yeah, I mean, I'm inclined to agree it is to some extent undesirable. But in a way, look, the point you were making earlier was that it's always sort of been the case that the model for a lot of these industries has been sort of orthogonal to the underlying product. Right? So the model for journalism was, well, it happens that we are delivering this bundle of pulp paper to people's doorsteps on a daily basis, and that drives a certain amount of attention. And so we can make money off the fact that there are a bunch of businesses that would very much like people to look at their coupon or learn about the new hair tonic that they're releasing; we can get more revenue from that than people are willing to sort of pony up just for the value they're getting directly from the creative work or the journalistic work. So it's always been the case both that success is subject to a lot of factors other than the quality of the work in isolation, and that the way revenue is driven is often a little bit orthogonal to the work in isolation, as opposed to the way it functions as a hook that makes something else attractive. [00:42:18] Speaker A: Is there potentially, then, a self-correcting mechanism in this? So even before we get to the question of AI regurgitating and summarizing news, you could tell a story that it's not in, say, Facebook's economic interest for the entire news industry to collapse, because one of the appeals of Facebook is as a replacement for, like, both of us remember the Google Reader era?
And I still use an RSS reader to aggregate feeds from a bunch of sources and read them all conveniently in one place. And that's the role that Facebook plays. [00:43:03] Speaker B: Your audiobooks on wax cylinder too? [00:43:05] Speaker A: No, I have upgraded from those things. You know, Facebook took on that role for a lot of people, as a place I could go, one place I could go, and the most important stories of the day would show up in my feed. Or Twitter played this role for a lot of journalists, whose primary news source was their curated Twitter feed or Twitter lists. And so that creates economic value for Facebook or Twitter or whatever is replacing Twitter, because I don't know how many people use it as their primary source of news now, but if that industry collapsed, then it takes away economic value from them. Similarly, if the way I get my news shifts to logging into Google and having their Gemini AI model tell me what's happened in the world in the last 24 hours, the value of that to me depends on how good the data is that Google is able to consume. These people who are being blamed for the collapse of the industry would seem to have a very strong economic interest, if not a civic, democratic-values interest, in seeing a healthy news industry producing the kind of content that they're dependent on. And so should we, I guess, have a relative degree of optimism that, because of these financial incentives, a new model to pay for it will come along, even if you or I can't immediately imagine it? It's no longer selling subscriptions because of the opinion pages, it's no longer selling classified ads. Those models of subsidizing news don't work, but we can be relatively confident a new one will come along.
[00:45:00] Speaker B: I'll confess I'm not that confident, in part because Meta and Facebook seem to have made the calculation that, for various reasons, they do not want to be a major news source anymore. They don't think, it seems, that the revenue they can hope to net from people consuming news on their platforms is worth the candle. That's partly because in a lot of countries you see news outlets trying to effectively get their cut of ad revenue from platforms that are running news content, but also because it brings a lot of political scrutiny and ends up getting people dragged in front of congressional hearings if the way their algorithm is handling news is not to the liking of one political faction or another. So again, at least that company has very clearly decided. They've essentially said, look, on Threads, Meta's sort of Twitter clone, we're algorithmically de-emphasizing political content, except for people who actively tell us they want to opt into seeing more of it, because frankly, it's sort of a hassle; it's not worth it. And so I think it's an open question whether they're going to step up to the plate. It's also, I think, an open question whether, to the extent they find it worthwhile to invest in that, they're going to invest in a way that happily dovetails with the sort of civic motives for which you would want a healthy journalism. Obviously, we already don't have journalism that perfectly overlaps with what you would want the resource allocation to be if what you were maximally interested in was a well-informed democratic polis capable of governing itself in a reasonable way. So perhaps we shouldn't be comparing imperfect real-world results to the ideal. But we don't know what kind of journalism they would find it worthwhile to fund for traffic-driving purposes.
We don't know how far, let's say, it deviates from the kind of journalism you probably think is necessary for a well-functioning republic. And again, a lot of this has, I think, very little to do with AI. It's just that AI is kind of the monosodium glutamate for a lot of existing trends, in that it makes it possible to ratchet up the speed and the scale. So another problem is, well, it turns out people like news, but they would rather have news that flatters their priors and reinforces their pre-existing worldview, or their kind of tribal identification. And so the economic situation that created the Edward R. Murrow, "that's the way it is" style of journalism, which presents itself as objective, at least, or makes some kind of attempt to approximate whatever objectivity means, was fundamentally about scarcity, right? It's really only sustainable to have a couple of newspapers in most cities; there's a limited number of broadcast television channels. You don't want people to flip away, so it doesn't make sense to try to narrowcast. But increasingly now, well, it is viable to try to narrowcast. And that was true before AI. It's even more true with AI, when you can have a kind of bespoke version of the news article that's tailored to precisely your set of priors and maybe has some kernel of shared fact, but the emphasis and context is geared toward what is going to maximize your continued engagement with that piece of content. And so I think if you tell Google or Meta, hey, you can get more engagement out of people if you to some extent subsidize the production of news, the kind of news they produce is going to be very much geared toward maximizing engagement, and the kind of news that satisfies that criterion might be suboptimal on a lot of other dimensions we care about. [00:50:22] Speaker A: Thank you for listening to ReImagining Liberty. If you like the show and want to support it, head to reimaginingliberty.com to learn more.
You'll get early access to all my essays, as well as be able to join the ReImagining Liberty Discord community and book club. That's reimaginingliberty.com, or look for the link in the show notes. Talk to you soon.
