
Rep. Ro Khanna on regulating Big Tech, banning TikTok, and the 2024 election


Today, I’m talking with Representative Ro Khanna. He’s a Democrat from California, and he’s been in Congress for about eight years now, representing California’s 17th District. It’s arguably the highest-tech district in the entire country.

You’ll hear him say a couple times that there’s $10 trillion of tech market value in his district, and that’s not an exaggeration: Apple, Intel, and Nvidia are all headquartered there. He’s also got a big chunk of Google’s offices. So, you know, no big deal.

I wanted to know how Khanna thinks about representing those companies but also the regular people in his district; the last time I spoke to him, in 2018, he reminded me that he’s got plenty of teachers and firefighters to represent as well. But the politics of tech have changed a lot in these past few years — and things are only going to get both more complicated and more tense as Trump and Biden head into what will obviously be a contentious and bitter presidential election.

On top of that, Congress itself is beset by dysfunction. There’s been a lot of talk about tech regulation in the past few years, but almost nothing has actually passed, even though both sides love to hate on Big Tech. All that inaction means that Americans have basically given up regulatory control over tech to Europe, where the EU is passing more and more tech regulations by the day.

Speaking of which, the new iPhone has USB-C ports because of the EU. A bigger example is that EU competition law kept Adobe from buying Figma and is setting the tone for our own regulators. I wanted to know how Khanna felt about that, and if he could see a way forward for the US to retake a leadership role in thinking about tech.

We also talked about content moderation, which remains the most contentious issue in tech regulation. Almost every attempt to regulate content runs into the First Amendment, which it should. So the new trend is to come up with laws that are ostensibly meant to “protect the children,” regardless of those laws’ other consequences. I put that problem to Rep. Khanna, and he had some thoughts here as well; he returned to his call that Section 230 needs to be rethought.

Of course, we also talked about the election. Khanna and I spoke the day after Trump walked away with the Republican caucuses in Iowa. But one key difference in this election cycle is the presence of generative AI, which can fire a cannon of just-believable-enough, highly targeted disinformation into every social network that exists. I wanted to know if there is any sort of plan for dealing with that and, on the flip side, if there are any positive uses for generative AI in this election cycle.

That’s a lot, and to Khanna’s credit, he really went down the list with me. Also, I asked him to help me make a TikTok, so we really did check all the boxes.

Okay. Rep. Ro Khanna. Here we go.

This transcript has been lightly edited for length and clarity.

Ro Khanna, you are the US representative for California’s 17th District. It has the most tech companies, I think, of any district in the country. Welcome to Decoder.

Thank you. Honored to be on.

Yeah, I’m very excited to talk to you. It is an election year. You’re among our first guests in what will be a challenging election year. We’re talking just after the Iowa caucuses where Trump ran away with a win. The House of Representatives, in particular, seems like it’s more chaotic than ever, maybe permanently in chaos. How are you thinking about 2024? There’s quite a lot happening. There’s a lot of actual lawmaking to talk about, but the context of all that seems quite challenging.

Well, we’re going to be in a very difficult fight with Donald Trump. I mean, I think Iowa showed that basically he’s going to be the nominee. We shouldn’t underestimate him. There’s a lot of polling out there. But the number that concerns me the most is that we’re 20 to 25 points down on the economy. That means we need to do a far better job of communicating a forward-looking vision of how we’re going to improve people’s financial lives, how we’re going to bring economic security for them. Acknowledge that the American dream has slipped away for a lot of folks, that they’re drowning in college debt, they’re drowning in medical debt, housing is out of reach, can’t afford the rent, can’t afford to buy a new house, and the jobs that they may have aren’t paying enough. Then we need to offer two or three bold, concrete ideas on how we’re going to fix that moving forward.

When you think about offering big, bold ideas… I want to talk about the economy. I want to talk about where the US is in terms of regulating tech companies compared to the European Union, which seems to be just forging ahead with new regulations every day. But bringing that home to regular people: On the scale of particular elections in the House, which are two-year cycles, how do you think about connecting “Okay, we got to make some big long-term bets and make some long-term policies to change how things are going, so people can feel it,” with also, “Every two years, I’m held accountable”? Because those things seem misaligned to me.

Well, we’ve got staggered elections. So the presidency is every four years, and I agree that it still makes it hard. We’ve been suffering from short-termism in the United States. Our CEOs have to make quarterly earnings reports. Our politicians are perpetually running if they’re in the House of Representatives. And even presidents have four years, but basically they’ve got a year to do things, and then the midterms come, and then the presidential [election] comes. So I would just say, structurally, we’re aligned toward short-termism.

One of the most astute observations someone made about President Biden is that he is building cathedrals. We’re building new infrastructure. We’re building new clean energy opportunities and jobs. We’re building new semiconductor plants. But these are often five-, 10-year projects, and voters vote on the here and now. What is happening to the cost of groceries? What is happening to my cost of rent? How is my household budget holding up? And so there are two challenges. One is how do we excite people about building cathedrals in communities and make that relevant to them, where they feel ownership and excitement? And two, how do we deal with the here and now? And on both counts, we have to do better.

Given all of that, given the short-termism, given the fact that it’s just going to be a very noisy election cycle this year, should we expect Congress and Biden to get anything actually done this year? Or should we just put our expectations on hold?

Well, the first rule for Congress should be do no harm. Can we actually get a budget deal so that you don’t have automatic cuts go into effect? I mean, automatic 1 percent cuts at a time when we have an affordability crisis really would affect people. It would mean less assistance for housing, less assistance for food stamps. I do think we can get a budget deal. There have been some promising signs for at least getting a continuing resolution until March, which means we avoid a shutdown. [Speaker Mike] Johnson, to his credit, so far has been willing to stand up to the Freedom Caucus and say, “No, we’ve got to get some deal done.” I think that is the highest priority. Now, the second priority is can we get some aid, in my view, to Ukraine?

Because otherwise we’re going to basically be handing Donbas, Luhansk, and other parts of Ukraine to Putin. And that would be devastating after how hard the Ukrainians have fought. I’m hopeful we can do that, but that depends on the Republicans. And then the third thing is some immigration deal. And the Democrats are willing to do that. Having more border agents, having higher fines for people who are hiring unauthorized workers, having unauthorized workers have some process to get work. But we have to see if we can come to a compromise. So yes on getting a budget deal, most likely, but on the other things, harder.

Well, let me just put that into context for this show. I agree those things are important. “Let’s not shut down the government” rises to the level of an emergency. “Let’s fund Ukraine” rises to a “there’s a war, with the heart of Western democracy in peril and the state of Europe in peril” emergency. Immigration is a constant low-boil emergency that both sides are kind of running on in different ways. Is that it? Things have to rise to that level of emergency status? Because I want to talk about AI, I want to talk about autonomous driving. I want to talk about how labor is going to change over the long term. And privacy regulation — we’ve been talking about it for 10 years, and we haven’t gotten anywhere. Can that stuff ever break through in the system that we have today?

You know, look, I called for an internet bill of rights in 2017 with Tim Berners-Lee, and I’ve still been pushing it. The New York Times lets me write op-eds on AI and labor, but it’s hard to get legislation considered on that. And I’d say, one philosophical point: the problem with government is it seems the only time we’re capable of decisive action in the United States is in moments of crisis and emergency. So when covid happened, on a bipartisan basis, we passed the CARES Act. We put out enormous resources to save people from unemployment. We basically funded Operation Warp Speed with vaccines and distributed them. And I would give, actually, both of them — Trump and Biden — credit on that. And so you saw $5 trillion of massive resources mobilized, and we were the nation that came up with the best vaccines fast, because of crisis. We may have overshot to some extent. I mean, that’s Larry Summers’ argument.

But the bottom line is that was government actually working, and working in a moment of crisis. But then we seem incapable of long-term thinking to tackle immigration, education, industrialization, AI, technology regulation, privacy, things that aren’t immediate. And this, I would say, is the most legitimate criticism of the United States government. Now, I do think having a president lead on technology and say “This matters to me” would help, and I have great respect for President Biden, but this hasn’t been at the top. I mean, he’s had a lot on his plate, but it hasn’t been at the top of his concerns in the way it was, I would say, for President Obama, who was very conversant in technology. He would come out all the time to Stanford, knew the tech leaders, and was willing to push back. So I do think having a president, having leadership, saying this really matters is important.

You’re describing a system that you have said a few times now is organized around short-termism, right? You’re making very short-term decisions. There’s a long-term view of things: building cathedrals, building infrastructure. You’ve got to run your office. This is the most Decoder question of all: How have you organized your office to balance the different needs and different constituents you have?

Well, I’ve got a great team. So one, we empower people. We have a very decentralized approach to management. It’s not, “Okay, here’s what I’m saying needs to be done,” and then everyone follows it. There are certain things they know are priorities of mine: building new clean steel plants, that bill. But what we do is empower people to say: given the values of this office, what do you want to do? What are initiatives you want to run with, and how can you do that well? And how can we have flexibility in your life? So if you need to work remote at a certain point because you need to be with family, we understand that, as long as you’re doing things.

If you want to be flexible in coming in some days, not coming in some days, we understand that. Here are times, though, that we all should get together for team meetings. And then you find a balance between our short-term goals (what do we have to do in this Congress to pass legislation, and how do we respond to appropriations and committee hearings) and the long-term projects that are important to you and our office. And I would say there’s one thing I’ve done slightly differently than most offices: really empower the young folks to be creative.

And then here’s the Decoder question. Here’s the whole brand. You have a lot of decisions to make. You’re obviously a politician. You are trading votes back and forth, you’re making compromises. How do you make decisions? What’s your framework?

That’s a great question. So I make 70, 80 percent of decisions quite quickly because now I’ve been in Congress, it’s my eighth year. I have a clear set of guiding principles, a clear set of values. And we’ll probably hop on a text message often — maybe on Signal with my chief of staff, with my chief strategist, my comms director, and go back and forth. Sometimes a phone call, but often just back and forth texting, and we’ll be able to make a decision. Usually, if it’s anything of consequence, we’ll run it by a few people. And if it’s a real consequence, like what’s going on in the Middle East, I’ll talk to my wife. I’ll sometimes talk to my mother. I remember my mom calling me saying, “You need to call for a ceasefire. I don’t understand why you’re not calling for a ceasefire.”

So I did call for a ceasefire on November 21st. But for the broader, bigger decisions, I’ll probably not just talk to my team, but I’ll talk to my wife, talk to family members, people close to me, close friends. And then, in a day or two, couple days, ruminate, think, and make a decision.

How does the politics of it all factor into how you make decisions? I feel like I often talk to executives who are usually fully empowered to make decisions however they want. Maybe they have a board of directors, maybe they care about their institutional investors. Oftentimes, it feels like they’re just doing whatever they want. You have constituents. How does that affect how you make decisions?

Well, it affects it a lot and it should affect it. I mean, they’re not electing Ro Khanna, philosopher king to go make decisions for them. They want to be heard. And so let’s talk about the situation in the Middle East. I had a town hall where I basically got yelled at for an hour and a half after October 7th. And it seemed in that town hall, I couldn’t say a single right word. And I heard very, very pointed criticism from folks on all sides of that issue. That did shape how I was looking at things. Now, I reached out to experts and reached out to ambassadors and foreign policy experts, but in the back of my mind were stories about Jewish Americans who knew people who had been captured and taken hostage. The brutality and fear that many people had in Israel and people in Gaza. I mean, folks in my district who knew people in Gaza who had literally been killed, children had been killed, multiple family members killed.

So I think the constituency on an issue like that did shape my sense of urgency, my sense of response. But ultimately, then you have to make a decision based on your values. And it’s a combination. And I think any politician who’s being honest will say that the politics of things does matter. Now, maybe not as much on matters of war and peace. I mean, there the politics actually probably matters the least, because most members of Congress feel the weight of those decisions. But on typical decisions, one will consider: what is the impact of this? Is this going to upset certain groups? Is this going to make it harder on the president when we want the president to get reelected, or is this a time to speak out? Of course, one considers that as one factor. It shouldn’t be the only factor or the dominant factor, but any politician has to consider it as a factor, or you wouldn’t be effective.

So this brings me into kind of a big question, and then I do want to get into the policy of it all. Your district includes Apple’s headquarters, Intel, LinkedIn, Nvidia, Yahoo, which I imagine is just an enormous policy weight on your shoulders every single day. And the last time I asked you, “How do you think about representing these companies?” I remember very clearly you said, “Well, I’ve also got firefighters and teachers and cops, and I think about them more.”

That feels like it’s shifted, right? There’s something big since the last time we spoke to now, maybe in the last couple of years, where it feels like the tech giants are doing more politicking, they are more openly political, they’re pushing for different kinds of deregulatory structures. Do you feel that weight change at all over the past few years?

There’s certainly more tech wealth in the last few years. I mean Google, also. They’re technically in Mountain View, but most of their offices are in my district. And when you look at AI and the wealth that potentially could be generated, you’ve got Google, Anthropic, OpenAI in my district. A lot of Microsoft offices in my district. So many AI startups in my district. And you see more and more tech leaders taking an active role in policy conversations. Now, I still think that we have to prioritize the needs of working and middle-class families.

And I’ll give you a concrete example. On the truck driving bill in California, many of the tech companies wanted it to be deregulated: let automation do whatever it wanted to do. I sided with the Teamsters, saying, “No, we should have a human on board these trucks.” The reason being safety, and that these workers actually know what will be safe. So I have this sense of both believing in technology’s promise and entrepreneurship’s promise and wanting to spread that opportunity to places across this country, but at the same time, pushing back on tech, saying that you’ve got a blind spot when it comes to some of the issues for working and middle-class Americans. And we’ve got to do better in dealing with income inequality. I don’t always get that balance right, but I would say the tension in my district is more acute.

That tension is expressed, again, throughout the economy right now because of AI. The autonomous trucks bill is actually a really interesting example of it, and I kind of want you to walk me through it a little bit. You wrote about it in The New York Times recently. The bill, as you said, would’ve required human drivers on board. The Teamsters supported it. You supported it, obviously, at the federal level. It passed the state assembly in California, and then Gov. Gavin Newsom vetoed it. How did the dynamics of something like that work? That seemed like a very surprising result to me.

Well, I was a little surprised he vetoed it because all the labor in California was for it. The Teamsters had it as one of their highest priorities. Some of the business interests got to Gavin and said, “Well, this is going to lead to the offshoring of these companies to other states, if not to other parts of the world.” And I disagree with that one. Silicon Valley, my district, is $10 trillion of market value. There’s a reason people are still starting companies there and innovating there. It’s because we’ve got Stanford and the world’s most brilliant technologists and extraordinary venture capital. So this idea that there’s some exodus of talent or capital from my district is just belied by the actual facts. I actually think AI is going to be a huge boon for Silicon Valley, but I think the bigger issue was: do you trust working families, and do you center and prioritize them?

I don’t think the Teamsters would want fake jobs. If those jobs really weren’t needed, they’d be the first to tell us. Working-class Americans have a lot of pride. They don’t want to just do things that don’t have dignity or value. And what they were saying is, “No, we need a human on board, just like we need a pilot on board. With all the recent airline issues, certainly, we’re glad we have pilots and we have a crew on board.” And I think this gets to the crux of the issue. Sometimes the incentive is to use technology or AI to excessively automate.

Let me give you a clear example. You call up an airline, and how many times do you have to press 0, 0, 0, 0, get me an agent, and you’re struggling. You’re almost sometimes fighting with the phone. And then sometimes the phone automatically disconnects you, and then you have to call back and figure out the code to get an agent. That’s excessive automation. A lot of times, it would be better just to have the agent. Or how often have you tried to do some self-checkout at a grocery store or at a CVS, and you end up waiting for the person to run down because they have to take the lock off the shaving blades? This is stuff that an MBA may not figure out, but the workers would. And what I’m saying is we need to incentivize workers to think about how to use technology, not just to automate. And we need a tax code that doesn’t overly incentivize automation over investing in people.

So, in the case of truck drivers: it seems like self-driving will come to trucks in particular first, because they run fixed routes, traditionally on highways, and you can apply a lot of regulation and surveillance to them because they’re commercial vehicles. There’s a big push in general to have AI do this to white-collar industries. We’re going to replace a bunch of doctors and lawyers, right? AI can do a diagnosis pretty fast. Maybe it’s right, maybe it’s wrong, maybe it’s fully hallucinating, but it can do it. We’ve seen lawyers get in trouble for filing AI-written briefs. It’s coming to every sector of the economy, not just truck drivers. How are you thinking about a framework for understanding where it’s appropriate and where it’s not appropriate?

Well, in one way, that is the interesting dynamic. You had truck drivers showing solidarity this past summer with, literally, Hollywood writers. I mean, you couldn’t think of two more different jobs. And yet they’re both, in some ways, standing up to automation. Hollywood writers are saying, “Don’t have AI write all our scripts,” and the truck drivers are saying, “Let us have a job on these trucks.” And so, I actually think that there are interesting ways to have labor organized, and have labor power and labor solidarity, and that the growth of the labor movement in this country may be one of the most promising things to have countervailing power to corporations.

And then you say, “Well, what does that mean concretely, Ro?” It means that when these companies are making decisions about how to use AI, workers should be at the table with a clear decision-making role, and that there should be incentives for workers to get some share of the company’s profits, which used to be the case with Sears Roebuck up until 1968. Workers used to get a percentage of the company’s profits. And so those kinds of things, I think, are more and more important as you have technology that could either be augmenting people or displacing people.

When I think about the things LLMs can do today — the ChatGPTs of the world can do today, the Midjourneys of the world can do today — it’s create a lot of information. It’s pump out a lot of information very quickly. Maybe the information is right, maybe it’s wrong, maybe it’s totally made up. It feels like that will have a huge impact on the large platform companies which have to figure out how to moderate it. It will have a huge impact on our information environment, generally. Deepfakes are a real problem today. As we go into an election year, they’re going to be an even bigger and more dangerous problem. Do you have an idea in mind of how you might regulate away some of these openly negative effects of AI?

That is an enormous issue, and I think it starts with clean datasets. I mean, if we’re putting garbage in, we’re going to get garbage out. And right now, a large part of the challenge with generative AI is that it’s been trained on everything on the internet without necessarily distinguishing what is true from what is false. And that is going to lead to distortive results. So I think we’ve got to figure out environments where there’s heavy disclosure on what data was put in and how it’s been used, and to encourage more clean datasets to be used.

And then, I think, the challenge of deepfakes and the challenge of AI being able to create false content very fast and at scale is what’s concerning. And we need to have some sense of regulation around that, that there has to be clear labeling or marking of AI-generated products. This doesn’t mean that it’s all bad. I mean, there was someone in India actually using AI to have a politician speak in 20 different dialects. That could be a positive use of AI; Ro Khanna speaking in Spanish and speaking in Tagalog and speaking in Hindi across my constituency. But people should know that’s AI generated and that’s not really me speaking. And so I think a lot of this is going to go toward proper disclosure.

There’s a tension there. There’s a reason I ask those two questions back to back. There’s “Will the labor movement contend with AI and get themselves profit sharing?” and “Will we have trucks with drivers in them?” That’s a long-term problem, and it seems like we’re organized around that problem pretty directly. And the problem of “We’re about to flood every social platform and search engine with a bunch of election misinformation powered at scale by AI LLMs,” we have no plans for. Is that a tension that you see reflected? Is that a thing that we can fix?

I do think that we need to pay even more attention on the labor front. I would say that’s not something that has had enough attention, because its potential to increase wealth disparity, income disparity, is enormous. But I agree with you that it’s on people’s radar. On the second problem: on February 15th, I’m calling in the top 20 academics in the country to be in DC for a round table exactly on this. What are the recommendations for the next 10 months? What can we do?

Well, it probably isn’t going to be legislative. So what are the guidelines that you want these tech companies to adopt? How do we prevent the proliferation of this information and the targeting of this information? I think that’s the problem with AI, that it may make the targeting of misinformation so much more precise where you know exactly who may be vulnerable to misinformation and be able to get that to them and the creation of misinformation much easier because you now have it being generated through AI. There should ideally be legislation, but in the absence of that, there needs to at least be clear principles and guidelines and agreements by these social media platforms.

Do you think the social media platforms are doing a good job right now supplying trusted information?

No. I mean, I don’t see how you can look at the current information environment and say that the social media companies are doing a good job. But in their defense, to the extent there is any, it’s a hard issue, right? Because there is a tension between free expression and First Amendment principles and not having a platform proliferate with falsehood and ugliness. And that’s a genuine tension. Where I think there’s low-hanging fruit, and they could do much better, is the addiction of kids.

So, at the end of last year, I interviewed former President Barack Obama. We talked about the First Amendment in this context. If you want to impose some set of content rules on social media companies, you have to overcome the First Amendment. The government has to make some speech regulations. And I said, “Well, how are you going to do that? How are you going to get around it? There’s no way to do it.” And he looked at me very seriously, and he’s the former president, and it became very clear to me in that moment that he used to be the most powerful person in the world, and I was not. And he was like, “Well, you just got to figure it out.” And he literally walked out of the room. That was the end of our interview. It made it clear, right?

This is what government is for, to figure out ways to do what people want to do legally, lawfully. I don’t have an idea for what that hook is to say, “Okay, we’re going to go to Instagram and we’re going to say, ‘You can have this content and you cannot have this content that makes young girls feel bad.’” It feels like politically in the United States right now, “Someone think of the children” is that hook, right? It’s the thing that will get us over that First Amendment barrier, but we haven’t quite figured it out. Is that the only hook we have? “Please think of the children,” or is there some other way to make a set of content regulations a floor for content moderation that everyone can agree with?

I’d say a couple points. I think we start with the low-hanging fruit, which is the children. I mean, don’t get the children addicted. Children have First Amendment rights, but those rights are subject to more content and place requirements, and I think you could get actual bipartisan legislation on that. The second thing is we need to have much more privacy. Because if your data is protected, if we had strong privacy provisions, it becomes harder for these social media companies to target misinformation to you. So the very nature of them having surveillance makes the targeting and misinformation problem worse. The third thing I would say is let’s have multiple platforms. If you’re just beholden to one or two platforms, then, again, the misinformation problem is worse. If you have a plurality of places that you could go for speech and conversation, that’s a better scenario.

You could see sites emerging that say, “Look, we want to have more civil discourse,” and they should have the opportunity to emerge. Right now, you have such a monopolization of social media platforms. But the most important point, I think, is that it’s not just about what government can do to regulate, because the regulation of content is very difficult under the First Amendment. It should be difficult. Let me give you a clear example. I put out a statement [on X / Twitter] that the president violated the War Powers Resolution by striking Yemen. Twelve hours later, there was a community note adding context about the War Powers Resolution and saying Khanna’s interpretation may not be correct. That community note was taken down 24 hours later because it turns out my interpretation is at least very plausible, if not an absolute truth, because it’s a complex issue and people can have differing interpretations of the Constitution.

So I’m not sure I want a Twitter board, or an X board, out there saying, “Should we allow Khanna’s statement to remain up there, or should we take it down?” You can imagine the abuses of that kind of power. So there’s a reason we have the First Amendment. So I would say, though, take out the content that’s clearly inciting hate, inciting violence. Take out the content that’s clearly inciting public health crises. You still have a lot of terrible content out there. So how do we deal with this? And this is where I–

… Not to interrupt, but it’s pretty legal to incite hate. It’s pretty legal to incite a public health crisis. To pass a law saying you cannot have content that makes a public health crisis worse… we would still have to overcome the First Amendment; it would immediately get challenged and face what would generally be strict scrutiny, I think, in the courts. That’s the challenge. That’s what I’m focused on here. We’re looking at a bunch of companies in the district that you represent coming up with cannons of content that they’re going to fire onto all these platforms and distribute, as you said, in more targeted ways than ever before.

And people can use them for good or evil or everything in between. I don’t see a framework for how the government can regulate that. There is a brewing consensus that, “Hey, we should protect the children” might overcome some First Amendment challenges. But everything else, it doesn’t seem like we have any ideas on how we do it, and “Maybe we shouldn’t” is a perfectly valid opinion if you believe in the First Amendment. But I’m looking at the next election season, and it seems like maybe we should think about that more constructively, or we should push the platforms to think about it more constructively. Because I don’t know that we’re ready for the cannon of misinformation that is coming because of AI.

I’d say two things to that. I think, obviously, you have a legal background and are well versed in this. I mean, under Brandenburg, the test is very narrow: you have to really show imminent incitement of illegal conduct. So imminent incitement of violence. Now, I’d say on January 6th, some of that line was crossed. I mean, if you have people on Facebook posting that we want to kill the vice president on January 6th at some time, that seems to be pretty much imminent incitement of violence. And one of the things I’d recommend is… Right now, there’s such a broad Section 230 immunity that Facebook doesn’t even have to take that down, even if it’s a violation of Brandenburg. I would say: have the ability to go to a court to get a court order to remove the things that are clearly violations, and that may incentivize these platforms to remove things that are borderline leading to an incitement of violence.

And that should be a reform to Section 230: the immunity shouldn’t apply if you have a court order finding incitement of illegal conduct. But beyond that, these platforms obviously have their own decisions to make. I sympathize with them in wanting to have First Amendment principles, but I would say that you can have First Amendment principles and still take out things that are clearly hate speech, which the government couldn’t take down, but you can take down as a platform. You can take down things that are clearly violations of public health, and you’re going to get criticism. People are going to say, “Well, this is too broad.” But I think, on balance, these companies need to make that decision while having a diversity of views. But the point I do want to make is that all of our focus is on what the companies can and cannot allow on the platform.

Nothing is focused on what are the digital platforms we’d like to build, right? After the printing press, there were wars basically for a hundred years because the pamphlets were inciting wars, not just inciting violence. And then we thought, “Okay, how do we create a town hall? How do we have deliberative democratic conversation?” And I think all the digital emphasis has been just on regulating these platforms. How can we do more things like these podcasts and, online, how do we create better forums for democratic deliberation?

So you mentioned something earlier about markets and competition, right? We shouldn’t just have monopoly social media platforms. There’s a little bit of change now with whatever’s happening with X, whatever is happening with Threads. You can see the rumbles of competition. Threads is still owned by Meta, which is one of the dominant providers of social media services in the world. You used to be an M&A lawyer, in the before times, before you entered public service. There’s a lot of antitrust action in this world, somewhat successful in the United States, right? It doesn’t all go well. Much more successful in the EU. They seem to have stopped more deals over there, and certainly they just stopped Adobe from buying Figma.

Are you seeing that as a place to put some policy pressure, to say, “Okay, the giants are giant, we need some competition”? How do we incentivize more competition, richer markets? Maybe it’s better if we have a richer market for information services or social media, and the market can decide an appropriate level of moderation. How do you get from here to there, policy-wise?

So, I was a tech litigation lawyer, not M&A — just don’t want to overstate my credentials. But I think we have to have a lot more scrutiny on these mergers. Facebook should never have been allowed to acquire WhatsApp or Instagram. Imagine if we had more social media spaces. You’d have more content moderation strategies. We’d be able to see what was working and what wasn’t working. We’d be able to call out a really bad actor and say, “Why can’t you adopt a social media strategy like this? They seem to have a better balance.” Instead, we only have a few people making these decisions. So obviously, I wouldn’t ban all mergers or acquisitions. That’s usually the exit for a startup. And if you did that, you’d really hurt the startup space, and you’d push all the innovation into just these big companies. They’d all do their work internally. But I think for large mergers, things that are over a billion dollars and in a particular industry, we should have greater scrutiny.

Just before we jumped on to speak today, I was looking at the news. The EU now has proposed some set of rules around music streaming. The music streaming companies should pay the artists more. That’s a great rule. Maybe it’ll happen, maybe it won’t. The EU is doing this every day. It feels like every day I wake up, and the EU has a new idea about how to regulate tech companies, and most of those happen. The new iPhone has USB-C ports because the EU decided that they were going to have a common charging standard. The Digital Markets Act is going into effect. I’ll pick on Apple again. They’re going to have to split the App Store in two and allow sideloading of apps on iPhones in Europe, on and on and on and on. It feels like we have Big Tech companies here in the United States in your district that are increasingly being more effectively regulated in the consumer interest by the Europeans. How do you close that gap? Is it even worth closing that gap?

Well, first, I wouldn’t just blindly look to Europe.

I feel like the United States politicians saying, “Don’t blindly look to Europe” is the easiest softball.

It’s just that Europe has a lot of regulation. I’ve said this to my European friends directly: they’ve got one tech company of any consequence in the last 30 years, and that’s ASML, which makes the lithography machines for semiconductor chips in Holland.

And by the way, for all of their crowing about markets, ASML is a monopoly company, the only provider of that service.

So if you’re looking at how we can be innovative in the world, and you see that Europe has done one thing over the last 30 years, it’s probably not the right model. That said, there–

… There’s a lot of angry people at Spotify headquarters right now, Representative Khanna. 

I should give Spotify an honorable mention. But my point is that they’re also not as effective in regulation as they think, because these tech companies, when you look at it, just go to the least-enforcement forum. There are 19, 20 countries. They’ll often go to the country where the enforcement isn’t happening, and they run circles around the European regulators because the European regulators often don’t have the technology proficiency. So they’ll do dark patterns to get around checking the box. They view it as sort of a speed bump; it’s not as effective as the Europeans may think.

That said, I think the United States has been derelict. We have not had strong privacy legislation. We have not had any AI regulatory agency. We have not had strong antitrust regulation saying, “If you have an app store, you’ve got to have it open to multiple things, and you’ve got to not charge people a commission on these app stores, and you can’t be privileging your own products.” So we should be focused on how we do a better job here. We can look at some of the best practices of Europe, but my sense is Europe’s tendency is probably to regulate every single possible thing without enough focus on innovation. Our balance has been off in not having sufficient regulation. And what we really need is more people focused on what American regulation should look like, and that, I think, could be the standard for the world.

Does that feel bipartisan to you? I feel like there was a bipartisan push toward an antitrust bill last year or the year before that seems to have fizzled out. But it was striking to me that that was a more bipartisan effort, right? Because both sides seem to enjoy hating on Big Tech. Can you get that back? Can anything get done there or are we just waiting until the next election cycle is over?

We’re waiting until the next election cycle. I like Klobuchar’s bill. I supported that bill despite coming from Silicon Valley and having some of the tech companies not agree with it, and it wasn’t a perfect bill. But it was better than what we have now, which is just laissez-faire on some of these issues. I do think there’s bipartisan opportunity there to have thoughtful regulation on privacy, thoughtful regulation on antitrust. I think it’s going to take a president getting elected and saying, “This is one of my top priorities.” The tech stuff has gone from being a niche issue to an issue where people really realize, “Okay, this affects our lives,” but it’s still not high up on the priority list.

I mean, Klobuchar’s bill should have passed, should have been signed. If it’s not perfect, then it can be amended in the future. But there needs to be some US regulation on these issues. But we also need to understand the biggest divide, which is that you’ve got $10 trillion of market value in my district, and you’ve got all these people around the country saying, “How do our kids, how do our young folks get funded, have some participation in a new digital economy? How does this not leave us behind? And what is our strategy toward creating these new economic jobs and opportunities across the country?”

You mentioned the presidency. You mentioned the president having to prioritize his issues. Earlier in the conversation, you mentioned that President Obama prioritized tech and President Biden hasn’t as much. Do you think that’s something Biden needs to improve, his outreach to the tech community, his cheerleading of better tech regulation, whether it’s privacy or AI or what have you?

Yes. I think he could do it in two places. So one, he should set a goal and say, “I want, within the next six months, legislation passed to protect America’s children,” and not just in the State of the Union, where he’s alluded to it. I mean, have a task force, have someone in his administration call members of Congress, get it done, and say, “Look, this is unacceptable that our kids are getting addicted to social media.” Kids at dinner, going out and having to post on Instagram, interrupting dinner because they’re so addicted; it’s the worst experiences of junior high on steroids.

I think he needs to do that. He needs to say it’s embarrassing we don’t have privacy legislation. He needs to say that we can’t have Big Tech companies not have appropriate competition. But he also needs to convene these tech leaders and go to rural communities, go to Black and brown communities, and say, “What are you doing to invest in our HBCUs and our HSIs?” We created a program with Google in historically Black colleges in South Carolina. Young folks get an 18-month course, $5,000 stipend, $60,000 to $70,000 jobs at the end of it. How are we getting a hundred thousand new Black and Latino tech jobs? How are we getting more of these jobs in communities in the Rust Belt and across America? I think the president needs to mobilize technology leaders to say, “You’ve got to help create the job opportunities for the next generation.”

We’ve talked a lot about different social media platforms, the information environment we live in, targeting that information. You’ve talked a lot about the harms to children. It feels like the elephant in the room in that conversation is TikTok. There was a lot of discussion about banning TikTok under the Trump administration that carried through to the Biden administration for a minute. It seems to have all disappeared as we head into an election. Do you think there needs to be more scrutiny of TikTok — its Chinese ownership, how it works at this moment in time — or has that faded to an appropriate level?

Yes, it needs to be scrutinized. We shouldn’t have the data be potentially in the hands of the Chinese Communist Party. And I’ve said: force a sale to an American company. And there are a lot of things about–

That’s your position? That TikTok should be sold to an American company?

It should be sold, but not banned. And I’ll tell you why it shouldn’t be banned. And I don’t love everything on TikTok, and I’m obviously not great at it, because we’re still figuring out how you get one of these videos to go viral. We’re on it in our campaign.

This one right now, make this one go viral if you’re watching this.

Yes, it’s a bit hypocritical, because you have all these politicians railing against TikTok, and then they all go to a 25-year-old millennial they know or a Gen Z person they know and say, “Oh, how do I do better on TikTok? I need to get my message out on TikTok.” So a lot of hypocrisy there.

But look, I don’t agree with everything on TikTok, but you’ve got all these people on TikTok being critical of our Middle East policy, being critical of our environmental policy. The fact that you’ve now got influencers on TikTok who have more say than boring congressional House speeches, that’s not a terrible thing. So I think you have to have these technologies be democratizing, give people a voice, but then have guardrails so that they’re not violating privacy, so that the information isn’t going into the wrong hands.

But there are two types of folks who want to come down against this technology. One, legitimate folks who don’t want the information misused, who don’t want people targeted, who don’t want the spread of misinformation. But there’s a second group, and they just don’t want a threat to the establishment. They don’t like these new voices. They don’t like the fact that people in Congress are losing power, that the establishment is losing power, and that suddenly a whole new set of people are having an impact on the conversation. And I have no patience for that second group. And that is the vision, ultimately, of the internet — that at its best, with the appropriate guardrails, it can empower ordinary people to have a voice.

Is there any momentum? Is there any political capital right now to force the sale of TikTok? There was once. Microsoft CEO Satya Nadella called it one of the weirdest deals he’s ever been a part of. That seems to have gone away.

There was, and it’s something that I think President Trump and President Biden agreed on. I don’t know the details of where that committee that the president appointed stands, but I think having a forced sale with appropriate compensation and having an American company monitor it would make me a lot more comfortable. I mean, we wouldn’t give up CBS, NBC, or ABC to the Chinese, and yet the channel that’s leading in communication with voters under 30 is in China’s hands. That, to me, is a long-term danger.

Yeah. Alright. Last one. We’ll do this one for the TikTok. You’re going to answer this question for the TikTok audience. It’s going to go viral.

Am I finally going to go viral?

Explain to our TikTok audience, as quickly as you can, how you are thinking about regulating generative AI.

Three principles to regulating generative AI. First, you’ve got to know whether something is human or AI generated. Second, make sure generative AI isn’t replacing workers. Make sure workers have a say in their jobs. And third, have basic safety so that generative AI can’t just create massive misinformation or risks to civilization.

Is there a bill people can go look at that contains these principles?

There is not a bill, because to get a bill, you need to have some consensus. I can put out a bill tomorrow. It is not going to go anywhere unless I can get Republicans and senators on board. What I would say is: pay attention on February 15th. We have literally the world’s top academics, people who have spent their lives thinking about it. Too often, when we want to regulate AI, we think, “Okay, let’s call Elon Musk. Let’s call Sam Altman. Let’s call Bill Gates.” All brilliant people, but they’re not neutral academic experts. I’m calling in the 20 leading academic experts in the world, and let’s see what recommendations they give. And I hope that can start to form the basis of bipartisan legislation.

Alright. Representative Khanna, you’ve been amazing. Thank you so much for coming on Decoder.

You’re an important voice in the debate and conversation. Thank you for having me.



