
tv   Campaign 2024 Discussion on Elections in the Age of AI  CSPAN  April 27, 2024 2:40am-3:52am EDT

2:40 am
Hosted by the Bipartisan Policy Center.
2:41 am
2:42 am
>> They have much easier access to tools that can create convincing deepfakes and fuel conspiracies about voting irregularities. We cannot discount the benefits of AI; technology has for decades fueled innovation in campaigns, voting, and candidacies. But this comes at a time of heightened tensions, and it poses real challenges. Election offices have been underfunded for decades, and they are under threat. I think you are here to hear about solutions, so let me introduce the panel.
2:43 am
He is a professor at the University of Chicago school of public policy. >> Welcome. Thank you so much. >> Thank you for having me. We pride ourselves on being outside of Washington, so we can take the time and space to be analytic. We are working very deeply and trying to effect policy change, to have this exchange back and forth, and to do the hard work. We are really happy to be doing this. Let me quickly introduce the panelists before we get into our conversation. He is the director of the elections project.
2:44 am
He has also worked on technology issues for Senator Elizabeth Warren and does research on related issues. She is the director of technology policy at American Progress. She has spent the majority of her career in technology policy, and she also served in the Department of the Treasury.
2:45 am
He is a senior research fellow at the Center for Growth and Opportunity, where he has guided efforts to improve elections, and he has worked as an attorney. Great to have you. We will jump right into talking about this. We can try to talk about some of the problems and some of the solutions. As we start doing so, it is worth noting that we will be facing an important election
2:46 am
this year. There will be many threats, and we will be grappling with a set of issues. Let's start with election risks. Tell me how you think about the set of risks we are facing. We can get into discussions on this — it includes things like cybersecurity and the like. I want to unpack that a little bit and hear from you: what are the threats we should be
2:47 am
thinking about? And then we will turn to some of the solutions. Where do you want to get started? >> I am glad you mentioned that. We can get into first-order versus second-order risks. There are so many things to talk about, and so many ways to order the discussion. You can think about the kind of risk brought up in the introductory remarks: deepfakes and other kinds of content produced by malicious actors seeking to deceive. That is a whole set of risks that we could spend the entire hour discussing. There are also reasons to think that we as Americans are resilient to this stuff. That is one group of people,
2:48 am
malicious actors. The second group is average voters using AI to get information on how to vote. This is something people have concerns about, but there is good reason to think it is not a huge risk. Voters are interested in receiving information, and we did a report a couple of weeks ago on elections: getting information from chatbots ranks really low. I think it was last, below "I don't know where I get my information." It is something voters are not interested in, for now. But chatbots are popping up. They are inescapable. They're becoming more integrated.
2:49 am
Another reason not to be concerned is that tech companies are aware that chatbots can hallucinate, and they try to do things to redirect voters to authoritative sources. That will not be so bad. >> Thank you for having me. My take on this is that, like with any technology, AI makes good things better and bad things worse. When I think about elections and the way we consume, receive, and generate information, it is already pretty fragile. With the advent of generative AI, I see that problem exacerbated. I don't think AI will be the reason democracy ends or falls
2:50 am
apart. The reality is it drops the barrier to creating and disseminating information that is problematic in nature. The dissemination of that information falls to the platforms; people are not sharing on OpenAI the way they are on Instagram or Snapchat or other places. The existing problems are more pertinent than ever; they are only heightened by generative AI. The only caveat I will add is that the thing that keeps me up at night is not anything about Joe Biden or Donald Trump. It is the down-ballot considerations. In such a massive election, with
2:51 am
all 50 states heading to the polls at local levels, there are so many microcosms of threat, where the digital literacy around spotting and reporting on a deepfake will be less developed than for a deepfake of Joe Biden. It is really incumbent on those stakeholders, the state authorities, to build that literacy. While I am a former platform employee and I work on things like authoritative sources, I know that is barely scratching the surface. >> Here is one that is not a risk. One of the early discussions was that cybersecurity would be a big challenge — AI can easily build code.
2:52 am
I think there was a lot of early talk about whether this would present a cybersecurity risk to elections. In my discussions with experts, they are not that worried about it. It turns out that the U.S. has an election system so robust, redundant, and distributed that it is chaotic. It is very hard to capture it in a way that lets you manipulate the votes, even with a bunch of vulnerabilities. That will not be the concern it was initially. There could be other cybersecurity threats, but it doesn't seem like this is a particular risk. The biggest risk I see is that we
2:53 am
still do not trust each other very much. I think the concerns around misinformation are where people are focused. One of the things I worry about in this space is the idea of changing the outcome of an election by manipulating what people see at the national level. If that were possible to do, the hundreds of millions of dollars already going into getting voters to change their minds would be much more predictable in effect. We don't really know how to do that well. One thing that is easier to do is to make people less certain about the validity of the outcome. We are already at a place, as you mentioned previously, where the American public is worried any
2:54 am
time an election happens about whether or not their guy got a fair shake. A lot of the discussion around AI, while maybe not actually contributing to that, is another narrative that people can grab onto and say, this is another reason why my guy did not get a fair shake. It is really up to us to educate the public, so that they talk about this issue in a way that is balanced and addresses the concerns people might have, but does not heighten them so that people automatically turn to AI as the reason the election did not go well if it does not go the way they want. That is where I am most worried —
2:55 am
whether or not AI actually changes a person's vote. >> There is a question here. If you think about 2020, one of the positive stories for American democracy is the extent to which the professionalism of election administration really held up. Secretaries of state and county election administrators behaved in an extremely professional, technocratic manner, even when facing a lot of difficulty. It is really a story of a triumph of American democracy. What we are thinking about this year is, should we be worried about AI? I can give you a few scenarios. One is that the election does not go the way you want, and there is this narrative about AI, and we
2:56 am
undermine faith in elections by blaming AI, even if that is not true. Another scenario: think about the famous video of a poll worker innocently moving some ballots from here to there in Arizona that became a very big story. You can imagine a deepfake version of that — you really get the use of AI for an attack, undermining faith. You can imagine other types of scenarios, like false information about where you should show up to vote. Is the set of attacks on election administration broad enough that we should be talking about policy responses or
2:57 am
education responses now? Which set of responses should we be thinking about? What should be most concerning to us? >> My biggest concern is: what is generative AI bringing to this? It would not take a very detailed photo to stir up a giant controversy. Will attackers spend a bunch of money figuring out a tool to do this, or will they just take a blurry photo and spend all of their time and money distributing that rumor rather than generating the content? I don't think we know exactly how that shakes out, but it is distribution that matters. The chokepoint for misinformation is not really the generation of it; it is the distribution of it.
2:58 am
That is where people will still spend a lot of time. We should think about what the margin is before we decide to plunge in and do something very specific to AI, rather than something more general about deceptive attempts to intimidate voters, or distract them, or direct them somewhere else, or threaten election officials, regardless of what technology is used. That is how I think about it. >> All of the examples you cited have low-tech equivalents that would be just as effective, and that election officials have seen in the past. It does not change the picture that much, necessarily. It will require officials to double down. They have had so much more put on their plates in terms of
2:59 am
responsibilities — being prepared to counter some kind of false narrative. I think a lot of the solutions are things that election officials have been thinking about for a while and getting good at. It is hard to say what kind of narrative we will face this year. >> I think the problem of scale is really relevant. The platforms have been dealing with these challenges; it is an existing problem. When you have any of these photo or video creation tools, there will be more of this content out there. In a situation where you can't
3:00 am
tackle the problem head-on with enough resources to do it fully, and you add additional tools to create and generate this content, who is to say what the reality will look like? It could be awful, or it could just be a drop in the bucket of an existing issue. >> I think that also gets to your point on the effect on smaller jurisdictions. When we talk about a Biden deepfake, obviously that will be debunked and reported pretty widely, and hopefully that would make it to voters. At the local level, where there is not a local newspaper that is actually able to pick up the narrative, I do agree with you. That will be something more to watch.
3:01 am
>> Especially when voters are heading to the polls the day of. >> It is interesting, as we start turning our attention to regulation, to think about to what extent this is just politics as usual. There has been dirty politics since time immemorial; maybe this is more of the same. Another possibility is that the scale really matters, as opposed to what we have been focusing on. Maybe this floods the zone more than in the past. I don't know. As we think about the set of problems, do we have a set of working regulatory or policy
3:02 am
ideas that feel up to the moment? We have seen states attempt to do some regulation, and we are having conversations about regulation. Where is the regulatory or policy conversation making progress? >> Another thing we need to think about is the parties involved. Incumbents traditionally worry about new technology, especially technology that helps people at a smaller scale produce content at a more professional level. That can help explain some of the concern about generative AI. It is the type of tool where incumbents have been spending tons of money to create content that looks like that, and now a competitor can create
3:03 am
sophisticated content for a low cost. I think we have to keep that in mind when we are thinking about policy. Regulation around speech about elections runs right into the First Amendment; you always have to keep that in mind. That is why a lot of the most targeted tools in this space will be focused on conduct aimed at the electoral process, and not at the discussion around the election. When you are doing things about lying to people about where their polling place is, or threatening or harassing election officials in an automated way — those are the types of things we should have solutions for. I don't know that they need to be AI-specific. We have laws around much of that.
3:04 am
Maybe we need to strengthen those laws to address any new challenges. >> I would take a big-picture approach and say there is some tech regulation, but I am of the belief that it is a failure of this administration if a singular platform is the only thing being regulated. There is a strong need for protecting kids online. You can argue how to go about those things, but my general thought, and that of folks in the states, is that there is a need for regulation of the online ecosystem. We have seen a failure to regulate the platforms
3:05 am
effectively. With all of that, notwithstanding generative AI, it is clear there is a need for more regulation in D.C. and the states. The states have stepped up in a lot of ways. California has laid the foundation for how companies build privacy into their systems, and with GDPR, it is baked into what they're doing now. Even with generative AI: in Michigan, the governor signed a law around disclosure of generative AI ahead of the election. So you can see the states starting to create their own approaches to what they think is a problem — some have pros and others have cons, and they are fresh off the press. But given all of that, and the moment that we are in now, there is a strong need for baseline regulations, and especially for ensuring that we
3:06 am
can harness the opportunities of AI while balancing the risks. The caveat is, as was said, there is a lot already on the books that does not specifically mention AI, and I am really excited that we are undertaking a body of work to see how AI applies for the Department of Education or HHS and other agencies.
3:07 am
So that is a good starting place for any sort of targeted platform regulation, which is what we have right now. >> It is about the harms and risks that we are worried about, rather than the technology itself. We already do have so many tools to use, like you referred to. In lots of states, deceiving voters about the manner of elections is already illegal, whether you are using generative AI or crayon. Some of the things the states are doing around transparency are pretty interesting: states like Michigan have passed disclosure laws that campaigns have to figure out. >> If I were advising a candidate, it would be hard for
3:08 am
me to say, do not put this disclaimer on your content, because I think it would be pretty hard right now to produce a video that did not use some sort of learning algorithm. I mean, the chips in cameras now essentially have embedded machine-learning algorithms that sharpen a photo — does that count as AI? I don't know if you saw that video with the iPhone 15, where the bride had an arm in the wrong place, or three arms, because of the generative algorithm that is built into the phone as it takes a picture. Would that count as generative AI, and would you have to put on the disclaimer? I think the safe bet would be putting that disclaimer everywhere, which is not useful. I think there are practical
3:09 am
challenges; these definitions are hard to nail down. That is why a tech-specific approach is difficult. >> Think about the story of why incumbents may oppose the technology: any change creates risk — I am going to win as things are, so why change anything? With that in mind, I would like to think about positive uses of generative AI in the election space. The story told about why incumbents oppose it is a good story for electoral democracy, especially for underfunded candidates — challengers who cannot otherwise mount a professional-looking campaign — who can now
3:10 am
generate ads almost for free. If you look at the data from the last couple of elections on who is using Facebook ads or politically targeted ads, they are disproportionately taken up by challengers. So is this a good story for AI and elections? Are we headed toward a golden age of electoral competition? Are there other stories about AI — not all doom and gloom — coming out of these elections for us? >> Good or bad — maybe the incumbents are right? There are some technologies that could make the world more complicated. Look at social media and its story in politics: the story for President Obama's election was very positive,
3:11 am
and the story about how Trump used it was negative. I think what you will see is a lot more variability in the ability of candidates without traditional backgrounds to get traction. That could be a challenging story for democracy, but it depends in part on the candidates. I trust the American people enough to think that we will figure this thing out. We have a system in place where it is not simply maximum votes getting you across the line; it takes a lot of scale and the ability to build a coalition. I think some of that protects us from the most extreme types of variance, and AI does not really change that too much. I think there are other positive stories about the use of these as communications tools, but I have been talking too much. >> To think about upsides, I
3:12 am
think about election officials as part of the civic workforce that is being asked to do a lot while not receiving much funding. Elections are carried out in the U.S. by 10,000 jurisdictions. In many of them, the elections are carried out by one person — one person who is maybe part-time, with one I.T. staffer or zero, depending on the size. They have a lot to do. I think there are opportunities for AI to be used in terms of doing more with less, which is what we are always asking them to do — though what we should actually do is fund them at higher levels. There are things election officials have to do that can be streamlined by AI. They also have to generate a lot of memos and things, just like any campaign would. They have materials that need
3:13 am
proofing, and things like that. There is a set of activities that AI could potentially help with. Some election officials are experimenting with it in these kinds of areas, but it is early days. There is no official guidance from anybody on how to do this safely and responsibly. I think we will get there eventually; it will be incorporated into election officials' workflows just like it will be for all sorts of workers. I was curious — raise your hand if you have tried using generative AI to do something related to your job. That is a little over half. It is the same across most offices, including election officials', so we have to figure out how to put safeguards on it. With elections, a lot of that
3:14 am
involves making sure that no matter what you do, there is human review at the end of the process, and that is just built into the DNA of election officials — it is part of the process. >> I would add another positive, which is the flip side of the coin I spoke about earlier, about the reduced hurdle to creating misinformation and disinformation in video, text, and audio: civic participation becomes easier for people who would like to write a letter to their congressperson, or engage on an issue, and are not sure how and need a head start. Those of you who raised your hands have probably used ChatGPT to get started on something. You have an idea, and it gives you information and fleshes out that idea into something more.
3:15 am
You can think about that in the civic context: I care about this issue and would like to write to my congressman or mayor but don't know how to do that. That is a clear-cut example of a real democracy use case that carries all the risks we spoke about, but it is a real opportunity. >> Maybe you can help us think about the self-regulation side. We have seen a variety of different approaches to these issues over the last decade or so, and now we are seeing Twitter going in the opposite direction from where everybody else has gone over that decade. So I'm interested in your take: what kinds of self-regulatory strategies have been tried in earnest, and did they do anything? >> [laughter]
3:16 am
Great question. So, in my time at Meta and Twitter — and I will say that I was fired the first day that Elon took over the company, a proud badge I wear every day, actually here in D.C. — I was on a team called product policy. We essentially advocated for all the things we're talking about here, like content moderation, safety, transparency, and accountability, in rooms full of the people building this stuff — designers, engineers — and I was often the only person in that room advocating for the issues that I cared about. What they cared about were things like growth and engagement and time spent, stock price, the bottom line, and
3:17 am
things institutions naturally care about without an external push to prioritize anything else. Ultimately, these incentives were never prioritized by the companies because they did not have to be, and to this day, without appropriate external pressure, regulation, or a forcing function, I don't believe they will be. That is partly why I made the transition and left the tech world to come to civil society: to have the conversation and be straight up when I talk to people at the companies, speak the language they speak, wear the hats that I have worn in the past, and say, you know, I know you guys care about these issues. You are working tirelessly, but at the end of the day, these incentives get deprioritized by things like growing and keeping users on the surfaces, and keeping the lights on.
3:18 am
Without that external pressure, we will just continue to see a failure to self-regulate. A lot of crises have occurred that have implicated the platforms in severe ways — loss of life, violence, moral issues. We are at an inflection point as a society with this industry, where it is really incumbent on all of us in this room, in D.C., the states, and around the world, honestly, to figure out how to wrap our arms around it. >> When you were there — and I'm curious how you made it to civil society; welcome — >> Thank you. >> A lot of tech-focused organizations are always trying
3:19 am
to make recommendations on what they think companies should do. Did you find that people considered those within the companies? >> Yes. There are well-intentioned teams internally that spend all day talking to those organizations, and when I got to CAP, one of the first things I did was write one of those reports: here is how you protect elections. It came out in August, and I wrote it in the style of how I would write internally. Whether or not it had impact remains to be seen, but I do know that after meetings with outside organizations, there were well-intentioned people who would bring those recommendations to the people designing the systems and building the products. Ultimately, though, it is as I said: a for-profit institution will do what it needs to do to generate profits, and if something is not on that list, they will not
3:20 am
prioritize it. >> Content moderation is already a difficult, difficult problem, and political content moderation can get you in a lot of trouble. Even if you do it perfectly 99.99% of the time, you get a few bad examples, people make a big deal out of them, and it gets you hauled in front of Congress. I think there are strong incentives to get it right, but it is a difficult problem, and getting it right looks different depending on where you sit politically. All the recommendations in the world do not make the problem easier. Around performing correctly in elections, one would hope there would be bipartisan agreement on what running a good election looks like. I'm not even sure that is 100% true, but I think it's challenging. Here, though, we are not talking about
3:21 am
generative AI companies but social media companies. Self-regulation in the generative AI space is something different. In fact, you can think of reinforcement learning from human feedback as essentially a form of self-regulation, and a lot of work is going into that space. How are they doing it for election-related material? It is not clear. I think they are trying hard to tell people that these are imagination machines, but then they have marketing that says something different. I do think it is a complicated problem for them to solve. Again, I think the real problem is distribution, not content generation, so maybe that is the right place to talk about self-regulation — and we are not really talking about self-regulation there. >> I would like to come back to
3:22 am
self-regulation on the first side, the generation side, but I would like to stick to distribution for a minute. They are interwoven, and distribution is a complement. Content moderation is really hard to do right; nobody agrees on what is right, because politics is zero-sum, and the thing good for one team is bad for the other. I tend to agree that content moderation reads as political, because every piece of content you take off pleases one side while making the other side mad. The other thing: if you compare today to 2016 or 2020 — and here again, I think Twitter is really an outlier — look at
3:23 am
TikTok, Instagram, whatever: just the mass of turning down the volume. We are seeing less politics on the platforms than we did half a decade ago. I'm interested in your thoughts — is that good or bad? It is a kind of self-regulation that has taken out a lot of the politics. >> I will add that most platforms have said they will not directly run political ads, even though that cuts into their pocketbooks. But that is not a good solution, for the same reasons you said: these targeted ads are the way startups — people challenging incumbents — can get traction and build a campaign. If you remove that factor, I
3:24 am
think that's bad for democracy. You can see the appeal of it: rather than making hard decisions anymore, they just say, we don't do that here anymore. It is a form of self-regulation, but I think a pretty bad one. >> In 2017 to 2020, the pendulum had swung to the post-2016 side of things, and now Twitter is the outlier — they did not do political ads last time and they are doing political ads this time. The pendulum has now swung with the Texas and Florida cases, and Murthy v. Missouri. The relationship between technology, the government, and the public is at an inflection point, where the problem of scale is so severe, and the realization is that effectively moderating political speech and advertising — forget keeping
3:25 am
everybody happy, because that is zero-sum, but even keeping the space healthy — is completely impossible. So the solution is to demote political content using machine-learning models trained on every social issue that exists. It is quite imperfect when you think about the scale at which they are rating. Even though that is the approach they have taken, you will still see virality across certain content that will become an issue for the companies. It is one approach — not necessarily the one I agree with for discourse in a healthy democracy — but that is where they have landed. >> I'm looking back at my previous testimony. As of the date of this, I think
3:26 am
this is still true: if you do have synthetic content, you have to disclose it. I don't know how they would police that effectively, but if anybody can, they could. That is a different approach, and their definitions are much tighter than some of the legislative language around what counts as AI. >> So viewers are made aware that it was done with AI. >> Exactly. >> So, speaking of self-regulation on the AI side: some of these are social media companies, but OpenAI has taken a stand that you cannot use their tools for political content, and other companies have been less strict about that. And there
3:27 am
are open-source models you can put on your laptop, where you don't have to follow anyone's rules at all. Is the content-generation-side self-regulation doing anything? Or is the genie out of the bottle, and there is nothing that can be done at this point? >> I do think that the most effective self-regulation is the training of the model and the layers they put above it to check outputs. That does not work so well if you are downloading a model to your laptop, but if you are interacting with ChatGPT or something like that, there are layers of checks to see if the result coming out has something to do with something they might be squeamish about. That doesn't always go right either, because sometimes it blows up into a PR problem, but
3:28 am
that is the approach. There has also been a sort of collaboration — collaboration might not be the right word — but agreements, some driven by the White House, some driven by the U.K., around uses and procedures that they have adopted to ward off the worst uses. The practical effects of those — it's too early to know, and hard to verify some of it. I have very little fear of somebody using something like ChatGPT to crank out tons and tons of spam, lies, and emails, because the company can see everything that you are asking, and they are watching. I worry less about that. People are going to try some crazy things with these models, but I still think the problem is on
3:29 am
the distribution side. >> The platforms and the non-platform AI companies did sign the Munich accord a few months ago, which was an effort as a group to self-regulate and try their best to prevent deceptive content from being generated with their tools. I think that is a good first step, and it remains to be seen how effective it is going to be. I would like to see them partner more with civil society, to give a little more transparency into what is going on on the platforms and what people are generating with generative AI tools. This is something civil society groups have asked for for a long time: asking them to be more transparent about the conversations that go on on their platforms, and when they try to make interventions to make things less
3:30 am
toxic, what effects the interventions have and how well they are working — letting everybody else in to see how it is actually working. More transparency would be great to see. >> I think with things like the Munich accord, or what the generative AI companies signed as well, it fell really short. Neil is correct about ChatGPT: there is keyword detection and there are canned responses that say, sorry, we will not tell you how to vote, here are authoritative sources. But there is a huge component of their bottom line that is the API and access to it. This goes for all of these models. That is really where it is the
3:31 am
companies, the developers, who are putting pretty big requirements on third parties that are integrating the API. The example: in order to integrate GPT-4 into your third-party product, OpenAI's terms of service say you have to disclose that you are using the technology — and virtually no one does, because why would they? There is no regulatory hand forcing them, and there is no OpenAI check that they are doing it. There is a terms of service. That is an accountability issue. Think of that across all the technologies and spaces where this is being deployed. You have some training of the model, yes, but it is very easy to get around. A lot of the trust stuff they
3:32 am
build on top is optional. So as we think about the growth and development here — Microsoft's AI is already in our pockets and in search — millions and millions of people will have this, at a scale like the one social media has had historically. It is incumbent on all developers to have the appropriate guardrails for their first and third parties. >> I will open it up to questions in a second, but one more question. We have talked to some extent about civil society, regulators, and government. What is the role of journalists in the coming election? I have a pet theory that at some point the information environment becomes bad enough that people have to turn back to something like authoritative sources, or professionals who are willing to sign their names to things, because they know they cannot believe anything they see
3:33 am
without someone telling them it is trustworthy, even if we are polarized on who is trustworthy or not, or at least turn to someone accountable. but i'm interested broadly, what are the risks and opportunities that our first big ai election presents for professional journalists? how should they be preparing themselves? what is their key function here? >> i think this is really important because there is the risk that if this stuff is not covered correctly, people might see it as a reason to become more distrustful of the electoral process. so a lot of it comes down to understanding the technology at least a little bit, with sophistication, not lumping it all
3:34 am
together. and definitely don't use a terminator analogy or other types of sci-fi movie analogies. these are hyped-up autocomplete, not robots yet. i think there are basic things around that, but also just looking at how we are going to talk about this particular incident. my organization has been tracking media coverage around these incidents since the beginning of january, and we got a lot of coverage. there is the biden deepfake, taylor swift holding the "trump won" flag. there are two others that i'm not thinking of. ai generated images of donald trump with black voters, and the campaign ad that had some
3:35 am
variations of trump hugging fauci. all of those got a ton of coverage, but did they actually affect voters? the early one, the biden robocall -- the secretary of state did investigate that, and a bunch of people got the calls. did it change anybody's vote? i think that is what really matters. but also, spending the time to get the facts right and talk about what is the actual effect on the election or the electoral process. that is what is really important. >> i think everyone generally agrees there is no silver bullet, but around education and authoritative sources, those keep coming up. these journalists are in many
3:36 am
cases uplifting them, appropriately giving them the space to investigate, not attacking them online, and really letting them do their jobs effectively in an ecosystem that is tough to navigate, with all of the murkiness and everything going on in the elections. and i would agree with neil that it is really important not to just take what you see online in the first 24 hours at face value. you have to make sure the organization, person, or story is verified, and in a timely manner, because it will be down to days and hours in the lead up to the
3:37 am
election. >> we have been hosting a series of tabletop exercises to try to simulate some of this and try to figure out how to handle it throughout the day and what the best response is. journalists play an essential role when there is something that may or may not have happened in an election office, and when we are worrying about the election process itself. i think it is really important for journalists to try to understand how elections are actually carried out in the jurisdictions that they cover, form relationships with the chief local election officials, and be able to respond quickly.
3:38 am
that is what i would like to see, and i think journalists have actually gotten really good at this stuff since 2020, when, of course, so much misinformation was about the nuances of the electoral process. that is what i would like to see them double down on here. >> i think we have some microphones. oh, and when you have a question, please stand up. >> hello, megan gray. and, neil, i appreciate your comments. in terms of actual impact on the election, where will we see that in the news coverage? we will see it in last-minute coverage of a supposed news report or a race call. i do have a question for the group, which is, for the down
3:39 am
ballot candidates or for citizens voting in the election: if they see something that they suspect is ai generated and that is leaving an inappropriate impression, the only channels i'm familiar with are the standard reporting portals into facebook. there is not a fast-track reporting system for political candidates or citizens to stop the virality of this type of thing. but you all would know better than i would, so i'm interested in your thoughts. >> there are reporting buttons built into the platforms, and a lot of the argument is on whether those are effective and whether they go into a black hole or not, but there are also outfits, to my
3:40 am
understanding, like common cause and its stopping cyber suppression program, and they also have relationships with some of the companies, or at least they did in 2020. >> i cannot speak to it now because i'm not there, but there was a pretty robust program, and it was an effort to onboard election officials in each state to quickly expedite information to the companies, so that it gets prioritized and looked at in a quick way, to ensure that if it is election related or related to a potential candidate, it gets the eyes of an employee sooner than something i would report myself. >> we have checked twitter a
3:41 am
good amount, but community notes on twitter is pretty productive. and it harnesses what we were talking about, the american disdain for authority -- we don't really trust people just because they have a title or position. in america, you have to prove it. i think the platforms, to the extent that they are decentralizing some of that and taking advantage of it -- these are contentious issues, and if you give people a way to annotate a claim, that can help. it's not a total solution, but it is better than sending a report into the black hole and seeing whether or not people at facebook pick up the phone. >> great to see you.
3:42 am
the european elections are in six weeks. has there been anything? the only actual policy we have heard sounds akin to a bad idea in michigan. what have people done before those elections? granted, these are places that don't have a first amendment or in many cases are actively suppressing free speech, but what are they doing about this, if anything? >> there is so much going on internationally that it is hard to track, so i don't have great insights into what india is doing. i have a little knowledge about what the u.n. is doing generally around ai, but on ai and elections i don't know. to your point, there are entire regimes around election speech that
3:43 am
are totally different than ours. i don't even want to venture there. i think it is probably worse. >> a few things. the indian technology regulator is extremely strong, and compared to the u.s. not having one, they are the opposite. to your question of what they are doing, they are taking a different approach to awful-but-lawful content, moderating content and applying pressure to take down certain applications. there are also language challenges and political groups in india, too, and it really adds to the unique set of problems they are facing, and there are not a lot
3:44 am
of solutions for getting their hands around the problem. there are a host of others that require prior takedowns of certain information within a timeframe, and that is a very unique approach and probably not something that would work here in the u.s. in the eu, the dsa exists, and they just undertook a special project on elections to really start to build principles and policy around how platforms are setting themselves up. mixed results across all the ones i just mentioned, but there are bespoke approaches happening around the world. >> there have been mixed results across the world. you do see those rules used by regimes in power to take down speech they don't like.
3:45 am
yeah. >> hello? the fact that we are talking about potential misuse of ai to produce misinformation suggests that the natural incentives are not aligned with public goods. so the question is, what can we use to align the incentives between the owners and users of the technology? >> i think we have to make the companies compete to produce the safest product. they do have incentives -- i don't think any of the companies would like to be in the news for creating the machine that powered an election interference campaign.
3:46 am
the free press has to do what they do tomorrow -- do good reporting when something like this comes up, try to understand the source of it, and hold the companies' feet to the fire. >> i will push back on your premise a little bit. historically, with new technologies, we are much better at imagining how they can go wrong than imagining how people are actually going to use them. i don't think it is naturally implied that the interests of the providers are unaligned. i think it is that we are much better at imagining how it could go wrong, which is part of the job. >> i think there is a sobering reflection in the tech policy niche i have existed in for the last 20 years, looking ahead and not wanting to repeat those mistakes. i would agree with neil, and
3:47 am
there is the desire to not go down the path we went down before. >> there was a question up here in the front. >> eric, with the ap. i was interested, particularly at the lower level -- you talked about potential vulnerabilities, but we have seen a lot of local news organizations that have either been gutted or closed, so i'm wondering: what is the next step if you don't have that reporting capacity available? you still have this vulnerability -- how do you address that? >> you hear that, sometimes jokingly -- the idea of ai reducing the cost to produce news -- but it would still
3:48 am
take a lot of work to get the facts vetted, put in context, and crafted into a story. can ai backfill that? i don't know. it is a tough problem. we have more access to information than we ever have. we have the potential to get that to someone who has knowledge. ai might help us do that, but i don't think anyone has figured out how to do that well yet. the media business models have been eroding rather slowly, and i think you're going to see even more upheaval under ai. i don't see a path yet that is obvious.
3:49 am
>> the report i mentioned earlier on where voters go to get their information found local journalism was important. i wish i had better ideas on how to make the economics of local journalism work, because it is essential. >> that will have to be the next panel. >> please join me in thanking our panelists. [applause]