
Govt Officials Tech Advocates Speak at Internet Policy Conference  CSPAN  May 6, 2024 8:48pm-12:29am EDT

8:48 pm
8:49 pm
internet education foundation. it is three and a half hours. [chatter] >> hello. alright, let's get going.
8:50 pm
we have a long day ahead of us. welcome everybody. so, i am the executive director of the internet education foundation, which coordinates the state of the net conference. welcome to the 20th annual state of the net, which seems to me to be insane because i didn't think we would get to the second annual, but that is a totally different story. i would like to make a profound statement today about what 20 years means and where we are at this point in time in internet policy, looking back over the 20 years of internet policy, the ebb and flow. something is certainly new. we are excited to talk about internet policy because we feel like something is happening: the next 20 years will be even more exciting than the last 20 years.
8:51 pm
that said, a lot of the issues we dealt with 20 years ago are still in play today. we are at an incredibly important inflection point in history when it comes to connecting every american to broadband internet, the same question we had 20 years ago. we are seeing the resurgence of that issue this year, and you see that reflected in the entire program we have today. a lot of the other issues, like cybersecurity, are the same now as they were then, and privacy as well. it's an important time and we are really excited to host this conference. i will not go into details about how i feel about the issues because that's not my job; my job is curating conversations about them. the program will reflect what you are thinking about -- and other things
8:52 pm
we are thinking about when it comes to internet policy. in the coming months we want to host a conversation about the next step; we will have a follow-up to this conference that focuses more on the next point in time. some housekeeping i want to do: the schedule is much bigger than it was last year. we introduced lightning talks and we have introduced salons. the schedule is posted all over the place, and there is an app that you can use to look at it. we have a lot of programming. i am really excited about the lightning talks. about 100 people sent in proposals for lightning talks and salons, and we could only accommodate something like 18. so i want to thank everybody that sent in a lightning talk proposal; sorry we couldn't accommodate all of them.
8:53 pm
i want to -- i need to thank some of the sponsors and my board members. we passed out a list of the board members we have and we are very grateful for them. you will see them throughout the day introducing different people. i want to thank them. i have to thank the sponsors, because without people willing to sponsor the conference and sponsor a conversation that has robust debate, where not everybody agrees on the topics here, it is incredible that people would come out and allow us to create this debate. they include comcast, tiktok, meta, netflix, pepsi, verizon, workday, amazon, at&t, discord, lumen, incompas, internet society, stand together, venable, american university, and a special thank you to the glen echo group. a lot of the graphics you see
8:54 pm
around the space and online are a result of the glen echo group's amazing public relations and media team, so i want to thank them for an amazing job. that is pretty much it. there is a donate button when you register for this conference, and last year one person donated to us. [laughs] this year four people donated, a fourfold increase. i am excited about that. they are my personal heroes this year, thank you so much. forgive me for prattling on. let me introduce the assistant secretary of commerce, principal adviser to the president on telecom and information policy. ntia is one of my favorite agencies
8:55 pm
in town; i used to work there a long time ago and the issues weren't as big as they are today. alan davidson is like a renaissance man of internet policy. he has been everywhere, done everything: mozilla, new america, google. he has been at the department of commerce even before this. people don't know that he holds an engineering degree from mit as well as a law degree from yale law school. when i met him, he was working for cdt, the organization that created this organization. at the time the conference was conceived, he was board treasurer, so he is largely to blame for all of this. he is going to be doing a fireside chat with a guest from the washington post, the national tech policy reporter. she was the first anchor of tech policy at
8:56 pm
the 202. kat, please. [applause] >> can i just say thank you? >> yes, please. >> [laughter] thank you for that intro for both of us, but particularly i will say congratulations to tim on the 20th of these. pretty incredible. i will say, he is right, i was at cdt at the time this conference started, and i was also on the ief board. when the idea of the conference came up, we were skeptical. we said this is a brilliant idea, but really, can we do more than two of these? we were completely wrong. [laughter] clearly, there was a lot to
8:57 pm
talk about. so congratulations to tim lordan and his whole team for 20 years of incredible conversations that have made us all smarter in this space. congratulations, tim. [applause] >> sorry. >> alan, thank you for that opening. it brings us right into a topic i wanted to start with, which is ai. you have been in washington for many tech policy battles over the years. what do you think makes the ai moment we are in right now different? >> i think ai has captured the public's imagination, and that is really powerful and something we should all take a moment and lean into. i will say as a starting point, i think we have all realized the power of ai, right, that machine learning and these new systems will really transform so much of our economy. i think there is a strong
8:58 pm
feeling within the administration that responsible ai innovation is going to benefit a lot of people, but the key is that we are only going to realize that benefit if we are able to also manage the very real risks that these systems are posing today. i think that is what has captured people's attention: the potential benefit, and the need to get ahead of addressing the risks. the large language models have really brought it to the fore. you have been covering it for years, but it is really something special how much people are paying attention to it now. >> last year you did a request for comment on ai accountability. can you tell us something about the initial findings from that period? >> thank you. we started this project about 18 months ago, before large language models
8:59 pm
were cool. [laughs] the idea behind it was to say, if you want trustworthy ai, the big question we raised in this project is what does it take to be able to hold a system accountable? what do you need to know? we got 1,400 comments, a huge amount. it touched a nerve. thanks to everyone who submitted. i think what we heard is that there is real action we can take to make systems more accountable. we have a report that is coming out in weeks, not months, this winter. he's nodding; he's one of our main authors.
9:00 pm
>> there is work to be done by the private sector, and accountability means consequences for falling short of what you said a model can do. one of the analogies we are using for this is financial auditing. we have a world today in financial accounting where we can tell if a company has legitimate numbers because we have an accepted set of practices about what it means to be in
9:01 pm
compliance with accounting rules, and an ecosystem of auditors out there who audit companies. it is very well accepted. that is what we need to move towards: a world where we have real rules that people understand, and an ecosystem, an army of auditors out there who know how to look at systems, understand them, and be able to prove they are keeping people private and safe. that's the world we want. >> when you think about having a system of a.i. auditing from the government, what are the biggest obstacles? is it talent? power? >> all of the above. we are so early. when you look at the level of transformation, we are very early in this a.i. revolution. we have a lot of things we need to build.
9:02 pm
part of our goal is to identify those building blocks, starting with the rules, and giving the public more information so researchers can understand systems, set up better practices, and create this ecosystem: a world where we do have companies that are in the business of holding systems accountable. >> what steps do you think congress needs to take as they debate a.i. legislation right now? >> there is a lot for congress to do. it is early days; we are interested in watching and trying to provide technical assistance as congress gets engaged in this. there will be a lot to do here, everything from how do we make sure we have the resources to stay in front.
9:03 pm
i think that's a key part. we think that innovation here in the u.s. and the west needs to happen. it is going to provide a lot of benefit for people, and there is a lot that congress can do to promote that. we also need accountability on the back end, and congress can have a role in that. >> on the accountability part, the administration is setting up the a.i. safety institute within nist. how are you informing those efforts? >> we are excited about the a.i. safety institute and the big announcement of new leadership. i congratulate the new leaders put in place. it will be an enduring institution for the u.s. we see safety institutes coming out in other places around the world, and we are working closely with them. there is plenty of work to go around. this is all a product of the a.i. executive order from the
9:04 pm
administration, which is a symbol of the level of urgency that we feel in the administration about this, a strong sense of urgency about the need to engage on a.i. the second piece is, i think that executive order is probably the most ambitious government effort to date of any government to address the broad range of issues around a.i., and the safety institute is a big piece of it. we are going to be working with the safety institute ourselves.
9:05 pm
and dealing with risks. i think it is across the board a very ambitious effort. we will need legislation in this space, too, but this was something we could do quickly, and i am proud we were able to move in this space very quickly. less than a year ago, even nine months ago, we started an effort to get leading a.i. companies to make voluntary commitments on large language models. we got those commitments and moved to an executive order, and it shows the urgency in
9:06 pm
addressing a.i. right now. it is an enduring theme; look at today's program. there is a lot of conversation about it, and appropriately so. >> on that topic of the executive order, you will be doing a request for comment on open source models. what do you hope to learn from that comment period? >> one of the most interesting homework assignments that we got from the executive order was to give a report to the president by july. we have a date, we have a shot clock, about widely available model weights for foundational models. what does that mean? it means, what should we think about opening up some of the key parameters around models that let people use them? it's a very sensitive topic and
9:07 pm
we are taking the time to think about it. it is hard. there is a tension: if we open up these powerful models, how will we protect safety and security when they are open and anybody can use them for almost anything? there is a competing view, which is that we want to make sure that opening up these models can be a way to promote competition and access, to give more people the ability to use them in different ways. figuring out that path, and making sure it's not just a few large companies that have access to and control of the most important models, is a big assignment for us. we had a terrific listening
9:08 pm
session. there is more to come that we will be doing, and a request for comments that will be coming out. >> on this issue particularly, this is the one where i hear the most from startups, venture capitalists, and lobbying tech companies. how are you weighing that as you handle this process? >> it's a terrific question, and it is why it is important to get a wide variety of inputs. our first listening session included public interest organizations. we have tried to get input from
9:09 pm
a wide variety of stakeholders, and the public request for comment is coming out in the next few weeks; i hope you will contribute to that. i think we will also hold a public conversation, and we are working with our international counterparts. >> you talked about the importance of congress acting when it comes to a.i., but over the last decade we have seen congress unable to act on many tech issues, even areas where there is bipartisan interest, like privacy. why do you think a.i. might be different? >> it is a great question. we have a lot of members of congress coming today, and there are some key areas where we really would benefit from action; i would put privacy at the top of
9:10 pm
the list. if you had asked many of us working in this space when this conference started 20 years ago whether we would be sitting here without a national comprehensive privacy law, many of us would have said surely we will have one by then. we would benefit from more action there. the reason i'm optimistic about a.i. is that it has really captured the public's imagination, because it is so useful. our kids are using it for homework assignments, and people are using it in the workplace now. it is transforming our society, and we see members of congress engaging in a way they haven't for a lot of tech issues, and government, through executive action, moving
9:11 pm
faster than on any other major tech issue. there may well be action here because of the urgency, and people are reacting. >> while on the topic of congress, there was recently a kid safety hearing on capitol hill with c.e.o.'s. you had a session at the white house with the kid safety task force. tell me about how that impacted the discussions there. >> as a starting point, i would say these issues around the safety and health of our young people online are massive, and they are one of the things i hear about the most when i talk about broadband. parents are really worried. and while there are
9:12 pm
still a lot of questions and research to be done about the impacts of the internet and online services on young people, i think the strong feeling among many of us is we know enough to know that we need to act. we did have a listening session with leading mental health and kid safety advocates and people from the white house, and it was clear to everybody there that the time for hearings is over and the time for action is now, and there are things that we can do. i co-chair this task force on kids online health and safety that the president set up, with a great colleague at h.h.s., and we have been tasked with a couple of things, including putting out a set of best practices for industry and
9:13 pm
recommendations for families and policy recommendations. on best practices for industry, we need to do more. we cannot say this is on parents now to keep their kids safe, and we can't say that kids need to stay offline longer or don't need these services. we need to help them thrive online, and industry has to help, too. >> you mentioned getting an industry pledge on a.i. would you do something like that in the area of children's safety? >> great idea. we'll see. i think we are keenly interested right now in identifying those best practices. there are some very good innovations, and people are working hard. we can help a lot by lifting up the good practices and asking people to make sure
9:14 pm
we are raising the floor here. >> while on the topic of social media, i want to ask you about tiktok. we saw the biden campaign join tiktok, and there are a lot of concerns about its ownership by the company bytedance. do you think it presents a concern? >> they will make their own choices and assessments of how to engage with the american public, and tiktok is a popular application right now. i can't comment on cfius matters about foreign investment. the administration has endorsed the restrict act, which is about how to create better tools to address these issues, and this administration is focused on two
9:15 pm
important things. one is making sure that american data is safe and secure, and the president has said that he wants large platforms to be more accountable for the illegal or harmful content that is on their platforms, and that will be true regardless of what the platform is. >> last year when there was a debate about banning tiktok, the secretary said it would be a sure way to lose voters. are political calculations impacting how this administration thinks about its policy on tiktok? >> i think we have had a very clear line on this all along, and i think the secretary's comments were made in jest. we had a clear line on this.
9:16 pm
tiktok is not available on any of our devices. i can't use it, our team doesn't use it, we can't install it on our devices. and we are laser focused on promoting the privacy and security of americans' data.
9:17 pm
it would be very damaging, and will be damaging to our economy. i think it will be damaging and harmful for these families to no longer be able to get online. at ntia, we are trying to promote broadband access and build networks. a.c.p. has been a key part of making sure we can deploy those networks knowing there is a customer base available. people being able to participate is what makes networks more affordable and attractive to build; without a.c.p. we have a problem with building out networks for everybody in america. it is an important program, and we are hopeful that congress will be able to act. it is part of
9:18 pm
the president's supplemental request and a high priority for this administration. >> can you tell us more specifically how the digital equity work you are doing under the infrastructure package might be impacted if this program goes dark? >> as i said, affordability is a necessity and a key part of getting people online. we have been talking about the digital divide in this country for 25 years. we now actually have the ability to do something serious about connecting people. we are going to connect everybody in america with a high-speed internet connection in the next decade. that is going to happen, and we are on our way to doing it. but a wire that goes past a family's home doesn't do any good if they cannot afford to connect to it.
9:19 pm
affordability has to be a key part of this as well as access to tools, devices and skills and our digital equity work -- our goal is not just to have a connection available but help people thrive online and have meaningful adoption where people are online and using the tools. affordability is a key component of it.
9:20 pm
>> this is news broken this morning by the "washington post." we are putting out, and have an event this afternoon about, a $42 million grant for a testing center around open radio access network technologies, what we call open ran. the 5g equipment market, the equipment in cell towers, is a static market. there are only four or five major vendors who produce the equipment in these cell towers, and a couple of them are not trusted.
9:21 pm
so we are supporting these test beds. one of the top things we have heard from industry about how we can help is creating these test beds where people can prove their equipment works.
9:22 pm
>> we are just about out of time, so let me ask you a big picture question. the biden administration came in with very high expectations on tech policy, on issues from competition to broadband to privacy. how do you grade its progress so far? >> i think we are doing well here. there is still a lot of work to do, obviously. there are things we would like to do more of. this is an important moment. i think there's a lot to feel proud about in our work on broadband and connecting everybody across america, and our work on competition, which you'll hear about, and the work of the f.t.c. and justice department. there is a lot more we would like from congress. our work on a.i. is nothing like
9:23 pm
anything we have seen before in terms of engagement on a major technology issue in this country in recent memory. we are proud of the work we have done, but there is a huge amount still to do, and it's going to be a very challenging year. we have a lot of concerns about disinformation, and concerns that a.i. is going to fuel those things in a charged political year. if there is one thing that gives me hope -- and i am an optimist -- it is conversations like the ones we are having today. it's the fact that we have a community of people who are engaged in this in a way we never had before in the previous conversations about the internet, the web, social
9:24 pm
media. we never had this level of engagement, and we have tools at our disposal to deal with giant challenges. connectivity, a.i., they are hard problems for us to deal with, but they give us tools to deal with the things facing our planet: how do we create a vaccine, how do we deal with climate change, how do we figure things out. we need to get them right. we are engaged far earlier in this conversation than we have ever been in the technology space, and that's what gives me hope. i would say there is a tremendous program of conversation today. we can look at this agenda and say these are the conversations. it's hard work, but working together we can build technology that is going to make people's lives better. >> none of us know what is going to happen in november. if you
9:25 pm
have less than a year left in your office, what is the number one item on your to-do list? what has to get done? >> i'm going to give you two. we are well on our way to connecting everybody in america. that is our all-hands-on-deck project. this is a moment, a once in a generation moment. we already got tens of billions of dollars to connect everybody in america. we get it once from this congress, and we have to get it right. this is like the giant projects of the past, connecting everybody with electricity and water and building the highway system. this is our generation's chance to connect everybody with the tools they need to thrive in the modern digital economy. what we are focused on landing this year is
9:26 pm
getting that program launched. and we need everybody's help. >> thank you so much, secretary davidson, for your comments today. [applause]
9:27 pm
i have the great honor to introduce don beyer, in his fifth term representing virginia's 8th congressional district. prior to his service, representative beyer served as u.s. ambassador to switzerland and liechtenstein. he is a member of the --
9:28 pm
representative beyer is a leader in internet technology and now a.i. issues. he is currently vice chair of the new democrat coalition's a.i. working group and is working on obtaining his master's degree in machine learning at george mason university. at the end of december, representative beyer introduced the a.i. foundation model transparency act, and in january he introduced the federal artificial intelligence risk management act. please join me in providing a warm state of the net welcome to representative don beyer. [applause] >> thank you for the kind
9:29 pm
introduction. i'm a big supporter. i was reading materials yesterday about the potential for the decentralized internet to promote communication, commerce and democracy. it reminded me of the last couple of days. i was with my daughter watching the super bowl, the first time ever she wanted to watch the super bowl. she is one of the new fans who now watch football, and so every time kelce caught a pass she was thrilled. she is 28 years old and has never read a book. as a father you are supposed to answer your kids' questions, but she is so full of knowledge, which she only gets because of the internet. i was meeting the first gen z member of congress, and he pointed out that gen z, born 1997 to 2010, is the most
9:30 pm
politically engaged generation in american history in what percentages are voting and showing up and campaigning. i don't think we have anything to thank but the internet, because in my generation, if you were a young person, where did you get your news? the evening news that we didn't watch, or the newspapers we didn't read. they are far more engaged, which, to pick up on alan's comment, makes me very optimistic. i have been with alan at virginia tech to talk about open networks and building this out. but let me talk about the internet, and an issue i know a lot of you care about: net neutrality. tom wheeler is the only boss i
9:31 pm
have had. tom was f.c.c. chair, and is a dear friend. they made net neutrality happen, only to have it reversed during the trump administration. but thanks to the f.c.c., it's coming back. i explain to my four internet-savvy children that net neutrality means you don't let one company or any set of companies throttle down who has access and who doesn't. i am proud of everything he was talking about, the connectivity and broadband. a long time ago, when i was lieutenant governor, i made 100 different visits to southwest virginia, the coal fields, which were in great decay. you had schools shrinking and most people on federal benefits, and the notion then was
9:32 pm
that if we get internet connectivity the last mile to the house, it would change. and now, with leadership from the president and representative jim clyburn, we put those billions of dollars out there. i know people are pessimistic. we are stuck on border security, and the do-nothing congress has done nothing. however, i am optimistic because this is not a partisan issue. i'm a democrat, but most of the people served by the low income connectivity programs are rural republicans, and my friends across the aisle are going to join us to make sure we extend that. we talked a lot about privacy.
9:33 pm
again, when i talk about a.i., i start with the fact that we have had social media around for 25-plus years, and the only bill is section 230, to keep them from being sued; we have done nothing to protect the american people or to build privacy out. it has been almost impossible so far, but we haven't given up, because more and more people at town hall meetings are concerned about who is collecting their data and selling their data, how the data is getting into china, and how it is getting to the n.s.a., our own national security apparatus. and there is a difference between the privacy that concerns your health care records and the privacy concerns around your online shopping. very different stuff. my friend who is a member of
9:34 pm
congress has been pushing hard for a national digital signature that is unique and unbreakable as a first step toward protecting our own individual privacy, so we own it and get paid for it. think of henrietta lacks, for those who are into biology, the one woman who had cancer in the 1950's; companies have been leveraging off her cells ever since. they are leveraging off of our data and getting rich selling it. you mentioned the judiciary committee hearing, which raises the whole issue of what regulations you put in place, and whether it should sit with the parents. my oldest daughter asked on the
9:35 pm
way in how her 12-year-old and 10-year-old should have access to media beyond the football game.
9:36 pm
this was like 45 years ago: there was already so much information in the world that we could no longer see the connections between this vast body of information, not just the correlations but the causes and effects. jump forward to 2024: last year, in 2023, we generated more data in just one year than in the first 2,000 years. i was on a committee for a couple of years; our 6,500 satellites generate data that 95% of us never even looked at. if only we could figure out how to examine it. i decided to take an a.i. course
9:37 pm
at stanford. i was loving it the first three weeks, until i got a zero because i didn't know python. i was hopeless, but it's fun, and i have a long way to go. nicely, my middle daughter is a senior software engineer, and we got closer when i got her to help me with my homework. on the policy part, though, it's tricky. i tend to be on the optimistic end of the a.i. curve, and there was something said when the internet first came out, that we way overestimate the short-term impact and underestimate the long-term impact. i feel the same way about a.i. our lives will look completely different because of artificial
9:38 pm
intelligence. think of the ability to now detect pancreatic cancer, the whole notion of liquid biopsies. and we have a great bill on this: apparently 10% of patient harm every year is due to misdiagnosis. think if we took the power of artificial intelligence and applied a.i. to catching the wrong medicine read off the chart, or cutting off the wrong arm, or whatever it is, to really move us forward. there is an enormous amount of good. we are hearing from the
9:39 pm
radiologists that they have more effective tools to look at the x-rays and mammograms. a young man looked at the folding of the proteins, and so many things are going to be done in pharmacology in the near term because of alphafold. my wonderful a.i. staffer, when she was at princeton, was doing the phone calls on lifelines, and the national suicide prevention lifeline is now using a.i. conversations to train people much better than weekend after weekend after weekend of practice, so they can jump right in. you don't want a nonhuman being talking to the people in crisis, but people that are trained by them. we had a meeting with the new a.i. office at veterans'
9:40 pm
affairs. two years ago when our a.i. caucus talked about the terminator, hardly anyone came. recently the room was packed, 75 people, to hear jack clark. there is a great deal of interest. there are 191 a.i. bills pending in congress right now, and a lot of them are good. the a.i. caucus is led by republican mike mccaul and democrat anna eshoo, with jay obernolte and me as vice chairs. we are trying to keep it out of the partisanly divided
9:41 pm
place. the question is, which of these 191 bills can we pass this year? let's avoid the mistakes we made on social media, and that resonates. speaker mccarthy put together a task force, and then he got unseated. speaker johnson and hakeem jeffries are about to stand up an informal working group soon to get four, five, ten major a.i. bills done this year. among them is the create a.i. act. we had researchers complain that people who don't have resources can't compete with the huge data resources of
9:42 pm
google, microsoft and the like, and that we need databases that are available to businesses and researchers. this sets up a large research database, probably not as large as theirs. they scrubbed a trillion words when they set those up, and if you scrub six trillion words, maybe 40% may not be true; there's also a lot of stuff not in there. instead, you let the scientists of the national science foundation build it for all of us. another one goes to the most basic part of a.i. safety: the notion of not allowing artificial intelligence to make the decision on a nuclear launch against china or russia. there are a lot of wonderful
9:43 pm
ideas out there that we will try to get done and build incrementally. and those who are vein enough to think we can regulate. they are trying to do it in europe and lots and lots pushback. they might disagree. but trying to manage and regulate the math and science where as we think that is impossible and also not smart and trying to regulate the end users if it puts a child at risk, if it of the many different things it could do that are negative, how do we guard against that. there is a firm in northern virginia whose only mission is to develop a downside risks and different uses, what are the things we have to worry about, because those are the things we
9:44 pm
have to address and overcome. so that this is an incredibly sign in terms of artificial intelligence and the different ways people are using it day and day out. it showed up last night in super bowl ads that i didn't understand. we can act in a bipartisan way and take the five best bills and make a difference and build it on the years to come. if we stick to the optimistic side and live to be 120 years old happily and helpfully. and not just in health, supply chains, management and democracy can be breakthroughs as long as we protect on the downside. i am exciting that all of you
9:45 pm
are here, which is very cool. two years ago when we talked about a.i., six people showed up, and now you are all here to talk about it. good luck with the rest of the day. [applause] >> no rest for the weary. for our mid-morning keynote, it is my distinct pleasure to introduce the principal deputy chief technology officer at the office of science and technology policy in the white house. i first met deirdre mulligan when i was a baby tech head and staff counsel and she was one of the founding members of the center for democracy and technology almost 30 years ago.
9:46 pm
she has been very busy since at the university of california, berkeley, where she is the faculty director for law and technology and explores the legal and technical means of protecting free expression, privacy, and the freedom and fairness of new technologies, which is a very important thing right now. this country is very lucky to have her at the helm of complex issues. it is my pleasure to introduce deirdre mulligan. >> i wasn't here five years ago. last year, my predecessor was
9:47 pm
here. many of you know him. he kicked off his speech reflecting on how nice it was to be with this assembly, but he also kind of lamented how thin our ranks were. i feel at home here, as many of you know. i'm a lawyer by training but have spent most of my career with social scientists, all of whom believe we just can't regulate our way through this world. to build our future in a robust, meaningful way that helps us thrive, we have to view the design of technology itself as a tool. for that reason, i am grateful to be in this room. but i am also grateful to be in this
9:48 pm
room because almost 30 years ago, as laura mentioned, i had the privilege of starting the center for democracy and technology, and shortly thereafter we created the internet education foundation. as tim tells it, i apparently recruited him to come run it. i feel grateful to be in this room and grateful to tim, who has maintained this space for robust conversations around these issues for so many years. so truly, today we are at a pivotal moment. everyone around the globe now gets what this community has gotten for many years. everybody understands that if we want to build a future in which democracy thrives, human rights are protected, equity is advanced, and competition is meaningful and robust and leads
9:49 pm
to innovation, we have to lean in to building technology, and there couldn't be a better time to sit in this room as our ranks are growing. as maura said, i'm in a new role. after 23 years as a professor at u.c. berkeley, i now serve as principal deputy chief technology officer in the white house office of science and technology policy. the office works to maximize the benefits of science and technology and advance health, prosperity, security, environmental quality, and justice for all americans. we advise the president and other senior white house officials on matters relating to science and technology, and president biden and all of us in the biden-harris administration are
9:50 pm
supporting the public interest, ensuring technology works for every member of the public, protects our safety and security, and reflects democratic values and human rights. i wanted to start by responding to the question put to alan: how is the administration doing? i have been in this space now for about 30 years, and from my perspective, not just as an administration official but as someone in this community, the administration is hitting it out of the park. alan talked about the important investments in broadband making internet for all a realizable goal. look at the spectrum strategy, which sets a clear path for how we make the most of a very, very scarce resource that is important for everything every one of you in this room does. and look at the
9:51 pm
administration's sweeping efforts on a.i. i was in a briefing call with members of civil society, labor, consumer protection, and philanthropies, and someone called me afterward and said, i think this is the most sweeping thing any government has ever done to advance civil rights, civil liberties, and equity in the design and use of technology. they were just blown away and in awe at the moment. compare that with the internet, where we have been playing catch-up as it developed. what you see from the administration today is an effort to be on the ground, thinking critically about how we design, use, and regulate this technology at its inception. so i want to talk about a.i. last october, president biden
9:52 pm
signed the landmark executive order on the safe, secure, and trustworthy development and use of artificial intelligence. it lays out an ambitious agenda: develop standards for safety and security, protect americans' privacy, promote civil rights, promote innovation and competition, advance american leadership around the globe, and much more. my guess is that the executive order has touched every person in this room, and it is designed to improve the lives of every american. president biden directed everyone across the agencies and across the e.o.p. to pull every lever. we were to do all that we could, and i think we did. when i was looking back at the speech last year, i noted there
9:53 pm
were very specific issues he pulled out as places where a.i. was undermining the public's rights and opportunities. he spoke about black americans being denied life-saving health care and people whose job applications were being rejected by a.i. i wanted to reflect back on where we were and say, when you are looking for concrete action on key issues, we have delivered. in the past year we have addressed these issues. we have developed a strategic plan that includes frameworks for the use of a.i. and a.i.-enabled technologies. it includes a focus on identifying and mitigating discrimination in the health
9:54 pm
care system that delivers essential services across the country. last year, the equal employment opportunity commission released technical assistance on the use of a.i. systems in decisions related to recruitment, hiring, promotion, and dismissal. in its enforcement plan, the eeoc noted that employers are using technology, including a.i., to recruit applicants and to assist in hiring and employment decisions, and the agency said that in order to combat employment discrimination it will be focused on this as a key enforcement priority. in addition to tackling a.i., the administration has focused more broadly on equity in all of its work, in particular requiring federal agencies to look at the
9:55 pm
equity implications of the tools they use in agency work, which includes bringing civil rights officers into the conversations about how we design, deploy, and use such tools. the biden-harris administration isn't just telling other people what to do; it is leading by example. we are establishing policies and practices to guide the responsible development and cultivate the best use of a.i. across the federal government, and investing in the research, infrastructure, and talent necessary to do this work. in november, i am sure many of you saw that the office of management and budget released a draft policy on government use of a.i. this guidance would establish governance structures in federal agencies, advance a.i. innovation, increase transparency, protect federal workers, and manage the
9:56 pm
risks from a.i. the draft policy emphasizes the importance of accountability, reporting, and transparency mechanisms through the a.i. use case inventories, and for uses that impact the public's rights and safety it requires impact assessments, stakeholder consultation and feedback, real-world performance testing, not just in labs, independent evaluation, and ongoing monitoring to make sure systems don't drift off in ways that perpetuate bias. this is groundbreaking policy. it would ensure that government uses of a.i., from facial recognition technology to distributing critical and life-saving benefits to the public, are designed to protect the public's rights and
9:57 pm
advance equity and fairness. we are putting our money where our mouth is. this guidance and the executive order build on the blueprint for an a.i. bill of rights. when technology is changing, it is important that we are clear about our principles and values and that we take really serious and swift steps to put them into practice. the a.i. executive order and the omb memo serve as guiding lights as agencies take action to use a.i. responsibly for the benefit of the public. none of this would be possible without an a.i. talent surge. we are bringing in talent that can leverage a.i. for public purposes, that can aid our enforcement agencies in oversight and enforcement, and that can do the risk management work that
9:58 pm
many companies are struggling to do. we need to do that also. we need the data scientists, the machine learning experts, and the privacy officials to do this work and to be a model for other countries and for the private sector as you work to do it, too. we are not just posting new job ads but changing the way we bring people in. the office of personnel management is making it easier to hire data scientists. we are leveraging programs that were stood up during the obama administration, like the u.s. digital service, and leveraging the new career opportunities that the biden administration created with the u.s. digital corps. many federal agencies have
9:59 pm
designated officials to advise agency leadership on ai and coordinate ai activity. it is really important to understand that technical leadership is not just necessary at the implementation stage; it is necessary when thinking about how we are going to engage technology and how we are going to bring it into operations. so this ai talent surge is designed for both the strategic vision and the hard implementation work. second, i want to talk about facial recognition technology. what i just mentioned is particularly important when it comes to rights-impacting ai systems, like facial recognition, which is used by the u.s. government across a wide range of agencies. we've heard from many stakeholders who have expressed concerns with government and
10:00 pm
private sector use of facial recognition technology, and we have seen this prompt action from our federal partners. in may 2022, the president signed an executive order directing the federal government to discuss how data can inform equitable and effective policing, as well as to identify privacy, civil rights, and civil liberties concerns and recommend best practices related to the use of technology, including facial recognition and predictive algorithms. my team and i are working with colleagues at the department of homeland security and the department of justice on this important work. to support this work, a study was conducted on facial recognition technology, in particular its use by law enforcement agencies. recently, the report cited the
10:01 pm
civil liberty concerns raised not just by organizations that use the technology but by those that develop it. we continue to work with our colleagues at the department of justice and homeland security to identify best practices and to recommend guidelines not just for federal but for state, tribal, local, and territorial law enforcement agencies, as well as for technology vendors themselves.
10:02 pm
and we are going to continue to work together not only on this important effort around facial recognition technology and predictive algorithms but also more broadly, because the ai executive order tasked the department of justice with reviewing and examining the ways in which ai is functioning across the criminal justice system at large. finally, i wanted to highlight the important role of privacy, privacy professionals, and privacy tools in responsible ai work, and the way the administration is trying to engage and uplift that. the community assembled here today knows the importance of protecting privacy. you also understand the vast implications, for human rights and for democracy, of the use of personal data and of technologies that increasingly automate surveillance and decision-making. some of you are no doubt among the vanguard of privacy professionals, and privacy must
10:03 pm
now become a more interdisciplinary field that understands the importance of shaping and providing technical tools. the work of the privacy community both in and outside government has never been more vital to our future. it is incredibly important that the tools we use are updated to make sure they can fully address the risks and challenges posed by technologies such as ai. the executive order on artificial intelligence recognizes that strengthening americans' protections for privacy is a top priority and directs the agencies to take meaningful action. as part of that action, the a.i. executive order directs omb to solicit input on how privacy impact assessments may be made more effective at mitigating privacy harms, including those that are further
10:04 pm
exacerbated by ai. the public comments are due by april 1, and i hope we find many of your names among the submissions. this public input will inform omb as it considers what revisions to its guidance may be necessary to ensure privacy impact assessments remain an important tool for assessing and mitigating potential harms, and do so in a way that is transparent to the public. in conclusion, our nation has immense aspirations: to achieve robust health and ample opportunity for each person in every community; to overcome a climate crisis by reimagining our infrastructure, restoring our relationship with nature, and securing environmental justice; to build a competitive economy that creates good-paying jobs; and to foster a strong, resilient,
10:05 pm
and thriving democracy. technology, particularly ai, plays a vital role in achieving each of these goals. technology can help achieve these aspirations. we can meet the climate crisis and strengthen our economy, foster global peace and stability, achieve robust health for all, and open up opportunities for every individual. all of us in the biden-harris administration are committed to this deeply wonky and exceedingly important work that is required to build for the future, and we look forward to your continued engagement and partnership. thank you.
10:06 pm
>> as i mentioned in my opening, we have a new set of programming this year. we have lightning talks, which is a totally different concept from before, and we have overlapping sessions. lightning talks start at 10:30 in the north hub. my board chair will introduce the next keynote. >> first, i want to thank you all for your patience. i know the room is getting warm, and it is feeling small because there are so many of you. thank you for your responses. to get a bigger room, we will need to remind the sponsors how much we appreciate their generosity and possibly look into a bigger room for next year. there will be lightning talks
10:07 pm
and other events that will get you out of this room, but we have one more speaker this morning, fcc commissioner brendan carr. please join me in welcoming a true champion of american connectivity. brendan carr is the senior republican on the fcc, where he has had a long career in telecommunications. through his early efforts, he accelerated the deployment of lightning-fast nationwide networks. but his vision extends beyond connectivity. recognizing the need for a skilled workforce, he launched landmark programs with technical colleges and apprenticeships, creating a direct pathway to careers in the burgeoning 5g sector. with his series of 5g-ready hardhat presentations, he honors the vital work of the tower climbers. he has even
10:08 pm
climbed a tower himself with the workers, as a show of respect for the work that they do, as well as for the construction crews that form the backbone of our networks. during covid, commissioner carr highlighted the importance of telehealth and is a fierce advocate for telehealth partnerships. he has done groundbreaking work on the connected care pilot program, which he has helped oversee, bringing high-quality health care to underprivileged communities. commissioner carr has also shown a strong commitment to bridging the digital divide, with his willingness to leave the bureaucracy behind in washington and travel the country, meeting workers on the ground and the small operators that are part of our nation's backbone networks. it sets him apart, ensuring that all voices across america are heard, especially the rural carriers connecting large distances and shaping the future of our communication landscape.
10:09 pm
with his vast experience in both the public and private sectors, commissioner carr brings a nuanced and balanced approach to issues as complex as the broadband expansion we are seeing today and, as mentioned earlier today, open radio access networks. i am sure commissioner carr will offer valuable insights into the work that he is doing here at state of the net. please join me in welcoming commissioner brendan carr. [applause] mr. carr: thank you so much, chair shane, for that great introduction. so good to see so many of you here today. it has been five years since i was actually at state of the net. there is a reason for the five-year gap. the last time i was here, it was during a government shutdown.
10:10 pm
during the shutdown, i decided not to trim my beard at all. i came here and spoke. there is actually a picture. it is one of the most terrible and terrifying-looking beards i have ever had. the interesting thing is that it then became the main photo used for any story about me or any story about the fcc. i never was that into the european right-to-be-forgotten law until i was seeing that photo of me everywhere. this year, i trimmed the beard, and hopefully there will not be any more photos that i have to try to get scrubbed from the internet. but it is wonderful to be here. happy to start out with the state of the net. i think the state of the net at the isp layer is actually really good, particularly if you go back to 2017, when i voted with my
10:11 pm
fcc colleagues to reverse the decision to apply heavy-handed title ii regulations to the net. there were a lot of predictions about the doom and gloom that would befall the internet when we reached that decision. i remember the viral campaign that was launched right around then. in fact, you know, all these predictions about the end of the internet. i remember clearly one personal story: i got a facebook dm from a high school ex-girlfriend, and she told me that by reversing title ii, i was about to make the second biggest mistake of my life. to be clear, i broke up with her, and neither one of those was a bad decision. they were both good decisions. the data actually points this out, at least on the internet side. i will focus there.
10:12 pm
speeds on the fixed internet side are up 3.5-fold. on the mobile wireless side, we have seen speeds increase over sixfold. the percentage of americans with access has increased by 30% since our title ii decision. we have also seen entirely new forms of competition emerge and take hold, whether it is this latest generation of low earth orbit satellites or even the nascent move toward direct-to-cell technologies. a really important emerging trend, in fact, if you look at it, is that traditional mobile wireless players are now taking some of the largest new market share when it comes to in-home broadband. the opposite is also true.
10:13 pm
you see cable now getting into the mobile wireless game more so than ever before, taking traditional mobile wireless customers. prices are down since that decision. price per megabit has also fallen substantially. you can compare that to other, more heavily regulated utilities like water, power, and gas, where prices have continued to increase. at the end of the day, i think covid was the ultimate stress test of global telecom policies. you can compare the environment we had in the u.s. and the results we had during covid with the results in europe, which historically has had a really different approach to regulation of networks. what we saw during covid was regulators in europe calling up some of the largest edge providers, netflix and others, and asking them to degrade the
10:14 pm
quality of their signal, because they were worried that the networks would break given the spike in internet traffic we were seeing. today, the data is equally clear. u.s. networks are faster than those of every single country in europe. u.s. networks are also more competitive than those in europe, with the u.s. having nearly a twofold, or 40 percentage point, lead when it comes to households with two or more wired facilities providers. networks are benefiting here too, investing threefold as much per household as their european counterparts. when you look at the last couple of years at the isp level of the internet ecosystem in this country, there's a lot of good to see: increasing investment, increasing speeds, the digital divide narrowing.
10:15 pm
there is always more that we can and should be doing to promote even more robust competition, and i look forward to taking some of those actions, but i also want to flag a couple of storm clouds on the horizon. one is spectrum. i think the biden administration has fallen into a bit of a malaise when it comes to spectrum policy in this country. getting spectrum out there, freeing it up for consumers to use, is not only good for efforts to get people connected and for bridging the digital divide; it is also good for u.s. geopolitical leadership. when we free up spectrum, the eyes of the world turn to the u.s. capital flows here. we get a bigger say in standards-setting bodies. we get to make sure that the services built out on these spectrum bands are ones that are going to work for our
10:16 pm
interests. yet we are moving in the wrong direction. in fact, the fcc has clawed back spectrum that we had made available previously. that same year, and again in 2022, the biden administration delayed the rollout of 5g services on c-band spectrum. in 2023, the biden administration announced that it is not going to meet a statutory deadline for identifying spectrum that is required for the fcc to auction just this year. on top of all of that, the biden administration recently put out a national spectrum strategy.
10:17 pm
that strategy fails to commit to freeing up even a single megahertz of spectrum. this marks a real 180 from the progress we had been making. from 2017 through 2020, at the fcc, we freed up 6,000 megahertz of spectrum for licensed use and additional megahertz for unlicensed use cases. compare that to the national spectrum strategy, under which the biden administration only plans to study 2,800 megahertz, a fraction of that. this is where we need to turn things around. we are falling behind. right now, the u.s. ranks 13th out of 15 leading markets when it comes to the availability of licensed mid-band spectrum. on average, we trail other countries by about 400 megahertz of spectrum. we trail china in particular by about 710 megahertz of spectrum.
10:18 pm
so spectrum, and the lack of it, is one of the storm clouds on the horizon that we need to come together as a government to turn around. the second storm cloud i want to talk about has to do with the current rollout of the b.e.a.d. funding initiative. the program is run out of the commerce department, and the idea is to make sure that every american has access to high-speed internet service. but we are already starting to see some issues. congress made an important decision to proceed on a technology-neutral basis, and doing so is vital to achieving the goal of connecting every american. but there have been policy calls that have needlessly raised the cost of connecting americans.
10:19 pm
you can see it in the heavy thumb they have put on the scale for fiber and in some other decisions that have raised costs, including union preferences and preferences for government-run networks. while we have clearly set the goal and allocated the funding to get the job done of connecting all americans, these policy calls are holding us back. you are starting to see evidence of that. a number of states have raised their hands and indicated that the amount of money they are going to get from b.e.a.d. is not going to be enough to connect all unserved locations in their states. this includes places like california, new mexico, and minnesota. this is concerning, because with the right policy calls in place, b.e.a.d. was poised to get the job done. that is concerning to me.
10:20 pm
but there is still time to correct course. the commerce department is looking at some of these state applications for using the dollars. we need to move in a tech-neutral way, and that includes leaving room for fixed wireless technology to get the job done. fixed wireless can connect americans for pennies on the dollar where fiber could be six times the cost, and it can do so on a more compressed timeline. we still need to fund a tremendous amount of fiber builds, but we need to leave room for fixed wireless, or we will end up in a situation where we run out of money before we get the job done. that would be a failure. more broadly, for the last piece of my remarks today, i want to share more about my regulatory philosophy in
10:21 pm
general. one thing i have seen in this job and in tech policy over the years is that people tend to move quickly into polar-opposite corners. on the one hand, you have a group of people who will say any amount of regulation at all is too much. on the other hand, you have a set who have never found a regulation that they do not love. people reflexively go into those corners. i have tried to take a different approach. i am guided by a number of overarching policy considerations: looking for market power, looking for abuse of market power, looking for national security issues, looking for disability access issues. these are some of the points that help guide me when we ultimately make the decisions, charged to us by congress, that are in the public interest. i have agreed with my colleagues on the fcc that we should go
10:22 pm
beyond, for instance, the voluntary wireless resiliency efforts put in place by the wheeler fcc and impose new rules to promote wireless resiliency. i have agreed with my colleagues in the majority that we should adopt new rules to bring more competition to apartment buildings, where cable providers, and other providers in some circumstances, have been locking up markets in an anticompetitive way. i have also pushed back when i think that my colleagues have gone too far with regulations. you can see that in my dissent on the digital equity rules and in my dissent on title ii regulation as well. i want to talk about that through the lens of two issues. one is the censorship we have seen taking place by social media companies.
10:23 pm
this is an area where i agree with tim wu and others. social media companies have articulated the view that their decisions to discriminate against or censor americans when they participate in the modern-day town square are beyond the reach of the government due to the first amendment. i recently joined with my colleague commissioner simington in writing an article that explained how the government can, entirely consistent with the first amendment, put guardrails in place to address the censorship taking place by big tech. one last issue that i think has pretty serious anticompetitive implications. i started these remarks talking about the competition that we are seeing in the areas directly regulated by the fcc, the mobile wireless space,
10:24 pm
and telecommunications more generally. but increasingly, all of that competition is coming down to a single choke point. apple, with its control over the operating system and software, coupled with its control over hardware, is creating a walled garden that is having anticompetitive effects on the telecom space. it is an area where antitrust authorities are looking, but also an area where the fcc has a role to play as well. one feature of that walled garden apple has is blue message, green message. what happens there is, when you are on an iphone, they are effectively degrading the text messaging service: whether it is sharing a photo between an
10:25 pm
imessage user and an android user, where the photo quality is degraded, or the green bubbles that show up with low contrast, which makes it difficult for people with low vision or difficulty seeing to pick up those messages. that is one feature of the broader problem that is happening with this walled garden approach. but it also highlights a role for the fcc. how many of you are familiar with the issue of beeper mini? that's it? a couple. beeper mini was a technology that rolled out in december of last year. it came up with a solution to the green bubble/blue bubble walled garden. it enabled people on android
10:26 pm
devices, samsung users, to communicate with imessage users in a blue-bubble fashion: no low contrast, no degrading of photos or video. in that way, among other things, in my view, it promoted accessibility and usability for people with disabilities, in addition to introducing more competition, which is good for everybody. i think the fcc should investigate apple's conduct with respect to beeper mini, because apple subsequently made changes to imessage to disable the functionality of beeper mini. the fcc should investigate apple's conduct to see if it complies with our part 14 rules. the part 14 rules flow from a landmark disability rights statute.
10:27 pm
those provisions address accessibility. there is a provision that says covered providers, which includes apple's electronic messaging service, shall not install network features, functions, or capabilities that impede accessibility and usability. so i think the fcc should launch an investigation into whether apple's decision to degrade the beeper mini functionality, which, again, promoted accessibility and usability, was a step that violated the fcc's rules in part 14. the fcc has analogized those rules to our core provisions on interconnection. this is one small example of the broader negative consequences that come from apple maintaining and perpetuating a walled garden approach to technology.
10:28 pm
but we are not just seeing it here; there is potential to see it in the future as well, whether with ai, as those technologies continue to roll out, or with ar or vr. i think there are potentially negative consequences if apple promotes its own technologies in a walled-off way while degrading the performance of competing technologies. some of the broader competitive effects are ones that the doj should look at, including structural remedies. but there is a role for the fcc as well, whether it is looking broadly at the negative impacts that come from apple being a chokepoint or, more specifically here, looking at our part 14 rules and whether apple's conduct ultimately violated them. with that, i will end here. hopefully i will get you guys back on schedule a little bit, but again, i think there is more
10:29 pm
we need to do to continue to encourage robust competition. but the biggest threat to openness and competition in the internet ecosystem right now is not taking place at the isp level; it is taking place at the edge. this is an important conference for bringing different people together to talk and look at these issues. thank you so much for having me. hopefully it will not be another five years before i am back. thank you so much. [applause] >> thank you, commissioner carr. do not worry, your beard looks great. that was insightful. >> we are members of the congressional app challenge alumni advisory board. >> it is the official coding competition for students. >> we both competed in and won the
10:30 pm
challenge and joined the board as college students. now we volunteer to host panels, events, and initiatives, including the annual house of code event in the capitol. >> we also host a podcast called de-bug. if you are interested in learning more or working with us, we have a booth in the hallway. please come talk to us. >> our first breakout sessions are currently ongoing. in the north hub, we have lightning talks. in the south hub, i will join you for a conversation on intelligence threats. >> in the central hub, we have the ai at work and navigating the digital playground sessions. >> thank you and do not forget to stop by the congressional app challenge booth. >> the next breakout session in
10:31 pm
this room will be the youth online safety panel, but there are sessions going on in the central hub and the south hub, and lightning talks in the north hub. if you are here for the online safety panel, you are in the right place.
10:32 pm
10:33 pm
>> good morning. my name is naomi nix. i'm a reporter for the washington post where i cover social media. and i focus a lot on how social media impacts our world. and lately, that's meant covering this larger conversation that we've been having among parents, among activists, among regulators about whether our youth are really safe online, and how tech companies can bolster their safeguards to protect our kids.
10:34 pm
we've had the u.s. surgeon general warn that, you know, our teenagers are in a mental health crisis, and that the time they spend on social media might be partially to blame. dozens of state attorneys general are suing meta alleging the company is compromising their mental health. and just last month, we had the senate judiciary committee grill some of our biggest tech ceos about how they're protecting kids online. it's clear that there might be a problem here. i think it's less clear what the right solution is. there's a lot of disagreement about that. i expect we'll hear some disagreement about that this morning. but here to sort of dive into some of those tricky questions about how we protect youth while preserving freedom of speech and privacy, we have an esteemed panel of guests. i'll briefly introduce them, and then i'll ask them to introduce themselves at length. so we have john carr. he is the secretary of the uk children's charities' coalition on internet safety.
10:35 pm
we have hayley hinkle. she's a policy counsel for fairplay. we have maureen flatly, she is a stop child predators advisor. and then we have daniel castro. he's the vice president of the information technology and innovation foundation. so why don't we just start, have you introduce yourselves briefly and talk a little bit about what your organization does. john, you want to start? john: ok, so i'm a brit, as you might have gathered from my accent. and i'm very glad to be here and see many familiar faces, particularly my old friend rick lane over there. i was very distressed, however, because the 49ers recently acquired and became the owners of my football team in england, leeds united. so i'm beginning to worry about the knock-on effects of last night. so if anybody's got any tips or clues, please see me at the break and let me know.
10:36 pm
so i work as an adviser to the british government. i've advised the council of europe and various bits of the united nations, but above all children's organizations in the uk, with a particular focus on digital technologies. >> morning all, my name is hayley. i am policy counsel at fairplay. we are a children's media and marketing watchdog focused on the ways in which big tech's business model and the excessive screen time it encourages impact kids' healthy development. maureen: i am maureen flatly. i have spent the last almost 40 years engaged in oversight and government reform of issues related to children across a range of systems. i always tell people that one of the most formative experiences of my life was being the daughter of an fbi agent who spent most of his career detailed to the senate racketeering committee, where he developed testimony against the cosa nostra with joe valachi. and that proved to me with my
10:37 pm
own eyes for most of my childhood that congress can solve big problems involving criminals. and i think at the end of the day, my point of view on this issue is that this is not fundamentally a technology problem. this is fundamentally a crime problem. and by ignoring that for far too many years, we have now created a problem that seems almost insurmountable. but i hope today that we talk a little bit about some concrete solutions. >> thanks. and i'm daniel castro, vice president of itif, the information technology and innovation foundation, which is a nonprofit think tank focused on innovation. i've been doing a lot of work generally on tech policy and the internet economy and how it works, as well as the metaverse and children's safety online, and then these kinds of emerging platforms. i agree 100% with maureen on this point that this isn't a technology issue. and we tend to wrap these things up and intertwine them when they're not connected. so hopefully we'll get into that today.
10:38 pm
>> i'm sure we will. i want to start with the hearing. you know, i've been covering tech for a long time in washington, and that was probably one of the most emotional hearings i've ever covered, in part because we did have families show up and talk about the experiences of young people who had lost their lives to suicide or found drugs on online platforms. and yet, even though it was very heightened, there was also a lot of, like, blame shifting, right? we had lawmakers accuse the tech companies of having blood on their hands for not doing more and then also sort of criticize themselves for not passing legislation. even in that question-and-answer session with senator josh hawley, mark zuckerberg said the company had a lot of parental tools, implying that, like, parents could play a role in protecting their kids. amid sort of all of that, all of the various proposals and all of the various diagnoses that happened in that hearing, just
10:39 pm
to sort of start us off in broad strokes, i'm wondering if each of you can talk a little bit more about what you see as sort of the top risk we're facing right now when it comes to youth safety online? and what would be your sort of first, biggest step that you think should happen right now? and, daniel, you want to start us off? >> sure. i think, you know, watching that hearing, you can't come away with anything other than thinking it's political theater, in probably the worst possible way. because, i mean, we're seeing parents there with real issues that impact children. suicide, self-harm, eating disorders, i mean, so much that affects so many people. and the solutions to those problems are not going to be simple ones. these are very complex issues, and technology is not at the top of that list, right? we have bullying problems, we have problems of addiction in
10:40 pm
communities with drugs and overdose and all these issues. when you look at that, you realize that the purpose of that hearing is not to advance solutions that will actually help children. in the end, it's to advance legislation that's intended to regulate big tech. and it's using children, and children who were suffering, to advance that narrative and to advance legislation. and we've seen this successful playbook in the past, which is why it's being used again today. we have seen it with fosta. it was the same issue where nobody wanted to oppose it, because, you know, who was going to be on the other side of that issue? i think that is what we are seeing with children's online safety. there are legitimate issues, and there are issues that we can address. but if we think that the reason the addictions and self-harm, the reason these things are happening, is because of
10:41 pm
technology platforms, that is not the reason, and it's distracting from real solutions. >> maureen, do you have anything to add, just in broad strokes, about what you see as sort of the most urgent priority right now? maureen: yeah, for sure. i've been to hundreds of congressional hearings in my career. and i have to say that i was really appalled at the framing of the hearing last week, which i did attend. it was as if, you know, the panel was blaming the tech industry for everything from global warming to kidnapping the lindbergh baby. i mean, it was just so over the top. it was not an exercise designed to come up with solutions. it was a show trial. there wasn't a panel that had any kind of affirmative input at all. and, you know, i was thinking about this last night. and i thought, you know, if i
10:42 pm
had been mark zuckerberg, what would i have said to that panel? i actually wrote it down. this is what i would have said. congress has had 16 years to implement the protect act. it was a virtually perfect first step to building an infrastructure around law enforcement to mitigate all of the things that we're seeing today. yet a recent gao report, which if you haven't read it, i recommend highly, issued a scathing indictment of doj for failing to implement this really important bill. congress hasn't held any really meaningful oversight hearings, doj hasn't issued any real reports. i mean, child exploitation isn't even on their list of priorities. and as someone who grew up in a family with a father who worked for doj, i've always had the highest respect for that agency. but i have to tell you that in a universe of institutional
10:43 pm
players who are responsible for what we see before us today, the justice department is at the top of the list, at least as far as i'm concerned, closely followed by congress, which has failed to do its job, which is to authorize, appropriate, and oversee spending. the tech companies are mandated reporters, just like teachers and pediatricians, pastors and daycare providers. all mandated reporters in other contexts are specifically shielded from civil liability, because if they weren't, we would never get any meaningful reporting. but if you look at the list of cyber tips, virtually all of them come from tech companies. the number one reporter of cyber tips is meta. and meanwhile, no one even bothered to ask the founder of meta what would be helpful to them to prevent the proliferation of this activity on their platform. i don't know how many people live in washington. but remember when the columbia
10:44 pm
heights cvs was swarmed by shoplifters, so badly that they are now closing the store? no one blamed cvs for the shoplifters. they called the police. in this instance, i'm waiting for somebody to say, hey, tech companies are private companies. they can't arrest or prosecute anybody. they file the cyber tips dutifully every year, year in and year out, and yet they can't provide the public safety response to them that is needed. and quite frankly, when you look at what is really going on here, and what happens to the tips, they're geolocated and referred to countries all around the world. and, again, if we're looking to
10:45 pm
mitigate the problem, we have to look at the underlying crime problems. no tech company is going to be able to combat a sextortion ring like, for instance, the yahoo boys, a nigerian-based gang that is operating in probably 25 different countries. when i listened to those parents, in their anguish, and believe me, i have worked with hundreds and hundreds of victims. when i discovered in 2006 that the civil penalty to download csam was one third the penalty to download a song, i got john kerry to write the bill that fixed that in six months, tripling the civil penalty. so it's not that i'm unsympathetic. but i'm saying that blaming the tech companies for a global crime problem is not a path to success here. it just isn't. >> law enforcement could play a bigger role? maureen: if i were mark zuckerberg, i would have been asking a lot of questions. and one last observation as long as we're on this subject. when i look at the plaintiffs that are suing meta, most of them have been sued themselves for poor child welfare outcomes, several of them horrendous child welfare outcomes.
10:46 pm
so any suggestion that these individual state claims against meta are somehow on the moral high ground with respect to outcomes for children, don't make me laugh. at the end of the day, we really need to sort of refocus this conversation, look at what's really going on here, and work with the tech companies, because anger is not a strategy, conflict is not a strategy. whatever's going on right now is not working and certainly not helping kids. >> great. i would say my framing of the problem, and fairplay's and our fellow advocates', is that the incentives are such that these platforms are using kids' data and designing user interfaces in a way that's meant to extend their time online, expose them to advertising, and give the platforms access to more data
10:47 pm
in order to better refine features that extend use and targeted advertising. and so i think a lot of our work in this space has been very focused on a sort of two-pronged approach. one is around data privacy and the other around safety by design. i think this year, we've started to see some push and pull in the debate, sort of implying that advocates think that solving these big tech issues will directly solve all the problems around teen mental health. i think that that's certainly not the case, certainly not a view that's shared by my organization or many of the folks that we advocate alongside. i think that what we are seeing is parents and youth advocates and organizations that are on the frontlines of these issues seeing that families really need tools and help. and we were at the hearing last month with quite a few parents who have unfortunately lost
10:48 pm
their kids to a range of horrifying online-related harms. these are folks that had all kinds of parental settings in place and have a lot of conversations with their kids. but i think that for me, one of the big takeaways from the hearing was that there's an understanding on both sides of the aisle on capitol hill that parents need help, that these companies have failed to self-regulate. i mean, part of the reason mark zuckerberg was getting so many questions is because of the information that we've learned through the lawsuits that many, many, many states now have brought against instagram, some of the things that have been unsealed, what is revealed in their internal research about the way its product impacts kids and teens. and so those are the things that are really sort of top of mind for us as we carry this advocacy forward.
10:49 pm
we're not claiming that regulating these issues will directly solve all of teens' problems. the fact is that there are features and functions that exacerbate existing issues for kids as they're developing. we are talking about young kids whose prefrontal cortex is very much not developed, who are very vulnerable to what we know are scientific techniques to influence their behavior. and so, all that said, the hearing only means anything if we actually see action from congress. we've had many, many, many hearings, two very notable whistleblowers, frances haugen and arturo bejar. our message has been and will continue to be that we need to see action now. we've done enough talking. >> okay, so there's nothing new under the sun. there's always been misinformation, there have been
10:50 pm
children who've had drug problems, or they've been bullied at school, or they've been bullies. all kinds of things have been going wrong in society for centuries. but what's distinctive about the period that we're living in is the way in which digital technologies have put various of these problems on steroids. and therefore, in my mind, there is no question that the technology companies, being the masters, mistresses, bosses, whatever, of that technology, have a uniquely important role to play. the idea that the cops are going to sit on the networks monitoring stuff or watching stuff, i mean, it's a very scary notion, and it's not a practical one. so the companies have to step up and take an important role. i can't think of a single area of law enforcement in my country, and i'm gathering it's the same here, that is adequately staffed to deal with any area of crime.
10:51 pm
they are always under-resourced. that doesn't give the criminals a pass. it doesn't give other agencies permission to ignore what they can do to help minimize or reduce crime. i'll start with a short story. so in the uk, your kids go from little school to big school at around age 11. so when our kids went, like most parents, we opened their first bank accounts to help teach them how to manage money. we failed, which is why i'm a poor man. but nevertheless, routinely, as part of being given their bank accounts at age 11, they were given debit cards. so around 2003, 2004, we began to hear of cases reported to children's organizations, not my kids, i hasten to add, of children, typically boys 14 or 15 years of age, being clinically diagnosed as gambling addicts. and just so we're clear about
10:52 pm
that, what they were doing: they were getting their pocket money or their part-time earnings, putting it into their bank account, and blowing it on a horse or a football match or whatever it might be. our law is quite clear. you have to be at least 18 years old before you can gamble on anything. online gambling has been possible basically since the dawn of the internet in the uk. i know that the situation has been different here. in the uk, i went to see all the big gambling companies' offices in the city of london. very expensive suits, wonderful boardrooms, all of that stuff. i said, you've got kids being diagnosed as gambling addicts coming onto your sites. and they said, yeah, we take this problem very seriously. if i had a pound for every time i'd heard a tech executive say we take this problem very seriously, i'd probably be in the bahamas now. but anyway, they all said the same thing. publicly, they said we take it very seriously. we are working on solutions. they did nothing. privately, what they said was,
10:53 pm
of course, there is friction that we could introduce on our platforms to try and protect kids or slow them down. but if we do it first, all of our competitors will basically steal our business. so none of the gambling companies did anything until we changed the law. and we changed the law in the 2005 gambling act, the relevant parts of which became operative on the first of september 2007. under that law, the gambling commission will not give you a license to operate a gambling website in the united kingdom unless you can show you've got a robust age verification system in place. since the first of september 2007, i'm not aware of a single case, not one, where a child has gone on to a website, ticked to say that they were 18, and then gone and blown their pocket money or gambled in any way at all. i'm not saying it hasn't happened. they may have impersonated their parents, they may have borrowed a parent's credit card. that's a different set of issues. but in relation to the gambling companies and the available technology they had at their command, they had the means of
10:54 pm
doing that before the law changed. they didn't do it until the law compelled them to. and now we're going to do the same with pornography sites. but i might come on to that later if you want me to. >> why don't we just dive into that topic now? we're in a moment where there's a lot of state bills that are pushing this idea of age verification, having youth get their parents' permission to use social media, and sort of enhanced parental controls. i think, you know, we heard even in the hearing just last month, i think it was snap's evan spiegel who said, you know, just 400,000 of our teenagers have enabled parental supervision. that's like 2% of their platform. my own reporting shows that meta's safety experts have for years been concerned about this idea that, like, relying on parents to police their own
10:55 pm
kids' online activity might not be the most effective strategy. and yet it does seem like one that lawmakers and even parent advocates can embrace despite some of the tricky technological issues. i'm wondering if you can all sort of reflect on what is the best way to verify kids' ages these days? and can we even rely on parents to police their own kids' activities? >> i think our new legislation, the online safety act 2023, became law in october last year, but it's not yet fully implemented, because the regulations are being drawn up. they have to go back to the government and then to parliament before they're operative. so we won't actually see any concrete action until probably towards the end of this year. parental education, digital literacy, all of these things are a crucial part of the total picture. but they are not an alternative to expecting and demanding that the tech companies do their best with the technology that they've
10:56 pm
got available and which they understand better than anybody else. in the united kingdom, under an agreement which i helped to negotiate in 2004, if you get a device which depends upon a sim card, so a mobile device, typically a telephone, smartphone, whatever, then by default it will be assumed that you are a child. you will not be able to access pornography, gambling, alcohol, anything at all through that device from the word go, unless and until you've been through an age verification process and it's established that the normal everyday owner of that device is an adult. this is typical of what happens with most consumer products. you deliver the consumer product in the safest possible way, at the point of first use. of course, if i wanted to kill myself with an electric fire, i could probably buy a very, very long flex, plug it in at one
10:57 pm
end, and then run into a swimming pool with the electric fire, you know, clasped to my chest. there are certain ways in which, you know, if you're determined to mess things up, you will mess things up. my point is, at the point of first use, the tech companies should be under an obligation, where children are likely to be most engaged with the product or the service, to ensure that it's as safe as it possibly can be. parents, of course, can liberalize the settings if they want to, but at the point of first use, it should be in its safest possible condition. >> yeah, sure. okay. i very much agree with a lot of what john just said. i think that platforms have been very enthusiastic about legislation around parental permission to access. the parents we talked to are very clear that that doesn't really solve many of the problems, because the choice a parent faces then is do i socially isolate my kid and deny that permission? or do i say okay, and then they
10:58 pm
have access to a platform that just has all the same problems, because all we've done is pass a permission-to-access regulation. and i'll also very much agree with the sentiment that the strongest default settings and protections need to be in place when a child accesses the platform. i think that that is an important piece of this conversation as we think about what the state parental permission bills mean. >> if i can add on, i mean, i think the problem we have with age verification in the united states is basically that right now we are in a world where we assume everyone is an adult. and a lot of the proposals on the table are to assume everyone's a child unless you prove that you're an adult. and for many adults, you know, there are privacy concerns with that. and for many children, there are privacy concerns with that. and i think when we are talking about some of these state laws like utah's, right, this is a state that, you know, has said that they think some of these apps
10:59 pm
are so unsafe, they can't be on government devices, yet they want to have a law that says, you know, every parent has to now upload a copy of their id and give a copy of their personal id to the same apps. there is a serious conflict there, and i think, you know, we need to have alternative options. so for example, one of the options that itif has come up with, and we're looking for others like this, are ways that empower parents with something like a trusted child flag that you can attach to a device, so you can put it in a child mode. and then once you give it to them, every app, every site visited after that would have to respect that child flag if it's an adult-oriented site. something like that gets around having to verify ids, having to verify age. it's no longer about this, are you 18, 16, or whatever the threshold is. the problem with that is it kind
11:00 pm
of substitutes government oversight for parental oversight, and not all children are the same. some 16-year-olds probably cannot handle things 18-year-olds can, and vice versa. there's such a wide range. creating a trusted child flag is basically saying, can we take the ecosystem we already have of some of these controls that you mentioned that aren't being well used, and think about how we can actually make them so that they are well used? one of the reasons parents don't use the child safety features right now is because there are so many of them. you have to figure out the one for this social network, and then another social network, and then this device and another device. there's no interoperability between any of that. so our point is, why don't we work on actually making this all work together? so that you can give one child one device, and you can set screen time that applies across a chromebook, an ipad, and a windows device. you can do much more
11:01 pm
with that than trying to say we're going to ban certain types of features on certain types of social media sites where everyone has to display their id. people have concerns with their privacy. maureen: wow. my head is going to explode. i agree, john, that we should be providing the safest possible environment for kids. i've spent my whole life working toward that goal. but i think that when we talk about the public safety aspects of this problem, which we do not talk about nearly enough, we're overlooking something. like, i live in a town of 3,000 people 30 miles north of boston, and we have a wonderful little police department. we're not talking about having policemen sit on the systems, right? we're talking about breaking up global criminal enterprises that are not just terrorizing kids. they're terrorizing the companies, too.
11:02 pm
and so at some point, we skip this part of the conversation. i was reading the other night, this is from one of the pro-parent groups: the u.s. has traditionally put the onus on parents to supervise their children's online experience. however, this doesn't get to the root of the problem of how companies design platforms to maximize engagement. okay, we skipped a huge element there, which is that as crime moved from the real world into cyberspace, we have a moral obligation to tackle that, and we're not doing it. why are we even bothering to collect the cyber tips, if something like 3,000 out of every 100,000 cyber tips are even examined? i mean, the notion that we're doing enough to protect kids in this conversation is preposterous.
11:03 pm
and quite frankly, by shifting the blame directly to the tech companies, we're missing a huge opportunity to protect kids. we talk about data privacy. i see my good friend rick is here, the newman to my seinfeld. rick and i banter about encryption all the time. i've spent a lot of time working on identity theft and children. it's a huge problem. if you think the problems of exploitation are a problem, come look at those numbers. and so i've had a lot of concerns about weakening encryption. my dad was an fbi agent. sure, we want to help law enforcement, but what is next? do we strip away the miranda warning? we've got to find a better way to do this. and encryption is the backbone of protecting not just kids, but every single consumer. we talk about children and data privacy. hello. it's a nightmare. let's talk again about some of the meta plaintiffs, because i've been looking at these states for 40 years. i started my work in california, where they were routinely issuing multiple social security numbers to kids,
11:04 pm
so they could make multiple title iv-a claims to the feds for kids in foster care. we're talking about meta monetizing kids' data without parental consent, but if you're a child that enters the foster care system in any state but maryland, and you have social security benefits, the state is taking them from you without anybody's permission, and you're leaving the foster care system destitute. so i guess i'm finding this whole conversation a little troubling, because i see a huge disconnect between the lack of outrage about those practices that are going on in the real world and a lot of the people who are now directing attention to the tech companies, because it directs attention away from them. and i think that there's a lack of consistency. i mean, we could talk about utah all day. these state rules that are being put forth, let's just use my favorite example, my least favorite democrat, gavin newsom.
11:05 pm
>> can we wait on gavin newsom for one second? maureen: sure, but we are going to get to gavin newsom. [laughter] >> there is a project, which i'm chairman of, which is trying to develop an international framework for doing age verification. so the problem that you were mentioning about having to jump between different platforms, different methods for doing it, should be solved. it was originally funded by the european union. it's now funded by the un, and i am the chairman of its advisory board. we recognize the problem that you've raised, and it should become smoother and easier. meta is part of the experiment that we're involved in, and other tech companies are as well. >> no one is really objecting to gambling sites and keeping children off those. i mean, the questions come up when we're talking about sites and services that have a broad
11:06 pm
range of users. you know, i think about how children are most likely to have amputations because of lawn mowers, right? we could have id checks for lawn mowers so that, you know, you could not push a lawn mower if you're under the age of 18. but we don't, right? we expect parents to be responsible in this space. we expect there to be a balance. that doesn't mean we sell lawn mowers that are intentionally dangerous. we do have safety standards. but i think this is where we need more of a balance and more of a respect for the fact that there are going to be, you know, multiple types of parents and standards and what we want out there. and it can't just be, you know, treat everything like a gambling site on the internet, because they're not. >> one of the laws that was getting a lot of airplay during the hearing was the kids online safety act. that would establish some reasonable measures that tech companies could take to prevent harm. but it's been controversial, right, because i think there's some who are concerned that you
11:07 pm
know, that it might empower state attorneys general to limit certain types of content for vulnerable users, like our lgbt communities. i'm wondering, and maureen, i want to start with you if you don't mind, because you brought up newsom. how do we even define what is harmful content for kids, and should the attorneys general in some of these states really be the deciders of what's actually harming our kids and what's not? maureen: god no. i think it was justice potter stewart who said he knew pornography when he saw it. so it's a very subjective matter. one of the reasons that i've become so concerned about the role that state attorneys general have played in all of this is that, a, no individual state is regulating the internet.
11:08 pm
and also, child welfare and adoption, another issue that i have done a lot of work on over the years, have been generally viewed as state law issues. but they have become so fundamentally interstate and intercountry in their activity that i don't believe that the states can adequately control or enforce them anyway. one of the reasons that i frankly pulled away from supporting kosa is that the state attorneys general are not enforcing their existing child welfare statutes, much less adding all this new stuff on that they're not experts in to begin with. so here's the gavin newsom example of why leaving it up to the states is a bad idea. he had this flashy press conference, talking about a bill that california passed that was supposed to be fantastic and solve the problem and protect children in california.
11:09 pm
and at the same time, they were letting thousands of pedophiles out of prison on early release; one guy served a whopping two days in the la county jail for a pretty gruesome crime. now, this is a crime that has arguably the highest recidivism rate of any category of criminal activity. and take, first of all, that it was a miracle that any of these guys were convicted, because one of my concerns is that the conviction rates as against the cyber tip numbers are negligible. so really, gavin, i'm not really that interested in having you or mr. bonta do anything that has to do with protecting children, because there's a fundamental hypocrisy to that kind of disconnected thinking. and as far as the states are concerned, what we're talking about is dramatically interstate activity most of the time; there is no way that they can really wrap their arms around it. so these state laws have just become this performative
11:10 pm
exercise. so at this moment, i'm not going to support any bill that doesn't focus on the criminal justice aspects of the problem, or that leaves anything up to the state attorneys general, because i just don't think that they've done a good job with kids, not just on this, but on anything. >> others have a viewpoint? >> i might add to this conversation, i think that there has been a lot of important discussion around the kids online safety act and the duty of care over the past almost two years now. you know, i think the bill in its current iteration is pretty clear that it's not meant to impact the existence of any single piece of content; it says very clearly, in a rule of construction in the duty of care, that this isn't about a child searching. i think that there's an important distinction to be made between holding platforms responsible for the mere existence of content, which is protected under section 230 of the communications decency act, and the decisions they make about what they are actually promoting into our feeds.
11:11 pm
because they are training algorithms and targeting metrics. and we know that more outrageous content gets more eyeballs and therefore ends up pushed onto feeds. so i think that's an important piece of this conversation as we continue to talk about how we can conceptualize kosa. i also think that the text in its current iteration is clear that it runs to the design and operation of the platforms. and that's really what we're trying to get at: features and functions are just simply not the same as the mere existence of content. >> can i just make a quick comeback on that point about the state substituting for parents' rights? that's absolutely not how we see it in europe. and i might remind you, the united states of america is the only country in
11:12 pm
the world that's not signed the united nations convention on the rights of the child. >> oh, i have a lot to say about that, john, and they're never going to ratify it, so it's kind of a false analogy -- >> it's not immensely relevant right now. the point is, we accept that the state has an obligation to help parents, particularly in complex areas like this. apple figures show less than 1% of apple users actually use any of the safety tools that they've put on there. it's quite clear that there is a disconnect between what we expect and hope and would like parents to do, and what's actually happening. and what we're saying with our legislation in the united kingdom is: that's over. we're not going to take fine words from tech executives about their hopes and aspirations. we're not going to let them do their own homework anymore. this is a key piece of our new legislation. there will be legal obligations on tech companies to report in detail the risk assessments that they're taking in relation to how child users of their service are actually faring in terms of
11:13 pm
that service, and the steps that they're taking to make sure that those problems are minimized or disappear. and if they tell lies, and by the way they have done in the past, somebody will go to jail, because another unique feature of our legislation is that there are criminal sanctions attached to the reporting obligations. and we'll see how it works. we have a new regulator that will be undertaking the work here, but we've had it with fine words and promises from tech executives; that period has ended. we're in a new era where the state has stepped in. and by the way, this went through with all-party support. nobody voted against this legislation when it went through the british parliament. that is extremely rare, extremely unusual. but it happened. >> so, john, you know, i emailed you about this over the weekend. you know, the tech companies are
11:14 pm
not shielded from federal criminal prosecution right now. and quite frankly, one of the reasons that i am so keen on law enforcement is that they shouldn't be marking their own homework. if there are companies out there that are conspiring with groups like the yahoo boys, for instance, let's put them in jail. there is absolutely no scenario in which existing u.s. law does not provide exactly the sanctions you are seeking. john, here's the thing. you can talk about that stuff all you want. and i agree that we want to have safe platforms. but you have skipped over the entire discussion of public safety. i'm working with a victim of sextortion right now; she's a lovely young woman. it's amazing how she's come back from a horrible experience. but really, do you think that any platform can push back on an international global ring of
11:15 pm
sextortion without some help? do you think that all of these issues that we're talking about cannot be bolstered, in fact strengthened, by having a public safety response too? okay. and then i just have to address this u.s.-versus-the-world thing. the u.s. framework of child welfare law is very different from international standards. i've worked all over the world, okay. the u.s. is never going to ratify the u.n. convention on the rights of the child, for better or worse, because our framework around children is more like property law than human rights law. that just is what it is. but at the same time, i've looked in detail at the global law enforcement response to this problem. and guess what, nobody's doing a good job. right? why? because there isn't enough investment in public safety. there isn't enough investment in collaboration. there isn't enough focus on the criminal
11:16 pm
activity that is fueling the victimization of kids. i've got eight children, okay: two daughters and six stepchildren, all of whom i raised. 17 grandchildren. those kids range in age from 56 to six. so i've seen every possible application in my own family of the evolution of technology. and would that it were so simple that a company could just wave a magic wand and create a safe space. but if we're not creating a safe space for the company to do business in, they're certainly not going to be able to create a safe space for kids. and i'm not here to defend the tech companies. but i am here to tell you that i've been doing this for a long time, and i can tell you that you cannot ignore the criminal activity that is fueling the suicide rates. and by the way, this whole discussion of mental health has a lot to do with a lot of things and not technology.
11:17 pm
and by the way, i haven't heard anybody talk about the positive things that technology does for kids right this minute. >> i do want to get there, actually. maureen: then let's get there, because it is an important part of the conversation. >> if you look at the past year, we've had these heated debates in many states around book bans and other types of content, where, you know, we are seeing state legislators substituting their view about what content is appropriate and what content is not. and even though the legislation says they're not, you know, trying to focus on any specific content, just think about the last year we saw, and what that hearing would look like if a law like kosa was on the books: every ceo would face a parade of third-party content and be asked, why did you think this was appropriate for children? did you not have a duty of care to take this down? and we would just see more of
11:18 pm
the same political theater, where it will be about the content, and it will be very subjective about what content is allowable and what content is not. and to the point about, you know, are we actually helping children at the end of this? i mean, one of the concerns i have with a lot of these bills is that they are basically taking more and more of the internet away from children. and the idea is, of course, you know, they're saying, well, we're doing this in ways to protect children. but we're also taking a lot of the value away from children. and increasingly, what's left for children on the internet isn't very useful unless you have money to pay for, you know, paid software, paid apps, paid services. so you know, if you want the future of the internet to benefit children of all backgrounds, those who can afford things and those who can't, we need an ad-supported internet that actually delivers value. now, we want to make sure we're not delivering them harmful ads, but it will be ad supported, just like much of the other public content that we had on broadcast tv was ad-supported content.
11:19 pm
>> i think you guys, both of you, actually talked a little bit about some of the promises of social media. and i think one of the things we actually heard some of the tech ceos talk about is, you know, for a lot of teenagers, their whole lives are on social media, right? they find connection, they find communities, they find people they wouldn't maybe have otherwise met. most people who use social media don't face the tragic stories that we heard about during the hearing. and i think there is starting to be an argument about, you know, is there a sense in which some of this is a bit of a moral panic? are we just sort of fretting about social media in the same way that we once fretted about video games or television or other sorts of technology?
11:20 pm
and i'm wondering if you all, you know, can reflect on that idea, and what you might tell regular parents about how concerned we actually should be about some of the risks we've discussed here today. >> i think the comparison with video games is very appropriate. i think back to columbine. right after columbine, people were looking for something to blame and something they could do. and the answer then was video games. video games were seen as the problem. and we have seen that regulating that would not have addressed all the school shootings we've seen since then. there were other problems there. i think the same thing is going on with social media, where, you know, part of the frustration i think a lot of parents have, myself included, with this debate is, it takes attention away from the real issues. you know, if we're talking about real issues that are gonna help children, more law enforcement, better safety nets for children, there's so much more that we can be doing to address children's needs. and it's not about, you know, whether we're having autoplay on videos on social media; those aren't the big issues.
11:21 pm
>> you know, i think about how, you know, kids used to go to school on horseback, and now they ride school buses that might crash. you know, there's always a generational thing that goes on here. but, you know, at the moment, the ukrainian government is looking for tens of thousands of children that were abducted by the russians, and how are they doing that? they're using ai. they're using ai to scrape the images that the russians are putting out of the kids, and they're matching them with the images they got from the parents that reported them missing, and they're geotagging their pictures. and guess what, we know where 23,000 of those kids probably are. look at the way that over a million kids have been adopted from foster care in the last 20 years that probably would have had a really hard time finding families if it were not for the internet. look at all of the educational applications for kids. during covid, kids would have been dead in the water without having some technology access to education, to mental health services. and from where i sit, looking at a massively underserved population of kids of all kinds, we look at technology as the new
11:22 pm
frontier in terms of delivering adequate mental health services in particular to all kinds of children for all kinds of reasons. but when you're talking about a moral panic, naomi, i think there's another aspect of this that really hasn't been discussed, and that is opportunism. when people started to figure out that they might be able to sue the tech companies, who have some pretty deep pockets, that seemed like a pretty attractive alternative. now, again, i look at that and go, okay, they're mandated reporters. so if you're going to start to sue them, you need to start suing your pastor, your school teacher, your school nurse, all the other mandated reporters. and in the meantime, by the way, even though they're not mandated reporters, let's go after the gun manufacturers, because they're shielded from liability, and they kill a lot more kids than facebook does. so i think that there has been this sort of masterful
11:23 pm
manipulation of the message that has departed sharply from what, you know, i would consider, and a lot of other people would consider, to be best practices in child welfare. mandated reporting has been around since 1974, and thank god it is, because it saves a lot of kids' lives. as far as the issues of suicide and other side effects of being victimized by criminals, well, i say, let's just give it a shot. let's see what happens if we go after the organized criminal enterprise on the internet. because you know what i think will happen? it will be safer for everybody. just some thoughts. >> other thoughts? >> my issue with the moral panic framing is that these harms have been realized; they're not imaginary. we have a lot of youth advocates and a lot of families saying they need help. i think there are many wonderful things for kids and teens online. and, you know, our driving sort of motivation behind this advocacy is that they should be
11:24 pm
able to learn and play and socialize free from some of these incentives that currently exist because of a lack of regulation. kids are not going on to a social media feed and just seeing what their friends have posted. they are seeing things that are being pushed to them, because a platform has decided it should be, in order for it to be profitable. i think the fact is that we talk with families week in and week out who have experienced the very worst of this and have struggled very mightily against it. this is not something imaginary. we've got the american psychological association and the u.s. surgeon general saying these features and functions will exacerbate issues for kids and overcome their own sort of developing sense of things like when to log off and go outside. and that is why taking action is so necessary. >> of course, the internet's been hugely beneficial for the vast majority of children and society as a whole.
11:25 pm
i mean, we're not here to celebrate, though; that's not what we're here to talk about. we're here to talk about the bits that are still not getting the right degree of attention. and the damage that's done to children through something like what you've just described is huge and lifelong. in the u.k., what we basically said is, we're going to compel you to do the stuff that you've been saying you want to do, because we don't think you're doing it consistently enough. we don't think you're devoting enough resources to it. so we're going to put the force of law behind it; we're going to make you honest; we're going to make you keep the promises that you've been making, with your fine words, over so many years, which so far have not produced a good enough result. and can i just say, on the child sex abuse thing, if you looked at my bibliography, the subject i've written most about and
11:26 pm
lectured most on is victims of child sexual abuse on the internet. that's a very, very serious, huge issue. and i'm not trying to minimize or reduce it in any way whatsoever. but there are a whole set of other things, too, that have very little to do with organized crime, and everything to do with the way algorithms work. maureen: oh, john. all right. now we're having some fun. listen, i started working with the boston globe spotlight team on the catholic church abuse cases in 2001. most of that abuse didn't happen on the internet. okay? again, the moral panic aspect of this sort of ignores the underlying cultural issues and the underlying criminal issues. and, you know, at some point, and god knows as a mother i feel this keenly, we have to also
11:27 pm
make a decision: where do we as parents draw the line? where do we as parents take control of our children's lives? i can tell you right now, because i have spent almost 40 years looking at it, the government makes a terrible parent. and there are right this minute 500,000 kids in foster care here to tell you that is true. so it really concerns me, as much as i am concerned about child safety, that we hand any of these overarching decisions to the government, especially when it might be elected officials whose ideological positions will change from administration to administration, and take it away from parents. and if you look at what's happened to the real world's child welfare system, what i call family policing has had some very damaging effects; that is not necessarily a world we want to build in cyberspace. >> you might have gotten in the last word, because we've run out of time.
11:28 pm
i don't know that there was any agreement here on this panel as there was in congress last month, but obviously this is an important issue we'll have to continue discussing as we find out what the right policy solutions are. thank you for sharing your perspectives and thank you for listening. [applause] >> all right. so, we are going to do a quick turn-around here. in this room, we will have the a.i. governance puzzle: assembling the pieces of policy and trust. that will be starting momentarily. in the other rooms we've got ditch the glitch: new network responsiveness approaches for a better broadband user experience, and the high court and online expression: deciphering the legal landscape. we're going to get started in a minute, so get settled and we'll get going.
11:29 pm
11:30 pm
>> hi, everyone. hello, hello. hello? hi. we're going to get started here. thank you all for joining us today for this panel about the current state of a.i. governance. [indiscernible] i'm a tech policy reporter for bloomberg government. and i'm going to have each of the panelists quickly -- [indiscernible] -- dive right in. >> awesome. i'll go ahead and go first by virtue of seating order. i'm travis hall, i'm the acting associate administrator for the office of policy analysis and development at the national telecommunications -- sorry, you asked for a short
11:31 pm
introduction; my title is very, very long. i run our policy shop for n.t.i.a. within the department. our assistant secretary spoke earlier and i'm looking forward to the panel. thank you. >> hi, everyone. i'm miranda, director of the a.i. governance lab at c.d.t., the center for democracy and technology. we are a 30-year-old civil society organization that has been doing a lot of work when it comes to a.i. i work on figuring out what comes next in the a.i. governance conversation: how do we implement these ideas that are floating around, and how do we tackle some of the emerging questions coming up around the more advanced a.i. models? >> i am the a.i. and data privacy lead for workday here in the united states. workday is an enterprise cloud software company providing enterprise solutions in both hr and financial management to 70% of the fortune 50, 10,000 organizations around the world. and we have about 65 million users under contract.
11:32 pm
>> i'm a senior fellow at the institute here in washington. i've spent 32 years doing tech policy for a variety of different academic institutions and think tanks. my real claim to fame is i've attended every single state of the net conference. tim gives me a little fake chairman sticker to put on my thing. i'm not the chairman of anything but i've been to all of them. i think the first one we had like alexander graham bell as a keynoter and a tech demonstration from marconi or something. glad to be here again. >> great. so let's start off with setting some context. almost a year ago, the a.i. frenzy really hit washington. and we've seen so much go on since then. the biden administration has taken action on a.i. congress is moving to take action on a.i. beyond the federal level, states are moving to try to regulate this technology. the rest of the world is moving to try to regulate this technology. so i guess my question to you
11:33 pm
all is: why? why are we seeing governments try to respond to a.i., and why now? >> i'll take a stab first and simply say that this is not actually a new topic; it's not a new work stream. it's a new name for it, but we have seen iterations of these conversations for over a decade and a half, right? under the obama administration, it was big data. we've been talking about algorithmic bias and discrimination for a very long time. all of these topics in terms of algorithms and the collection and use of data and the impact on people's lives, we've been engaged in at various stages over quite a long time. and one of the things that i think has captured the imagination in terms of current
11:34 pm
conversations around ai is that it takes policy conversations that are somewhat disparate, where people were having different conversations, and focuses them on a single piece of technology, a single focal point of both potential excitement but also risk and harm. and then, about a year and a half ago, right, we had a series of products that came out that really were cool and fun and exciting and accessible, that you could play with, and you could make images, and you could write lullabies to your daughter about basically any topic you could think of. and then it also really brought to the fore some of the potential risks of these technologies, in ways that, again, people in policy circles and tech circles have been talking about for a long time, and really made a little bit of a lightning strike for the public. as our assistant secretary said
11:35 pm
this morning, we actually struck that lightning a little bit, because we have been working on a project on a.i. accountability policy, looking at how to do audits and certifications and things like that, and had been doing that for about eight months; it takes a long time to clear documents through the federal government. and we put out the request for comment right when interest was piqued, and we brought in over 1,400 comments, where usually comment rounds draw 50 to 200 or so, because everyone was so keenly interested, so keenly aware. and overall i think that that public awareness, that attention from government, is in general a good thing. people are coming together, having some of these hard conversations, again, building on some of these past conversations that we've had. but i do think that that kind of convergence, many different issue areas coming together around a single piece of technology, a single moment of people being really interested and excited and concerned, really helped to
11:36 pm
draw this attention to the fore. >> i think in those early years there was a pocket of policymakers, civil society groups, and researchers who understood that the infrastructure that was being built, data and machine learning, was going to power all sorts of experiences that would affect people, but that people were not experiencing directly and could not attribute to the technology. that was one of the things that changed. there's now a language to this technology, there's an interface, that just makes it much more present. so in that way, a.i. has become a lens through which people are projecting a lot of different concerns, a lot of different policy concerns. some of which have to do with the technology itself, and some of which are macro issues for which the technology is just a fast-moving train to jump on, to try to get the energy that has kind of come up around a.i. to tackle an issue that maybe they've been asking to be tackled for quite some time, but the challenges in d.c.
11:37 pm
have meant that we haven't tackled those issues. it's an interesting moment because i think it does give us an opportunity to revisit some, you know, conversations we had about privacy, for instance. we still don't have federal data privacy legislation, but privacy might end up in some kind of a.i.-related vehicle, for better or worse. copyright, labor, civil rights, all these things that people feel like there are gaps in: ai can make those issues worse, but it could be a vehicle to try to get to creative solutions and to get the attention on those issues that people haven't felt they've gotten. >> i think the question is, why now? i think we are asking ourselves that question. because, very much like travis and miranda just pointed out, these issues have been around for quite some time. as a company, workday has been using a.i. for about a decade. so i think what we are seeing as new is a trust gap. because these issues have gone
11:38 pm
to the forefront of public debate, you know, there's been some serious concern among policymakers about how do you address those risks while also harnessing the real, significant benefits of a.i. when it's developed and deployed responsibly? we actually have some numbers now behind that trust gap: workday published some research that we had done last month, and we found that in the workplace, that trust gap is very real. about four in five employees said that their employers have not issued responsible a.i. guidelines about how it is used in the workplace. similarly, only 52% of workers said that they trust a.i. being used responsibly in the workplace. and so i think the policymakers are responding to that real gap in trust, and we as a company are here to help them address that while also trying to understand the great
11:39 pm
potential benefits of this tech. >> just to link to everything that's already been said, which i agree with: you know, travis pointed out that in some ways there's nothing new under the sun. there have been debates over these issues for a long time. there is a new gloss over everything ai, with an added layer of concern. i think that is what really makes this debate interesting, because it is the confluence of many different debates into one. and everybody is now interested in legislating on issues that we've long been debating through the prism of a.i. regulation. and not just at the federal level: states, local, and even international. so there is a multilayered kind of interest. what you hear at every single congressional hearing and all of the different sessions on a.i. around this town and in state houses is, like, we can't allow what happened to the internet to happen to a.i. what does that exactly mean? because we got a lot about the internet right, and america's become a global leader on digital technology and ecommerce
11:40 pm
thanks to a really enlightened policy vision, in my view. but now there's a reassessment of the wisdom of that, and a question about a governance moment, like, that we have for a.i. but maybe did not take as seriously for the internet. i remember, because i'm such an old dinosaur, being at the table when we were writing the early drafts of the telecom act, and the internet was not found in the telecom act. and so we had this interesting, sort of accidental, fortuitous moment where the governance of this new thing was a very light-touch, kind of very simple approach, compared to traditional analogue-age media. but now we're thinking of reversing that and going in a different direction, with a more heavy-handed, upfront, preemptive approach to algorithmic and computational governance regulation. >> you brought up a good point: the conversation today around a.i. is, we don't want to repeat the mistakes of the past when it comes to the internet and social media.
11:41 pm
but i guess my question is, is that a correct comparison to draw? because all of you just mentioned how this technology, even though it is a technology, encompasses so much more, and it touches on everything. so when you're thinking about a.i. regulation, are you actually regulating the technology, or are you kind of filling in the gaps or addressing these longtime debates that ai is bringing to the surface? >> i certainly have something to say about that. i wrote a legislative overview study looking at the proposals in front of congress. the answer is a little bit of both. there is an effort to actually try to regulate ai in individual policy silos, privacy, child safety, security, so on and so forth, and various other technocratic areas: ai in employment, medicine, whatever.
11:42 pm
those are interesting debates. but the different debate is the macro debate we are having right now, which is diverting a lot of time and attention, about regulating ai as a general-purpose technology. that is very, very different and unique, and it is sucking up a lot of the oxygen from the committee rooms as people talk about existential risks of ai and high-level macro regulation of ai. in my piece that i did, the legislative overview, i said this is why i don't think anything is going to be done by congress. we are not breaking it down into smaller components and having a concrete debate about individual policy silos or issues that have been on the table for a long time. you could get traction on a lot of those things, but now ai has come along and derailed a lot of that. we should have had the privacy bill last year and we can't get it introduced. where is the driverless car bill? it has been derailed by high-level ai governance debates, and that is a
11:43 pm
nonstarter. i don't think anything will happen because of that. >> i think the thing that people realized from past approaches to technology that were more, let us wait and see what the technology will do, it could provide benefit, is that we saw the externalities that came from that. people realized that with ai, a lot of those externalities are predictable. there are some that might be more speculative, that we don't want to tackle before we understand what the shape of those risks are. but there are a lot of impacts that technology has had, specifically on marginalized communities, where it is very clear that without a thoughtful approach, or the right incentives for the people building technology to prioritize those externalities, those communities will be left behind. the question is, where is the intervention point? in some cases there might be intervention points at the level of ai data sets or documentation, where there is some set of recommendations,
11:44 pm
rules, or guidance that play a role no matter what or where the ai is being integrated. in other cases the correct intervention point might be more at the implementation layer. then you get back into the sort of vertical areas and the approach that adam was mentioning. without the foundational interventions, maybe you don't have enough information in the context of deployment to know what ai system has been used, or for regulators to have the information they need in order to take action. so i think figuring out what is the appropriate division, what are the things that can be tackled in the ai circumstance, and where there are specifications that need to come up in different contexts. there have been proposals, for example, for some kind of risk-related designation, where agencies would determine
11:45 pm
which types of implementation of ai would fall into which kinds of risks. the eu has taken such an approach. we have expectations around these systems, but in certain contexts there are heightened requirements because of the implications of deploying the technology in that context. figuring out how to tease that apart, not letting everything be bundled into ai as an umbrella for everything, will be important to finding the right tools. >> i think you hit it on the head, which is taking a risk-based approach, focusing on the potential impacts to individuals. the way we think about this -- and we have seen this emerge in the european union, at the state level, and in at least some bills in congress -- is focusing on consequential decisions, consequential impacts. those are areas where we know there is some agreement across the aisle, focusing on things like hiring decisions.
11:46 pm
decisions to terminate or set pay or promote. i think folks generally agree that credit -- credit ratings, credit offering, extension of credit -- and health care are the core essential elements of human life, and we all agree those are the high-impact areas. rather than try to have an all-encompassing ai approach that would regulate every potential use of ai, instead focus on high-risk potential uses of the technology. >> just to provide some observations on the conversation -- this is more nuanced. unlike adam's, my organization has only been at this since 2015, but the conversation right now about these issues has developed in terms of policymakers thinking about these issues, having constituents calling them about these issues. we are in a place where, to
11:47 pm
miranda's point, we can recognize what the potential externalities are. we have real case studies of what they are in analogous cases -- not necessarily the internet, but other algorithms, not necessarily intelligent ones. and i think a little bit back to that lightning-in-a-bottle moment, capturing people's imaginations, and also the benefits and harms making intuitive sense, right? like, people actually grasp how these technologies will in fact affect their lives, whereas it took a lot longer for the internet to be taken up. it took longer for it to actually impact people's lives. it has to be deployed. we are still deploying it. whereas i think there is more of a sense of immediacy, of the
11:48 pm
need to think about these issues. but i also think, in a positive light, that the policy conversation is a lot more nuanced and mature in terms of how to think about it. some of the difficulties are that, in terms of the technologies, artificial intelligence is a broad umbrella term for a family of technologies that are quite different, and it encompasses a family of uses that are quite different -- a little bit of a christmas tree, including things that are not necessarily uses of artificial intelligence. that is where nuance and subtle distinction can be quite difficult, because the technology is actively changing and shifting, and you get technical experts in the room and the technical experts actually disagree in fundamental ways about what the technology can do. it is not that either one of them is necessarily wrong, it is that it hasn't proved out
11:49 pm
yet. i think that we are -- the train is further ahead than it was before, but we are riding it as the tracks are being built. >> so you both mentioned, or brought up, what is going on in the eu with the ai act. obviously in congress there is not an equivalent yet. we don't know if there ever will be, but there have been a lot of comparisons, at least now, to what the white house has done on ai, considering the sweeping executive order that came out last fall. would you say that was the united states setting this foundational approach to ai policy? is that what the executive order mainly serves to do? and could i ask at least one of you to briefly tell us what this executive order does?
11:50 pm
>> the eu was working on the ai act for years before this big moment. that type of legislation takes a lot of work. the u.s. had focused on other big efforts like privacy legislation, and now it is thinking about what holistic approaches would look like. it takes time, and it was clear, in that moment, that there was action needed. the executive order tackles a lot of different angles. it looks at the foundational issues like automated decisions, government uses of ai, enforcement of existing laws when it comes to ai, privacy-enhancing technologies, security components, and it looped in the newer considerations around longer-term risks, national security implications, geopolitical dynamics, things like that. it got a lot of credit for including so many different perspectives in that document, and it was one of the longest executive orders that had ever existed, if not the longest. i don't know if anyone is able
11:51 pm
to actually prove that. it goes to show the importance of the topic and the number of different angles that people were prioritizing getting into that vehicle. and i think we are seeing a lot of movement coming out of that. the timelines are pretty short. a lot of agencies are already undertaking the requirements, and not everything has come to fruition yet, but we are seeing movement. there is a lot to be seen as to whether that will have the intended effect. executive orders are somewhat fragile. if we have a change of administration, there could be some backtracking there. and it won't be the most durable way to regulate things, but it provided at least momentum for agencies and for the industry players. i spent time in industry, and i'm curious what you think, evangelos, but having voices saying the people building this technology have to take a certain set of steps really does give more license for internal
11:52 pm
folks to commit money and spend time on those things. but how will we make sure that momentum continues, and how do you make sure that attention is not overly narrow on only the most advanced types of ai? because you need that same level of care and diligence among people building technology even for simpler systems, because they can have significant impact on people, as the more advanced ones might in the future. it is just difficult to imagine what that might look like. that is one of the places where, with the eu, there were questions about the scope of focus, and how we make sure the administration's broader focus on automated decisions, and impact on consumers and people, doesn't get lost in this new moment. >> i think comparing the executive order and the eu ai act is apples and oranges. the eu ai act has been in process since at least 2019. the executive order is a
11:53 pm
landmark executive order, but it is just that, an executive order. it is something that we have rolled up our sleeves to assist wherever we can with the administration. i would point out two elements that i think will have lasting impact on the ai governance landscape. the first is pretty new, the ai safety institute consortium, which workday is a founding member of. there is a general recognition about the lack of standards, which are needed to underpin effective, scalable ai governance, and the ai safety institute will help fill the gap. by pooling resources, expertise, and talent, we will be able to address new measurement science, which is exciting. we are happy to be part of that. second, in terms of potential impact, is the omb's draft ai memorandum, which will govern how the federal government acquires and uses ai
11:54 pm
as it procures and deploys it. it is pretty ambitious. if you read it, it has been informed by some of the commercially oriented ai proposals we have seen elsewhere in other jurisdictions. on the whole it landed in a pretty good place. it focuses on high risks to individuals, it deploys impact assessments, relies on an ai governance program, and includes a shout-out to the nist risk management framework. if that is implemented, it will have a significant impact on how the government acquires software and pursues i.t. modernization, and on the broader ai governance landscape. >> i will just say thank you to my co-panelists for summarizing the ai order. i thought that was my job, but i appreciate it.
11:55 pm
i will give a quick shout-out that there are also a number of slow burns -- hot deadline, slow burn -- like a report on dual-use foundation models. we are the president's principal advisor on telecommunications policy; we provide advice, and we are not a regulator, and we are not going to be immediately putting out rules. we cannot. but we are putting out a report on this that will then feed into policy. it will feed into future conversations, future legislation, things like that. there are a number of such reports and a number of such activities where the seeds are being planted. we have 270 days from when the executive order was signed in order to write this. part of my brain is thinking about that right now. [laughter] but there are going to be a number of things that come out of the executive order that are going to have a longer-term lasting effect, that are not
11:56 pm
things that are going to be done and immediately completed on day 270. i do agree that a regulation as sweeping as a european union regulation, as opposed to directives, is a different beast than an executive order. but i do think that we are all very proud of having put out such a powerful, such an ultimately impactful policy document, with multiple moving parts, where it is the entirety of the administration that is currently working on this. it is not simply a single agency or a single department, although the department of commerce has gotten a lot of that work. ultimately, even though they are very different beasts, the ai executive order is going to be extraordinarily impactful for the reasons evangelos noted, but also for some of the smaller
11:57 pm
bits and pieces that are included that are going to have a longer-term effect. >> a brief word about the politics of the executive order, because i think it is overlooked. many sensible things are in the order, but it is so big, it takes a kitchen-sink kind of approach to addressing all things algorithmic. and that is well-intentioned, but the reality is it overshoots by quite a bit. stuff about the defense production act, stuff about empowering the federal trade commission, department of labor, a variety of other things. these are meant for congress to do. these are legislative functions. i understand executive orders respond to congressional dysfunction and inaction, but the reality is some of these are overreach that, ultimately, could be eliminated immediately by the next president. that is more likely now because i think the executive order supercharged political discussion about ai on the hill among republicans.
11:58 pm
i think there is a lot of pushback about what is in the order, or just the order itself and how big it is. will that mean congress gets its act together and does its job? i don't know, but i think it is a different discussion because of the executive order. >> i do think it is important that the executive order noted all of the agencies that should be acting under their existing authorities to enforce the laws that exist for technology impacting those areas. because the implementation of these systems in context requires the domain expertise, and probably some technical insight, that legislation might not be able to get precisely right in whatever congress puts together, if they end up putting something together. agencies are going to play a really important role here, and we hope that they will be proactive in thinking about how technologies of any kind, ai included, are in their wheelhouse already and what action they can take.
11:59 pm
>> so i'm hearing a little bit of maybe the executive order was too broad in scope, some of the political issues around the executive order. but what are some of the issues potentially arising now that the implementation process is beginning? what are the issues more in the substance of the order? where did the administration maybe miss the mark? >> i think one of the big questions that underpins all ai governance is how do you define risk, how do you measure and detect it, and then what is required once you measure and detect it. in certain cases, they deputized nist, and that is really important, because the measurements that people are coming up with to determine what impact a system might have, what capabilities it might have, whether it passes a line of some kind -- that is going to determine the effectiveness of any interventions.
12:00 am
and i'll be honest, the measurement science right now is very nascent. very poor. it is throwing things at the wall at the moment. that will be important to get right, and nist is a good place to do that. they have that expertise. but we need a reliable way to set a benchmark and say this is probably within the bounds of what is acceptable and this surpasses it. another example was with the foundation models, the widely available models: the executive order sets a threshold of compute as the way to determine when risk might manifest. they know that that is a placeholder, but it was one that did raise a lot of questions about the right way to set the limit. does that tell us much about the risk of a model, how many parameters and how much compute was involved? it is not clear, but that is the hypothesis of the moment. coming up with better measures for when the risks will be
12:01 am
unacceptable, where we want regulations to come into play -- that is going to be the crux of effective ai governance, knowing when something poses a risk that requires action. >> at the risk of misstating something travis has said, i think there's a lot being done in a short period of time, and getting the right mix, regardless of the context within the executive order, is key to hitting the mark when it comes to the stated policy goal. i mentioned earlier the omb ai memo. government procurement is really complicated. that is what i have learned preparing comments for this. it is on the whole a very positive proposal, but getting the details right matters. it is not missing the mark -- again, on the whole the executive order does hit the mark.
12:02 am
but making sure that the time is being put in so that you can listen to stakeholders, take on feedback, and amend these proposals as needed, so that what we do get is a very high-quality product -- that is certainly one aspect of the executive order we are keeping a keen eye on. >> just a brief word about the nist process, because everybody agrees on the important role nist has played in setting a multi-stakeholder framework and standards for ai governance issues writ large, security and many other issues in terms of ai ethics. but there is now, with the executive order and with some legislative proposals, an effort to put some potential added enforcement there and turn the nist framework, which has thus far been multi-stakeholder, voluntary, and bottom-up, into a more regulatory thing. that is not traditionally what nist has done. nist is not a computational super regulator.
12:03 am
we don't have a federal internet department sitting in the department of commerce. so there is this subtle but important shift in the governance discussion now about taking that nist role and supercharging it, putting it on steroids -- whatever your preferred mixed metaphor is -- and giving it some added enforcement. that is the debate we will have this year in a big way. one other thing i need to mention, which the executive order and everybody else is missing, is the absolute explosion of state and local efforts at ai regulation and governance. bsa did a study last year: a 440% year-over-year increase in state and local legislative proposals. and the number of municipal proposals, including ones that have passed, is like nothing i've ever seen in the world of tech policy. the eo doesn't do a lot about that, and congress hasn't done anything. all the bills that are pending, none of them talk about preempting the states. this is very different governance for ai than we had
12:04 am
for the internet, where we had very clear proposals and policies from the clinton administration and congress saying this would be a national, global resource: the framework for global electronic commerce. go back and look at it -- 1997, white house -- the single best governance document for technology i've seen in my life. it made it clear that this would not be something we should meddle with at the state and local level. i see it going in a very different direction, and a dangerous one, with ai, especially with congress unable to act. >> maybe veering a little bit away from the state and local conversation, because it is certainly something we are looking at and thinking about, but not something that we are actively, directly working on. but i do want to just put in that the executive order did put in a number of steam vents. there are a number of places where instead of making a final decision, which it could have done, it actually put in place
12:05 am
reports or notices of comment. i just want to point out that omb doesn't always put its guidance out for comment -- particularly not on something as high-level and important as this. that is kind of not a regular thing, and the executive order recognizes the need for that kind of engagement, for that kind of thought. yes, the deadlines are tight. i personally would love longer deadlines, but i think this is part of the urgency of the moment. in terms of some of the definitions as well, agencies are given some discretion over interpretation and over future refinement of some of those definitions. i do think that while this is a policy document that is attempting to stake some flags in the sand, there is also recognition within the
12:06 am
administration that some of those flags at some point in time might need to be moved, and that there are mechanisms and ways to do so. >> i'm hearing that even though this is not the first, it is one of the most significant steps so far in the ai policy landscape at the federal level, and there is still a need for congress to step in. but congress hasn't just yet, in terms of actually planting the flag in the way that the white house has. tell us more about how congress fits into this conversation and what you would like to see from lawmakers on the hill. >> i don't think it is a secret that this is an election year, which will make legislating more difficult. i will give folks on the hill a lot of credit. there have been a significant number of very thoughtful, bipartisan, bicameral attempts to grapple with these technologies
12:07 am
and learn the possibilities, good and bad, and figure out a policy way forward. whether it is the insight forums or the ai working groups put together in the house, congress sometimes gets a bad rap and it is not earned. in terms of what we would like to see -- again, this is an election year -- we would love to see a common-sense, targeted approach, like the one taken by the federal ai risk management act. this is a bill that would require the federal government, and companies selling to the federal government, to adopt the nist ai risk management framework. it is incorporating a nist product, not making nist a super regulator, but it is an important step forward in terms of articulating what the basic expectations around ai governance are, and it would put the federal government in a place where it is really leading in
12:08 am
terms of what good ai governance looks like. >> on voluntary approaches: there is so much fluidity at the moment that practitioners are going to be interpreting a lot of what is coming out of nist or other agencies. at some level and some point, voluntary approaches will not be enough to change how things are prioritized and practiced, how money is prioritized to be spent. i think setting some of those baseline rules of the road changes that behavior. with the voluntary commitments out of the white house, for instance, you can use commitment language and requirements for truthful statements to get companies to take action that they wouldn't otherwise take -- to make sure that companies, developers of ai, are integrating sufficient risk management practices in the development of ai, and changing the way they
12:09 am
make decisions around what should be deployed and under what conditions. i think we need clearer rules there, and as we heard on the panel, that might not be the role of the executive to set. they are doing what they can, knowing that the attention is on them, but ultimately having something stronger that will be durable, that will change those incentives, is going to be necessary. the trick will be what are they focused on, where do those rules land? there's a lot of bipartisan action on these high-level concepts, and once we get into the details, that will be the trick: can we find some alignment, where people will realize that those rules will make circumstances better for everyone, because it creates certainty, increases trust, and sets the parameters in a way that conversations can move from one set of questions to deeper questions of other details -- what are our values, what do we want to show the world the u.s.'s approach is to tackling the challenges? >> one of the reasons it is bipartisan right now is for
12:10 am
companies that are developing and deploying these technologies, there is some baseline agreement, in addition to the nist framework, on what a good baseline of ai governance looks like. there are emerging technical standards, as i will always say. they are still nascent. but when it comes to putting in a governance program that manages, measures, and governs the risk of ai, carrying out an impact assessment pre-deployment or pre-sale so you understand the potential impacts on protected classes or algorithmic discrimination, being able to provide some baseline of notice to folks that would be impacted by these consequential decisions -- i think there is a lot more agreement around those basic building blocks than sometimes we give people credit for. >> just a brief comment -- i've given away my claim that we need
12:11 am
to break ai policy down if we are going to get anything done. but if we go high-level, some of the things evangelos spoke about -- an endorsement from congress of the nist framework, utilizing and extending that framework -- great. i think everybody's behind that. i think the trickier thing is the one i already mentioned about a national framework: what are we going to do about state and local activities? that is going to be extraordinarily tricky. it is hard to preempt things like privacy and cybersecurity. it is going to be even more difficult on ai policy, but it is a conversation that many people in congress are thinking about. they don't know where to go with it. i think it is needed. what is also needed, and we are not going to get -- in fact the exact opposite -- is some potential section 230 for ai. we may not even have a section 230 for the internet anymore; we are one committee vote away from senator hawley getting rid of it.
12:12 am
section 230 was the secret sauce that powered internet commerce and freedom. but we run a real risk of backpedaling on that and undermining the computational revolution if we don't get it right for ai. >> i think the analogies are somewhat different. the liability conversation is fairly important. ai systems, if we think of them only as language generators, which is the analogy in the news these days, it can end up misdirecting policy conversations. so many ai systems are effectuating outcomes directly and are plugged into systems that actors would otherwise have some responsibility for. i think making sure we are thinking about that goes back to the definition of ai. certainly generative ai is not the only focus we ought to have when we are thinking about these interventions. and the scope of that conversation will determine what might make more sense from a liability standpoint. >> one of the most helpful
12:13 am
documents the administration put out was a joint statement by the ftc, doj, eeoc, and cfpb -- that was a lot of acronyms -- all underscoring that there is no loophole for ai in existing civil rights and consumer protection laws. that element of certainty is particularly helpful to the marketplace, and one that i think serves as a baseline for us to understand what the expectations are when you are developing or deploying these tools. i would also give a shout-out to work that has built on top of that. we worked together with the future of privacy forum -- a lot of this is concrete -- on a document around workplace assessment technologies; we and a bunch of other ai vendors got together and mapped out what it looks like in practice. because statements like those from the enforcers are particularly helpful, but you have to find ways to instantiate
12:14 am
them in practice. >> i think for travis i would ask how closely is the administration engaging with the hill on these issues? and what would the administration like to see from the hill in terms of at least the first steps with ai regulation? >> so, in terms of engagement, we regularly provide technical assistance to the hill on a variety of bills. the administration has been very heavily engaged in conversations with the hill. there are certainly lots of open lines of communication on this, and it being bipartisan is a very helpful thing with that. in terms of what the administration is looking for, i will say two things. one, i think the executive order
12:15 am
did a good job of laying out where we have existing authority, to rely on existing authority -- similar to the ftc, doj, eeoc, and cfpb putting out the statement of what is and is not a loophole, with the answer mainly being that there is not one. but there are other areas where the executive order is putting out reports, guidance, things like that, that at some point in time would probably need authorities to be enhanced in order to make them fully actualized. the second point i would say is the president has put out his fy25 budget. there are a lot of activities the administration has in there that are focused on helping to fund this type of work. the president has put out his vision for what the budget is and what the budget should be, and certainly that is another area where there is active engagement.
12:16 am
>> for example, nist has been given a lot of authority and not necessarily the money to go with it. a couple of legislative proposals have been mentioned, but i want to ask what bills out there right now you are all paying attention to. there is one mentioned, and maybe i will mention one: the create ai act has been thrown around a lot. there are some proposals to ban the use of deepfakes in federal elections. what are you paying attention to, and where do you see some movement potentially? >> the election stuff could be getting a lot of interesting potential traction. that could be an example of breaking issues down a little bit to address them, although i'm concerned about potential overreach from a first amendment perspective on some of those issues. at the high level, it is difficult. the create ai act has got a lot of support. a lot of people in industry are interested in the thune-klobuchar bipartisan proposal,
12:17 am
which builds on the nist framework and looks at high-risk systems and tries to narrow the focus a little bit. i think that is an interesting approach. i wrote a paper about it. there are a variety of other things that would look at expanding government capacity. one thing we always fail to mention is how much ai is already regulated. i spent a lot of time writing about this in individualized contexts. our federal government is massive; we have 2.2 million civilian employees in 46 different federal departments. the idea that nobody is taking a look at it or regulating ai is ludicrous. i finished a law review article on fda regulation of machine learning and algorithmic systems; the eeoc is active on this -- we will hear from the commissioner about that later today -- and we know the ftc's actions, the letter that went out there. part of what congress can do is an inventory. just ask: what are we already doing on ai policy and how do we support or improve
12:18 am
that? that could be the most important short-term takeaway, but it is not as sexy as saying "i've got a big grand bill to take care of ai." we get caught up in that debate when the smaller, micro approach is more tractable and can get action on capitol hill. >> i will cheat a little bit and give you a state bill in california. it was introduced by rebecca bauer-kahan last year, and we will probably see some momentum around it this year. it does a lot of what i already mentioned: it focuses on consequential decisions, leverages impact assessments, and requires companies developing and deploying these high-risk tools to implement a governance program built on the nist risk management framework. we have seen california, with privacy, willing to take the lead here. in terms of what we realistically expect to move the ball forward this year, it is going to come out of california. short of that, however, we have
12:19 am
seen ab 331 influence bills in other states, be it washington state, be it bills in virginia and vermont and new york. i think we will see 2024 be a year of the states. ab 331 is what is on our watchlist. >> we are watching a lot of different components, and some of them will end up in some vehicles. some might try to move on their own. what i am most interested in are the attempts to fill the gaps in existing regulation and legislation. in employment, title vii has been around a long time. it has enforcement power, but at the same time there are challenges for individuals who might be experiencing discrimination when it comes to ai, because they lack visibility and it is challenging to bring the counter-evidence that there might be an alternative that the
12:20 am
employer ought to have used, etc. the things we are focusing on are building blocks. what documentation do they need to have so that regulators, individuals, or their advocates know that an ai system was implicated, such that they can leverage the existing structures? what sort of third-party visibility or oversight might be needed to ensure that risk management is happening and is sufficiently addressing that risk? i think we are following all of those things and trying to give advice to make sure that the proposals will have the intended effect and will create that environment of trust. for example, auditing and impact assessments have come up quite often as concepts. the general consensus is there might need to be something like that, but what does it look like? right now there's not even a market for the type of expertise that third parties would need to have to do an audit effectively, but that might be critical with some of these high-risk systems.
how do we make sure the legislation is pointing to the standards that are going to need to underpin it, whether it is third-party or self-assessment, and how do you create the conditions where there are actors sufficiently independent to bring the accountability that legislation is going to want to impose in a particular context? >> fortunately, there is an agency in the department of commerce that is coming out with a report shortly on that very subject. >> i wanted to step back and maybe speak to the overall approach. earlier the risk-based approach was mentioned, not a one-size-fits-all approach. you hear administration officials and members on the hill saying they are trying to strike this balance: we want to promote ai for its many benefits and we want to stay competitive with china, but at the same time we want to protect americans from these
risks and concerns and harms. but how do you ensure that all these voices are being heard across all the different stakes of ai regulation? how do you strike that balance? because there are concerns, of course, of industry playing too much of a role in regulation, or maybe playing an outsized role. there are concerns about being too heavy-handed and stifling innovation. how would you suggest policymakers strike that balance? >> anywhere that the details are being talked about, that multi-stakeholder conversation is really important. i will talk about nist as an example. the ai safety institute just announced a consortium, a variety of organizations from civil society and academia, essentially coming together to talk through some of these issues. cdt is a member of the consortium, which i'm really glad about.
there are a handful of other civil society organizations that we work with that are part of it. but remember, the amount of staff and time and resources that different components of that multi-stakeholder group have to spend on that process will have a natural effect. it means that policymakers need to make more of an effort to include the voices of communities who wouldn't otherwise be in the room, because otherwise it is easy for them to be drowned out given the resources folks have to spend. at the same time, the people building the technology, who are close to it, have interesting insights that are important to consider. we want to make sure that policies are tackling the problems that they want to tackle; sometimes there can be a mismatch when the technology is moving very quickly. i think that has to happen out in the open. we saw with the insight forums on the hill, for instance, that they did involve a lot of folks, but they weren't as open as they
could have been to let people share their experiences. it means making sure that the voices of individuals who are consistently not considered are weighed in the same breath as the more macro issues that policymakers are considering. it's their job to consider all of those. >> industry is not a monolith. i will give you an example. we are, as i mentioned earlier, an enterprise b2b software company, which means that we have very large customers who are not going to use ai tools they don't trust, full stop. that has created a very important market pressure, and we and others are taking steps to institute safeguards on the ai development side so that we can set up our customers to succeed and comply with either existing law or laws that are coming online, so that trust gap is addressed.
because again, large organizations are not going to adopt ai at scale if they don't trust it. as i mentioned, we adopted the nist framework; we were the first company for which nist put out a case study on adoption of the framework. using impact assessments, having an ai governance program, enabling testing of ai tools for algorithmic discrimination: these steps are already out there. why do i bring this up in the context of what policymakers can do? because they can find ways to codify this, to take advantage of the paths that have already been laid out by many of the leading companies. again, no one is going to adopt ai if they don't trust it. this is not a call for self-regulation; i don't think that is going to work in this case. but you can find workable regulation that puts safeguards in place but also moves the
ball forward and can enable innovation. >> and making sure we are reaching out to as diverse a group of the community as we can, to get as many stakeholders in the room and talking to us as possible, in particular people who are directly affected by the technology. it's an evergreen challenge and a mandate we take very seriously. >> just to bring those themes together: all roads lead back to some sort of agile, iterative, multi-stakeholder approach to technological governance to keep pace with the evolution of this fast-paced technology. that has really important ramifications for our broader economy and our geopolitical security as we square up against china and other nations on this front. right now, because of our policies, 18 of the 25 largest technology companies in the world by market capitalization are american-based. right around 50 of the biggest
employers in information technology are u.s.-based. that is a result of our taking a more iterative and flexible approach to e-commerce, the exact opposite of what the europeans have done. i always challenge my students to name leading technology companies based in europe. there are some, but it is hard. that is a result of bad policy. we don't want to take the approach to ai that the europeans are taking. we need a different u.s. approach that is more flexible and iterative and, yes, quite messy. the thing people don't like about multi-stakeholderism is that it is messy. it is agile; it goes with the flow while the technology is being developed. i understand that. they want an overarching, upfront, precautionary solution. but that is not the right solution if you care about national competitiveness and our geopolitical security. >> i want to thank you all for joining us for this panel and thank you to our panelists. [applause]