
tv   Going Underground  RT  April 27, 2024 5:30am-6:01am EDT

5:30 am
war, pandemics, climate change, or something else? For years futurists, scientists, and novelists have warned that a superintelligent AI could become the biggest threat to humanity. And now just a narrow elite controls the accelerating AI race. How long before the robots take over? And does anyone even comprehend what they'll be capable of? To tell us why the advancement of AI is riskier than Russian roulette is Professor Roman Yampolskiy, associate professor of computer science and director of the Cyber Security Laboratory at the University of Louisville, Kentucky, where he joins me from. He's the author of the new book AI: Unexplainable, Unpredictable, Uncontrollable. Thank you so much, Professor Yampolskiy, for coming on. Your book clearly defines so many different concepts. The most basic definition, I guess: what is AI, and what's AGI?

AI is just the old name for our desire to create a replacement for human minds: something
5:31 am
capable of doing physical labor and cognitive labor, the same as a person; think of it as a stand-in for a person. AGI is explicitly talking about it being general: not just a narrow-domain system that only plays chess or only drives a car, but anything, anything a human can do, the system would be able to do.

I do want to get into some of these concepts, because we know the threats. Ironically, the current war in the Middle East over recent months came from a failure of UK, US, EU and Israeli AI to cope with the attack on October the 7th, a failure of AI surveillance. So it almost seems odd to hear about the dangers of AI being superintelligent, given its demonstrated potential to screw up and not work well, which is clearly affecting
5:32 am
this region. Is AI clearly dangerous?

So obviously, not only can the systems we have right now fail and cause problems (that's the whole notion of cybersecurity failures), but once they become more capable than humans, you have much bigger problems, problems we don't know how to address.

And fundamental here is the AI control problem that you talk about in AI: Unexplainable, Unpredictable, Uncontrollable. What is that?

So even the people who create those systems don't fully understand how they work. They cannot predict what they will do, and they cannot control their behavior in advance. They don't know what decisions the systems will make, they don't know how they're going to solve a particular problem, or whether they will even try to solve the problems we care about or become independent agents.

But surely they have parameters built in. I mean, when you put a query into ChatGPT or into a Bing search (I'm going to advertise all the companies that are probably the villains in the
5:33 am
book), you apply, you know, the parameters. It's not as if it's going to jump out of the screen and attack you. Even the Pentagon would say, we have parameters for this robot, which is confined to a certain locus. So the dangers it poses are bounded.

So what they usually do is put some sort of filtering on top of the model. The model itself is very much uncontrolled and uncensored, but then they say: no matter what, don't use this word; no matter what, don't talk about this topic. They are essentially trying to brute-force all the common problem topics. But if the system is general, if it is capable of escaping all of that, you cannot brute-force over everything; it will always find a way around those limitations. And people discover that you can jailbreak those models quite easily. So if you want it to generate a certain forbidden image for you, you just change the phrasing, how you talk about the image, and it will happily do it if you present it as
5:34 am
something positive instead of negative.

Also fundamental here is that human beings may not be able to conceptualize what the dangers specifically are, and you might have to explain this, superintelligence, because when you talk about these dangers, you are talking about dangers that humans literally cannot envisage.

Right. It's like squirrels trying to figure out how humans would control them or kill them. They can't comprehend, you know, nuclear weapons, for instance; it's beyond their capacity. Likewise, a system a thousand times smarter than a human would be able to do things we cannot even comprehend. People ask us, how would AI kill everyone? They are literally asking how a human expert would suggest it do it. But that's not the same as the system itself deciding on the best approach. I mean,
5:35 am
I don't know how many examples I want to quote. There's the famous one in the book about people cheating on taxes, cheating on exams: you can figure it out by the way they pick the numbers when they're trying to cheat. But at this stage, is it the case that we're not nearly at this stage yet with these big Silicon Valley AI programs?

We don't think we have AGI yet, but the leaders of those companies, in their assessments and in their solicitations for investment, say we're 2 or 3 years away from getting to AGI. Even if they're wrong and it's 5 years, the problem is exactly the same: we still don't know how to control it, and it's still just as dangerous. So yes, today we definitely only have narrow systems, systems which are super capable in some narrow domains, like playing games for example,
5:36 am
but they're not universally, generally intelligent yet.

And in the book you say, to anyone who wonders whether you're being too paranoid or fearful, it's like a bus driver saying: I'm driving as fast as I can towards a cliff, but trust me, we'll run out of gas before we get there. That argument was also used against the research at CERN for the Higgs boson particle. People were saying, you don't fully understand certain quantum principles, and that could destroy the universe as well.

That's actually kind of almost what happened when we started experimenting with nuclear weapons. There was a concern that it would ignite the whole atmosphere, and they did some calculations and said, well, probably not, let's do it anyway.

Yeah, that was in, you know, the Oppenheimer film; they made a whole show of that. But is it not specifically the defense industry, the arms industry? Clearly we're seeing AI being used practically, you know, by militaries, the biggest military in the world being, of course, the United States.
5:37 am
And that's not the biggest threat that you're pointing out in your book?

Those systems are tools, and certainly militaries can use them; they can have drones flying around, hitting targets. But once you get to an agent-like system, a superintelligent system, and we don't know how to control it, it doesn't matter which side has the system. If it's not in control, it makes no difference if it's good guys or bad guys: the system decides on its own goals, and we don't know what they are going to be.

It's amazing, you make use of Chomsky's work on language and some of his analysis in the book. Noam Chomsky has of course famously been on this show; we were talking about the conditioning of human beings by mass corporate media. Is there a case to be made that when AI creates information, it serves elite power? And of course,
5:38 am
we always get onto who really has power, controlling this technology. People will believe the AI, because they do what they're told, and those elites would actually be the overseers of AI, right?

And that's always been the case: elites had the tools to manipulate the masses. But if you don't control the tool, the tool becomes an independent agent. In a way, it is a great equalizer: it will dominate everyone equally.

And we're talking about such a small number of people here who control the tool itself. I mean, obviously in China, the Communist Party numbers many millions, and so on. In the NATO countries, we have just a few tech oligarchs (I think some of them have been in Dubai here) who are controlling it. Elon Musk most famously said the original OpenAI project was something he was interested in, and then he got fearful of the track they were turning onto. Right, so the oligarchs control the companies,
5:39 am
but they don't control the future. The whole point I'm trying to make is that it doesn't matter who makes it; it's going to be equally bad unless we can figure out some approach to doing it safely.

Can it not just be switched off?

No, you can't pour water on it, you cannot unplug it. It's smarter than you; it's basically going to anticipate what you're going to do. It's also usually a distributed system on the internet. It's like asking someone to shut down a computer virus, or turn off Bitcoin: those things are very difficult to turn off.

You return to something that we cannot envisage. At what point in the development of AI will what we cannot right now envisage become a risk?

So maybe 10 years ago, people wanted to believe that one day we'd get to human level and beyond, but that it would be 30 years, 50 years away. In the last 2 or 3 years, the average
5:40 am
assessment of experts in machine learning is down to just a few years. I think people are saying, okay, 4 or 5 years from now we're going to have those systems. That's a change of maybe 20 years in the predictions of when it's going to happen.

OK, you assume that AI is going to be controlled by these elites, so you give them quite a lot of, uh, leeway, I think. As regards its subversive potential, could AI ever be used by the dispossessed against those who control it?

Again, that's the point: I think it will be out of control, and the people who are building it today, hoping to be the elite who control it and stay in power, will be very disappointed.

I know, but you say that even if these elites tried to program altruism into the systems, and you give some examples of where that can go wrong, why... tell me about that, before I ask about its possible revolutionary potential in terms of social injustice. And I
5:41 am
know you say that AI poses a greater risk to humanity than, by the same token, the continued trends of social injustice. Why could altruism programmed into AI actually create even worse problems?

So take the most extreme case. Of course, with altruism you want to end suffering, right? You don't want any animals or humans to suffer. And one way to make sure of that is to make sure there are no humans and no animals. That's obviously not what we have in mind when we give such orders, but it's not obvious to an agent which is very different from us and which just optimizes for the goal. It's not even that original; some neo-Nazis actually spelled out such ideas before: wipe out the whole population, or target a specific group of humans, not everyone.

Okay, well, is it also a danger that the people who are in control of it, uh, in Silicon Valley, uh, they have a belief in
5:42 am
a free-market system, and they have axiomatic ideas: that there is no fixed morality, that there's a free market of ideas and a free market in morals, which is very different to what has gone before. And that is in the system, and that is what is inspiring a lot of these people. I mean, not just because of Ayn Rand, but they believe in this as a moral and philosophical framework, rather than thinking about AI the way you are.

So a standard kind of capitalist market approach would work well for tools: different companies produce tools, they compete for consumers, who select the better tools for less money. Makes perfect sense. But once we switch from a tool to an independent agent, things change. Even the agents we have right now in the economy, humans, are not behaving rationally,
5:43 am
and the assumption of rational investors is false; fully rational behavior is not what we see, and behavioral economics is very different. Now you have superintelligent agents as part of this equation. We cannot predict how they will behave, we cannot explain their behavior, and they are more powerful than anything we can bribe them with. They are not interested in earning a minimum wage to live on; they have very different capabilities. So previous models of control will not apply.

Professor Roman Yampolskiy, I'll stop you there. More from the author of AI: Unexplainable, Unpredictable, Uncontrollable, associate professor of computer science at the University of Louisville, after this break.
5:44 am
(commercial break, audio unintelligible)
5:45 am
Welcome back to Going Underground. I'm still here with the author of AI: Unexplainable, Unpredictable, Uncontrollable, Professor Roman Yampolskiy. Well, we were talking about AI safety before the break. Is the fact that nearly all of the CEOs of the AI companies apparently believe AI has the potential to destroy humanity a reflection of their own personal priorities? Why are they so concerned? Do they believe it threatens them and their elite power as well, if it's not controlled?

Superintelligence will definitely change the current order. It will change economics: if we have free labor, physical and cognitive, uh, what does it do to our standard way of compensating people?
5:46 am
What's the value of a dollar in an economy where labor is essentially free? And at the moment, actually, you know, Elon Musk, owner of X, talked about this; he had a jokey tweet about it. At the moment, to prove you're human, you have to prove that there are a certain number of traffic lights in a grid square when you're on the internet. That's about the level of being able to prove you're human right now. What timescale are we talking about before we reach anything near superintelligence?

So the difference between artificial general intelligence and superintelligence may be negligible. It already has access to all the knowledge of the internet, and it's much faster. So I think it will be an almost instantaneous improvement from AGI to superintelligence. And as I said, AGI may be 2 or 3 years away.

And we, we really are talking about algorithmic systems that use all the knowledge
5:47 am
which is on the internet to create things and ideas that are impossible for human beings, as they currently are, to conceptualize.

And they can use those same capabilities to create more improved versions of those AIs, so they don't just stop at the level of their initial training. They continue self-improving; they continue developing more capable hardware to run on, so they become even faster, even smarter over time. They don't need to sleep, they don't need to take breaks, so they can work, you know, 24/7, much faster than any team of human engineers. So where previously it took us 2 years to train a new model, a GPT-5 or GPT-6, soon you could do it in days, hours, or seconds.

And you believe it is the desire for profit that is making these oligarchs not take your view seriously? You're not alone; there are obviously people like Elon Musk who have concerns, and
5:48 am
yet governments, as we know, have been far too slow in a number of countries to regulate systems like this.

Well, it's not just the financial side of it; there is a lot of pressure not to, kind of, change course. If you are the CEO of a company like OpenAI, you don't really have an option of saying, we're going to stop research and, you know, work full time on safety concerns. It's just not an option; you'd be removed. We saw something like that almost happen, for different reasons, but I think the CEO is in a situation where he no longer has complete freedom to decide the direction. That's why they so frequently ask for government intervention, government regulation: to have this external pressure to limit how fast they go. But there is also so much pressure to make it open access, open source, and you can't really regulate open source with any government regulations. So I think that's not an
5:49 am
option.

Yeah, but they know of your work. Sam Altman will know it, and Elon Musk himself joked, and warned people on the OpenAI and other projects, about the dangers of how they were pursuing their projects.

Right, and that's a big cognitive dissonance I don't understand: saying yes, it's super dangerous; yes, it's definitely not something we know how to control right now; but let me get there first, so my AI is the good guy, because other people would make it worse.

And in terms of where we are now, what can possibly regulate AI development right now?

I don't think this solves the problem. I don't think we have any answers. I can tell you things we don't know how to do, but I don't think anyone in the world claims they can control smarter-than-human systems; they don't even have a prototype for doing it. I don't think there is any kind of agreement on what regulation can accomplish. You can make things illegal,
5:50 am
but that doesn't solve technical problems. Computer viruses are illegal, hacking is illegal, spam is illegal. How is that working out for us?

Why is it that these systems never create fundamentally new knowledge? I mean, we have ever more access to more and more knowledge, and more and more people do, and yet fundamental innovation has declined in the past 20, 30, 40 years. You know, we have new iPhones or Android phones, but fundamental innovation has declined even as information has become available to more and more people. And this may be something to reflect on when it comes to AI systems accessing ever more information and not being able to solve the travelling salesman problem, the basic mathematical problems, right?

So we don't have AGI yet, so we can't really say they are not performing as expected. We only have
5:51 am
narrow systems, and in those narrow domains they do show amazing capabilities. We have systems coming up with new molecules for medical treatment, we have new chemicals developed, we have new strategies in games like Go. So we are definitely creating new knowledge; it's just domain-restricted, because we don't have general systems. As to problems: you bring up problems like the TSP, the travelling salesperson problem. We know how to do really well with those; they just require a lot of computation to solve.

Yeah, I meant, I meant the actual theory. You know, some people mention P versus NP, but I meant the actual solving of the equations, you know, the big mathematical problems that are still open. So AI has still not been able to do that, with all the equations of the history of mathematics at its disposal.

In practice, your GPS unit gives you a solution, right, very quickly.

Yes, but that's, that's the problem right there. I mean fundamental mathematics, trying to create new ideas as fundamental as
5:52 am
those of the great heroes of science in the past.

Well, a new paper came out about, you know, AI doing really well in mathematics and geometry proofs. It's already performing at the level of the Math Olympiad competitions, which means, with continuous progress at this level, it will be doing better than any human mathematician. That wasn't my AI system; I believe it was DeepMind's geometry-proving system.

I mean, one could come back here and say: you are treating all the swans as white, and we just need to find the black one, an AI system that actually does good, makes society better, makes it more peaceful, makes more medicines that cure diseases, and I'll show that you're wrong.

So we're not even sure that that is something which can exist, right?
5:53 am
Because we don't agree on what "good" is. We've spent millennia trying to figure it out; ethicists and philosophers failed. There is not an agreed set of ethics where everyone goes, yeah, you definitely got it. Any time someone says, I'll show you, and proposes utilitarianism or anything else, people go: here's an edge case, this will kill everyone, you cannot apply it in practice.

You can't, yeah, you can't write that in. There's no axiomatic code in AI, is what you're saying, apart from, in the short term, the code to benefit, by profit, the people that are pouring all the money into it.

Yeah, but we don't have that code in humanity either. We don't have that code; that's why we keep fighting all these wars, because we don't agree on basics. We don't agree on the value of human life.

Yeah, different states don't, but they would say that they have more or less axiomatic ideas which they share, which have been enshrined in, or represented at, the UN
5:54 am
and so on. So as AI models get more sophisticated, you know, given the times we live in, in 2 to 3 years, are they going to start with surveillance? AI surveillance is clearly progressing hugely, technologically, but at what moment are they going to start censoring your freedom of speech? Is that work for your next book?

So you can sort those problems by how severe they are. So yes, there are, you know, artists losing copyright, then there is technological unemployment, and then there is freedom of speech. But I really worry about it killing everyone; that seems like a somewhat worse case. And I'm not even talking about suffering risks; that's a different animal. I would not worry about those problems we've encountered with human governments and dictators as much as I would worry about this complete paradigm shift in who is the dominant species, who decides what happens to humanity.

Even though you say AI
5:55 am
is the worst threat, right, that we will face, there is quite a lot of the book devoted to the rights of AI systems, as well as human systems. Why?

Well, we think that at some point they may have consciousness, be capable of suffering, right? We don't know where that point is. Some people claim that the large language models we have today are already conscious, maybe capable of experiencing the experiments run on them, and then those experiments become essentially unethical; maybe they are being tortured in some labs, we don't know. How would we know that a system is not capable of experiencing pain? We just don't have tests for that. So usually in such cases you assume that it might already be the case and proceed accordingly. The good news is, if we come up with good arguments for why we should give rights to those models and treat them fairly, well, maybe we can use the same arguments to defend our rights once they become super
5:56 am
intelligent, and tell them: hey, this is why you shouldn't be treating us poorly.

And there is no way of stopping this process as of 2024? This process has already begun?

To my knowledge, of all the arguments for why progress would stop, I haven't found one which is not devastating on its own. So yeah, if something horrible happens, a big pandemic, a nuclear war, that obviously slows down research, but that's also not a desirable outcome.

And unlike the non-proliferation treaty as regards nuclear weapons, you can't have a treaty like that for these sorts of systems?

Treaties are good, but they don't work. I mean, look how many countries now have nuclear weapons compared to the time the treaty was signed. Still, we're still here; but with AI, the human is no longer the controller.

So, so that's it? I mean, if you take this approach, you come to the idea that there's nothing we can do, it's going to happen. And so, in a sense, you're giving
5:57 am
a pass to those big oligarchs in Silicon Valley, who can say: you yourself say it's uncontrollable, and this is the speed at which it's going.

I believe in self-interest. If you're a young rich guy and you have your whole life ahead of you, you don't risk it all for very unclear benefits. So if it's reasonable to think they watch your interview and they actually see that this makes sense, that there are no counterarguments to it, then maybe it will slow down a little. And sometimes we also tend to get lucky a lot. With nuclear weapons, we had at least 3 or 4 occasions where we almost had a nuclear war, and we got lucky; it didn't happen, as you said. So maybe the same thing will happen here. Maybe it's not 3 years, maybe it's 10 years. Maybe, by luck, it turns out to be not so nasty to us.

Professor Roman Yampolskiy, thank you. Thank you so much. The new book is AI: Unexplainable, Unpredictable,
5:58 am
Uncontrollable. That's it for the show. Remember, humans are still bringing you new episodes every Sunday and Monday. Until then, keep in touch via social media, if it's not censored in your country, and head to our channel, Going Underground TV, on rumble.com, to watch new and old episodes of Going Underground. See you soon.

On March the 22nd, 1943, during the Great Patriotic War, the punitive Schutzmannschaft Battalion 118 burned down the Belarusian village of Khatyn.
5:59 am
149 people died, including 75 children; the village was practically wiped off the face of the earth. The infamous battalion responsible for the atrocity included over 100 Ukrainian nationalists from western Ukraine. Declassified criminal cases from the central archive of the KGB of Belarus shed light on the
6:00 am
atrocity, and answer numerous questions that have remained unanswered for many years. Watch on RT.

The Democratic Republic of the Congo threatens to take legal action against Apple, accusing the tech giant of using minerals smuggled from the country's war-torn regions. Oil starts flowing to export ports through a new pipeline, allowing the military government to export crude in defiance of Western economic sanctions. And a Ukrainian figure is arrested in Washington after assaulting an RT contributor who challenged him. You're watching RT International. I'm Rachel Ruble, live in Moscow.



Uploaded by TV Archive on