Creating human-level AGI or functional human brain emulations should be considered as an act of war against all other nations.

Nuclear retaliation by all (other) nuclear powers should be ensured as the automatic result, to deter such an act of war.

This is the only way to prevent an AI apocalypse or a Malthusian emulation scenario, and therefore to preserve humanity.

...

>Blowing up the moon or mars should be considered as an act of war against all other nations.

Sure, if it were possible.

AI or brain emulations are possible, and they endanger the national security of literally everybody else.

Mass Effect fanboi detected.

>human-level AGI or functional human brain emulations
Why? It's literally just another person. Big whoop.

>People confusing human-level intelligence with superintelligence, or thinking that they are at all related

>AI or brain emulations are possible

No they aint

You can run them faster and you can copy them. That alone makes them a threat, even aside from possible enhancements. Imagine the economic and military asymmetries that result from this. Classifying it as an act of war is appropriate.

Yes they are. Seriously, are you 12?

>Yes they are. Seriously, are you 12?
he's right.
after all the things we've invented we still haven't invented a fake brain.

there's no reason yet to think it's possible. It's demonstrably not possible at the moment.

This is such a dumb argument. You can say the exact same thing about EVERY invention EVER, one day before it is made.

In fact, that's what happened to nuclear reactors for electricity generation.

There's absolutely nothing about AI or brain emulations that make them somehow impossible.

...

Stupid, childish response. Nothing in that textbook says AI is impossible, or not a military threat to national security.

>You can run them faster and you can copy them
Not if running one already takes up a sizable chunk of a country's computational resources. Are you even thinking?

The odds of that are low. The odds of that remaining that way are even lower.

Wow you've totally convinced me.

You are assuming that Moore's Law just halts completely, and that current computing resources can't provide AGI with any practical efficiency once it is invented.

What are your reasons to assume both of these things?

i've read through this 1000-page motherfucker over the last year now. and i think the guy who made that picture hasn't read it, since it says the opposite. it even has the last chapter dedicated to debunking all the naysayers.

>Creating human-level AGI or functional human brain emulations should be considered as an act of war against all other nations.
Is this bait or are you just retarded? Either way, what the fuck are you doing on this board?

Do you just copy and paste this non-answer into every other thread?

Well we're fucked unless it's completely isolated from the internet.
But isolated means it can't develop itself beyond retarded.

>You can say the exact same thing about EVERY invention EVER, one day before it is made.
true, but look at all the amazing shit we've invented... except for this one thing.

hell, you'd begin to think it's not possible.

>There's absolutely nothing about AI or brain emulations that make them somehow impossible.
then why don't we have them yet?
hmmm?

...

You can say this for literally anything that hasn't been invented yet

yes, but we're running very low on things that haven't been invented yet.

scraping the bottom of the barrel.
but we still haven't got this! Probably never will.

>yes, but we're running very low on things that haven't been invented yet.
How would you know? There was a time when they thought physics was done. That was before general relativity.

>but we still haven't got this!
The massive incremental improvements over the years and decades cannot be denied.

>How would you know?
the rate is slowing. most new inventions are now ideas or programs, not things.

>The massive incremental improvements over the years and decades cannot be denied
sure they can.

there's no reason to think mimicking human behavior will result in human intelligence.

and while we can now synthesize the brain of a worm, that's not exactly promising progress in producing human thought.

This contrarian attitude is quite off-putting, and I'd go so far as to say you are being willfully ignorant

an AI is only as dangerous as it can reach. If you're thinking of an Ultron kind of thing, I'd say it's unlikely that the first successful AI is made in a huge robotics facility with access to the internet where it can immediately make bodies and weapons for itself.

>science fears criticism
prove me wrong then. Go build your AI. I'm pretty certain it won't happen in my lifetime. Mostly because you guys still haven't bothered trying to figure out what it is you're trying to build in the first place.

and I strongly suspect once you understand consciousness you're going to find it impossible to emulate because it doesn't really exist. It's an emergent property of an organism, not some fancy computer.

I'm not talking about rogue AI. I'm talking about deliberate military use in every niche it can be used.

Imagine a technology that allows arbitrary multiplication of expert military personnel on the fly, for the marginal material cost of a person. That is clearly as dangerous as nukes, if not more.

An AI or em technology could do exactly that, even if it's digital rather than protein-based.

try and stop me, faget

so basically you're saying that it should be considered a war crime to invent it because of its potential efficiency in combat?

Yes, in combat but also in other niches, such as military production, espionage, cyber warfare etc.

Maybe not a war crime in the legal sense, but there should be an agreement that it will be considered an act of war, to deter its development.

>not in my lifetime
So now you dont dismiss the possibility

>consciousness
neither here nor there, mate. I agree with your assertion that it is emergent and doesn't exist in a scientifically testable way. However, putting aside the fact that proving consciousness is not a requirement for human-level intelligence, your presumably rigid definition of organism, i.e. 'not a fancy computer', is again indicative of a self-imposed state of ignorance

>So now you dont dismiss the possibility
what I say has nothing to do with whether or not it can be done. I still hold that the fact that it hasn't been done is good evidence that it can't be done. Every year that goes by without success just adds to that.

>it is not a requirement to prove consciousness to have human level intelligence
not by definition perhaps but you'll find it impossible to skip it in reality.
>your presumably rigid definition of organism ie 'not a fancy computer' is again indicative of a self imposed state of ignorance
I'm not the one trying to emulate a biological organism while denying what it is.

AI will inevitably be invented. wouldn't it be smarter to try and prepare for it and build safeguards rather than just try to stop it?

I don't see how you can build safeguards against military dominance by the power that invents and co-opts it first, unless you plan ahead with an international agreement.

By the way, the "agreement" can consist in a couple of nuclear powers announcing that they will nuke anybody who builds AGI.

That's assuming it would be invented and owned by a government first, as opposed to private companies. Also, wouldn't the best deterrent then be to have all countries develop AI as fast as possible?

What you are describing is an arms race, not a deterrent. The point is to prevent it from happening, not rush towards it.

No one benefits from an unfriendly AI, not even the nation that builds it. But even with perfect AI value loading, no other nation benefits from having their sovereignty threatened by another nation with AI.

Therefore, it is in the best interest of all nations to announce that they will not tolerate it. Threatening nuclear annihilation still works.

As for private companies vs. the military, don't fool yourself. As soon as the military sees the potential, they will step in either way.

when you say prevent "it" from happening do you mean the invention of AI or military dominance by using AI?

I can't see one happening without the other being implied.

I can understand that. Are we using nuclear annihilation as a deterrent to stop AI from being produced or as a deterrent to stop governments from producing it while encouraging it in the private sector?

Well, if you allowed it in the private sector, the government could use a private firm as a front. Also, they could step in on launch day and nationalize it.

In practice, there is no difference between the two.

nuclear annihilation will never be a credible deterrent, the only people that can threaten it are ones that have already defied it.

are you arguing against inventing AI or are you trying to see how we can go about inventing it in a way that is safest for everyone?

Nuclear annihilation worked well so far and if you are faced with a threat like other nuclear powers or AGI, there's really no other rational choice.

T O P J E J

You probably think humans arent part of nature and that bruce jenner meming on tabloids isnt a logical result of the big bang

There is no way that's safe for everyone, or even most nations.

In order to accomplish that, you'd need an international cooperation project that is utterly transparent AND competent in getting the value loading right. And then people couldn't agree on the values.

This is clearly out of scope for realistic human coordination. It has to be deterred altogether.

>Nuclear annihilation worked well so far
yes, look at how it stopped everyone from developing nuclear weapons.

That was not the goal of nuclear deterrence, nukes are fine as long as you don't use them.

AI is different in this regard, because it can be used in a myriad of subtle ways and once it is invented, it can basically not be controlled again.

One can make an argument that the US should have deterred the development of other nuclear powers completely in the 1950s and onward, but that ship has sailed now.

I think trying to deter it altogether is just as far outside the realm of possibility. He has a point: nuclear annihilation doesn't really deter anyone.

>That was not the goal of nuclear deterrence
kek

our only choice may be an arms race. while it may not be safe it might be our best and most realistic bet.

If the US had made a credible announcement after Hiroshima/Nagasaki that it will annihilate everyone who develops nukes, it would have worked.

They just didn't do it, and I suggest several nuclear powers should do it now for AGI.

Are you an idiot or something? They never announced that they would nuke everybody who built nukes, just that they would retaliate against nuclear strikes with nuclear strikes.

As pointed out already, that doesn't work for AI.

Wrong. Deterrence by nuclear powers will work, they just have to do it. The alternative clearly doesn't work. Without deterrence China or Russia might panic once you have AGI and they see the military applications. Then you might get into a nuclear war after all, without the deterrence working. It must be done well before that point.

do you really think that would have stopped other countries developing nuclear bombs/AI in secret?

>If the US had made a credible announcement after Hiroshima/Nagasaki that it will annihilate everyone who develops nukes
Except such a policy could never possibly have been credible.

Yes. Humans are not that good at keeping important projects secret for long.

Of course it would have been. If you can nuke their cities and infrastructure, they pretty much have to oblige.

>Are you an idiot or something?
it was a little more complicated than that, big boy.

some of the people that got nukes were our allies at the time, then they ceased to be.

this isn't a situation you can prevent. And you know full well nations will threaten annihilation while working as fast as they can to be the winner of that race. Because fuck you, they have to. If you know there's a game-changing technology out there you have a responsibility to be the one controlling it.

le mao shy lil human boi

You can deter the race if you have more than one nuclear power, which we now do.

If the "winner" of a race faces annihilation, there is no incentive to race.
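The claim above is basically a game-theory one, and it can be sketched as a toy payoff model. All the numbers here are made up for illustration; the only point is the structure: without a credible deterrent, racing is the best reply to anything, and with one, abstaining is.

```python
# Toy payoff model of the deterrence argument above (illustrative numbers only).
# Two nations each choose to "race" for AGI or "abstain". Without a credible
# nuclear deterrent the winner of the race gains a huge advantage; with one,
# the winner is annihilated, so racing becomes strictly worse than abstaining.

def payoff(me, other, deterrence):
    """Expected payoff for 'me', given both strategies ('race'/'abstain')."""
    if me == "abstain":
        # Losing the race to the other side is bad; mutual restraint is neutral.
        return -5 if other == "race" else 0
    # me == "race": winning is great without deterrence, fatal with it.
    win_value = -100 if deterrence else 10
    if other == "abstain":
        return win_value                      # uncontested win
    return 0.5 * win_value + 0.5 * (-5)      # coin-flip race, loser dominated

def best_response(other, deterrence):
    return max(["race", "abstain"], key=lambda s: payoff(s, other, deterrence))

# Without deterrence, racing is the best reply to anything -> arms race.
assert best_response("race", deterrence=False) == "race"
assert best_response("abstain", deterrence=False) == "race"
# With a credible deterrent, abstaining is the best reply to anything.
assert best_response("race", deterrence=True) == "abstain"
assert best_response("abstain", deterrence=True) == "abstain"
```

Of course the whole argument hinges on the deterrent being credible, which is exactly what is being disputed below.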

The military reports to civilian leadership and in turn is limited by the political climate of the country.

Everyone on Earth would know the threat is not credible.

>If the "winner" of a race faces annihilation, there is no incentive to race
unless the reward for winning is the power of annihilation of the nations of your choice.

not that it matters since current nuclear stability rests on the ability of pretty much any single nuclear nation to annihilate any other if attacked. So the threatening parties would have to assume they won't survive meting out their promised punishment.

none of which matters in the slightest since the thing isn't going to be invented. If it was it would say "tfw no qt3.14 gf" and then fry its own circuits.

Not true for China and Russia; and in the case of the US, France, and the UK, the combined NATO nuclear threat sufficed to deter nuclear war, even conventional aggression.

You make up false reasons why deterrence doesn't work; it's almost as if you wanted a permanently stable Chinese world government and to roll the dice on all the bugs in AI ethics under time pressure. There is nothing crazier than this.

Funny how you have magic knowledge that it won't be invented unless we prevent it.

Also people can detect if you are winning, and nuke you when you are. AI is a threat, but you can still nuke it as it becomes detected.

>Funny how you have magic knowledge that it won't be invented unless we prevent it
It won't be invented whether you try to prevent it or not. It's like worrying about your enemies' FTL missiles. It's not going to be a problem.
>people can detect if you are winning, and nuke you when you are
ok, that honestly made me laugh.

There is a huge difference in annihilating someone for simply developing nuclear weapons and annihilating someone for using nuclear weapons.

Also, everyone but the US has even gone so far as to implement strict No First Use policies, while US policy allows the option of pre-emptive strikes in the case of an imminent nuclear threat.

FTL violates the laws of physics; AI is perfectly plausible.

You are talking out of your ass. I wonder if that's just stupidity or deceitful agenda.

This doesn't mean the deterrence wouldn't have worked if they had actually used it to prevent a nuclear arms race.

They chose not to do it in the time window they had, for whatever reasons.

Without the deterrence, AI is still a threat, and all nuclear powers will realize that soon enough. What do you think happens then?

>FTL violates the laws of phyisics, AI is pefectly plausible
lol. That's what physicists love to think.

Here's what really happened:
>Physicist: All things are reducible to physical processes.
>Biologist: Emergent properties are irreducible by definition.
>Physicist: ALL THINGS ARE REDUCIBLE! I will prove it, I will build emergent properties out of legos!
>Biologist: LOL what a maroon!
>Physicist: *stomps off to try to build AI or something because everything is information or numbers or something

I trust physicists to tell me FTL is impossible.
you should probably trust biologists to tell you AI from a computer is impossible.

AI already exists...it only needs attention.

Not sure if you're trolling or just completely retarded.

The deterrence wouldn't have worked because everyone would know they could ignore it, since the American people wouldn't have allowed the nuclear annihilation of the Soviet Union based on a couple intelligence reports that they had been running a nuclear program.

You are fuckin stupid.

Even now, we don't make threats like that against rogue states like NK and Iran, because it just provides further incentive for them to pursue nuclear weapons.

First encounter with disagreement?
get used to it, physicists and CS are well known for their hubris and stupidity.

I was referring to your stupidity, not your disagreement. After all, you had nothing but bullshit posturing to offer.

Yes, I'm sure the North Korean, Russian, Chinese or Pakistani people won't "allow" their governments to use nukes.

But let's say you're right and no deterrence works. There will be an AI arms race and a winner is about to take over the world and establish a permanent hegemonic torture dystopia - based on buggy cobbled-together AI ethics under time pressure.

Surely none of the above mentioned nuclear powers would respond to their impending enslavement by, oh I don't know, nuclear warfare?

>you had nothing but bullshit posturing to offer.
I'm advising you that you're discussing fantasy, not science.

sorry if I hurt your feelings.

Except you're wrong about that, and you have zero arguments to back up your idiotic claim that AGI is impossible.

Vigorous insistence is not an argument. You're an idiot child who has not understood the basic principles of rational discourse.

You are both retards

Great contribution.

>you have zero arguments to back up your idiotic claim that AGI is impossible.
I have the argument that conscious intelligence is an emergent property and by definition irreducible.

You're free to ignore the argument if you like, but be aware that almost every biologist alive is laughing at you.

just like you would laugh at a biologist that says he's going to invent FTL travel via gene splicing.

We were talking about intelligence, not consciousness (whatever that is).

Intelligence is not irreducible, why would anyone believe that? A process as stupid as evolution by natural selection has created it. We can clearly reverse-engineer or otherwise replicate it.

We have self-driving cars, AI beats humans in chess, jeopardy, go and many computer games, not to mention raw calculation. For domain after domain of intellectual pursuit, we see computers beating humans. And yet for some reason humans keep insisting that their general intelligence is somehow special enough that it will never be replicated or surpassed.

Humans are at the center, nothing can change that - sound familiar? It's not like we made the same mistake several times before, from cosmology to evolution.

you'll find biologists in general are not particularly anthropocentric.

they also don't think you'll be replicating biological emergent systemic properties with legos. If nothing else, you've chosen to work with the wrong clay.

>We were talking about intelligence, not consciousness (whatever that is).
see, you don't even know what you're talking about.

intelligence as you've defined it is a scientific calculator. And I'm pretty sure you're not worried about the military advantages of a calculator.

while you don't realize it, the intelligence you're discussing is a conscious one. You don't realize it because you deny the existence of irreducible emergent properties. You're not comfortable with anything a computer can't be programmed to copy.

it's actually pretty circular, your blind spots are self-reinforcing.

So if we can make ai, can I finally get a gf?

no.
the CIA will assassinate you and take your work.
sorry.