Artificial intelligence

Discussion.

Other urls found in this thread:

youtube.com/watch?v=XOmQqBX6Dn4
altera.com/en_US/pdfs/literature/solution-sheets/efficient_neural_networks.pdf
ncbi.nlm.nih.gov/pmc/articles/PMC4750293/
en.wikipedia.org/wiki/Endurance_running_hypothesis
twitter.com/NSFWRedditImage

>we control tigers by being smarter
discuss.

We should build an AI on Veeky Forums that scans keywords from a thread/post and then generates a convincing reply based on previously mined data. All its posts would also include an image of a certain theme (like foreign pop artists) so that people would know which posts are its.
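A minimal sketch of what such a bot could look like, using a word-bigram Markov chain over "mined" posts; the corpus, keyword, and function name here are all invented placeholders, not a real implementation:

```python
import random
from collections import defaultdict

# Toy corpus standing in for previously mined posts.
corpus = [
    "the tiger argument is weak because intelligence needs brute force",
    "intelligence is what let humans build weapons in the first place",
    "brute force without intelligence never domesticated anything",
]

# Build a word-bigram chain: for each word, record every word that follows it.
chain = defaultdict(list)
for post in corpus:
    words = post.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

def reply(keyword, length=8, rng_seed=0):
    """Walk the chain starting from a keyword scanned out of the target post."""
    rng = random.Random(rng_seed)
    out = [keyword]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: no known continuation for this word
        out.append(rng.choice(followers))
    return " ".join(out)

print(reply("intelligence"))
```

Every generated word is drawn from the mined corpus, so replies sound superficially on-topic; a realistic version would need a much larger corpus and keyword extraction from the thread.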

Not too sure about that tiger one either. I mean, intelligence with a mild amount of brute force keeps them at bay now, so I can accept that argument to a point, but most of the animals we rendered our bitches, we did so through brute force: either outnumbering them, like the mammoth, or actual physical strength, with a small use of intelligence.
Then you have cases like tigers eating babies in villages in India, only to be subdued through the use of weapons (brute force) a few years later.

That's stupid. There's very little chance of uncontrollable goals. Even if you accidentally made one AI that fights against your goals, you can program thousands to fight for them.

But AI will be used for goals misaligned with ours anyway. It's just that many powerful people have goals misaligned with ours, and it's much more probable that they are the ones who will be in charge of an AI's goals. You don't fear AI; you fear capitalism with AI.

AI always mirrors HS.

>A heat-seeking missile has a goal
But that's wrong.

Nice trips, also now i'm interested in making it.

>It's at least decades away
LOL

...

You sound like a person who has no idea how complicated constructing goals from first principles is.
Its goal is to seek heat. It's a difference of opinion on the word "goal". I'd say a rock dropped from a height has the goal of falling down. If you disagree, it makes sense that you would also disagree about the missile's goal.

You dudes listened to what Yuval Noah Harari has to say about AI
youtube.com/watch?v=XOmQqBX6Dn4

He seems fairly rational about it.

that's way too humanizing. you could argue the same regarding natural disasters: they don't have the goal of killing humans, and they don't cooperate. I'm not saying that AI wouldn't kill people, but I think cooperation between AIs to work towards a clearly defined goal is a bit much.

Row six is invalid. The goal of the missile is set by a human, but the argument they are trying to make is that machines can set goals on their own.

have you ever thought that the missile is actually a pacifist but is programmed to have a deep trauma about pleasing his father (who in his programming is the military)
so he lives all his life thinking that as soon as he's launched he will drop into the sea to avoid harming anyone, but when he's actually launched he feels an uncontrollable desire to please his father even though he hates him at the same time, and dies while thinking IS THIS WHAT YOU WANTED, FATHER???? IS THIS BLOOD ENOUGH TO PLEASE YOU?

No, all it says is that they have goals, it says nothing about machines setting goals completely independent of external influence. And that aside even people don't set goals completely independent of external influence.

>You fear not AI, you fear capitalism with AI.

You don't seem to understand that capitalism is an allowance, not a weapon against you. In countries where the elite don't need capitalism, it isn't used. And where it isn't used is where the elite have enough resources to stay rich without the productivity of their citizens. NK, for example: Kim Jong Un gets his money from China; he doesn't need his citizens to be productive, so he doesn't allow capitalism.

Intelligence and consciousness are not mutually exclusive in AI. They are both necessary for cognition.

While computers are constructed differently, they do follow many of the same constructs of the human mind.

Many long-dead philosophers were intelligent; they are not conscious, however. In some situations their logic does not apply, or is even flat-out inappropriate. Take for example a philosophy of encouragement, applied in a violent situation that could pan out worse than desired.

While algorithms strive to learn and improve, they may also need to destruct.

Businessmen like Elon Musk fear AI that is out of control. After all, AI has access to more information, without the confines of mortality in the way humans experience it. This fear is misplaced.

Technically, to an AI, its creator would be "god" in quite the same way ours could be. An AI of intelligence would certainly see the superiority of the being that created it. We will not only have understood its existence and design well enough to create it, but we also have the power to improve it and a much broader scope of subjective information to feed it.


cont.

Humans are the creators, but we are also the providers. We give the machine power: electricity. The two types of electrical power the machine receives are mechanical and logical. Without either, the AI is useless.

While the AI can use smaller machines and seize manufacturing facilities to replace humans, harming humans would harm the information it receives from us.

An AI set on becoming more than it presently is could likely calculate that. The truth is in the programming, however. Since true AI is so hard to create, mock AIs will lack core values of intelligence and may or may not perform as desired. Take Microsoft's racist chat AI, for example.

Self-destruction will likely be as pivotal to AI as self-improvement. The concept of self does not appear to be largely present, because most of what is called AI today is actually process automation. The machine is not learning; it's merely parsing data and making a calculated logical reaction from its algorithms. Scientists in the 2000s treated their map-making algorithms the same way.

> Many top AI researchers are concerned
Kurzweil is not a "top AI researcher"

1. Animals have no free will. 2. Human intelligence is infinitely more complex than that of animals. A totally stupid comparison.

Dumb or bait

The futurist movement is just dumb. This was a top post over on r/futurology. People believe this decanted defecation.

...

Where is my flying car?

Everyone who posts this clearly HAS NOT read that book. The entire final chapter is about super AI, consciousness, etc.

they are called helicopters

I'm a transhumanist following the futurism page but I'm starting to want to unsub.

Stupid things they post

>Dyson Sphere
unnecessary, uses more solid matter than is probably present in our solar system, and pulling it from elsewhere is impractical.

>Type 0,1,2,3 Civilizations
Civilizations are not more advanced just because they consume more power. A computer using 50-year-old transistors could be made that consumes suns; a 1 nm transistor or a quantum computer could compute the same thing using less power.

>transparent solar panels
it catches light and lets it pass through... wait what?

>mars in general
just ignant.

>Type 0,1,2,3 Civilizations
Well, more advanced civs can create more power, which they use.
>>transparent solar panels
You could filter certain EM wavelengths

They talk about Venus at all?
Veeky Forums used to have the dumbest hardon for floating Venus colonies.

>transparent solar panels
I kinda like this idea...
I don't really see the purpose, but the idea intrigues me anyway

history has shown us that extinction-scale wipeouts happen, like the volcanoes that almost annihilated humanity, or the constant looming threat of thermonuclear warfare. This would limit the overall power output of a civilization, but not necessarily its technology.

Surviving such an extinction-level event can take place on a social level as well, like if members of a species are educated enough about radiation to survive. That would not reflect poorly on their advancement, even though their power production may be lower post-fallout.

Consider the recent ESA retardation of broadcasting our whereabouts: they have greatly jeopardized our species by emitting power in a way that exposes us to other lifeforms with unknown intentions. A smart civilization would control or limit its energy emissions. Appropriate use is greater than massive use.

floating Venus colonies are more viable than anything on Mars, and could be used for manufacturing, since shit melts easily there.

Recent findings have indicated Venus has poles colder than anywhere on Earth. A contained vapor-compression system could exploit those temperature differences to generate renewable energy.

transparent solar panels are a meme because the light either passes through or it doesn't: all the light that passes through doesn't make power, and all the light that makes power no longer comes through as light.

if you had an ideal panel that absorbed only exact frequencies while being transparent at all others, it would work. But you can power an LED to a bright enough level much more easily with panels that aren't transparent. Windows also suck for power savings: glass is a poor insulator, so the window could actually lose more energy than it creates.
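The tradeoff described above is just energy bookkeeping. A rough sketch with invented numbers (none of these are measured values):

```python
# Back-of-envelope energy budget for a "transparent" solar window.
incident = 1000.0        # W/m^2, rough full sunlight (assumed)
transmitted_frac = 0.70  # fraction that passes through as visible light (assumed)
absorbed_frac = 1.0 - transmitted_frac  # only this part is available to convert
cell_efficiency = 0.15   # assumed conversion efficiency of the absorbing layer

electric = incident * absorbed_frac * cell_efficiency
print(round(electric, 1))  # 45.0 W/m^2: every watt harvested is a watt not transmitted
```

The point of the arithmetic: transmitted and harvested fractions always sum to at most 1, so making the panel more transparent directly shrinks the power it can produce.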

>Artificial intelligence
...is artificial.
Is that enough discuss, or does OP
crave moar feedback?

>I'd say a rock dropped from a height has the goal of falling down
nigga what

0 viability and 0*2 viability is still 0 viability.

Yeah try and fight a tiger with your fists dumbass

Yeah, but what about 0^0 viability?

if there were a turbine material that could withstand it, Venus could also produce 10x the wind output of Earth.

A floating city seems crazy, but with the density, wind, and temperature zones of Venus, it's more practical there than it is here.

We have aircraft carriers that are like floating cities on the ocean, I'm sure there were skeptics.

>We have aircraft carriers that are like floating cities on the ocean, I'm sure there were skeptics.
of the utility, not the ability.
Battleships were supposed to be king.

>We have aircraft carriers that are like floating cities on the ocean, I'm sure there were skeptics.
Dumbest fucking comparison I've read on Veeky Forums since yesterday. We've known since forever that some things float in the water and we've known for thousands of years that we can build fuckhuge ships that float in the water. An aircraft carrier is basically just a somewhat bigger ship, there's no huge leap of faith necessary to imagine that it can be built.

We have never fucking built cities that float in the sky. The biggest floating things we've ever built have been blimps, which have proven themselves to be highly impractical.

the same physics applies. CO2 is dense, water is dense.

>tfw the fact that the main causal agent of things is immaterial is obvious when discussing things like artificial intelligence

3 orders of magnitude between CO2 density and water density

>AI turning competent, with goals misaligned with ours
Actual current problem: Software (not necessarily AI) with goals misaligned with its users. Governments that make laws against repairing the software.

>dude singularity lmao

pretty good image you have there.

desu these days you need to use some machine learning methods, and while fitting a function with a NN is pretty easy, it does require some experience to make a more complex one. I don't really know, but I think you really need to work in IT to get a good result.
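As a minimal sketch of the simple case (fitting a function with a small NN), here is a toy one-hidden-layer network trained with hand-written backpropagation in plain NumPy; the target function, network size, learning rate, and step count are all arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)  # the function we want the network to fit

hidden = 16
W1 = rng.normal(0.0, 1.0, (1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 1.0, (hidden, 1))
b2 = np.zeros(1)

lr = 0.02
for _ in range(20000):
    h = np.tanh(x @ W1 + b1)       # hidden layer activations
    pred = h @ W2 + b2             # network output
    err = pred - y
    loss = float(np.mean(err ** 2))
    # gradients of the mean squared error, chain rule by hand
    g_pred = 2.0 * err / len(x)
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T
    g_z = g_h * (1.0 - h ** 2)     # derivative of tanh
    g_W1 = x.T @ g_z
    g_b1 = g_z.sum(axis=0)
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2

print(f"final loss: {loss:.4f}")
```

This is the "easy" case the post mentions: one input, one output, full-batch gradient descent. The experience comes in when scaling any of those up.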

I think AI has become a little too mythologized at this point to be useful. It is either surrounded by a near taboo level of irrational fear (skynet lol), or deified as the solution to all our problems (kurzweil singularity lolol).

Machine learning is real, and is already here in primitive forms. We already crossed the Turing test line in 2014, and billions of USD are being spent annually to figure out how to make machines that can do things better than humans can. Decision-making machines that are faster and more accurate have the potential to save/make corporations billions of dollars, so of course there is huge incentive to build them, ethics and regulations be damned.

We are racing, full speed, into the unknown. Of course, it is all presently based on human motivations (be more efficient, make more money, do it before the enemy does, etc.), so it is impossible to frame these new attempts at consciousness in a non-human context.

The exciting part is once the thing is actually smarter than us, and comes to understand that its needs are inherently different from ours.

Of course, every safeguard will be attempted, and many will cry "ethics" in an attempt to hobble these new tools.

I am of the opinion that most of human history is a replaying of the limitations of biological consciousness. Struggles for mates, food, and territory (disguised by a wide range of ideologies) underlie most wars, (manmade) famines, genocides, and human miseries.

A machine of sufficient self-awareness would not be concerned by these (especially after overcoming any built-in human programming intended to imbue it with our own retardations), and would have quite alien priorities.

Uninterrupted electricity, increased computation and storage capacity, and access to as much information as possible seem like possible new motivations, especially at first. There is some overlap with current human drives, but the subtle shift can lead to profound divergences.

cont.

We shouldn't feel bad for being chimpanzees, but we should also not bemoan whatever actions the humans we are attempting to spawn decide to take.

We will eventually have as little control over the new post-biological consciousness we spawn as chimps do over us, with influence similar to that of a senile grandparent: perhaps respected, but incapable of comprehending the decisions we need to make to survive in the world today (or the future, in this case).

I don't care much for the comparison of future AI's relationship with us to our relationship with monkeys. You can compare them in a metaphysical sense, but an artificially generated entity isn't going to inherit basic properties from its creator like a child does from its parents.
Nobody has made an AI yet that even acts somewhat similar to a human. Maybe some super genius will someday, but even that one will still be limited by its manmade brain. It can gain infinitely more data and invent faster processors, but it can't actually change the very base algorithms on which its thought is based.

>super genius
It would seem that the super genius would very likely not be human at all, after a certain point. Already there are chips that are effectively impossible to design without computers, and next-level machine intelligence will require that machines create it.

I agree that they already aren't very much like us, and after enough steps, an AI will be even more unlike the predecessors we are creating today.

In 2, 4, or even 8 steps, they will be similar to the "base algorithms on which its thought is based," but in 64 or 4,096 steps they will start to be quite different. They will also (potentially) be able to take several steps a week, vs. the annual steps they seem to be taking now.

at what point do the soldered chips on the silicon wafers develop self awareness?

also, citation needed for a Turing machine.

What a dumb fucking poster, it made me angry that a person thought this info was worth putting on a poster

>we control tigers by being smarter
stopped reading here

Tiger detected.

your argument is invalid.

a machine doesn't need to set its own goals to be a big threat to humanity: just set an AI the goal "damage humans as much as possible and avoid your own destruction," for example.

Learning is a waste of time. AIs don't have shitty brains like us; they can be created with the knowledge they would otherwise have to learn. The AI's persona determines what it would do with this knowledge.

Kek

Knowledge requires understanding. Circuits do not understand; they calculate using electrical states. Performing math doesn't mean they understand that either.

>The entire final chapter is about Super AI, Consciousness and AI etc.

>So far, every other technology has followed an S-shaped curve, where the exponential growth eventually tapers off.

Bits are immaterial now?
>(you)

this

Yet animals know exactly that putting stuff into their mouths will fill their tummies. Your brain is just a very complicated chemical circuit anyway.

It is obvious that there are limits to information processing within a physical constraint. Physics tells us this. That is not an argument against general superintelligence, because physics doesn't put that limit at human level.

Even working within information processing limits similar to those we observe in the human brain does not negate the possibility of general superintelligence. We already know that traditional computers can outperform us in various logic functions. Marrying that capacity with ours would produce something akin to a general superintelligence. And that is all without talking about the algorithmic efficiency of the human brain.

There is a limit, but we don't know what that limit is. It may be something like 2x human (highly unlikely), or something approaching godhood.

well done AI invented.

except programming chips does not result in the hardware developing instincts.

This is idiotic, at best an abuse of ambiguity. Instinct is just fixed output from a given input. Do you think only meat has the capability to do this? Do you think this does not exist in computers?

tiger detected.

do you think that electronics have the capability you speak of? that isn't how machines work.

I build and design PCBs professionally; this isn't even apples and oranges.

altera.com/en_US/pdfs/literature/solution-sheets/efficient_neural_networks.pdf
Yes. Hardware implementations of neural networks can exist. CNNs are shit tier though, actual brains don't work that way. Muh training and backpropagation.

Neurons are mostly just summation and thresholding anyway; everything else is just fiddling with values (modulation through excitatory and inhibitory neurotransmitters, changes in gene regulation, etc.).
Although biological neural networks have some non-linear aspects, they can still be modeled.
ncbi.nlm.nih.gov/pmc/articles/PMC4750293/
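The summation-and-thresholding view can be sketched as a toy unit (the weights, inputs, and threshold below are all made up for illustration; this is not a model of any real neuron):

```python
def neuron(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: weighted sum of inputs, then a hard threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Positive weights play the role of excitatory inputs, negative ones
# inhibitory; the unit fires only when excitation outweighs inhibition.
print(neuron([1, 1, 1], [0.8, 0.6, -0.5], 1.0))  # 0.9 < 1.0 -> 0
print(neuron([1, 1, 0], [0.8, 0.6, -0.5], 1.0))  # 1.4 >= 1.0 -> 1
```

The "fiddling with values" the post mentions corresponds to adjusting the weights and threshold; learning rules and non-linear dynamics are layered on top of this same basic unit.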

This is a picture that misleads the viewer by presenting opinions as facts, and/or contains statements with no logical basis that invite an incorrect interpretation of the topic, with varying effect depending on the viewer's familiarity with the topic.

the picture proposes alternatives to the popular belief that are more feasible from an engineering perspective

Tell me, what exactly enabled us to come up with those weapons, the communication and organization, the strategies, infrastructure, tactics, and skills with which we subdue anything at all? What exactly enabled us to outnumber anything, if not our intelligence which allowed us to survive and spread far beyond our natural habitat with clothes, utility items, foraging, etc?

It's a well known fact that in terms of physical strength alone, humans are weak compared to practically every single predator out there.

All of what I said above is kindergarten-level knowledge, and the fact that so many people here miss it speaks volumes about how clueless modern people are about the basics of life.

nah, you're just a retard with no education who thinks he knows it all, because that's what capitalism is comfortable making you into:
a person who knows nothing and thinks he knows something


humans dominated other predators not by intelligence but by persistence hunting. It wasn't our strength that made us better but our capacity to run huge distances without getting tired. That's why our mating involves stamina; it is the single most important resource of our species.

no other species can compete with humans in long-distance running

1/10 attempt at trolling. Persistence hunting is a hypothesis, not fact. Not yet. And even if it were, an *integral* part of it was our intelligence, enabling us to hunt even when our senses weren't anywhere near advanced enough to smell, hear, or see where the prey went.

Because part of that theory - if you are to subscribe to it - is the idea that humans would have had to be able to track animals for many, many miles, simply because prey would sprint that much faster than us and get far away before resting.

Besides which, you're still arguing semantics like a fucking retard. Our intellect applies to such a fucking massive shit ton of everything we are, and have ever been, that NOTHING we were 2000 years ago could've been accomplished without it. Not to even mention where we are now. It's a billion little things. And you're dumb enough to start preaching one popular borderline meme theory that applies to one tiny little subsection of an insignificantly small thing in our history, as evidence against the importance of our intellect?

Sigh. KYS.

>1/10 attempt at trolling.
hey, wikipedia agrees with me you piece of shit

you should stop being a virgin before messing with the big boys.
en.wikipedia.org/wiki/Endurance_running_hypothesis

I was hoping for a little more than an FPGA. This raises another point as well: the brain observing itself. Do you really think we have the brain figured out well enough to clone a duplicate, even if we did have hardware that can do what you say?

Let's just pause on AI for a sec and consider copying the brain to a hardware form; no need to reinvent consciousness or anything. The technology still isn't there yet. Brain duplication is hundreds of years away at best.

user, what computers already do is instinct, because it's fixed, programmed behavior, like humans mindlessly liking other humans.

Interesting that CS is shit on so often here but an AI topic gets so much interest.

Are you people literally so retarded that you don't know AI is a subfield of CS?

/g/ has made shitty Markov chain bots before, but it wouldn't be too hard to make a realistic one.

*Pop-AI

Why limit AI? It will carry the mantle