Every "who should the car kill" low IQ trash BTFO

consumeraffairs.com/news/waymo-gets-approved-for-first-commercial-self-driving-car-service-021918.html

This should be a lesson to all the low IQ types like Sam Harris and the "philosophers"/thinkers of Veeky Forums. In the real world your masturbatory questions don't matter. They have no effect on or reflection of reality. Self-driving car adoption is not limited by whether we decide, in some infinitesimal edge case, whether they kill 1 person or 2 people. AGI is not limited by whether or not a retarded shithead like Harris subjectively decides what he thinks consciousness is. These questions are for morons to think about.

Please, for the love of fucking GOD, before you say a question is super duper important to solve, at least analyze that claim first. The edge of edge case "who to kill" pretty much never happens in reality. It was a useless masturbatory thought experiment question. The same goes for the subjective analysis of consciousness. All such questions are better left to either intelligence-enhanced humans or AGI/AI. They are not something that needs to be solved ahead of time, yet retards like Sam Harris called them obstacles to self-driving vehicles while ignoring that by the time you get down the hierarchy to questioning who to kill, you've already eliminated 99% of deaths due to vehicles.

>It's important to solve this problem before self-driving cars can exist

BLAAAAAAAAAAAAAAAARP fuckheads

>It's important to solve this problem before self-driving cars can exist

Selected Quotes from large tech articles on the subject:

AXIOS: The biggest difficulty in self-driving cars is not batteries, fearful drivers, or expensive sensors, but what's known as the "trolley problem,"

The attempts to fully automate such a lethal technology have given not only inventors but also regulators, academics and journalists much to ponder, to a far greater extent than with earlier consumer technology breakthroughs. By far, the question receiving the most prominent discussion is the so-called “trolley problem”.

One of the most interesting aspects of self driving cars, and one too often passed over by car companies extolling the virtues of this new technology, is the ethical dilemmas inherent to autonomous vehicles. Yes, I’m talking about the undergraduate philosophical thought experiment called the trolley dilemma.

Giving machines the ability to decide who to kill is a staple of dystopian science fiction. And it explains why three out of four American drivers say they are afraid of self-driving cars.

The field has been particularly focused over the past few years on one particular philosophical problem posed by self-driving cars: They are a real-life enactment of a moral conundrum known as the Trolley Problem.

Other similar questions showing extremely low intelligence and a diseased brain:

- Should we research cures and medical procedures that are expensive?

- What is the effect on inequality of AGI?

- Discussing this topic will lead to genocide, because all genocides in history occur because of scientific papers instead of propaganda.

I think the simplest solution is to have no cars.
Or implement "societal value" to every person and use that as a gauge who gets to be prioritized during accidents.

hope you are trolling

Yeah, happens with everything sustainable. People over-philosophize/politicize the issue so no progress is made and people can keep goofing off. I wonder (((who))) could be behind it all.

The only correct answer is that the car adopt an "in for a penny, in for a pound" philosophy and immediately go about inflicting as much damage as it physically can before driving itself off a cliff.

The problem fixes itself, bro. There was this philosopher who argued we should have moral robots, right up until the day he realized moral robots would be bad: if we had moral robots it would be trivial to flip a switch and turn them into evil robots.

OP, do you realize that the "who should the car kill" question will eventually be tested in a court that has the power to ban their use entirely? Also, OP, the vehicles aren't actually driverless. There's a "backup" driver who has to take over when the vehicle inadvertently gets into a problem it cannot solve. Hilariously and unsurprisingly, these situations don't occur as often in Arizona as they do in large cities, because roads in AZ are built for cars only.

The concerns over "who should the car kill" exist because if we don't solve them, a Judge appointed during the Clinton years will. And history suggests a Judge will take the most risk-adverse action plausible, or a total ban of full auto vehicles in public property.

>Or implement "societal value" to every person and use that as a gauge who gets to be prioritized during accidents.
>tfw self-driving cars keep on trying to swerve into me

Well, Veeky Forums?

>ban their use entirely
That will be a pretty easy court case for self-driving car companies to defend themselves against since the one obvious point of comparison they have for this new situation is "what would happen if a human driver did the same thing?"
And the answer to that question is never "ban all human drivers."
Also even in its infancy self-driving cars are already looking much less accident prone than human drivers, so it'll be an uphill battle for someone to claim 1 death with a self-driving car should mean killing the entire concept of self-driving transportation when you can point to 40,000 other deaths caused by human drivers in a given year for the US alone.
Whenever a death happens early on in the expansion of this industry it'll get a lot of media attention and a court case, but the end result will at worst be some new regulation about how the companies making these cars need to add X feature to prevent that sort of accident going forward.
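For scale, a rough back-of-envelope using the figures above (and assuming the commonly cited ~3 trillion vehicle-miles driven per year in the US, which is only approximate):

$$\frac{4\times10^{4}\ \text{deaths/year}}{3\times10^{12}\ \text{miles/year}} \approx 1.3\ \text{deaths per }10^{8}\ \text{miles}$$

That per-mile rate is the baseline any self-driving fleet would be judged against.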

>risk-adverse
It's "risk averse."

Imagine if this line of questioning was something you had to answer on your driver's license.

No because the Judge will say "the car owner would have not gotten into this situation in the first place, or would have slammed on the brakes to emergency stop sooner". That's it, in one stroke full auto use is entirely banned on public property and it's only allowed as an operator aid with built-in alerter device (similar to ones used on trains).

In fact, this entire argument also applies to trains, just vis-à-vis electronic signalling. If the system fails there needs to be an operator right there as a backup to stop an accident. The court system is risk-averse.

>Also even in its infancy self-driving cars are already looking much less accident prone than human drivers

They aren't. Already-existing self-driving cars need the operator to take over at least once or twice per hour in a pedestrian-heavy city like SF, or a few times per day in a car city like Phoenix. If left on their own they'd have a 100% daily accident rate, which is far worse than any human driver (human drivers will have their license restricted after the second or third accident) and unacceptable to a traffic court.

Self-driving cars are a completely wrongheaded solution to a serious problem. The solutions to all the problems that cars have caused have existed since before cars did: namely bicycles, trains, trams and buses. American cities have been built around car ownership; if it weren't for this grave error of 20th-century planning, developing "self-driving cars" wouldn't be necessary. Car ownership should be restricted and urban planning totally restructured around walkability.

I recall running into a quiz from some serious American university which proposed to poll the public about specific hypothetical cases as part of their research. I made up a number of priority rules about whom to kill and ranked them (pets before humans, other people before passengers, as few people as possible, etc.), and completed the test. Upon completion the test page threw up an evaluation sheet lecturing me about some implicit bias I supposedly held against blacks and women, as evidenced by my choices.

>the test page threw up
as well it should

>The edge of edge case "who to kill" pretty much never happens in reality
/thread
people working on self-driving car safety will spend their time thinking about how to actually decrease the number of accidents, not about 'who should the car kill if there's an unavoidable accident'

an actual nice post

funny how many low IQ shitheads responded to this thread.

Well, if my goal is to stop Cheryl from accomplishing her goal, which is to kill 50 people, I don't have to do anything, because there are only 20 people in the trolleys total.

No dude, there are 50 people outside the picture who get killed if the trolleys go through. To prevent that you have to crash them into each other, killing 20.

Should've would've could've
Meanwhile technique marches on

The car doesn't choose to kill anyone, you fucking retard. Choosing to kill someone isn't part of the emergency procedures in the rules of the road. SDCs will just follow those rules. Why don't autists understand this?

Oh, wait. They don't fucking drive.

lol good point

Cars without a human driver will never happen. Just because they can be built doesn't mean they will. You could do most continental airplane flights without pilots, but that still will never ever become a reality. The driver will still be responsible for accidents with the car.

>You could do most continental airplane flights without pilots, but that still will never ever become a reality.
Because a pilot is actually necessary in case of some failure.
Meanwhile self driving cars can potentially get a lower failure rate than human drivers.

you're retarded
"who does the car kill" happens all the time in cases where swerving away from a pedestrian poses a risk to the occupant

Simple: The car
will kill Android users, who are poor and therefore less likely to sue,
and save iPhone users, who are rich and could sue.

t. the ghost of Steve Jobs' cancerous pancreas

>realistic solution to real life trolley problem

• reduce speed
• let pedestrians know they are in danger
• honk the horn and flash the lights
• announce on the loudspeaker that the AI-controlled car has no brakes (electric cars will have loudspeakers which simulate internal combustion engine sounds so that pedestrians can hear the car coming)
• do not change the direction of the car until you are 100% certain of which direction the pedestrians are fleeing in
>always protect the driver (which the AI has control over) and assume pedestrians will protect themselves (which the AI has no control over)

simple, see the sketch below
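A minimal sketch of that priority list as a control routine, assuming hypothetical sensor/actuator names (none of this is any vendor's real interface):

# Hypothetical emergency routine following the priority list above.
# Every car.* method is a made-up placeholder, purely for illustration.
def emergency_response(car):
    car.apply_max_braking()                      # 1. shed speed first
    car.sound_horn()                             # 2. warn pedestrians
    car.flash_lights()
    car.play_external_warning("brakes out, clear the road")   # 3. loudspeaker announcement
    while not car.pedestrian_paths_resolved():   # 4. hold heading until pedestrian motion is unambiguous
        car.hold_heading()
    car.steer_towards(car.lowest_risk_gap())     # 5. only then steer, never trading away the occupants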

What if your societal value is so high it causes an integer overflow?

This is retarded. If a moral robot knew you were going to flip a switch to turn him into an evil robot, he'd try very hard to kill you and prevent evil, and be completely morally justified in doing so.

Nah, they are also necessary in case of failure. Anyway, people are not going to allow 2-ton metal cubes driving around at 100 km/h and more. People are scared of technology.

>buying a car that would kill you in order to save a kid who walks into the street

more like

>buying a car that would kill you in order to save a person attempting to commit suicide

I think that in the case of a "trolley problem" type scenario the self-driving car should just slam on the brakes, assuming that doesn't put the occupants in danger (train track intersections etc.). This is exactly what a human would do, and a well-equipped self-driving car will respond quicker than a human would anyway. If a self-driving car is doing human life value calculations, and there is even the slightest chance it will throw me off a bridge to save 5 homeless crack addicts, there's no way in fuck I'm getting in that car, let alone purchasing one.

Honestly, how would a "who to kill" problem even be framed? The AI will just seek to avoid a crash if its sensors detect an imminent collision; it's not fucking omniscient. Do drivers make a decision like that? It's all a bunch of bullshit thought up by 110 IQ faggots who think they are special.

Again, you're assigning arbitrary assumptions to the system. The AI is just gonna avoid crashes to the best of its ability, whether it be by swerving aside, slamming the brakes or whatever, just like a human. Drivers don't solve ethical dilemmas on the fly. Trying to turn the car into some sort of road authority is a crapshoot since it has imperfect knowledge anyway.

Pop sci types and public intellectuals think about it for some reason, see Sam Harris.

But engineers, entrepreneurs and lawmakers don't. Throwing all the relevant real-world details out and framing some sort of philosophical problem where there isn't one is the sort of thing only brainlets who are insecure about their brainletude do.

agree

see "consciousness" debates.

As long as the problem isn't solved, it's undefined behavior, meaning there's no wrong answer. Using Occam's razor, if all answers are good answers, we should keep the simplest one (simpler programs are easier to test and debug too, so Occam's razor perfectly applies here). The simplest behavior here is continuing and killing the five people. Therefore, not crashing into the wall is the right answer.

The problem is solved, meaning there's only one good answer now.

>As long as the problem isn't solved, it's undefined behavior, meaning there's no wrong answer
Do you actually think AI works by programmers giving it an answer to every possible problem it might encounter in advance?

The whole thing is just framed wrong from the outset. Self-driving car decision making will be nothing like this, unless/until all vehicles are self-driving and connected to a single overseeing controller. It's much more like a human driver: using its sensors to create a picture of the environment, drawing all the relevant cues for selecting an optimal trajectory from imperfect and incomplete information, and using a neural net to approximate the best decisions. A human driver doesn't think in terms of ethical dilemmas when trying to avoid crashes, and neither do self-driving cars.
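A toy version of that "pick the least-bad trajectory from imperfect information" loop, with hypothetical helper functions standing in for the perception and prediction modules (real planners are of course vastly more involved):

# sample_trajectories, collision_probability and discomfort are hypothetical
# placeholders; the point is only the shape of the loop.
def choose_trajectory(world_estimate):
    candidates = sample_trajectories(world_estimate)   # e.g. brake in lane, nudge left, nudge right
    def cost(traj):
        # heavily penalize estimated collision risk, lightly penalize harsh maneuvers
        return 100.0 * collision_probability(traj, world_estimate) + discomfort(traj)
    return min(candidates, key=cost)   # executed for a fraction of a second, then re-planned

Note there is no "ethics module" anywhere in the loop; the ranking simply falls out of the cost weights.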

I am 100% ready for the era of self-driving cars. This is the only new technology I'm actually excited about.

Not working on self-driving cars, but actual AI/machine learning scientist here.

Why the fuck are OP and most posters in this thread so retarded?

Of course it is a desirable feature of a self-driving car AI to evaluate the best action in such a situation. If a person runs in front of the car and the only avoidance maneuver is to turn towards two pedestrians, the AI has to make a decision. The calculation of expected utilities in general emergency/damage control scenarios is an important aspect of engineering a self-driving car AI. It is not the only aspect, but successfully engineering such a system involves numerous considerations from many different perspectives, including moral philosophy for this particular problem.
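In expected-utility terms, a minimal sketch with made-up probabilities and utilities, just to show the shape of the calculation (the numbers are exactly where any "moral" weighting would enter):

# Each maneuver maps to (probability, utility) pairs over possible outcomes.
maneuvers = {
    "brake_in_lane": [(0.7, 0.0), (0.3, -10.0)],   # 30% chance of a moderate-harm outcome
    "swerve_left":   [(0.9, 0.0), (0.1, -50.0)],   # 10% chance of a severe-harm outcome
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(maneuvers, key=lambda m: expected_utility(maneuvers[m]))
print(best)   # "brake_in_lane" with these made-up numbers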

OP, I'm assuming you don't have a background related to what you're talking shit about with such confidence. Also, pro tip: just because you disagree with someone or have differing interests, it doesn't mean they have low IQ.

The simplest solution is that self driving cars are a fucking retarded idea.

If you don't like driving then take the fucking bus or a taxi, or ride a bike, or just walk. What the fuck do you even need a car to drive itself for? So you can fuckin full-on masturbate while you're going somewhere? Tinted windshields are still illegal, shithead. So you can get fucking plastered at the bar and still get home "safely"?
>i-i wasn't driving officer so it's okay
Yeah fuckin sure that would work.
So you can try to get rich quick being a shithead blaming an accident and death on the car manufacturer instead of your own actions?
>T-the car did it...
yeah, uh-huh, like any company hasn't already thought about that to make sure they won't get sued.

Humans are not even smart enough to make trains work on rail networks without crashing them into other trains or derailing. You have 1 axis of fucking movement, forward and backward, and a monitored rail network to know where all trains on all tracks are at all times, and you still get two trains going opposite directions towards each other on the same track.
1 goddamned fucking axis and you can't even get that right with automation that is constantly assisted and monitored by real people, so yes the obvious next logical step is to try automating 360 degrees of motion without assistance or monitoring.

holy hell. Too many people with money and influence in this world have spent waaaaayyy too much fucking time blowing their minds to fantasyland with science fiction and have begun actively trying to ruin the world by attempting to accurately emulate shitbrained FICTION.

Morals of who to kill? Who fucking cares. Your automatic self driving car should only kill you for being dumb enough to have faith in a fucking automatic self driving car during the era where even fucking trains crash on 1 axis of directional movement.

>The calculation of expected utilities in general emergency/damage control scenarios is an important aspect of engineering a self-driving car AI.
What do AI researchers think about cuckoldry?

3C

As AI scientists we have no interest because it is completely irrelevant to our work. As human beings with mostly normal opinions, probably most are not familiar with the concept because they are not basement dwellers obsessed with cuckoldry and other neurotic projections of personal insecurities onto politics, society and social relationships.

Your example is pretty bad though. If there are pedestrians around, it means the speed limit is 50 km/h tops. And since the car would in no case break the speed limit, it would be driving at a maximum of 50 km/h. This relatively low speed should be enough to simply brake and avoid any major damage.
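Rough numbers for that braking claim (assuming dry asphalt with a friction coefficient around 0.7, and ignoring reaction time, which a computer mostly eliminates anyway):

$$50\ \text{km/h} \approx 13.9\ \text{m/s}, \qquad d \approx \frac{v^{2}}{2\mu g} = \frac{(13.9)^{2}}{2 \times 0.7 \times 9.81} \approx 14\ \text{m}$$

So from legal urban speeds the car stops within a few car lengths, which is why the scenario rarely degenerates into a genuine "choice".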

All in all, the examples where a car would have to make such decisions are extremely contrived and are probably never, ever going to happen in real life.

This being said, there is always going to be a licensed driver committed to being in the driver's seat. Legislation will never let cars drive around that couldn't be immediately controlled by a human in case of emergency. So the whole issue is kind of solved by that, because whatever happens, the driver will be held accountable.

You could be right that driverless cars will not be allowed, but I am not so sure. What if they are empirically shown to cause almost no accidents compared to cars with human drivers?

In modern AI, we do not program an agent for every single possible scenario. That is not a viable approach, not even for detecting cats in images. For self-driving cars, we want to build systems that can handle real-world uncertainty and learn with generalization to take appropriate actions in any scenario, including emergencies.
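As a minimal illustration of "learn with generalization" rather than enumerating scenarios, here is a toy supervised sketch using scikit-learn with random placeholder data (real systems use far richer models and real driving logs):

# Fit a model on (observation, action) pairs and let it generalize to
# situations never seen verbatim. Data here is random, purely illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

observations = np.random.rand(1000, 16)    # 16 sensor-derived features per frame
actions = np.random.rand(1000)             # e.g. a steering command per frame
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
model.fit(observations, actions)
new_action = model.predict(np.random.rand(1, 16))   # a frame not in the training data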

While I appreciate the overall benefit of self-driving cars, I don't see why any man with a set of testicles should be excited by them. Driving is fun, estronaut.

I forgot to say that you are fucking retarded: maximization of expected utility is a technical concept fundamental both to reinforcement learning in AI and to other disciplines.
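For reference, the standard reinforcement-learning objective is literally a maximization of expected utility (expected discounted return):

$$\pi^{*} = \arg\max_{\pi}\ \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty}\gamma^{t} r_{t}\right], \qquad 0 \le \gamma < 1$$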

Yeah, but even if the software is perfect, hardware failures can still happen. So you are going to need a human driver anyway who can take control of the car when he notices something is wrong.

Now it might seem unfair to some if you get locked up for an accident that was caused by the car AI, but nobody was forcing you to let the AI drive for you. In other words, if you turn on the autopilot, you accept that you will be held accountable for whatever the autopilot is going to do.

This is very likely what legislation concerning self-driving cars will look like.

you are a moron

Waymo's architecture is for a remote driver center to take control in edge-case situations, while the onboard system handles the rest. The cars will be networked and have a central command overseeing them all.

So the case of "human driver has to take over" will happen, but it won't be a human sitting in the car. This way they can compete in the market ahead of having a 100% solution.
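A crude sketch of what that onboard/remote handoff could look like, with entirely hypothetical interfaces (this is not Waymo's actual API):

# If the onboard planner is confident, drive; otherwise perform a minimal-risk
# maneuver (slow down, pull over) and ask the remote center for guidance.
def control_step(car, remote_center, confidence_threshold=0.8):
    plan = car.onboard_planner.plan()
    if plan.confidence >= confidence_threshold:
        car.execute(plan)
    else:
        car.execute_minimal_risk_maneuver()
        guidance = remote_center.request_guidance(car.snapshot())
        car.execute(guidance)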

If there is hardware damage on the car there won't be much remote driving possible. And hardware damage is going to happen, 100%, regularly.

They can't handle anything out of the ordinary.

It's going to break down and the people will be stranded in Phoenix for 10 days.

>something breaks down
>nearest available taxi comes to pick them up

tough shit, took me 10000000000 IQ to come up with

If you have an argument m8, make it at least decent. Pretty sad

inb4 someone says "We are simply going to build self-driving cars that never break lol what da problem"

I don't know if you are serious or retarded, but hardware damage can cause an accident before the car finds itself a safe spot to park and let the passengers out (if it is even still capable of doing so).

I can't handle how stupid you are.

There is an entire research field (safety-critical systems) dedicated to the type of issue you are raising. Also fyi, your concern also applies to standard modern cars, airplanes, nuclear reactors, dam gates and countless other technologies.

I'm really getting the feeling that people in this thread think that (computer) scientists are intellectually lazy idiots. We're thinking really long and hard on these things, sometimes we even come up with very clever solutions and ideas. In general, trivial issues that can be identified through superficial critical thinking have probably been addressed long before we got to the deeper issues that we are actually busy working on.

>old dude without a driver's license gets into his self-driving car
>doesn't notice any hardware damage
>"drive me to the grocery store please"
>car starts
>old dude immediately notices that sensors aren't working properly because the car is driving in the middle of the road
>he can't do anything
>he's trapped in the metallic death box
>he is desperately trying to call the emergency hotline but he has no reception
>a curve is coming up
>car is in the middle of the road
>some unlucky bastard crashes into the car
>both die

>car has nuclear reactor in it
>driving down street irradiating everyone
>tons of people die
self-driving cars are a mistake

>family of four is happily driving in a self-driving car towards their vacation destination
>oh oh, what is that? the road ahead has been renewed, but the white stripes aren't painted on yet
>it's also kind of foggy and the view isn't very good
>car loses orientation
>dad tries to take control of the car but there isn't even a steering wheel
>"daddy, are we going to die?"
>car crashes into a tree
>the family dies

>it's also

>renewed roads
>deviations in road appearance
>fog and view obstruction

>I'm really getting the feeling that people in this thread think that (computer) scientists are intellectually lazy idiots. We're thinking really long and hard on these things, sometimes we even come up with very clever solutions and ideas. In general, trivial issues that can be identified through superficial critical thinking have probably been addressed long before we got to the deeper issues that we are actually busy working on.

I forgot to mention
>appropriate action sequence during lost orientation

But seriously, do you really think we are that retarded as a community?

>Dude we are COMPUTAH SCIENTIZ!!! WE THINKIN EVERTANG TRUUUU!!! TRUST US MAN NOTHING CAN GO WRONG EVEN IF HARDWAR DAMAG!!!!
>What do you mean your phone crashes everytime you download an app?

>appropriate action sequence during lost orientation

Okay man, I really wanna see what that looks like in the middle of a busy highway.

BUT WHAT IF TWO PEOPLE HAVE THE SAME VALUE? DOES IT FLIP A COIN?

you aren't supposed to use English with apes, just make monkey noises and wave back

Just make it decide whether or not to kill every pedestrian in its lane with a digital coin toss.

It's only what's right for the planet.

Your phone is obviously not a safety-critical system.

What to do if orientation is lost in the middle of a busy highway clearly belongs to a difficult class of problems, and you can be sure as fuck there are a lot of computer scientists (maybe even together with a philosopher specialized in the trolley problem) working on solutions.

>Okay man, I really wanna see what that looks like in the middle of a busy highway.

Probably what you'd expect a normal driver to do if they have to make an emergency stop. That usually seems to be the answer for these silly scenarios.

The solution is simple: there needs to be a human ready to take over whenever the car doesn't know what to do, or whenever it thinks it knows what to do but would actually endanger humans by doing it. There is also no reason whatsoever to do it any other way. It's not like we don't have enough humans.

*A normal driver who for whatever reason isn't seeing the road anymore

You just agreed with me using a tone of disagreement.

Oh right, I totally forgot that drivers are absolutely immune to medical conditions that could impair their ability to drive a car.

This is obviously not a realistic or scalable solution. Those situations could occur at a moment's notice and the car might have to take an action in a split second. Should every self-driving vehicle have an employee on hold? Have you thought about how emotionally traumatic and legally challenging such a job would be? How would this work if the majority of cars are self-driving in the future?

Anyway Veeky Forums, I am disappointed with you and with myself for spending way too long in this thread. Finished masturbating to some cute /gif/ girls and now it's time for sleep. There's science to do tomorrow

What? Almost every adult knows how to drive a car. How is that a realistic solution? WTF?

You just need to make it so humans are able to override the AI's decisions at any time, and make it mandatory that a person who is capable of driving sits in the driver's seat, since he will be held accountable for the car as if he were driving it himself. And if the driver chooses to watch a movie instead of watching the road/car, then it's his fault if the car fucks up and kills somebody. It's really not a difficult concept to grasp.

*How is that not a realistic solution

The car does choose to kill someone when it has no other option, because it broke down and got itself into an unfixable situation.

A Judge will see right through this (as they did with the railroads) and demand 100% manned operation unless all external elements are removed (e.g. use on private property).

The thread about how brainlet philosophical questions are pointless
devolves into said brainlet monkeys discussing brain dead """"ethical"""" hypotheticals

>No because the Judge will say "the car owner would have not gotten into this situation in the first place, or would have slammed on the brakes to emergency stop sooner".

Well, they would instantly be proven wrong by the mountains of data suggesting that autonomous vehicles have significantly better reaction time than literally any human on the planet, thereby making this ruling blatantly false. Talk about an easy appeal process, that's any lawyer's wet dream.