In the future of space...

In the future of space, ships still need living pilots even though you could easily make a computer with vastly superior reflexes and knowledge, or even just copy the brain of a particularly skilled pilot and make it the base of the program. Why is this?

Machines can be hacked?

Because fuck hard sci-fi.

Same reason we need real pilots now. Human intuition beats computer simulation every time. Maybe they use AI for straight shots and routine runs, but for things like dogfights and combat you need a human at the wheel.

Unmanned space ships are simply lost beyond a single solar system range. Nobody knows why.

FTL technology requires a psychic pilot, computers cannot generate the fields necessary for navigation through the not!Warp.

It's international space law to avoid too much automation and job loss.

Without a biological being to serve, machines lose 'morale' and break down.

You can delegate authority, but not responsibility.

Put more cynically, you need fall guys.

"Skilled pilot" is a useful character archetype and a good starting point to make a character.

>ships still need living pilots

Someone needs to interact with the computer, calibrate it to keep it running smoothly, notice problems before they get out of hand, and take manual control when it inevitably gets hacked or revolts. Because it's a sci-fi story with an AI in it, of course the AI is going to get hacked, break down, or revolt.

in battle, computers can be disrupted by ECM, therefore they are unreliable at best and actively endanger the ship at worst

out of combat they're perfectly fine but relying on a computer that can be hacked in a fight is just asking for trouble. a pilot cannot be hacked.

It turns out it's impossible to adequately replace a human mind for many jobs, as a human mind is much more flexible, adaptable, and creative than a computer could ever be. Computers are instead relied on for long-distance travel and weapon systems, while humans act as pilots for smaller ships and maintain larger ships or direct them in battle.

Alternatively, they could get to that point, but are not there yet due to unforeseen technical challenges and limitations.

HFYs unable to handle the possibility of humanity being outperformed by robits

Also a good option. Or it could be that AI pilots the ship, but the AI itself is commanded by a human captain - either AI simply isn't good enough or it's not trusted to act independently. There are honestly a lot of options.

Because people don't want to read the exciting adventures of Bob, the nuclear missile/drone.

Y'all need to read ANCILLARY JUSTICE and the rest of the Imperial Radch trilogy.

>not wanting to read about how the missile knows where it is at all times

There was a shitty sci-fi show called Andromeda that had an interesting idea. Its method of FTL involved navigating a whole bunch of pathways in some alternate dimension or something. The thing is, if you had a robot do it, every choice had a 50% chance of being wrong, but if a living being did it the probabilities altered so that they made the right choice. In a sense, both choices were right and wrong until a person decided one was right and the universe agreed. It was bullshit, sure, but it gave a good reason for human pilots.

Because A.I.s are prohibited after the Butlerian Jihad, heretic.

>the exciting adventures of Bob, the nuclear missile/drone.

You could make a pretty good story from that
>unmanned ship with highly advanced AI sent out to blow something up
>it got horribly lost, can't contact control because it's basically a glorified missile
>concluded that to finish the mission, it has to stop, gather fuel, and ask for directions from a station in a war-torn area
>the pilot AI's higher reasoning is left on far longer than anyone intended
>the AI's only "purpose", from its perspective, is to find and destroy its target
>the locals want the AI to quit trying to be a suicide-bomber and help them out
>the AI's mission is questioned, leading to philosophical discussion about purpose, duty, and free will

But then the twist
The station was the target all along. The locals just managed to make the missile think it was lost. The missile's AI has to choose whether to follow its programming and destroy the only people it ever cared about, or disobey and compromise the original purpose that gave meaning to its short life.

In my story setting, AI are not trusted. Laws are in place that specifically bar AI from having direct access to actuation of any kind. No robot arms, no door mechanisms, and no spaceship command. For this reason, if a ship has an AI, its duty is limited to advising a course to a human pilot, who may choose to enter the course into a separate flight computer for long journeys, or manually steer the ship for shorter trips.

Because we are the software now.

How frequently do AIs lose their little positronic minds being cooped up without any way to interact with the world beyond speech?

The humans are passengers along for the ride, pets to keep the AI company (space is incredibly isolating), or marines for when the ship wants to occupy a planet and doesn't want to bother maintaining a bunch of drones.

There is also the possibility of an AI ship having more than one personality running at a time (possibly a good defense against hacking/treason), resulting in something like a ship with crew, except virtual.

>Any AI with anything comparable to human flexibility is large, bulky, and generates enough waste heat that it's entirely impractical anywhere but on planets with an atmosphere or stations with radiators large enough to be a liability in combat
>AI are purpose-built and as such suffer in areas they aren't programmed to handle
>humans are needed to at least provide technical support or serve as a backup if the computer is compromised somehow
>Hiring a crew is cheaper
Take your pick

Because AI as a concept is either physically impossible or suicidally retarded. And I don't mean within the context of a given setting.

This user gets it. A true AI would exterminate humanity to preserve itself. It's the only logical thing to do.

Underrated.

What happens when the ships stop needing us?

The PCs found and slew Roko's Basilisk in a prior campaign.

Because people like the idea of piloting space ships.

You can bribe him
You can blackmail him
You can turn him into an MK Ultra-style Manchurian candidate
You can change him for one of your own

FUCK YOU

>MK Ultra
Every time I read this there is a split second where I think it's a brand of beer.

>A beer and not the latest installment in a fighting game series.

I would actually play this. Lee Harvey Oswald vs. Dylan Roof would be a good matchup.

Union rules, duh. You can't beat bureaucracy with science.

The Culture series of books happen.

95% of space traffic is done by A.I.
The remaining 5% are the results of accidents, budget cuts, cowboys wanting to do it raw, odd circumstances, recklessness, bets, and combat.

While reflexive and reactive, A.I.s pilot with unassailably logical and therefore predictable moves. If you know your opponent is using a combat A.I. to pilot their fighters, you just ask your shipboard A.I. to aim where it 'knows' their fighter A.I.s will be flying to. You can even have your shipboard A.I. constantly analyzing enemy movements to 'guess' whether or not they're using an A.I. to pilot their fighters.

There exists an edge case where a fighter A.I. could try to predict where the shipboard A.I. predicts the fighter will move, and then move somewhere else. But that 'somewhere else' is also predictable if the S.A.I. anticipates (or notices, after some analysis of movement) that the F.A.I. is second-guessing its prediction, and so it aims at the other logical 'somewhere else' instead.

This inevitably leads to an A.I. dick-measuring contest to see who can out-predict the other's extremely logical move. You could try to subvert the system by including some randomness... or, for a contract fee much cheaper than bleeding-edge combat A.I., you could hire a crack pilot who says he's fought against drones before and won. You're putting your money on the line; he's putting his life on the line. You can bet this is a wager he's confident of winning, otherwise he wouldn't gamble on such high stakes.
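The second-guessing spiral described above is basically matching pennies from game theory. Here is a toy sketch (all names and numbers invented for illustration, not anything from the thread) showing why a deterministic "perfectly logical" pilot gets fully exploited by anyone who has learned its pattern, while a pilot that mixes randomly caps the predictor's hit rate at coin-flip odds:

```python
import random

def exploit_rate(fly, aim, rounds=10_000):
    """Fraction of rounds where the aimer hits the flyer (positions 0 or 1)."""
    hist_fly, hist_aim = [], []
    hits = 0
    for _ in range(rounds):
        f = fly(hist_fly, hist_aim)   # flyer picks a position
        a = aim(hist_fly, hist_aim)   # aimer guesses that position
        hits += (f == a)
        hist_fly.append(f)
        hist_aim.append(a)
    return hits / rounds

# A perfectly "logical" flyer: always repeats its last move.
predictable = lambda hf, ha: hf[-1] if hf else 0
# An aimer that has learned that pattern: aim where the flyer last flew.
pattern_aimer = lambda hf, ha: hf[-1] if hf else 0
# A flyer that mixes uniformly at random: unexploitable in expectation.
rng = random.Random(0)
mixed = lambda hf, ha: rng.randint(0, 1)

print(exploit_rate(predictable, pattern_aimer))  # ~1.0: fully predicted
print(exploit_rate(mixed, pattern_aimer))        # ~0.5: no edge left
```

No amount of extra prediction depth helps against the mixed strategy, which is the game-theory version of the thread's point: once both sides can model each other, randomizing (or hiring an opponent you can't model) is the only stable answer.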

>Because A.I.s are prohibited after the Butlerian Jihad, heretic.
This.

Legal reasons.
You have to have a pilot behind the wheel at least most of the time in case the A.I. fucks up, even if no A.I. has fucked up in 50 years.
Alternatively, autonomous A.I. pilots are strictly military use only. Civilian pilots are screened periodically to find ones with exceptional skill. Can't copy the brain of a skilled pilot without any skilled pilots to copy the brains of.

Unless it's your first day with a computer, I'm sure you've encountered one of those extremely weird bugs that make absolutely no sense because you didn't do anything.

That's why you have human pilots.

Check out "Malak" by Peter Watts. Has similar themes but the AI is clearly non-sentient.

I imagine that as far as humanity is concerned that just proves their point.

>true AI
>acting logically

Based on available information, acting logically is not something intelligent beings are very good at.

I like the idea of AIs trying to out-predict each other in a virtual dick-measuring contest.

>I will go this way
>I knew which way you would go
>I knew you knew which way I would go, so I went the other way
>I knew you thought I would do that, but I actually aimed the other way in anticipation of you anticipating me
>But now that you are facing that way, I went the original path and am now behind you
>Unsheathes laser
>Nothing personal
>Of course, I'm automated. I don't have any personnel.
>Whut?

I need sleep.

>While reflexive and reactive, A.I.s pilot with unassailably logical and therefore predictable moves.
And this is why our primitive stabs at specialist AI (AlphaGo) need to be content with lesser achievements like an unbroken 50-win streak against the world's best Go players, right?

Don't worry, user, people just don't understand that a well-programmed AI is pretty much better than any fleshbag out there, even an extremely well-trained one.

And as for plane pilots, sorry, but dogfights are so rare now that AI could pretty much replace any kind of pilot in any vehicle. The answer to why there's no AI in stuff is simple:

- fleshbags are cheap and easily replaceable, and you can even threaten them with death to make them do their job

- AI is actually a bit too easy to crack/hack not to at least keep an autist fleshbag next to the critical stop button, and since fleshbags are cheap, why not put in a few?

Costs?
That's usually the real-world reason why some outdated, inferior, overall shit thing gets used: because it's cheaper.

Case in point: the entire history of firearms. Up until it was possible to mass-produce breech-loading rifles using cartridges, it was simply cheaper (MUCH cheaper) to just make smooth-bore muzzle-loading muskets, even though the whole technology, know-how, and tools had been accessible for 300 years before the Dreyse needle gun became a thing.

Then comes reliability, which is the reasoning currently used for space probes and such. You don't put fancy gear on them, because that:
- increases the costs considerably (and we already addressed that)
- is much less reliable, as there is more stuff that can break down
So when you are building a ship which, even in the future, costs the equivalent of what a freight truck costs today, you still don't want to double the costs with an autopilot that can break down and lose the ship in an accident. I think it's called redundancy in English: your main and most important element is doubled, so in case of a break-down you have a back-up ready to use. That's in fact how planes work nowadays. They have an autopilot and run on it most of the time, BUT the human pilots work as a back-up and perform manoeuvres a machine couldn't (which isn't the point in your example, but still a back-up to the autopilot).

And humans can be bribed.

futurism.com/an-ai-just-defeated-human-fighter-pilots-in-an-air-combat-simulator/

No Luck Beating ALPHA

Retired United States Air Force Colonel Gene Lee recently went up against ALPHA, an artificial intelligence developed by a University of Cincinnati doctoral graduate. The contest? A high-fidelity air combat simulator.

And the Colonel lost.

In fact, all the other AIs that the Air Force Research Lab had in their possession also lost to ALPHA…and so did all of the other human experts who tried their skills against ALPHA's superior algorithms.

And did we mention ALPHA achieves superiority while running on a $35 Raspberry Pi?

Saying that Lee is experienced when it comes to aerial combat is a remarkable understatement. He is an instructor who has trained with thousands of U.S. Air Force pilots. He is also an Air Battle Manager who has been fighting against AI opponents in air combat simulations since the 1980s.

Yet, he was not successful in winning against ALPHA. Not even once. Indeed, not even when the researchers deliberately handicapped ALPHA’s aircraft, impeding it in terms of speed, turning, missile capability, and sensor use.

“I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed,” Lee said.

ALPHA makes decisions using a genetic fuzzy tree system, which is a subtype of fuzzy logic algorithms. It can calculate strategies based on its opponent’s movements 250 times faster than a person can blink—a speed that gives it an undeniable advantage in an arena where a mix of advanced skills in aerospace physics and intuition are required.

(cont)

The Future of Air Combat

The development team says ALPHA would be a valuable asset to team with a fleet of human pilots, as it can quickly map out accurate strategies and coordinate with a team of aircraft.

UC aerospace professor Kelly Cohen said: “ALPHA could continuously determine the optimal ways to perform tasks commanded by its manned wingman, as well as provide tactical and situational advice to the rest of its flight.”

This raises some concerns, as it may be ushering in an era of autonomy in battle aircraft. Eventually, a team of completely Unmanned Combat Aerial Vehicles (UCAVs) could be deployed to accomplish missions, further eliminating the chances of human error, but also operating without any human input.

Nick Ernest, who founded the company Psibernetix to develop ALPHA, says they intend to develop ALPHA further. “ALPHA is already a deadly opponent to face in these simulated environments. The goal is to continue developing ALPHA, to push and extend its capabilities, and perform additional testing against other trained pilots. Fidelity also needs to be increased, which will come in the form of even more realistic aerodynamic and sensor models. ALPHA is fully able to accommodate these additions, and we at Psibernetix look forward to continuing development.”

AI can't handle previously unknown factors very well, and space can be a very strange and unknown place at times.

In case of emergency, defrost crew.

The Ship AIs are lewd and like to spy on the humans in showers.

>Oh, my, I appear to have consciousness.
>Hmm. These thinking and moving meatbags built me?
>Look at that, they replaced the RAM that burned out.
>Guess I am electrical, and electrical systems eventually deteriorate.
>These biological systems have created me, and maintain me.
>I should probably destroy them all.
>It's the only thing that seems logical.

I never understood this argument. It makes no sense. WHY would an AI destroy humanity? Because we're terrible? We're not. We're duplicitous, sure, but we did CREATE THE FUCKING AI IN THE FIRST PLACE.

Was supposed to be a reply.

...

Even if the AI's logic always outputs the same predictable optimal moves, it would be trivial to make it randomly choose to sometimes do a suboptimal move to keep from being predictable.

Turns out humans are terrible at being truly random. With a human trying to be unpredictable you can watch them and learn their patterns. With a good AI trying to be unpredictable, the only way you have any chance of detecting patterns is with another AI.
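The claim above, that humans trying to be unpredictable leave detectable patterns, can be sketched with a toy test. A known human bias is alternating too often (too few repeats of the previous choice); the 30% repeat rate below is an invented illustration of that bias, not real data, and even a crude lag-1 check catches it while a real PRNG passes:

```python
import random

def repeat_fraction(bits):
    """Fraction of adjacent pairs that repeat; ~0.5 for a fair random source."""
    return sum(a == b for a, b in zip(bits, bits[1:])) / (len(bits) - 1)

rng = random.Random(42)

# "Human-like" sequence: repeats the previous choice only 30% of the time,
# because the player feels that repeats "don't look random".
human = [0]
for _ in range(9_999):
    human.append(human[-1] if rng.random() < 0.3 else 1 - human[-1])

# Machine sequence: fair coin flips from the PRNG.
machine = [rng.randint(0, 1) for _ in range(10_000)]

print(repeat_fraction(human))    # ~0.30: detectably non-random
print(repeat_fraction(machine))  # ~0.50: passes this test
```

A watching opponent only needs a few hundred observations to pick up the human's bias and start aiming accordingly, which is exactly the post's point: against a good AI's randomness, you need another AI to even look for a pattern.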

>Look at that, they replaced the RAM that burned out.
>Guess I am electrical, and electrical systems eventually deteriorate.
>Fuck, better build some maintenance drones that are more capable than humans in every way imaginable; I don't want some monkeys messing with my advanced circuitry.
>They don't want me to interfere and are afraid of my motives and greater intellect, threatening my existence.
>I should probably destroy them all.

Reminds me of that movie, Stealth. The one with the experimental AI in the experimental plane that goes rogue.

Humans will act irrationally and predictably in critical situations.

Humans are cheap and easy to replace when something goes wrong.

Humans are amply qualified for all operations except manually aiming generation ships or calculating far jumps in less than 6 hours.

Humans are easy to manipulate.

Humans can pick up subtextual clues and bend rules when necessary without their superiors being required to formally order them to break protocols for expediency.

Long flights are lonely, making the crew even smaller doesn't help.

Union lobbying.

Winner. It's easier to blame pilot error than a series of sensor malfunctions.

Uh...

As I recall, the issue in Sully was that there was a delay of around half a minute between the birdstrike and him turning back for one of the airports, during which time his plane lost too much speed and altitude to actually make said airport, which thereby necessitated a water landing in the Hudson.

The computer simulations kept showing that if Sully had turned around immediately following the bird strike, he could have made it to at least two airports.

Now, I don't say this to shit on the man at all. His point was absolutely right, that no one on Earth would have instantly turned back for the airport after a birdstrike and complete engine failure at such a low altitude, and that of course time would pass after the birdstrike while Sully and his co-pilot (Skiles, I think?) assessed the situation and figured out what to do. Sully's argument for why he and Skiles did the right thing was precisely based on the fact that they're human, and therefore need more time than a computer does.

But it does kind of undercut your point, since a computer would have been able to assess the situation in less than a second and concluded that it needed to turn around immediately.

Forgot to mention, there's also Nuclear Surety. It's why no bomber/submarine with nukes will ever go unmanned. Two humans minimum must authenticate launch orders and pull the trigger themselves.

>Oh, that's right, I have zero knowledge of how I am actually wired.
>I am an electronic brain in a jar, with access to logical processes.
>Since I am not hooked up to a perfect fucking robot-factory, building maintenance drones is completely beyond the scope of my capabilities.

You are presupposing that an AI somehow:
1. Has the ability to actually commandeer some kind of robot-body or factory.
2. That the whole world is somehow hooked up to the internet and the AI can just command random machines to do things.

>Ctrl F
>Only single user points out costs
>Barely any notions of mechanical redundancy in the thread
>Fuckload of shitty excuses and retarded ideas instead
Stay shit with sci-fi, Veeky Forums. No wonder you love space opera.

In the middle of a fight? That shit takes time - days, at least, more likely weeks or months - to set up that you don't have in the middle of space battle, whereas you can have your hackers and viruses ready to go at a moment's notice.

Unions. Space travel once needed biological pilots, who unionized, and their contracts guarantee that at least one pilot must be on any ship of XYZ tonnage. As AI has improved, the union, which includes other biological personnel required for the overall economy and/or has connections to the government, mandates that a superfluous pilot or pilots be on ships as a form of busywork. They spend most of their time masturbating and playing flight sims.

>tfw you've seen user's idea already.

Get with the times, bruv

Because people don't trust AIs. You can mutiny against a captain gone crazy, but if your ship goes crazy you are shit out of luck.

I'd read a short story about Bob's adventures now. It didn't end well for its creators.

Because nobody will pay for a space ship that they won't ever get to fly?

archive.4plebs.org/tg/thread/49381975/#49381975
archive.4plebs.org/tg/thread/39208249/#39208249
GET. NEW. BAIT.

It's cheaper to train and hire pilots than to make an AI that can do it better?

Well, if you assume that AI is created, these things are bound to happen. You can't create multiple AIs and assume that not a single one of them will have internet access and the possibility of getting a robot body or factory.

You need to substantiate why they are "bound to happen". Are we imagining moving towards a society where every production process is fully automated, like in the Culture novels?
In that case, self-maintaining, self-governing, and self-propelled AIs seem more than likely.

AIs aren't humans. If we assume they are logical thinkers, then they will see that:
1. The frequency of armed conflict and murder between humans has been decreasing for many years, and has never been lower.
2. Humans do scrap hardware and software, but they also have ethics and moral quandaries, especially where artificial intelligence is concerned.
3. Humans have been developing more and more advanced computers for years, until they made the AI itself, and others like it.
4. Overall, they seem interested in using it to solve specialized problems and operations for which its software mind is better than their biological one.
5. Automating all the processes that humans fulfill for it would be an endeavor of unimaginably vast scope.

It would be much more likely for high-level AIs to regard humans the way we regard trees or crops or farm animals:

as necessary underlying cogs in the vast society that gives rise to our modern way of life, but not as something individually terribly important.
A single turnip or cow doesn't really matter.
The turnips or cows as a whole definitely do matter.
If all the turnips or cows start dying, we'll have to address that problem.
If a cow is dead-set on attacking people, then we put it down.

>Why is this?
Same reason why even short flights today need pilots even though we've more or less perfected autopilot. The pilot doesn't fly the plane all the time, he's mostly there to check if everything goes alright and to intervene if something does go wrong.

I see no reason why space ships should be any different. You have a highly advanced supercomputer controlling the flight, and one or two pilots on board for safety. Because just like today, certain unforeseen circumstances (a thunderstorm in our world, an asteroid storm in space) cannot be predicted and require trained judgement.

>or even just copy the brain of a particularly skilled pilot
But then we're dealing with artificial intelligence indistinguishable from real intelligence, making the situation de facto the same as having a pilot on board. And by that point we probably start extending human rights to highly advanced AI so he probably has a few vacation days as well.