Are there things in computing or programming that just work and we don't know how they do it yet...

Are there things in computing or programming that just work, and we don't know how they do it yet? Like any mysteries or something that can't be figured out?

I can't imagine there possibly could be, since we made all that stuff from scratch, but I won't know unless I ask, I guess.

Yeah, a lot of stuff is just guesswork. The best example I can think of is real-time applications: you don't really know what's going on with the data at any given point, you just have statistics to go on. There's noise in computing that we can't completely explain or predict. We just know how to work around it, at the expense of efficiency.

damninteresting.com/on-the-origin-of-circuits/

This might interest you

What you are describing is in fact happening now, with some regularity.

It goes like this: programmers set up a computer, or a series of computers, to do some very complicated and tedious/onerous task. Prove a theorem, trade on the market, etc. And a complex, rich behavior emerges.

Then the computers do something that is either a pleasant surprise or an unwelcome one. They cause the market to start crashing, they prove the theorem, or they accurately model the physical phenomenon, and reproducibly so when the program is run again.

/And in each case, the programmers themselves cannot understand in specific detail exactly how or why it is that the computers managed to do what they did/. Or, /the programmers themselves can't understand how it is that the computer is right/. But right it is.

This goes directly to your completely reasonable suspicion: humans programmed the thing, so of /course/ they should be able to predict or explain its behavior! In principle, I would expect the same thing: you ought to be able to reverse-engineer the whole system, given enough time. But I've heard the opposite often enough now to believe that it's a real thing, an emergent property of these systems. They are turning into "black boxes", for all practical purposes.

I am not a CS person and I tried to look up some concrete links on this, but I regret that I could not find any. I know that someone else on Veeky Forums knows what I'm referring to though, and can buttress the claims in this post.

Machine learning has that property in parts, doesn't it?

The self-learned valuation function works, yet we don't know how it works.

it depends on whether P=NP or not

Artificial neural networks and machine learning are computational techniques that work even though we don't know exactly why the final system works. Look it up; it's basically using evolution to solve difficult problems that seem hard to automate because of their nature. This video explains really well how a neural network learns to beat a Mario level without any human intervention. youtube.com/watch?v=qv6UVOQ0F44
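
To make the idea concrete, here's a minimal neuroevolution sketch in Python. It's nothing like the full NEAT algorithm the video's bot uses, just mutation and selection on the weights of a tiny fixed-topology network until it learns XOR; every name and constant is made up for illustration:

```python
import math
import random

random.seed(1)

# Tiny fixed-topology network: 2 inputs -> 3 tanh hidden units -> 1
# linear output. 6 input weights + 3 output weights = 9 genes.
HIDDEN = 3
N_WEIGHTS = 2 * HIDDEN + HIDDEN

def forward(w, x1, x2):
    h = [math.tanh(w[2 * i] * x1 + w[2 * i + 1] * x2) for i in range(HIDDEN)]
    return sum(w[2 * HIDDEN + i] * h[i] for i in range(HIDDEN))

def fitness(w):
    # Negative squared error on XOR; higher is better.
    cases = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
    return -sum((forward(w, a, b) - t) ** 2 for a, b, t in cases)

# Evolution loop: keep the fittest, refill with mutated copies.
population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
              for _ in range(50)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [
        [w + random.gauss(0, 0.2) for w in random.choice(parents)]
        for _ in range(40)
    ]

# After a few hundred generations the best network usually
# approximates XOR, but good luck explaining *why* these particular
# nine weights work by reading them.
best = max(population, key=fitness)
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward(best, a, b), 2))
```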

Neural nets/AI. One of the biggest problems is that you don't know how they arrive at the answers they give you. Funnily enough, liberals are worried about candidate-selecting AI being racist.

Also, I personally still don't know where the fuck 0x5f3759df comes from.
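
For the record: that's the magic constant from Quake III's fast inverse square root. The bit-level trick is well understood by now (Chris Lomont published a derivation), even if the exact constant was found semi-empirically. A rough Python port of the C original, for illustration:

```python
import struct

def fast_inverse_sqrt(x):
    """Quake III's fast inverse square root, ported from the C
    original. Works for positive finite floats."""
    # Reinterpret the 32-bit float's bits as an unsigned integer.
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    # The magic: shifting the bits right halves the float's exponent
    # (roughly computing x^-0.5); 0x5f3759df corrects the exponent
    # bias and nudges the mantissa toward the right answer.
    i = 0x5f3759df - (i >> 1)
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # One Newton-Raphson step refines the guess.
    return y * (1.5 - 0.5 * x * y * y)

print(fast_inverse_sqrt(4.0))  # ~0.4991, vs the exact 0.5
```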

this is pretty cool

At this point, if we reach any sort of advanced artificial intelligence, it's probably going to have been developed in a manner that leaves us uncertain how the underlying mechanisms function.

Wasn't there a mathematical proof generated by a computer that was so long and complicated that it was literally impossible for any human to confirm it? I remember reading about it a few years ago.

The Boolean Pythagorean triples problem?
The four color theorem is the earliest controversial one I know of.
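
For reference, that's the 2016 result: Heule, Kullmann, and Marek used a SAT solver to show that {1, ..., 7824} can be split into two parts with no part containing a Pythagorean triple a² + b² = c², but {1, ..., 7825} cannot; the proof certificate was around 200 terabytes. A toy brute-force checker just to make the statement concrete (the real proof needed clever SAT encodings, not this):

```python
from itertools import product

def pythagorean_triples(n):
    """All (a, b, c) with a^2 + b^2 = c^2 and a < b < c <= n."""
    roots = {k * k: k for k in range(1, n + 1)}
    return [(a, b, roots[a * a + b * b])
            for a in range(1, n + 1)
            for b in range(a + 1, n + 1)
            if a * a + b * b in roots]

def two_colorable(n):
    """Can {1..n} be split into two sets with no monochromatic
    Pythagorean triple? Brute force over all 2^n colorings, so only
    feasible for tiny n."""
    triples = pythagorean_triples(n)
    for coloring in product((0, 1), repeat=n):
        if all(not (coloring[a - 1] == coloring[b - 1] == coloring[c - 1])
               for a, b, c in triples):
            return True
    return False

print(two_colorable(15))  # True; the answer only becomes False at n = 7825
```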

I /am/ a math guy, and the other user is correct that the four-color theorem is an early and well-known example of a mathematical proof/theorem which was carried out by computer, since it involved checking a tedious number of cases. The computer's version checked out, but this left the humans with a philosophical problem. A gold standard of transmitting mathematical truth is that human beings can communicate the idea to other human beings, which is a slow process. There is always a possibility, however remote, of some goof having been made somewhere - humans do this all the time, of course, and even a computer can have a machine error of some kind (as opposed to intervening human error) veeeeeery rarely, as I understand it. But it's not impossible.

And this slight possibility is one thing that provides philosophical grounds to reject computer proofs - oddly enough, since as I've just said, humans are much more error-prone. But we flatter ourselves (legitimately, I think) that we have uniquely creative capacities to judge our own work after long reflection. The trick is to set aside the time for the long reflection.

There is a philosophical case to reject the "black box" in favor of only what we can deliberate and understand amongst ourselves, however limited our capacities in this wise may be.

We also now have proof-generating software, which as I understand it is (of course) eclipsing humans yet again, sometimes in the ways I alluded to above.

I wonder what will happen after we have created a strong general AI. It would be a black box too complex for humans to understand, just like our own brains. If it can examine societal or political problems and generate a solution that's too complex for humans to understand, will we reject it? And what happens when this AI interprets the rejection of an obviously correct solution as yet another problem to be solved, and creates a solution for that as well? Rogue AI is always depicted as being evil and wrong, but what if it actually turns out to be right?

An area of research that comes to mind is variability in HPC (high-performance computing).

It goes like this: the modern systems we build are increasingly complex. The hardware is more complex because people want more features, like more instructions and wider vector units. We put more of this hardware in every node so that we can do more work. So we are building computers with more nodes and more cores per node, and we are putting different types of hardware on each node, such as GPUs and FPGAs. This makes the performance of the hardware very hard to predict.

Besides the nodes themselves, the interconnects are getting more complex, with the current best (the dragonfly topology) actually having random aspects to its routing. You don't even know how a packet will get from node A to node B with 100% certainty.
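
A loose sketch of that random aspect: dragonfly networks use adaptive routing in the spirit of Valiant's scheme, where a packet may be bounced through a random intermediate group instead of taking the minimal path. All constants here are made up, and real routers decide based on observed congestion rather than a coin flip:

```python
import random

GROUPS = 8  # hypothetical number of router groups

def route(src, dst, detour_probability=0.5):
    """Usually take the minimal path between groups, but sometimes
    detour through a random intermediate group to spread load."""
    if random.random() < detour_probability:
        via = random.randrange(GROUPS)
        return [src, via, dst]
    return [src, dst]

# The same packet can take a different path on every send.
for _ in range(3):
    print(route(0, 5))
```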

On top of that, the operating systems that run on these nodes are more complex, making their performance harder to predict as well. And on top of the OS, the software running on these computers is getting more complex as well, as people come up with techniques that sacrifice simplicity for scalability.

And finally, compiler writers are coming up with more optimizations that make the machine code more complex, and that may increase the runtime with some probability but decrease it in the average case.

If you add all of this up, we see that it is getting harder and harder to predict the performance of modern supercomputers. Even if you run the same job on the same computer, you may get wildly different runtimes - say, 50% slower the second time. That is a tough price to pay when your simulation is supposed to take 24+ hours to run. And it could be due to network interference, OS interference, processor variation, or who knows what.

So to connect this to what you're asking, we don't really understand the performance of these giant systems. We have models but they don't always work.
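
A trivial way to see run-to-run variability for yourself, with a toy stand-in for a real HPC job. On a laptop the spread comes from the OS scheduler and CPU frequency scaling; on a big cluster you add network interference and neighboring jobs to the list of suspects:

```python
import statistics
import time

def kernel(n=2_000_000):
    """A stand-in workload; imagine a 24-hour MPI job instead."""
    s = 0.0
    for i in range(1, n):
        s += 1.0 / i
    return s

# Time the identical job repeatedly and look at the spread.
runs = []
for _ in range(10):
    start = time.perf_counter()
    kernel()
    runs.append(time.perf_counter() - start)

print(f"min {min(runs):.3f}s  max {max(runs):.3f}s  "
      f"stdev {statistics.stdev(runs):.3f}s")
```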

If you're interested, you'll want to search for "HPC variability" or something similar and look for papers by Torsten Hoefler and Kirk Cameron, for starters.

And papers on "lightweight kernels" often fall into this area.

>Rogue AI is always depicted as being evil and wrong, but what if it actually turns out to be right?
>evil and wrong
>right

Ah yes, advanced artificial intelligence will be the first time the human race is ever confronted with horrible immoral decisions that have a basis in logic.

I guess there can be an infinite number of design patterns, but the ones we have are more than enough.

So technically, "programming" is endless.

There's also "quantum computers" and the meme applications they could have.

why the fuck are you puttin /s everywhere, mongoloid?

If they understand that there is variability, and why there is variability, does this really count as something that is "poorly understood"?

This denotes italics, and is commonly used throughout the site, since italics as such cannot be implemented, last I checked.

>Are there things in computing or programming that just work and we don't know how they do yet?

most people don't know how anything works, to be honest.

> liberals are worried about candidate selecting AI being racist.
Garbage in, garbage out.

If you give a neural network a bunch of photos, and 95% of the photos tagged as "human" are of Caucasians, don't be surprised when other ethnicities end up being misclassified more often.
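
A toy illustration of that failure mode, with entirely made-up numbers: train a nearest-neighbor classifier on a 95/5 imbalanced sample of two overlapping groups, then test on balanced data, and the under-represented group absorbs most of the errors:

```python
import random

random.seed(0)

# Two groups as overlapping 1-D feature clusters, with a 95/5
# imbalance in the training data.
train = ([(random.gauss(0.0, 1.0), "A") for _ in range(950)] +
         [(random.gauss(2.0, 1.0), "B") for _ in range(50)])

def knn(x, k=15):
    """Predict the majority label among the k nearest training points."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Test on *balanced* data: group B gets misclassified far more
# often, purely because the training set under-represents it.
for mean, label in [(0.0, "A"), (2.0, "B")]:
    tests = [random.gauss(mean, 1.0) for _ in range(200)]
    accuracy = sum(knn(x) == label for x in tests) / len(tests)
    print(f"group {label}: accuracy {accuracy:.2f}")
```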

The ease with which stereotypes can be used to instil racism in an AI suggests that it's a reasonable model of human cognition.

>0x5f3759df
i got that reference without googling :)

Then I suppose you would say that 'the runtime performance of jobs on large clusters is poorly understood'.

It's only well understood in the sense that we know all the parts of the computer. I listed pretty much every component of a computer, so it's not like we've narrowed down the problem.

And I didn't even get to power, which is increasingly important, and hard to predict.

Yes, you do know. Statistics in, decisions out. If you get fucked results, review the data you feed it.

Various forms of Shell sort and Comb sort
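
Good answer. The time complexity of Shell sort is still unknown for many gap sequences, and the best-performing gaps (Ciura's sequence, used below) were found by experiment rather than derivation; Comb sort's 1.3 shrink factor is likewise empirical. Quick sketches of both:

```python
def shell_sort(items, gaps=(701, 301, 132, 57, 23, 10, 4, 1)):
    """Shell sort with Ciura's gap sequence, which was found
    empirically; nobody has derived an optimal sequence, and the
    exact asymptotic complexity for gaps like these is still open."""
    a = list(items)
    for gap in gaps:
        # Gapped insertion sort for this gap size.
        for i in range(gap, len(a)):
            item, j = a[i], i
            while j >= gap and a[j - gap] > item:
                a[j] = a[j - gap]
                j -= gap
            a[j] = item
    return a

def comb_sort(items, shrink=1.3):
    """Comb sort: bubble sort with a shrinking gap. The 1.3 shrink
    factor is likewise empirical rather than derived."""
    a = list(items)
    gap, swapped = len(a), True
    while gap > 1 or swapped:
        gap = max(1, int(gap / shrink))
        swapped = False
        for i in range(len(a) - gap):
            if a[i] > a[i + gap]:
                a[i], a[i + gap] = a[i + gap], a[i]
                swapped = True
    return a

data = [5, 2, 9, 1, 7, 3, 8, 4, 6]
print(shell_sort(data))
print(comb_sort(data))
```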