Let's talk about computability and complexity theory.

Beginner: Go to rajk.me/static/busybeaver.html, toggle instructions, and find the 2-state Busy Beaver (start with 1-state).

Advanced: Recently, Stephen Wolfram's 2-state 3-symbol Turing machine was proved to be universal. Does a 2-state 2-symbol one exist? Additionally, Wolfram's machine is not a decider; it never halts. What's the smallest universal Turing machine that does halt?

Nice riced desktop, dude. How many chicks has that gotten you?

Not as many as you might think. I should start hanging out in coffeeshops more often.

no one /compsci/ here?

I'm an AIfag so I don't know much about this computability theory stuff.
Some mathcucks may be knowledgeable, but they say most Veeky Forums posters are basement-dwelling undergrads...

Well, at least try the busy beaver puzzle. It's a nice challenge; let me know if the instructions are unclear.

This is pretty fun.
Does there exist a game of this with a better interface?

can you give me a link to explain what the puzzle is?

I don't think so. I made this just to learn React a while ago. If you have any small suggestions I can try to add them.

If you think you have the answer for 1-state or 2-state, let me know how many 1s are output and I'll say if it's correct, so you don't have to look it up and risk spoiling.

So basically a Turing machine is a machine that reads and writes symbols on a tape while moving around according to a fixed rulebook.

There are n states, 2 characters (1 and 0), and a rulebook. For each state "s"/character "c" combination, the corresponding rule gives (character "d" to write, a left/right move, new state "t").

So at the beginning the infinite tape is all 0s, and the Turing machine's head is at the center. You start in state s=A. Since the head is above a 0, we have c=0. Then you just look at the rulebook (at the top) to find out what the next step is. For example, it might say to write a 1, move left, and enter a new state. And so on. There's a special halting state H. If you ever enter H, the machine stops. Some Turing machines halt; some go on forever.

The goal of the game is to make a rulebook with a given number of states so that the machine writes as many 1s as possible, but does eventually halt. If you click "Ideal 3 state" and then "Start", that's the solution for 3 states.
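
If code is easier to read than the grid, here's roughly the same thing as a minimal Python sketch. The rulebook below is the 3-state champion from the Wikipedia examples page (presumably the same machine as the "Ideal 3 state" button, so it only spoils n=3, not n=1 or n=2):

[code]
# Minimal Turing machine simulator (sketch). The rulebook is the 3-state
# busy beaver champion from en.wikipedia.org/wiki/Busy_beaver#Examples.
rules = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, +1, 'H'),
    ('B', 0): (0, +1, 'C'), ('B', 1): (1, +1, 'B'),
    ('C', 0): (1, -1, 'C'), ('C', 1): (1, -1, 'A'),
}

tape, pos, state, steps = {}, 0, 'A', 0   # unwritten cells read as 0
while state != 'H':                       # H is the special halting state
    c = tape.get(pos, 0)                  # read the character under the head
    d, move, state = rules[(state, c)]    # rule: (write d, move, new state)
    tape[pos] = d
    pos += move
    steps += 1

print(steps, sum(tape.values()))          # 14 steps, 6 ones
[/code]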

The cool part about this game is that it's impossible to write an algorithm or computer program to solve it in general, since doing so would solve the halting problem (if you could compute how long halting n-state machines can run, you could run any n-state machine that long and conclude it never halts if it's still going): scottaaronson.com/writings/bignumbers.html
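
To make that reduction concrete, here's a hedged sketch (my own illustration, not from the essay). The run_step callback is hypothetical; the step bounds S(n) for n ≤ 4 are the known values from the same Wikipedia page:

[code]
# Sketch: deciding halting *if* we could compute the busy beaver step
# bound S(n). We can't in general; only these tiny values are known.
S = {1: 1, 2: 6, 3: 21, 4: 107}

def halts(run_step, n_states):
    """run_step() advances the machine one step, returning False once
    it has halted. If it survives S(n) steps, it can never halt."""
    for _ in range(S[n_states]):
        if not run_step():
            return True    # halted within the bound
    return False           # outlived every halting n-state machine
[/code]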

>Stephen Wolfram's 2-state 3-symbol Turing machine was proved to be universal.
I consider this a ternary computer.

>Does a 2-state 2-symbol one exist?
binary computers.

>What's the smallest universal Turing machine that does halt?
Small in what sense?
from the wiki on radix economy, regarding 3 symbols:
"Since 2 / ln(2) ≈ 2.89 and 3 / ln(3) ≈ 2.73, it follows that 3 is the integer base with the lowest average radix economy."

did that help you?

Let me define a new complexity class [math]\mathcal{CM}(f)[/math], the class of all languages that can be decided by a Conscious Mind in time [math]f(n)[/math]. As usual, we denote [math]\mathcal{CMP} = \bigcup_{k}\mathcal{CM}(n^k)[/math].

It should be obvious that [math]\mathcal{P}\subset \mathcal{CMP}[/math].

My claim now is that there is no non-conscious complexity class properly containing [math]\mathcal{CMP}[/math]. In particular [math]\mathcal{CMP} \not\subset \mathcal{EXPSPACE} [/math]. In words: There are properties of conscious minds which will never be simulated by conventional Turing machines.

Of course you're free to criticize my suggestion.

Do you have any reasons for believing so? Also, what does conscious mean?

What WM and font?
pls post config

do you really think someone's going to buy your shit? fuck off

A bit. I didn't know the term radix economy, although it makes sense that it's a studied topic. I remember reading a blog post a long time ago that proved that e has the best economy (it minimizes b/ln(b) over real bases; 3 is just the best integer base).

i3, Fira Mono, github.com/raj-kesavan/slashdot

Cool, I thought that was i3.
I already use it so I'll just steal your config.
Complexity theory is pretty cool, so I'll come back and actually try to contribute to this thread by doing the beginner problem in a bit.

How do I beat this?
Is there a list of the maximums somewhere?

en.wikipedia.org/wiki/Busy_beaver#Examples

Wew, I did it.
I'm trying three but it's pretty hard.
I don't really have a method other than trial and error.

Yes, this is the correct answer. Good job!

I don't think n=3 is really viable by hand; I believe the solution was found through brute force (and then analysis of the possibly-looping machines, I guess).
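
For the curious, the brute force itself fits in a page. This is a sketch of the idea, not the historical search: it uses the standard normalization that the first transition is "write 1, move right, enter B", and a step cap above the known S(3) = 21 so that no halting 3-state machine is missed. Takes a few minutes in pure Python.

[code]
# Brute-force the 3-state busy beaver (sketch): enumerate rule tables,
# run each with a step cap, and keep the best halting ones-count.
from itertools import product

STATES, CAP = ('A', 'B', 'C'), 30   # CAP > S(3) = 21 catches every halter
actions = [(write, move, nxt) for write in (0, 1) for move in (-1, 1)
           for nxt in STATES + ('H',)]

best = 0
slots = [('A', 1), ('B', 0), ('B', 1), ('C', 0), ('C', 1)]
for rest in product(actions, repeat=len(slots)):
    rules = dict(zip(slots, rest))
    rules[('A', 0)] = (1, +1, 'B')   # normalization: fix the first move
    tape, pos, state, steps = {}, 0, 'A', 0
    while state != 'H' and steps < CAP:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    if state == 'H':
        best = max(best, sum(tape.values()))

print(best)   # 6, matching the ideal 3-state machine
[/code]

Machines still running at the cap are the "possibly looping" ones; for n=3 the cap is provably enough, but for larger n you'd have to analyze them individually.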

I know a lot of computability theory but very little complexity theory. The OP questions aren't really interesting to me.

Who constructive math here?
en.m.wikipedia.org/wiki/Church's_thesis_(constructive_mathematics)

Is that the math where you can't prove existence by proof by contradiction?

Can you explain a bit more?

Yes, but that's really more of a repercussion. The actual issue is double negation.

In classical logic you have two ways of dealing with negation. The first is "negation introduction" and it allows you to create a sentence of the form [math]\lnot P[/math]. The way it works is:
>First suppose that P is true.
>Then use that (auxiliary) assumption to reach a contradiction.
>Therefore you can conclude that P is not true (i.e. you can conclude [math]\lnot P[/math]).
This is called a negative proof by contradiction (or simply a "proof by contradiction", in common math lingo).
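
If you like proof assistants, this first rule is basically definitional; in Lean, say, [math]\lnot P[/math] literally unfolds to [math]P \to \text{False}[/math] (a sketch in Lean 4 syntax):

[code]
-- Negation introduction, constructively: ¬P is *defined* as P → False,
-- so "assume P, derive a contradiction" is already a proof of ¬P.
example (P : Prop) (h : P → False) : ¬P := h
[/code]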

The second way that you can deal with negation in classical logic is "double negation elimination". This one is much simpler. The rule basically says that whenever you have [math]\lnot \lnot P[/math] you can reduce it to simply [math]P[/math].

By combining both rules you can perform a positive proof by contradiction (also just called "a proof by contradiction" in common math lingo). It goes as follows.
>Suppose [math]\lnot P[/math]
>Reach a contradiction.
>Conclude [math]\lnot \lnot P[/math]
>Reduce to [math]P[/math]
In other words, suppose a statement is false, reach a contradiction, and conclude it's true.

In constructive math (i.e. intuitionistic logic) you lose the second rule, "double negation elimination". As such you lose the ability to perform a positive proof by contradiction.
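
The asymmetry shows up directly in a proof assistant: the negation introduction sketch above goes through as-is, but double negation elimination only typechecks once you invoke a classical axiom (again a sketch, Lean 4 syntax):

[code]
-- Double negation elimination needs a classical axiom; without the
-- `Classical` namespace (which bundles choice/LEM), no term of this
-- type can be written.
example (P : Prop) (h : ¬¬P) : P := Classical.byContradiction h
[/code]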

This means that in constructive math, whenever you assert a statement is true, you actually have to have a direct proof for it. You can't have reached it by just assuming it's false and reaching a contradiction. As a result you don't end up in retarded situations where you have to say "well, x must exist but I can't give you an example of it". For this reason, constructive proofs are more useful than non-constructive proofs (even within the context of classical logic).

It's also not hard to see that intuitionistic logic is essentially a generalization of classical logic: classical logic is just intuitionistic logic plus double negation elimination, so every intuitionistic proof is already a classical proof.

Right, that makes sense. There was some guy in my algebra class telling the professor to redo his proof every time he used contradiction, so I guess that's where that was coming from.

Can you give some examples of why we might want to get rid of LEM and accept Church's thesis?