Continuing my thread that hit the bump limit

Previous Veeky Forums threads
I don't know anything about compression in detail, but if this gives you some compression ideas, I'm sure you can use them even without building the whole language

Could it be this language needs some kind of antivirus / anti-malware in the name structure? Like root.security, which might expose functions that scan a local name tree for possible threats.

Alternatively, you could have name subgraphs signed with certificates, but authenticating them is some kind of byzantine generals problem in this case, so it seems like it comes to the same thing. I'm a bit fuzzy on distributed computing stuff, so I'm not sure.

Maybe the entire global name graph could be stored in some kind of blockchain, or market of competing but structurally compatible blockchains?

In fact, maybe the entire distributed store in the back end could be modeled abstractly as some kind of quasi-blockchain like object?

I imagine there's a possible connection here with Clojure's persistent data structures. I don't think this language should have persistent data structures because semantically it doesn't need them, but maybe they can be used in the back end where you have to worry about physical storage and networking

If you have blockchains and cryptocurrency, you don't even need to have an app store for paid software. Since code is data, just call (trade (bitcoin cons structure) (your desired package) (trade safety parameters) (trading venues) [simulated real])

If you hook this language up to a 3D printer, what kinds of things could it make? I don't know much about 3D printers, but the language can probably model them and their materials pretty well, so you could have a very nice link between a cons structure and a physical object along with a precise method for constructing it

Or how about a 3D geometry and texture or surface voxel scanner for input? They have those for computer graphics, I hear.

There might be some kind of cons/decons/uncons/recons quadrality that could help you connect 3D scanners and 3D printers (or just industrial robots and sensors in general), and abstractly model the whole industrial design and manufacturing process. It wouldn't necessarily help you build anything, but the mathematicians would like it

I wonder how hard it would be to take some of the simpler UML type diagrams we have lying around and automatically generate attempted drafts of simple software or at least type hierarchies that implement them, using machine learning on the structure of the diagram itself and the labels on the boxes and arrows

>Posters: 1
Holy fuck. Quit shitting up this place with your schizophrenic, incoherent babbling.

Just stop, it's embarrassing.

Stop commenting on things you don't understand. I can see how it could be confusing if you don't follow it all the way from the previous thread, but some people were following me just fine in that one, so the problem here is clearly with you

In truth I'm deeply interested. Rather feels like this 'thought' is something that is making its way to a very interesting conclusion.

No doubt some mixture of Encryption/Decryption x Compression/Decompression, ultimately.

The words are simply the joinery in the woodwork.

If schizophrenics do not get embarrassed or experience the feelings in the same way you do, why direct them towards an obvious impasse for you 'both'?

I hope you're not accusing me of sockpuppeting with that guy. I have never done anything like that on Veeky Forums, I don't even use proxies

Why would I accuse you of anything, Anonymous? Except with racism/sexism/rape/murder/gore/traps/fags/niggers/kindarunningoutofsteamhere...

Sorry, I thought you might have meant that I put on a proxy and accused myself of samefagging for some insane reason

Even if you did, why would it matter?

So long as 'these' sort of threads continue to completion and someone benefits from the path, who could EVER give a fuck that would matter?

Using the same four cons primitives, is it possible to write an (unquote) as a function of (quote), and (quote) as a function of (unquote)?

Ordinarily, managing what quote and unquote mean in the same parse tree is what Lisp macro systems are all about, but can we dispense with the macros and just write these as some sort of mutually recursive set of functions?
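Ordinarily you'd reach for macros, but here's a minimal runnable sketch (Python, with tuples standing in for cons cells and made-up "quote"/"unquote" tags, since the language itself doesn't exist) of the two as plain mutually recursive functions:

```python
def evaluate(expr, env):
    """Tiny evaluator: symbols look up in env, ('quote', e) hands off to qq."""
    if isinstance(expr, str):
        return env[expr]
    if isinstance(expr, tuple) and expr and expr[0] == "quote":
        return qq(expr[1], env)
    return expr  # numbers and other atoms evaluate to themselves

def qq(expr, env):
    """Walk quoted structure; ('unquote', e) drops back into evaluate."""
    if isinstance(expr, tuple) and expr and expr[0] == "unquote":
        return evaluate(expr[1], env)
    if isinstance(expr, tuple):
        return tuple(qq(e, env) for e in expr)
    return expr

# ("quote", (1, ("unquote", "x"), 3)) with x bound to 2 gives (1, 2, 3)
```

So evaluate recurses into qq at a quote and qq recurses back into evaluate at an unquote; neither is primitive with respect to the other, which is the mutual-recursion reading.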

To go back to math territory, how would you characterize in the most general possible way this kind of construction of 4 primitives based on intuition, then a reduction to 2 primitives also based on intuition, and then a formalization with some kind of geometric and/or exponential (or actually arithmetic,
even) structure based on the relationship between 4 and 2?

More generally, if my object is part of a generalization of standard algebra, how would you find roots in it without referring to any of its non-algebraic parts?

What about odd numbers? If you start with 3 and 1, it seems as if you get an entirely different set of relationships if you apply the same operation, but this seems to be only an illusion based on a kind of fencepost error that becomes infinitesimally small very quickly

In fact, I'm sure there already exists in some branch of math the obvious generalization of even and odd to 3-ary, 4-ary, 5-ary, etc. If the Jews have anything to say about this, they'll probably say that such heresy must stop at 6-ary. There's probably a relationship to numeric bases here too.

I on the other hand believe that 7 is only some kind of ordinary root, approachable either from the direction of 6 or the direction of 8. But that determines the rest of the number system at least up through 9, and most likely the whole rest of it from there by decimal arithmetic, if you're only allowing yourself to have natural numbers, and not, say, the real line or complex plane.

If I had any kind of visual intuition, I could probably relate this to polygons or polyhedra, or to knots in knot theory

And how about conic sections? Good old conic sections, never served me wrong.

Can you express all arithmetic operators and functions in terms of polyhedra and sections of cones, maybe with some kind of weird recursive exponential fractal things mixed in there in places?

What about functions, just in general? What kind of shapes do you need to fully characterize them? How about functions that have "side effects" of sorts because they fail the vertical line test?

How would you express rational and irrational numbers this way? For example, how would you geometrically express pi, without using any circles or trigonometric functions? Maybe we can answer why pi has exactly the value it has and not any other value, just kind of in general.

We also have a mess of graph theory that makes no sense to me. Is it possible to relate these kinds of constructed "shape-based explanations of things" to graphs in a fully general way, by creating shape-based explanations of example graphs? Kind of like gadgets in computability and complexity proofs.

How about measure-based probability theory? Can you model stochastic random graphs? I know there's a theory of random matrices, and a matrix is a kind of graph

Actually, a matrix is a kind of graph in two ways - structurally, in terms of where things are in relation to each other when you write it down, and semantically, in terms of the graph structure of the linear operator it describes. Or is that really only one way? Or like 1.N or 1.X ways?

1.N is 1 + 1/10 * N. Is this commutative, so that 1.N = N.1 which is N + 1/10 * 1? I'm not sure. What about 1.X and X.1?

X.0 is definitely not commutative with 0.X, neither is N.0 with 0.N, but they are somehow reciprocally commutative

In what way is N.X a commutative operator? In other words, what is the literal difference X.N - N.X, where N is an integer and X is a real or complex number?

How about something even more ghastly - what are the properties of NXXNXNX.XNXNXNX?

I suspect they should at least be the same as those of NXNXNX.XNXNXNX

Or maybe they are only the same as N.X.NXNX.XNXNXNX, or possibly N.XNXNX.XNXNXNX

What if you could embed arbitrary functions in there, like

N.F(X.N).XNX.G(F(N.X),X.N.G(F(X),N))

I think the "type signature" of this is reasonable; are there any functions F or G you could write that would type check an object like this?

In fact, "." and "," are both functions in that case, so the original NXXNXNX.XNXNXNX could be rewritten as .(,(NXXNXNX(?)XNXNXNX)) for some very unknown operator (?). What are "(" and ")", two parts of a single N-ary operator? What kind of notation should be used for operators? Maybe (op operand1 operand2 operand3 ...)

So using F{a,b,c} for functions because we want to use parens for operators, this could be expressed as

{. {, NXXNXNX XNXNXNX}}

where ' ' is the whitespace operator which is kind of like nil

You can then look at NXXNXNX as syntactic sugar for (unknown-function N X X N X N X), so maybe some math could be applied to solve for unknown-function.
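Reading (op operand1 operand2 ...) as ordinary s-expressions, a minimal reader is only a few lines (Python sketch, names invented; the brace and whitespace-operator variants are left out):

```python
def read_sexpr(src):
    """Parse (op operand1 operand2 ...) notation into nested lists."""
    tokens = src.replace("(", " ( ").replace(")", " ) ").split()

    def parse(i):
        if tokens[i] == "(":
            items, i = [], i + 1
            while tokens[i] != ")":
                item, i = parse(i)
                items.append(item)
            return items, i + 1          # skip the closing paren
        return tokens[i], i + 1          # bare atom

    tree, _ = parse(0)
    return tree

# read_sexpr("(op a (op2 b c))") gives ["op", "a", ["op2", "b", "c"]]
```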

This also shows the way to "auto-simplification" for symbol names themselves, and almost Microsoft style embedding of type information in the symbol name

Then, using the cons/decons duality, could you apply auto-simplification to integer symbols like 0, 1, 2, 32, etc, and construct a dual auto-desimplification operator to reconstruct all of number theory?

like (simplify 124124) and (de-simplify 2). Can you write a test for evenness in terms of (simplify) and (de-simplify)? Or do you need (re-simplify) as well? I don't think you'll need (un-simplify), since that's only for the garbage collector.

I can give it a shot

(define (even? n) (de-simplify (simplify n) (lambda (x) (= x 0))))

Or equivalently

even? n = de-simplify simplify (= n 0)

in more Haskell-like syntax
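Since (simplify) and (de-simplify) aren't defined anywhere, here's one concrete reading (Python, interpretation entirely assumed) under which the evenness test above actually runs: simplify peels pairs off until only the residue mod 2 remains, and de-simplify hands that residue to a predicate:

```python
def simplify(n):
    """Assumed reading: reduce n to its residue mod 2 by peeling off pairs."""
    while n >= 2:
        n -= 2
    return n

def de_simplify(residue, test):
    """Hand the fully simplified residue back to a predicate."""
    return test(residue)

def even(n):
    # (define (even? n) (de-simplify (simplify n) (lambda (x) (= x 0))))
    return de_simplify(simplify(n), lambda x: x == 0)
```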

In fact, thinking Haskell-wise a bit, you might have

re-simplify assoc-rules = assoc-rules simplify de-simplify

even? n = re-simplify n (= n 0)

Then to generalize this,

re assoc-rules commute-rules = assoc-rules (commute-rules 0 1 de)

simplify commute-rules = re simplify assoc-rules

re simplify assoc-rules = assoc-rules simplify (re simplify)

Or even

reader (in-symbol) (quote -) = (commute-rules assoc-rules /)

in-symbol = ???

(in-symbol 1 de re assoc commute rules simplify
re assoc-rules commute-rules = assoc-rules (commute-rules 0 1 de)
simplify commute-rules = re simplify assoc-rules
re simplify assoc-rules = assoc-rules simplify re-simplify))

Just to present my own thoughts (because you eventually ended up with computational confusion)...

2 mod 1 = 0
2 mod 3 = 2

I think the trouble is NOT that you don't understand/grok the concepts/idea but rather that you don't realize that there will never be a 'bottom' to your idea unless you stipulate your own goal for this.

The more you form your own goal from this (Theoretical -> Practical) the clearer the path becomes.

Mathematics is simply the definition of precision (arguing about the number 1), and precision is infinite.

On a graph you place a point, which is also known as 'position'. The size of it is irrelevant until AFTER resolution/intersections have been identified (x/y axis).

Perhaps it is best if you make a txt spk to haskell lambda calculus translation dictionary? As stupid as it may sound the limitations of vowels (infinity/varied) are always limited by the consonants (constants?!).

Syntactically you can EASILY start with common defined variables (t, x, y, z, c, g, y, e, p) and give a common English word (t = ASAP or Measure) lookup. From there you can start to construct an 'actual' language, which is ultimately what you want.

Actually, what 'is' your intended purpose/goal/result from all this?

What about a check for primality using the same method? Primality is kind of like the evenness of multiplication, so could it be something like

prime? n = de-simplify simplify (= n 1)

?
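Under the same assumed reading as the evenness sketch: if additive simplify subtracts 2s, the multiplicative analogue divides out one smallest factor, and (= n 1) after that single step is exactly primality (Python, names invented):

```python
def mult_simplify(n):
    """Assumed multiplicative analogue of simplify: divide out the
    smallest factor greater than 1, once."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return n // d
        d += 1
    return 1  # n's smallest factor > 1 is n itself, i.e. n is prime

def prime(n):
    # prime? n = de-simplify simplify (= n 1), under this reading
    return n > 1 and mult_simplify(n) == 1
```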

I am amazed at the level of initiative members of this board give. Most excellent!

tl;dr: bump

Can I prove Fermat's Last Theorem using this? Not really, but if I knew what the analogue of primes is for exponentiation as primes are to multiplication, I might be able to.

Something like

exponential-prime? n = de-simplify simplify (= n 2)

Note that it's not (= n n) as you would kind of expect by mathematical substitution, because that wouldn't make sense unless you're defining a new = operator. Although actually maybe you are, so maybe in some sense it really is

(super-equals exponential-prime? n (=) de-simplify simplify (= n n))

Maybe 2 super-equals n just kind of in general, even for real or complex or tensor value of n, which is a pretty exponential phenomenon

In that case, maybe 2 super-equals x, but only when 2 regular-equals n.

Then maybe Fermat's Last Theorem can be written as follows

a^x + b^x super-equals c^x
a^n + b^n super-equals c^n, for all integer n

Then this theorem is true if and only if a, b, and c are integers.

Alternatively, a^n + b^n only regular equals c^n, but it super-equals c^n if and only if at least one of a, b, or c are not integers.

Sorry, that's wrong, it's if and only if at least one of a, b, or c are not integers, OR n ≤ 2? I'm literally not sure which it is.

In fact, can you abstract away equals and super-equals into a hierarchy of equalities and complementary inequalities, such that eq(0) is equals, either eq(1) or eq(2) or somewhere in between is super-equals, and eq(f) is a custom equality comparison with an operator that returns -1 for less, 0 for equal, and 1 for greater?
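The eq(f) case at least is easy to make concrete (a sketch, names invented; the eq(0)/eq(1) tower itself stays abstract): build an equality predicate from a three-way comparator that returns -1, 0, or 1:

```python
def eq(cmp):
    """Turn a three-way comparator into an equality predicate."""
    return lambda a, b: cmp(a, b) == 0

# A custom comparison: equal within a tolerance, otherwise ordered.
approx = eq(lambda a, b: 0 if abs(a - b) < 1e-9 else (-1 if a < b else 1))

# approx(0.1 + 0.2, 0.3) is True even though 0.1 + 0.2 != 0.3 in floats
```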

In fact, how would you define a "predicate" in this language? I would say

(eq (predicate error-tolerance) (eq 0) error-tolerance)

Or should that be (eq 1) instead of (eq 0), or should there be a lambda in there instead of 0 or 1?

How about an epsilon delta definition of a limit? Actually, I'm not sure how to do this one, but it would be nice if it had a property like

((eq 0) ((eq (n + 1)) (eq n) (limit f x h epsilon delta) (f (x + h)) - (f x) / h))

Maybe that's really almost it already, maybe just pull the h, epsilon and delta out to the appropriate level of (eq)s

Maybe, since code is data, and some data is also a mathematical proof, then some code is also a mathematical proof as-is. Can you make a (prove) function that takes arbitrary code and returns some minimal modification of that code that's also a proof?

All proofs, correct or incorrect, should always evaluate to (true (eq)), which is just a boolean value parametrized by the eq function/operator using the topological embedding of booleans I mentioned in the previous thread

Or, maybe or maybe not equivalently, it may be acceptable for them to evaluate to (eq true). Maybe this formalizes the notion of "double evaluation" - maybe (true (eq)) should evaluate to (eq true), so that if you evaluate a proof once it should return (true (eq)), if you evaluate it twice it should return (eq true), and if you evaluate it three times it should return true.

So you are just trying to define your margin of error. Using your 'super 2' concept then : n = 2 | n = 2% (Final)

You can also sub-group the super 2 into slices of 20%, or simply bifurcate the entire spectrum at 50% and then apply a 20% reduction twice. From there you can apply however many loops you require to get your preferred 'within reason' result.

100% > 50% > 30% > 10% > 5% > (3, 2, 1)%

A good development hint might come from physics. All the notation they use is barely even math, but it should be invariant under double-evaluation, because physicists barely seem to think about how anybody is going to evaluate anything they write at all

I have no idea what you're saying but I like it a lot. I could never into number theory of any sort.

Actually as I was writing that I think I got what you were saying. Yes that makes sense, but keep in mind that for this to really work out the subdivisions can't be arbitrary. There is some number theory you have to do here to make all the types match up that I can't even imagine

Eh, difficult for me to agree there. I think the word that might help you is "idempotent".

When I eat, I change myself infinitesimally on a chemical level over time (calories/nutrients/yo mamma's pussy). I am still idempotent and I performed an operation on myself that was within my 'acceptable margin of error'.

However if every time I ate your mamma's pussy I thought I was dying or tripped so hard that I was permanently scarred, that is OUTSIDE of my acceptable margin of error.
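For a code-flavored version of the eating example: an operation is idempotent when applying it twice is the same as applying it once, e.g. clamping a value to a range (assumed example, nothing language-specific):

```python
def clamp(x, lo=0.0, hi=100.0):
    """Clamping is idempotent: clamp(clamp(x)) == clamp(x) for every x."""
    return max(lo, min(hi, x))
```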

Oh, also you are trying to define a 'continued fraction' but there is plenty of math for that shite already.

A good visual aid for the concept = wolframalpha.com/input/?i=mathematica:2/(3/(3/(3/(3/(3 + 1) + 1/(3 + 1)) + 1/(3/(3 + 1) + 1/(3 + 1))) + 1/(3/(3/(3 + 1) + 1/(3 + 1)) + 1/(3/(3 + 1) + 1/(3 + 1)))) + 1/(3/(3/(3/(3 + 1) + 1/(3 + 1)) + 1/(3/(3 + 1) + 1/(3 + 1))) + 1/(3/(3/(3 + 1) + 1/(3 + 1)) + 1/(3/(3 + 1) + 1/(3 + 1))))) sig=mfouiy&lk=2
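The nested expression in that link is a finite continued fraction, and evaluating one is just a fold from the inside out; a short sketch with exact rationals:

```python
from fractions import Fraction

def continued_fraction(terms):
    """Evaluate a finite continued fraction a0 + 1/(a1 + 1/(a2 + ...))."""
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

# [1; 2, 2, 2, ...] gives the convergents of sqrt(2): 3/2, 7/5, 17/12, ...
```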

I think the ultimate question OP is asking is : When can I stop counting?

The only people that can answer that (in the case where 'you' yourself are unable to) is OTHERS.

Well, what else is double-evaluation invariant is constants, so you could say that physicists write equations that attempt to be constantly true (double evaluation invariant), and mathematicians write steps of proofs that attempt to be constantly true in the same way

This is probably a better simplification of the fractional thing I'm trying to explain. Alpha is usually used as 'the true beginning' in... well, pick your discipline.

[math]\frac{n}{\alpha}=\frac{n}{\alpha+1\cdot n}+\frac{n}{\alpha (\alpha+1\cdot n)}+\frac{n}{\alpha (\alpha (\alpha+1\cdot n))}[/math]

The above allows the integer solutions of

[math]\alpha = -1, n = 0[/math]
[math]n = 2, \alpha = 1[/math]

If you utilize 0-index counting (which we all do if we're not stupid), then the delta between 0, alpha, and N is 1

But we are usually only interested when n=2 (n=0 is non-constructive), and that can only be when alpha = 1 (Prime number)

Positional counting
N = { 0,?,2}
Alpha = {-1,0,1}

This is very accurate.

I don't know what idempotent means but I'll believe you if you say it solves the problem with 2s and evenness. But does idempotence handle the generalization to primality I did a bit later? Or the epsilon-delta limit definition? I don't think it can, which is why I say that you need to get some number theory in there to get ALL the types consistent. Hilbert or Godel might bite you here if you just try to apply number theory by itself, but I'm sure there can be some progress with a little perseverance and faith in the power of math

You'll want this : en.wikipedia.org/wiki/Idempotence

Godel and Hilbert actually agree with me.

Hilbert Hotel = Gödel's incompleteness theorems

All physicists/computators ultimately lead up to their holy grails of the continuum hypothesis OR singularity.

Mathematics already knows this as: Infinitely many primes.

Can you encode all of "circular reasoning" as an ordinary non-terminating cons list? I am confident that none of the reasoning in any of my threads so far (linked in OP) is circular by this definition.

Ah, sorry.

Continuum hypothesis = Continued Fraction
Singularity = Infinitely many Primes

Oooh that's tasty. I agree that your reasoning is not circular, however your 'purpose/goal/desire' is.

I am willing to defy Hilbert and Godel since I've already bitten the entire bullet of using higher order logic anyway.

Sure there are infinitely many primes, but if primality is to multiplication as evenness is to addition, HOW infinitely many?

Again, for 'what' purpose?

Or are you asking for a 'how' to every 'what' in existence? I'm fairly sure the answer to that is void/0/death/oblivion.

Fun fact: A 2-d circle is the only pornoglyph in existence.

I'm sure I can't disprove anything they've proven, even using my own nonsense logic, but I can defy them emotionally all I want

> are you asking for a 'how' to every 'what' in existence?

YES! Thank you. That's exactly what I'm asking for. I don't agree with your answer though, that is only the trivial solution. You've got to get some lambda calculus or something in there to get more complex structured solutions.

I'm sure that even with lambda calculus you can't get the 'how' for every single 'what' since then you run afoul of Turing, but how about an approximate 'how' for approximately every single 'what'? Or vice versa?

Your emotions = desire, and you cannot hold the emotions of those who are 'dead' higher than yours.

Again, Goals. It is essentially trying to 'defy God'. Humans however have a constant, "Why would aliens pick ME out of billions of humans?"

Well, yes. It is the trivial solution in the sense that 'divide by zero' is mathematical stupidity/not allowed/death (you wouldn't kill a fellow mathematician's formula, would you?).

If you don't want the trivial solution then don't present it as a general case or further define the generalization applied to the case of 'infinity'.

You're back to my 'margin of error = 2%' idea.

I would highly recommend you replace 'infinity' with 'your desire' to help refine generalizations.

For example, how about doing a Google style indexing search over all possible such lambda calculus based cons structures? Checking them for proof validity would be linear time in the size of the cons structure, though running the program could be any complexity or even uncomputable
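As a lower bound on how big that index gets, here's a brute-force enumerator of bare cons structures (no lambdas, no labels, invented representation) with n leaves; the counts are the Catalan numbers, so even this stripped-down version blows up fast:

```python
def cons_trees(n):
    """Enumerate every binary cons tree with n leaves ('*' marks a leaf)."""
    if n == 1:
        return ["*"]
    trees = []
    for k in range(1, n):              # split leaves between car and cdr
        for left in cons_trees(k):
            for right in cons_trees(n - k):
                trees.append((left, right))
    return trees

# counts follow the Catalan numbers: 1, 1, 2, 5, 14, 42, ...
```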

Then apply a sieve.

Reduce all N to simplest factors.

Measure output of functor

Apply permutations to operators/operations only (as they are a limited set compared to N)

Compare to measurement of functor output

If less time or resources were used to achieve 2% variance (or n=2) then apply the permutation as a permanent change.

Actually, checking the proof MIGHT NOT be linear in the size of the cons structure, if you allow the checker to evaluate parts of the structure itself. This is an interesting caveat / extension

Nice. I bet Google already has software that can do this sort of thing at scale running in their data centers. This is basically all they've been researching in the search space this whole time

Ultimately, yes.

The GMail Smart Reply thing they have running is ultimately that.

There is a general pool of responses, then your e-mails are parsed for deviations/variations/uniqueness/prime examples, and it sporadically presents you with a number of choices ranging from 'safe' to 'risky' to 'personalized'

Here's a (hopefully) novel language idea - why don't you add to the (again, not built-in, only functions) exception system a feature where you can throw an exception, continue running your computation, and then "take back" the exception later and if nobody objects, resume from before the exception using the results of your own computation and any computation results you got in the no-objection messages. This is sort of the extreme of imperative, stateful, multi-threaded programming so theoretically it should be the devil, but perhaps with enough higher-order theory (particularly in the exception system) even this could be tractable

What if, in some grand metaphysical irony, an entire continuation+exception system of this type can be encoded in 100% functional lambda calculus?

Really, once you're taking back exceptions, you might as well forget the exceptions and model them as a simple special case of a continuation+message-passing system. In fact, forget the continuations, and pass an entire branching, hierarchical world-timeline along with each of your messages. It's like Smalltalk, but with time travel
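Here's a rough sketch of the take-back-able exception idea (Python, all names invented; true resumption from the signal point would need continuations, so this one just re-runs the computation with the resume value supplied):

```python
class Condition(Exception):
    """A signal that the raiser hopes someone will resolve, not a hard error."""
    pass

def with_restart(computation, handler):
    """Run computation; if it signals, ask the handler for a substitute
    value and run the computation again with that value supplied."""
    try:
        return computation(None)
    except Condition as c:
        return computation(handler(c))

def parse_number(resume):
    text = "forty-two"
    if resume is not None:
        return resume                 # resumed with the handler's value
    try:
        return int(text)
    except ValueError:
        raise Condition(text)         # signal, hoping someone resumes us
```

Here with_restart(parse_number, lambda c: 42) signals, the handler supplies 42, and the computation completes as if the failure had been taken back.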

Maybe there's an inherent duality between message passing and function calls lurking in here, that's only expressible as either higher order logic, or some kind of meta-level system that lets you dynamically expand and evaluate the stack of all your function calls

Be careful about the stack though. Since this isn't C, there's no such thing as the heap. There is only the stack. Manage it carefully. If you absolutely need the heap, it can be abstracted away as a reference to a part of a cons graph that happens to double as a function you can call that allocates you a block of memory, addressable through references to other parts of the cons graph

Similarly, you certainly don't need references to files, the network, hardware resources, interrupts, or anything like that. Abstract them all away as references to a cons graph element to be provided by a central name server system shared by all users and developers of this language worldwide. Or pick and choose from a series of open source or proprietary competitors, whatever you like.

How would competing name servers resolve compatibility issues without having to constantly copy each others' new features manually and reinvent the wheel? A standard library of names selected by an actual real-world real-people committee without any simulations or game theory could help with this. All the simulated game theoretic standards committees could be derived from this real-people one to predict future standard revisions.

Here's some biology - if prokaryotes are just like adorable little cons cells, is an eukaryote a government of prokaryotes?

I guess not really, because a true eukaryote is a government of prokaryotes that have been forcefully tamed into servile docility, to the point that they can no longer even be truly called prokaryotes. Day of the ultra-cancer when?

Ah, you are describing a message broker.

aws.amazon.com/kafka/
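For reference, the core of what a broker does can be sketched in a few lines (an in-memory toy; Kafka adds durable logs, partitions, and consumer offsets on top of this idea):

```python
from collections import defaultdict

class Broker:
    """Toy message broker: topics, subscribers, publish fan-out."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)
```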

This sort of makes metastatic cancer a kind of tragic revolutionary story - a single brave prokaryote brings revolutionary awareness to the entire local universe, not to realize until it's too late that it cannot survive without its autocratic host

Singularity

From that perspective, if you want to tell yourself a real ghost story, could brain cancer have its own particular scared, hostile sense of awareness?

What if all it ever thinks is something like, "they're putting thoughts in my head. I have to either understand them all, or kill them all, or I won't survive".

It might not even understand how to kill anything, and only just crave its neighbors' blood supply like a vampire, but I like to think that on some subliminal level it understands that it's killing everything around it

If you look at it from that perspective, maybe the brain stem is genetically fundamentally closer to a cancerous state than the neocortex.

If you go fully down that rabbit hole, you can view the entire mammalian nervous system as a non-metastatic cancer that starts at the brain stem and expands outward in all directions from there

To go even more fully down that rabbit hole, if you use my idea of tiling and knotting space in 4D, you can view the entire evolutionary tree of mammals and all life as a single spatially disconnected cons-like tiling structure

Sorry, calling it cons-like is wrong, since I've already subdivided cons-like things into four categories. So it's like whichever thing those four categories belong to, or whatever number of categories you like, really