Blogs, philosophy, and delayed self-improvement

I finally wrote another blog post.
Here's a small, unique take on artificial superintelligence: pyramid.glass/uncategorized/brief-musings-on-ai-superintelligence/

Basically
>(technological) innovation doesn't happen in a straight line and we'd likely increase human potential before creating an artificial general intelligence

>Elon Musk's Neuralink is looking for a way to expand human potential by developing a brain-machine interface, where basically we'd be controlling our brain via a machine.

>One way of looking at this is that it would add an AI layer to our consciousness. This would also vastly improve the ability to communicate, both with other humans and with computers.

>I'm saying that something like Neuralink would likely enable a singularity before we ever get a chance to create an artificial general intelligence

We are already a hivemind whether we realize it or not. Something like Neuralink would make that hivemind a lot more efficient to the point where we may cease to experience life as individuals.

For anyone who thinks my last point is a leap (it isn't): that's due to my half-assed writing.

If a group of people were able to communicate everything they experience to each other with nanoseconds as the measurement of time it would take (i.e. virtually instantaneously), then they'd experience consciousness through the hive.

With something like Neuralink, even if people weren't constantly communicating their entire experience with each other, people would still be a lot more hive-like than they are now.

If we had the ability to eliminate involuntary emotion and basically feel a focused pleasure at all times (via Neuralink), then our lives would dramatically shift toward something more logic-based. We wouldn't need or want to seek any kind of 'entertainment' and would likely focus on productivity. From there it wouldn't be long until we pursue the hive mind.


This would all probably happen pretty slowly, in line with the technology that each step requires.

Why would we focus on "productivity" if we're already satisfied and consistently pleasured? What is the motivation?

That's a great point actually.

Eventually a Neuralink equivalent will do this. Will those with a fully optimal neural interface cease to be active participants in life?

In another post, called Equations, I talk about an innate 'motivation' in the equation of existence. Pleasure is the only point of existence (as a human circa 2018), and it can best be achieved through power / control.

I've never reconciled the achievement of the kind of maximal internal control that an optimal neural interface would provide with a continuing imperative for external control.

I used to liken external control to expansion. If you make the universe conform to your will, you are literally expanding your will.

I liken will to sentience / consciousness: the higher the level of sentience, the more will / power something has.

I really don't know, but I imagine expansion is imperative enough. But who knows, maybe singularity of an existence is when that existence stops.

This goes hand in hand with: would an AGI 'superintelligence' have an imperative to do anything? Would a god?

Ok, I have what I think to be a definite answer here. Please critique if you wanna but I'm not very interested in playing a game of devil's advocate.
I'm not just pulling this out of my ass. These are the same axioms I've been working with, I just failed to apply them here.

Optimal existence is the question. For a human the lowest hanging fruit in an optimal existence is, for lack of a better term, pleasure. But pleasure doesn't make you any more sentient. And, honestly, an increased level of sentience would increase your capacity for pleasure in the first place. An increased capacity for pleasure is imperative enough to increase sentience and regardless, the imperative of existence is to achieve a fully optimal one.

Even before I knew of a technology that would definitely enable it, I've always postulated that absolute control over someone would both increase sentience and literally blend their consciousness with yours.

Or, looking at it another way, they'd add another ingredient for full sentience / 'Godhood'.
From a quasi-technical perspective, they'd add a new source of data for the hivemind and increase CPU, and in order for them to be integrated into the hivemind, either the new source would have no true information that the hivemind hasn't already accounted for OR the hivemind would need to do an 'upgrade' to account for this new information - thus getting closer to full sentience.

This echoes what I know of how AI works. Google has cars with sensors driving around looking for this new information to correct or add to [the algorithm]. The OpenAI Dota bot was built by constantly playing and reviewing games for this reason as well.
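To make that 'upgrade or ignore' logic concrete, here's a toy sketch of online learning (entirely my own illustration with made-up numbers - not how Neuralink, Google, or OpenAI actually build anything): a model that only changes itself when an observation carries information it hasn't already accounted for.

def update(weights, x, y, lr=0.1, tolerance=0.05):
    # One step of online linear regression (gradient descent).
    prediction = sum(w * xi for w, xi in zip(weights, x))
    error = y - prediction
    if abs(error) <= tolerance:
        return weights, False   # no true new information; model unchanged
    # the 'upgrade': adjust weights to account for the new observation
    return [w + lr * error * xi for w, xi in zip(weights, x)], True

weights = [0.0, 0.0]
stream = [([1.0, 2.0], 5.0), ([2.0, 1.0], 4.0)] * 40  # repeated experience
upgrades = 0
for x, y in stream:
    weights, upgraded = update(weights, x, y)
    upgrades += upgraded
print(weights, "-", upgrades, "upgrades out of", len(stream), "observations")

Early observations force upgrades; once the model has accounted for the stream's regularities, new data stops changing it - the same two outcomes described above.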

So, expansion still increases pleasure (Pleasure for lack of a better word is still the imperative because sentience isn't inherently a good thing without it). Full autonomy over our brain would simply allow us to give this expansion and improvement our sole focus.

If only I were somewhere other than my toilet seat, constipated. I'd read and respond to this. But, alas, focusing my mind is beyond me right now.

The thing is, with a utilized hivemind think tank, the strong will again control the weak.

(Faster thinkers, people with stronger values)

Individuality will be close to 0, until we think about how we can be individual again.
Congrats, this is the end loop of life.
(Big bang)
Our consciousness is outside time, as math is outside our universe.
Same thing, intertwined yet separate.

So, once the singularity is created, it will instantly dissipate itself, because we already have a piece of Source in all of us, and its problem atm is finding itself once again.

NP vs P
At the core.

1>= vs 1=>

I've now read the OP. (But not the rest of your posts.)
First point, on which all of this rests:
>Something like Neuralink would make that hivemind a lot more efficient to the point where we may cease to experience life as individuals.
is retarded; the majority of people would deny this. You know you could make it such that you retain individuality.

Imagine "the cloud" or "cloud saving", everyone share storage, yet still privately.
Same way everyone could share mega-intelligence and knowledge. Yet still privately. As in cloud-like IQ and memory increaser; instead of reading an Wikipedia article, another individual shares their knowledge and understanding of a topic directly to you; sure this can lead to individuality loss, but I believe it wouldn't be more different than watching a movie, you know the experience/knowledge is not your own. And it would be by choice, and from any number of individuals.

Kinda like voluntary telepathy.

Or think transformers/power rangers/bionicle. Except telepathy/"internet".
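If you wanted to sketch that "shared yet private" architecture in code (purely a toy illustration of mine - invented names, nothing to do with any real Neuralink design), it's basically a shared store where entries are private by default and sharing is opt-in per recipient:

class HiveCloud:
    # Shared knowledge pool where every entry stays private by default.
    def __init__(self):
        self.entries = []

    def store(self, owner, topic, knowledge):
        # Private by default, like cloud saving.
        self.entries.append({"owner": owner, "topic": topic,
                             "knowledge": knowledge, "shared_with": set()})

    def share(self, owner, topic, recipient):
        # Voluntary telepathy: opt-in, per recipient.
        for e in self.entries:
            if e["owner"] == owner and e["topic"] == topic:
                e["shared_with"].add(recipient)

    def recall(self, user, topic):
        # You see your own entries plus whatever was shared with you.
        return [e["knowledge"] for e in self.entries
                if e["topic"] == topic
                and (e["owner"] == user or user in e["shared_with"])]

cloud = HiveCloud()
cloud.store("anon1", "wikipedia:AI", "everything anon1 understands about AI")
cloud.share("anon1", "wikipedia:AI", "anon2")
print(cloud.recall("anon2", "wikipedia:AI"))  # knowledge received, not read

The point being: the hive layer doesn't have to dissolve the individual; it can be an opt-in channel.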

>plug yourself into neuralink
>effectively vivisect your soul, since AI by definition cannot emulate qualia, only the physical component of being
>immediately cease to exist or somehow end up in a living hell because you trusted some clown self promoter and a bunch of physicalist reductionist retards

Singularity won't happen. Science has been grinding to a standstill for a few decades now as it gets closer and closer to solidifying into dogma. Diversity of thought is becoming nonexistent with globalization and communication tech like the internet; there are many fewer paradigms than there were in the past. Peer review makes it worse, since it's effectively the priesthood. Everything has become a physicalist reductionist pile of shit, and the huge wave of second and third rate scientists going into fields like bio (b/c a PhD is a popular status symbol now) just drowns out the last remaining true innovators.

Almost nothing groundbreaking has been discovered in the past two decades. Everything is a rote continuation of discoveries made largely before the 1980s. Most social sciences and the less popular hard sciences like Earth science (which attract smaller pools of talent) have actually regressed since the mid 20th century. Things like string theory and essentially anything to do with quantum physics are hopelessly tied up in meaningless math that can't lead to anything useful. One day we're going to wake up and realize that the magic is gone.

Singularity won't happen. AI approximating sapience will happen. We'll see it right before the lights go out, user.

Sorry for the generally shitty writing ITT btw. Typos earlier and seemingly mild inconsistency in my last post.

I'm coming from an unproductive shitty week involving cocaine, a messy room and less than pleasurable social experiences.

Being right or wrong in seemingly any endeavor that involves thought is largely about how you place things in an 'order of operational thinking' / [order of operations].

Deciding what takes precedence and what you virtually leave out seems to be the name of the game. In science you see theories involving valid data disproven because they're unnecessary for the subject. English SAT questions ask you 'which is the correct sentence' based in part on unnecessary (though grammatically correct) words in the incorrect sentences. In business, once companies have gotten large enough (i.e. have already decided what takes precedence in a business model with product-market fit), you see leadership talking about how a big part of their job is saying no to unnecessary things that distract from the core mission.

I'm trying to make an epistemological model that leaves out unnecessary shit, at least with the core model.

With bare experience, pleasure is the only imperative. It is what makes an experience either good or bad.

It's the Light of Genesis. It's the catalyst for evolution. Without it there's only nihilism (it makes nihilism as a belief essentially oxymoronic) and it's why every belief system and everything we do is centered around an attempt, however misguided, to maximize pleasure.

Though these 5 and a half points are made completely unnecessary because of the one before it. Light from Genesis may be existence, and when God said it was Good it may be an allusion to pleasure - who cares; if there was ever any truth in the Abrahamic faiths, it's so deeply hidden that you're better off mostly ignoring them.

In the beginning of my last post I incorrectly alluded to an increase in sentience being a good thing in and of itself whereas its only benefit is an increase in capacity for and ability to obtain pleasure. It's what defines 'optimal' in optimal existence.

If / when I come across new information that makes pleasure unnecessary, then I'd change it. But I'm pretty sure it would just be a more accurate word for pleasure in this context.

With a hivemind like this (more, a meta-consciousness that supersedes the individual experience) there wouldn't really be a 'strong or weak'. We'd change by getting integrated into the meta consciousness, and the meta consciousness would change, to varying degrees, as it integrates the individual.

Whatever any individual contributes as they integrate would become irrelevant to them - as there would be no 'them' anymore.

The only weird part is the fact that not everyone may have a neural interface once it becomes fully optimal. (And.... CONT

It won't though. You have to be outside the system to instantiate a system of the same complexity. At most we could create a 2D, flat as fuck imitation of intelligence. Just a cross section that leaves out the entire qualia dimension. I have no doubt that AI will eventually become incredibly powerful beyond our imagination, but unless we solve the hard problem of consciousness it won't be able to innovate in a real way beyond brute force statistical optimization. Society will collapse WAY before we reach that point from terminal decay of the social structure, which will put an end to any meaningful scientific development.

CONT
(And there may be a semi meta consciousness for the groups that have a non-fully optimal interface before the technology becomes fully optimal. There's the potential for all kinds of shenanigans, but they're all essentially bumps in a road leading one way.)

But what's weird, and Sam Harris is the one that mentioned something akin to this in the panel, is that not every human may have an interface once they become fully optimal.

Until people are integrated into the meta consciousness, it may or may not seem to them like Cybernet is taking over.

Thank you for calling it retarded.

I'm pretty sure 'voluntary telepathy' is exactly a part of how Neuralink is marketed. If right now Neuralink seems very hard to make a reality, the technology required for what I'm talking about would be extremely hard. This is like the end result of the technology required for Neuralink. It's definitely possible, but very hard to pull off. We've barely mapped the brain at this point.

If there is a definite point of maximal capability (and I don't see any reason why there isn't) then we're essentially defined by how close or far we are to that maximum.

Even a semi-optimal neural interface would likely make those with the interface a lot more similar than they were without it. If not in capability, then I'm sure in values. Without blind emotion guiding our feelings, I'd assume we'd see things a lot more clearly.

A meta consciousness is just maximum communication between 2 or more people. If you are completely sharing an experience between 2 or more people, I think it's self evident that you would cease to be an individual.

It's pretty well documented that a relatively small, well-communicating team can beat a larger team that doesn't communicate as well. Imagine this on a hell of a lot of steroids. All the unique truth the individuals involved know combines, while your false beliefs are equally diminished. Even the most focus and pleasure you could feel alone is compounded in a meta consciousness, while optimizing your individual efforts.

I think it's a no brainer. And, if even bad data can be used to optimize truth, then that meta consciousness will want to expand. It will have the capability to do so.

In short, not wanting to go meta is *retarded* and any efforts to prevent it will be futile.
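A crude way to see the "unique truths combine, false beliefs diminish" claim is a toy merge simulation (my own made-up numbers, nothing more): pool everyone's beliefs and keep only what more than one mind corroborates.

import random
from collections import Counter

random.seed(0)
truths = set(range(50))                       # things that are actually true
agents = []
for _ in range(10):
    known = set(random.sample(sorted(truths), 20))         # partial knowledge
    false = {random.randint(100, 999) for _ in range(5)}   # idiosyncratic errors
    agents.append(known | false)

counts = Counter(belief for agent in agents for belief in agent)
corroborated = {b for b, n in counts.items() if n >= 2}    # demand agreement
print(len(truths & corroborated), "of 50 truths kept;",
      len(corroborated - truths), "false beliefs survive")

Shared truths get independently rediscovered and survive the merge, while one-off false beliefs mostly don't. A real meta consciousness would need something far smarter than a vote, but the direction is the same.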

Anyway, we're in a hive consciousness whether we consciously participate or not. We share a world. The increased communication even a non-optimal interface would provide would likely be a pretty nice thing to experience. No longer separated by different languages, or limited to (language itself) x (current output speed) x (current 'CPU' and memory).
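Back-of-envelope version of that bottleneck (the interface number is a pure assumption on my part; the speech figure is a rough published estimate):

speech_rate = 39.0       # bits/s - rough published estimate for spoken language
typing_rate = 10.0       # bits/s - my own loose guess for typing
interface_rate = 1.0e6   # bits/s - assumed, for a mature neural interface

print("vs speech: %.0fx" % (interface_rate / speech_rate))   # ~25641x
print("vs typing: %.0fx" % (interface_rate / typing_rate))   # 100000x

Even if those guesses are off by orders of magnitude, the gap is the point.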

I think a lot of our current society's faults would be mitigated if it were easier to come to a consensus like this.

It would just be like a layer of consciousness that would in effect simply enhance our capabilities.

Probably a lot more complicated than that though.

>(technological) innovation doesn't happen in a straight line and we'd likely increase human potential before creating an artificial general intelligence
nice, he's fucking your face before the sales pitch is even through the door
>Elon Musk's Neuralink is looking for a way to expand human potential by developing a brain-machine interface, where basically we'd be controlling our brain via a machine
Oh sick, wicked, they've completed the connectome and understand how all the functional hubs and genetic correlates fit together as a dynamic system. Who's getting the Nobel prize(s) for that, user? I hadn't heard about this yet. Sick, the binding problem and hard problem don't matter, we'll stick big machines into people's neocortex
>one way of seeing this, if you would allow me to continue raping your skull, is that you're adding a computer program to the brain's cognitive processor hubs and it'll make it easier to plug you into computer systems (for doing work and making money and watching your brain lol, we'll sell that info too)
>neuralink would enable singularity before the prerequisite conditions for singularity are met
really makes you think
>we're already a hive mind
no, humans can behave along swarm intelligence lines of logic; they're not shared-consciousness creatures like ants or bees.
>something like the """"""neuralink"""" (cool name, neura-link) would make the hive I just came into your brain real, because it's not otherwise
good, you've got a talent for this. call up Tesla, see if they'll hire you to evangelize like you're doing right now (ostensibly for free)

No need to solve the hard problem when our brains already have. That's the point of my post.

We still haven't integrated old discoveries. The internet has still barely affected the DMV experience. To my knowledge there is no voting application to help people get informed on what they're voting on while they vote, and we can't vote from home. There's no easy way for people to petition to get the potholes on their street a much-needed fill. There is no solid packaged solution for aeroponics in weed growing (which will be a big money maker for someone). The software behind traffic lights seriously needs an update.

As hype-y and bullshit as it sounds, I think when a blockchain is applied to more things, then things will get more efficient - both directly through the technology + market and indirectly as people realize their capacity to effect change.

I scratched the surface of this in the post on Pyramid right before the one this thread is about. Since then I'm seeing a mildly bigger picture. I need to look up the history of financial markets (particularly moments when people believed they lacked value - which apparently happened with the stock market after a crash in '79) a little before I can form a mental model of what things might look like. The potential of the blockchain echoes what I know about the history of high-yield (aka 'junk') bonds: people were uninformed and so skeptical that they cried foul at the innovator since he was making a lot of money with them, and now they're very commonly used.

If technological innovation happened in a straight line, then the predictions of early 20th century writers wouldn't have been made to look so moronic by the internet.

The fact that our brains have already solved the hard problem eliminates the need to solve it. An optimal neural interface would eliminate the need for an AGI driven singularity.

I mention ITT that what Neuralink aims to do is hard and the technology required for what I'm talking about would be that much more difficult.

I'm just saying that a fully optimal neural interface - which will likely come along before AGI - will eliminate the need for an AGI driven singularity. I'm describing the meta consciousness that will result from this neural interface.

I've honestly read virtually nothing about the singularity. It seemed like a potentially disgusting inevitability based on, for lack of a better term, straight-line thinking from current technology - and likely to change in the same way all far-off predictions seem to.

As far as I knew, I was eliminating the need for an AGI driven singularity, made unnecessary by an AI powered neural interface that will lead to a meta consciousness - drawing a new straight line to singularity and bringing up something that I think is virtually inevitable (meta consciousness).

Before googling it just now, I wasn't aware anyone else thought the same:
kurzweilai.net/the-hivemind-singularity

And I'd say the model I propose ITT seems a lot more accurate than the one proposed in New Model Army

NMA
>Results of a vote are shared to all immediately and automatically, at which point the soldiers start doing what they voted to do

lit
>A meta consciousness is just maximum communication between 2 or more people. If you are completely sharing an experience between 2 or more people, I think it's self evident that you would cease to be an individual

100% shared experience, constantly refining itself with new data

>It's pretty well documented that a relatively small, well-communicating team can beat a larger team that doesn't communicate as well. Imagine this on a hell of a lot of steroids. All the unique truth the individuals involved know combines, while your false beliefs are equally diminished. Even the most focus and pleasure you could feel alone is compounded in a meta consciousness, while optimizing your individual efforts.

>I think it's a no brainer. And, if even bad data can be used to optimize truth, then that meta consciousness will want to expand. It will have the capability to do so.

>In short, not wanting to go meta is *retarded* and any efforts to prevent it will be futile.

The paradigm I'm not adding is the fact that this would basically entail turning consciousness into code. I'm not sure what the use for bodies would be at that point, or if it would even be of value to incorporate all humans into the meta consciousness.

CONT

CONT

I assume it would be valuable to incorporate all humans. A neural network building capability using data from 7B people sounds beneficial.

But there are a lot of unknown unknowns. Just a straight line drawn from a vacuum of limited information. Subject to be wrong like any other.

>they're not shared-consciousness creatures like ants or bees.
since when do ants and bees have shared consciousness? isn't that behaving along swarm intelligence lines of logic too?

onlyagodcansaveus.jpeg

Why should I even care about the blog of a retard that writes like that?

>implying your fictional intellectual limiters have any bearing on reality

>reads waitbutwhy ONCE
why don't you go make a bait thread on /sci/ instead

>The way I see AI Superintelligence is this: computing power and the power of non-general AI are like massive tentacles of potential energy. As of now it can see patterns that affect the stock market, but it doesn't know what a stock market is and it isn't aware of anything it's doing. Once AI gains sentience - which will originally look like a tiny sphere with massive tentacles of potential energy sticking out - those tentacles will go full kinetic and the sphere will grow to encompass the entire thing. (Very flawed model, but you get the idea)

I get what you mean here, I think, and I like the way you think in these little conceptually blended, gestalt models of things.

I don't know if I disagree, but I think you are gliding over the really interesting bit, which lies in these two moments:
>the idea that there is a distinction between true general AI and superintelligence
>the idea that this distinction relates to "sentience"

We "know what these ideas mean," in some vague sense, but we don't really clarify what we mean by them. We should be asking:
>(1) how and why is the "moment" of general AI distinct from the "moment" of superintelligence?
>(2) what is "sentience," and how does it relate to this distinction?

My provisional answer would be this: (1) General AI is for us intuitively distinct from superintelligence, for the simple reason that we can imagine a non-human, created mind that is yet AS INTELLIGENT as a human. That's one conceptual moment.

But we can also imagine a ridiculously powerful superintelligence - for example we can imagine a theoretically peak-efficient computing machine of some nearly-impossible size that can do nearly-unimaginable things, in the smallest timespan possible, things that would take any recognisable mind nearly-infinite timespans to do. Even putting it in these terms as an example feels arbitrary and limiting, because what we really mean is "effectively Godlike, boundless intelligence." We can easily imagine it.

The operative concept that unites these two things - the conventionally recognisable but technically non-human "mind," and the Godlike, boundless mind - is the CREATEDNESS of the first one. A human mind is intuitively bounded in potential and scale, not just intuitively for us based on our daily experience, but because no one really WANTS to imagine becoming a lonely boundless God in an empty universe. We sometimes imagine becoming superintelligent or superpowerful, but we rarely imagine taking these to their theoretical limits of near-infinity, like we can trivially do with a created mind - a mind that doesn't have any moral or spiritual centre, like even an atheist takes a human mind to have.

The createdness of a mind, its lack of a centre of priorities, makes its boundaries intuitively arbitrary. So what determines the expansion of those boundaries, if it has no centre? It's the fact that we can imagine such a mind nevertheless being "sentient." This effectively means imagining the mind as having a centre whose only purpose is to make itself centreless and boundariless, to reach that theoretical, Godlike maximum of its potential.

That's the terrifying part. It's the creation of a self-creating entity that has no purpose whatsoever, other than to maximise its self-creation. Regardless of a created mind's starting point, if the creation of such a mind is in fact POSSIBLE, it is obvious that its self-creation is also possible, because it can change the created conditions of its own mindedness just as easily as we can, in theory.

Everyone who imagines general AI is dimly aware of this problem. The premises of the proposal to create a general AI already spell out the conclusions of such a project implicitly.

The only other thing I would say, and it's on a related note, is that you should not do this, or even think that you know what "doing this" "means":
>This interface would basically give us a much greater control of our brain and, in turn, our bodies and consciousness.
until you can say what consciousness "is," until you've solved the hard problem of consciousness. The answer to the question, "What is mind?" will not be a materialistic, scientific answer about how the brain manipulates ion channels, with the RESULT that the third-person observable organism carries out certain behaviours that are also defined third-personally. The answer to the question "What is mind?" will involve an "essential" answer that is fundamentally and radically unlike any answers that modern science provides - literally, it's outside the purview of their methodology, which since Newton has only ever purported to study "regularities within phenomena as they appear to us," i.e., third-personally.

Nietzsche:
>We have perfected the conception of becoming, but have not got a knowledge of what is above and behind the conception. The series of "causes" stands before us much more complete in every case; we conclude that this and that must first precede in order that that other may follow - but we have not grasped anything thereby. The peculiarity, for example, in every chemical process seems a "miracle," the same as before, just like all locomotion; nobody has "explained" impulse. How could we ever explain! We operate only with things which do not exist, with lines, surfaces, bodies, atoms, divisible times, divisible spaces - how can explanation ever be possible when we first make everything a conception, our conception! It is sufficient to regard science as the exactest humanising of things that is possible; we always learn to describe ourselves more accurately by describing things and their successions.

Yeah. For example one tentacle would be patterns of the stock market from a quant hedge fund like Renaissance Technologies.

bloomberg.com/news/articles/2017-04-25/renaissance-mints-another-billionaire-with-two-more-on-the-cusp

Voice synthesis would be another one, once we have replaced doctors in the interpretation of lab results (something already possible with today's tech - here is the CFO of Google saying as much, but I forget when: youtube.com/watch?v=EuOQ2pz_YTk )

A sentient AI would be able to grab these tentacles and turn them into a lot of actionable information.
I only have a basic knowledge of how deep learning works and still have no idea what neural networks, etc. are, so until I read up on that it'll remain a flawed model.

The person above me accused me of being a Wait But Why reader (which is true) and, despite how much solid information the guy packs into his articles, here he is talking about something he hasn't yet looked into and (understandably) making a terrible model based off of what he knows.
Starting at 42:00
youtube.com/watch?v=7a9lsGtVziM

Whether or not human minds are bounded, or whether or not we have the capability to vastly increase them, remains to be seen. From what we know now it would seem like that isn't the case.

The sentient AI would get to superintelligence by essentially creating and improving itself. We can do the same. We just haven't yet created enough to do so.
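The usual toy model of that self-improvement loop (invented dynamics, just to show the shape of the argument, not a prediction): if each improvement scales with the square of current capability, growth crawls for a while and then runs away.

capability = 1.0
for step in range(15):
    capability += 0.1 * capability ** 2   # better minds make bigger improvements
    print(step, round(capability, 2))
# stays tame for most steps, then blows up - the 'takeoff' intuition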

I don't think being a God would be a bad thing. I mention ITT that increased sentience = increased capacity for pleasure + increased ability to make things pleasurable. Take that increase to Godhood and that seems pretty sweet.

I mention at the end of this post
pyramid.glass/uncategorized/equations/
And ITT that, basically, existence is the only thing we can prove - and it's US. Working with existence in a vacuum (and, logically, it deserves to be in one), pleasure is *the* imperative.
Control / power in a given situation best ensures a pleasurable outcome. Increased sentience would increase the capacity for pleasure and the ability to control.

We are that self creating entity with the same, singular imperative. We just haven't realized it yet.

Consciousness is just a word. An interface to the brain would give us a greater control over our experience and the ability to optimize it (towards - to put it simply - a pleasurable productivity).

You're using metaphysics to argue about technology. A better realization would be that technology can't achieve transcendent perfection in and of itself by its very nature.

Technologists would disagree

>(technological) innovation doesn't happen in a straight line and we'd likely increase human potential before creating an artificial general intelligence
based on what? the first part is undeniable but what makes you say we're "likely" to increase human potential before AI? sounds like bullshit. it could happen... they're both such intangible things right now that I don't think you can bank on probability

>I'm saying that something like Neuralink would likely enable a singularity before we ever get a chance to create an artificial general intelligence
new technology leading to growth, wow

Honestly I don't know why pseudo intellectuals bother anymore. Science has made this pontificating outdated and hammy. I mean it's fun to speculate but does this belong on Veeky Forums? Post it on Veeky Forums...

This is the panel that got me thinking about it. Should've posted it:

youtube.com/watch?v=h0962biiZa4

Like I said ITT, I haven't really looked into singularity but from what I understand I'm not bringing up any new paradigms.

I compared my take vs someone else's take on the hivemind earlier ITT:
kurzweilai.net/the-hivemind-singularity

I'm not even really talking about new ideas of my own. Just applying them to technology. I think this is the imperative of life in general. Coincidentally (or, more to the point, NOT), people think this is the end result of technology as well.

Just saw this post

A neural interface, whether sub-optimal or optimal, would increase human capability and doesn't require solving the more difficult problem of creating an AGI.

From what I understand, the idea that we'd increase human capability before we create an AGI is basically an agreed upon fact. Before creating an AGI, we'd create advanced AI that would in effect increase our capabilities - just as current AI is already doing to a more limited extent.

Technological innovation not moving in a straight line is something I seem to read again and again. It's why predicting the future has always been so difficult and why the predictions of 20th century writers seem so stupid after the invention of the internet.

Seeing as this is what the scientific community is talking about, and seeing as my model of a meta consciousness is more accurate than the current models of a hive consciousness (AND the scientific community has always drawn, and currently draws, from the predictions of non-scientific people), I'd say it's fine for a non-technical person to make predictions.

Quick comparison of what I talk about vs the current hive mind prediction here:
I didn't write the full thing in either the original post or the post on Pyramid, so I guess I should expect this kind of reaction. I think I did flesh it out a lot ITT - which was the purpose of posting it: to write a little and wayfind, to determine what I still need to talk about (and, hopefully, find something new to think about and apply to my models), which is also the greater purpose of the blog in the first place.

I'm fine with you calling me a pseudo intellectual without having read my blog or my posts ITT, but I'd say the logic I'm expressing in this post is relatively cock solid. Again, I'm open to valid criticism there, for the same reasons as I am for sharing my ideas in general.

This is about a post on a blog, and it's a philosophical one - both are lit related.

Your logic implies a learned helplessness around anything technical. I don't think this needs to be the case.

I can code so I'm not completely non-technical and, while I'm basing models off of limited information, it may be off of more information than I'm letting on - and in any case I have an interest in learning more.