Can machine learning be used to detect trollposts on Internet forums? What methods are good...

Can machine learning be used to detect trollposts on Internet forums? What methods are good? K-means, logistic regression, Bayesian methods, neural networks, what?

Attached: WP_20171127_003.jpg (1456x2592, 1.4M)

Sure, if you feed it enough troll posts

How do you decide which is a troll post?

they will shut down the internet before you teach AI to differentiate between genuine baitposting, sarcasm and plain old jokes

Attached: 1475571134670.jpg (200x184, 14K)

A random forest could probably do this well.
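Something like this, roughly — a sketch assuming scikit-learn is installed; the posts and troll labels are invented toy data, not a real dataset:

```python
# Rough sketch of the random-forest idea, assuming scikit-learn is installed.
# The posts and troll labels below are invented toy data, not a real dataset.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "interesting paper, thanks for sharing",
    "great explanation, the proof finally makes sense",
    "lol you are all brainlets",
    "gtfo brainlet, this board is trash",
]
labels = [0, 0, 1, 1]  # 0 = normal, 1 = troll (hand-labeled)

vec = TfidfVectorizer()  # turn each post into a weighted word-count vector
X = vec.fit_transform(posts)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)

# With this little data the forest just memorizes the training set;
# a real attempt would need thousands of labeled posts.
print(list(clf.predict(X)))
```

With four posts this only memorizes; the point is the pipeline shape, not the numbers.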

In the general case it's impossible. The exact same post can be a troll post or not depending on who wrote it.

A single post probably isn't enough.
The forum would have to keep a running "troll score" on IP addresses and issue a warning if someone gets below a certain level -- or if other users keep replying "GTFO, brainlet!" or if you've begun 50 identical threads espousing your personal crackpot theory.
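The running-score part is trivial to mock up. Something like this, where the penalty, reward, and threshold numbers are pulled out of thin air:

```python
# Toy per-IP "troll score" tracker, as described above. The penalty,
# reward, and warning-threshold values are arbitrary assumptions.
from collections import defaultdict

WARN_THRESHOLD = -3

class TrollScore:
    def __init__(self):
        self.scores = defaultdict(int)

    def record(self, ip, flagged):
        # A flagged post costs 2 points; a normal post earns 1 back.
        self.scores[ip] += -2 if flagged else 1

    def should_warn(self, ip):
        return self.scores[ip] <= WARN_THRESHOLD

tracker = TrollScore()
for flagged in (True, True, False, True):
    tracker.record("203.0.113.7", flagged)
print(tracker.scores["203.0.113.7"], tracker.should_warn("203.0.113.7"))
# -5 True
```

The "other users keep replying GTFO" signal would just be another input into `record`.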

Veeky Forums supposedly allows you to mark posts as illegal or inappropriate (and they tell you abuse of this feature will get YOU banned) but Veeky Forums has almost non-existent moderation.

In addition, websites vary. A serious, well-thought-out, rational comment would probably get you kicked off Breitbart.

The algorithm would be able to detect who wrote the post, in particular on a board like /pol/ where it has the additional information of flags and IDs, and it can use that context to make a much better-than-random guess.

You guys are way overthinking this. An ironic shitpost is still a shitpost. All OP needs to do is find data sources for relatively general topic forums where historical flagged posts were identified. The program doesn't need to have a bunch of explicit instructions about irony, it just needs to minimize the error function when trying to label posts from those data sources where you already have the answer for whether they count as shitposts or not.
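To make "minimize the error function" concrete: a bare-bones logistic regression trained by gradient descent on hand-labeled posts. All posts and labels here are invented; pure stdlib:

```python
# Minimal "minimize the error function on labeled posts" demo:
# bag-of-words features + logistic regression trained by gradient
# descent on the log loss. All posts and labels are invented toy data.
import math

labeled_posts = [
    ("thanks for the detailed answer", 0),  # 0 = normal
    ("nice derivation, very clear", 0),
    ("obvious bait, gtfo brainlet", 1),     # 1 = shitpost
    ("gtfo brainlet", 1),
]

vocab = sorted({w for text, _ in labeled_posts for w in text.split()})

def featurize(text):
    words = text.split()
    return [words.count(w) for w in vocab]

X = [featurize(text) for text, _ in labeled_posts]
y = [label for _, label in labeled_posts]
w = [0.0] * len(vocab)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Stochastic gradient descent on the log loss (the "error function").
for _ in range(200):
    for xi, yi in zip(X, y):
        p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)))
        for j, xj in enumerate(xi):
            w[j] -= 0.5 * (p - yi) * xj

def predict(text):
    p = sigmoid(sum(wj * xj for wj, xj in zip(w, featurize(text))))
    return 1 if p > 0.5 else 0

print(predict("gtfo brainlet"), predict("thanks for the detailed answer"))
# 1 0
```

Irony never enters into it: the model just pushes weights until the labels come out right on the training data.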

Probably a text analyst with a custom library.

Yes. Naive Bayes is commonly used for spam filtering.

The question is how you represent your input data.
Bag of words, word embeddings, syntactic trees...
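For reference, the whole spam-filter-style pipeline fits in a few lines with plain bag-of-words counts. Training texts and labels below are invented; add-one (Laplace) smoothing, pure stdlib:

```python
# Minimal naive Bayes over bag-of-words counts, the classic spam-filter
# setup. Training texts and labels are invented; add-one (Laplace) smoothing.
import math
from collections import Counter

train = [
    ("buy cheap pills now", "spam"),
    ("cheap pills cheap pills", "spam"),
    ("meeting at noon tomorrow", "ham"),
    ("lunch tomorrow at noon", "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    best_label, best_lp = None, -math.inf
    total_docs = sum(class_counts.values())
    for label in class_counts:
        # log P(class) + sum over words of log P(word | class)
        lp = math.log(class_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / (total_words + len(vocab)))
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

print(classify("cheap pills"), classify("lunch at noon"))
# spam ham
```

Swap "spam"/"ham" for "troll"/"normal" and it's the same model; the hard part is the labeled data, not the math.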

>Can machine learning be used to detect trollposts on Internet forums.
Can machine learning be used to -declare- trollposts on an internet forum? It's a subtle difference, but one that allows for dystopian results, and that's what many people in control of the media want.

>Probably a text analyst with a custom library.
>library of things i don't like

the question is: can machine learning be used to make trollposts on internet forums?

That escalated quickly!

Google tried it because /pol/ threads would show up as the top result for various in-the-news topics. They basically just tried to make the AI learn the word nigger. /pol/ then started referring to black people as googles for a while.

Short answer is no. Computers don't operate the same way a mind does. Computers can only identify what they have been programmed to identify, which is to say that any AI routine, although computationally fast, is dumb because it cannot teach itself or learn new information. The brunt of coding comes down to boolean "IF" statements.
if (x == "nigger") { do_this(); } else { do_that(); }
Simple true-or-false evaluations. It's a very binary and simple way of thinking.
For example, say we had an AI attached to a camera inside a farm, and all it has to do is count animals. The programmer would have to provide reference material of thousands of differently angled photos of each animal on the farm for it to even somewhat reliably tell a pig from a cow, and maybe some routine could be written to acquire new reference images of each animal it has already identified while it watches through the camera. Further, it would need some spatial orientation and object permanence, so that if one cow crosses in front of a pig, it won't get confused and think this combined cow-pig image is a different animal altogether and remove 1 cow and 1 pig from the count; or if a pig goes behind a tractor, it won't assume there is 1 less pig on the farm just because the camera can't see it.
Now assume the AI only knew about cows and pigs, but then chickens are introduced on the farm. You could have asked Cleetus to go count all the animals, and even if he'd never heard of or seen a chicken before, he'd at least return with a count of fluffy birds. The AI only knows true and false on pigs and cows, so it won't count chickens.
>is it a pig? no
>is it a cow? no
>is it a farmer? no
>no counts performed
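The closed-set problem above, as literal code — the sightings list stands in for whatever an image classifier would output:

```python
# Toy version of the closed-set counting problem described above: the AI
# only knows "pig" and "cow", so a chicken falls through every check and
# is never counted. The sightings list stands in for classifier output.
KNOWN_ANIMALS = ("pig", "cow")

def count_animals(sightings):
    counts = {animal: 0 for animal in KNOWN_ANIMALS}
    for sighting in sightings:
        # >is it a pig? >is it a cow? ...anything else: no count performed
        if sighting in counts:
            counts[sighting] += 1
    return counts

print(count_animals(["pig", "cow", "chicken", "pig", "chicken"]))
# {'pig': 2, 'cow': 1}
```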

being sarcastic without using a sarc tag is trolling and trolls are evil people who deserve death.

All this goes towards the point that there is no true AI that is not general AI. Purpose-built AI does not have all the reference material to evaluate an arbitrary scenario, yet a general AI used for a singular purpose will generate massive amounts of unnecessary information which could just as easily be offloaded into the wrong hands. A general AI smart enough to count animals on a farm would have an excessive amount of general knowledge, such as being able to identify individual people, cars, and license plates, and the storage to retain them for object permanence. Farmer Joe only needs animal-count integers, but the AI will also know what Farmer Joe was wearing on January 3rd, or what Cleetus was doing to a pig on February 14th, and given that this general AI had to be funded and installed by some third party, there's no telling where all this information will end up, if not in some government or Google database. Most general AI development is cloud-sourced, so Farmer Joe's animal-counting camera will no doubt be referenced by a general mainframe to benefit the continued development of general AI, and come 1000 years from now when most of society is AI-controlled, people will be fucking pigs assigned through AI Tinder1000 because Cleetus was a goddamned retard.

The short of this is: AI is dumb. Maybe you want to believe it's because AI is still such an early idea that time will fix, but it won't ever be good, now or in the long run. Even now you do free, unpaid, uncredited work for Google by solving captchas, which goes towards teaching its AI.

Unfortunately for everyone, a general AI would essentially have to be a really expensive, energy-inefficient person who can keep no secrets and can't be trusted not to share information, which is in stark contrast to the philosophy of capitalism driving its development. Even if such a humanesque AI could exist, retarded liberals would treat it as human and give it rights. We already had general AI up til 1865, after all.

Don't need to give it rights when you can just make it look like a straight, white male.

Attached: 1515867206569.jpg (383x424, 106K)

at some point this would flag real idiots as trolls