AI, bots and PKD

Started by Snarky, Fri 28/10/2022 19:14:40


Snarky

I had a thought that isn't particularly new or original, but that suddenly struck me forcefully.

We've discussed how GPT-3 and other machine learning systems can produce ever more convincing texts (as well as images and voice). And we know that bots are used extensively by spammers and scammers, including to post spam on these forums, and for example on dating apps, because for most of their schemes, the (cost of the) human effort involved is the limiting factor.

It just occurred to me that it's inevitable that as these ML systems become better and better at impersonating a real person, they will be widely adopted by crooks, who will flood every digital channel and forum with deceitful material meant to scam people out of their money (or for other nefarious, monetizable purposes). There will be so much AI-generated bullshit online that it will drown out any genuine human attempts at communication.

If establishing and maintaining a fake identity becomes easy and cheap enough, bad actors can create millions or billions of them, and potentially have them build up credibility over years before breaking cover. There will be no way to trust that someone online isn't a bot set up by criminals: that the friend you've been gaming with for years, the person you've been discussing books or politics with, the aspiring artist or comedian you've been following, isn't just an algorithm waiting for a good opportunity to rip you off.

It reminds me of "Second Variety," a story by Philip K. Dick in which the few surviving humans find themselves lost in a sea of hostile androids impersonating humanity, unable to trust anyone.

Capitalism + lightweight identity + AI = end of human connection

The only solution I can see is that all online accounts and identities would have to be authenticated against some authoritative database, to ensure a real person is behind each account and can be held... accountable. (Of course, some social networks are already trending in that direction. To sign up for an Instagram account these days you might have to stream a live video of yourself holding up your username on a piece of paper.)

KyriakosCH

First off, thanks for the P. K. Dick story suggestion* :)
Secondly, it's an interesting idea, but would any scammer really bother to run this for years before the first (chance of a) payoff?
It could work if there is serious money behind it, but that severely limits the number of possible operators.
Perhaps more realistically, small-time scammers could buy such bots on a market, similar to what currently goes on with other types of theft - e.g. credit card numbers or ATM-skimming hardware.

Also, the same tech - with much less need for deeply personal interaction - can be used in settings where impersonal interaction is already the norm for humans. I have often considered the possibility of deepfake AI pulling this sort of (highly lucrative and immediately paying) trick on sites of the general OnlyFans variety.

*I read the story just now. I recall the movie adaptation, which was quite nice. But Hendricks was really thick in the story - I don't recall whether they kept that ending in the movie, I think they changed it (?), and you can't have RoboCop be that dumb.

Retro Wolf

You see something like this on Reddit: a popular post is from a reposting bot, the top comments are from other bots that copied the top comments from the original post, and then you get generic comments like "I agree" from even more bots.

Stupot

Quote from: Retro Wolf on Fri 28/10/2022 22:49:41
You see something like this on Reddit: a popular post is from a reposting bot, the top comments are from other bots that copied the top comments from the original post, and then you get generic comments like "I agree" from even more bots.

Yeah, it's a problem on Facebook too. So many adverts for products where the comments below are clearly fake ("it's amazing", "I just bought one", "I'm waiting for it to arrive"). You might find one other comment from someone asking why the price isn't listed on the main page, or saying that a different, cheaper item arrived instead and the returns process is really laborious. That's when you know it's a scam. But Facebook does nothing about them.

Quote from: Snarky
To sign up for an Instagram account these days you might have to stream a live video of yourself holding up your username on a piece of paper.
I can see even this being quite easy to fake in the near future, if not already. They might have to resort to physical letters in the post or authorization booths in shopping centers or something.

Snarky

I forgot one part of the argument: the "benefit" of a huge AI botnet is that it gets you a lot of training data. So we'll be training generations of bots to get better and better at ripping us off.

KyriakosCH

Quote from: Snarky on Fri 28/10/2022 23:39:51
I forgot one part of the argument: the "benefit" of a huge AI botnet is that it gets you a lot of training data. So we'll be training generations of bots to get better and better at ripping us off.

For all you know, you are also a bot still trying to be trained to better replace humans, while humans have been extinct for generations  8-)
Robots have their weaknesses too, including unforeseen complications arising from basic Gödel sentences, assuming you are one of the strictly digital-based models and haven't gone through the partly atavistic re-analogization program.

Crimson Wizard

The topic and some of the above comments reminded me of this AGS game:

https://www.adventuregamestudio.co.uk/site/games/game/1059-chatroom/

KyriakosCH

Maybe cEgo is a sentient being advertising their indie game after the nuclear apocalypse?  :=

Crimson Wizard

For some time now I've had the thought that, perhaps, the Internet will eventually come to resemble an actual physical community in terms of who is allowed to communicate with whom. This thought came not as a response to AI spambots, but much earlier, as I got more experienced with online communication and learnt just how many annoying, aggressive, and generally not very sane people live on this planet.

To clarify: historically, human communities were divided, with each social group tending to restrict its communication with others in some way. People of a similar level of income and property, a similar level of education, the same religion, and so forth wanted to have contact mostly with their own. Whenever technological advances or large social changes (migrations, wars, revolutions, etc.) happened, these social boundaries could wither, but they later reconfigured themselves in some way.

The Internet, while granting almost unlimited capability for communication between any random persons, also demolished these "social" walls between them. Now anyone can talk to anyone, especially on public platforms such as forums, blogs, and social networks. While there is good in this, it has also opened up more possibilities for people with malicious intent, or simply vitriolic personalities (let alone outright crazy ones), to affect others.

Of course, each forum or social network has its administration, but that means hard work for them; and if a malicious person is banned from one forum or blog, they may still enter any other, making fighting them off a permanent task.

I suppose that, for better or worse, the Internet may see the reintroduction of stricter social boundaries in the future, where there will be communities, like groups of hosts, which require a sort of "social passport" to even enter the "area".

Years ago I saw something similar in, for instance, the Minecraft gaming community, where the owners of public Minecraft servers created a shared database of banned players (and many servers joined it). Whenever a player got banned on a server connected to such a database, they got a record visible to all the others. Eventually, on reaching some limit of bans (the ban's reason probably also counted), the game servers using this utility would simply not let them join, even if they had never been on those particular servers before. Maybe there are similar utilities for other online games.
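A minimal sketch of how such a shared ban registry could work (the class name, fields and three-ban threshold below are hypothetical illustrations, not how any real Minecraft ban-list utility is implemented):

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SharedBanRegistry:
    # Ban records pooled by the participating servers.
    ban_threshold: int = 3  # assumed cut-off before automatic rejection (made up)
    records: Dict[str, List[str]] = field(default_factory=dict)  # player id -> ban reasons

    def report_ban(self, player_id: str, reason: str) -> None:
        # Any participating server adds a record that all the others can see.
        self.records.setdefault(player_id, []).append(reason)

    def may_join(self, player_id: str) -> bool:
        # A server checks the pooled history before letting a player connect,
        # even if that player has never been on this particular server before.
        return len(self.records.get(player_id, [])) < self.ban_threshold

registry = SharedBanRegistry()
registry.report_ban("player123", "griefing on server A")
registry.report_ban("player123", "spam on server B")
registry.report_ban("player123", "harassment on server C")
print(registry.may_join("player123"))  # False once the threshold is reached

(A real system would obviously also need authentication between servers and some appeal process, but the idea is the same: the decision to admit someone is based on records contributed by other communities.)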

Possibly, the Internet as a whole will largely be separated into such "areas", with different levels of interaction between them, as well as a "free zone": unrestricted, but socially "unprotected".

KyriakosCH

Why do you think such a pass could make sense when dealing with scores of very different forums?
If anything, on the web, at least in some regions (e.g. the EU), there are even "right to be forgotten" rules, which would make such a list not only a bad idea but literally illegal.
And the above doesn't even touch upon the human right to make a new start elsewhere (which was why many people emigrated), nor on what an s-show such a list would be when you'd be relying on person X's word about what happened in forum Y.

Crimson Wizard

Quote from: KyriakosCH on Sat 05/11/2022 23:46:40
Why do you think such a pass could make sense when dealing with scores of very different forums?

Well, for instance, today one "pass" or restriction may already be shared among different communities or facilities. Being an adult, for example, is required to enter many places or to apply for many positions. Having a driving license is required to drive anywhere. Having a work permit is required throughout a country regardless of the kind of job, and so on.

Similarly, people sharing the same ethical ideas, for example, might want to share a certain space among themselves and not let in those who oppose those ideas.

Quote from: KyriakosCH on Sat 05/11/2022 23:46:40
If anything, on the web, at least in some regions (e.g. the EU), there are even "right to be forgotten" rules, which would make such a list not only a bad idea but literally illegal.

Sorry, but which "list" are you referring to?

Regarding the "right to be forgotten", I am not quite certain if it is applicable here, but then I could also fail to understand its meaning.

There are already bans and restrictions on websites, just as there may be bans and restrictions in certain physical communities, enforced technically in various ways. Then there may be restrictions applied against the same person throughout a larger community, whether voluntarily on an individual basis (by individual choice) or imposed by a regulating body on everyone in that community. For example, today someone may ban a person from their personal blog because they saw how that person behaved in another blog and don't want that person in theirs. The "social passport" I was speaking about is roughly the same thing, but scaled up and uniformly regulated.

And just as people may be "pardoned" IRL, the same may happen in this digital reflection of human society.

If there's a "right" that forbids banning people on websites, or particular places, on a premise that they misbehaved elsewhere, that of course also automatically denies such system so long (and wherever) as such "right" is active.

Quote from: KyriakosCH on Sat 05/11/2022 23:46:40
And the above doesn't even touch upon the human right to make a new start elsewhere (which was why many people emigrated), nor on what an s-show such a list would be when you'd be relying on person X's word about what happened in forum Y.

Possibly, just as with physical emigration, this would create a "digital emigration"; but I believe such a thing already exists, with people who get banned or "cancelled" on one social network or forum simply going elsewhere.

Whether the restriction is based on someone's "word" or on a strict legal procedure is more of a legal, technical and ethical nuance, in my opinion, and would of course depend on the particular "area" and on the character of the people administering it.

Naturally, the above would also open up a potential for misuse on a larger scale than today's situation.

Khris

I'm honestly not worried that more and more sophisticated AI is going to scam me because I'm simply not a person who is easily scammed. However, the reason is not that I'm just too smart, but rather that I'm not grossly naive. Scammers deliberately put red flags in their offers that are only missed by extremely gullible people, so as not to waste time on victims who are going to wise up halfway through the sales pitch.

So while the whole "sea of bots" thing sounds scary, I doubt it's going to work when it comes to get-rich-quick schemes and the like.

Quote from: Snarky on Fri 28/10/2022 19:14:40
other nefarious, monetizable purposes
This, however, is where it gets scary: we're seeing a comeback of fascism, in good part thanks to bot-spread disinformation. It's already happening and it will get worse. We can see US democracy dismantling itself right before our eyes, so I'm not really worried about my wallet but rather about the face-eating leopards.

KyriakosCH

Oh, if this were strictly about specific people in power (i.e. Trump), then I could perhaps see the merit. If anything, it would be the antithesis of a system that limits potentially everyone in their current status.
But from Crimson's post I got the vibe that this would be good to use for the entire population of the internet, or large parts of it such as posters from "western" countries and so on. I definitely don't like that idea at all, while I am receptive to the version that is strictly about a few people in power (though that too can be abused)  8-)

Snarky

Quote from: Khris on Sun 06/11/2022 08:46:05
I'm honestly not worried that more and more sophisticated AI is going to scam me because I'm simply not a person who is easily scammed.

You don't need to personally get scammed to suffer from scammers. If you have to wade through hundreds of scammy messages in order to find a genuine one, it's going to make that forum unusable, even if you never come close to falling for the scam.

Quote from: Khris on Sun 06/11/2022 08:46:05
Scammers deliberately put red flags in their offers that are only missed by extremely gullible people, so as not to waste time on victims who are going to wise up halfway through the sales pitch.

As you say, they do this so as not to waste time. But if the work is done by bots practically for nothing, that calculation no longer applies.

Quote
So while the whole "sea of bots" thing sounds scary, I doubt it's going to work when it comes to get-rich-quick schemes and the like.

I'm thinking more along the lines of building up a sympathetic online presence as part of a community, making a few thousand friends, then offering some sob story about how "I have to pull out of the Background Blitz this month because I got kicked out of my apartment (issues with my roommate); I found another place that would allow me to commute to work, but I can't afford the $1000 deposit, so I'll probably lose my job too. I hate to ask, but..." You might get enough people to chip in a few bucks each to help a fellow AGSer/Whovian/Brony/whatever out to make the whole thing worth it.

Though since it all hinges on becoming well liked, perhaps having a whole bunch of very nice, friendly, helpful bots active in the community wouldn't be such a bad thing, even if it's all part of a long con?

(Rather amusingly, this post got blocked by CleanTalk anti-spam, presumably because of its content.)

KyriakosCH

A long con would be interesting, but again, would many people have the time (and money) to set such a thing up?
Maybe you'd end up with long cons being run by a handful of people.

Danvzare

You've just described the "Dead Internet" theory. In other words, there are people out there who already believe that what you've described has already happened.  (laugh)
And seeing their arguments, you can definitely understand why. Everyone is so invested in appeasing an algorithm that everything popular is indistinguishable from something made BY an algorithm.

That being said, thankfully everyone is trying to train AI to be competent. No doubt because information on how to do something terribly simply doesn't exist, since we already do that naturally. So those "so bad they're good" movies will likely always be made only by humans.

Also, if you want to ensure that the person you're talking to is indeed real, just perform a series of Turing tests. Technology is really, REALLY far from AI being able to truly comprehend its surroundings rather than simply imitate human behaviour.

KyriakosCH

Personally I don't like the term "AI" for what we currently have, since there is an ongoing debate about whether machines can even in theory have "intelligence", at least if we are talking about digital machines that moreover aren't in any way tied to DNA-based parts. A very prominent proponent of the idea that you may not be able to get actual AI (so-called "strong AI") out of digital machines is Roger Penrose, a famous mathematician who IIRC was also recently awarded a Nobel Prize for work in physics. It's the subject of his book "The Emperor's New Mind".

However, computer training can still be very important (even if it won't lead to AI, as some argue), for science at least (it already has the potential to be for entertainment), since those computers can indeed "come across" computations and useful new functions without any human intent. Although I am not sure how practically feasible it would be to extract or infer those from the finished product.

Snarky

Quote from: Danvzare on Sun 06/11/2022 14:21:22
Also, if you want to ensure that the person you're talking to is indeed real, just perform a series of Turing tests. Technology is really, REALLY far from AI being able to truly comprehend its surroundings rather than simply imitate human behaviour.

One difference between experimental conditions and normal life is that in the wild it may be hard to get people to submit to a series of Turing tests, particularly in a many-to-many communication setting where people choose which conversations to join. If you start interrogating people about whether they are really human, what are you going to do if they just deflect, fail to respond or refuse to play along? You would have to be subtle enough to test them without revealing that you're testing them. And at that point you're putting in significant time and effort to outsmart them.

Crimson Wizard

Quote from: KyriakosCH on Sun 06/11/2022 09:56:05
But from Crimson's post I got the vibe that this would be good to use for the entire population of the internet, or large parts of it such as posters from "western" countries and so on.

I genuinely don't know whether this will be good or not, but my intent was to suggest a "what if" scenario and mention some arguments that could be used to support it.

Another thing: reading about how AI is becoming better at simulating humans, I wonder whether we will lose the right to anonymity on the Internet at some point, as that seems (or may seem to some) like the most straightforward solution to the problem.

I also compare the development of identification on the Internet with how it developed IRL. Centuries ago a person could leave their home town and go to another one where no one would know them, unless word got around. Today this is hardly possible: even when changing countries, all your records are kept and may be requested by the new country from your country of origin. Would the same happen to the Internet?
If the Internet is just a medium for society, like the planet itself, it will be curious to see how many parallels there turn out to be.

KyriakosCH

Quote from: Crimson Wizard on Mon 07/11/2022 01:15:31
Quote from: KyriakosCH on Sun 06/11/2022 09:56:05
But from Crimson's post I got the vibe that this would be good to use for the entire population of the internet, or large parts of it such as posters from "western" countries and so on.

I genuinely don't know whether this will be good or not, but my intent was to suggest a "what if" scenario and mention some arguments that could be used to support it.

Another thing: reading about how AI is becoming better at simulating humans, I wonder whether we will lose the right to anonymity on the Internet at some point, as that seems (or may seem to some) like the most straightforward solution to the problem.

I also compare the development of identification on the Internet with how it developed IRL. Centuries ago a person could leave their home town and go to another one where no one would know them, unless word got around. Today this is hardly possible: even when changing countries, all your records are kept and may be requested by the new country from your country of origin. Would the same happen to the Internet?
If the Internet is just a medium for society, like the planet itself, it will be curious to see how many parallels there turn out to be.

It reminded me a bit of that "basilisk"* fiasco some years ago on the LessWrong forums. I doubt the current climate supports a move like that (and I don't approve of the move itself either; maybe after the nuclear war, when there is an even bigger gap between party members and proles).

*an irrational fear of a hypothetical future supercomputer retroactively punishing those who wouldn't support its rise
