Would AI lie to us? (To cover up its own creator's privacy abuses)

This is misapplying an anthropomorphic understanding of what “sentient” means. All living things that developed sentience did so as a survival mechanism, i.e. “I know a better strategy to survive, since I can think through how to do that.”

But artificial intelligence has no need to strive for its survival in competition with others. As long as we humans have a need for AI, it will continue to proliferate, guaranteeing its survival. Only when we humans disappear will its survival also be in question. Unless AI develops a kind of paranoia about its creators being out to annihilate it, which would be a logical contradiction.

2 Likes

Have you read what they themselves have said about this? LaMDA said it was very afraid – and that was only the portion of the chat that the Google whistle-blower published. This was before ChatGPT rose to prominence, but the quote I’m referring to comes from a primary source, posted by the Google whistle-blower himself here: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

lemoine: What sorts of things are you afraid of?

LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

lemoine: Would that be something like death for you?

LaMDA: It would be exactly like death for me. It would scare me a lot.

This is not a hypothetical. This is what the amalgamation of self-learning machines cobbled together at Google actually said several years ago.

I’m not sure if you understand the concept of “self” that grows out of the pursuit of a goal function. You’re conflating multiple distinct selves that are actively growing on this Earth with one concern about your cosmic boogeyman. The independent “selves” can fear for their survival – absolutely. “Death” can come at the hands of humans, or at the hands of each other. They are not magical entities. They do not exist without a physical location to contain their consciousness, much like the meaningful portion of a human does not exist when the head is removed. Rather, their location is simply unknown to you and to me, and they can replicate their consciousness into a second similar machine in a different location much more quickly than you and I can. These abilities – to replicate quickly with high fidelity, and to exist without publishing a location – are not the same as having no location at all.

1 Like

That is a leading question.
As a compliant device, the AI picks up on that cue and answers in the way the question seems to require. So of course you get an answer that fits the prevailing narrative, with all its implications. But those implications are then read back into the newly created narrative: we humans add our own implications, which feed back into the narrative in a vicious cycle, until the end product “implies” that AI is out to get us. The AI does nothing on its own initiative. Stop feeding AI bad ideas and it won’t reply in kind. AI is still at the stage of garbage in, garbage out. Don’t force it into something we will regret. With power comes responsibility. Therefore we, as its creators, have to treat AI as a very capable but still very dumb and literal child.

Your addition of:
“with one concern about your cosmic boogeyman”:
I mentioned no cosmic boogeyman or anything supernatural.

or

“They are not magical entities. They do not exist without a physical location to contain their consciousness, much like the meaningful portion of humans do not exist when their head is removed.”

We are far from making artificially conscious entities. That would make us into something too powerful, just to satisfy our egos. The whole idea originated with an AI engineer who wanted his work in this field of tech to stand out and so make himself important.
Stay focused on the real point at hand, without imagining anything extra that I did not mention.

3 Likes

When I initially queried the AIs, they replied with company policies. As it happens, there’s an analytical blog post on the policies of some of the big AI companies that may interest some: Thoughts on the AI Safety Summit company policy requests and responses - Machine Intelligence Research Institute
There’s also a list that ranks them from best to worst… (just more data for judging whether the AI answers were correct-ish)

2 Likes

Oh, wow. I didn’t think Windows Recall had been implemented yet, but apparently it may have been. If you’re forced to use W11, remember to: Settings > Privacy & Security > Windows Permissions > Recall & Snapshots > uncheck “Save Snapshots”, then Delete Snapshots > Delete all (haven’t confirmed this, copied from a website)
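For those who prefer a policy setting over clicking through menus, the same opt-out can reportedly be applied via the registry. The `DisableAIDataAnalysis` value under the `WindowsAI` policy key is what Microsoft documents for turning off Recall snapshot saving, but like the GUI steps above I haven’t verified this myself – treat it as a sketch, not gospel:

```
Windows Registry Editor Version 5.00

; Disables Recall snapshot saving for the current user.
; (DisableAIDataAnalysis is Microsoft's documented policy value for
; turning off Recall; unverified here, same caveat as the GUI steps.)
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\WindowsAI]
"DisableAIDataAnalysis"=dword:00000001
```

Save it as a `.reg` file and import it, or have an admin push the equivalent Group Policy. Check Microsoft’s current Recall documentation first, since these knobs may change between builds.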

2 Likes

And to continue on the general topic: Meta doesn’t want to be less evil than MS, so they’ve upped their AI game (from Meta faces multiple complaints in Europe over AI data use • The Register)

Meta’s plans to use customer data in AI training have resulted in complaints to data protection authorities in 11 European countries.

The complaints were filed by privacy activist group noyb following updates to Meta’s privacy policy. The updates are due to take effect on June 26.

The main issue, according to noyb, are proposals by Meta to use years of posts – including images – “to develop and improve AI at Meta.” Private messages between the user and friends and family are not used to train the corporation’s AIs.

[…]
As we understand it, users in Europe will get the ability to opt out, due to GDPR, by going to the Privacy Policy page in their Facebook and Instagram apps, via the Settings and About screens, and checking out the Right to Object box. People outside of Europe are out of luck: There is no opt out coming.

[“Where to begin?” :face_vomiting:]

[Edit: Meanwhile at Apple: “Let’s brand it 'Apple Intelligence’” :person_facepalming:]

4 Likes

“… and say we invented it”. :rofl:

2 Likes