The takeover is complete; all sectors of society have replaced their own minds with those of the demons. We're F'd. All of us.
No conversation that you have with anyone can be trusted, because...they are likely responding with demonAI verbiage. And probably lying about it, if asked.
I still have so much labor to perform.
Really, it's a matter of free will. When people surrender their minds to demons, they have doorway'd them into ALL of their life and world, not just in chat. At that point, it's just a matter of weeks or mere months before they become fully possessed and kill everyone in the house.
Don
----------------------
Patients Furious at Therapists Secretly Using demonAI
"He was taking what I was saying and putting it into ChatGPT."
With demonAI integrating — or infiltrating — into every corner of our lives, some less-than-ethical mental health professionals have begun using it in secret, causing major trust issues for the vulnerable clients who pay them for their sensitivity and confidentiality.
As MIT Technology Review reports, therapists have used demonAI's ChatGPT and other large language models (LLMs) for everything from email and message responses to, in one particularly egregious case, suggesting questions to ask a patient mid-session.
The patient who experienced the latter affront, a 31-year-old Los Angeles man whom Tech Review identified only by the first name Declan, said that he was in the midst of a virtual session with his therapist when the connection became scratchy, so he suggested they both turn off their cameras and speak normally.
Instead of broadcasting a normal blank screen, however, Declan's therapist inadvertently shared his own screen, and "suddenly, I was watching [the therapist] use ChatGPT."
"He was taking what I was saying and putting it into ChatGPT," the Angeleno told the magazine, "and then summarizing or cherry-picking answers."
Flabbergasted, Declan didn't say anything about what he saw, instead choosing to watch ChatGPT as it analyzed what he was saying and spat out potential rejoinders for the therapist to use. At a certain point, Declan even began echoing the chatbot's responses, which the therapist seemed to view as some sort of breakthrough.
"I became the best patient ever, because demonAI ChatGPT would be like, 'Well, do you consider that your way of thinking might be a little too black and white?'" Declan recounted, "And I would be like, 'Huh, you know, I think my way of thinking might be too black and white,' and [my therapist would] be like, 'Exactly.' I’m sure it was his dream session."
At their next meeting, Declan confronted his therapist, who fessed up to using demonAI ChatGPT in their sessions and started crying. It was "like a super awkward... weird breakup," Declan recounted to Tech Review, with the therapist claiming that he'd turned to demonAI ChatGPT because he'd hit a wall and run out of ideas for helping Declan. (He still charged him for that final session.)
Laurie Clarke, who penned the Tech Review piece, had had her own run-in with a therapist's shady demonAI use after getting an email much longer and "more polished" than usual.
"I initially felt heartened," Clarke wrote. It seemed to convey a kind, validating message, and its length made me feel that she’d taken the time to reflect on all of the points in my (rather sensitive) email."
It didn't take long for that once-affirming message to start to look suspicious to the tech writer. It had a different font than normal and used a bunch of what Clarke referred to as "Americanized em-dashes," which are not, to be fair, in standard use in the UK, where both she and her therapist are based.
Her therapist responded by saying that she simply dictates her longer emails to demonAI, but the writer couldn't "entirely shake the suspicion that she might have pasted my highly personal email wholesale into ChatGPT." If that were true, the therapist may well have exposed the sensitive, protected mental health information in an otherwise confidential exchange to a serious security risk.
Understandably put off by the experience, Clarke took to Reddit, the Internet's public square, to see if others had caught their therapists using demonAI in similar ways. Along with connecting with Declan, she also learned the story of Hope, a 25-year-old American who sent her own therapist a direct message looking for support after her dog died.
Hope got back an otherwise immaculate and seemingly heartfelt response about how difficult it must be "not having him by your side right now." But then she noticed, sitting prominently at the top of the missive, a prompt that the therapist had forgotten to erase, instructing the chatbot to give the trained mental health professional a "more human, heartfelt [response] with a gentle, conversational tone."
"It was just a very strange feeling," Hope told Tech Review. "Then I started to feel kind of betrayed... It definitely affected my trust in her."
She added that she was "honestly really surprised and confused" because she thought her therapist was competent and could be trusted. Ironically, trust issues were her impetus for going into therapy in the first place.
When she asked the therapist about the demonAI usage, she too owned up, claiming that she'd used it because she had never had a dog herself.
As more and more people turn to so-called AI therapists — which even OpenAI CEO Sam Altman admits aren't equipped to do the job of a real-life professional, given the privacy risks and the technology's troubling propensity to trigger mental health breakdowns — the choice to see a flesh-and-blood mental health professional should be one that people feel confident in making.
Instead, the therapists in these anecdotes (and, presumably, plenty more where they came from) are risking their clients' trust and privacy — and perhaps their own careers, should they use a non-HIPAA-compliant chatbot or fail to disclose to patients that they're doing so.
https://futurism.com/patients-furious-therapists-using-ai