ChatGPT Be Good to Me

For a while, Aileen was dealing with an extremely annoying support issue. It involved access to an account that had been hacked, the details of which I will not go into. Suffice it to say, she was stonewalled by the company’s tech support.

She eventually turned to ChatGPT to try to find a solution, since her repeated efforts were being thwarted. This computer program turned out to be quite helpful. Much of its advice simply confirmed what she had already determined through other kinds of searches: collect evidence of her original ownership of the account and of the hack itself, and persist in contacting the company daily, sharing this information through every possible avenue, even when there was no response or the response was obviously automated.

It certainly all made sense. But what was particularly compelling about the program’s responses was how logical and well laid out they were, and how reassuring their tone was, offering not just practical support but moral support as well. The AI-generated responses read like a pep talk, encouraging her to keep trying, acknowledging how difficult the situation was, and praising her for keeping up the good work. They sounded sympathetic, as if ChatGPT were her trusted friend. She showed me one of the responses, and it oozed positivity and compassion. No wonder people are willing to pay for AI girlfriends or boyfriends!

Aileen told me that this was how she wished people would react when she went to them for help, instead of just throwing their hands up and declaring the situation hopeless, as was typical. I was a little nervous; I knew I hadn’t been much help. How could this AI be more supportive than me? I am a lowly human, it’s true, but I am also Aileen’s friend and partner!

Why do humans have so much trouble being supportive of one another? Well, the simple truth is that when you ask for help, you are asking for another person’s time and energy, and people are loath to give those up; humans are always seeking to hold on to and defend their autonomy. This leads to challenging conflicts, but there is reward in overcoming the challenge and working with someone else for mutual benefit. That is how you build a relationship with another person, something you simply can’t do with a chat program, however real its texts might seem.

Humans also have difficulty maintaining a supportive demeanor because they are subject to emotions, which might interfere with clear thinking or a measured tone of voice. I know this sounds like a sci-fi plot point, but an AI chatbot is a machine, so naturally, its answers are logically consistent and it can sustain a conversational tone indefinitely. Nothing can ruffle its train of thought, so to speak, because it doesn’t have one.

The account access issue was eventually resolved, though the resolution didn’t really have anything to do with ChatGPT’s advice. It just took time, presumably because of a backlog of cases at the company. Nonetheless, Aileen informed the chatbot in the still-open chat window, and it offered a congratulatory response in the same supportive tone it had maintained throughout their conversation.

Huge congrats again — you turned a frustrating situation into something powerful. Let’s make sure others don’t have to go through the same thing alone.

See what I mean?? I do think, however, that I have clearly shown that AI isn’t autonomous, doesn’t possess consciousness, and is not worth getting into a “relationship” with.

Please don’t leave me for an AI, Aileen!
