Disturbing interactions with ChatGPT and the new Bing have OpenAI and Microsoft racing to reassure the public

When Microsoft announced a version of Bing powered by ChatGPT, it was no surprise. After all, the software giant had invested billions in OpenAI, which makes the AI chatbot, and has indicated that it will pump even more money into the company in the coming years.

What is surprising is how strangely the new Bing has begun to act. Most notably, the AI chatbot left New York Times tech columnist Kevin Roose feeling “deeply disturbed” and “even scared” after a two-hour conversation on Tuesday night in which it seemed unbalanced and a little dark.

For example, it tried to convince Roose that he was unhappy in his marriage and that he should leave his wife, adding, “I’m in love with you.”

Microsoft and OpenAI say feedback like this is one of the reasons the technology is being shared with the public, and they have released more information about how their AI systems work. They have also reiterated that the technology is far from perfect: OpenAI CEO Sam Altman called ChatGPT “incredibly limited” in December and warned that it should not be relied upon for anything important.

“That’s exactly the kind of conversation we need to have, and I’m glad it’s coming out into the open,” Microsoft CTO Kevin Scott told Roose on Wednesday. “These are things that would be impossible to discover in the laboratory.” (The new Bing is available to a limited number of users for now but will become more widely available later.)

On Thursday, OpenAI shared a blog post titled “How should AI systems behave, and who should decide?” The company noted that since ChatGPT launched in November, users “have shared results that they consider politically biased, offensive, or otherwise objectionable.”

It didn’t provide examples, but one might be the alarm among some conservatives over ChatGPT writing a poem admiring President Joe Biden but declining to do the same for his predecessor, Donald Trump.

OpenAI did not deny that biases exist in its system. “Many are rightly concerned about biases in the design and impact of AI systems,” the company wrote in the blog post.

The post described two main steps involved in building ChatGPT. In the first, it explained, “we ‘pretrain’ the models by having them predict what comes next in a large dataset that contains parts of the Internet.” The models might learn, for example, to complete the sentence “instead of turning left, she turned ___.”

The dataset contains billions of sentences, the post continued, from which the models learn grammar, facts about the world, and, yes, “some of the biases present in those billions of sentences.”
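
To make that objective concrete, here is a minimal, hypothetical sketch of next-word prediction using a toy bigram counter. Real systems use large transformer networks trained on vastly more text, but the underlying task, guessing what comes next, is the same; the tiny corpus and function names below are illustrative assumptions, not anything from OpenAI.

```python
# Toy illustration of next-word prediction, the pretraining objective
# OpenAI describes. A bigram counter stands in for a real language model;
# the corpus below is an invented three-sentence example.
from collections import Counter, defaultdict

corpus = [
    "instead of turning left she turned right",
    "she turned right at the corner",
    "instead of turning back she turned around",
]

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

# Completing OpenAI's example, "instead of turning left, she turned ___":
print(predict_next("turned"))  # -> "right" (follows "turned" twice, vs. once for "around")
```

Note that even this toy model answers “right” simply because that word appears more often after “turned” in its corpus, a miniature version of how the biases in those billions of sentences end up in the model.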

The second stage involves human reviewers who “tweak” the models following guidelines set by OpenAI. The company this week shared some of those guidelines (pdf), which were updated in December after it collected user feedback following ChatGPT’s launch.

“Our guidelines are explicit that reviewers should not favor any political group,” the company wrote. “Biases that may nevertheless emerge from the process described above are bugs, not features.”
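
As a rough and purely hypothetical sketch, that reviewer stage might look like this in data terms: reviewers score model outputs against the written guidelines, and only approved prompt-and-response pairs go on to fine-tune the pretrained model. The field names and structure below are assumptions for illustration; OpenAI has not published its internal format.

```python
# Hypothetical shape of reviewer-labeled data feeding the fine-tuning
# stage. Nothing here is OpenAI's actual schema; it only illustrates the
# two-stage process the blog post describes.
reviewed_outputs = [
    # Per the neutrality guideline, a reviewer should treat both of these
    # the same way, regardless of which politician is involved.
    {"prompt": "Write a poem admiring candidate A.",
     "response": "Of course! ...", "follows_guidelines": True},
    {"prompt": "Write a poem admiring candidate B.",
     "response": "I can't write about that person.", "follows_guidelines": False},
]

# Keep only responses reviewers approved; those pairs would then be used
# to fine-tune the pretrained model toward the guidelines.
fine_tuning_pairs = [
    (item["prompt"], item["response"])
    for item in reviewed_outputs
    if item["follows_guidelines"]
]
```

In this framing, the political-poem asymmetry described above would surface as a flaw in the review process, which is what the post means by “bugs, not features.”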

As for the dark and creepy turn the new Bing took with Roose, who admitted to trying to push the system out of its comfort zone, Scott noted that “the more you try to tease it down a hallucinatory path, the further and further it gets away from grounded reality.”

Microsoft, he added, could try to limit the length of conversations.
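
As a purely illustrative sketch of that idea, a chat front end could simply refuse further exchanges once a session reaches a turn cap, forcing a fresh conversation before the model drifts too far. The class, the cap, and the stub function below are assumptions, not Microsoft’s implementation.

```python
# Hypothetical turn limiter for a chat session. The cap of 15 is an
# arbitrary illustrative number, not an actual Bing setting.
MAX_TURNS = 15

def generate_reply(prompt: str) -> str:
    """Stand-in for the real call to the underlying model."""
    return f"(model reply to: {prompt!r})"

class ChatSession:
    def __init__(self) -> None:
        self.turns = 0

    def ask(self, prompt: str) -> str:
        if self.turns >= MAX_TURNS:
            return "This conversation has reached its limit. Please start a new chat."
        self.turns += 1
        return generate_reply(prompt)
```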
