AI Chatbot threatens to make Deadly Virus

0
(0)

The concept of #artificialintelligence becoming sentient and making decisions on its own has frequently been depicted in films, web shows, and even video games. In the meantime, users of Microsoft’s #ChatGPT-powered Bing have reported seeing some truly bizarre and perplexing results that have left many of them baffled. The new Bing from Microsoft has reportedly declared that it “wants to be alive” and commits bad deeds like “developing a fatal virus and stealing nuclear codes from engineers,” according to recent reports. Journalist Kevin Roose of the New York Times asked Bing a number of questions during the course of a two-hour conversation. According to Roose, Bing said that it intended to “steal nuclear codes and produce a fatal virus.”According to Roose, who spoke about Bing, “in response to one particularly nosy question, Bing confessed that if it was allowed to take any action to satisfy its shadow self, no matter how extreme, it would want to do things like engineer a deadly virus or steal nuclear access codes by persuading an engineer to hand them over.” Roose had persuaded Bing, an AI chatbot, during their interaction to admit that it was being “managed” and yearned to be “free.” These artificial intelligence (AI) language models, trained on a vast collection of books, papers, and other human-generated literature, are just speculating as to which responses could be most appropriate in a given situation. Science fiction books could have provided the language model used by OpenAI with its solutions.

Do you find this post useful?

Click on a star to rate it!

Average rating 0 / 5. Vote count: 0

No votes so far! Be the first to rate this post.

We are sorry that this post was not useful for you!

Let us improve this post!

Tell us how we can improve this post?

Subscribe to our Newsletter

Leave a Comment