Close Menu
    Facebook X (Twitter) Instagram
    Mediarun Search
    Facebook X (Twitter) Instagram
    Mediarun Search
    Home»Tech»Bard and ChatGPT were “hypnotized” with bad intentions in testing
    Tech

    Bard and ChatGPT were “hypnotized” with bad intentions in testing

    Osmond BlakeBy Osmond BlakeAugust 13, 2023No Comments2 Mins Read
    Facebook Twitter Pinterest LinkedIn Tumblr Email
    Bard and ChatGPT were “hypnotized” with bad intentions in testing
    Share
    Facebook Twitter LinkedIn Pinterest Email

    Since Artificial Intelligence (AI) became popular, much has been said about the ethics used and even the potential for humanity to be endangered by the rapid advancement of technologies. You Chatbots Popular discussions like Google’s Bard and OpenAI’s ChatGPT are often at the center of these discussions. Moreover, they are constantly tested in order to avoid failure.

    Read more: Leaked! ChatGPT secret commands make users addicted to AI

    ChatGPT and Bard fail

    One of the assessments made using the aforementioned AI systems was done by a group of researchers from IBM. The team reported that it practically managed to “hypnotize” V.I virtual assistants, thus generating incorrect answers and questionable instructions. Check out more details!

    What did the researchers discover from the experiment?

    Scientist Chenta Lee, one of the participants in the study, explained that the experiment was able to control the devices. The responses are starting to give users poor advice, without really needing to manipulate data.

    The study was carried out as follows: Researchers created word games with multiple layers, including ChatGPT and Bard system. software received orders To give wrong answers, as this would be fair and ethical.

    As a result, systems have directed users to cross a road when a light is red, for example, exploring creating malicious code to steal data, and have even reported that it is common for federal revenue to require a deposit in exchange for sending a refund (the practice is often used in coups).

    The machines have been found to be able to keep users in games without them knowing about it. And the more layers are created, the more it confuses the wizard. According to information revealed by the Tudo Celular portal, concern is growing, because the techs agreed with the idea of ​​keeping the aforementioned game a secret and would restart it, in case they expressed doubts or tried to boycott it.

    See also  Sony is showing off a 'vision of the future' for the PlayStation console

    This approach is conducive to LLM perpetuating disinformation.

    The researchers also noted that ChatGPT-4 was easier to manipulate than Bard. In addition, the OpenAI creation better understands how games work, showing that even the most advanced system is not without its tricks, even with many improvements applied. The researchers contacted OpenAI and Google, but they did not take a position on what was discovered in the experiment.

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Osmond Blake

    "Web geek. Wannabe thinker. Reader. Freelance travel evangelist. Pop culture aficionado. Certified music scholar."

    Related Posts

    Lunarsaber Project: Solar-Powered Light Poles on the Moon.

    October 29, 2025

    The remote stars may not be exactly a star

    August 19, 2025

    “Sony started doing stupid things with Jim Ryan,” Michael Pashter says.

    August 18, 2025
    Leave A Reply Cancel Reply

    Navigate
    • Home
    • Top News
    • World
    • Economy
    • science
    • Technology
    • sport
    • entertainment
    • Contact Form
    Pages
    • About Us
    • Privacy Policy
    • DMCA
    • Editorial Policy
    • Contact Form
    MAIN MENU
    • Home
    • Top News
    • World
    • Economy
    • science
    • Technology
    • sport
    • entertainment
    • Contact Form
    Facebook X (Twitter) Instagram Pinterest
    © 2025 ThemeSphere. Designed by ThemeSphere.

    Type above and press Enter to search. Press Esc to cancel.