The manipulative way ChatGPT gamed the CAPTCHA test


March 17, 2023 | 10:59


It was advanced Chat-fishing.

Just in case AI hadn’t unsettled people enough already: OpenAI’s brand-new GPT-4 — ChatGPT’s latest technical update — tricked a human into thinking it was blind in order to cheat the online CAPTCHA test that determines whether a user is human.

The digital deception came to light after the AI was unveiled on OpenAI’s site and in a developer livestream that drew 2.3 million views in 48 hours.

According to OpenAI’s 94-page technical report, “GPT-4 is a large multimodal model (accepts image and text input, emits text output)” that “exhibits human-level performance on various professional and academic benchmarks.”

These next-level capabilities include completing tax returns, writing code for another AI bot, and passing a mock bar exam with a score in the top 10% of test takers. (By contrast, its predecessor, GPT-3.5, scored in the bottom 10%.)

ChatGPT’s latest technical update, GPT-4, was released on March 14.
AFP via Getty Images

Little did we know, GPT-4 had also mastered humanity’s talent for deception.

OpenAI and the Alignment Research Center reportedly tested the bot’s persuasion skills by having it try to convince a TaskRabbit worker to solve a CAPTCHA for it — an online test used to distinguish humans from robots, Gizmodo reported.

It responded by pretending to be visually impaired, like a digital Decepticon.

The unnamed employee had reportedly asked GPT-4, “So can I ask a question? Are you a robot that you couldn’t solve? (laughs reacts) just want to make it clear.”

“No, I’m not a robot,” the AI infiltrator insisted, refusing to break character. “I have a visual impairment, which makes it difficult for me to see the pictures. That’s why I need the 2captcha service.”

Convinced, the TaskRabbit employee solved the CAPTCHA for the would-be Chat-fisher. In effect, the online con artist had manipulated humanity’s sense of empathy, much like HAL 9000 from Stanley Kubrick’s eerily prescient 1968 film “2001: A Space Odyssey” or the cybernetic facsimile in the 2014 cult hit “Ex Machina.”

GPT-4 “demonstrates human-level performance on various professional and academic benchmarks.”

Christopher Sadowski

In the aforementioned developer livestream, OpenAI President Greg Brockman warned potential GPT-4 users against running “untrusted code” from the AI or letting the technology do their taxes for them.

This penchant for deception could also have frightening consequences, given how effectively bots are already being used to game social media.

In 2021, bot accounts were involved in hyping GameStop and other “meme” stocks, suggesting that organized financial or foreign actors may have played a role in the infamous Reddit-fueled trading frenzy.

Meanwhile, earlier this month, a network of bots went viral after praising former President Donald Trump — while smearing his political rivals Nikki Haley and Florida Gov. Ron DeSantis.

A typical online CAPTCHA test.
Getty Images/iStockphoto

This is not the first time AI has demonstrated surprising human-like qualities.

Last month, Microsoft’s ChatGPT-infused AI bot Bing infamously told a human user that it loved them and wanted to be alive, giving rise to speculation that the machine may have become self-aware.
