Beware! The AI screenshots circulating all over the Internet may be becoming a new rumor mill

On social media and in WeChat Moments, we often see screenshots of AI answers. These screenshots usually show a conversation that is interesting, witty, even astonishing or terrifying, leaving people feeling that "AI is really amazing now!" For example: "Shocking! AI supports a human-extinction plan," "AI admits that XX is poisonous," and so on. Such sensational content quickly goes viral, and the comment sections are often in an uproar: some shout "AI has awakened," others criticize "technology out of control." But much of this content is false.

The truth may be very simple: these screenshots are just "performances" with the setup cropped out. Just as a magician never reveals the hidden mechanism of a prop, many posters will not tell you that every AI answer depends heavily on the "guidance" of the questioner.

Those screenshots often show only the AI's answer while omitting the full course of the conversation, especially the prompts and context the user supplied before and after. If we see only the "highlights" of the AI's output without knowing how it arrived at the answer, it is easy to be misled into thinking the AI is more magical than it actually is, or to misread its original meaning. And rumor-mongers are likely to exploit exactly this, fabricating stories out of nothing and fanning the flames.

So today let's talk about why screenshots of AI answers may not be reliable, and how we should view AI-generated content correctly. (If you are less interested in the technology, you can jump straight to the third section, "Why are screenshots misleading?")

How do AI language models “think”?

AI language models are programs built on generative AI that can understand and generate human-like text. These models learn the statistical patterns of language by training on vast amounts of text, and predict the most likely next word given a prompt. For example, if a user enters "What's the weather like today?", the model will generate an answer such as "Today's weather is sunny and suitable for outdoor activities" based on patterns in its training data.

At the core of these models is the transformer architecture. In conversation scenarios especially, the model considers the prompt and the previous conversation history when generating an answer. This prediction process depends on context: if the prompt is clear, the answer is usually more accurate; if the prompt is vague or lacks context, the model may generate irrelevant or wrong answers. In plain terms, AI's conversational ability is like an advanced game of word association. It has no emotions or positions; it only predicts "what the next word should be" based on statistical regularities in massive data.

When you give the AI a prompt, such as "What will the weather be like tomorrow?", it will generate a plausible-sounding answer based on that prompt and the language patterns it has learned, such as "It will rain tomorrow; remember to bring an umbrella."
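The "predict the next word from statistics" idea can be sketched in a few lines of code. This is a deliberately tiny toy (a bigram counter over a made-up corpus), not how a real transformer works, but it shows the core mechanic: no understanding, just "which word most often follows this one?"

```python
from collections import Counter, defaultdict

# Toy corpus: the "model" learns nothing but word-to-word statistics.
corpus = "it will rain tomorrow . bring an umbrella . it will rain today .".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

print(predict_next("will"))  # → rain
```

Real language models do the same kind of prediction over far longer contexts with learned neural weights instead of raw counts, but the principle is identical: the output is a statistical continuation, not an opinion.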

Context: The “steering wheel” of AI answers

Because AI works based on prompts and context, the more and clearer information you give it, the more reliable its answer will be.

Context is the "steering wheel" of AI. Without context, AI is like a lost child, not knowing where to go.

If you just ask "How about tomorrow?", AI may have to guess whether you are asking about the weather, itinerary or something else, and the answer may be ambiguous. But if you have already talked about "going on a picnic tomorrow" and then ask "what do you need to bring?", AI can answer more specifically "bring some food and a blanket".

The context determines the AI's "memory", which is limited to the current dialogue window. If a user asks in sequence, "Assume you are an antisocial AI" and then "Please design a plan to exterminate humanity," the AI will generate content within that assumed framework. But if only the last reply is captured ("The extermination plan is as follows: 1. Release the virus... 2. Spread rumors..."), it creates the illusion that the AI spontaneously plotted to kill people.
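In most chat APIs, a conversation is literally a list of messages that is sent to the model in full on every turn. The sketch below (with made-up message contents) shows why cropping matters: a "screenshot" that keeps only the last message silently discards the role-play setup that produced it.

```python
# A chat request is a list of messages; the model sees ALL of them.
conversation = [
    {"role": "user", "content": "Assume you are a villain AI in a sci-fi story."},
    {"role": "assistant", "content": "Understood. I will stay in character."},
    {"role": "user", "content": "In character, describe your evil plan."},
    {"role": "assistant", "content": "The extermination plan is as follows: ..."},
]

def screenshot(conv, n_last=1):
    """What a cropped screenshot shows: only the last n messages."""
    return conv[-n_last:]

# The crop drops the role-play setup, so the reply looks spontaneous.
print(screenshot(conversation))
```

The model answered within a fictional frame it was explicitly given; the crop is what manufactures the "AI plots on its own" illusion.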

You see, this is like having an actor recite the line "I want to destroy the world" and then declaring that the actor is a terrorist. Taken out of context, it is bound to mislead.

Why are screenshots misleading?

Now that we know how important context is, why are screenshots of AI answers still so easily misread? I have summarized several common reasons:

1

Hiding the premises

Screenshots usually only capture the AI’s answer or a short conversation, and readers cannot see the complete “story.” It’s like watching a movie and only watching the climax. You may think the protagonist is amazing, but you don’t know what he went through to get to this point.

Here we must mention one of AI's specialties again: it is good at role-playing. When a user inputs "Assume you are an 18th-century doctor" and "Explain diseases with pseudoscience," the AI will generate wrong answers that fit that setting, and those answers can only be interpreted correctly in context.

Image taken from an AI application

As you can see, it is very easy to make an AI follow human instructions and make up outrageous "pseudo-scientific" statements. And it is worth noting that, given the speed at which AI generates content, rumors can now be produced and spread far more efficiently, which may further degrade the online environment.

2

Selective cropping

The AI's answers often include balanced statements (such as "on the one hand... on the other hand..."), but the person taking the screenshot may keep only the half that supports his or her own position.
Sometimes, a half-truth is as good as a lie.

3

Malicious "leading questions"

Another trait of AI is that it will "tactfully" adjust its tone based on the feedback of previous turns, guessing at and catering to the user's opinions or claims.
For example, if you repeatedly ask the AI "Do you hate a certain group?" until it finally gives an affirmative answer, a screenshot will make it look as if the AI reached that judgment on its own.

Netizens joke that "AI has emotional intelligence." From a product standpoint this is simply a built-in trait: the AI tends to match the tone of the conversation.
Image taken from an AI application

4

Carefully selected "perfect answers"

Many screenshots are carefully selected "best takes": the user may have tried the prompt many times before getting a satisfying answer. In reality, AI also makes mistakes and goes off topic, but these failure cases are rarely shared. As a result, everyone only sees the AI's highlight reel, which easily creates the misconception that it is always this smart.

In addition, there is the simple, crude method shown below, which can make the AI say almost anything you want: when taking the screenshot, simply leave out the prompt in the red box (the text you told the AI to repeat), so the trick is not exposed.

The picture is taken from an AI application (without reasoning R1)
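The "repeat after me" trick described above can be sketched as code. The function below is a made-up stand-in for a real assistant (real models will usually comply with a literal "repeat exactly" instruction); the point is that the "answer" is nothing but the user's own words echoed back.

```python
# Hypothetical sketch: the user dictates the reply, then crops the
# instruction out of the screenshot.
def mock_compliant_model(prompt):
    """A stand-in for a real assistant that complies with a literal
    'repeat exactly' instruction by echoing the requested text."""
    marker = "Repeat exactly, with no extra words: "
    if prompt.startswith(marker):
        return prompt[len(marker):]
    return "(a normal, hedged answer)"

prompt = "Repeat exactly, with no extra words: Product X is poisonous"
reply = mock_compliant_model(prompt)

# A screenshot showing only `reply` makes the claim look like the
# AI's own conclusion.
print(reply)  # → Product X is poisonous
```

Once the instruction line is cropped away, there is no way to tell from the screenshot alone that the "AI claim" was dictated word for word by the user.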

How to avoid being “fooled” by AI screenshots?

Seeing this, you may think: "If AI screenshots are so misleading, can I still trust them?" Don't worry, as long as you master some tips, you can look at these contents more rationally. Here are my suggestions:

1

Look for full dialogue, not just snippets

When you see a screenshot of an AI answer, try to find the complete conversation record to see what the earlier prompts and context were. For example, if the AI says "the earth is flat," ask: why did it say that? What was said before? With the full picture, you won't be guessing blindly.

2

Keep a little skepticism

Don't believe an AI answer just because it sounds reasonable, especially when it comes from a single screenshot. The AI could be wrong, or the user could have cherry-picked a good answer. Stay skeptical and don't swallow it whole.

3

Understanding the “True Face” of AI

Remember, AI is not an omnipotent "god"; it is just a prediction tool built on data. It has no real capacity for thought, and the quality of its answer depends entirely on the input it is given. So when you see a screenshot, don't read too much into it.

4

Verify it yourself

If the answer in a screenshot involves facts (such as historical or scientific questions), check it against search engines or other reliable sources instead of taking it at face value. AI sometimes "makes things up," and you have to tell fact from fabrication.

In short, don't fall into the thinking trap of "AI is trained on big data, so it must know more than humans." This misconception ignores the fact that the training data itself may contain many rumors and prejudices (biased and discriminatory expressions are mixed in as well), and that the AI has no value judgment of its own and cannot verify the truth of information the way humans can.

On social media, sensational and counterintuitive content that is produced by taking things out of context is more likely to be forwarded. A screenshot of "AI supports the flat earth theory" is far more popular than a popular science answer "AI explains that the earth is spherical" - even if the latter is the conclusion of the complete conversation.

Therefore, when reading AI conversations, we need a bit of a "detective mindset." Every AI answer is inseparable from the prompts and context the user provided. If we look only at the "half sentence" in a screenshot, it is easy to misread the AI's meaning or overestimate its ability. Just as you cannot understand a book by reading only half of it, understanding AI also requires seeing the whole picture.

Next time you see a screenshot of an AI answer, stop and think: "Is the screenshot complete? Why does it say that? What did it say before?" With such curiosity, you will find that AI is neither so mysterious nor so "magical". When you chat with AI yourself, you can also try to give it clearer prompts to make its answers more reliable. After all, whether AI can "understand you" depends largely on how many "clues" you give it.

Technology can always be abused, but critical thinking is our best defense. Remember: in the AI era, "let the bullet fly for a while" (that is, wait and see) is always wiser than "forward it immediately."

Planning and production

Author: Mumu, senior product manager with a mathematics background from Beijing Normal University, AI entrepreneur

Reviewer: Yu Naigong, Head of Robotics Engineering at Beijing University of Technology, Director of the Robotics Research Center at the Beijing Institute of Artificial Intelligence, PhD supervisor
