How do algorithms influence user decisions?

In the past two days, articles exposing how Internet products blatantly or covertly collect user privacy have surfaced one after another. Whether this is because accumulated grievances have finally awakened users, or because self-media writers are bored and manufacturing hot topics themselves, it has to be admitted that in recent years Internet products have become increasingly aggressive in collecting user information. Beyond the improved collection capabilities brought by technological advances, the increasingly widespread use of AI recommendation algorithms is a major force driving all kinds of Internet products to go to such great lengths to collect user data.

What we are going to talk about today is how these recommendation algorithms, fed on user information (not just private information), may be quietly influencing our decisions in ways we cannot perceive.

Facebook once conducted a week-long experiment in which researchers took 689,003 Facebook users as test subjects and pushed content with specific emotional tendencies to them every day. Some users saw mostly positive, happy content, while others saw mostly negative, sad content.

When the experiment ended, the posting behavior of these unwitting users who were used as "guinea pigs" had changed accordingly: users who saw positive content every day were more likely to post positive messages, while those who saw negative content every day were more likely to post negative messages.

In other words, Facebook can manipulate users' emotions.

Perhaps you would object that in this experiment it was the researchers' human interference that made the "algorithm" recommend emotionally loaded content, and that the information platforms we use today will not actively steer our emotions without "researchers" intervening. True, our emotions may not be under anyone's control, but our cognition may be changing subtly.

Changes in cognition are more dangerous than changes in emotion. Emotions can be quickly adjusted and restored by a change in the external environment, but once cognition has been degraded, it rarely recovers to its original level on its own.

For a long time, the algorithm-driven news apps we use were criticized for recommending excessively vulgar content. At first, every platform insisted these were isolated cases, that their personalized algorithms made accurate recommendations based on each reader's preferences, and that seeing vulgar content was not the platform's doing (the implication being that the users themselves were vulgar). Only when more and more people raised the same complaint did the platforms change their tune and promise to "add manual review to improve the quality of content."

In fact, the increasingly vulgar recommendations are precisely an accurate reflection of the machine algorithm at work. It has discovered the weaknesses of human nature and found the "trick" to accurate recommendation: borderline pornography, vulgar jokes, and clickbait headlines really are more likely to attract users' attention. Every one of those clicks is an affirmation of the algorithm's choice, so in the absence of human intervention the algorithm will inevitably recommend similar content more and more often.
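To make the mechanism concrete, here is a minimal toy sketch of such a click-driven feedback loop. Everything in it is hypothetical (the categories, click rates, and scoring rule are invented, and real recommender systems are far more elaborate); it only shows how rewarding clicks, with no quality constraint, drifts recommendations toward whatever gets clicked most.

```python
import random
from collections import defaultdict

# Toy click-driven recommender: each category starts with the same
# chance of being shown; every click raises its observed click-through
# rate, so it gets shown even more often.
clicks = defaultdict(lambda: 1)       # smoothed click tallies
impressions = defaultdict(lambda: 2)  # smoothed impression tallies

def recommend(categories):
    """Pick a category weighted by its observed click-through rate."""
    weights = [clicks[c] / impressions[c] for c in categories]
    return random.choices(categories, weights=weights, k=1)[0]

def record_feedback(category, clicked):
    """Update the tallies after the user reacts."""
    impressions[category] += 1
    if clicked:
        clicks[category] += 1

# Hypothetical users click clickbait 60% of the time and serious
# reporting only 20% of the time; the loop amplifies that gap.
TRUE_CLICK_RATE = {"clickbait": 0.6, "serious reporting": 0.2}
categories = list(TRUE_CLICK_RATE)
for _ in range(10_000):
    shown = recommend(categories)
    record_feedback(shown, random.random() < TRUE_CLICK_RATE[shown])

for c in categories:
    share = impressions[c] / sum(impressions.values())
    print(f"{c}: shown {share:.0%} of the time")
```

With these made-up numbers, the clickbait category ends up filling roughly three quarters of all impressions, even though no one ever asked for it explicitly.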

When algorithms keep recommending more and more vulgar content, your cognition and taste will inevitably be affected. Yet we cannot simply say the machine algorithms are wrong, because they really do respond to readers' demands.

However, the ultimate victims are the readers themselves.

The platforms' starting point is certainly not to make users vulgar. They simply want to capture more of users' time in order to achieve their original purpose: advertising.

The cooperation between media and advertising has a long history. From print media to radio to television to today's Internet, advertising has always been the most important source of revenue for the media. It’s just that media advertising before the Internet did not have much impact on our shopping decisions (except for the sense of trust brought by endorsements from major media), but today’s Internet advertising is increasingly influencing our shopping behavior.

We used to be annoyed by the ads inserted in the middle of TV dramas, yet the ads interspersed in today's feeds are welcomed by a large share of people, because they seem to be exactly what you want.

But while you enjoy them, have you ever stopped to ask whether they really are what you want?

Think about it carefully: does the content before and after the ad create a shopping atmosphere for you? Did you happen to search for related information in the past two days? Did you put something in your shopping cart yesterday, hesitate, and not buy it? If so, why were you hesitant yesterday, yet after reading a few related articles today your desire to buy has grown so much stronger?

Last June I started learning machine learning. It is a fascinating subject whose core is mathematical probability and statistics. Machine learning essentially means giving a machine a large amount of data that humans have already classified and screened, letting it analyze the patterns in that data in order to fit a function, and then applying that function to classify new data that no human has processed.
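As a rough illustration of that fit-a-function idea (not any platform's actual pipeline; the tiny dataset, the labels, and the choice of scikit-learn are all my own assumptions), a classifier can be trained on a handful of hand-labeled headlines and then applied to an unlabeled one:

```python
# Toy supervised learning: fit a function from hand-labeled text to
# category labels, then apply it to text no human has labeled.
# The headlines and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "new fighter jet unveiled at air show",
    "chip factory opens in the suburbs",
    "missile test conducted offshore",
    "smartphone sales rise this quarter",
]
labels = ["military", "technology", "military", "technology"]

# Vectorize the text and fit a classifier -- the "function" learned
# from manually classified data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Apply the learned function to unseen, unlabeled data.
print(model.predict(["fighter jet test announced"]))  # likely ['military']
```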

A recommendation algorithm is one application of data classification. The machine learns from data classified in advance by humans, and once it has "mastered" the classification rules it attaches the corresponding labels to newly added data, such as the category topics "military", "technology", or "mother and baby" on a content platform. On an algorithmic content-distribution platform the machine has a second job: classifying and labeling users. Based on a user's profile information and behavior on the platform, it tags the user with labels similar to the article categories.
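The user-labeling side can be sketched in the same spirit. In this hypothetical version (the label names, the weighting scheme, and the `label_user` helper are all made up), a user's interest labels are simply derived from the labels of the articles they have read:

```python
from collections import Counter

# Hypothetical user labeling: derive a user's interest labels from the
# category labels of the articles they have read, mirroring how the
# articles themselves were labeled.
def label_user(read_history, top_n=2):
    """Return the user's strongest interest labels with relative weights."""
    counts = Counter(article["label"] for article in read_history)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.most_common(top_n)}

history = [
    {"title": "new fighter jet unveiled", "label": "military"},
    {"title": "missile test conducted",   "label": "military"},
    {"title": "chip factory opens",       "label": "technology"},
]
print(label_user(history))  # roughly {'military': 0.67, 'technology': 0.33}
```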

Once both content and users have been classified, the machine can match content and users that share the same labels and recommend accordingly. During distribution, the machine keeps learning from user feedback (likes, shares, dislikes, and so on). The more content there is and the richer the user feedback, the more accurate the recommendations become.
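Continuing the same toy model (the scoring rule and the feedback step are invented, and real systems use far richer signals), matching then amounts to scoring candidates against the user's label weights and nudging those weights with each piece of feedback:

```python
# Hypothetical matching step: score each candidate article by the
# weight of its label in the user's interest profile, then adjust the
# profile with explicit feedback.
user_profile = {"military": 0.6, "technology": 0.4}

def score(article):
    return user_profile.get(article["label"], 0.0)

def apply_feedback(article, liked, step=0.1):
    """A like strengthens the label's weight; a dislike weakens it."""
    label = article["label"]
    delta = step if liked else -step
    user_profile[label] = max(0.0, user_profile.get(label, 0.0) + delta)

candidates = [
    {"title": "radar upgrade announced", "label": "military"},
    {"title": "stroller buying guide",   "label": "mother and baby"},
]
best = max(candidates, key=score)
print(best["title"])            # the military article wins
apply_feedback(best, liked=True)
print(user_profile)             # 'military' weight rises to 0.7
```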

So you will find that the more you read on such platforms, the more the information you see is tailored to your taste.

By this recommendation logic, if content quality were guaranteed, we could in theory obtain information more efficiently, which would be a good thing. But as we said before, the platform's fundamental purpose is not to serve the public. Its starting point is to occupy enough of users' time so it can push advertisements, and those advertisements naturally aim to close as many transactions as possible.

It is difficult to gauge a user's "purchasing desire" accurately from behavior on the platform alone, so most platforms use various means to bring in external data to enrich the machine's understanding of you: your search history on search engines, your browsing history on shopping apps, even your input-method typing history.

Combined with this "off-site" information, the machine can label you more "accurately" and push better-targeted ads at you, increasing the chance of closing a transaction. The deal is facilitated by piling up a large amount of peripheral information that sways your judgment, and you are unaware of the whole process; you may even enjoy it.
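In the abstract, "enriching" a profile with off-site signals might look like the following sketch. Every source name, weight, and label here is invented; it only illustrates the merging idea:

```python
# Hypothetical profile enrichment: merge interest signals from several
# data sources into one weighted profile. Sources and weights invented.
on_platform = {"technology": 0.5, "military": 0.5}    # reading history
search = {"camera": 0.7, "technology": 0.3}           # search engine
shopping = {"camera": 0.9, "mother and baby": 0.1}    # shopping app

SOURCE_WEIGHTS = {"on_platform": 0.5, "search": 0.3, "shopping": 0.2}

def merge(profiles):
    """Weighted average of per-source interest profiles."""
    merged = {}
    for name, profile in profiles.items():
        w = SOURCE_WEIGHTS[name]
        for label, value in profile.items():
            merged[label] = merged.get(label, 0.0) + w * value
    return merged

profile = merge({"on_platform": on_platform,
                 "search": search,
                 "shopping": shopping})
print(sorted(profile.items(), key=lambda kv: -kv[1]))
# "camera" (0.39) now outranks every purely on-platform interest,
# which is exactly the kind of signal an advertiser pays for.
```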

Many people will say that since we enjoy the convenience, we should expect to pay something for it; others will say that collecting private data "without permission" is simply immoral and unethical. I do not want to debate the technical or ethical issues here; I just want to remind everyone:

When you are "surfing" inside a content product governed by an algorithm, keep in mind that your cognition may already have been changed, and that your current desires may be shaped by the algorithm.

Author: Xi Xin, authorized to be published by Qinggua Media.

Source: Xixin (ID: noyanjiu)

