User behavior data is the foundation for understanding and recognizing users in the Internet era. Operations work is user-centric, and from the perspective of user analysis there are two main models.

The first is the well-known "page analysis model," represented in China mainly by Umeng and Baidu Statistics. It revolves around page views and page jumps, collecting data at the page level to support analysis. This creates a problem: we cannot know why a user stays on a page, and we cannot accurately reconstruct what the user actually saw and did. The page analysis model is therefore neither detailed nor flexible enough.

The second is the behavioral event analysis model, built around user actions. Compared with page analysis, it is more comprehensive and flexible in the data it can collect.

In operations we do many things: organizing activities, producing content, writing copy around business goals such as user acquisition and activation, and designing targeted campaigns to push to users; we also pay attention to many operational methods and design different strategies from a human perspective. In short, we do all of this hoping that users will recognize the product and the value of our operations work. But here is the question: do we actually know whether the results of all this work match our expectations?

To answer that, first consider the data normally used to evaluate operational effectiveness. We usually look at cumulative users, active users, retention, session length, and other common indicators. You might say: isn't the whole reason we do all these things precisely to improve these numbers?
Improvement in these metrics suggests the business is thriving, so yes, these indicators are correct and valuable. But the real question is: before the data shows the pattern you expect, do users genuinely recognize what you have done? A simple example: an app push notification may bring in a wave of active users. Do those users feel the push was timely and relevant, and do they find what they want after entering the product? Or do they feel upset and harassed, click in for a quick look, and then uninstall the app?

Our focus, then, should be to spend real effort watching the process during operations, not just the outcome. We need to reach our data targets through the correct path. Whether the framework is OKRs or plain goals, the key point is achieving them the right way. So how do we know whether what we are doing is on the right track? We evaluate whether our path is correct and meets expectations through user behaviors on the page, such as liking, closing, complaining, or leaving the app.

Let me share a real case. The background is a content product. One day their content operations director came to me urgently: they had been monitoring an indicator called the reading conversion rate of the homepage headlines, and recently it had declined. They had tried many remedies, and nothing improved. Yet from the perspective of overall product health, users were growing and retention had even ticked up slightly, so by that measure the user base looked healthy. The team was stuck on the contradiction: they could not make sense of the data side by side. Users are active, so why does the reading conversion rate keep falling? For a content product, if users are not viewing or reading the content, what are they doing?
At this point everyone had speculations: maybe some lottery activity had diverted people to the exchange area; maybe the headlines were not attractive enough; maybe arguments in the community groups left everyone no time to read the headlines. But no one knew the real reason, and therefore no one could make targeted adjustments. The question is: what data and analytical capability do we need?

I spent about an hour and a half analyzing at the client's site, using their computer. The logic went roughly like this. First, identify the core of the problem. From the description above, the core problem was that the reading rate of the homepage headlines had decreased. So I started with the basic statistics of the article-reading events, to check whether headline quality was really the issue. I found that the average reading volume per reader had not decreased, and it still rose and fell in line with the timing of earlier content adjustments. That suggested the content production method and content quality were fine, and the content was still getting a response from users. So what happened to the users who did not read the headlines?

Let me explain in more detail why I drew that conclusion. What is the average reading volume per reader? It is computed only over users who actually read articles: the total number of article reads divided by the number of users who read at least one article. If that average has not dropped, it proves that the people who do read are still reading seriously; the content still attracts them. Then if the reading conversion rate fell, where is the only place the decline can come from?
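The reasoning above can be sketched numerically. The figures below are hypothetical, chosen only to show how the conversion rate can fall while the per-reader average stays flat: the reader count and total reads are unchanged, but the pool of active users grows.

```python
# Decomposition of the two metrics described above (all numbers hypothetical):
#   conversion rate        = readers / active users
#   avg reads per reader   = total reads / readers

def headline_metrics(active_users, readers, total_reads):
    """Return (conversion_rate, avg_reads_per_reader)."""
    return readers / active_users, total_reads / readers

# Week 1: 10,000 active users, 4,000 read at least one headline, 12,000 reads.
before = headline_metrics(10_000, 4_000, 12_000)

# Week 2: growth adds 5,000 active users who never read headlines;
# readers and total reads are unchanged.
after = headline_metrics(15_000, 4_000, 12_000)

print(before)  # (0.4, 3.0)
print(after)   # conversion rate drops to ~0.267, per-reader average stays 3.0
```

Since the numerator of the per-reader average excludes non-readers entirely, only an increase in non-reading users can drag the conversion rate down while that average holds steady.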
It must be that the number of users who never read the headlines had increased, dragging down the overall conversion rate. So I narrowed the problem to the users who did not read the headlines. Using the user-grouping feature, I took the headline-reading event as the condition and split users into two groups: a "Users Who Have Read Headlines" group, defined by triggering the reading event at least once, and a "Users Who Have Not Read Headlines" group, defined by triggering it zero times. Then I compared the two groups side by side.

Through user profiles I compared the two groups on sign-up time, region, channel source, app version, and so on, including key in-product behaviors such as likes, comments, and private messages. The users who had not read the headlines turned out to be inactive across the board: they rarely liked articles or sent private messages. Finally I traced the problem to a new acquisition channel, which I will call channel A; something seemed wrong with it. Most of the users who had not read the headlines in the past seven days came from this channel.

From this I formed a hypothesis: is the user quality of channel A the problem? I grouped channel A's users from the past seven days by the same rules, then compared their headline conversion rate against the average across all users. Channel A's figure was far lower, supporting the hypothesis: channel A's traffic quality really was relatively low. At this point someone may ask: their retention and activity look fine, so why say their quality is low?
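The segmentation step above can be sketched as follows. This is a minimal illustration, not the analytics tool's actual API; the field names and the tiny in-memory dataset are invented for the example.

```python
# Split users by whether they triggered the "read headline" event,
# then compare the channel mix of each segment (data is hypothetical).
from collections import Counter

users = [
    {"id": 1, "channel": "organic",   "headline_reads": 3},
    {"id": 2, "channel": "organic",   "headline_reads": 0},
    {"id": 3, "channel": "channel_a", "headline_reads": 0},
    {"id": 4, "channel": "channel_a", "headline_reads": 0},
    {"id": 5, "channel": "channel_a", "headline_reads": 1},
]

# The two segments from the case study: triggered the event >= 1 time vs. 0 times.
readers     = [u for u in users if u["headline_reads"] >= 1]
non_readers = [u for u in users if u["headline_reads"] == 0]

# Profile comparison: where do the non-readers come from?
non_reader_channels = Counter(u["channel"] for u in non_readers)
print(non_reader_channels)  # non-readers cluster in one channel
```

When most non-readers concentrate in a single channel, the next step is exactly what the case describes: compute that channel's conversion rate separately and compare it against the all-user average.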
Because we have a goal in mind when building any product, and these users were not engaging with what the product exists to do. Every company has run into this kind of thing to a greater or lesser degree, and has usually been passive about it. The team was frustrated when they saw the result: after all that trouble they had assumed the problem was in the content, yet it turned out to lie in a channel. Our reflection was that evaluating a channel only by retention and dwell time is unreliable; in the future the evaluation should also require users to trigger key behaviors.

Notice that I used grouping throughout the whole process. Grouping embodies a very basic analytical method: comparison. Analysis based on comparison is simple, but it is absolutely effective.

To sum up, the essence of data is scene restoration, and that is the greatest value behind it. We can never truly accompany our users, and it is difficult to observe their behavior moment by moment. Frankly, even if you sat beside a user and watched them use the product, the behavior you saw might not be their real behavior. Data gives us a bird's-eye view: it records the situation of all users and presents the trends through statistical methods. As for data requirements, the key is to stay user-centric; the most important capability is being able to split the data flexibly and analyze it. If operators can obtain data that meets these needs, I believe it will improve the efficiency of every operations colleague and let us focus our energy on the most valuable things.

Author: Zhuge io
Source: Zhugeio