How should operations perform data monitoring?

In the previous article we built the data indicators we need, which completed the first step: data planning. As a refresher, we said that in the operations process we must first know:

  1. What data do we collect?
  2. What does each piece of data tell us about our operations?

Now that we have the data indicators, this article focuses on what problems they represent, what their usage scenarios are, and how we should analyze and use them.

Although the data may seem extremely numerous and cumbersome to use in day-to-day work, in data operations the usage scenarios for data fall into just three categories: monitoring, prediction, and detection. These are the three essential techniques of data usage.

Let's discuss these three techniques one by one.

Before using data, however, there is one more necessary step: obtaining data. As introduced in the previous article, data is mainly obtained through event tracking (embedding tracking points). Generally there are two approaches: self-development and third-party tools.

If we develop it ourselves, we need to hand the planned data requirements to the product or technical team. Readers of the previous article will know that we should give them raw-data requirements as much as possible: ask them to add tracking code to the corresponding pages and add the functions we need to the backend management system, so that we finally obtain the processed data we want. The more precise the raw data requirements we provide, the more efficiently the product and engineering teams can work.

If we use a third-party tool (such as Baidu Analytics, Google Analytics, Umeng, etc.), we only need to put the code snippet provided by the third party into our page code, and we can use the tool's backend directly without developing anything ourselves. There are also some analysis tools on the market that do not require manual tracking points, which are worth trying as well.
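To make the idea of a tracking point concrete, here is a minimal, hypothetical sketch in Python of what one tracked event might look like when it is sent to a collection endpoint. The field names and the URL are illustrative assumptions, not any real tool's API; a third-party tool would do the equivalent inside the snippet it asks us to paste into the page.

```python
import json
import time
import urllib.request

# A hypothetical raw event produced by one tracking point on the registration page.
event = {
    "user_id": "u_10086",        # anonymous or logged-in user identifier
    "event": "click_register",   # the action we decided to track in the data plan
    "page": "/register",         # where it happened
    "channel": "channel_A",      # acquisition channel, useful for later comparison
    "timestamp": int(time.time())
}

# Send it to our own (placeholder) collection endpoint.
req = urllib.request.Request(
    "https://example.com/collect",          # placeholder URL, not a real service
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```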

However, everyone needs to understand that tools are just tools: they only make our data operations more efficient. What we must grasp first is the logic behind data operations. Once the logic is clear, even a pen and a piece of paper can play a big role.

Now let's continue with our three techniques:

  1. What is monitoring? Monitoring is tracking a particular metric in real time over a long period so that we always know how it is changing.
  2. What is prediction? Prediction is making good use of current data and applying reasonable methods to estimate its future.
  3. What is detection? Detection is using reasonable means to judge whether a person, a product, or an event is good or bad.

Therefore, monitoring looks at the past and the present; prediction looks at the future; and detection looks at whether things are good or bad.

1. Monitoring

Once the preliminary data planning work is done, monitoring becomes easy. First, we select the data we want to monitor. Here we take the number of users as an example:

We can see real-time data:

We can also compare the data over a period of time:

We can also look at the distribution of users:

Generally speaking, whether it is a backend you developed yourself or a third-party analysis tool, there will be various charts to display the data. In fact, it does not matter much which chart is shown; the most important thing is to know why you are looking at it and to be able to interpret it.

In actual work, the data we monitor most often are intuitive metrics such as the number of users, daily active users, number of orders, and transaction amount. Putting all of these on a single page in chart form gives us our dashboard, which is also what the boss most likes to see :). Similar to the picture below:
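As a minimal sketch of how such dashboard metrics could be derived from a raw event log, here is a pandas example with made-up data; the column names (user_id, event, amount, timestamp) are assumptions for illustration.

```python
import pandas as pd

# Hypothetical raw event log; in practice this comes from the tracking points above.
events = pd.DataFrame({
    "user_id":  ["u1", "u2", "u1", "u3", "u2", "u1"],
    "event":    ["login", "login", "order", "login", "order", "login"],
    "amount":   [0, 0, 120.0, 0, 89.5, 0],
    "timestamp": pd.to_datetime([
        "2024-05-01 09:00", "2024-05-01 10:30", "2024-05-01 11:00",
        "2024-05-02 08:15", "2024-05-02 12:40", "2024-05-02 19:05",
    ]),
})
events["date"] = events["timestamp"].dt.date

# Daily active users: distinct users with any event per day.
dau = events.groupby("date")["user_id"].nunique()

# Daily order count and transaction amount.
orders = events[events["event"] == "order"].groupby("date").agg(
    order_count=("event", "size"),
    gmv=("amount", "sum"),
)

dashboard = pd.concat([dau.rename("dau"), orders], axis=1).fillna(0)
print(dashboard)
```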

2. Prediction

Prediction is also very important in our operations, for example when you make a work plan, conduct a feasibility analysis for an activity, set performance targets, or meet with investors. The simplest way to predict is to plot a trend chart of the data over time, as follows:

Then we can read off an estimate at the future time point we want to predict.
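A minimal sketch of this "read off the trend line" estimate, assuming a few months of hypothetical user counts and a simple linear fit (real data may of course call for a different curve):

```python
import numpy as np

# Hypothetical month-end user totals for the last six months.
months = np.array([1, 2, 3, 4, 5, 6])
users  = np.array([10500, 12200, 13800, 15100, 16900, 18400])

# Fit a straight trend line and extrapolate to month 7.
slope, intercept = np.polyfit(months, users, deg=1)
forecast_month = 7
estimate = slope * forecast_month + intercept
print(f"Estimated users at month {forecast_month}: {estimate:.0f}")
```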

Of course, if conditions permit, we can be more precise. We can achieve this through four steps: dimension splitting -> sampling -> estimation -> superposition. For example, suppose we want to predict next month's user growth (a numeric sketch follows the list below):

  1. User growth can be split into natural growth and event-driven growth. Event-driven growth refers to the additional users we obtain through temporary activities such as campaigns.
  2. Then we sample the natural growth: take the natural growth of the past few months, or of the same month in previous years, as samples, and take the average, or remove the highest and lowest values before averaging.
  3. At the same time, we sample the event-driven growth: take the data of past activities as samples to get a per-activity value, and then, combined with the activities we plan to run next month, estimate next month's event-driven growth.
  4. Finally, adding the two dimensions together gives us the predicted user growth for next month.
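Here is a minimal sketch of the four steps with made-up numbers; the trimmed average for natural growth and the per-activity estimate for event-driven growth are both assumptions for illustration, not the only reasonable choices.

```python
# Step 1: split the dimension into natural growth and event-driven growth.

# Step 2: sample natural growth from the past few months, drop the highest
# and lowest values, and average the rest.
natural_samples = [4200, 3900, 5100, 4400, 4800, 6100]
trimmed = sorted(natural_samples)[1:-1]
natural_estimate = sum(trimmed) / len(trimmed)

# Step 3: sample past activities to get an average uplift per activity,
# then multiply by the number of activities planned for next month.
past_activity_uplift = [1500, 2100, 1800]
per_activity = sum(past_activity_uplift) / len(past_activity_uplift)
planned_activities = 2
event_estimate = per_activity * planned_activities

# Step 4: superimpose the two dimensions.
predicted_growth = natural_estimate + event_estimate
print(f"Predicted user growth next month: {predicted_growth:.0f}")
```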

This simple four-step prediction method is often used in our work and is a skill every operator must master. Of course, if more accurate big-data mining and prediction are needed, we must rely on more advanced technologies such as artificial intelligence and machine learning.

The above covers how to use past and present data to predict future data. However, there is another very important prediction scenario: how to use past and present data to predict future user behavior. This is what we usually call user behavior analysis. We can approach it in the following ways (a small modeling sketch follows the list):

(1) Find the attributes of users who frequently perform this behavior. For example, if we want to predict whether a certain user will buy our product A, we first sort out the data of all users who have bought it before, such as gender, age, region, occupation, and so on.

(2) Find the preceding behaviors of this behavior. For example, before a user purchases product A on Taobao, there are several preceding actions: time spent on the product page, whether it was added to the shopping cart, whether it was favorited, whether the user consulted customer service, and so on.

(3) Find the related behaviors of this behavior. For example, if A is tea and B is a tea set, the act of purchasing product A will be associated with the act of purchasing B, so whether B has been purchased is also data we can use for prediction.

(4) Then turn the above user attributes, whether each behavior was performed, how many times it was performed, and so on into variables, and record the data for these variables. By combining a user's actual data with these variables, we can predict the probability that the user will purchase product A.
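A minimal sketch of step (4), assuming scikit-learn is available and using a handful of made-up variables (time on the product page, added to cart, bought the related product B, age) to estimate the probability of buying product A:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical users: [time_on_page_sec, added_to_cart, bought_B, age]
X = np.array([
    [300, 1, 1, 28],
    [ 20, 0, 0, 45],
    [180, 1, 0, 33],
    [ 60, 0, 1, 25],
    [240, 1, 1, 30],
    [ 15, 0, 0, 52],
])
# Whether each of them eventually bought product A.
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression()
model.fit(X, y)

# Predict the purchase probability for a new user who spent 200s on the page,
# added the item to the cart, has not bought B, and is 27 years old.
new_user = np.array([[200, 1, 0, 27]])
prob = model.predict_proba(new_user)[0, 1]
print(f"Estimated probability of buying product A: {prob:.2f}")
```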

This is actually very common in our daily lives. For example, based on our search, browsing, and inquiry behavior, Baidu pushes targeted advertising, Toutiao pushes news topics you like, and Tmall and Taobao push items you are interested in. I believe that as technology advances, data will predict user behavior more and more accurately.

You may wonder what to do when there is no such technology to support our actual work, especially in some traditional enterprises. It is not a problem: we simply set fewer of the variables mentioned above and focus on the core ones. The accuracy will not be as high, but the approach still works.

The above is from the perspective of micro data. What about predicting user behavior from a macro perspective? In fact, everything in the world seeks balance: things wax and wane together, and they also nourish and constrain each other. We spend more time shopping online and less time shopping in stores; more time driving and less time walking; less time working and more time on entertainment; less time on the PC and more time on our phones... This kind of macro-level increase in a behavior is generally what our Internet industry calls a "trend".

Similarly, in our operations, as long as we do a good job of acquisition, retention behaviors will increase; if retention data is good, active behaviors will increase; if activity data is good, conversion behaviors will increase; if conversion data is good and we have money, behaviors that improve product quality will increase; and if product quality is good, new-user acquisition will increase. This is essentially the "Yin and Yang" and "Five Elements" often spoken of in the Book of Changes.

3. Detection

Detection is probably the most commonly used scenario in our operations. As mentioned above, detection tells us what is good and what is bad.

So what should we detect during operations? Do you remember what we said in the structure article? The basic structure of operations consists of only three things: people (users), goods (products), and scenarios (the ways of display or transaction). These are what we want to detect. (See "Structure | How to Become an Operations Expert (I): The Basic Structure of Operations".)

As we all know, we all live in a three-dimensional world, and adding the time dimension makes it a four-dimensional space .

Similarly, in our operating world, users, products, and display or transaction scenarios form a three-dimensional space;

If we look at just one of these, it’s a two-dimensional space;

If we further split one of the items and only look at a certain point in it, it is a one-dimensional space;

Add time to everything and you get four-dimensional space.

There are two best methods for detecting whether the data is good or bad: same-dimension comparison and lower-dimension splitting.

1. Same-dimension comparison

Same-dimension comparison means that when we want to evaluate an object, we keep the other factors at the same level as much as possible, and make our judgment only from the differences in the final results produced by the differences in that object.

For example, in actual work we often encounter situations like these:

  • After creating content A and content B, we want to know which one is more effective.
  • After running two activities, A and B, we want to know which one got a stronger user response.
  • We have found two channels, A and B, and we want to know which one brings in higher-quality users.
  • There is a competitor A, and we want to know its advantages and disadvantages compared with ours.

For the above comparisons, we generally use the trend-line comparison method and the left-right bar comparison method. Let's take channel detection as an example (a plotting sketch follows the steps):

1) First, we obtain the retention numbers of channels A and B over the same period of time.

2) Based on the above data, we draw the following trend-line comparison chart.

3) From the trend chart, we can see that the quality of channel B is slightly better than that of channel A during this period.
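A minimal sketch of such a trend-line comparison, assuming hypothetical day-N retention rates for the two channels and matplotlib for plotting:

```python
import matplotlib.pyplot as plt

days = [1, 2, 3, 4, 5, 6, 7]
# Hypothetical retention rates (%) of users acquired from each channel.
channel_a = [40, 28, 22, 18, 15, 13, 12]
channel_b = [45, 33, 27, 24, 21, 19, 18]

plt.plot(days, channel_a, marker="o", label="Channel A")
plt.plot(days, channel_b, marker="o", label="Channel B")
plt.xlabel("Days since acquisition")
plt.ylabel("Retention rate (%)")
plt.title("Same-period retention: Channel A vs Channel B")
plt.legend()
plt.show()
```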

Let's take another example: competitive product analysis.

The difficulty of competitive analysis lies not in the analysis itself but in acquiring competitor data; internal operational data in particular is basically impossible to obtain. Usually we collect the external data that competitors expose in the market and make a left-right bar comparison chart for analysis.

For example, through data collection we obtained the following data for competitor A over the past three months: advertising placed on 20 channels, including 5 paid channels; 50 WeChat posts published, with about 80,000 total views; 4 press releases; 3 events; 2 new paid products launched; 1 old product taken off the shelves; 7 App iterations with 4 new features added; and so on.

Market data like the above can generally be obtained through third-party public-opinion monitoring tools, while product data requires us to keep paying attention in our own work. We then drew the following chart together with our own data:

Through this left-right bar comparison chart, we can see at a glance the gaps, advantages, and disadvantages between us and our competitor.
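A minimal sketch of a left-right bar comparison built from the competitor figures above and hypothetical figures for our own side (our numbers here are made up for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

metrics = ["Ad channels", "Paid channels", "WeChat posts",
           "Press releases", "Events", "App iterations"]
competitor_a = [20, 5, 50, 4, 3, 7]
ours         = [12, 3, 65, 2, 5, 9]   # hypothetical figures for our side

y = np.arange(len(metrics))
# Plot the competitor to the left of zero and us to the right, so the gaps
# on each metric can be read at a glance.
plt.barh(y, [-v for v in competitor_a], color="salmon", label="Competitor A")
plt.barh(y, ours, color="steelblue", label="Us")
plt.yticks(y, metrics)
plt.xlabel("Count (competitor to the left, us to the right)")
plt.axvline(0, color="black", linewidth=0.8)
plt.legend()
plt.show()
```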

2. Lower-dimension splitting

This analysis method is generally used when we have nothing to compare against: we only know that something is good or bad overall, but we are not clear about the specific reasons. In this case we need to keep splitting the dimensions downward until we find the real cause, and the funnel analysis method is generally the tool for the job.

For example, we ran a new-user acquisition activity A whose purpose was to attract new users to download the App and register. However, even though the channel was very stable, the activity's acquisition results were not as good as expected. We now need to find a way to detect why.

Readers of our earlier structure article should still remember the process structure of an activity. Based on the activity's process structure and the product pages, we selected the following data nodes:

  • Activity: pre-launch, warm-up, official period, wind-down, post-activity
  • Product: guide page, registration page, registration-success page

After we collected the data, we got the following funnel analysis chart:

We found that during the activity the early stages performed very well, but there was a significant drop during the wind-down period, with a download conversion rate of only 15%; and in the registration flow, the conversion rate of the guide page was not ideal, at only 50%. Therefore, from these two funnel charts we can see that the problems most likely lie in the wind-down portion of the activity and in the user registration guide page.
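A minimal sketch of how the funnel conversion rates could be computed, using hypothetical user counts at each node of the registration funnel:

```python
# Hypothetical user counts at each node of the registration funnel.
funnel = [
    ("Guide page",                2000),
    ("Registration page",         1000),   # 50% of the guide page, the weak step
    ("Registration success page",  800),
]

# Conversion rate of each step relative to the previous node.
for (prev_name, prev_count), (name, count) in zip(funnel, funnel[1:]):
    rate = count / prev_count * 100
    print(f"{prev_name} -> {name}: {rate:.0f}% conversion")
```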

The above examples are relatively simple, and real work may be much more complicated, but as long as we master the usage scenarios and analysis methods, the work is essentially the same, just more refined.

At this point, we have finished discussing detection. From the four-dimensional space of data-operations detection, we can draw the following conclusions:

(1) The lower the dimension, the easier it is to detect

It is like your boss asking you to "find out why we didn't make money this year" (four-dimensional space) versus "find out why no one clicks this button" (one-dimensional space). The difficulty of these two tasks is completely different, which also shows that when planning data, the further down you split the dimensions, the more it will help your future work.

(2) High-dimensional detection requires splitting into low-dimensional ones

If you cannot find a same-dimension object to compare against, you can only split into lower dimensions.

(3) The detection method is to split downward first, then compare side by side

Once you have split down to a benchmark object of the same dimension that can be compared, or once you can clearly see the pros and cons, there is no need to split further.

(4) To judge whether the detection conclusion is correct, go back up from the low dimensions to the high dimensions

Our final criterion is to go back up from the low dimensions to the high dimensions and check the data indicators at the higher dimensions.

Does this process feel familiar? Its core thinking is in fact the first article in this series: "How to Become an Operations Expert - Thinking (I): Think from Top to Bottom, Execute from Bottom to Top".

With that, the data section is finished. As mentioned in the previous article, data has no limit as you drill downward, so in these two articles we only describe some top-level data together with commonly used scenarios and analysis methods. The more detailed content is waiting to be explored in your own work.

Summary

Data drives operations, and mastering the data means mastering the strategic direction of operations. But in the process of implementing operational strategies, are there ways to achieve twice the result with half the effort? At what stage will these methods play what role? Next, we will enter the "Method" section.

The author of this article is @志远, compiled and published by Qinggua Media. Please include the author information and source when reprinting.
