How do I view server logs? How to view website logs?


For SEO personnel working on website optimization, understanding data analysis is essential. As the editor at Dongguan SEO discussed last time when talking about how to quickly improve website rankings, if you want your site to rank, spiders must first be crawling it; if no spiders crawl your website, there is no ranking to speak of. So how do we check whether spiders are crawling our website? That is the topic I am going to share with you today: how to view server logs and website logs.


1. First, download the logs from the server. Since the editor uses a HiChina virtual host, the logs are downloaded via FTP from the hosting control panel. On most virtual hosts the log directory is named wwwlogs, but some providers differ; ask your service provider if you cannot find it. One possible way to fetch the file is sketched below.
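For reference, if your host exposes the log directory over FTP and wget is installed in cygwin, the file can be pulled straight from the command line. This is only a sketch: the hostname, credentials, directory, and file name below are placeholders that you must replace with the details from your own provider.

Input: wget "ftp://username:password@ftp.example.com/wwwlogs/access_log" -O 1.log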

2. There are many tools for analyzing website logs, such as Light Year Log Analysis, Lager, and various PHP-based tools, but I don't find them very good. Today I will introduce a piece of software I mentioned earlier: cygwin.

  • 1. Since the installation of cygwin has been discussed before, this article will not introduce it again. Open cygwin and input: pwd. This prints the current working directory, which on a default installation is typically your home directory (for example /home/Administrator); this is where the log file should be placed.

2. Rename the downloaded log (the editor calls it 1.log, the name used in the commands below) and put it into the Administrator folder, i.e. cygwin's home directory, as sketched below.
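If the log was saved somewhere on the Windows side, it can be copied into the cygwin home directory from within cygwin, since Windows drives are mounted under /cygdrive. This is a minimal sketch, assuming the file landed in the Administrator account's Downloads folder; the source path and file name are placeholders.

Input: cp /cygdrive/c/Users/Administrator/Downloads/access_log.txt /home/Administrator/1.log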

3. Separate the data you want to analyze

3. How do you separate the data? Here are some commands you can copy directly. Open the cygwin software.

  • 1. Check how the Baidu spider is crawling the site:

Input: cat 1.log | grep -i 'baiduspider' >> baidu.txt

Input: cat baidu.txt | awk '{print $9}' | sort | uniq -c

(Note: 1.log is the name the editor gave to the downloaded website log file.) The first command separates the Baidu spider records from your website log, which is very convenient for checking how the spider crawls the site. The "9" in $9 refers to a field (column) of the log line: in the common/combined log format, field 9 is the HTTP status code, so the second command counts how many times each status code was returned to the spider. If your log format is different, change the field number to match it.
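For reference, the two commands above can also be combined into a single pipeline; this is just a sketch and assumes the common/combined log format, where field 9 is the status code:

Input: grep -i 'baiduspider' 1.log | awk '{print $9}' | sort | uniq -c | sort -rn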

2. Check which crawled URLs returned 404:

Input: cat baidu.txt | grep '404' >> baidu404.txt

Input: cat baidu404.txt | awk '{print $7}' >> 404.txt
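Equivalently, the 404 URLs can be pulled out of the original log in one step, together with a count of how often each was hit; this is a sketch that assumes field 9 is the status code and field 7 is the requested URL:

Input: grep -i 'baiduspider' 1.log | awk '$9 == 404 {print $7}' | sort | uniq -c | sort -rn >> 404.txt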

Note: to count how many times each link is crawled repeatedly (links that are crawled excessively can have nofollow added, but nofollow should not be added indiscriminately), input: cat baidu.txt | awk '{print $7}' | sort | uniq -c. Here $7 is the field that holds the requested URL in the common log format; change the number to match your own log.
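To see the most heavily crawled links first, the same count can be sorted in descending order and trimmed to the top entries; a small sketch:

Input: cat baidu.txt | awk '{print $7}' | sort | uniq -c | sort -rn | head -20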

If you want to analyze other content, you can use the same method. You can also use this approach to investigate why the site as a whole is not being indexed and to see how frequently the spider crawls, as sketched below. Isn't it simple and practical?
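As one concrete way to look at crawl frequency (a sketch assuming the common log format, where field 4 holds the timestamp as [day/month/year:hour:minute:second): cutting that field at the first colon leaves just the date, so the spider's hits can be grouped by day.

Input: awk '{print $4}' baidu.txt | cut -d: -f1 | sort | uniq -c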
