How do I view server logs? How do I view website logs?


For an SEO professional, understanding data analysis is essential. As this editor (Dongguan SEO) discussed last time when talking about how to quickly improve website rankings, if you want your website to rank, spiders must first crawl it; if no spiders are crawling your site, ranking is out of the question. So how do we check whether spiders are crawling our website? That is the topic I am sharing with you today: How do I view server logs? How do I view website logs?


1. First, download the logs from the server. Since the editor uses a HiChina virtual host, the logs are downloaded via FTP from the hosting backend. On most virtual hosts the log directory is wwwlogs, but some providers differ; if in doubt, ask your service provider.
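If your host exposes the logs over plain FTP, you can also fetch them from the cygwin command line instead of a graphical FTP client. A minimal sketch (the hostname, account, and path here are placeholders; substitute the values your provider gives you):

Input: wget ftp://username:password@ftp.example.com/wwwlogs/access.log

(Note: wget is an optional cygwin package; if the command is missing, re-run the cygwin installer and select it.)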

2. There are many tools for analyzing website logs, such as Light Year Log Analysis, Lager, various PHP tools, and so on, but I don't find them very good. Today I will introduce a piece of software I mentioned earlier: cygwin.

  • 1. Since the installation of cygwin has been covered before, this article will not introduce it again. Open cygwin and enter: pwd (this prints the current working directory, which is where the log file needs to go).

2. Rename the downloaded log (for example, to 1.log) and put it into the Administrator folder, i.e. the home directory that pwd printed; see the quick check after this list.

3. Separate the data you want to analyze
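Before running any analysis, it is worth confirming that the file landed where cygwin expects it. A minimal check, assuming the log was renamed 1.log as above:

Input: pwd

Input: ls 1.log

If ls complains that there is no such file, the log is not in cygwin's working directory yet.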

3. How do you separate the data? Here are some commands you can copy directly. Open the cygwin software.

  • 1. Check how Baidu spider is crawling the site:

Input: cat 1.log|grep -i 'baiduspider'>>baidu.txt

Input: cat baidu.txt|awk '{print $9}'|sort|uniq -c

(Note: 1.log is the name the editor gave the downloaded website log file, and the -i added to grep makes the match case-insensitive, since the user agent is written "Baiduspider".) These commands separate Baidu spider's entries out of your website log, which is very convenient: you can check the status of the spider's crawling at a glance. The number 9 is the column in your log format that holds the HTTP status code; if your log is laid out differently, change it to the matching column.
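To see why the commands pick out columns 9 and 7, here is an illustrative line in the common combined log format (a made-up example, not from a real log), as awk splits it on spaces:

123.125.71.1 - - [10/Mar/2017:08:12:01 +0800] "GET /page.html HTTP/1.1" 200 5432 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"

Here $1 is the visiting IP, $4 and $5 the timestamp, $7 the requested link, and $9 the HTTP status code, which is why print $9 gives the crawl status counts.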

2. Check which crawled URLs returned 404:

Input: cat baidu.txt|grep '404' >> baidu404.txt

Input: cat baidu404.txt|awk '{print $7}'>>404.txt

Note: To count how many times each link is crawled repeatedly (consider adding nofollow to links that are crawled too often, but do not add nofollow indiscriminately), enter: cat baidu.txt|awk '{print $7}'|sort|uniq -c (replace the 7 with whichever column holds the URL in your log format).
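Once the 404 links are collected in 404.txt, one extra step ranks them so the worst offenders appear first. A small sketch built from the same commands plus sort -rn and head:

Input: cat 404.txt|sort|uniq -c|sort -rn|head -20

This prints the 20 dead links that Baidu spider hits most often, which are the first ones worth fixing or redirecting.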

If you want to analyze other content, the method is the same. You can also use this approach to look into why the website is not being indexed and to see how frequently spiders crawl it. Simple and practical, isn't it?
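For example, the same pattern separates out any other spider; here is a sketch for Googlebot (the file name google.txt is just this article's naming style carried over, choose your own):

Input: cat 1.log|grep -i 'googlebot'>>google.txt

Input: cat google.txt|awk '{print $9}'|sort|uniq -c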

