How to grab red envelopes scientifically: write a program to grab red envelopes

Everyone knows the background: it's Chinese New Year, and red envelopes are flying everywhere. I had learned Python only two days before and was quite excited, so I studied how to grab Weibo red envelopes. Why Weibo red envelopes instead of Alipay red envelopes? Because I only know the Web. If I have energy left over, I may study the whack-a-mole algorithm later.

Because I'm a Python beginner, and this program is only the third one I've written since picking up the language, please don't point out the bad parts of the code. The key is the idea. Well, if there are bad parts in the idea, please don't point those out either. Look, IE has the nerve to set itself as the default browser, so surely it's acceptable for me to show off with a crappy article?

I use Python 2.7. I heard that there are big differences between Python 2 and Python 3. Friends who are even less knowledgeable than me should pay attention.

0x01 Thoughts

I'm too lazy to describe it in words, so I drew a sketch and I think you can understand it.

First of all, as usual, let's import a pile of libraries that I can't explain but can't do without:

    import re                # regular expressions for scraping
    import urllib            # urlencode / quote
    import urllib2           # HTTP requests
    import cookielib         # cookie persistence across requests
    import base64            # encoding the username
    import binascii          # hex-encoding the RSA ciphertext
    import os
    import json              # parsing API responses
    import sys
    import cPickle as p      # caching the crawled list to disk
    import rsa               # encrypting the password (third-party)

Then declare some other variables that will be used later:

    reload(sys)
    sys.setdefaultencoding('utf-8')  # set the default character encoding to utf-8
    luckyList = []                   # the red envelope list
    lowest = 10                      # the lowest top-draw record we will tolerate

An rsa library is used here. Python does not ship with it, so you need to install it first: https://pypi.python.org/pypi/rsa/

After downloading, run python setup.py install (or simply pip install rsa), and then we can start development.

0x02 Weibo login

You can only grab a red envelope after logging in, so there has to be a login routine. The login itself is not the key part; the key is preserving cookies, which needs cookielib's cooperation.

    cj = cookielib.CookieJar()
    opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
    urllib2.install_opener(opener)

In this way, every request made through opener automatically stores and resends cookies. Although I don't quite understand it, it feels magical.
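If you want to see the magic for yourself, here is a minimal sketch (the URL is just an example; any page that sets cookies will do):

    # After any request made through `opener`, the response's Set-Cookie
    # headers land in `cj` and are re-sent automatically on later requests.
    opener.open('http://weibo.com/')        # example request
    for cookie in cj:
        print cookie.name, cookie.value     # inspect what the server handed back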

Next we need two helper functions: one for simply GETting data and one for POSTing data. They differ only by a couple of parameters and could be merged into one function, but I am lazy and stupid, and I neither want to nor can change the code.

    def getData(url):
        try:
            req = urllib2.Request(url)
            result = opener.open(req)
            text = result.read()
            # re-encode for the GBK console on Win7 (explained below)
            text = text.decode("utf-8").encode("gbk", 'ignore')
            return text
        except Exception, e:
            print u'Request exception, url: ' + url
            print e

    def postData(url, data, header):
        try:
            data = urllib.urlencode(data)
            req = urllib2.Request(url, data, header)
            result = opener.open(req)
            text = result.read()
            return text
        except Exception, e:
            print u'Request exception, url: ' + url

With these two helpers we can GET and POST data. The reason getData decodes and then re-encodes is that the output always came out garbled while I was debugging under Win7, so I added some encoding handling; the sketch below shows the effect.
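    # Minimal illustration of the mojibake problem (assumption: a GBK console,
    # as on the author's Win7 box). The bytes below are the UTF-8 encoding of
    # the word 'hongbao' (red envelope).
    s = '\xe7\xba\xa2\xe5\x8c\x85'
    print s                                          # mojibake on a GBK console
    print s.decode('utf-8').encode('gbk', 'ignore')  # displays correctly there

Encodings aside, the login function below is the core of Weibo login.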

    def login(nick, pwd):
        print u"----------Logging in----------"
        print "----------......----------"
        prelogin_url = 'http://login.sina.com.cn/sso/prelogin.php?entry=weibo&callback=sinaSSOController.preloginCallBack&su=%s&rsakt=mod&checkpin=1&client=ssologin.js(v1.4.15)&_=1400822309846' % nick
        preLogin = getData(prelogin_url)
        servertime = re.findall('"servertime":(.+?),', preLogin)[0]
        pubkey = re.findall('"pubkey":"(.+?)",', preLogin)[0]
        rsakv = re.findall('"rsakv":"(.+?)",', preLogin)[0]
        nonce = re.findall('"nonce":"(.+?)",', preLogin)[0]
        su = base64.b64encode(urllib.quote(nick))
        rsaPublickey = int(pubkey, 16)
        key = rsa.PublicKey(rsaPublickey, 65537)  # public exponent is fixed at 65537
        message = str(servertime) + '\t' + str(nonce) + '\n' + str(pwd)
        sp = binascii.b2a_hex(rsa.encrypt(message, key))
        header = {'User-Agent': 'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)'}
        param = {
            'entry': 'weibo',
            'gateway': '1',
            'from': '',
            'savestate': '7',
            'userticket': '1',
            'ssosimplelogin': '1',
            'vsnf': '1',
            'vsnval': '',
            'su': su,
            'service': 'miniblog',
            'servertime': servertime,
            'nonce': nonce,
            'pwencode': 'rsa2',
            'sp': sp,
            'encoding': 'UTF-8',
            'url': 'http://weibo.com/ajaxlogin.php?framelogin=1&callback=parent.sinaSSOController.feedBackUrlCallBack',
            'returntype': 'META',
            'rsakv': rsakv,
        }
        s = postData('http://login.sina.com.cn/sso/login.php?client=ssologin.js(v1.4.15)', param, header)

        try:
            urll = re.findall("location.replace\(\'(.+?)\'\);", s)[0]
            login = getData(urll)  # requesting this URL makes the login state take effect
            print u"---------Login successful!-------"
            print "----------......----------"
        except Exception, e:
            print u"---------Login failed!-------"
            print "----------......----------"
            exit(0)

The parameters and the encryption algorithm here are all copied from the Internet, and I don't really understand them. Roughly: first request a timestamp and public key, encrypt the credentials with RSA, then submit the processed result to the Sina login interface. On success, Sina returns a Weibo address that must be requested once for the login state to fully take effect; after that, every request carries the current user's cookie.
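To make the encryption step concrete, here is a self-contained sketch with a throwaway key and invented values; in the real flow the modulus comes from the prelogin response:

    import binascii
    import rsa

    # Throwaway key pair standing in for Sina's prelogin pubkey (assumption:
    # 1024-bit); servertime, nonce and pwd are invented for illustration.
    (pub, priv) = rsa.newkeys(1024)
    servertime, nonce, pwd = '1423744829', 'ABCDEF', 'secret'
    message = str(servertime) + '\t' + str(nonce) + '\n' + str(pwd)
    sp = binascii.b2a_hex(rsa.encrypt(message, pub))  # hex ciphertext posted as 'sp'
    print len(sp)  # 256 hex characters for a 1024-bit key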

0x03 Grabbing a red envelope

After successfully logging into Weibo, I couldn't wait to find a red envelope to try it on, in the browser first of course. After much clicking around I finally found a page with a grab button, pressed F12 to summon the debugger, and watched how the request went out.

You can see that the request goes to http://huodong.weibo.com/aj_hongbao/getlucky with two main parameters: ouid, the red envelope ID (visible in the page URL), and share, which decides whether the result is shared to Weibo. There is also a _t, whose purpose I don't know.

Well, in theory you can now draw a red envelope by submitting these three parameters to that URL. When you actually submit them, however, the server magically returns a string like this:

    {"code": 303403, "msg": "Sorry, you do not have permission to access this page", "data": []}

Don't panic. Based on my many years of Web development experience, their programmer is probably checking the Referer. The fix is simple: copy every header from the browser's request.

    def getLucky(id):  # the draw routine
        print u"---Drawing red envelope: " + str(id) + "---"
        print "----------......----------"

        if checkValue(id) == False:  # does not meet the conditions (see the function below)
            return
        luckyUrl = "http://huodong.weibo.com/aj_hongbao/getlucky"
        param = {
            'ouid': id,
            'share': 0,
            '_t': 0
        }

        header = {
            'Cache-Control': 'no-cache',
            'Content-Type': 'application/x-www-form-urlencoded',
            'Origin': 'http://huodong.weibo.com',
            'Pragma': 'no-cache',
            'Referer': 'http://huodong.weibo.com/hongbao/' + str(id),
            'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.146 BIDUBrowser/6.x Safari/537.36',
            'X-Requested-With': 'XMLHttpRequest'
        }
        res = postData(luckyUrl, param, header)

In theory, there is no problem, and in fact, there is no problem. After the draw completes we need to judge the status. The returned res is a JSON string in which code 100000 means success, 901114 means today's draws have hit the limit, and any other value is also a failure, so:

        # still inside getLucky(): judge the result of the draw
        hbRes = json.loads(res)
        if hbRes["code"] == '901114':  # today's red envelopes have all been grabbed
            print u"---------The upper limit has been reached---------"
            print "----------......----------"
            log('lucky', str(id) + '---' + str(hbRes["code"]) + '---' + hbRes["data"]["title"])
            exit(0)
        elif hbRes["code"] == '100000':  # success
            print u"---------Congratulations on your prosperity---------"
            print "----------......----------"
            log('success', str(id) + '---' + res)
            exit(0)

        if hbRes["data"] and hbRes["data"]["title"]:
            print hbRes["data"]["title"]
            print "----------......----------"
            log('lucky', str(id) + '---' + str(hbRes["code"]) + '---' + hbRes["data"]["title"])
        else:
            print u"---------Request error---------"
            print "----------......----------"
            log('lucky', str(id) + '---' + res)

Here, log is another little function I defined to record logs:

    def log(type, text):
        fp = open(type + '.txt', 'a')  # appends to success.txt, lucky.txt, etc.
        fp.write(text)
        fp.write('\r\n')
        fp.close()

0x04 Crawling the red envelope list

With the single-envelope test successful, it is time for the core module of the program: crawling the red envelope list. There must be many ways in, such as searching Weibo for various keywords, but I use the simplest one here: crawling the official list pages of the activity.

Clicking around the red envelope activity homepage (http://huodong.weibo.com/hongbao) shows everything there is. Although there are many links in the lists, they boil down to two kinds (leaving aside the "richest" list): theme pages and ranking pages.

Summon F12 again and analyze the format of these two kinds of pages. First the theme lists, for example: http://huodong.weibo.com/hongbao/special_quyu

You can see that each red envelope's information sits in a div with the class name info_wrap, so we just need to fetch the page source, grab all the info_wrap blocks, and do some light processing to get this page's red envelope list. That calls for some regular expressions:

    def getThemeList(url, p):  # theme-page red envelopes
        print u"---------Page " + str(p) + "---------"
        print "----------......----------"
        html = getData(url + '?p=' + str(p))
        # NOTE: the HTML tag literals inside these two patterns were lost when the
        # article was transcribed; the tags here are reconstructed placeholders, but
        # the capture groups (cash, gift value, number sent, link) match the original.
        pWrap = re.compile(r'<div class="info_wrap">(.+?)</div>', re.DOTALL)  # every info_wrap block
        pInfo = re.compile(r'.+>(.+)<.+>(.+)<.+>(.+)<.+href="(.+)" class="btn"', re.DOTALL)  # envelope details
        List = pWrap.findall(html)
        n = len(List)
        if n == 0:
            return
        for i in range(n):  # walk every info_wrap div
            s = pInfo.match(List[i])  # extract the envelope's details
            info = list(s.groups(0))
            info[0] = float(info[0].replace('\xcd\xf2', '0000'))  # cash: GBK 'wan' (10,000) -> four zeros
            try:
                info[1] = float(info[1].replace('\xcd\xf2', '0000'))  # gift value in 'wan'
            except Exception, e:
                info[1] = float(info[1].replace('\xd2\xda', '00000000'))  # gift value in 'yi' (100 million)
            info[2] = float(info[2].replace('\xcd\xf2', '0000'))  # number already sent
            if info[2] == 0:
                info[2] = 1  # prevent division by zero
            if info[1] == 0:
                info[1] = 1  # prevent division by zero
            info.append(info[0] / (info[2] + info[1]))  # envelope value: cash / (recipients + gift value)
            luckyList.append(info)
        if 'class="page"' in html:  # a next page exists
            p = p + 1
            getThemeList(url, p)  # recurse to crawl the next page

Regular expressions are hard. It took me ages to learn enough to write those two lines. Note the info[4] appended to each info: it is an algorithm I came up with to roughly judge the value of a red envelope. Why? Because there are many red envelopes but only four draws per day; in this vast sea we must find the most valuable envelopes and draw those. There are three figures to go on: cash amount, gift value, and number of recipients. Clearly, if there is little cash but many recipients, or the gift value is absurdly inflated (some even run to the billions), it is not worth grabbing. So after a long struggle I arrived at a formula for weighing envelopes: value = cash / (number of recipients + gift value). A quick worked example follows.
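Two invented envelopes show why the formula ranks things the way we want:

    # Worked example of the scoring formula with invented numbers.
    # value = cash / (number of recipients + gift value)
    cash, recipients, gift = 50000.0, 2000.0, 300000.0
    print cash / (recipients + gift)   # ~0.17: padded out with "gift value", skip it
    cash, recipients, gift = 20000.0, 500.0, 1.0
    print cash / (recipients + gift)   # ~39.9: mostly cash, few grabbers, worth a shot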

The ranking pages work on the same principle: find the key tags and match them with regular expressions.

    def getTopList(url, daily, p):  # ranking-page red envelopes
        print u"---------Page " + str(p) + "---------"
        print "----------......----------"
        html = getData(url + '?daily=' + str(daily) + '&p=' + str(p))
        # Tag literals again lost in transcription and reconstructed from context;
        # on ranking pages the groups come out as (count, cash, gift value, link).
        pWrap = re.compile(r'<div class="list_info">(.+?)</div>', re.DOTALL)  # every list_info block
        pInfo = re.compile(r'.+>(.+)<.+>(.+)<.+>(.+)<.+href="(.+)" class="btn rob_btn"', re.DOTALL)  # envelope details
        List = pWrap.findall(html)
        n = len(List)
        if n == 0:
            return
        for i in range(n):  # walk every list_info div
            s = pInfo.match(List[i])  # extract the envelope's details
            topinfo = list(s.groups(0))
            info = list(topinfo)
            info[0] = topinfo[1].replace('\xd4\xaa', '')  # strip the GBK 'yuan' character
            info[0] = float(info[0].replace('\xcd\xf2', '0000'))  # cash: 'wan' (10,000) -> four zeros
            info[1] = topinfo[2].replace('\xd4\xaa', '')  # strip 'yuan'
            try:
                info[1] = float(info[1].replace('\xcd\xf2', '0000'))  # gift value in 'wan'
            except Exception, e:
                info[1] = float(info[1].replace('\xd2\xda', '00000000'))  # gift value in 'yi' (100 million)
            info[2] = topinfo[0].replace('\xb8\xf6', '')  # strip the GBK counter 'ge'
            info[2] = float(info[2].replace('\xcd\xf2', '0000'))  # number already sent
            if info[2] == 0:
                info[2] = 1  # prevent division by zero
            if info[1] == 0:
                info[1] = 1  # prevent division by zero
            info.append(info[0] / (info[2] + info[1]))  # envelope value: cash / (recipients + gift value)
            luckyList.append(info)
        if 'class="page"' in html:  # a next page exists
            p = p + 1
            getTopList(url, daily, p)  # recurse to crawl the next page

OK, now we can crawl both kinds of list page. Next we need the list of lists, that is, the collection of all these list addresses, and crawl them one by one:

    def getList():
        print u"---------Searching for targets---------"
        print "----------......----------"

        themeUrl = {  # theme lists
            'theme': 'http://huodong.weibo.com/hongbao/theme',
            'pinpai': 'http://huodong.weibo.com/hongbao/special_pinpai',
            'daka': 'http://huodong.weibo.com/hongbao/special_daka',
            'youxuan': 'http://huodong.weibo.com/hongbao/special_youxuan',
            'qiye': 'http://huodong.weibo.com/hongbao/special_qiye',
            'quyu': 'http://huodong.weibo.com/hongbao/special_quyu',
            'meiti': 'http://huodong.weibo.com/hongbao/special_meiti',
            'hezuo': 'http://huodong.weibo.com/hongbao/special_hezuo'
        }

        topUrl = {  # ranking lists
            'mostmoney': 'http://huodong.weibo.com/hongbao/top_mostmoney',
            'mostsend': 'http://huodong.weibo.com/hongbao/top_mostsend',
            'mostsenddaka': 'http://huodong.weibo.com/hongbao/top_mostsenddaka',
            'mostsendpartner': 'http://huodong.weibo.com/hongbao/top_mostsendpartner',
            'cate': 'http://huodong.weibo.com/hongbao/cate?type=',
            'clothes': 'http://huodong.weibo.com/hongbao/cate?type=clothes',
            'beauty': 'http://huodong.weibo.com/hongbao/cate?type=beauty',
            'fast': 'http://huodong.weibo.com/hongbao/cate?type=fast',
            'life': 'http://huodong.weibo.com/hongbao/cate?type=life',
            'digital': 'http://huodong.weibo.com/hongbao/cate?type=digital',
            'other': 'http://huodong.weibo.com/hongbao/cate?type=other'
        }

        for (theme, url) in themeUrl.items():
            print "----------" + theme + "----------"
            print url
            print "----------......----------"
            getThemeList(url, 1)

        for (top, url) in topUrl.items():
            print "----------" + top + "----------"
            print url
            print "----------......----------"
            getTopList(url, 0, 1)  # crawl both daily settings
            getTopList(url, 1, 1)

0x05 Determine the availability of red envelopes

This part is relatively simple. First search the page source for the keyword that marks the grab button, then look at the receiving ranking to see the highest draw on record. If the best anyone has drawn is only a couple of yuan, then bye bye...

The address to view the red envelope record is http://huodong.weibo.com/aj_hongbao/detailmore?page=1&type=2&_t=0&__rnd=1423744829265&uid=red envelope id

    def checkValue(id):
        infoUrl = 'http://huodong.weibo.com/hongbao/' + str(id)
        html = getData(infoUrl)

        if 'action-type="lottery"' in html:  # the grab button exists
            logUrl = "http://huodong.weibo.com/aj_hongbao/detailmore?page=1&type=2&_t=0&__rnd=1423744829265&uid=" + id  # receiving-ranking data
            param = {}
            header = {
                'Cache-Control': 'no-cache',
                'Content-Type': 'application/x-www-form-urlencoded',
                'Pragma': 'no-cache',
                'Referer': 'http://huodong.weibo.com/hongbao/detail?uid=' + str(id),
                'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.146 BIDUBrowser/6.x Safari/537.36',
                'X-Requested-With': 'XMLHttpRequest'
            }
            res = postData(logUrl, param, header)
            pMoney = re.compile(r'<span class="money">(\d+?.+?)\xd4\xaa</span>', re.DOTALL)  # amounts end in the GBK 'yuan' character
            luckyLog = pMoney.findall(res)  # search the ranking response, not the page

            if len(luckyLog) == 0:
                maxMoney = 0
            else:
                maxMoney = float(luckyLog[0])

            if maxMoney < lowest:  # the biggest draw on record is below our threshold
                return False
        else:
            print u"---------One step slower---------"
            print "----------......----------"
            return False
        return True

0x06 Finishing work

The main modules have been completed, and now we need to connect all the steps in series:

    def start(username, password, low, fromFile):
        global lowest  # without this, the assignment below would only be local
        gl = False
        lowest = low
        login(username, password)
        if fromFile == 'y':
            if os.path.exists('luckyList.txt'):
                try:
                    f = file('luckyList.txt')
                    newList = p.load(f)
                    print u'---------Loading list---------'
                    print "----------......----------"
                except Exception, e:
                    print u'Failed to parse the local list; crawling the online pages instead.'
                    print "----------......----------"
                    gl = True
            else:
                print u'luckyList.txt does not exist locally; crawling the online pages instead.'
                print "----------......----------"
                gl = True
        else:
            gl = True  # user chose not to use the cached list
        if gl == True:
            getList()
            from operator import itemgetter
            newList = sorted(luckyList, key=itemgetter(4), reverse=True)  # best value first
            f = file('luckyList.txt', 'w')
            p.dump(newList, f)  # cache the crawled list so the next run can skip crawling
            f.close()

        for lucky in newList:
            if not 'http://huodong.weibo.com' in lucky[3]:  # not a red envelope link
                continue
            print lucky[3]
            id = re.findall(r'(\w*[0-9]+)\w*', lucky[3])  # pull the numeric id out of the URL
            getLucky(id[0])

Because re-crawling the whole red envelope list on every test run is a pain, I added code to dump the finished list to a file; later runs can read the local list and go straight to grabbing. Assuming a previous run has written luckyList.txt, a quick way to peek at the cache looks like this:
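    # Hypothetical inspection snippet: assumes luckyList.txt was written by
    # start() above, i.e. a cPickle dump of the sorted envelope list.
    import cPickle as p

    f = file('luckyList.txt')
    cached = p.load(f)
    f.close()
    for info in cached[:5]:     # five highest-value envelopes; the score sits in info[4]
        print info[4], info[3]

With the start module built, all that is left is an entry program that feeds it a Weibo account: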

    if __name__ == "__main__":
        print u"------------------Weibo Red Envelope Assistant------------------"
        print "---------------------v0.0.1---------------------"
        print u"-------------by @All-powerful Soul Master----------------"
        print "-------------------------------------------------"

        try:
            uname = raw_input(u"Please enter your Weibo account: ".decode('utf-8').encode('gbk'))
            pwd = raw_input(u"Please enter your Weibo password: ".decode('utf-8').encode('gbk'))
            low = int(raw_input(u"Only join a draw when its biggest recorded win exceeds n: ".decode('utf-8').encode('gbk')))
            fromFile = raw_input(u"Use the red envelope list in luckyList.txt? (y/n) ".decode('utf-8').encode('gbk'))
        except Exception, e:
            print u"Parameter error"
            print "----------......----------"
            print e
            exit(0)

        print u"---------Program starts---------"
        print "----------......----------"
        start(uname, pwd, low, fromFile)
        print u"---------Program ends---------"
        print "----------......----------"
        os.system('pause')

0x07 Go away!

The basic crawler skeleton is now essentially complete. There is still plenty of room for improvement in the details: batch login support, a better red envelope value algorithm, and no doubt many spots in the code itself that could be optimized. But with my ability, I think this is as far as I can go.

Everyone has seen the result. I wrote hundreds of lines of code and thousands of words, and all I got in return was a set of Double Color Ball lottery numbers. What a rip-off! How could it be lottery numbers? (Aside: the author grew more and more agitated as he spoke and actually started crying. Bystanders tried to comfort him: "Brother, it's not that serious. It's just a Weibo red envelope. I shook my phone like mad yesterday and didn't get a WeChat red envelope either.")

Alas, that's not actually why I'm crying. I'm sad because I'm already in my twenties and still doing something as pointless as writing a program to grab Weibo red envelopes. This is not the life I want at all!

Source code download: http://download..com/data/1984536
