Artificial intelligence has gone through several periods of decline, dark times known as "AI winters." This is not one of those periods. On the contrary, the field has become so hot that technology giants such as Google, Facebook, Apple, Baidu, and Microsoft are competing for leadership in it. Much of the current excitement stems from research progress on "convolutional neural networks," a machine learning technique that has brought enormous and exciting advances in computer vision, speech recognition, and natural language processing. You may have heard of it under a more popular and friendly name: deep learning.

Few people are more closely associated with deep learning than Yann LeCun, 54. As a researcher at Bell Labs in the late 1980s, LeCun developed convolutional networks and showed how to use them to dramatically improve handwriting recognition; many handwritten checks in the United States are still processed with his methods. Around the turn of the century, when neural networks fell out of favor, LeCun was one of the few scientists who persisted. He became a professor at New York University in 2003 and has led the development of deep learning ever since.

Deep learning and related fields have since become among the most active areas of computer science research, which is one reason LeCun joined Facebook in late 2013 to lead its newly formed artificial intelligence lab, while retaining his position at NYU. Born in France, LeCun keeps something of his home country's tradition of the "public intellectual." His writing and speaking focus mainly on his technical field, but he does not shy away from other areas, including current events.

IEEE Spectrum's Lee Gomes sat down with LeCun at his Facebook office in New York City for an in-depth conversation in nine parts.

1. Deep learning in 8 words

Spectrum: We see a lot of deep learning in the news these days. Of all the descriptions of deep learning, which one do you dislike the most?

LeCun: My least favorite description is "it works like the brain." The reason I don't like people saying that is that while deep learning is inspired by biology, it is very, very different from how the brain actually works. Comparing it to the brain gives it a magical aura, and that is dangerous: it leads to hype, and people end up demanding something unrealistic. AI has been through several cold winters before because people demanded things AI could not deliver.

Spectrum: So if you were a journalist writing about deep learning, and, as all journalists do, you had to describe it in eight words, what would you say?

LeCun: Let me think. I would say "machines that learn to represent the world." Another way to describe it is "end-to-end machine learning." The idea is that in a machine that learns, every component, every stage, can be trained.

Spectrum: Your editor might not like that.

LeCun: Yes, the public wouldn't understand what I mean. OK, here's another way: you can think of deep learning as building learning machines by assembling lots of modules and components, such as pattern recognition systems, that can all be trained in the same way, so that a single principle trains everything. But that's more than eight words.

Spectrum: What can deep learning systems do that other machine-learning approaches cannot?

LeCun: That's a better question.
Earlier systems, which I suppose we can call "shallow learning systems," are limited in the complexity of the functions they can compute. If you want to use a shallow learning algorithm such as a "linear classifier" to recognize images, you first have to extract suitable features from the image to feed it, and designing a feature extractor by hand is difficult and time-consuming. The alternative is to use a more flexible classifier, such as a support vector machine or a two-layer neural network, and feed it the image pixels directly. The problem is that this won't get object recognition anywhere near the accuracy you need.

Spectrum: That doesn't sound like an explanation laypeople would follow. Maybe that's why journalists describe deep learning as...

LeCun: Like our brain.

2. A black box with 500 million knobs

Spectrum: Part of the problem is that machine learning is a remarkably inaccessible field for non-specialists. Some educated laymen can understand semi-technical computing ideas, such as the PageRank algorithm Google uses, but I'd bet that only professors understand linear classifiers and support vector machines. Is that because the field is inherently complicated?

LeCun: Actually, I think the basics of machine learning are quite easy to understand. I've explained the subject to high school teachers and students, and many of them did not find it boring.

A pattern recognition system is like a black box with a camera at one end, a red light and a green light on top, and a row of knobs on the front. A learning algorithm tries to adjust the knobs so that when a dog is in front of the camera, the red light turns on, and when a car is in front of the camera, the green light turns on. To train the system, you show it a dog. If the red light is on, do nothing. If it is dim, turn the knobs so the light gets brighter. If the green light turns on, turn the knobs so it gets dimmer. Then you show it a car and turn the knobs so the red light gets dimmer and the green light gets brighter. If you repeat this with many examples, adjusting the knobs only slightly each time, eventually the machine gets the right answer every time. The interesting thing is that it can then correctly tell apart cars and dogs it has never seen before. The trick is to calculate, for each knob, the direction and size of the turn, rather than twiddling at random. This involves computing a "gradient," which tells you, for every knob, how the lights change when that knob is turned.

Now imagine a box with 500 million knobs, 1,000 light bulbs, and 10 million training images. That is a typical deep learning system. (A miniature version of this box is sketched in code below.)
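To make the knob analogy concrete, here is a minimal sketch in Python with NumPy: a handful of knobs, two lights, and a training loop that estimates the gradient by nudging each knob and watching how the lights respond. Everything here, the data, the sizes, the learning rate, is invented for illustration; it is not code from LeCun or Facebook.

```python
# A miniature "box with knobs": toy camera inputs, a red and a green
# light, and knobs adjusted by estimating how the lights respond when
# each knob is nudged (a finite-difference gradient).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))         # toy stand-ins for camera images
y = rng.integers(0, 2, size=50)      # 0 = dog (red light), 1 = car (green)
knobs = np.zeros(8 * 2)              # one knob per feature-light pair

def wrong_light(k):
    """Average brightness of the wrong light; training drives this down."""
    scores = X @ k.reshape(8, 2)
    p = np.exp(scores - scores.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return 1.0 - p[np.arange(len(y)), y].mean()

eps, lr = 1e-4, 1.0
for step in range(200):
    grad = np.empty_like(knobs)
    for i in range(len(knobs)):      # nudge each knob, watch the lights
        nudged = knobs.copy()
        nudged[i] += eps
        grad[i] = (wrong_light(nudged) - wrong_light(knobs)) / eps
    knobs -= lr * grad               # turn every knob slightly downhill
```

A real deep learning system has hundreds of millions of knobs, so it cannot afford to nudge each one in turn; it computes the same gradient analytically, in one pass, with backpropagation.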
Spectrum: Your use of the term "shallow learning" seems a bit pointed; I doubt that people who use linear classifiers consider their work "shallow." Isn't "deep learning" itself partly media hype, since it suggests that what is learned is somehow profound, when in fact "deep" refers only to the number of layers in the system?

LeCun: Yes, it is a bit funny, but it reflects reality: shallow learning systems have one or two layers, while deep learning systems typically have 5 to 20. Shallow or deep refers not to the learning behavior itself but to the structure being trained.

3. Pursuing beautiful ideas

Spectrum: The standard Yann LeCun biography says that you kept exploring new approaches to neural networks at a time when everyone else had lost interest in them. What enabled you to ignore the conventional wisdom and keep going?

LeCun: I have always been fascinated by the idea of training a complete system end to end. You feed raw data into the system, and because it has multiple layers, each layer learns to transform the representation produced by the layer before it, until the final layer produces the answer. This idea, that learning should go from end to end so the machine learns good representations of the data, is what I have been fascinated by for the past 30 years.

Spectrum: Do you work more like a hacker or like a scientist? Do you try things until they work, or do you start from a theoretical insight?

LeCun: There is a constant interplay among intuitive insight, theoretical models, practical implementation, empirical study, and scientific analysis. Insight is creative thinking; models rest on mathematics; implementation is engineering and sheer hacking; empirical study and analysis are actual science. What I like most are simple, beautiful theories that can actually be made to work. I have no patience for researchers who stick with a theory merely because it is simple, or who dismiss genuinely useful methods because the theory behind them is too hard. There is some of that in machine learning. In fact, to some extent, the "neural network winter" at the end of the last century and the beginning of this one was caused by that attitude: methods with seemingly unassailable theoretical foundations but worthless empirical results, which is of little help for the next engineering problem.

But a purely empirical approach has its own pitfalls. Speech recognition, for example, has always had a strongly empirical tradition: only results that beat the baseline get the industry's attention. That stifles creativity, because if you want to beat research teams that have worked on the problem for years, you would first have to spend four or five years building your own basic infrastructure, which is difficult and risky, so nobody does it. As a result, progress in speech recognition, while continuous, has been incremental.

Spectrum: You seem to go out of your way to distance your work from neuroscience and biology. For example, you say "convolutional networks" rather than "convolutional neural networks," and your algorithms have "units" rather than "neurons."

LeCun: That's true. Some parts of our models are inspired by neuroscience, but many parts have nothing to do with it; they come from theory, intuition, and empirical exploration. Our models are not meant to be models of the brain, and we make no claims of neuroscientific relevance. At the same time, I am happy to say that convolutional networks are inspired by basic knowledge of the visual cortex. Some people are indirectly inspired by neuroscience but are unwilling to admit it; I admit that it is helpful. But I am careful to avoid the words that trigger hype, because there has been crazy hype in this field, and it is very dangerous.

4. The hype looks like science, but it's not

Spectrum: Hype is obviously harmful, but why do you call it "dangerous"?
LeCun: Because it sets up expectations among funding agencies, the public, potential customers, startups, and investors. They come to believe we are on the verge of building systems as powerful as the brain, when in fact we are very far from that goal. That is how you get another "winter cycle."

There is also some "cargo cult science" here, Richard Feynman's expression for something that sounds like science but isn't.

Spectrum: Can you give some examples?

LeCun: In cargo cult science, you copy the outward appearance of a machine without deeply understanding the principles behind it. In aviation, that means building an airplane by copying the look of a bird, its feathers, its wings, and so on. People in the 19th century were fond of this, with very limited success. The equivalent in artificial intelligence is trying to replicate every detail of what we know about neurons and synapses, then running a gigantic simulated neural network on a supercomputer in the hope that artificial intelligence will emerge. That is cargo cult AI. And there are many serious researchers with very large grants who basically buy into it.

Spectrum: Do you consider IBM's True North project cargo cult science?

LeCun: That is going to sound a little harsh. But I do think the IBM team's claims are somewhat biased and misleading. On the surface, their announcement is impressive, but it does not actually accomplish anything of value. Before True North, the same team used an IBM supercomputer to "simulate a brain at the level of a mouse." But it was just a random neural network that did nothing except burn CPU cycles.

The tragedy of the True North chip is that it could have been genuinely useful if it had not stuck so closely to biology and had not used the "spiking integrate-and-fire neuron" model. So in my opinion, and I once designed chips myself, before you build a chip you have to be absolutely sure it can do something useful. If you build a convolutional network chip, and it is quite clear how to do that, it can go into computing devices immediately. IBM built the wrong thing, and nothing very useful can be done with it.

Spectrum: Are there other examples?

LeCun: Fundamentally, a large part of the European Union's Human Brain Project rests on the idea that we should build chips that mimic the functioning of neurons as closely as possible, assemble them into a giant supercomputer, switch it on with some learning rule, and artificial intelligence will emerge. I think that is complete nonsense. Yes, I am referring to the EU Human Brain Project. That is not meant as a jab at everyone involved in it; many people joined simply because it offered enormous funding they could not refuse.

5. Unsupervised learning: the learning method that machines need

Spectrum: How much remains to be discovered about machine learning in general?

LeCun: A great deal. The learning methods used in actual deep learning systems are still very limited. What works in practice is "supervised learning." You show the system a picture and tell it that it is a car, and it adjusts its parameters so that it says "car" the next time. Then you show it a chair, then a person. After a few hundred examples per category and somewhere between days and weeks of computing time, depending on the size of the system, it figures it out. (A minimal sketch of such a supervised loop follows.)
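As a rough illustration of that supervised loop, here is a two-layer network trained end to end with backpropagation, again in NumPy, with made-up data standing in for the pictures and labels; the sizes and learning rate are arbitrary choices, not values from any real system.

```python
# Supervised learning in miniature: show inputs, reveal the correct label,
# and let backpropagation tell every layer how to adjust its parameters.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 64))       # stand-ins for images
y = rng.integers(0, 3, size=256)     # stand-ins for "car", "chair", "person"
Y = np.eye(3)[y]                     # one-hot targets

W1 = rng.normal(scale=0.1, size=(64, 32))   # first trainable layer
W2 = rng.normal(scale=0.1, size=(32, 3))    # second trainable layer
lr = 0.5

for epoch in range(200):
    h = np.maximum(0, X @ W1)        # hidden representation (ReLU)
    scores = h @ W2
    p = np.exp(scores - scores.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)           # predicted probabilities
    d_scores = (p - Y) / len(X)      # error signal at the output
    d_h = (d_scores @ W2.T) * (h > 0)           # error pushed back a layer
    W2 -= lr * h.T @ d_scores        # every stage adjusts its parameters
    W1 -= lr * X.T @ d_h
```

Note that the same error signal updates both layers: this is the "end-to-end" training LeCun describes, with every stage trained by a single principle.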
But humans and animals don't learn this way. As a baby, you were not told the name of every object you saw, yet you learned the concepts behind those objects: that the world is three-dimensional, that when I put an object behind another, it still exists. These concepts are not innate; you learn them. We call that kind of learning "unsupervised" learning.

In the mid-2000s, when many of us got involved in the revival of deep learning, Geoff Hinton, Yoshua Bengio, myself, the so-called "deep learning community," as well as Andrew Ng, the idea of using unsupervised learning rather than supervised learning took off. Unsupervised learning can help "pre-train" deep networks, and we made real progress there, but in the end what worked in practice was good old supervised learning combined with convolutional networks, which we had already done 20 years earlier, in the 1980s.

From a research perspective, though, we have remained interested in how to do unsupervised learning properly. We now have unsupervised techniques that actually work; the problem is that you can beat them simply by collecting more data and applying supervised learning. That is why, in industry today, the applications of deep learning are essentially all supervised. But it won't stay that way. Fundamentally, the brain is far better than our models at unsupervised learning, which means our artificial learning systems are missing some very basic principles of biological learning.

6. Deep learning at Facebook

Spectrum: What made Facebook interested in setting up an AI lab?

LeCun: Facebook's purpose is to connect people, and that increasingly means connecting people with the digital world. At the end of 2013, as Facebook was approaching its tenth anniversary, Mark Zuckerberg decided to create the Facebook Artificial Intelligence Lab, which I lead. The company was thinking about what connecting people would mean over the next ten years and realized that artificial intelligence would play a key role.

Every day, Facebook could show each person roughly 2,000 items: posts, pictures, videos, and so on. But no one has time to go through all of that, so Facebook has to automatically select the 100 to 150 items users most want or need to see. Doing that well requires understanding people: their tastes, interests, relationships, needs, even their goals in life. It also requires understanding content: what a post or a comment is about, what a picture or video contains. Only then can the most relevant content be selected and shown to the user. In a sense, doing this job perfectly is a problem that requires the full scope of artificial intelligence: it demands an understanding of people, emotions, culture, and art. Most of our work at Facebook's AI lab is aimed at developing new theories, principles, methods, and systems that let machines understand images, video, and language, and then reason about them.

Spectrum: Since we were just talking about hype, let me voice my own reservations. Facebook recently announced DeepFace, a facial recognition algorithm, and many reports said its accuracy is close to human performance. But weren't those results obtained on carefully curated datasets? Would the system be equally successful on random images from the Internet?

LeCun: Systems are certainly more sensitive than humans to image quality.
Humans can recognize faces in many different configurations, with beards and other changes in appearance, and computer systems have little advantage there. But a system can pick out a person from a very large collection of people, far more than a human could ever handle.

Spectrum: Could DeepFace do better than I can at browsing pictures online and figuring out, say, whether Obama is in one of them?

LeCun: It would certainly be faster.

Spectrum: Would it be more accurate?

LeCun: Probably not. But it can find one person among hundreds of millions. I cannot do that.

Spectrum: Would it achieve the 97.25 percent accuracy reported in the study?

LeCun: It is hard to give a number without testing on a particular database; it all depends on the nature of the data. With hundreds of millions of faces in the image library, the accuracy would be far below 97.25 percent.

Spectrum: Part of the problem seems to be that some of the jargon computer researchers use means something different to laypeople. When researchers talk about "accuracy," they may mean results on a carefully curated dataset, while laypeople assume the computer is recognizing images like the random ones we see in daily life. The results demand more of computer systems than the news coverage suggests.

LeCun: Yes. We also run lots of benchmarks, like everyone else, on public "faces in the wild" databases and so on, and of course compare our methods against others. And we have internal datasets as well.

Spectrum: So, in general, how close are computers to human performance at recognizing faces in random images found on the Internet?

LeCun: Fairly close.

Spectrum: Can you give me a number?

LeCun: No. The scenarios differ, and so do the results.

Spectrum: How does deep learning do beyond image recognition, particularly on problems closer to general intelligence, such as natural language?

LeCun: A large part of our work at Facebook focuses on that. How do we combine the strengths of deep learning, its ability to learn representations of the world, with things that emerged with language: the ability to accumulate knowledge from fleeting signals, the ability to reason, the ability to store knowledge in a way that current deep learning systems do not?

With today's deep learning systems, training is like learning a motor skill, the way we teach ourselves to ride a bicycle. You acquire a skill, but there isn't much factual memory or knowledge involved. Other things you learn require remembering facts; you have to memorize and store things. So a lot of what we do at Facebook, at Google, and elsewhere is to pair a neural network with a separate memory module. That can be used for things like natural language understanding.

We are beginning to see impressive results in natural language processing from deep learning augmented with a memory module. These systems are based on the idea of representing words and sentences as continuous vectors, transforming those vectors through the layers of a deep architecture, and storing them in a kind of associative memory. This works very well for question answering and for language translation. One example of this model is the Memory Network, recently proposed by Facebook scientists Jason Weston, Sumit Chopra, and Antoine Bordes. (The core memory lookup is sketched below.)
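The essential operation in such memory-augmented models is a soft lookup: a question vector is matched against every stored fact vector, and an attention-weighted blend of the memory is returned. Here is a minimal sketch, with random vectors standing in for the learned embeddings that a real memory network would train from data:

```python
# A soft memory read: match a question against stored facts and blend
# the best matches. The embeddings are random stand-ins; real systems
# learn them jointly with the rest of the network.
import numpy as np

rng = np.random.default_rng(2)
d = 16                               # embedding dimension
memory = rng.normal(size=(10, d))    # 10 stored facts, e.g. sentences
question = rng.normal(size=d)        # the embedded question

scores = memory @ question           # how relevant each fact is
attn = np.exp(scores - scores.max())
attn /= attn.sum()                   # softmax attention over the memory
answer_vec = attn @ memory           # weighted blend of the relevant facts
```

Because every step here is differentiable, the read operation can be trained with the same gradient descent as the rest of the network.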
Scientists at Google/DeepMind have proposed a related concept, the Neural Turing Machine.

Spectrum: So you don't think deep learning alone will be the key to general artificial intelligence?

LeCun: It will be part of the solution, and in part the solution will look like a big, complicated neural network. But it will be very different from what people have seen in the literature so far. You can already see the beginnings of it in some papers. Many people are working on "recurrent neural nets," networks in which the output is fed back to the input, so you can have a chain of reasoning. You can use them to process sequential signals such as speech, audio, video, and language, and the initial results are quite good. The next frontier for deep learning is natural language understanding.

Spectrum: If all goes well, what should we expect machines to be able to do soon that they cannot do today?

LeCun: You may see better speech recognition systems, though they will be somewhat behind the scenes. Your digital assistants will get better; there will be better question-answering and dialogue systems; you will be able to talk to your computer, ask it questions, and have it pull answers from its knowledge base; machine translation will become more accurate; and you will see self-driving cars and smarter robots, with self-driving cars using convolutional networks.

7. Can deep learning enable machines to acquire common sense?

Spectrum: In preparing for this interview, I collected questions that people in computing wanted to ask you. Oren Etzioni, director of the Allen Institute for Artificial Intelligence (AI2), asked about the Winograd Schema Challenge, an attempt to improve on the Turing test. The challenge involves not only natural language and common sense but also an understanding of how modern society works. How might computers tackle it?

LeCun: The heart of the problem is how to represent knowledge. In "traditional" AI, factual knowledge was entered by hand in the form of graphs, that is, sets of symbols or entities and the relations between them. But we now know that AI systems can acquire knowledge automatically, through learning. So the question becomes: how can machines learn to represent facts and relations? Deep learning is certainly part of the answer, but not all of it. The problem with symbols is that a symbol is just a meaningless string of bits. In deep learning systems, entities are represented by large vectors learned from data, which capture the data's relevant properties. Learning to reason then comes down to learning functions that operate on those vectors. Facebook researchers Jason Weston, Ronan Collobert, Antoine Bordes, and Tomas Mikolov have pioneered the use of vectors to represent words and language.

Spectrum: A classic problem in AI is giving machines common sense. Does deep learning offer any insight there?

LeCun: I think some form of common sense could be acquired through predictive unsupervised learning. For example, I could have a machine watch lots of videos of objects being thrown or dropped. The way I would train it is to show it a video and ask, "What happens next? What will the picture look like one second from now?" By training the machine to predict what the world will look like one second, one minute, one hour, or one day later, it will acquire a very good representation of the world. (A minimal sketch of this predictive setup follows.)
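Reduced to a sketch, the idea is that consecutive frames of a "video" become (input, target) pairs, and a predictor is trained to minimize its error on the next frame; no labels are involved. The frames here are random stand-ins, and the predictor is a single linear map, chosen only to keep the example short.

```python
# Predictive unsupervised learning in miniature: frame t is the input,
# frame t+1 is the target, and the video itself provides the supervision.
import numpy as np

rng = np.random.default_rng(3)
frames = rng.normal(size=(1000, 32))   # stand-in for flattened video frames
X, Y = frames[:-1], frames[1:]         # (current frame, next frame) pairs

W = np.zeros((32, 32))                 # the predictor's parameters
lr = 0.01
for epoch in range(100):
    err = X @ W - Y                    # how wrong each prediction was
    W -= lr * X.T @ err / len(X)       # reduce the mean squared error
```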
Such training would let the machine learn many of the constraints of the physical world: that an object thrown into the air falls after a while, that one object cannot be in two places at once, that an object still exists while it is occluded. Knowing the constraints of the physical world would let a machine "fill in the blanks" and predict the state of the world when it is told a story containing a sequence of events. Jason Weston, Sumit Chopra, and Antoine Bordes are building such a system using the "memory networks" I mentioned earlier.

Spectrum: When it comes to human intelligence and consciousness, many scientists say that we don't even know what we don't know. Do you think that is a problem for building artificial intelligence?

LeCun: It's hard to say. I once said that working on AI is like driving in the fog. You follow the road you can see, and suddenly a wall appears in front of you. That story has repeated itself throughout AI: perceptrons in the '50s and '60s; syntactic-symbolic approaches in the '70s; expert systems in the '80s; neural nets in the early '90s; then graphical models, kernel machines, and many other things. Each time there is new progress and new understanding, but also new limitations to overcome.

Spectrum: Another question comes from Stuart and Hubert Dreyfus, two well-known professors at the University of California, Berkeley: "The media report that computers are now powerful enough to identify and attack targets on their own. What do you think of that, and of the moral issues behind it?"

LeCun: Moral questions should not be left to scientists alone! The ethics of artificial intelligence must be discussed, and ultimately we should establish guidelines on what AI can and cannot be used for. This is not a new problem; society has had to answer the ethical questions that come with every powerful technology: nuclear and chemical weapons, nuclear power, biotechnology, genetic manipulation and cloning, access to information. I personally believe that machines should not be able to attack without a human making the decision. But then again, moral questions of this kind need to be examined collectively, through democratic and political processes.

Spectrum: You often make pointed comments on political topics. Doesn't that worry people at Facebook?

LeCun: Only a few things make me bristle. One is political decisions that are not grounded in facts and evidence. I react to any important decision that is not made rationally. Intelligent people can disagree about the best way to solve a problem, but it is very dangerous when people cannot agree on hard facts. That is what I speak out against. It happens that in this country, the people who champion irrational or religiously motivated decisions are mostly on the right. But I also call out people on the left: those who think all GMOs are evil (only some are), or who oppose vaccination or nuclear power for irrational reasons. I am a rationalist. I am also an atheist and a humanist, and I am not afraid to say so. My moral philosophy is to maximize overall human happiness and minimize human suffering over the long run. These are my personal opinions, not my employer's; I try to keep my personal opinions, which I publish on my personal Facebook page, clearly separate from my professional posts, which appear on my public Facebook page.

8. The old-fashioned singularity theory

Spectrum: You have said before that you disagree with some of the ideas associated with the Singularity movement. I'm interested in the sociology of it: how do you explain its popularity in Silicon Valley?

LeCun: It's hard to say. I'm somewhat baffled by the phenomenon. As Neil Gershenfeld, director of MIT's Center for Bits and Atoms, has pointed out, the early part of a sigmoid curve rises exponentially, which means a trend that looks exponential today is likely to hit bottlenecks, physical, economic, and social, and then pass an inflection point and saturate. I am an optimist, but I am also a realist. (Gershenfeld's point can be put in a single line of mathematics; see below.)
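In symbols, with generic parameters L, k, and t0 (these are not numbers from the interview, just the standard form of the curve):

```latex
f(t) = \frac{L}{1 + e^{-k(t - t_0)}}
     \;\approx\; L\, e^{k(t - t_0)} \quad \text{for } t \ll t_0,
\qquad \lim_{t \to \infty} f(t) = L
```

On its early stretch the logistic curve is indistinguishable from a pure exponential, yet it ultimately flattens out at the ceiling L; an observer sitting on the early stretch cannot tell the two apart.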
There are indeed people who promote the Singularity, such as Ray Kurzweil. He is a classic futurist with a positivist view of the future, and he has sold a lot of books promoting the Singularity. But as far as I know, he has contributed nothing to the science of artificial intelligence. He has sold technology products, some of them innovative, but nothing conceptually new, and he certainly has not written papers that teach people how to make breakthroughs and progress in AI.

Spectrum: What do you think he has accomplished in his current role at Google?

LeCun: So far, very little, it seems.

Spectrum: I've noticed something interesting when discussing the Singularity with researchers. In private they seem to dismiss it, but in public their comments are much milder. Is that because all the big names in Silicon Valley believe in it?

LeCun: Frontline AI researchers have to strike a delicate balance: stay optimistic about the goals, but don't oversell. Point out that it isn't easy, but don't make it sound hopeless. You have to be honest with your investors, sponsors, and employees; honest with your colleagues and peers; and honest with the public and with yourself. That is hard when there is so much uncertainty about future progress, and especially when people who are dishonest or self-deluded make big promises about coming successes. That is why we dislike unrealistic hype: it comes from people who are dishonest or deceiving themselves, and it makes the work of rigorous, honest scientists harder.

If you are in a position like Larry Page, Sergey Brin, Elon Musk, or Mark Zuckerberg, you have to think constantly about where technology is heading in the long run, because you command enormous resources and can use them to steer the future in directions you consider better. So inevitably you ask yourself those questions: what will technology look like in 10, 20, or 30 years? Where are artificial intelligence, the Singularity, and the ethical questions heading?

Spectrum: Fair enough. But you have a very clear view of how computer technology develops, and I don't think you believe we will be able to download our consciousness within the next 30 years.

LeCun: Not anytime soon.

Spectrum: Probably never.

LeCun: No, you can't say never. Technology is accelerating, changing by the day. Some problems deserve our attention now; others are so far off that we might write science fiction about them, but there is no need to worry about them yet.
9. Sometimes I need to create something myself

Spectrum: One more question from a researcher: Bjarne Stroustrup, the father of C++, asks, "You have built some very cool things, most of which can fly. Do you still find time to tinker, or has the fun been squeezed out of you by work?"

LeCun: There is plenty of fun in the work. But sometimes I need to build something with my own hands. I inherited that from my father, an aerospace engineer; my father and my brother are also passionate about building airplanes. When I go to France on vacation, we can bury ourselves in building aircraft for three weeks at a stretch.

Spectrum: What is the airplane in your Google+ profile picture?

LeCun: It's a Leduc, at the Musée de l'Air near Paris. I love that aircraft. It was the first airplane powered by a ramjet engine, a unique design built for very high speeds; the SR-71 Blackbird, perhaps the fastest aircraft in the world, used hybrid ramjet-turbojet engines. The first Leduc prototype was built before World War II and destroyed before Germany invaded France, and several more were built after the war. It was a wonderfully creative piece of engineering: beautiful, with an unmistakable shape, every design decision made in pursuit of speed. But making such an aircraft efficient and practical proved too expensive, and the noise of its ramjet was unbearable.

Spectrum: There is a post on your website telling an amusing story about meeting Murray Gell-Mann, the American physicist who won the 1969 Nobel Prize in Physics, many years ago, and how he made a point of "correcting" the pronunciation of your last name. You seemed to be poking a little fun at a brilliant but arrogant scientist. Now that you are quite famous yourself, do you worry about becoming arrogant too?

LeCun: I try to keep a low profile. That matters when you lead a lab and need to let young people exercise their creativity. The creativity of older people comes from what they know; the creativity of young people comes from what they don't know, which lets them explore more widely, and you don't want to extinguish that passion. Talking with doctoral students and young researchers is a very effective antidote to pride and complacency. I don't think I'm arrogant, and Facebook is a very pragmatic company, so we fit together quite well.