Leviathan Press: In ancient Greek, paidia (play) and paideia (education, the cultivation and shaping of mind and spirit) are cognates, which suggests that education can be entertaining. "Edutainment" works at all because play is human nature, and that nature dovetails with two other instincts. One is the drive to develop collectively: humans have such a strong appetite for games that the technical demands of modern games push the entire computer industry forward. The other is the instinct to explore the unknown, which is why games attract so many scientific researchers. Instinct is not the whole story, of course. So why do scientists love games?

When we think of scientific experiments, we picture dust-free laboratories, white lab coats and stern-looking scholars. The Big Bell Test launched in 2016, however, does not look very "serious" at all. At heart it is a video game played by more than 100,000 people around the world, and it set out to check something that sounds simple: whether the world really is as crazy as quantum mechanics describes.

The gameplay is simple. Players log in on a computer or phone and control the game by tapping buttons, the faster the better; or they guess the colour of a hidden ball, which is pure luck. Beat a certain score and you move on to the next level; there are six levels in all.

© Gfycat

The graphics are childlike and the gameplay suits all ages. Each tap produces a "0" or a "1", so a full game yields a binary sequence. These huge sequences were uploaded to 12 laboratories around the world (including the laboratory of Academician Pan Jianwei at the University of Science and Technology of China), where the research teams used the collected data in their experimental tests of quantum mechanics. (Game link: museum.thebigbelltest.org/#/contribute?l=EN)

How can such a simple web game help scientists test quantum theory? You may have heard of quantum entanglement: a pair of entangled particles appear to influence each other instantly no matter how far apart they are, as if information were travelling far faster than light. It is very strange.
© Quanta/NIST

In Einstein's theory of relativity, however, information cannot travel faster than light, and this was his main reason for doubting quantum theory. Einstein believed there had to be some unknown, deeper mechanism (hidden variables), still consistent with the logic of classical physics, behind these seemingly faster-than-light correlations; quantum theory looks weird, he argued, only because we do not yet understand it well enough. His opponents, such as the great physicist Niels Bohr, held that quantum mechanics simply is that weird. As John Stewart Bell later put it, the determinism of classical physics is fundamentally unable to describe quantum mechanics. And so the quarrel began.

Bell eventually proposed an experimental method that turned the great dispute of the century between Einstein and Bohr from a philosophical debate into a disagreement that experiments could settle. Unfortunately, the results of experiment after experiment over the years brought nothing but disappointment for Einstein, who by then had already passed away.

John Stewart Bell. © ria
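Bell's proposal boils down to measuring correlations. One standard formulation of his test, the CHSH inequality, captures what is at stake; the summary below is a textbook sketch included here for orientation, not a detail of the Big Bell Test's own protocol.

```latex
% CHSH form of Bell's test (standard textbook formulation).
% Two distant stations each choose one of two measurement settings
% (a or a' on one side, b or b' on the other); every measurement
% yields +1 or -1, and E(x,y) is the average product of the outcomes.
\[
  S = E(a,b) + E(a,b') + E(a',b) - E(a',b')
\]
% Any local hidden-variable theory of the kind Einstein hoped for obeys
\[
  |S| \le 2,
\]
% whereas quantum mechanics allows entangled particles to reach
\[
  |S| \le 2\sqrt{2} \approx 2.83 .
\]
% Observing S > 2 rules out local hidden variables, provided the settings
% a, a', b, b' are chosen freely and independently of the particles.
```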
The reason the Big Bell Test of 2016 had to take the form of a video game is that the researchers needed human free will (the button presses in the game) to generate a huge supply of random numbers, in order to close the so-called "freedom-of-choice loophole". The loophole is this: there might be some unknown mechanism, reaching back to the distant past, that correlates the entangled particles with the devices used to test them. If such a mechanism exists, random numbers produced by machines or dice would not be truly random, because those physical systems could themselves be correlated with the particles; everything in the world looks independent, but nobody knows whether things share a common source somewhere in the past. Running the experiment with such numbers would be like letting students write their own exam papers; it would render the test meaningless. Only choices made by humans out of free will can rule out this kind of interference. Of course, the whole argument presupposes that humans really do have something called "free will", but that is another topic.

So what was the result of the Big Bell Test? Even with human free will as the source of randomness, the quantum world is just as weird as ever. Einstein was wrong again.

Scientists have long had their eye on the games industry, and the Big Bell Test is far from the only research project dressed up as a game. The video game Foldit, for example, asks players to work out how proteins fold, using the human brain's natural talent for matching three-dimensional shapes in place of computation that would otherwise demand enormous computing power. Foldit players cracked the structure of a protein that had stumped scientists for more than a decade, a result that may help the development of anti-AIDS drugs.

Foldit game screen. © Gfycat

The game "EyeWire" asks players to trace the neurons in the retina, in effect drawing a wiring map of nervous tissue with the players' help. A model of the mouse retina comprising about 700 neurons has already been completed, which will help researchers probe further into the mysteries of the brain.

© eyewirers blog

There is also a game called "Quantum Moves", in which players must slosh a pool of liquid-like matter into place to clear each level; high-scoring moves help scientists refine how they manipulate matter in the laboratory.

Examples like these are countless. These purpose-driven video games have all the usual trappings of games: eye-catching graphics, challenges that deliver a sense of accomplishment, and rewards that keep people playing. But they can also pull off feats that only games can.

In 2009, when the British Parliament was confronted with evidence of widespread misuse of public funds, public pressure forced it to disclose detailed expense records. The more than one million documents it released, however, were not only uncategorised but consisted largely of scanned copies and images, data that computers could not readily digest.
© uptown

Parliament assumed that this would be enough to put people off: faced with such a mountain of data, who would bother? The Guardian, however, had an idea: "crowdsource" the enormous workload to the public. They had a simple online game built in which anyone could visit the site, claim a batch of files and work through them, complete with a thoughtfully designed leaderboard of player contributions. The game's name was equally direct: "Investigate Your MP's Expenses". In less than three months the review was finished, well ahead of target. The public joined in so enthusiastically that some players even worked out the gap between the travel fuel costs MPs had claimed and what the journeys should actually have cost. A workload that could bury an accountant became light work once gamers shared it out. Computing power, courtesy of the people.

Computing power as big as the sky

What these games want most from their players, you will notice, is computing power. In gaming more generally, computing power means the data-processing capability of the device, which is determined chiefly by its CPU (central processing unit) and GPU (graphics processing unit). Modern games have an insatiable appetite for high-density data processing; that appetite not only demands rapid change from the semiconductor industry but steers the direction the industry takes. Whoever has the most powerful chips leads the field. In semiconductors, computing power is king. And because modern scientific research constantly runs into heavy-duty computing problems of its own, even scientists have their eyes on this prize.

Just how formidable can a game console's computing power be? Take the PS2, which still holds the global sales record for a console (over 150 million units). When Sony prepared to expand from Japan into overseas markets, the PS2 was restricted from export under Japan's Foreign Exchange and Foreign Trade Law, which explicitly requires special permission to export any product priced above 50,000 yen that could be put to military use.
Players passing the overnight queue for the PS2 launch by playing a PS1. © Reddit

What military use could a game console possibly have? At the time, the PS2's vector-graphics performance reached 3 million vectors per second, well beyond the graphics hardware in comparable consumer products of the period, and in principle enough to be used in simulating and developing missile guidance systems; it was rumoured that Iraq had at one point done exactly that. For a game console, that is an honour of a peculiar kind.

Video games of the industry's early, formative years, however, made no extravagant demands on hardware. In that age of "material scarcity", you could count the pixels on the screen with the naked eye. The original Super Mario released in 1985 (yes, the one where you eat mushrooms at the start, beat a turtle at the end, and finally slide down the flagpole to save the princess) weighs in at only 40 KB. By comparison, the Switch flagship "The Legend of Zelda: Breath of the Wild" comes to 13.4 GB, room for roughly 335,000 copies of the original Mario.

© Gfycat

In the early days of gaming, nobody expected much from the graphics. But as the computer industry raced ahead, graphics mattered more and more to games, and the computation involved began to call for dedicated, high-throughput processing. The graphics processing unit (GPU), a chip specialised for exactly that, was born. In August 1999 NVIDIA launched what it billed as the world's first true GPU, which became a second major computing unit in the computer, independent of the CPU. That was the beginning of a new story.

So what is the difference between a GPU and a CPU? To use a slightly crude analogy, the CPU is like a master butcher: it knows exactly where to slide the blade, moving back and forth and cutting deep, and its speciality is intricate logical work. The GPU is like the meat grinder in a meat-processing plant: backbone, neck, hock or pork belly, it all goes in and comes out as great lumps of mince, which makes it especially good at high-throughput data tasks.
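To make the butcher-versus-grinder contrast concrete, here is a minimal C sketch of the shape of workload a GPU devours (the blending function, buffer sizes and values are invented for illustration): one simple operation applied independently to a million elements. A CPU grinds through the loop element by element; a GPU would hand each element to one of thousands of lightweight threads and chew through them in parallel.

```c
#include <stdlib.h>

/* Blend two pixel buffers: out[i] = a*x[i] + (1-a)*y[i].
 * Every element is independent of every other, which is exactly the
 * shape of task a GPU thrives on: there, each i would be handled by
 * its own lightweight thread instead of this sequential loop. */
void blend(float a, const float *x, const float *y, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {        /* a CPU does these one by one */
        out[i] = a * x[i] + (1.0f - a) * y[i];
    }
}

int main(void)
{
    size_t n = 1 << 20;                      /* about a million pixels */
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    float *o = malloc(n * sizeof *o);
    if (!x || !y || !o) return 1;
    for (size_t i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 0.0f; }
    blend(0.5f, x, y, o, n);                 /* logic-light, throughput-heavy */
    free(x); free(y); free(o);
    return 0;
}
```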
At the end of the century, the glory belonged to the NVIDIA RIVA TNT2 graphics card. © wiki

Today's GPUs keep growing more powerful and have long since overtaken conventional CPUs in raw computing throughput. Confronted with a mountain of data to process, a GPU only ever has one thing to say: "Leave it to me."

In the early years, Andrew Ng and the Google Brain team spent a million dollars assembling 1,000 computers, a formidable line-up of 16,000 CPU cores, before finally getting an artificial intelligence to pick out a cat from 10 million pictures. After switching from CPUs to GPUs, the same task could be handled comfortably with just 16 computers and 64 GPUs. Those fearsome GPUs came from Nvidia, the miracle factory founded in Silicon Valley by Taiwan-born entrepreneur Jensen Huang (Huang Renxun).
The cat picture that took 16,000 CPU cores to find. © JIM WILSON/NYTIMES

An industry joke runs: first came Jensen, then came Heaven; with a graphics card in hand you can take on the gods. At the start of the new century, while Intel and AMD were still slugging it out over clock speeds, Huang placed a bold bet: inside the company he quietly launched a project called CUDA, hoping to open up a new world of chips and rule it.

It is no exaggeration to say that players' ever-growing appetite for better graphics directly drove the development of the GPU. From monochrome to colour, from 2D to 3D, from early plane-shooting games that crawled along a pixel at a time to the silkiness of 240 frames per second: where there is demand, there is a market. But Huang's ambitions were clearly bigger. The GPU, he believed, should not be merely a graphics chip; in the coming era of voracious computing demand, a general-purpose parallel computing architecture was the way forward. Not long after CUDA launched, Andrew Ng used it to find that cat, which is how the story above came about. As for Nvidia, the company Huang built, its main business now is marrying the GPU's enormous computing resources to the equally enormous data-processing needs of artificial intelligence. A large share of AI start-ups have chosen Nvidia's hardware platform, which has kept Nvidia's share price sky-high for a long time.

"The Language of God"

When Notre-Dame de Paris caught fire two years ago, it was rumoured that the game "Assassin's Creed" contained a detailed model of the cathedral's exterior that could help with rebuilding the ancient church.
Notre-Dame de Paris in the game "Assassin's Creed". © pinterest

Notre-Dame has not yet been restored, and how much the game will actually help the reconstruction remains unknown. But rumours like this show, in their own sideways fashion, that the public has long regarded the level of detail in modern games as a remarkable achievement, and that we have real expectations for what GPUs can do outside gaming.

Modern blockbuster games make almost unreasonable demands on real-time performance and image fidelity. That those demands can be met at all is due not only to the astonishing pace of hardware development but also to generations of programmers who sacrificed their hairlines to the cause. Still, rice alone does not make a meal: without the ingenious algorithms of the early game developers putting all that computing power to proper use, even the best hardware would just sit there admiring itself. In the binary world, algorithms are the language of God, and programmers are the gods.

And in any conversation about game algorithms, one figure cannot be avoided: the prodigy known as the "Mozart of programming", John D. Carmack II.
John Carmack. © giphy

What did this man do? He created Wolfenstein 3D, widely regarded as the game that established the first-person shooter (FPS), pulling players down from the detached "God's-eye view" of earlier games and putting them behind a character's own eyes. To today's gamers that may not sound like much, but at the time few people dared imagine how a convincing 3D scene could be conjured out of hardware built for 2D. It was a leap across dimensions.

How did Carmack pull it off? In short, Wolfenstein 3D used a 2D imaging technique called ray casting. Casting rays outward through the flat map not only lets the computer work out whether a player's bullet hits an enemy or a wall, it also stops the machine from wasting effort on anything the player cannot see, since nothing beyond the first wall a ray strikes ever needs to be drawn. A modest amount of computing power is therefore enough to render a pseudo-3D scene with a convincing sense of perspective.
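The trick is easier to see in a sketch than in prose. The toy C program below is only an illustration: the map, screen size and step length are invented, and real Wolfenstein-era code used integer and fixed-point arithmetic plus a smarter grid-stepping method rather than this brute-force march. One ray is cast per screen column across a small 2D map, and the distance to the first wall a ray hits determines how tall a wall slice to draw in that column, which is the entire pseudo-3D illusion.

```c
#include <math.h>
#include <stdio.h>

#define PI       3.14159265358979
#define MAP_W    8
#define MAP_H    8
#define SCREEN_W 40            /* columns of a pretend screen */
#define FOV      (PI / 3.0)    /* 60-degree field of view     */

/* 1 = wall, 0 = empty floor; the border is solid, so every ray stops. */
static const int map[MAP_H][MAP_W] = {
    {1,1,1,1,1,1,1,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,1,0,1},
    {1,0,0,0,0,1,0,1},
    {1,0,1,1,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,0,0,0,0,0,0,1},
    {1,1,1,1,1,1,1,1},
};

int main(void)
{
    double px = 3.5, py = 3.5;   /* player position      */
    double pa = 0.0;             /* player viewing angle */

    for (int col = 0; col < SCREEN_W; col++) {
        /* one ray per screen column, spread across the field of view */
        double angle = pa - FOV / 2 + FOV * col / (double)SCREEN_W;
        double dist  = 0.0;

        /* march the ray forward in small steps until it hits a wall */
        while (dist < 16.0) {
            dist += 0.01;
            int mx = (int)(px + cos(angle) * dist);
            int my = (int)(py + sin(angle) * dist);
            if (map[my][mx]) break;          /* first wall: stop here */
        }

        /* nearer walls produce taller slices: the whole 3D illusion */
        int slice_height = (int)(10.0 / dist);
        printf("column %2d: wall at distance %.2f, slice height %d\n",
               col, dist, slice_height);
    }
    return 0;
}
```

Nothing behind the first wall hit is ever visited, which is exactly why the method wastes no effort on what the player cannot see.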
© agenda digitale

Games more familiar to Chinese players, such as "Half-Life" and "Medal of Honor", and even the recent hit "Overwatch", all grew out of the algorithmic groundwork Carmack laid. And ray casting itself is no longer confined to gaming: it is widely used in magnetic resonance imaging and remote sensing, helping to locate lesions inside the human body and to map the world's terrain, a textbook example of a gaming algorithm benefiting people who never play games. Carmack did not invent the algorithm, but he was undoubtedly its most important promoter.

Carmack is also closely associated with two other contributions to computer graphics: binary space partitioning and the fast inverse square root, often nicknamed "Carmack's inversion". Binary space partitioning sounds forbidding, but it is essentially a way of strengthening the sense of depth in a pseudo-3D game. The technique sorts the objects of a scene into a tree according to their spatial relationships, so the renderer always knows what lies in front of what. The picture is no longer shaded as one uniform whole; near and far surfaces can be treated differently, which greatly deepens the game's sense of space.
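A minimal sketch of the data structure in C, with the scene, the splitting lines and the printf stand-in for drawing all invented for illustration: each node splits space along a line, and walking the tree relative to the camera's position visits surfaces strictly back to front, so nearer walls are drawn later, over the farther ones, and can be shaded differently.

```c
#include <stdio.h>

/* A wall segment, reduced to a label for this sketch. */
typedef struct { const char *name; } Wall;

/* One BSP node: a splitting line x = split (vertical) or y = split
 * (horizontal), a wall lying on that line, and the two half-spaces. */
typedef struct Node {
    int   vertical;      /* 1: split on x = split, 0: split on y = split */
    float split;
    Wall  wall;
    struct Node *front;  /* half-space with coordinate >= split */
    struct Node *back;   /* half-space with coordinate <  split */
} Node;

/* Paint back-to-front relative to the camera: first the half the camera
 * is NOT in, then the wall on the splitter, then the camera's own half.
 * Whatever is drawn last ends up "on top", i.e. nearest to the viewer. */
static void draw_back_to_front(const Node *n, float cx, float cy)
{
    if (!n) return;
    float cam = n->vertical ? cx : cy;
    const Node *far_side  = (cam >= n->split) ? n->back  : n->front;
    const Node *near_side = (cam >= n->split) ? n->front : n->back;

    draw_back_to_front(far_side, cx, cy);
    printf("draw %s\n", n->wall.name);   /* stand-in for real rendering */
    draw_back_to_front(near_side, cx, cy);
}

int main(void)
{
    /* A tiny hand-built tree: one vertical splitter, a room on each side. */
    Node left  = {0, 2.0f, {"north wall of left room"},  NULL, NULL};
    Node right = {0, 6.0f, {"north wall of right room"}, NULL, NULL};
    Node root  = {1, 4.0f, {"long corridor wall"}, &right, &left};

    /* Camera standing at (1, 1): everything is emitted far-to-near. */
    draw_back_to_front(&root, 1.0f, 1.0f);
    return 0;
}
```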
© wiki

Carmack first applied the technique in the game "Doom". Scenes close to the player are rendered brighter, while distant ones fade into darkness, producing a far more immersive sense of three dimensions than "Wolfenstein 3D" could manage.
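The brightness falloff itself can be illustrated with a toy formula; Doom's actual implementation used precomputed colormap tables rather than a continuous function like the one below, so treat this only as a sketch of the idea.

```c
#include <stdio.h>

/* Toy depth cue: scale a base light level down as distance grows.
 * Doom achieved a similar effect with precomputed colormap tables;
 * this continuous falloff is purely illustrative. */
static int shade(int base_light, double distance)
{
    double factor = 1.0 / (1.0 + 0.15 * distance);   /* farther = dimmer */
    int level = (int)(base_light * factor);
    return level < 0 ? 0 : level;
}

int main(void)
{
    for (int d = 1; d <= 10; d++)
        printf("wall at distance %2d -> light level %d/255\n", d, shade(255, d));
    return 0;
}
```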
Screenshot of Wolfenstein 3D. © fraps

"Carmack's inversion" is an algorithm for rapidly computing the inverse of a square root, a quantity a renderer needs constantly when it normalises vectors to work out lighting angles, reflections and projections; the trick slashes the computing cost of those calculations. Who first invented it is still unclear (Carmack has explicitly denied being its author), but it drew public attention through its use in the "Quake" series, and "Counter-Strike", a household name among Chinese players, was built on an engine descended from Quake's.
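The routine itself is only a few lines long. The sketch below is a lightly cleaned-up version of the famous snippet from the published Quake III source: the function name and the memcpy-based bit reinterpretation are my adaptation (the original used a pointer cast), while the "magic" constant 0x5f3759df and the single Newton-Raphson step are as in that source. It approximates 1/sqrt(x) without ever calling the slow square-root routine.

```c
#include <math.h>
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Fast approximation of 1/sqrt(x), in the spirit of the Quake III routine. */
float fast_inv_sqrt(float x)
{
    float    half = 0.5f * x;
    uint32_t bits;

    memcpy(&bits, &x, sizeof bits);        /* reinterpret the float's bits   */
    bits = 0x5f3759df - (bits >> 1);       /* the famous "magic" first guess */
    memcpy(&x, &bits, sizeof x);

    x = x * (1.5f - half * x * x);         /* one Newton-Raphson refinement  */
    return x;
}

int main(void)
{
    float v = 9.0f;
    printf("fast:  %f\n", fast_inv_sqrt(v));   /* about 0.3329 */
    printf("exact: %f\n", 1.0f / sqrtf(v));    /* 0.3333...    */
    return 0;
}
```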
In 1999, Time magazine named Carmack one of the 50 most influential people in technology, placing him 10th. A game programmer to the core, Carmack's innovations pushed forward not only computer graphics but, through them, a surprising amount of the wider world.

As game algorithms grow more powerful, the virtual worlds that game engines create become ever more realistic and seductive. The most powerful game engine today is Epic's Unreal Engine. Insiders call Unreal Engine 5 a landmark for two main reasons. The first is Nanite, a new rendering technology that can display film-quality, ultra-high-precision models directly in the game in real time, making it genuinely hard to tell by eye whether a scene is constructed or real: a simulation of space.

© reddit

The second is Lumen, a dynamic global illumination technology that lets light and shadow in the game shift in step with the scene's light sources (morning turning to dusk, say) and with the player's moving viewpoint, making the game world believable in time as well.

With capabilities like these, the Unreal Engine's applications naturally reach far beyond gaming. More than half of the shots in last year's hit series "The Mandalorian" were produced with the Unreal Engine's help, and NASA has been studying how to use it to build simulated space environments for training astronauts on the ground.

Work harder than humans

© tweaktown

Unreal Engine 5 can spit out billions of tiny triangles per second to build a thoroughly deceptive simulated world. That is a victory for algorithms and computing power, and it is also the essential charm of games: a simulated environment you can interact with.

As simulated environments, games offer artificial intelligence a practically endless supply of trial-and-error opportunities and variables to learn from. Games have been a testbed and proving ground for AI since the field began, and they still are. In February this year, MIT announced that it is teaching AI to play games in order to train it to handle uncertainty: eating mushrooms in Super Mario, fighting pigs in Angry Birds. The team at Uber even set an artificial driver loose on the simulated streets of Grand Theft Auto V, including the game's version of Sunset Boulevard, with the driving data fed back into real-world autonomous-driving research. Whether the AI in "GTA5" will one day yank open an NPC's car door in order to steal the car remains to be seen.

Artificial intelligence has been tied to games since its birth. At the Canadian National Exhibition in Toronto in 1950, a four-metre-tall machine with a huge display drew crowds of visitors. It played tic-tac-toe and was called Bertie the Brain: a human and the computer behind the screen took turns making moves on a large panel of small light bulbs. Crude as the display was, it is regarded as one of artificial intelligence's earliest public appearances.

From that ancient game of tic-tac-toe, to the concept of "machine learning" first coined by IBM engineer Arthur Samuel for his checkers program, to AlphaGo defeating the best human Go players, the growth of artificial intelligence inside games has been astonishing, and the whole journey has taken only about 70 years.

On stage you could see only Ke Jie's tears after his defeat; what you could not see was how hard AlphaGo had to work behind the scenes. How hard? When AlphaGo played Lee Sedol it ran on 1,920 CPUs and 280 GPUs, with an average electricity bill of about $3,000 per game. By the time it faced Ke Jie, AlphaGo had abandoned the traditional CPU+GPU architecture altogether and migrated to the Tensor Processing Unit, a processor platform designed specifically for machine learning and inference. Ke Jie's tears were tears of personal frustration, but they were also, in their way, tears of joy over how far human science and technology had come.

* * *

The new version, AlphaGo Zero, released in 2017, learned by playing against itself and surpassed every previous version within just 40 days. Huang retains an almost religious faith in the potential of graphics computing: "The world has not yet fully realized that everything we do is fundamental to humanity." As for Carmack, he joined Oculus VR as chief technology officer back in 2013 and headed for the next battlefield. Where is that battlefield? Some say it is a cyber world in which artificial intelligence carries on human work; some say it is a metaverse built on NFT technology; some say it is a new future in which human-machine interfaces connect everyone.
We cannot know which of these predictions will come true. But one thing is certain: without the insatiable appetite of gamers, the computer industry would never have become what it is today. From the computer industry to the world at large, everything is being reshaped by games.

By Kokyo
Proofreading/Rabbit's Light Footsteps

This article is published by Kokyo on Leviathan under a Creative Commons (BY-NC) licence. It reflects only the author's views and does not necessarily represent the position of Leviathan.