The 2023 Kyoto Prize was awarded to the American mathematical physicist Elliott H. Lieb. Through his work in many-body physics, Lieb laid the mathematical foundations for research across physics, chemistry, and quantum information science, and he also made important contributions to mathematical analysis. The Kyoto Prize committee called him one of the intellectual giants of mathematical science. In a public lecture after receiving the award, he spoke about his research in physics and mathematics over more than half a century (the photos in this article are from Lieb's lecture).

Speech | Elliott H. Lieb
Compiled by Ye Lingyuan

I was born in Boston in 1932, but I grew up in New York City, where my worldview was formed. My family was middle class, and New York City provided an excellent free public education. I discovered that I loved to build things and took part in amateur radio; I am proudest of having learned Morse code well and earning an amateur radio license with the call sign W2ZHS, which let me contact operators all over the world. These hobbies owed much to my cousin's influence, and I thought they would lead me down the path of electrical engineering.

When I was 17, my family moved back to Boston. I was extremely fortunate to be recommended and encouraged by the famous physicist Victor Weisskopf to attend the Massachusetts Institute of Technology (MIT).

Lieb spent his undergraduate years at MIT (1949-1953)

Soon after I entered MIT in 1949, my first physics class changed my mind. Matthew Sands, later a co-author of the popular Feynman Lectures on Physics, introduced me to the intellectual beauty of Newtonian physics. At first I had a hard time grasping the material: my high school coursework had not prepared me for a deep understanding of the laws of physics, such as what Newton's equation really means. It took me a while to figure it out.
It means exactly what it says: force equals mass times acceleration. To find the acceleration of an object, you must first know the force and the mass. The values of these quantities differ from one situation to another, but the underlying principle is always the same. With Matthew Sands' patient help, I spent half the course absorbing the significance of this equation, and my scientific career was off to a good start. Newton's contemporaries must have struggled with similar difficulties. I gave up on becoming an engineer and switched to pure physics for my undergraduate studies.

As an undergraduate at MIT, I was lucky enough to get a part-time job in the laboratory that was developing an early linear particle accelerator. The accelerators in operation today are enormous, but they were very small back then. The builders of this 17-million-electron-volt machine, Isaac Halpern and Peter Demos, were among the most inspiring people I have ever met, and they had a great influence on my undergraduate years.

Bates Linear Accelerator

In 1949, physics was not as well known to the general public as chemistry. My father feared that my sudden decision to pursue physics would doom me to poverty, but that was not the case: I was lucky to catch the wave of government funding for the natural sciences after World War II and made a decent living. Mathematics played a role in my studies, though not a large one. I was fortunate to learn advanced linear algebra at MIT from Isadore Singer, famous for the Atiyah-Singer index theorem, and we became good friends.

After graduating from MIT, I wanted to see the world; up to that point I had hardly been anywhere except a few large American cities. Under Professor Weisskopf's guidance, I wrote my senior thesis on topics related to relativity.
He believed that the Department of Mathematical Physics at the University of Birmingham in the UK, with Professor Rudolf Peierls and two lecturers, Sam Edwards and Gerry Brown, was one of the best places in Europe to do theoretical physics. Moreover, they all spoke English, which mattered to me because I had never studied a foreign language. John Bell was my classmate there; he later discovered an inequality of fundamental importance for quantum information.

University of Birmingham, UK (1953-1956)

In those years I did get my wish and traveled through most of Europe. I spent three years in Birmingham, wrote a mediocre doctoral thesis, and received my doctorate in 1956. The next stop was Kyoto: my first job after the doctorate was in Japan. Why Kyoto? My uncle, who owned an art bookstore in Boston specializing in Japanese art, had gotten me interested in ukiyo-e. In Birmingham I also had the good fortune to share an office with a Japanese nuclear physicist named Shiro Yoshida. I was assigned to help him improve his English, which I did, and in return he taught me some spoken Japanese. He did not teach me kanji; I learned only by listening, so I remained illiterate in Japanese.

After receiving his doctorate, Lieb came to Kyoto (1956-1957)

The U.S. Fulbright Program provided me with funding for a year at Kyoto University's Research Institute for Fundamental Physics (now the Yukawa Institute for Theoretical Physics, YITP), also known as Yukawa Hall. That year had a profound impact on me, both culturally and scientifically. Until then, I had always doubted whether I could make a valuable contribution to science. In Kyoto, I think I did. After leaving Kyoto, it would be another four years before I managed to do so again.

Research Institute for Fundamental Physics, Kyoto University

At the institute I met Kazuo Yamazaki, a brilliant young Japanese physicist.
I developed a close collaboration with him, and together we attacked a very challenging model, the polaron, which describes the motion of an electron in a crystal. This was a hot topic at the time. We decided to go beyond calculations based on physical intuition and to treat the ground state of the polaron model with mathematical rigor. We showed that the energy of the polaron is in fact finite, in other words, that the ground state exists. Other physicists, such as Feynman, regarded this as self-evident, even though in other, similar physical models the ground state does not exist. With this, the study of polarons, and our lives, entered a new chapter.

Polaron

It was this experience in Kyoto that convinced me I had the ability to do scientific research. Two years later, I met Feynman himself at Cornell University, and he asked me what my interests were. I proudly told him about the work I had done with Kazuo Yamazaki in Kyoto, and he responded rather aggressively: "Real physicists don't do research like that!" In his eyes, I was wasting my time as a young scholar. This dismissal only made me more determined to pursue mathematical physics and to believe in its significance for physics.

University of Illinois (1957-1958; left) and Cornell University (1958-1960)

After leaving Kyoto, I spent a year at the University of Illinois and then two years at Cornell University, working under Hans Bethe, the Nobel laureate who explained the nuclear reactions that power the sun. Yet I accomplished nothing during those three years, which made me worry about my future as a mathematical physicist. This period did, however, lead me to the problem that has stayed with me for a lifetime: the study of Bose gases, especially their lowest energy states. Bose gases, named after the Indian physicist Satyendra Nath Bose, have special quantum properties.
So I had worked for two years at a top university, under the best physicists, and left in 1960 with nothing but a problem to think about. That problem lingered in my mind for thirty-six years: it was not until 1996 that I solved it with Jakob Yngvason, a result which sparked the current interest in Bose gases in mathematical physics.

After Cornell, I went to the IBM research center in Yorktown Heights, New York. It was 1960, the year the center was founded, and it was my first permanent position, although I stayed only three years. I was lucky to work with two colleagues of about my age, Ted Schultz and Dan Mattis. We were three physicists who wanted to give rigorous mathematical proofs of certain accepted theories. This interest lay outside the scope of everything else in an industrial laboratory, and we were grateful that IBM gave us the freedom to pursue it.

The decade from 1960 to 1970 was a brilliant one for physics worldwide, and several important theorems date from this period. One of them is the Lieb-Schultz-Mattis theorem, which states that one-dimensional matter can never be magnetized: a single chain of atoms cannot produce magnetism; at least two dimensions are required. Most theoretical physicists at the time, including the famous German physicist Heisenberg, my doctoral supervisor Professor Peierls at Birmingham, and my supervisor Bethe at Cornell, had imagined exactly the opposite; they believed magnetization would certainly occur in one-dimensional systems. We proved that it never happens. It took some effort to convince these colleagues that our conclusion was correct, but Peierls eventually accepted the mathematical proof. This was one of the earliest mathematical proofs in quantum mechanics to attract widespread attention, and we later built several theorems on it.

Reprint of Mathematical Physics in One Dimension (1966)
During my second year at IBM, I went to Sierra Leone in West Africa and spent a year teaching applied mathematics at a university in Freetown, the capital. There was a great deal of sociopolitical upheaval, and there was also an outbreak of malaria; if you have never had malaria, I can tell you it is a very nasty disease.

Lieb spent a sabbatical year teaching at Fourah Bay College in Sierra Leone

That is partly why I had time there to think about scientific problems. It was then that I invented the model of one-dimensional bosons, which I later solved with Werner Liniger after returning to IBM. Today this model plays a fundamental role in understanding the quantum many-body problem. Although the work concerned a one-dimensional model of a chain of atoms, it was later confirmed by experiments.

After two years at Yeshiva University in New York, I returned to Boston as a professor at Northeastern University. There, with Professor Fa-Yueh Wu, I co-authored the most cited paper in the history of Physical Review Letters (PRL), solving the one-dimensional Hubbard model. It still holds the record for the journal's most cited article.

Most cited papers in PRL

While at Northeastern, I turned my interest to something else: ice. When water cools, it freezes, but ice is not simple. What does ice have to do with mathematics? Linus Pauling made a very important observation: the entropy of ice can be estimated by counting the possible arrangements of its water molecules. As we know, a water molecule consists of two hydrogen atoms and one oxygen atom. Experiments have shown that the entropy of ice does not drop to zero at absolute zero; this measurement is one of the most exquisite experiments in the history of physics. In other words, ice carries a residual entropy that never disappears.
This means that the arrangement of the hydrogen atoms among the oxygen atoms, that is, the orientation of the water molecules, retains significant randomness. One way to describe ice is as a lattice of arrows. In this model, each lattice point represents an oxygen atom, and each arrow represents the position of a hydrogen atom, which always sits between two oxygen atoms and can be closer to one or the other. As I said, even at absolute zero ice retains this variability, and one must count the total number of allowed arrangements. That is what I set out to do; the model itself, by the way, was invented by Pauling. When the oxygen atoms sit on a regular lattice, as in the figure below, the entropy of the ice equals the logarithm of the total number of ways the hydrogen atoms can be arranged. For ice to form, each vertex must have exactly two arrows pointing toward it and two pointing away from it.

Two-dimensional ice model | Image source: Vadim Gorin

Calculating the entropy of ice is therefore equivalent to counting the regular arrangements of arrows in this diagram. This led me to what became a new branch of combinatorics, the six-vertex problem: at each vertex, the rule "two arrows in, two arrows out" allows six possible local arrangements, and one must count the global configurations that satisfy this rule at every vertex. This has grown into a whole subfield of combinatorics. My contribution was to find the total number of such arrangements, but many open problems in this field remain unsolved.

The next few years were a highlight of my collaboration with Joel Lebowitz. We proved that the thermodynamic limit exists for systems with Coulomb forces. This theorem, together with Freeman Dyson and Andrew Lenard's result that charged matter has a lower bound on its energy, established the "stability of matter."
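The counting problem behind the ice entropy can be made concrete on a tiny lattice. The sketch below is my own illustration (it is not Lieb's transfer-matrix solution, and the function name is hypothetical): it enumerates every arrow configuration on a small periodic square lattice and keeps those obeying the ice rule, two arrows in and two out at each vertex.

```python
# Brute-force count of six-vertex ("square ice") configurations on an
# L x L periodic lattice.  Feasible only for tiny L: there are 2^(2*L*L)
# arrow configurations in total.
from itertools import product

def count_ice(L):
    """Count arrow configurations obeying the ice rule at every vertex.

    h[i][j] = +1 means the horizontal edge from vertex (i, j) to
    (i, j+1) points away from (i, j); v[i][j] = +1 means the vertical
    edge from (i, j) to (i+1, j) points away from (i, j).
    """
    count = 0
    for bits in product((1, -1), repeat=2 * L * L):
        h = [list(bits[i * L:(i + 1) * L]) for i in range(L)]
        v = [list(bits[L * L + i * L:L * L + (i + 1) * L]) for i in range(L)]
        if all(
            # number of arrows pointing INTO vertex (i, j):
            (h[i][(j - 1) % L] == 1)     # edge arriving from the left
            + (h[i][j] == -1)            # own right edge, reversed
            + (v[(i - 1) % L][j] == 1)   # edge arriving from above
            + (v[i][j] == -1)            # own lower edge, reversed
            == 2
            for i in range(L) for j in range(L)
        ):
            count += 1
    return count

if __name__ == "__main__":
    n = count_ice(2)
    # Entropy per vertex is log(W) with W = n**(1/L**2).  Lieb's exact
    # infinite-lattice result is W = (4/3)**1.5 ~ 1.5396; a 2x2 torus is
    # far from the thermodynamic limit, so its W comes out larger.
    print(n, n ** 0.25, (4 / 3) ** 1.5)
```

The brute force scales hopelessly with L, which is precisely why Lieb's exact solution of the thermodynamic limit was remarkable.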
Let me explain this concept a little. An atom, as in the picture below, has a nucleus with electrons orbiting it; the electrons carry a total charge equal in magnitude and opposite in sign to that of the nucleus. Macroscopic matter is formed from many such atoms combined.

The Stability of Matter

The question is: why is this arrangement of nuclei and electrons stable? Macroscopic matter is essentially an enormous collection of nuclei and electrons, yet nothing seems to hold them tightly together, so why is it so stable? You can knock on it and it will not fall apart, even though it is made of the stuff in the picture above. Physicists slowly became aware of this problem, and we decided to solve it. The picture is crude, but not fundamentally wrong: atoms do attract one another, but the attraction is very weak, and they nevertheless keep their integrity. It took decades to settle this mystery mathematically, and I made some contributions to its solution, which was the joint effort of many scholars, including Dyson and Lenard, mentioned above, and my colleague Walter Thirring.

Then, in 1973, Mary Beth Ruskai and I proved the strong subadditivity of quantum entropy. From a mathematical point of view, this result is one of the cornerstones of quantum information theory. The proof required a great deal of mathematical analysis, and it opened my period of work in pure functional analysis. Other work of mine at this time included another set of analytic inequalities, now known as the Brascamp-Lieb inequalities, which have found very wide applications. Herm Jan Brascamp was a young Dutch mathematician and physicist with whom I worked at the time.

In 1975, I accepted a position at Princeton University, joining its mathematics and physics departments.
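Strong subadditivity states that for any tripartite quantum state, S(ABC) + S(B) ≤ S(AB) + S(BC), where S is the von Neumann entropy of the corresponding reduced state. The snippet below (my own illustration; the helper names are hypothetical) spot-checks the inequality numerically on a random three-qubit mixed state; it illustrates the statement of the Lieb-Ruskai theorem, it is not a proof.

```python
# Numerical spot-check of strong subadditivity (SSA) of von Neumann
# entropy, S(ABC) + S(B) <= S(AB) + S(BC), on a random 3-qubit state.
import numpy as np

def entropy(rho):
    """Von Neumann entropy S(rho) = -Tr(rho log rho), in nats."""
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]          # convention: 0 * log 0 = 0
    return float(-np.sum(eigs * np.log(eigs)))

def partial_trace(rho, keep):
    """Reduce a 3-qubit density matrix to the qubits listed in `keep`."""
    t = rho.reshape(2, 2, 2, 2, 2, 2)  # axes: (rowA,rowB,rowC,colA,colB,colC)
    # Trace out discarded qubits from highest index down, so the lower
    # axis indices stay valid after each contraction.
    for q in sorted(set(range(3)) - set(keep), reverse=True):
        t = np.trace(t, axis1=q, axis2=q + t.ndim // 2)
    d = 2 ** len(keep)
    return t.reshape(d, d)

def random_density_matrix(dim, seed=0):
    """rho = G G† / Tr(G G†): Hermitian, positive semidefinite, trace one."""
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

rho = random_density_matrix(8)
lhs = entropy(rho) + entropy(partial_trace(rho, [1]))
rhs = entropy(partial_trace(rho, [0, 1])) + entropy(partial_trace(rho, [1, 2]))
print(lhs <= rhs + 1e-9)  # SSA guarantees True for every quantum state
```

Any seed gives the same verdict, since the theorem holds for all states; changing the seed merely changes by how much the inequality is slack.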
In the same year, I struck up a friendship with Walter Thirring of the University of Vienna, one of the world's most renowned mathematical physicists. The Dyson-Lenard proof of the stability of matter mentioned earlier was actually quite complicated, and we believed there should be a simpler proof that did not require so many pages of calculation and that would give better stability estimates. We were very successful in the end, and invented a whole new class of mathematical inequalities, which now bear our names: the Lieb-Thirring inequalities.

Let me mention a few later results. One of the more interesting is the so-called Lieb-Oxford bound. Working with Steve Oxford, I found a bound no one had even suspected existed, concerning the exchange energy in solids. I will not explain it in detail; just think of it as part of the energy that keeps a solid stable. Can this energy be estimated, and how large can it be? We obtained an estimate, and it was unexpected.

In 1979, I was lucky enough to be in Kyoto again on sabbatical, together with my wife Christiane Fellbaum, who is sitting in the audience. We experienced many exciting things, but perhaps the most memorable involved the trams. On Imadegawa Street we witnessed the final run of the Kyoto tram. It was a momentous occasion, and many people were there. I remember clearly the tram sliding along the track and coming to a stop: the last tram ended its run in front of us.

Lieb returned to Kyoto (1978-1979) and witnessed the closure of Japan's first tram line

The work I mentioned on the exchange-energy bound in solids actually originated during this time in Kyoto; later, at Princeton, Oxford and I improved the bound to its current value. Another equally influential paper also has a Japanese connection: the AKLT model of electron spins.
A stands for Ian Affleck, K for Tom Kennedy, and T for Hal Tasaki, my postdoc in 1987, who is here today (L stands for Lieb). The AKLT model was one of the first models in condensed matter physics shown to have an energy gap between the lowest energy state and the next one. Few materials have this property: normally the energy of matter varies continuously from the bottom of the spectrum upward, but here there is a gap, a feature that plays an important role in the electronic devices everyone uses today.

The last thing I want to mention is my research with Jakob Yngvason on the meaning of entropy in thermodynamics. Entropy is one of the oldest concepts in thermodynamics, dating back to the field's beginnings in the nineteenth century. But what exactly is entropy? Does it have a meaning beyond being a physical quantity that can be (indirectly) measured? Does it have a meaning beyond Boltzmann's picture of atoms and molecules jumping around and colliding? Is entropy just the motion of particles? The answer is no: entropy has a more general meaning, and it now appears in many different fields, such as computer science. We found that entropy is an indicator of which transitions between states are possible. That is what entropy is. We explained what entropy really means in a way that is completely independent of any physical model: entropy tells you what is possible and what is not, and the criterion is whether the entropy of the initial state is less than the entropy of the final state. Entropy quantifies the universal law that although matter can in principle change from one state to another, in most cases the change can go in only one direction, and that direction is determined by a single simple function, the entropy. This provides a completely new perspective on understanding entropy.
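The ordering principle described above can be illustrated with a toy example. In the sketch below (my own illustration, not taken from the lecture; the scenario and function names are hypothetical), entropy decides which transitions of an ideal gas are adiabatically possible: free expansion raises the entropy and is allowed, while the reverse is not.

```python
# Toy illustration of the Lieb-Yngvason view of the second law: state X
# can be transformed into state Y adiabatically iff S(X) <= S(Y).
from math import log

R = 8.314  # molar gas constant, J/(mol*K)

def ideal_gas_entropy(n, T, V):
    """Textbook monatomic ideal-gas entropy, up to an additive constant:
    S = n * (Cv * ln T + R * ln V), with Cv = 3R/2."""
    Cv = 1.5 * R
    return n * (Cv * log(T) + R * log(V))

def adiabatically_accessible(x, y):
    """Lieb-Yngvason criterion: X -> Y is possible iff S(X) <= S(Y)."""
    return ideal_gas_entropy(*x) <= ideal_gas_entropy(*y) + 1e-12

# Free expansion of 1 mol at 300 K from 10 L to 20 L (no heat, no work;
# temperature of an ideal gas is unchanged).  Entropy rises by R*ln 2.
before = (1.0, 300.0, 0.010)   # (moles, kelvin, cubic metres)
after = (1.0, 300.0, 0.020)

print(adiabatically_accessible(before, after))  # True: expansion allowed
print(adiabatically_accessible(after, before))  # False: cannot be undone adiabatically
```

The point of the Lieb-Yngvason framework is that this ordering of states can be axiomatized without any such explicit formula; the formula here is only a stand-in to make the comparison computable.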
In this talk I have mentioned several areas of mathematics, physics, and mathematical physics in which I have been privileged to work. I have had the great honor of collaborating with many outstanding colleagues from many countries, especially in Kyoto, Japan, and of receiving so much support and encouragement along the way. Despite the doubts early in my career, I persisted. I humbly thank the Inamori Foundation for awarding me the Kyoto Prize and for this opportunity to share my life and work. Thank you.

This article is translated from Elliott H. Lieb, "My Journey Through Physics and Mathematics," under a Creative Commons license (CC BY-NC).