This article is the second part of a commemorative piece marking the 120th anniversary of von Neumann's birth. In the first part, the famous mathematician Ulam surveyed von Neumann's work in pure mathematics, especially mathematical logic, set theory, Hilbert space, and operator theory; in this second part, his contributions to theoretical physics, game theory, numerical computing, computer theory, and the Manhattan Project are described. Von Neumann explored such a wide range of fields in depth that one naturally asks: is there a continuous thread running through his research? Watching how he handled practical problems as a problem solver, we may glimpse his deeper goals and ideals, and understand why he became known as a father of the modern computer.

By Stanisław Ulam | Translated by Yuanyuan

Theoretical Physics

Professor Léon Van Hove describes von Neumann's work in theoretical physics in his article "Von Neumann's contributions to quantum theory." In the questionnaire of the National Academy of Sciences mentioned in the first part, von Neumann selected the mathematical foundations of quantum theory and the ergodic theorem as his most important scientific contributions (along with the operator theory discussed above). This choice, or rather restriction, may seem strange to most mathematicians, but it is psychologically interesting. It seems to indicate that one of his main desires and strongest motivations was to re-establish the role of mathematics at the conceptual level of theoretical physics. Since the end of World War I, the separation between abstract mathematical research and the mainstream of thought in theoretical physics has been undeniable. Von Neumann often expressed concern that mathematics might not be able to keep up with the exponential growth of problems and ideas in physics. I remember a conversation in which I raised the worry that a kind of Malthusian¹ divergence might set in: physical science and technology growing in geometric progression while mathematics grew only in arithmetic progression. He said that this might indeed be the case. In later discussions, however, we both clung to the hope that mathematical methods would long retain conceptual control over the exact sciences!

The paper [7]² was written by von Neumann jointly with Hilbert and Lothar Nordheim.³ According to its preface, it was based on a lecture given by Hilbert in the winter of 1926 on new developments in quantum theory, and was prepared with the help of Nordheim. The introduction states that the important mathematical portions and the discussion were due to von Neumann. The stated purpose of the paper was to introduce probabilistic relations in place of the strict functional relations of classical mechanics. It also presented the ideas of Jordan and Dirac in a rather simple and more accessible way. Even now, thirty years later, the historical importance and influence of this paper and of von Neumann's subsequent work in the area can hardly be overestimated. Hilbert's great program of axiomatization found here another important application: the establishment of an isomorphism between a physical theory and the corresponding mathematical system. The introduction states clearly that one can hardly understand a theory unless its formalization and its physical interpretation are clearly and completely separated. This separation was the purpose of the paper, although it was admitted that a complete axiomatization was not yet possible at the time.
We may add here that a complete axiomatization of relativistically invariant quantum theory, with its application to nuclear phenomena, remains to be achieved.⁴ The paper outlines the operator calculus corresponding to physical observables and discusses the properties of Hermitian operators; together these form a prelude to his Mathematical Foundations of Quantum Mechanics. Von Neumann's clear and precise ideas on the role of statistical mechanics in quantum theory and on the problem of measurement are given in his paper [10]⁵. His famous book, Mathematical Foundations of Quantum Mechanics (Mathematische Grundlagen der Quantenmechanik), gives a detailed discussion of the axiomatic treatment, the theory of measurement, and statistics. At least two of its mathematical contributions are important in the history of quantum mechanics.

First, Dirac's mathematical treatment did not always meet the requirements of rigor. For example, it assumed that every self-adjoint operator could be diagonalized, which forced the introduction of Dirac's famous "improper" functions for operators that could not be. As von Neumann observed, it seemed a priori that quantum theory would require a new form of analysis in infinitely many variables, just as Newtonian mechanics had required the creation of the (at the time paradoxical) infinitesimal calculus. Von Neumann's results showed that this was not the case: transformation theory could be put on an exact mathematical basis, not by following Dirac's method in detail, but by developing Hilbert's spectral theory of operators. In particular, this was achieved through his study of unbounded operators, which went beyond the classical theories of Hilbert, Frigyes Riesz, Schmidt, and others.

The second contribution forms the substance of Chapters 5 and 6 of the book. It concerns the problem of measurement and of reversibility in quantum theory. Almost from the beginning, when the ideas of Heisenberg, Schrödinger, Dirac, and Born first achieved their sensational success, the question of the role of indeterminism in the theory was raised, and it was suggested that indeterminism might be explained by postulating "hidden" parameters (latent variables) which, once discovered, would lead back to a more deterministic formulation.⁶ Von Neumann showed that the statistical character of the theory's statements is not due to ignorance of the state of the observer who performs the measurement. The system composed of the observed object and the observer leads to uncertainty relations even if the exact state of the observer is admitted. This was shown to be a consequence of the a priori assumptions about the general association of physical quantities with operators in Hilbert space.

The book presented the ideas of the new quantum theory in a form congenial to mathematicians and technically interesting, and this was certainly a contribution of the first importance. Because it sought to rationalize a theory that physicists had originally built on intuitions not everyone shared, it also had great pedagogical value. One cannot assert that the work introduced new physical ideas bearing on the more puzzling phenomena discovered later; after all, the quantum theory constructed by Schrödinger, Heisenberg, Dirac, and others in those years was still an incomplete theoretical skeleton. But von Neumann at least provided a logically and mathematically clear foundation for its rigorous treatment.
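To make concrete the kind of rigor at stake, here is the standard modern statement of the spectral theorem that replaces Dirac's improper functions (a textbook formulation, not a quotation from von Neumann's book). A self-adjoint operator $A$ on a Hilbert space, bounded or not, admits a projection-valued spectral measure $E$ with

$$A = \int_{\mathbb{R}} \lambda \, dE(\lambda),$$

and for a normalized state $\psi$ the probability that a measurement of the observable $A$ yields a value in a Borel set $S$ is

$$P_{\psi}(A \in S) = \langle \psi, E(S)\,\psi \rangle = \lVert E(S)\,\psi \rVert^{2}.$$

No diagonalization in the naive sense is required: operators with continuous spectrum are handled through their spectral measures, and no delta functions are needed.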
Analysis, Numerical Computing and Fluid Dynamics

In an early paper [33]⁷, von Neumann proved a fundamental lemma of Radó⁸ in the calculus of variations by a simple geometric construction. (The lemma states that a function z = f(x, y) satisfies a Lipschitz condition with constant Δ if no plane of inclination greater than Δ intersects the boundary of the surface defined by the function in three or more points.) The paper is also interesting in that its proof proceeds by direct geometric visualization, something rare in von Neumann's published work.

The paper [41]⁹ was one of the most remarkable achievements in mathematical analysis of the last quarter century. It gave the first exact mathematical result in an entire field: a rigorous treatment of the ergodic hypothesis of statistical mechanics. Von Neumann was stimulated by the discovery of Bernard Koopman¹⁰ that the study of Hamiltonian dynamical systems can be reduced to the study of operators on Hilbert space. Using Koopman's representation, von Neumann proved what is now called the weak ergodic theorem: the means of a function over the iterates of a measure-preserving transformation of a measure space converge in measure. This theorem was soon strengthened by G. D. Birkhoff to convergence almost everywhere, and together these results provided the first rigorous mathematical foundation for classical statistical mechanics. The subsequent development of the field and the many generalizations of these results are well known and need not be described here. Once again the success was due to von Neumann's mastery of techniques inspired by the analytical methods of set theory, combined with his original work on operators on Hilbert spaces.

Here, then, is another area of mathematical physics that modern analysis can study precisely and in a general setting. In this case too, great progress was made at the outset, but the story is certainly not over: as far as classical dynamics is concerned, the mathematical treatment of the foundations of statistical mechanics remains far from complete. It is all very well to have the ergodic theorem and to know that metrically transitive transformations¹¹ exist, but these facts are only the beginning of the subject. Von Neumann often said in conversation that future progress would depend on theorems giving a satisfactory mathematical treatment of the subsequent parts of the subject: the Boltzmann equation awaited a complete mathematical theory, and the rates at which systems approach equilibrium required precise theorems.
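In modern notation, the result of [41] is usually stated as follows (a standard textbook formulation built on Koopman's representation, not a quotation from the paper). Let $T$ be a measure-preserving transformation of a probability space $(X, \mu)$, and let $U$ be the Koopman operator $(Uf)(x) = f(Tx)$, an isometry of $L^{2}(X, \mu)$. Then for every $f \in L^{2}(X, \mu)$,

$$\frac{1}{N} \sum_{n=0}^{N-1} U^{n} f \;\longrightarrow\; Pf \qquad (N \to \infty), \quad \text{in the } L^{2}\text{-norm},$$

where $P$ is the orthogonal projection onto the subspace of invariant functions $\{f : Uf = f\}$. Convergence in norm implies convergence in measure; Birkhoff's theorem strengthens this to convergence almost everywhere.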
Von Neumann's paper [86]¹⁴, which is perhaps less well known than it deserves, shows his growing interest in problems of approximation and numerical work, and it seems to me to have considerable pedagogical value. He studies the properties of finite N × N matrices, that is, the behavior of the space of all linear operators on N-dimensional complex Euclidean space, when N is large. The treatment is straightforward, and the preface states explicitly that asymptotic methods for studying this limiting process have been unjustifiably neglected in comparison with the usual approach of studying the limiting case itself (infinite-dimensional unitary space, i.e., Hilbert space). (This statement is, curiously, almost the opposite of the view expressed in the introduction to his Mathematical Foundations of Quantum Mechanics.) In summary, the paper asks which matrices of order N behave, or approximately behave, like matrices of order m, where m is small compared with N and divides N. The notion of approximate behavior is made exact by a given metric or pseudo-metric on the space of matrices. I would add that the elementary, concrete character of the paper is commendable, and not always evident in his work on Hilbert spaces.

These ideas were continued in a paper written jointly with Valentine Bargmann and Deane Montgomery [91]¹⁵. The paper presented various methods for solving systems of linear equations, and it shows that von Neumann had begun to consider using the electronic machines then becoming available to perform calculations.

The war years created a need for quick estimates and approximate results in applied analytical problems that were often not "clean": mathematically "non-homogeneous" problems involving, in addition to the main physical process to be calculated, many external perturbations whose effects can be neither neglected nor separated off in additional variables. This situation arises frequently in present-day technical problems and forces one to resort to numerical methods, at least initially, not because high accuracy is wanted, but simply to achieve a qualitative analysis! Von Neumann's interest in numerical analysis increased greatly at this time, and he was fully aware of the shift, which may seem somewhat sad to mathematical purists. In a paper written with H. H. Goldstine [94]¹⁶ they studied the numerical inversion of matrices of high order and attempted rigorous error estimates, obtaining interesting results on the accuracy achievable in inverting matrices of order around 150. The estimates were obtained "in the general case"¹⁷: "general" meaning that, under plausible statistical assumptions, the estimates hold except on a set of matrices of small probability.

The need to frame and answer problems of mathematical physics and engineering quickly led to the development of fast electronic computers. As a byproduct, there arose opportunities for more playful work, and to some extent people's curiosity about certain interesting sequences of integers was satisfied. The simplest examples concern the frequencies of the digits among the first tens of thousands of decimal places of e and π (both infinite, non-repeating decimals). One calculation performed on the machine at the Institute for Advanced Study gave the first 2000 partial quotients in the continued fraction expansion of the cube root of 2. Johnny was interested in such experimental work, no matter how simple the problem. In a discussion of these questions at Los Alamos, he asked for "interesting" numbers whose continued fraction expansions might be computed. I proposed a quartic irrationality y given by the equation y = 1/(x + y), where x = 1/(1 + x), in whose expansion some strange patterns might appear. We planned to compute many other numbers, but I do not know whether this little project was ever carried out.
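A computation of this experimental kind is easy to reproduce today. The sketch below is a modern illustration of my own, not a reconstruction of the IAS machine run: it computes partial quotients of the continued fraction of the cube root of 2 using high-precision decimal arithmetic.

```python
# Continued fraction of 2^(1/3), in the spirit of the experiment described
# above. Purely illustrative; precision limits how many quotients are valid.
from decimal import Decimal, getcontext

getcontext().prec = 1200          # working precision in decimal digits

def cbrt2() -> Decimal:
    """Cube root of 2 by Newton's method: x <- (2*x + 2/x^2) / 3."""
    x = Decimal("1.26")
    for _ in range(15):           # quadratic convergence; 15 steps is ample
        x = (2 * x + 2 / (x * x)) / 3
    return x

def partial_quotients(x: Decimal, n: int) -> list[int]:
    """First n partial quotients of the continued fraction of x > 0.
    Each step loses some precision, so n must stay well below prec."""
    quotients = []
    for _ in range(n):
        a = int(x)                # floor, since x stays positive
        quotients.append(a)
        x = 1 / (x - a)
    return quotients

print(partial_quotients(cbrt2(), 40))
# The expansion begins 1, 3, 1, 5, 1, 1, 4, 1, 1, 8, 1, 14, ...
```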
Game Theory

Game theory is now a rapidly developing new area of mathematics, and it was essentially founded by von Neumann. His fundamental work in this field is described in an article by A. W. Tucker and H. W. Kuhn in the same journal issue as this one.¹⁸ Suffice it to say that these studies count among his richest and most influential work.

In 1921, Émile Borel first proposed, in a note in the Comptes Rendus, a mathematical scheme for a game of strategy between two players. The real establishment of the discipline, however, is due to von Neumann's paper [17]¹⁹. It was there that he proved the basic "minimax" theorem and formulated the general scheme of a game between n players (n ≥ 2). Beyond their significance for, and application to, actual games in fields such as economics, these schemes have generated a large number of novel combinatorial problems of purely mathematical interest. The theorem Min Max = Max Min and the corollary on the existence of saddle points for functions of several variables are taken up again in his 1937 paper [72]²⁰ (a precise statement of the theorem is given at the end of this section). There they are proved by means of a generalization of Brouwer's fixed-point theorem, using the following geometric fact: let S and T be two non-empty, convex, closed, bounded sets in Euclidean spaces, and let V and W be closed subsets of the product S × T. Suppose that for each element x of S the set Q(x) = {y : (x, y) ∈ V} is a non-empty convex closed set, and similarly that for each element y of T the set P(y) = {x : (x, y) ∈ W} is a non-empty convex closed set. Then V and W have at least one point in common. This theorem, discussed further by Shizuo Kakutani, John Nash, George W. Brown, and others, plays a central role in proofs of the existence of "good strategies."

Game theory, including the now-current study of infinite games (first developed in Poland by Stanisław Mazur around 1930), is flourishing. One need only look at the work collected in the three volumes of Contributions to the Theory of Games [102; 113; 114]²¹⁻²³ to see the richness of ideas in this field: the variety of ingenious formulations of purely mathematical interest, the growing number of important applications, and the many unsolved problems that are simple to state.
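The minimax theorem referred to above can be stated compactly in modern notation (a standard formulation, not von Neumann's original wording). For a finite two-person zero-sum game with payoff matrix $A \in \mathbb{R}^{m \times n}$, the players' mixed strategies range over the probability simplices $\Delta_m$ and $\Delta_n$, and

$$\max_{x \in \Delta_m} \; \min_{y \in \Delta_n} \; x^{\mathsf{T}} A\, y \;=\; \min_{y \in \Delta_n} \; \max_{x \in \Delta_m} \; x^{\mathsf{T}} A\, y.$$

The common value is the value of the game, and any pair $(x^{*}, y^{*})$ attaining it is a saddle point of the bilinear form, that is, a pair of good strategies: neither player can improve by deviating unilaterally.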
Economics

Oskar Morgenstern and John von Neumann's classic treatise, Theory of Games and Economic Behavior [90]²⁴, presents the theory of games in purely mathematical form, describes in great detail its application to actual games, and introduces approaches to economic behavior and to certain sociological problems, together with a discussion of some fundamental questions of economic theory. The economist Oskar Morgenstern was for many years von Neumann's friend at Princeton. He was interested in all aspects of economic situations, especially the exchange of goods between two or more persons and the problems of monopoly, oligopoly, and free competition. It was in the attempt to mathematize these processes that the theory began to take its present form. Numerous current applications in "operations research," in communication problems, and in the statistical estimation theory of Abraham Wald²⁵ either derive from or draw on the ideas and schemes developed in this monograph. We cannot even sketch the scope of these investigations here; the interested reader may find accounts of the problems in Leonid Hurwicz's²⁶ The theory of economic behavior²⁷ and in Jacob Marschak's²⁸ Neumann's and Morgenstern's new approach to static economics²⁹.

Dynamics, Continuum Mechanics and Meteorological Computing

Two papers written jointly with S. Chandrasekhar [84, 88]³⁰,³¹ consider the following problem: suppose that centers of mass are randomly distributed, as are the many stars of a cluster or a nebula, and that these masses are in motion and attract one another gravitationally. The problem is to explore the statistical consequences of the fluctuations of the gravitational field and to study the motions of individual masses as they are affected by changing local distributions. In the first paper, a clever calculation solves the problem of the rate of fluctuation of the distribution function of gravity, yielding a general formula for the probability distribution W(F, f), where F is the strength of the gravitational field and f = dF/dt is its rate of change. Among the results is the following theorem: for weak fields, the probability of a change in the field at a given instant is independent of the direction and magnitude of the initial field, whereas for strong fields the probability of a change in the direction of the initial field is twice the probability of a change in the perpendicular direction. The second paper gives a statistical analysis of the rate of fluctuation of the gravitational force per unit mass acting on a star moving with velocity V relative to nearby stars. The problem is solved under the assumption that the stars are distributed uniformly (in the Poisson sense) and that the local velocities are spherically distributed; the authors also treat general distributions of differing masses, giving expressions for the gravitational force acting at two very close points, from which the asymptotic behavior of the spatial correlations follows.

Von Neumann had long been interested in the phenomena of turbulence. I remember discussions in 1937 on the possibility of a statistical treatment of the Navier-Stokes equations, analyzing fluid flow by replacing these partial differential equations with an infinite system of ordinary differential equations satisfied by the coefficients of the Fourier expansion of the Lagrangian function. A mimeographed report that von Neumann wrote for the Office of Naval Research in 1949, Recent theories of turbulence, gives a profound and lucid account of the ideas of Lars Onsager and Andrey Kolmogoroff and of other work of the time.

With the onset of World War II, von Neumann investigated the problems posed by the motion of compressible gases, particularly the puzzling phenomena arising from their discontinuous behavior, such as shock waves. His extensive work in this field was largely devoted to problems arising in defense research, and it appeared in the form of reports, some of which are listed in the appendix. (Editor's note: see the original text.) It is not possible to summarize here this rich and varied work, most of which displays his sharp analytical skill and his characteristically clear logic. His contributions to the theory of the interaction and collision of shocks deserve special mention. For one thing, he gave the first rigorous justification of the Chapman-Jouguet hypothesis concerning detonation, i.e., combustion processes initiated by shocks. The first systematic study of the theory of shock reflection is also due to him (Progress report on the theory of shock waves, NDRC, Div. 8, OSRD, No. 1140, 1943; Oblique reflection of shocks, Navy Department, Explosives Research Report No. 12, 1943). As has been said, even a qualitative analysis of the motion of compressible media in two or three dimensions is beyond the power of present-day explicit analysis. Worse still, the mathematical foundations of a theory describing such physical phenomena have perhaps not yet been established.
Von Neumann's view is well expressed in his comments in [108]³²: "It is a rather difficult and ambiguous question whether the solutions one finds by mathematical reasoning actually occur in nature, and whether one can rule out in advance the existence of solutions with certain good or bad features. This question has been studied in both the classical and the more recent literature, but with greatly varying rigor and depth of approach. In short, it is very difficult to be sure of anything in this area. Mathematically we are in a state of continual uncertainty, because the general theorems of existence and uniqueness of solutions that one would like to have have never been proved, and may well be false in their obvious form." And further: "Thus, allowing for discontinuities, requiring reasonable thermodynamic behavior, and so on, there is a wide variety of mathematical possibilities in fluid mechanics. There may exist a set of conditions under which every reasonably stated problem has one and only one solution. But we can only guess what it is, and in searching for it we must rely almost entirely on physical intuition. Hence we can never be very sure of any point; and we can hardly ever say with confidence that a solution obtained is the one that must occur in nature."

If one wants even limited insight into these difficult problems, one must resort to numerical work in special cases. In a series of reports, von Neumann discussed optimal numerical procedures, difference schemes, and the numerical stability of computational schemes. Special mention should be made of his paper with Robert D. Richtmyer [100]³³, in which, sidestepping the detailed treatment of shock conditions and discontinuities, they introduced a purely mathematical, fictitious viscosity. This makes it possible to compute the motion of a shock step by step from the ordinary equations of fluid dynamics, without any explicit assumptions about the motion of the shock itself.
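The idea of [100] is easy to convey in a sketch. The program below is a minimal modern illustration of the artificial-viscosity technique, not the authors' scheme in detail (they use time-centered differences; this sketch uses plain forward Euler on a Sod-type shock tube, with grid, time step, and viscosity coefficient chosen by me):

```python
# 1D Lagrangian gas dynamics with von Neumann-Richtmyer artificial
# viscosity. Staggered mesh: velocities u live on nodes; density rho,
# pressure p, and internal energy e live in cells. Ideal gas, gamma = 1.4.
import numpy as np

gamma, c_q = 1.4, 2.0                    # adiabatic index; viscosity constant
n = 200                                  # number of cells
x = np.linspace(0.0, 1.0, n + 1)         # node positions
rho = np.where(np.arange(n) < n // 2, 1.0, 0.125)   # Sod-like initial data
p   = np.where(np.arange(n) < n // 2, 1.0, 0.1)
e   = p / ((gamma - 1.0) * rho)          # specific internal energy
u   = np.zeros(n + 1)                    # node velocities (walls at ends)
m   = rho * np.diff(x)                   # cell masses, constant in time

def viscosity(rho, du):
    """q = c^2 * rho * (du)^2 on compression, 0 on expansion."""
    return np.where(du < 0.0, c_q**2 * rho * du**2, 0.0)

dt = 2e-4
for _ in range(1000):                    # integrate to t = 0.2
    du = np.diff(u)                      # velocity jump across each cell
    P = p + viscosity(rho, du)           # total pressure incl. viscous term
    m_node = 0.5 * (m[:-1] + m[1:])      # nodal mass: half of adjacent cells
    u[1:-1] -= dt * (P[1:] - P[:-1]) / m_node
    x += dt * u                          # move the Lagrangian nodes
    rho_new = m / np.diff(x)
    e -= P * (1.0 / rho_new - 1.0 / rho) # de = -(p + q) d(1/rho)
    rho = rho_new
    p = (gamma - 1.0) * rho * e

i = np.searchsorted(x, 0.8) - 1          # cell currently containing x = 0.8
print(f"density near x = 0.8 at t = 0.2: {rho[i]:.3f}")
# Should land near the exact Sod value of about 0.266 behind the shock.
```

The viscous term q acts only in cells being compressed, smearing the shock over a few mesh intervals so that the ordinary difference equations can march straight through it.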
The daunting mathematical problems posed by the equations of fluid dynamics that govern the motion of the Earth's atmosphere had fascinated von Neumann for a long time. With the advent of computers, detailed numerical study of at least simplified versions of the problem became possible, and he embarked on a vast program. A meteorological research group was formed at the Institute for Advanced Study in Princeton;³⁴ its plan was to attack numerical weather prediction through a sequence of models approximating ever more closely the true nature of the atmosphere. At present, numerical study of true three-dimensional motion is impractical even on the most advanced electronic computers. (This may no longer be the case, say, five years from now. Editor's note: this article was written in 1958.) The first, highly schematized calculations initiated by von Neumann dealt with two-dimensional models, mostly in the so-called geostrophic approximation. Later, by assuming the interaction of two or three two-dimensional models corresponding to different altitudes or pressure levels, so-called "2½-dimensional" calculations could be performed. The problem weighed heavily on his mind, not only for its intrinsic mathematical interest, but because a successful solution could have enormous technological consequences. He believed that, with the development of computers and of our understanding of the dynamics controlling atmospheric processes, we were approaching the point at which weather forecasting would become feasible. He believed, further, that one could come to understand, calculate, and perhaps eventually carry out processes for controlling and changing the climate. In his paper [120]³⁵ he speculated that in the near future the vast nuclear energy resources already available could be used to produce changes in the atmospheric circulation on the scale of "the great globe itself."³⁶ In such problems, where the underlying physics is already understood, future mathematical analysis may enable humanity to extend enormously its control over nature.

Computer Theory and Practice; the Monte Carlo Method

Von Neumann's interest in numerical work had several sources. One stemmed from his early work on the role of formalism in mathematical logic and set theory; his youthful work dealt extensively with Hilbert's program of treating mathematics as a finite game. Another, equally strong motivation came from his work on problems of mathematical physics, including the purely theoretical study of ergodic theory in classical physics and his contributions to quantum theory. As problems of continuum mechanics arose in fluid dynamics and in nuclear energy technology, more and more practical questions presented themselves directly as computational ones. We have already touched on von Neumann's interest in turbulence, in the general dynamics of continua, and in meteorological calculation. I remember well how, early in the Los Alamos project, it became obvious that analytical work alone was often insufficient to give even qualitative answers, and that numerical work by hand, even with desk calculators, would take an unacceptably long time for many problems. This situation seems to have given von Neumann the final impetus toward his energetic pursuit of computation with electronic devices.

For several years von Neumann had argued that in many problems of fluid dynamics (in the behavior and propagation of shocks, and wherever phenomena described by nonlinear partial differential equations involve large displacements, so that linearization cannot approximate a true description), numerical work was necessary to provide heuristic material for a future theory. This necessity forced him to examine, from its foundations, the problem of computing with electronic machines, and during 1944 and 1945 he worked out the basic methods now in use: the translation of a set of mathematical procedures into the instruction language of a computer. The electronic computers of the time (such as the ENIAC³⁷) lacked the flexibility and generality now available for mathematical problems. Broadly speaking, each problem required a special, different wiring system to make the machine carry out the prescribed operations in the prescribed sequence. Von Neumann's great contribution was to introduce the notions of a "flow diagram" and of a "code": the former makes the connections or circuits of the machine fixed yet quite general; the latter makes this fixed set of connections capable of solving a great variety of problems.
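To convey what a "code" running over fixed connections means in practice, here is a toy stored-program machine (a hypothetical construction of mine, far simpler than any real machine of the period). The interpreter below plays the role of the fixed wiring and never changes; the problem being solved lives entirely in the instruction list held in memory.

```python
# Toy stored-program machine, purely illustrative. The interpreter is the
# "fixed wiring"; the problem being solved is data in memory, i.e. a code.
def run(memory):
    acc, pc = 0, 0                       # accumulator and program counter
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "ADD":
            acc += memory[arg]
        elif op == "JGZ" and acc > 0:    # jump if accumulator > 0
            pc = arg
        elif op == "HALT":
            return memory

# Program: add the integers 10, 9, ..., 1 into cell 101.
memory = {
    0: ("LOAD", 100), 1: ("JGZ", 3), 2: ("HALT", 0),       # stop if counter = 0
    3: ("LOAD", 101), 4: ("ADD", 100), 5: ("STORE", 101),  # total += counter
    6: ("LOAD", 100), 7: ("ADD", 102), 8: ("STORE", 100),  # counter -= 1
    9: ("JGZ", 0), 10: ("HALT", 0),
    100: 10, 101: 0, 102: -1,            # data: counter, total, constant -1
}
print(run(memory)[101])                  # -> 55
```

Replacing the contents of memory with a different instruction list solves a different problem on the same unchanged "hardware," which is precisely the economy the flow-diagram-and-code scheme achieved over rewiring.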
Although one can say with hindsight that the possibility of such an arrangement might have been obvious to a mathematical logician, it was far from easy to implement and operate so general a method with the electronic technology of the time. Even now, a decade after these methods were invented, it is easy to underestimate the enormous possibilities opened up by theoretical experiments of this kind, born of problems in mathematical physics. The field is still so new that prediction seems risky, but a large store of theoretical experiments has already accumulated in many areas, such as fluid dynamics, magnetohydrodynamics, and quantum-theoretic calculations, so that one may hope for a satisfactory synthesis of theory from these computations.

The engineering design of computers owes much to von Neumann. The logical scheme of the machine, the relative role of the memory, the speed of its operation, the selection of the elementary "orders," and the circuits of present machines all bear the stamp of his ideas. Von Neumann personally supervised the construction of the electronic computer at the Institute for Advanced Study in Princeton, in order to familiarize himself with the engineering problems involved and to have this tool in hand for new experiments. Even before the machine was finished (it took longer than expected), he put it to work on certain problems from the Los Alamos laboratory, performing an enormous number of calculations. One of these concerned the course of a thermonuclear reaction and involved more than a billion elementary arithmetic operations and elementary logical commands. The problem asked a "yes or no" question about the propagation of the reaction. The final data were not required to be very precise, but all the intermediate, detailed calculations seemed necessary to reach the answer to the original question. Indeed, guesses about the behavior of individual elements of the problem, combined with hand calculations, can go a long way toward suggesting the final answer; but to gain confidence in such intuitive estimates, a great deal of computational work has to be done. This seems quite common in new problems of mathematical physics and modern technology: in describing these phenomena we do not need astronomical precision, and in some cases one is quite satisfied if the behavior can be predicted "to within 10 percent"; but in the course of the calculation the individual steps must be as accurate as possible. The enormous number of elementary steps raises questions about the reliability of the estimate of the final result, and about the intrinsic stability both of the mathematical method and of its execution on the machine.

When von Neumann received the Atomic Energy Commission's Fermi Award, the citation noted in particular his contributions to the development of electronic machines for computation, useful in so many areas of nuclear science and technology. Electronic computers calculate thousands of times faster than human hands, and this has given rise to many new methods, not only in numerical analysis in the classical sense, but in the very principles of the process of mathematical analysis itself. No one understood these implications better than von Neumann. We may illustrate them with a small example, the so-called Monte Carlo method.
The methods of numerical analysis developed in the past for hand calculation, or even for relay machines, are not necessarily optimal for electronic computers. For example, on such machines it is often more economical to compute the values of elementary functions afresh when needed than to store and consult tables. Again, problems that reduce to the evaluation of integrals or to integral equations can now be attacked by very complex algorithms that could never be carried out by hand but are entirely feasible for the new machines. In the years after World War II, von Neumann invented dozens of computational techniques, for example "subroutines" for computing elementary algebraic or transcendental functions, for solving auxiliary equations, and so on. Some of this work, incidentally, is not yet generally known in the mathematical community, though it is very familiar to those who use computers in industrial or government projects. It includes methods for finding eigenvalues and inverses of matrices, concise methods for locating extrema of functions of several variables, and the generation of random numbers. Much of it shows the combinatorial dexterity typical of his early work in mathematical logic and operator theory, and some of it can fairly be called virtuosic.

The simplicity of mathematical formulation of the principles of mathematical physics that was hoped for in the 19th century is conspicuously missing in modern theories. The discovery of a bewildering variety and rich structure among the elementary particles seems to have postponed the earlier hope of a unified mathematical account. In applied physics and in technology, one must deal with situations that are mathematically mixtures of different systems: for example, a system of particles whose behavior is governed by the equations of mechanics while their interacting electric fields are described by partial differential equations; or, in the study of processes producing neutrons, one must consider, in addition to the neutrons themselves, the hydrodynamic and thermodynamic behavior of the other matter with which they interact. From the combinatorial point of view alone, to say nothing of the analytical difficulties of the partial differential and integral equations, it is clear that there is at present little hope of finding solutions in closed form³⁸. In order to explore the properties of such systems, even qualitatively, one is therefore forced to look for practical procedures. We decided to look for such a method: roughly speaking, the idea is to find a homomorphic image of the given physical problem in a mathematical schema that can be represented by a system of fictitious "particles" processed by an electronic computer. The method is particularly useful for problems involving functions of a large number of independent variables.
To give a very simple concrete example of this Monte Carlo method, consider the problem of estimating the volume of a subregion of an n-dimensional "cube," the subregion being described by a set of inequalities. Instead of the usual approach, which covers the space with a systematic grid of points to approximate the required volume, one selects points in the space at random with uniform probability and determines (on the machine) how many of them belong to the given region. By elementary probability theory, given a sufficient number of sample points, the proportion of points falling in the region will, with probability as close to 1 as desired, approximate the relative volume of the region; a sketch of this calculation follows at the end of this section.

Here is a slightly more complicated example: consider the problem of diffusion in a region of space bounded by surfaces that partly reflect and partly absorb the diffusing particles. If the geometry of the region is complicated, it may be more economical to perform a large number of "physical" random walks than to try to solve the integro-differential equations classically. These walks can be performed conveniently on a machine; the procedure is, so to speak, the reverse of the classical treatment in probability theory, which reduces the study of random walks to the study of differential equations. A further example of the approach: given a set of functional equations, one tries to transform them into equivalent equations admitting a probabilistic or game-theoretic interpretation; the random processes so defined are simulated on a computer, and the resulting distributions give a fair guess at the solutions of the original equations. Going a step further, one hopes to obtain directly a "homomorphic image" of the behavior of the physical system in question. It must be pointed out that in many of the physical problems now studied, the differential equations originally obtained by certain idealizations are no longer sacrosanct, so to speak; at the very least, studying such models of the system directly on a computer can have heuristic value.

At the end of the war and in the years that followed, von Neumann and I (the author of this article) treated a good number of problems in this way. At first, the physical situation itself directly suggested the probabilistic interpretation; later, problems of the third kind mentioned above were studied. The theory of such mathematical models is still very incomplete; in particular, estimates of the fluctuations and of the precision have not yet been developed. Here again von Neumann contributed a great number of ingenious devices, for example for producing sequences of numbers with a given probability distribution by means of suitable games. He also devised probabilistic models for the treatment of the Boltzmann equation, and important stochastic models for certain strictly deterministic problems of fluid dynamics. Most of this work is scattered among laboratory reports or remains in manuscript; one certainly hopes that a systematic account will be published for the mathematical community in the near future.
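Here is a minimal sketch of the volume-estimation example above, assuming only that the region is given by a membership test; the particular region (half of the unit ball) is my hypothetical choice, picked so the answer can be checked.

```python
# Monte Carlo estimate of the volume of a subregion of the n-dimensional
# cube [-1, 1]^n described by inequalities. A modern sketch of the idea.
import random

def in_region(pt):
    # Hypothetical region: the unit ball intersected with the half-space
    # x0 > 0, i.e. sum(x_i^2) <= 1 and x0 > 0.
    return sum(c * c for c in pt) <= 1.0 and pt[0] > 0.0

def mc_volume(n_dim, n_samples, contains, seed=0):
    rng = random.Random(seed)
    hits = sum(
        contains([rng.uniform(-1.0, 1.0) for _ in range(n_dim)])
        for _ in range(n_samples)
    )
    cube_volume = 2.0 ** n_dim
    return cube_volume * hits / n_samples   # hit fraction scales the cube

# Half the volume of the unit 3-ball is (2/3)*pi ~ 2.094; the statistical
# error shrinks like 1/sqrt(n_samples), per the fluctuation estimates
# whose general theory the text says was still undeveloped.
print(mc_volume(3, 200_000, in_region))
```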
Automata Theory and Probabilistic Logic

Professor Claude E. Shannon's article "Von Neumann's contributions to automata theory" gives an account of his work on the theory of automata. This work, like game theory, has stimulated a broad and still-widening stream of research over the past few years, and in my opinion it ranks among his most fruitful ideas. Here his interests in mathematical logic, in computing machines, and in mathematical analysis, combined with his knowledge of problems of mathematical physics, bore fruit in new constructions. The ideas of Alan Turing, Warren McCulloch, and Walter Pitts on the representation of logical propositions by electrical networks or idealized nervous systems inspired him to propose and sketch a general theory of automata, whose concepts and terminology come from several different fields: mathematics, electrical engineering, and neurology. Much is now expected of these studies in mathematics, perhaps beginning, at a much simplified level, with a formalization of the workings of organisms and of the nervous system itself.

Nuclear Energy - Work at Los Alamos

Just before the outbreak of World War II came the discovery of fission: a uranium nucleus that absorbs a neutron can split, releasing energy and more neutrons. Many physicists realized at once that an exponentially growing chain reaction in a sufficient quantity of uranium would release enormous amounts of energy, and they began to discuss how to evaluate the phenomenon quantitatively with a view to exploiting the new source of power. Theoretical physicists formed a smaller and more closely knit group than mathematicians, and results and ideas generally circulated among them faster. Von Neumann's work on the foundations of quantum theory had brought him early into contact with most of the leading physicists; he learned of the new experimental facts and took part from the beginning in their speculations about the enormous technical possibilities latent in fission. Before the war broke out he was already engaged in scientific work on problems of national defense. But it was not until late 1943 that Oppenheimer invited him to visit the Los Alamos laboratory as a consultant, and he joined the work whose ultimate goal was the construction of an atomic bomb.

As is well known, the first self-sustaining nuclear chain reaction was achieved on December 2, 1942, in Chicago, by a group of physicists led by Fermi. They had built a reactor in which uranium was interleaved with a moderator that slowed the neutrons, increasing the probability of their initiating further fissions. The reactor was very large, and the time in which the neutron population multiplied by a factor of e was comparatively long. The aim of the project established at Los Alamos was a very fast reaction in a relatively small amount of the uranium isotope U-235, or of plutonium, leading to an explosive release of enormous energy. In the late spring of 1943 the scientific staff began to assemble, and by the fall of that year a large number of distinguished theoretical and experimental physicists had settled in Los Alamos. When von Neumann arrived, the group was studying various methods of assembling a critical mass of fissile material. For none of these schemes could success be known in advance, and one of the problems was that the assembly had to be accomplished quickly, before a weak or moderate premature explosion could disperse most of the nuclear charge and waste it.

Edward Teller remembered Johnny arriving at Lamy (the railroad station nearest Los Alamos) and being driven in an official car up to "the Hill" (the town of Los Alamos, on a plateau), then a place of high secrecy: "When he arrived, the Coordinating Council was in session. Our leader, Oppenheimer, was reporting on the Ottawa Conference. His talk mentioned many most important persons and equally important decisions, one of which concerned us closely: we could expect a British contingent to arrive shortly. After his talk he asked for questions or comments. The audience, duly impressed, had no questions. Oppenheimer then suggested other topics for discussion. After a second or two, a deep voice (whose source has been lost to history) said: 'When can we get a shoemaker on the Hill?'
Although no scientific questions were discussed with Johnny at the time, he asserted that from that moment on he fully understood the nature of Los Alamos." The atmosphere was lively, informal, and exploratory, more like a university seminar than a technical or engineering laboratory: an abstract style of scientific discussion, so to speak. I remember clearly my surprise, on arriving at Los Alamos, to find an environment reminiscent of a group of mathematicians discussing their abstract conjectures rather than of engineers working on a well-defined practical project; discussions often went on informally late into the night.

Scientifically, a striking feature of the situation was the diversity of the problems, each equally essential to the success of the project. There was the problem of the distribution in space and time of the exponentially growing number of neutrons; equally important were the deposition of the ever-increasing energy released by the fission of the bomb's nuclear charge, the hydrodynamical calculation of the explosion, the distribution of energy in the form of radiation, and, finally, the motion of the surrounding material once the bomb had lost criticality. It was essential to understand all of these problems, which involve very different areas of mathematics. A detailed account of von Neumann's contributions is not possible here; I shall try to point out some of the more important ones.

In early 1944 an implosion method for assembling the fissile material was under consideration: a spherically converging shock would compress the nuclear charge. Von Neumann, Hans Bethe, and Teller were among the first to recognize the advantages of this scheme. Teller told von Neumann of the experimental work of Seth Neddermeyer, and together they worked out the basic consequences of the spherical geometry. Von Neumann concluded that the method could produce extremely great pressures, and in the course of the discussion it became clear that with great pressures comes considerable compression as well. To start the implosion with sufficient symmetry, the surrounding high explosive had to be detonated simultaneously at many points; James Tuck and von Neumann proposed the use of high-explosive lenses to achieve this.

We mentioned earlier von Neumann's ability to communicate with physicists, an ability perhaps rare among mathematicians: he understood their language and could translate it almost instantly into a form familiar to mathematicians, and he could then translate the answers back into the idiom of physics. The first attempts to calculate the motions produced by an implosion were extremely schematic. The equations of state of the materials involved were poorly known; yet even with crude mathematical approximations, the equations obtained were clearly beyond the reach of exact analytical methods. It was obvious that a vast amount of laborious numerical work would be needed to obtain correct quantitative results, and computing machines were a necessary aid. A still more complicated problem was the calculation of the characteristics of the nuclear explosion itself.
The energy released depends on the course of the outward motion, which is constrained by such factors as the rate of energy deposition, the thermodynamic properties of the materials, and the radiation generated at very high temperatures. For the first experiments one had to be content with approximate calculations; as noted above, even the order of magnitude is not easy to estimate without elaborate machine computation. After the war, more accurate calculations were required in order to economize material and make the most efficient use of it, and these were carried out on computers. Von Neumann contributed greatly to the mathematical treatment of the physical problems involved.

During the war, the possibility of thermonuclear reactions had already been considered, at first in discussions and then in preliminary calculations. Von Neumann was an active member of an imaginative group that examined various schemes for realizing such reactions on a large scale. Mathematically, the problems of determining the conditions necessary for such reactions, and of following their course, were even more complex than those of fission explosions (indeed, an understanding of fission explosions was a prerequisite for exploring the thermonuclear problem). During one of our discussions, as we outlined the course of such a computation, von Neumann turned to me and said, "In carrying out these calculations we will probably perform more elementary arithmetic operations than the whole of mankind has performed up to now." We noticed later, however, that the total number of multiplications done by the world's schoolchildren in the course of a few years clearly exceeds the number in our problem!

Space does not permit a list of von Neumann's many smaller technical contributions, which were greatly appreciated by the physicists and engineers of the project. He was extremely good at order-of-magnitude estimates and at algebraic and numerical computation done in his head, without pencil and paper. This ability, perhaps somewhat akin to the talent for playing blindfold chess, invariably impressed the physicists. My impression is that von Neumann did not visualize the physical objects under discussion; rather, he treated their properties as logical consequences of the basic physical assumptions, and he could play this deductive game to perfection!

A notable feature of von Neumann's personal scientific style was his willingness to listen, even to questions of little scientific import, so long as the puzzle offered some combinatorial appeal. This made him popular and much sought after by those who applied mathematics to technology. Many of those who talked with him were genuinely helped, or else consoled by learning that there is no magic in mathematics that would make their problems easy to solve. Von Neumann gave himself unsparingly to these activities, perhaps too numerous and too wide-ranging, in which mathematical insight could be useful (such activities are ever more common in today's technological development), and they placed severe demands on his time. In the years after the end of World War II he found himself torn by conflicting demands at almost every moment.

Von Neumann firmly believed that the technological revolution set off by the release of nuclear energy would bring changes to human society, and to scientific development in particular, more profound than those of any previous technological discovery in history.
He told me that as a very young man he believed nuclear energy would be developed, and would change the order of human activities, within his lifetime; this was one of the few lucky guesses he ever mentioned. He took an active part in the early conception of, and deliberations on, the possibility of controlled thermonuclear reactions. In 1954 he became a member of the Atomic Energy Commission, where he worked on the technical and economic problems of building and operating fission reactors. In this position he also devoted much time to organizing research on computing machines and to making them available to universities and other research centers.

Von Neumann's Mathematical Journey

Von Neumann left so many permanent marks on mathematics that our cursory survey of this part of his work, together with glimpses of his achievements in so many other areas, naturally raises the question: is there a continuous thread running through it all? As Poincaré put it: "There are problems that we set ourselves, and there are problems that set themselves." Now, fifty years after the great French mathematician drew this somewhat vague distinction, the division among mathematical problems has become more sharply visible. The objects mathematicians consider are more often than not their own free creations, frequently, so to speak, special generalizations of earlier constructions. Such theories are sometimes inspired originally by physical pictures; others evolve from free mathematical creation, in some cases anticipating the actual patterns of physical relations. Von Neumann's thought was plainly influenced by both tendencies. His desire was to keep the pyramiding constructions of mathematics as close as possible to the growing complexity of physics and the other sciences, a connection that has become increasingly elusive.

Some of the great mathematicians of the 18th century, Euler above all, succeeded in bringing the description of many natural phenomena within the domain of mathematical analysis. Von Neumann's work attempted to win a similar role for the mathematics that has grown out of set theory and modern algebra; today, of course, this is a far more difficult task. For much of the 19th century, the infinitesimal calculus (an early name for the calculus) and the subsequent growth of mathematical analysis held out the prospect not merely of cataloguing the contents of the Pandora's box opened by the discoveries of physics, but of genuinely understanding them. That hope is now illusory, if only because the real-number system of Euclidean space can no longer claim to be the only, or even the best, mathematical foundation for physical theories, algebraically or even merely topologically. The physical thought of the 19th century, dominated mathematically by differential and integral equations and by the theory of analytic functions, no longer suffices. The new quantum theory demands, on its analytical side, more general constructions from set theory, its very concepts involving probability distributions and infinite-dimensional function spaces; on the algebraic side it calls for the study of combinatorial and algebraic structures more general than those represented by the real or complex numbers alone. Cantor's set theory, and the whole complex of ideas developed by Hilbert, Weyl, Noether, Artin, and Brauer, supply means for understanding this mathematics, and it was from these that von Neumann's work grew.
Another source of inspiration for the growth of general mathematics is a new kind of combinatorial analysis arising from recent fundamental research in the biological sciences, where the present lack of general methods is even more striking. These problems are essentially nonlinear and of an extremely complex combinatorial character. It seems that many years of experimental and heuristic work will be needed before one can hope for the insight required for a decisive, comprehensive theory. It was with this in mind that von Neumann devoted much of his energy during his last decade to the study and construction of computing machines, and laid out the first outlines of a theory of automata.

Looking back over von Neumann's work, so manifold in its branches and so vast in extent, one may say, as Hilbert did: "One may ask whether mathematical science will not in the end suffer the fate long since reached by other sciences, splitting into separate isolated parts whose representatives can scarcely understand one another and whose connections grow ever fewer? I do not believe it, nor do I wish it; mathematical science is an indivisible whole, an organism whose vitality lies precisely in the inseparability of its parts. However diverse our branches may be in detail, we are still struck by the sameness of the logical processes, by the kinship of ideas across the whole of the science, and by the innumerable analogies between its different fields…"³⁹ Von Neumann's work was precisely a contribution to this ideal of the universality and organic unity of mathematics.

(Editor's note: the final part of the original article lists some of von Neumann's honors and positions, together with the bibliography of his papers compiled by Ulam. Readers may consult the original if needed.)

Notes

1. Translator's note: Thomas Robert Malthus (1766-1834), a British clergyman, demographer, and political economist, world-famous for his theory of population.
2. [7] Über die Grundlagen der Quantenmechanik. With D. Hilbert and L. Nordheim. Math. Ann. vol. 98 (1927) pp. 1-30.
3. Translator's note: Lothar Wolfgang Nordheim (1899-1985), a German-American physicist who contributed to quantum theory, nuclear physics, and particle physics.
4. For an excellent and concise summary of the present state of the axiomatization of the non-relativistic quantum theory of atomic phenomena, see George Mackey's article Quantum mechanics and Hilbert space, Amer. Math. Monthly, October 1957, still based largely on von Neumann's Mathematical Foundations of Quantum Mechanics.
5. Wahrscheinlichkeitstheoretischer Aufbau der Quantenmechanik, Nachr. Ges. Wiss. Göttingen (1927) pp. 245-272.
6. It is impossible to summarize the mathematical arguments here. The great majority of physicists still accept von Neumann's conclusion. This is not to say that theories differing from the present mathematical formulation of quantum mechanics could not admit hidden variables. For a recent discussion, see the Colston Papers (vol. 9), the proceedings of the Ninth Symposium of the Colston Research Society held at the University of Bristol, April 1-4, 1957, with contributions by David Bohm, Léon Rosenfeld, and others.
7. [33] Über einen Hilfssatz der Variationsrechnung, Abh. Math. Sem. Hansischen Univ. vol. 8 (1930) pp. 28-31.
8. Translator's note: Tibor Radó (1895-1965), a Hungarian mathematician, famous for solving the Plateau problem.
9. [41] Proof of the quasi-ergodic hypothesis, Proc. Nat. Acad. Sci. USA vol. 18 (1932) pp. 70-82.
10. Translator's note: Bernard Osgood Koopman (1900-1981), a French-American mathematician known for his foundational work in ergodic theory, probability theory, statistical theory, and operations research. He was a founding member and sixth president of the Operations Research Society of America.
11. Translator's note: Metric transitivity.
12. [56] On compact solutions of operational-differential equations. I. With S. Bochner. Ann. of Math. vol. 36 (1935) pp. 255-291.
13. [80] Fourier integrals and metric geometry. With I. J. Schoenberg. Trans. Amer. Math. Soc. vol. 50 (1941) pp. 226-251.
14. [86] Approximative properties of matrices of high finite order, Portugaliae Mathematica vol. 3 (1942) pp. 1-62.
15. [91] Solution of linear systems of high order. With V. Bargmann and D. Montgomery. Report prepared for Navy BuOrd under Contract Nord-9596-25, Oct. 1946, 85 pp.
16. [94] Numerical inverting of matrices of high order. With H. H. Goldstine. Bull. Amer. Math. Soc. vol. 53 (1947) pp. 1021-1099.
17. [109] Numerical inverting of matrices of high order, II. With H. H. Goldstine. Proc. Amer. Math. Soc. vol. 2 (1951) pp. 188-202.
18. Kuhn, H. W., & Tucker, A. W. (1958). John von Neumann's work in the theory of games and mathematical economics. Bulletin of the American Mathematical Society, 64(3), 100-123. doi:10.1090/s0002-9904-1958-10209-8
19. [17] Zur Theorie der Gesellschaftsspiele, Math. Ann. vol. 100 (1928) pp. 295-320.
20. [72] Über ein ökonomisches Gleichungssystem und eine Verallgemeinerung des Brouwerschen Fixpunktsatzes, Erg. eines Math. Coll., Vienna, edited by K. Menger, vol. 8, 1937, pp. 73-83.
21. [102] Solutions of games by differential equations. With G. W. Brown. "Contributions to the Theory of Games," Ann. of Math. Studies, no. 24, Princeton University Press, 1950, pp. 73-79.
22. [113] A certain zero-sum two-person game equivalent to the optimal assignment problem. "Contributions to the Theory of Games," Vol. II, Ann. of Math. Studies, no. 28, Princeton University Press, 1953, pp. 5-12.
23. [114] Two variants of poker. With D. B. Gillies and J. P. Mayberry. "Contributions to the Theory of Games," Vol. II, Ann. of Math. Studies, no. 28, Princeton University Press, 1953, pp. 13-50.
24. [90] Theory of games and economic behavior. With O. Morgenstern. Princeton University Press (1944, 1947, 1953) 625 pp.
25. Translator's note: Abraham Wald (1902-1950), a Romanian-American statistician, known among other things for his analysis of survivorship bias in the study of aircraft damage during World War II.
26. Translator's note: Leonid Hurwicz (1917-2008), winner of the 2007 Nobel Memorial Prize in Economics, a pioneer of mechanism design theory.
27. American Economic Review vol. 35 (1945) pp. 909-925.
28. Translator's note: Jacob Marschak (1898-1977), an economist and one of the founders of information economics in the West. His 1959 paper "Remarks on the Economics of Information" is often taken to mark the birth of the field.
29. Journal of Political Economy vol. 54 (1946) pp. 97-115.
30. [84] The statistics of the gravitational field arising from a random distribution of stars, I. With S. Chandrasekhar. The Astrophysical Journal vol. 95 (1942) pp. 489-531.
31. [88] The statistics of the gravitational field arising from a random distribution of stars, II. The speed of fluctuations; dynamical friction; spatial correlations. With S. Chandrasekhar. The Astrophysical Journal vol. 97 (1943) pp. 1-27.
32. [108] Discussion of the existence and uniqueness or multiplicity of solutions of the aerodynamical equations (Chapter 10 of Problems of Cosmical Aerodynamics), Proceedings of the Symposium on the Motion of Gaseous Masses of Cosmical Dimensions held at Paris, August 16-19, 1949. Central Air Doc. Office, 1951, pp. 75-84.
33. [100] A method for the numerical calculation of hydrodynamic shocks. With R. D. Richtmyer. Journal of Applied Physics vol. 21 (1950) pp. 232-237.
34. Jule Charney worked closely with him on meteorological problems. See [104] Numerical integration of the barotropic vorticity equation. With J. G. Charney and R. Fjortoft. Tellus 2 (1950) pp. 237-254.
35. [120] Can we survive technology?, Fortune, June 1955.
36. Translator's note: The quotation borrows the phrase "the great globe itself" from Shakespeare's The Tempest.
37. Translator's note: ENIAC (Electronic Numerical Integrator And Computer) came into operation in the United States on February 14, 1946. It was the second electronic computer, after the ABC (Atanasoff-Berry Computer), and the first general-purpose one: a fully electronic machine that could be programmed to solve a variety of computing problems.
38. Translator's note: Closed-form solutions.
39. Hilbert: Problèmes futurs des Mathématiques, Comptes-Rendus, 2ème Congrès International de Mathématiques, Paris, 1900.

This article is translated, under the Creative Commons License (CC BY-NC 4.0), from S. Ulam, John von Neumann 1903-1957, Bull. Amer. Math. Soc. 64 (1958), 1-49. Original link: https://www.ams.org/journals/bull/1958-64-03/S0002-9904-1958-10189-5/S0002-9904-1958-10189-5.pdf