【Editor's Note】At the end of last year, OpenAI "hastily" launched its phenomenal product ChatGPT, which unexpectedly set off a technological explosion unprecedented since the Internet entered public life. Suddenly, the Turing test seemed to become history, search engines seemed on the verge of "extinction," academic papers began to look unreliable, no job was safe, and no scientific problem seemed settled. OpenAI, Sam Altman and ChatGPT instantly became some of the hottest search terms of this era, and almost everyone went crazy for them. So, do you know the growth story of Sam Altman and OpenAI? Recently, well-known technology journalist Steven Levy published a long article in the American digital publication WIRED, focusing on Sam Altman and taking an in-depth look at OpenAI's history and corporate vision. The core content is as follows: As OpenAI's CEO, Sam Altman is a visionary/doer type, like a younger version of Elon Musk, and the first person people consult about how AI will usher in its golden age, render humans irrelevant, or worse. Sam Altman and OpenAI's mission is to build safe AGI, and OpenAI's employees are fanatical about this goal. OpenAI's leaders vow to build computers that are smart enough and safe enough to bring humanity into an era of unimaginable abundance. Sam Altman and his team are now under pressure to deliver a revolution in every product cycle, satisfying the commercial needs of investors while staying ahead of fierce competition. At the same time, they shoulder a quasi-messianic mission to enhance humanity rather than destroy it. OpenAI's early funding came from Elon Musk, but Altman and other members of OpenAI's brain trust made it clear that they had no interest in becoming part of Elon Musk's universe, and Musk cut off contact. Later, OpenAI received support from Microsoft and gradually became a for-profit organization, which disgusted some employees and led to the departure of several executives, who said that OpenAI had become too commercial and had fallen victim to mission drift. Sam Altman agrees in principle with the idea of an international body to oversee AI, but he does think some of the proposed rules pose unfair barriers. Still, he and other leaders of OpenAI signed their names to a statement that reads: Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Figure | From left to right: OpenAI Chief Scientist Ilya Sutskever, OpenAI CEO Sam Altman, OpenAI CTO Mira Murati and OpenAI President Greg Brockman (Source: WIRED)

Academic Headlines has made a simple translation without changing the main idea of the original text. The content is as follows:

As the star and his entourage stumbled into a waiting Mercedes van, an energy bordering on Beatlemania filled the air. They had just emerged from one event and were heading to another, and then another, where a frenzy of people awaited them. They zipped through the streets of London, from Holborn to Bloomsbury, like a journey through the past and present of civilization. The history-making power carried inside this van had captured the world's attention. Everyone, from the students waiting in line to the Prime Minister, wanted something from it. Inside the luxury van, devouring a salad, is Sam Altman, a 38-year-old entrepreneur and co-founder of OpenAI, along with a PR guy, a security expert, and myself.
Altman, wearing a blue suit and a collarless pink dress shirt, is being driven around London, looking a little melancholic, partway through a month-long global jaunt that will take him to 25 cities on six continents. With no time to sit down for lunch, he devours his vegetables while thinking about a meeting he had the night before with French President Emmanuel Macron, who is very interested in AI. So is the Prime Minister of Poland. So is the Prime Minister of Spain. Riding in the car with Altman, I can almost hear the sonorous, slurred chord that opens "A Hard Day's Night"—the introduction to the future. When OpenAI launched its monster product, ChatGPT, last November, it set off a technological explosion unprecedented since the internet entered our lives. Suddenly, the Turing test was history, search engines were endangered, and no university paper could be trusted. No job was safe. No scientific problem was set in stone. Altman wasn't involved in the research, neural network training, or interface coding for ChatGPT or GPT-4. But as CEO — a dreamer/doer type who's like a younger version of co-founder Elon Musk, without the baggage — his photo has been used in news article after news article as a visual symbol of humanity's new challenge. At least, in the stories that aren't headlined by eye-popping images generated by OpenAI's visual AI product, Dall-E. He's the prophet of the moment, the first person people consult about how AI will usher in its golden age, or render humans irrelevant, or worse.

On a sunny day in May, Altman's van whisked him to four events. The first was a private "round table" with people from government, academia, and industry. Organized at the last minute, it was held on the second floor of a Somers Town coffee shop. Under the piercing portrait of the brewer Charles Wells, Altman fielded nearly the same questions from everyone in the audience: Will AI kill us? Can it be regulated? He answered them in detail, glancing at his phone from time to time. After that, he held a fireside chat with 600 members of the Oxford Guild at the plush Londoner Hotel. Then he headed to a basement conference room to answer more technical questions from about 100 entrepreneurs and engineers. Now, he was almost late for his afternoon onstage talk at University College London. He and his team parked in a loading dock and were led through a series of winding corridors. As they walked, the host hurriedly told Altman the questions he would ask. When Altman suddenly appeared on the stage, the academics, geeks and journalists in the audience went wild. Altman is not a publicity enthusiast by nature. I once spoke to him shortly after The New Yorker ran a lengthy profile of him. "There's been so much written about me," he said. But at University College, after the formal event, he walked into the crowd that was surging toward the stage. His assistants tried to get between him and the crowd, but he shook them off. He answered question after question, each time staring intently into his interlocutor's face, as if he were hearing the question for the first time. Everyone wanted a picture. After 20 minutes, he finally let his team pull him out. Then he went to meet with British Prime Minister Rishi Sunak. Maybe one day, when robots write our history, they'll point to Altman's world tour as a milestone in the year when everyone, all at once, began their own personal reckoning.
Or maybe whoever writes the history of this moment will see it as the time when a quietly convincing CEO with a paradigm-breaking technology tried to inject a very peculiar worldview into the global intellectual landscape—from an unmarked four-story headquarters in San Francisco's Mission District to the entire world. To Altman and company, ChatGPT and GPT-4 are just stepping stones to a simple but monumental mission, one these technologists may as well have branded on their flesh. That mission is to build artificial general intelligence (AGI), a concept that has so far been grounded more in science fiction than in science, and to make it safe for humans. The people at OpenAI are fanatical in their pursuit of this goal. (Though any conversation in the office cafe will confirm that "building AGI" seems to excite the researchers more than "making it safe.") These are people who don't shy away from the term "superintelligence." They believe that AI is on a trajectory that will surpass anything biology has ever achieved. The company's financial documents even provide for an exit contingency in case AI destroys our entire economic system.

It's unfair to call OpenAI a cult, but when I asked several of the company's top executives whether someone could work there without believing that AGI is really coming, and that its arrival will mark one of the greatest moments in human history, most of them said no. Why would someone work there if they didn't believe it? Their assumption is that the employees, now about 500 people, have self-selected into being believers. At the very least, as Altman puts it, once you're hired it seems inevitable that you'll be drawn into the spell.

Meanwhile, OpenAI is no longer the company it once was. It was founded as a purely nonprofit research organization, but now, technically, most of its employees work for a for-profit entity said to be valued at nearly $30 billion. Altman and his team are now under pressure to deliver a revolution with every product cycle, satisfying the commercial demands of investors while staying ahead of fierce competition. All the while, they carry a quasi-messianic mission to enhance humanity rather than destroy it. The pressure is crushing. The Beatles unleashed a huge wave of change, but it lasted only so long: six years after striking that memorable chord, they were no longer even a band. The maelstrom unleashed by OpenAI will almost certainly be bigger. But OpenAI's leaders vow to stay the course. All they have to do, they say, is build computers smart enough and safe enough to end history as we know it and usher in an era of unimaginable abundance.

Altman grew up in the late 1980s and early 1990s as a nerd obsessed with science fiction and Star Wars. In the worlds constructed by early science fiction writers, humans often lived with, or competed against, superintelligent AI systems. The idea of computers matching or exceeding human capabilities thrilled Altman, who could barely reach the keyboard but kept coding anyway. When he was 8 years old, his parents bought him a Macintosh LC II. One night he was up late playing with it when a thought suddenly popped into his mind: "One day this computer will learn to think." When he arrived at Stanford University as an undergraduate in 2003, he hoped to help make that happen and took a course in AI. But "it just didn't work," he later said. At the time, the field was still mired in an innovation slump known as the "AI winter."
Altman dropped out and entered the startup world; his company, Loopt, was in the first small batch of companies at Y Combinator, which later became the world's most famous incubator. In February 2014, YC founder Paul Graham chose Altman, then 28, to succeed him. "He's one of the smartest people I know, and he probably understands startups better than anyone I know, including myself," Graham wrote in the announcement. But to Altman, YC was more than just a launchpad for companies. "We're not about startups," he told me shortly after taking the helm. "We're about innovation, because we believe that only innovation can create a better future for everyone." To Altman, the point of cashing out from all those unicorns was not to fill the wallets of his partners but to fund species-level change. He set up a research division in the hope of funding ambitious projects to solve the world's biggest problems. But in his view, AI was the innovation that would disrupt everything: a superintelligence that could solve human problems better than humans can.

Fortunately, just as Altman took on his new job, AI's winter was turning into a fruitful spring. Computers were performing amazing feats through deep learning and neural networks, such as labeling photos, translating text, and optimizing complex advertising networks. These advances convinced him that, for the first time, AGI was truly within reach. Leaving it in the hands of large companies worried him, though. He believed those companies would be too focused on their own products to seize the opportunity to develop AGI as quickly as possible. And if they did create AGI, they might recklessly release it to the public without taking the necessary precautions.

At the time, Altman had been considering a run for governor of California. But he realized that he was perfectly capable of doing something bigger—leading a company that would transform humanity itself. "AGI will only be built once," he told me in 2021. "And there aren't a lot of people who could run OpenAI well. I've been lucky that a series of experiences in my life really prepared me for this." Altman began talking to people who might help him start a new kind of AI company, a nonprofit that would steer the field toward responsible AI. One of those like-minded people was Elon Musk, CEO of Tesla and SpaceX. Musk later told CNBC that he became concerned about the impact of AI after some marathon discussions with Google co-founder Larry Page. Musk said he was frustrated that Page paid little attention to safety issues and seemed to view the rights of robots as equal to those of humans. When Musk voiced his concerns, Page accused him of being a "speciesist." Musk also understood that Google employed most of the world's AI talent at the time. He was willing to spend some money and effort on behalf of "Team Human."

Within months, Altman had raised money from Musk (who pledged $100 million and his time) and Reid Hoffman (who donated $10 million). Other backers included Peter Thiel, Jessica Livingston, Amazon Web Services, and YC Research. Altman began recruiting team members in secret. He limited his search to AGI believers, a restriction that narrowed his options but one he saw as crucial. "Back in 2015, when we were recruiting, it was almost considered a career killer for an AI researcher to say you were serious about AGI," he says.
"But I wanted people who were serious about it."

Figure | Greg Brockman (Source: WIRED)

One of them was Greg Brockman, then the CTO of Stripe, who agreed to become OpenAI's CTO. Another key co-founder was Andrej Karpathy, who previously worked at Google Brain, the search giant's cutting-edge AI research organization. But perhaps Altman's most coveted target was a researcher named Ilya Sutskever. Sutskever was a protégé of Geoffrey Hinton, considered the godfather of modern AI for his work on deep learning and neural networks. Hinton remains close to Sutskever and marvels at his protégé's ingenuity. Early in Sutskever's tenure at Hinton's lab, Hinton gave him a complex project. Tired of writing the code to do the necessary calculations, Sutskever told Hinton it would be easier if he wrote a custom programming language for the task. Hinton, a little annoyed, tried to warn his student off something he thought would be a month-long distraction. Then Sutskever confessed: "I did it this morning."

Figure | Ilya Sutskever (Source: WIRED)

Sutskever became an AI superstar, co-authoring a breakthrough paper that showed how AI could learn to recognize images by being exposed to vast amounts of data, and eventually becoming a core scientist on the Google Brain team. In mid-2015, Altman sent Sutskever a cold email inviting him to dinner with Musk, Brockman, and others at the luxurious Rosewood Hotel on Sand Hill Road. Sutskever didn't realize until later that he was the guest of honor. "It was a conversation about the future of AI and AGI," he said. More specifically, they discussed "whether Google and DeepMind were so far ahead that it would be impossible to catch up, or whether it was still possible, as Musk said, to create a lab to check and balance them." Although no one at the dinner explicitly tried to recruit Sutskever, the conversation hooked him. Soon after, Sutskever wrote Altman an email offering to lead the project, but the message got stuck in his drafts folder. Altman circled back, and after months of negotiating with Google over his departure, Sutskever signed on. He quickly became the company's soul and the driving force behind its research. Sutskever worked with Altman and Musk to recruit people for the project, culminating in a retreat in Napa Valley where several future OpenAI researchers stoked one another's enthusiasm. Of course, some resisted the temptation. John Carmack, the legendary coder behind Doom, Quake, and countless other games, turned down Altman's invitation.

OpenAI officially launched in December 2015. When I interviewed Musk and Altman at the time, they described the project to me as a way to make AI safe and accessible by sharing it with the world. In other words, open source. OpenAI would not patent its work, they told me. Everyone could use its breakthroughs. Wouldn't that empower some future Dr. Evil? I wondered. Musk said it was a good question. But Altman had an answer: humans are generally good, and because OpenAI would give powerful tools to the vast majority of people, bad actors would be outmatched. If Dr. Evil used those tools to create something irresistible, he admitted, "then we'd be in a really bad situation." But both Musk and Altman believed that the safer course was for AI to be in the hands of a research institution untainted by the profit motive. Altman cautioned me not to expect quick results. "This is going to be like a research lab for a long time," he said. There was another reason to temper expectations: Google and other companies had been developing and applying AI for years.
While OpenAI had $1 billion in funding commitments (mostly from Musk), an ace team of researchers and engineers, and a lofty mission, it had no idea how to get there. Altman remembers a moment when the small team gathered in Brockman's apartment, before they had an office. "I was like, what are we going to do?"

A little more than a year after OpenAI was founded, I met Brockman for lunch in San Francisco. For a company with "Open" in its name, he was remarkably tight-lipped about details. He did affirm that the nonprofit would be able to spend its initial billion-dollar donation over time. Salaries for its 25 employees, who were paid well below market value, made up the bulk of OpenAI's expenses. "Our goal, and what we're really pushing for, is to enable systems to do things that humans couldn't do before," he said. But for the time being, that looked like a group of researchers publishing papers. After the interview, I accompanied him to the company's new offices in the Mission District, but he would let me go no farther than the front hall. He did duck into a closet to get me a T-shirt. If I had gone in and asked around, I might have learned just how much OpenAI was struggling. "Nothing worked," Brockman admits now. Its researchers were throwing algorithmic spaghetti at the ceiling to see what stuck. They homed in on systems that solved video games and spent a lot of effort on robotics. "We knew what we wanted to do. We knew why we wanted to do it. But we didn't know how," Altman says. But they believed. Their optimism was supported by the continued improvement of artificial neural networks using a technique called deep learning. "The general idea was, don't bet against deep learning," Sutskever said. Chasing AGI, he said, "wasn't completely crazy. It was only moderately crazy."

OpenAI's rise really began when it hired a then-unknown researcher, Alec Radford, who left the small Boston AI company he had co-founded in his dorm room to join OpenAI in 2016. After accepting OpenAI's offer, he told his high school alumni magazine that taking the new position was "kind of like joining a graduate program" — an open, low-pressure habitat for studying AI. His actual role turned out to be more like Larry Page inventing PageRank. Radford, who is reticent with the media and has never been interviewed about his work, answered my questions about his early days at OpenAI in a long email. His biggest interest was getting neural networks to hold clear conversations with humans. This was a departure from the traditional scripted approach to chatbots, used in everything from the primitive ELIZA to the popular assistants Siri and Alexa, all of which were, in a way, terrible. "Our goal was to see if there was any task, any environment, any domain, anything at all that language models could be useful for," he wrote. At the time, he explained, language models were seen as novelty toys that could only occasionally generate a meaningful sentence, and then only if you really squinted. His first experiment involved scanning 2 billion Reddit comments to train a language model. Like many of OpenAI's early experiments, it failed. That was okay. The 23-year-old had permission to keep going and fail again. "We thought, Alec is great, let's just let him do his thing," Brockman said. His next big experiment was shaped by the limits of OpenAI's computing power, which led him to experiment on a smaller dataset focused on a single domain: Amazon product reviews. A researcher had collected about 100 million of them.
Radford trained a language model simply to predict the next character in a user review. But then the model figured out, on its own, whether a review was positive or negative: when you prompted it to create a positive or negative review, it would deliver a review that gushed or panned as asked. (Admittedly, the prose was clumsy: "I like the look of this weapon… A must for men who like chess!") "That was totally unexpected," Radford says. The sentiment of a review, its likes and dislikes, is a complex semantic function, but part of Radford's system already had a feel for it. Inside OpenAI, this part of the neural network came to be called the "unsupervised sentiment neuron." Sutskever and others encouraged Radford to expand his experiments beyond Amazon reviews, to use his insights to train neural networks to converse or answer questions on a wide range of topics.

Then good fortune struck OpenAI. In 2017, a preprint of a research paper co-authored by eight Google researchers appeared without much notice. Its official title was "Attention Is All You Need," but it became known as the "Transformer paper," named both to reflect the game-changing nature of the idea and in honor of a toy that morphed from a truck into a giant robot. Transformers enabled neural networks to understand and generate language more efficiently. They did this by analyzing chunks of prose in parallel to figure out which elements were worth paying attention to. This greatly optimized the process of generating coherent text in response to a prompt. Eventually, people realized that the same technique could also generate images and even video. While the paper has since been called the catalyst for the current AI frenzy—think of it as Elvis Presley making the Beatles possible—at the time, Ilya Sutskever was just one of a handful of people who understood how powerful the breakthrough was. "When Ilya saw the Transformer come out, it was a real aha moment," Brockman says. "He said, 'This is what we've been waiting for.' That was our strategy — work hard at the problem and then have faith that we, or someone in the field, would figure out the missing ingredient."

Radford began experimenting with the Transformer architecture. "I made more progress in two weeks than I had in the previous two years," he said. It gradually dawned on him that the key to getting the most out of the new model was to scale it up — to train it on very large datasets. The idea was dubbed "Big Transformer" by Radford's collaborator Rewon Child. This approach required a change in OpenAI's culture and a kind of focus it had previously lacked. "To take advantage of the Transformer, you need to scale it up," said Adam D'Angelo, the CEO of Quora, who sits on OpenAI's board. "You need to run it more like an engineering organization. You can't have every researcher doing their own thing, training their own model, and making something elegant that can be published. You have to do the more boring, less elegant work." That, he added, was something OpenAI could do, and something others couldn't.

Radford and his collaborators called the model they created a "generatively pretrained transformer" — GPT-1 for short. Eventually, this class of model came to be known generically as "generative AI." To build it, they collected 7,000 unpublished books, many in the romance, fantasy, and adventure genres, and refined the model on thousands of passages from Quora Q&As and from middle and high school exams. All told, the model contained 117 million parameters, or variables.
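For readers curious about the mechanism the Transformer paper introduced, here is a minimal, illustrative sketch in Python of the scaled dot-product attention idea at its core: every token scores every other token in parallel to decide which parts of the text deserve attention, and the result is a weighted blend of the sequence. This is not OpenAI's code; the tiny matrices and their sizes are arbitrary placeholders chosen only for demonstration.

# Minimal sketch of scaled dot-product self-attention (illustrative only).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: arrays of shape (sequence_length, model_dim)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # each token scores every other token, in parallel
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax: how much attention each token pays to the others
    return weights @ V                                 # blend of the values, weighted by attention

# Toy example: 4 "tokens" with 8-dimensional embeddings (sizes are arbitrary).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)            # self-attention: queries, keys, and values are the same sequence
print(out.shape)                                       # (4, 8)

Real Transformer models stack many such attention layers (with learned projection matrices and multiple heads), but the parallel "who should attend to whom" computation sketched above is the ingredient the paragraph above describes.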
The model outperformed all previous models at understanding language and generating answers. But the most striking result was that, after processing such a large amount of data, the model was able to deliver results beyond its training, providing expertise in entirely new areas. These unplanned capabilities are called "zero-shot" abilities. They still puzzle researchers — and they are why many in the field are uneasy about these so-called large language models. Radford remembers one late night at the OpenAI offices. "I just kept saying over and over: 'Well, this is cool, but I'm pretty sure it can't do X.' And then I'd quickly write up an evaluation, and sure enough, it could do X."

Each iteration of GPT got better, in part because each one devoured an order of magnitude more data than the previous model. Just a year after creating the first iteration, OpenAI trained GPT-2, with a staggering 1.5 billion parameters, on the open internet. Like a toddler mastering speech, its responses got better and more coherent. So much so that OpenAI hesitated over whether to make the program public. Radford worried that it would be used to generate spam. "I remember reading Neal Stephenson's Anathem in 2008, in which the internet was overrun with spam generators," he says. "I thought that was far-fetched at the time, but as I worked on language models and watched them improve over the years, it dawned on me that it was a real possibility." Indeed, the team at OpenAI began to feel that putting its work where a Dr. Evil could easily access it might not be such a good idea after all. "We thought that open-sourcing GPT-2 could be really dangerous," says Chief Technology Officer Mira Murati, who joined the company in 2018. "We did a lot of work with misinformation experts and did some red-teaming. There was a lot of discussion internally about how much to release." Ultimately, OpenAI withheld the full version for a time, offering a less powerful version to the public. When the company finally shared the full version, the world got along fine, but there was no guarantee that more powerful models would avoid catastrophe.

Figure | Mira Murati (Source: WIRED)

The very fact that OpenAI was making products smart enough to be deemed dangerous, and was grappling with how to make them safe, was proof that the company's magic was working. "We've figured out the formula for progress, the formula that everyone knows now — the oxygen and hydrogen of deep learning is computation with big neural networks and data," Sutskever said. For Altman, it has been a game-changing experience. "If you had asked 10-year-old me—who spent a lot of time daydreaming about AI—what the future would be like, I would have predicted with great confidence that first we'd have robots to do all the manual labor. Then we'd have systems that could do basic cognitive labor. A long while after that, maybe we'd have systems that could do complex work, like proving mathematical theorems. And finally we'd have AI that could create new things, make art, write, and do the things that are deeply embedded in human life. That prediction was completely wrong; it's going in exactly the other direction." The world didn't know it yet, but Altman and Musk's research lab had begun its climb, creeping plausibly toward the summit of AI. The crazy ideas behind OpenAI suddenly didn't seem so crazy.

In early 2018, OpenAI began to focus productively on large language models. But Elon Musk wasn't satisfied. He felt that progress wasn't coming fast enough.
Or he felt that, now that OpenAI had made progress, it needed leadership that could seize the advantage. Or, as he later explained, he felt that safety should be a higher priority. Whatever his problem, he had a solution: give it all to him. He proposed taking a majority stake in the company, adding it to his portfolio of multiple full-time jobs (Tesla, SpaceX) and regulatory obligations (Neuralink and the Boring Company). Musk believed he had a right to OpenAI. "Without me, it wouldn't exist," he later told CNBC. "I came up with the name!" (True.) But Altman and the rest of OpenAI's brain trust had no interest in becoming part of the Musk universe. When they made that clear, Musk cut ties and offered the public an incomplete explanation: he was leaving the board to avoid a conflict with Tesla's AI work. He said goodbye at an all-hands meeting early that year, where he predicted OpenAI would fail. He also called at least one researcher "an asshole." And he took his money with him. With no revenue coming in, it was an existential crisis. "Musk is cutting off his support," a panicked Altman said in a call to Reid Hoffman. "What are we going to do?" Hoffman volunteered to keep the company afloat, paying overhead and salaries. But it was only a stopgap; OpenAI would have to find money elsewhere.

Silicon Valley loves to throw money at people working on trendy technologies. It is less fond of doing so when they work at nonprofits. For OpenAI, securing that first billion had already been a huge lift. To train and test new generations of GPTs, and then to secure the computing power to deploy them, the company needed another billion, and fast. And that would be just the beginning.

So, in March 2019, OpenAI came up with a strange solution. It would remain a nonprofit, dedicated to its mission. But it would also create a for-profit entity. The actual structure of the arrangement is complicated, but basically the entire company now works for a for-profit business with a cap on returns. If the cap is reached—the number hasn't been made public, but if you read the company's articles of incorporation, it could be in the trillions—everything above it goes back to the nonprofit research lab. The novel plan was an almost quantum approach to corporate structure: the company is both for-profit and nonprofit, depending on your coordinates in space and time. The details are laid out in a diagram full of boxes and arrows, like the ones in the middle of a scientific paper that only a PhD or a dropout genius would dare to parse. When I suggested to Sutskever that this looked like something a yet-to-be-conceived GPT-6 might come up with if you prompted it for a tax dodge, he wasn't keen on my analogy. "This has nothing to do with accounting," he said.

But accounting is crucial. For-profit companies optimize for profit. There's a reason companies like Meta feel pressure from shareholders when they pour billions of dollars into research and development. How could this not affect how the company is run? And wasn't avoiding commercialization the reason Altman made OpenAI a nonprofit in the first place? According to COO Brad Lightcap, company leadership believes that the board of the controlling nonprofit entity will ensure that the drive for revenue and profit does not overwhelm the founding idea. "We need to maintain the mission as our reason for existence," he said.
"It should not just be in spirit, but reflected in the structure of the company." Board member Adam D'Angelo says he takes this responsibility very seriously: "It's my job, and the job of the rest of the board, to make sure OpenAI stays true to its mission." Potential investors are warned about these boundaries, Lightcap explained. "We have a legal disclaimer that says as an investor, you could lose all your money," he said. "We're not here to earn a return for you. We're here first and foremost to accomplish a technical mission. And, oh, by the way, we don't really know what role money is going to play in a post-AGI world."

That last sentence is no joke. OpenAI's plan really does include a reset for when computers reach the final frontier. Somewhere in the restructuring documents is a provision that if the company does manage to create AGI, all financial arrangements will be reconsidered. After all, from that point on, it will be a brave new world. Humanity will have an alien partner that can do much of what we do, only better. So the previous arrangements may effectively be void. There is a hitch, though: right now, OpenAI doesn't know what AGI is. The determination would come from the board, but it's unclear how the board would define it. When I asked Altman, who is on the board, about it, his response was noncommittal. "It's not a single Turing test; it's a number of things we might use," he said. "I'd love to tell you, but I like to keep my conversations private. I realize that being vague like that is not satisfying. But we don't know what it's going to look like."

The financial arrangement isn't just for fun, though: OpenAI's leaders believe that if the company is successful enough to hit its profit cap, its products will probably have reached the level of AGI, whatever that is. "I regret that we chose to double down on the term AGI," Sutskever says. "In hindsight, it's a confusing term, because it emphasizes generality above all else. GPT-3 is general AI, but we hesitate to call it AGI because we want human-level competence. But back then, at the very beginning, OpenAI's philosophy was that superintelligence is attainable. That is the endgame, the final purpose of the field of AI."

Those caveats didn't stop some of the smartest venture capitalists from throwing money at OpenAI with abandon in its 2019 funding round. The first VC firm to invest was Khosla Ventures, which put in $50 million. According to Vinod Khosla, that was double the size of his largest initial investment. "If we lose, we lose $50 million," he said. "If we win, we win $5 billion." Other investors reportedly included the elite venture firms Thrive Capital, Andreessen Horowitz, Founders Fund, and Sequoia.

The shift also allowed OpenAI's employees to claim some equity. But not Altman. He says he had intended to include himself but didn't get around to it. Then he decided he didn't need a piece of the $30 billion company he co-founded and leads. "Meaningful work is more important to me," he says. "I don't think about it. Honestly, I don't understand why people care so much." Because... isn't it weird not to take a stake in the company you co-founded? "It would be weirder if I didn't already have a ton of money," he said. "It seems hard for people to imagine having enough money. But I feel like I have enough." Altman joked that he was considering taking one share of equity "so I don't have to answer that question anymore." Billions of venture dollars, though, would not be nearly enough on their own to realize OpenAI's vision.
The magical Big Transformer approach to building large language models requires big hardware. Each iteration of the GPT family demands exponentially more horsepower—GPT-2 had more than a billion parameters, and GPT-3 would use 175 billion. OpenAI was now like Quint in Jaws after the shark hunter sees the size of the great white. "It turned out we didn't know how big a boat we needed," Altman says. It was clear that only a handful of companies had the resources OpenAI required. "We locked on to Microsoft pretty quickly," Altman says. It is to the credit of Microsoft CEO Satya Nadella and CTO Kevin Scott that the software giant was able to get over an uncomfortable reality: after spending more than two decades and billions of dollars building a supposedly cutting-edge AI research division, Microsoft needed an injection of innovation from a tiny company that was only a few years old. It wasn't just Microsoft that had fallen behind, Scott says. "Everyone was behind." OpenAI's focus on pursuing AGI, he says, let it attempt a moonshot-style goal that the big companies weren't even aiming for. It also proved that not pursuing generative AI was a lapse Microsoft needed to address. "You obviously need a cutting-edge model," Scott says.

Microsoft initially invested $1 billion, paid in part as computing time on its servers. But as confidence grew on both sides, the deal expanded. Microsoft has now invested $13 billion in OpenAI. "Investing in cutting-edge areas is very expensive," Scott said. Of course, because OpenAI's existence depends on the backing of a major cloud computing provider, Microsoft was able to secure plenty for itself. The company bargained for what Nadella calls a "non-controlling stake" in OpenAI's for-profit arm — reportedly 49 percent. Under the terms of the deal, some of OpenAI's original ideals — equal access for all — seem to have been tossed aside. (Microsoft now holds an exclusive license to commercialize OpenAI's technology, and OpenAI has pledged to use only Microsoft's cloud.) In other words, on top of taking a cut of OpenAI's profits (reportedly 75 percent until it recoups its investment), Microsoft gets to lock in one of the world's most sought-after new customers for its Azure web services. With rewards like those, Microsoft didn't even balk at the clause that requires everything to be reconsidered if OpenAI achieves general artificial intelligence, whatever that is. "At that point, it's all over," Nadella says, noting that this may be humanity's last invention, so once machines are smarter than we are, we may have bigger problems to think about.

By the time Microsoft started pouring in Brinks truckloads of cash ($2 billion in 2021 and $10 billion earlier this year), OpenAI had already completed GPT-3, which was, of course, even more impressive than its predecessors. Nadella says his first deep realization that Microsoft had gotten hold of something truly transformative came when he saw what GPT-3 was capable of. "We started seeing all these emergent properties." GPT, for example, had taught itself computer programming. "We didn't train it to code, it just got good at it," he says. Leveraging its ownership of GitHub, Microsoft released a product called Copilot that uses GPT to write code on command. Microsoft later integrated OpenAI technology into new versions of its workplace products. Users pay a fee for these, and a portion of that revenue flows to OpenAI's ledger. Some observers expressed shock at OpenAI's one-two punch: creating a for-profit arm and striking an exclusive deal with Microsoft.
How could a company that had pledged to remain patent-free, open source, and completely transparent end up licensing its technology exclusively to the world's largest software company? Elon Musk was particularly scathing. "This seems like the opposite of open — OpenAI is essentially being captured by Microsoft," he tweeted. On CNBC he offered an analogy: "Suppose you founded an organization to save the Amazon rainforest, but instead you became a timber company, cut down the forest, and sold the wood." Musk's taunts might be dismissed as the resentment of a rejected suitor, but he isn't alone. "It's a bit disgusting how the whole vision has evolved," said John Carmack. Another prominent industry insider, who asked not to be named, said: "OpenAI has gone from a small, open research organization to a secretive product-development company with an unwarranted sense of superiority." Even some employees soured on OpenAI's venture into the for-profit world. In 2021, several key executives, including research chief Dario Amodei, left to start a rival AI company called Anthropic. They recently told The New York Times that OpenAI had become too commercial and had fallen victim to mission drift. Another OpenAI defector was Rewon Child, a major technical contributor to the GPT-2 and GPT-3 projects. He left at the end of 2021 and now works at Inflection AI, a company led by DeepMind co-founder Mustafa Suleyman. Altman claims not to be fazed by the defections, saying it's just the way Silicon Valley works. "Some people will want to go somewhere else and do great work that moves society forward," he said. "That's absolutely consistent with our mission."

Until last November, awareness of OpenAI was largely confined to the worlds of technology and software development. But as the whole world now knows, OpenAI released a consumer product late that month, built on what was then its latest version, GPT-3.5. For several months, the company had been using a version of GPT with a conversational interface internally. This was particularly important for what the company calls "truth-seeking": through conversation, users can coax the model into providing more credible and complete responses. Optimized for the masses, ChatGPT lets anyone instantly tap into a seemingly endless source of knowledge simply by typing a prompt, and then continue the conversation as if chatting with a human companion who happens to know everything, albeit one with a penchant for making things up. Inside OpenAI, there had been debate over whether to release such an unprecedentedly powerful tool. But Altman was all for it. The release, he explained, was part of a strategy to acclimate the public to the reality that AI is destined to change their daily lives, presumably for the better. Inside the company, this is known as the "iterative deployment hypothesis." ChatGPT, of course, would make a splash. After all, it was something anyone could use, and it was smart enough to get college-level scores on the SAT, write a B-minus essay, and summarize a book in seconds. You could ask it to write a funding proposal or a conference abstract for you, then ask it to rewrite the result in Lithuanian, as a Shakespearean sonnet, or in the voice of someone obsessed with toy trains. Seconds later, the large language model would comply. It was crazy. Still, OpenAI saw it as a signpost for its newer, more coherent, more capable, and more terrifying successor, GPT-4, which was said to have been trained with 1.7 trillion parameters.
(OpenAI would not confirm that number, nor would it reveal the dataset.) Altman explained why OpenAI released ChatGPT when GPT-4 was nearly complete and safety work was still underway. "With ChatGPT, we could introduce chat with a much weaker backend and let people adapt gradually. GPT-4 would have been a lot to take in all at once," he said. By the time the ChatGPT hype cooled, he reasoned, people might be ready for GPT-4, which can pass the bar exam, plan a course outline, and draft a book in seconds. (Genre-fiction publishers were indeed inundated with AI-generated bodice rippers and space operas.)

A cynic might say that the steady rollout of new products reflects the company's commitment to investors and equity-holding employees to make some money. OpenAI now charges customers who use its products frequently. But OpenAI insists that its real strategy is to provide a soft landing for the singularity. "It doesn't make sense to build AGI in secret and then unleash it on the world," Altman says. "If you look back at the Industrial Revolution, everyone agrees it was great for the world," says Sandhini Agarwal, a policy researcher at OpenAI. "But the first 50 years were really painful. A lot of people lost their jobs, a lot of people were poor, and then the world adapted. We're trying to think about how to make the period of adaptation to AI as painless as possible." Sutskever put it another way: "You want to build bigger and more powerful intelligences and keep them in your basement?"

Even so, OpenAI was stunned by the response to ChatGPT. "Our internal excitement was much more focused on GPT-4," says Murati, the CTO, "so we didn't think ChatGPT was really going to change everything." Instead, it made the public realize that AI had to be reckoned with, now. ChatGPT became the fastest-growing consumer software in history, reportedly amassing 100 million users. (OpenAI won't confirm this, saying only that it has "millions of users.") "I didn't fully appreciate that making an easy-to-use conversational interface for large language models would make them so much more intuitive for everyone to use," says Radford. ChatGPT is certainly a delightful and surprisingly useful helper, but it is also prone to "hallucinations": plausible-sounding but shamelessly fabricated details in its answers. Yet even as journalists wrestled with its implications, they effectively endorsed ChatGPT by marveling at its powers.

In February, Microsoft added to the uproar by leveraging its multibillion-dollar partnership to release a version of its Bing search engine powered by ChatGPT. CEO Nadella was ecstatic that he had beaten Google to the punch in bringing AI to Microsoft products. He taunted the search king, which had been cautious about releasing its own large language model products and was now scrambling to do the same. "I want people to know that we made them dance," he said. In doing so, Nadella set off an arms race, tempting companies large and small to release AI products before they had been adequately vetted. He also set off a new round of media coverage that kept more and more people up at night: interactions with Bing revealed the chatbot's dark side, full of unsettling declarations of love, envy of human freedom, and only a feeble resolve to withhold misinformation. It also had an ungainly habit of creating hallucinatory misinformation of its own. But Altman reasons that if OpenAI's products force people to confront the implications of AI, so much the better.
Better that a large chunk of humanity get off the sidelines when it comes to discussing how AI may affect the species' future. As society begins to take stock of all the potential downsides of AI—job losses, misinformation, even human extinction—OpenAI has put itself at the center of the discussion. Because if regulators, lawmakers, and doomsayers mount a charge to smother this nascent alien intelligence in its cloud-based cradle, OpenAI would be their prime target anyway. "Given our current visibility, when things go wrong, even if they were built by another company, it's still a problem for us, because we're seen as the face of this technology right now," says Anna Makanju, OpenAI's chief policy officer. Makanju is a Russian-born Washington insider who has held foreign policy positions at the U.S. Mission to the United Nations, the National Security Council, the Department of Defense, and in Biden's office when he was vice president. "I have a lot of preexisting relationships, both in the U.S. government and in various European governments," she says. She joined OpenAI in September 2021. At the time, few people in government cared about generative AI. Knowing that OpenAI's products would soon change that, she began introducing Altman to administration officials and lawmakers, making sure they heard the good news and the bad news from OpenAI first.

"Sam was very helpful and very astute in the way he dealt with members of Congress," said Richard Blumenthal, chairman of the Senate Judiciary Committee. He contrasted Altman's behavior with that of a young Bill Gates, who unwisely stonewalled lawmakers when Microsoft was under antitrust investigation in the 1990s. "Sam, by contrast, was happy to sit with me for over an hour and try to teach me," Blumenthal said. "He didn't come with a bunch of lobbyists or hangers-on. He showed me ChatGPT. It was an eye-opener." In Blumenthal, Altman has turned a potential enemy into a work in progress: "Yes," the senator acknowledges, "I'm excited about both the promise and the potential dangers."

Rather than shying away from discussing those dangers, OpenAI presents itself as the force best positioned to mitigate them. "We do 100 pages of system cards for all of our red-team safety assessments," Makanju says. (Whatever that means, it hasn't stopped users and journalists from endlessly discovering ways to jailbreak the system.) When Altman made his first appearance before Congress, fighting off a pounding migraine, the path was clear for him in a way it never was for Bill Gates or Mark Zuckerberg. He faced few of the tough questions and little of the snark that tech CEOs now routinely endure under oath. Instead, senators asked Altman for advice on how to regulate AI, a pursuit he enthusiastically endorsed.

The paradox is that no matter how diligently companies like OpenAI reengineer their products to mitigate bad behavior like deepfakes, misinformation, and criminal spam, future models may become smart enough to thwart the efforts of the simple-minded humans who invented the technology yet still naively believe they can control it. On the other hand, if they go too far in making their models safe, they risk crippling the products and making them less useful. One study found that the latest version of GPT, which has improved safety features, is actually dumber than previous versions, stumbling on basic math problems that earlier versions had handled without a hitch. (Altman says OpenAI's data doesn't bear that out. "Wasn't that study retracted?" he asks. It was not.)
It makes sense that Altman would position himself as a champion of regulation; after all, his mission is AGI, but safe. Critics accuse him of gaming the regulatory process so that it hampers small startups and hands an advantage to OpenAI and other big players. Altman denies this. While he agrees in principle with the idea of an international body to oversee AI, he does think some proposed rules, such as a ban on all copyrighted material in datasets, pose unfair barriers. He made it clear that he did not sign the widely circulated letter urging a six-month moratorium on the development of AI systems. But he and other OpenAI leaders did put their names to a one-sentence statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." "I said, yes, I agree with that. One-minute discussion," Altman explained. As one prominent Silicon Valley founder noted: "It's rare for an industry to throw up its hands and say, 'We're going to be the end of humanity,' and then continue to gleefully develop products."

OpenAI rejects this criticism. Altman and his team say that building and releasing cutting-edge products is precisely how to address societal risks. Only by analyzing the responses of ChatGPT and GPT-4 users to millions of prompts can they gain the knowledge needed to make future products ethical.

Still, as the company takes on more commercial work and devotes more energy to its products, some question how focused OpenAI can stay on its mission, especially the "mitigating extinction risk" side of it. "If you think about it, they're actually running five businesses," said one AI industry executive. "The product itself, the corporate relationship with Microsoft, the developer ecosystem, and the app store. Oh, and they're obviously also doing AGI research." Having counted off five fingers, he added a sixth. "And of course, they're also running an investment fund," he said, referring to a $175 million program designed to provide seed money to startups that want to build on OpenAI's technology. "These are all different cultures, and in fact they conflict with the research mission."

I asked OpenAI's executives repeatedly how taking on the mantle of a product company has affected its culture. Without exception, they insist that, despite the for-profit restructuring and the competition with Google, Meta, and countless startups, the mission remains central. Yet OpenAI has changed. A nonprofit board may technically be in charge, but nearly everyone at the company works for the for-profit side. The company's staff includes lawyers, marketers, policy experts, and user-interface designers. OpenAI contracts with hundreds of content moderators to teach its models which responses to the prompts supplied by millions of users are inappropriate or harmful. Its product managers and engineers are constantly updating the product, and every few weeks, it seems, showing it off to reporters, just like other big, product-oriented tech companies. Its offices look like something out of Architectural Digest. I have visited nearly every big tech company in Silicon Valley and beyond, and none of them tops the coffee selection in the lobby of OpenAI's San Francisco headquarters.

And that's not all: it's clear that the "open" embodied in the company's name no longer means the radical transparency it promised at its founding. When I mentioned this to Sutskever, he shrugged. "Obviously, times have changed," he said. But that doesn't mean the prize is any different, he cautioned.
"We're facing a massive, cataclysmic technological change, and even if we all do our best, success isn't guaranteed. But if it all works out, we get to live an incredible life." "I can't stress enough that we don't have a master plan," Altman said. "It's like we turn each corner and shine a flashlight. We're willing to go through the maze to get to the end. The maze has gotten twisty, but the goal hasn't changed. Our core mission remains the belief that safe AGI is an extremely important thing, and the world is not taking it seriously enough."

Meanwhile, OpenAI is apparently in no rush with the next version of its large language model. Hard as it may be to believe, the company insists it has not yet begun work on GPT-5, a product that people, depending on their point of view, either drool over or dread. Clearly, though, OpenAI is working hard to figure out what an exponentially powerful improvement on its current technology would look like. "The biggest thing we're missing is new ideas," Brockman says. "It would be nice to have something that could act as a virtual assistant. But that's not the dream. The dream is to help us solve problems we can't solve."

Given OpenAI's history, the next series of major innovations may have to wait for another breakthrough on the order of the Transformer. Altman hopes OpenAI can deliver it—"We want to be the best research lab in the world," he said—but even if it doesn't, his company will leverage the advances of others, just as it leveraged Google's work. "A lot of people around the world are going to be doing important work," he said.

It would also help if generative AI didn't create so many new problems of its own. For example, large language models need to be trained on huge datasets; the most powerful ones, obviously, will devour the whole of the internet. This does not sit well with some creators, and with ordinary people, who have unknowingly provided content for those datasets and thereby contributed, to some extent, to ChatGPT's output. Tom Rubin, an elite intellectual-property lawyer who officially joined OpenAI in March, is optimistic that the company will eventually find a balance that satisfies both its own needs and those of creators—including creators, like the comedian Sarah Silverman, who are suing OpenAI for using their content to train its models. One direction OpenAI is pursuing: partnerships with news and photo agencies such as the Associated Press and Shutterstock to supply content for its models without murky questions about who owns what.

As I interviewed Rubin, my mind wandered, in the distinctly human way that large language models have yet to match, to the arc of this company, which in just eight years has gone from a group of struggling researchers to a world-changing Promethean behemoth. Its very success has transformed it from a novel effort toward a scientific goal into something resembling a standard Silicon Valley unicorn, on its way to joining the ranks of the big tech companies that shape our daily lives. And here I was, talking with one of its key hires, a lawyer, not about neural-network weights or computing infrastructure but about copyright and fair use. I couldn't help wondering whether this intellectual-property expert had bought into the mission as fully as the superintelligence-seeking navigators who drove the company in the first place. When I asked Rubin whether he was convinced that AGI would happen, and whether he was eager to see it happen, he seemed at a loss.
He paused and said, "I can't even answer that question." When pressed further, he clarified that, as an intellectual-property lawyer, it isn't his job to speed the arrival of terrifyingly intelligent computers. "From my perspective, I'm excited about it," he said.

Original author: Steven Levy
Original link: https://www.wired.com/story/what-openai-really-wants/
Compiled by: Hazel Yan