AI is mastering language. Should we trust what it writes?

But even as GPT-3's fluency has stunned many observers, the large language model approach has also drawn significant criticism in recent years. Some skeptics argue that the software is capable only of blind mimicry: it imitates the syntactic patterns of human language but cannot generate ideas of its own or make complex decisions, a fundamental limitation that will keep the LLM approach from ever maturing into anything resembling human intelligence. For these critics, GPT-3 is just the latest shiny object in a long history of AI hype, channeling research dollars and attention into what will ultimately prove to be a dead end and keeping other promising approaches from maturing. Other critics believe that software like GPT-3 will forever remain compromised by the biases, propaganda and misinformation in the data it has been trained on, which means that using it for anything more than parlor tricks will always be irresponsible.

Wherever you come down in this debate, the pace of recent improvement in large language models makes it hard to imagine that they will not be deployed commercially in the coming years. And that raises the question of how they, and for that matter the broader advances of AI, should be released into the world. In the rise of Facebook and Google, we have seen how dominance in a new field of technology can quickly lead to astonishing power over society, and artificial intelligence threatens to be even more transformative than social media in its ultimate effects. What is the right kind of organization to build and own something of such scope and ambition, with such promise and such potential for abuse?

Or should we build it at all?

The origins of OpenAI date back to July 2015, when a small group of tech-world luminaries gathered for a private dinner at the Rosewood Hotel on Sand Hill Road, the symbolic heart of Silicon Valley. The dinner took place against the backdrop of two recent developments in the technology world, one positive and one more troubling. On the one hand, radical advances in computing power, along with new breakthroughs in neural network design, had created a palpable sense of excitement in the field of machine learning; there was a feeling that the long "AI winter," the decades in which the field had failed to live up to its early promise, was finally beginning to thaw. A group at the University of Toronto had trained a program called AlexNet to identify classes of objects in photographs (dogs, castles, tractors, tables) with far greater accuracy than any neural network before it. Google quickly swooped in to hire the AlexNet creators, while also acquiring DeepMind and launching an initiative of its own called Google Brain. The mainstream adoption of intelligent assistants such as Siri and Alexa demonstrated that even scripted agents could be breakout consumer hits.

But during that same period, a seismic shift in public attitudes toward Big Tech was underway, with once-popular companies like Google and Facebook coming under fire for their near-monopoly power, their amplification of conspiracy theories and their relentless pull of our attention toward algorithmic feeds. Long-term fears about the dangers of artificial intelligence were surfacing on op-ed pages and on the TED stage. Nick Bostrom of Oxford University published his book "Superintelligence," introducing a range of scenarios in which advanced AI could deviate from humanity's interests with potentially catastrophic consequences. In late 2014, Stephen Hawking told the BBC that "the development of full artificial intelligence could spell the end of the human race." It seemed as if the cycle of corporate consolidation that characterized the social media age was already happening with AI, only this time the algorithms might not merely sow polarization or sell our attention to the highest bidder; they might end up destroying humanity itself. And once again, all the evidence suggested that this power would be controlled by a few Silicon Valley megacorporations.

The agenda for the Sand Hill Road dinner that July evening was nothing if not ambitious: figuring out the best way to steer AI research toward the most positive outcome possible, avoiding both the short-term negative consequences of the Web 2.0 era and the long-term existential threats. From that dinner, a new idea began to take shape, one that would soon become the obsession of Sam Altman of Y Combinator and Greg Brockman, who had recently left Stripe. Interestingly, the idea was not so much technological as it was organizational: if AI was going to be released into the world in a safe and beneficial way, it would require innovation at the level of governance, incentives and stakeholder involvement. The technical path to what the field calls artificial general intelligence, or AGI, was not yet clear to the group. But the troubling forecasts from Bostrom and Hawking convinced them that AI's attainment of humanlike intelligence would consolidate an astonishing amount of power, and moral burden, in whoever eventually managed to invent and control it.

In December 2015, the group announced the formation of a new entity called OpenAI. Altman signed on as the company's chief executive, with Brockman overseeing the technology; another dinner attendee, AlexNet co-creator Ilya Sutskever, was recruited from Google to be head of research. (Elon Musk, who was also present at the dinner, joined the board but left it in 2018.) In a blog post, Brockman and Sutskever laid out the scope of their ambitions: "OpenAI is a non-profit artificial intelligence research company," they wrote. "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." They added that they believed AI should be "as broadly and evenly distributed as possible."

The founders of OpenAI would publish a public charter three years later, spelling out the core principles behind the new organization. The document was easy to read as a not-so-subtle dig at Google's "Don't be evil" slogan from its early days, an acknowledgment that maximizing the social benefits of new technology, and minimizing its harms, is not always a simple calculation. While Google and Facebook had achieved global dominance through closed-source, proprietary algorithms, the founders of OpenAI promised to go in the other direction, sharing new research and code freely with the world.
