Humans are, as someone once observed, “language animals”, implying that the ability to communicate linguistically is unique to humans. Over the last decade, machine-learning researchers, most of whom work for the big tech companies, have been labouring to disprove that proposition. In 2020, for example, OpenAI, an artificial intelligence lab based in San Francisco, unveiled GPT-3, the third iteration of a huge language model that uses “deep learning” to compose plausible English text.
Opinions vary about the plausibility of its output, but some people regard GPT-3 as a genuine milestone in the evolution of artificial intelligence, arguing that it had passed the test proposed by Alan Turing in 1950 to assess a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Sceptics pointed out that training the model had consumed unconscionable amounts of computing power (with its attendant environmental footprint) to produce a machine with the communication capabilities of a youngish human. One group of critics memorably described these language machines as “stochastic parrots” (stochastic is a mathematical term for random processes).
All the tech giants have been building these parrots. Google has one called Bert – it stands for bidirectional encoder representations from transformers, since you ask. But it also has a conversational machine called LaMDA (from language model for dialogue applications). And one of the company’s engineers, Blake Lemoine, has been having long conversations with it, from which he made some inferences that mightily pissed off his bosses.
What inferences, exactly? Well, that the machine was displaying signs of being sentient.