Almost anywhere you currently interact with other people is being eagerly assessed for AI-based disruption. Chatbots in customer service roles are nothing new but, as AI systems become more capable, expect to encounter more and more of them, handling increasingly complex tasks. Voice synthesis and recognition technology means they’ll also answer the phone, and even call you.
The systems will also power low-cost content generation across the web. Already, they’re being used to fill “content farm” news sites, with one recent study finding almost 50 news websites that hosted some form of obviously AI-generated material, rarely labelled as such.
And then there are the less obvious cases. The systems can be used to label and organise data, to help create simple programs, and to summarise and generate work emails – wherever text is involved, someone will try to hand the job to a chatbot.
All three systems are built on the same foundation, a type of AI technology called a “large language model”, or LLM, but with small differences in application that can lead to large variation in output. ChatGPT is based on OpenAI’s GPT LLM, fine-tuned with a technique called “reinforcement learning from human feedback” (RLHF). In giant “call centres”, staffed by workers paid as little as $2 an hour, the company asked human trainers to hold, and rate, millions of chat-style conversations with GPT, teaching the AI what a good response looks like and what a bad one does. However, ChatGPT can’t answer questions about anything that happened after its training data was collected, in around 2021.
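The rating step described above can be sketched in simplified form: human trainers score candidate responses, and the scores become a preference signal for fine-tuning. The function names and example data below are illustrative, not OpenAI’s actual pipeline:

```python
# Simplified sketch of the RLHF rating step: trainers score candidate
# responses, and higher-rated answers become "preferred" examples in
# (preferred, rejected) training pairs. Illustrative only.

def rank_by_human_rating(candidates):
    """Sort candidate responses from best- to worst-rated."""
    return sorted(candidates, key=lambda c: c["rating"], reverse=True)

def preference_pairs(ranked):
    """Turn a ranking into (preferred, rejected) training pairs."""
    return [(ranked[i]["text"], ranked[j]["text"])
            for i in range(len(ranked))
            for j in range(i + 1, len(ranked))]

candidates = [
    {"text": "Helpful, accurate answer.", "rating": 5},
    {"text": "Vague answer.", "rating": 2},
    {"text": "Off-topic answer.", "rating": 1},
]

ranked = rank_by_human_rating(candidates)
pairs = preference_pairs(ranked)
```

In the real system these preference pairs train a reward model, which in turn steers the language model towards responses humans rated highly.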
Microsoft has revealed little about how Bing chat works behind the scenes, but it seems to take a simpler approach, called “prompting”. The bot, also built on top of OpenAI’s GPT, is
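Prompting, in its simplest form, means prepending a fixed instruction to each user message before it reaches the model, rather than retraining the model itself. A minimal sketch – the instruction wording here is invented for illustration:

```python
# Minimal sketch of "prompting": behaviour is shaped by text prepended
# to every user message, not by changing the model's weights.
# The instruction wording is invented for illustration.

SYSTEM_INSTRUCTION = (
    "You are a helpful search assistant. Answer concisely and "
    "cite your sources."
)

def build_prompt(user_message):
    """Combine the fixed instruction with the user's message."""
    return f"{SYSTEM_INSTRUCTION}\n\nUser: {user_message}\nAssistant:"

prompt = build_prompt("What is the tallest mountain?")
```

The same underlying GPT model can behave very differently – as a chatbot, a search assistant, a summariser – simply by swapping out this prepended text.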
Read more on theguardian.com