According to a recent open letter, society needs to immediately pause development of “giant” AI models, or risk apocalyptic outcomes. Massive job losses, the destruction of consensus reality and even the end of all organic life on Earth have all been mooted as risks of pressing forward with development of these systems before we understand their intricacies.
The high-water mark of these is GPT-4, the snappily named AI that underpins the latest version of the breakthrough ChatGPT service. Creating anything more powerful than GPT-4, before we spend at least six months working out its limits and risks, would be too dangerous, more than 1,000 AI experts say.
I decided to spend some time with the new ChatGPT myself. Not just to find out about its risks to civilisation, but also to see what it could and couldn’t do to help me with my life. I’ve never had an assistant, a life coach, a chef or a personal trainer – could ChatGPT be all those things for me? I gave it a week to find out.
Can it give me basic information without lying?
The odd thing about being handed a tool of unimaginable complexity and potential is that the blinking cursor stares at you just like any other, daring you to find something interesting to type. I feel as if I’m on a bad blind date where I’m expected to ask all the questions.
Throughout the day I pepper the service with queries, trying to use it instead of Google when I want to find out a basic fact, but I quickly hit upon the problem with that approach: ChatGPT’s habit of “hallucinating”. The system will, on occasion, just make things up, things that feel true but aren’t grounded in, well, reality.
To win an argument with a friend, for instance, I ask how many drivers there are in Sunderland (my friends
Read more on theguardian.com