Artificial intelligence (AI) has become a major talking point with the rise to prominence of OpenAI's chatbot ChatGPT and generative AI image makers like Midjourney and DALL-E 2. However, not everyone is on board with the emerging technology.
A new report from The New York Times (NYT) revealed that in March, two Google employees tried to stop the company from launching its own AI chatbot rivaling OpenAI’s ChatGPT.
According to the NYT report, the employees' jobs were specifically to review Google's AI products, and they reportedly believed the technology generated "inaccurate and dangerous statements."
Microsoft employees and ethicists had raised similar concerns months earlier, as the company planned the release of an AI chatbot integrated into its Bing search engine. Those at Microsoft warned of the degradation of critical thinking, the spread of disinformation and the erosion of the "factual foundation of modern society."
Nonetheless, Microsoft released its Bing-integrated chatbot in February, and Google released its "Bard" chatbot toward the end of March, both of which followed OpenAI's release of ChatGPT in November 2022.
We're expanding access to Bard in US + UK with more countries ahead, it's an early experiment that lets you collaborate with generative AI. Hope Bard sparks more creativity and curiosity, and will get better with feedback. Sign up: https://t.co/C1ibWrqTDr https://t.co/N8Dzx1m0fc
Since its release, ChatGPT has stirred up major conversations around the ethics and usage of AI chatbots and image generators.
Midjourney, an application that uses artificial intelligence to generate realistic images, discontinued its free trial to curb problematic deepfakes. Around the same time, an Australian
Read more on cointelegraph.com