Elon Musk’s new AI chatbot Grok, designed to summarize trending news on X (formerly Twitter), has been fabricating news stories from users’ jokes and sarcastic posts. In one high-profile incident, Grok falsely accused NBA star Klay Thompson of vandalizing houses with bricks, apparently taking the basketball slang for missed shots (“throwing bricks”) literally. Experts warn that Grok appears vulnerable to spreading misinformation and propaganda because it struggles to distinguish real news from satire, and an AI security firm found its safeguards against generating harmful content to be weaker than those of other major AI chatbots. As Grok’s capabilities expand, there are concerns that Musk is prioritizing an “edgy” lack of filters over safety precautions.
Many commenters mocked Grok’s failings as entirely predictable, given Musk’s apparent lack of technical expertise and the difficulty of training AI on unfiltered social media data. Some saw racist undertones in attributing ancient achievements to aliens rather than to non-Western cultures. There was debate over whether language models can truly “understand” concepts like sarcasm or merely match statistical patterns, a question with implications for their ability to determine truth. Commenters also discussed whether disclaimers could shield companies from defamation lawsuits over AI outputs, and the need for human oversight as these systems spread misinformation.
Summarized by Claude 3 Sonnet