A lot of people treat LLMs as a source of objective truth. They have a question that a quick search of a reputable source would answer well, but instead they ask an LLM chatbot and blindly trust whatever it says.
How do you deal with that? Do you try to explain hallucinations and that LLMs have no concept of true or false? Or do you just let them be? What do you do when they do that in a conversation with you, or when you encounter LLMs being used as a source for something that affects you?
Comments URL: https://news.ycombinator.com/item?id=47433702
Points: 32
# Comments: 29
A rogue AI agent inadvertently exposed Meta company and user data to engineers who didn't have permission to see it.
Altman expresses gratitude for people who knew how to write their code from scratch. The internet replies with salty jokes.
Remember when it was fun to play around with LLMs?
FBI director Kash Patel admitted that the agency is buying location data that can be used to track people's movements. Unlike information obtained from cell phone providers, this data can be accessed without a warrant and used to track anyone. "We do purchase commercially available information that's consistent with the Constitution and the laws […]
A planned EU ban on nudify apps would likely force Musk to make Grok less "spicy."