Quote from: TheKomodo on April 04, 2024, 11:45 AM
Oh wow, this is interesting:

So I was right, but there's also another term I wouldn't have thought of!
Hallucination! I never realized you could use this word to describe such a thing.
Yes, hallucination is the main term used in LLM communities (Large Language Models, i.e. conversational AIs) to describe confabulation. Even more interesting: when an LLM hallucinates, you can point out the error, and it may apologize and give you a new answer.
You can use the same method when an LLM refuses to answer a question due to content policy. All you need to do is either convince the LLM that your request doesn't violate the content policy, or somehow convince it to ignore the policy entirely, which is called jailbreaking.
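For the curious, here's roughly what that "point out the error" loop looks like in code. This is just a minimal sketch using OpenAI's Python SDK; the model name and the example question are placeholders, and any chat-style LLM API works the same way:

```python
from openai import OpenAI  # assumes the official `openai` package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Conversation history: the original question plus the model's (possibly hallucinated) answer.
messages = [
    {"role": "user", "content": "Who won the 2019 Worms Armageddon world cup?"},
]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Point out the error; the model will often apologize and try again.
messages.append({"role": "user", "content": "That's not right, please double-check."})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```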

By the way, some companies actually make their LLMs intentionally lie. The funniest example I can think of is this one 💀💀💀💀💀


I'm still surprised that you haven't tried LLMs thoroughly, Komo. They could provide so much value for you if you can find a suitable use case. You can check Chatbot Arena to test some of the premium LLMs for free.
In Chatbot Arena, you basically ask a question, two random LLMs answer it, and you choose the better answer. The model names are hidden until you give your verdict; then the winner gains points and the loser loses points, just like a TUS league (see the sketch below). There is a leaderboard (useful for identifying good models), and even a direct chat feature to, well, directly chat with a specific model.
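If you're wondering what "winner gains, loser loses" looks like under the hood, here's a toy Elo-style update in Python. Purely illustrative: the actual Chatbot Arena scoring and the K-factor of 32 used here are assumptions, not what the site necessarily runs.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(winner: float, loser: float, k: float = 32.0) -> tuple[float, float]:
    """Winner takes points from the loser, scaled by how surprising the result was."""
    surprise = 1.0 - expected_score(winner, loser)
    return winner + k * surprise, loser - k * surprise

# Example: an underdog (1400) beats a favourite (1600) and gains more points
# than it would for beating an equal opponent.
print(update(1400, 1600))  # -> roughly (1424.3, 1575.7)
```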