AI models are often overconfident. A new MIT training method teaches them self-doubt, improving reliability and making them more trustworthy.
I wonder if this might have the side effect of reducing confabulations: if the urge to produce any response at all gets modulated by low confidence in whatever random baloney the model was about to emit, maybe it just abstains instead?
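The abstention idea above can be sketched in a few lines. This is a hypothetical illustration, not MIT's actual method: gate the model's answer on its self-reported confidence, and return an explicit "I don't know" rather than a low-confidence guess. The function name, the `(answer, confidence)` pair format, and the threshold value are all assumptions for illustration.

```python
def answer_or_abstain(candidates, threshold=0.7):
    """Hypothetical confidence gate (illustrative only).

    candidates: list of (answer, confidence) pairs from a model.
    Returns the top answer if its confidence clears the threshold,
    otherwise abstains instead of confabulating.
    """
    best_answer, best_conf = max(candidates, key=lambda c: c[1])
    if best_conf >= threshold:
        return best_answer
    return "I don't know"

# A confident candidate passes through; random baloney gets filtered.
print(answer_or_abstain([("Paris", 0.92), ("Lyon", 0.05)]))
print(answer_or_abstain([("baloney A", 0.31), ("baloney B", 0.28)]))
```

Whether trained self-doubt actually produces usable confidence scores like this is exactly the open question the comment raises.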