Ay, but “knowing” is not a feature AI can have. It also can’t comprehend what the “whole” (100%) of a thing is. The models make calculations based on the data they have and on what is likely related, so confidence levels are applied, and a data engineer or programmer can set the thresholds for when the model seems sure of its output and when it should instead say “I do not know” (or some other appropriate phrasing); a rough sketch of that kind of gating is below.

I was once part of a dev project where we were sooo excited when the model confidently, and in practically all the right situations, said it did not have an answer but offered links to potentially relevant sources (instead of trying to give one). It’s a hard balancing act to get right.

“Lie” implies intent on the machine’s part. More likely, this facet of the interaction simply hasn’t been implemented, either at all, or properly, or in the way the user expects or wants. A system that gives bad outputs comes down to bad programming, happenstance, applying a model to something it wasn’t intended for, incompetence, or possibly an external attack. I just had this happen, and I’m pretty sure there was no malice involved: it was just too helpful in using data that was actually guesswork, and it couldn’t tell the data was bad quality (and I was part of that).
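To illustrate the kind of confidence gating I mean, here is a minimal sketch. Everything in it (the `ModelResult` shape, the 0–1 confidence scale, the threshold value, the URLs) is a made-up placeholder rather than the API of any particular framework; a real system would plug in its own model outputs and tune the threshold against its own data.

```python
# Minimal sketch of confidence gating: answer only when the model is
# confident enough, otherwise admit uncertainty and offer sources.
# All names, values, and URLs here are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class ModelResult:
    answer: str
    confidence: float   # assumed to be a calibrated score in [0, 1]
    sources: list[str]  # candidate reference links


CONFIDENCE_THRESHOLD = 0.75  # tuned by the engineer for the use case


def respond(result: ModelResult) -> str:
    """Return the answer only above the threshold; otherwise say
    'I do not know' and point at potentially relevant sources."""
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return result.answer
    links = "\n".join(f"- {url}" for url in result.sources)
    return (
        "I do not have a reliable answer to that.\n"
        "These sources might help:\n" + links
    )


# Example: a low-confidence result falls back to "I don't know" plus links.
print(respond(ModelResult(
    answer="Probably 42?",
    confidence=0.40,
    sources=["https://example.com/docs", "https://example.com/faq"],
)))
```

The hard part in practice is not this branch but choosing the threshold, which is exactly the balancing act I mentioned above.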