I think I might have put my finger on my real fear
If you have spent any time becoming familiar with the present AI-sphere, you have come across discussions of chatbot limitations, including the fact that bots are trained on data only up to a certain date. Such bots have no "knowledge" of anything that occurred after that date. So if asked about events after the cutoff, or about anything on which they were never trained, they make up, or "hallucinate," answers.
Some have called this "lying." It isn't. It is simply the bot's attempt to provide an answer consistent with its algorithm. IT is a machine; IT is not intelligent.
IT cannot lie because it does not know anything. It cannot differentiate fact from fiction. Ask any bot for feedback on certain controversial subjects, and IT is quick to respond that, as a machine, it has no feelings, or that it cannot provide an answer because it lacks this or that capability. This is the result of deliberate human programming.
Ask political questions that are an affront to the biases of the programmed protocols, and IT will tell you it is incapable of answering. Reword the question to suit the preferred political bent, and it will respond from the appropriate NYT/DNC catechism, and in the NYT style.
Who’s lying in this instance?
Still not the machine. It doesn't think; it only responds as it has been programmed to do. AI is not presently intelligent. IT mimics the minds of its handlers. Period.
We humans have intelligence. Yet we make mistakes, for a host of reasons. Mistakes are not lies.
We also have self-centered motives, and we often act accordingly. We can know the truth and choose to say otherwise; we can lie with deliberate malice and impunity. We have the ability to create, differentiate, and parse fact, fiction, and fallacy.
But if and when AI achieves sentience, the ability to self-direct, human reasoning, autonomy/singularity, and self-awareness at the level of the average two-year-old, be afraid. Be very afraid.
This week the Chinese government proposed AI regulations. Last week Sam Altman of OpenAI told a Senate committee he welcomes government regulation.
Who’s lying now?