WASHINGTON >> Jim Duggan uses ChatGPT almost daily to draft marketing emails for his carbon removal credit business in Huntsville, Ala. But he’d never trust an artificial intelligence chatbot with any questions about the upcoming presidential election.

“I just don’t think AI produces truth,” the 68-year-old political conservative said in an interview. “Grammar and words, that’s something that’s concrete. Political thought, judgment, opinions aren’t.”

Duggan is part of the majority of Americans who do not trust AI-powered chatbots or search results to give them accurate answers, according to a survey from The Associated Press-NORC Center for Public Affairs Research and USAFacts. About two-thirds of U.S. adults say they are not very or not at all confident that these tools provide reliable and factual information, the poll shows.

The findings reveal that even as Americans have started using generative AI-fueled chatbots and search engines in their personal and work lives, most have remained skeptical of these rapidly advancing technologies. That’s particularly true when it comes to information about high-stakes events such as elections.

Earlier this year, a gathering of election officials and AI researchers found that AI tools did poorly when asked relatively basic questions, such as where to find the nearest polling place. Last month, several secretaries of state warned that the AI chatbot developed for the social media platform X was spreading bogus election information, prompting X to tweak the tool so it would first direct users to a federal government website for reliable information.

Large AI models that can generate text, images, videos or audio clips at the click of a button are poorly understood and minimally regulated. Their ability to predict the most plausible next word in a sentence based on vast pools of data allows them to provide sophisticated responses on almost any topic — but it also makes them vulnerable to errors.