We’re used to chatting with AIs, and now it seems we can’t tell them apart from real people. That’s a sign it’s time for you to take advantage of the ways AI can boost your business.

Everyone from cocky entrepreneur Elon Musk to inventor and futurist Ray Kurzweil, along with plenty of other kibitzers, has predicted that the moment when artificial intelligence becomes as smart as, and then smarter than, a human is very near. Possibly as soon as 2027. When it happens, it will be a watershed, because our human-made digital friends may become more useful in the workplace than many people, upending the economy.

Fresh research suggests chatbots are already at human levels of intelligence in some respects, which should make everyone from politicians to megacorporations to one-person businesses sit up and take note: OpenAI’s GPT-4 model fooled people into thinking it was human more than half the time.

News site TechRadar reports on work by researchers in the Department of Cognitive Science at the University of California, San Diego: The team carried out a classic, controlled Turing test to see how talking to modern AI chat systems compares with talking to a real person.

In the experiments, 500 participants each had a 5-minute, text-based conversation with another party, then had to say whether they thought the other participant in the chat was human and explain why. They were chatting with either a real person, OpenAI’s GPT-3.5 or its more advanced GPT-4 model, or ELIZA, an experimental chatbot built at MIT in the 1960s that used simple rules to answer queries and was partly designed to mimic classic psychotherapy.
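For a sense of just how simple those rules were, here is a minimal, hypothetical sketch of ELIZA-style pattern matching in Python. It illustrates the general technique only; the rules below are made up for this example and are not Joseph Weizenbaum’s original script:

```python
import re

# ELIZA-style rules: spot a pattern, turn it back on the speaker as a question.
# These three rules are invented for illustration, not taken from the 1960s program.
RULES = [
    (re.compile(r"\bi need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(message: str) -> str:
    """Return the first matching rule's reflected question, or a stock deflection."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # The real ELIZA also swapped pronouns ("my" -> "your"); skipped here for brevity.
            return template.format(match.group(1))
    return "Please, go on."

print(respond("I am worried about the economy"))
# -> How long have you been worried about the economy?
```

Rules like these deflect rather than understand, which makes ELIZA’s showing against modern participants all the more striking.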

The results of the test are stunning. Human chat partners were correctly identified as human 67% of the time, which makes sense: Our brains are very good at spotting the subtleties that give away real human interaction (though participants still got it wrong in one of every three human chats). ELIZA performed surprisingly well for 60-year-old tech, with participants judging it human 22% of the time. But ChatGPT’s GPT-3.5 model was judged human 50% of the time, and GPT-4 54% of the time. As TechRadar puts it, that means the San Diego experiment’s volunteers were “no better than chance” at identifying GPT-4 as AI.

The Turing test, named for famed early computing pioneer Alan Turing, has long been considered the core criterion for deciding how smart an AI is. Though Mustafa Suleyman, co-founder of Google’s DeepMind AI division, has suggested that a more modern, meaningful version should ask whether an AI is smart enough to act as a CEO, the Turing test remains useful.

Since many interactions with current-generation AI systems happen via text-based chat, the failure of so many test subjects in the San Diego experiment to distinguish an AI chatbot from a human is meaningful.

The lesson from the experiment is clear: AI isn’t just hype. Increasingly capable AI models keep being released, with OpenAI’s newest model, GPT-5, just around the corner, and they’re only going to get more convincingly human. If your company has been slow to adopt AI tech, you should probably speed up that process or risk being left behind by your competitors.

Or you could choose to avoid AI technology, employ only human staff and differentiate yourself from competing businesses by trading on the old-fashioned vibe: “We build our products using real people only.”

But if you try this route, you might want to learn from some old Samsung and LG mistakes.

About 10 years ago, both companies tried smartphone slogans that landed weirdly: “Designed for humans” makes you wonder whether other phones are meant for aliens, and “the most human phone ever” was not a winning tagline either.

Maybe they should test those slogans on an AI chatbot.