In September 1939, Europe entered a strange historical period known as the phony war, which lasted until the following spring. Having guaranteed Poland’s independence, Britain and France declared war when Germany invaded. But then nothing much happened for months beyond some naval skirmishes. Only with the blitzkrieg offensive of spring 1940 did the war turn real.

History produces these moments from time to time, liminal pauses when it’s clear that something has happened but it doesn’t yet feel as if it has. One of them is our AI revolution, which is underway but visible only in a few odd spots, such as the wave of cheating sweeping U.S. schools.

So when I warn people to prepare for what’s coming, a common retort is that they’ll believe it when they see it. Ethan Mollick, a professor at the University of Pennsylvania’s Wharton School, may insist that artificial intelligence is already good enough to disrupt many industries (including — gulp — my own). But if that’s the case, why isn’t it everywhere?

The answer is that while AI might be evolving faster than any technology in history, institutions can adapt only at the same old human speeds.

When I was a fresh-faced college graduate in the mid-1990s, I spent some years working as a secretary. (Though, in deference to my college diploma, my title was spelled “assistant.”) I did what secretaries had traditionally done: filed papers, answered phone calls and took messages, made photocopies, maintained calendars and typed letters that my bosses had dictated onto a tape recorder.

Today, I’d bet the managers in those roles do their own typing and filing. But even in the 1990s, my job was something of an anachronism. Voicemail took messages just fine, and desktop computers with spell-check meant that anyone could turn out a presentable letter. It would have made more financial sense to buy the managers computers preinstalled with a typing tutor than to pay me $25,000 a year to punch in the letters for them. So why didn’t that happen?

Well, because culture lags technology by quite a bit. For one thing, believe it or not, corporate bosses generally hate firing people, which is traumatic for them and bad for morale. For another, my bosses didn’t want to learn to type, in part because the transition period would have been quite annoying. Besides, managers of a certain stature had secretaries. Answering their own phones or showing clients into their own meetings would have made them feel — and look — less impressive.

Eventually, the financial practicalities overwhelmed the cultural lag, especially as computer-literate college graduates began working their way up the management hierarchy. But it took much longer than it had to, simply because managerial culture evolved more slowly than the technology. That was broadly true of all the digital capabilities we now take for granted, and it will also be true of AI.

So if you’re looking for early signs of the AI revolution, assume that the last place it will show up is in the heart of an established organization, with the possible exception of tech firms, which are likely to have the best grasp of the possibilities. (The CEO of Shopify recently told employees that before asking for more resources or a higher head count, they must prove that they cannot get the job done using AI.)

But mostly it’s likely to show up at the market’s margins. Start-ups, where every dollar is precious and there are no legacy workers to worry about, will probably use AI to write code or automate processes rather than hire expensive employees. I’d also expect it to take hold at some distressed firms, where desperation becomes the mother of invention.

When AI does penetrate the rest of the marketplace, it will probably start where new workers do: at the entry level, filling out forms, doing basic research and handling simple programming tasks. That will happen both because these are the kinds of tasks AI is already good at and because there will be less internal resistance to tighter limits on new hires than to major reorganizations or layoffs.

Even those changes will be uneven, however, because the “black box” nature of AI creates headaches for industries with significant regulatory or liability exposure. It’s hard to see how AI reaches its decisions. That’s fine for some kinds of writing, where it’s enough for humans to fact-check the final result. But it’s not good enough for judgments that are subject to second-guessing by regulators or juries, because those bodies will not accept “We don’t really know how the AI decided to do that” as an answer.

Regulators will also be under pressure to protect powerful incumbent groups from displacement. AI shows a lot of promise in reading images and diagnosing disease, but medical associations will do everything in their power to keep well-paid humans in the loop.

All of which means that AI’s phony revolution will go on longer than you might expect. But no one should mistake a temporary lull for a permanent condition. The storm is gathering, and those who don’t prepare now risk being swept away when it finally breaks.