We hear a lot about generative AI apps like ChatGPT. Some people praise their usefulness, others complain about hallucinations and errors, and some predict they’ll lead us into a dystopian future.

I don’t know about the future, but I do know they are useful and that they sometimes make mistakes. If you use them thoughtfully and verify important information before acting on it, they can be incredibly helpful. I know this from experience because I use these tools several times a day in ways I never would have anticipated.

For instance, I recently bought a new bicycle seat. After I installed it, the seat tilted backward, and I couldn’t figure out why. I snapped a photo, uploaded it to ChatGPT, and asked if I had it in the wrong position. Seconds later, I learned I’d installed it backward. One quick reversal with an Allen wrench and it felt great.

Health and fitness

A few months ago, while testing a nonprescription Stelo continuous glucose monitor for a review, I noticed some occasional blood sugar fluctuations. I took a screenshot from the Stelo app, uploaded it to ChatGPT and Claude.AI, and asked them to interpret the graph. Both told me the curve looked normal and healthy, which gave me peace of mind until I could follow up with my doctor, who agreed with the interpretation.

I’ve also used it to help interpret reports from my Apple Watch, Fitbit and other fitness devices, using the same method of taking a screenshot from the app and asking for an interpretation. Unless you delete your history, ChatGPT retains memory, so it can not only interpret each image but also analyze it in the context of what it already knows about me from prior questions: my age, health, the medicines I take and my family history.

I’ve even asked it to help explain radiology and blood test results. A few months ago, I received a report with a finding that I didn’t understand. Instead of worrying until I could reach my doctor, I uploaded the report to ChatGPT. It explained the finding and reassured me it wasn’t serious, which my doctor later confirmed. Of course, there’s always a risk that AI could misinterpret results or provide overly alarming information, so professional medical advice remains essential.

When I was in Prague last summer, I had an infected thumb. Normally, I wouldn’t take medication without consulting a doctor, but it was 2 a.m. in a foreign country, and I had no access to health care, short of visiting an emergency room. I had brought some antibiotics as a precaution but wasn’t sure if they were appropriate. I asked ChatGPT for advice, and it suggested an antibiotic I had. Still, before taking it, I checked several reputable medical websites, which confirmed it was appropriate, and my doctor later agreed.

Financial

I also use it for financial insights, such as understanding California municipal bonds and calculating after-tax yields based on my federal and state tax brackets.
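To give a sense of the math (with illustrative numbers, not my actual brackets): if your combined federal and state marginal tax rate were 35%, a taxable bond yielding 5% would keep only 65% of that, or about 3.25%, after taxes, so a tax-exempt California muni paying more than 3.25% would come out ahead. ChatGPT can walk through that comparison for your own brackets, though the arithmetic is worth double-checking.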

I recently opened a brokerage account and was sent a 113-page PDF with terms and disclosures, far too much to realistically review. I asked ChatGPT to summarize it, then requested potential “red flags.” It quickly identified several concerning practices around conflicts of interest and compensation structures that I never would have spotted in the dense legal text, and it gave me a list of questions to ask my broker.

Sadly, it can’t predict tomorrow’s stock prices.

Privacy concerns

I know some of you may find this creepy or worry about privacy, and that’s fair. But for me, the benefits outweigh the risks. You can ask ChatGPT to “forget” what you’ve shared, but due to a pending legal case, OpenAI is currently required to retain some user content under strict legal protocols.

I’m aware of the potential risks, including data leaks, breaches or legal disclosures. That’s why I avoid uploading anything that could be exploited, such as my tax return, which includes my Social Security number on each page. In the end, everyone has to decide for themselves how much they’re willing to share with online services that, despite privacy promises, aren’t entirely immune to breaches or data exposure.

It makes mistakes

ChatGPT does make mistakes, but it usually gets things right, and that’s the real risk. When answers seem reliable, it’s easy to become overconfident and overlook occasional errors. If I catch a mistake, I ask it to reconsider, and it often corrects itself. Still, I never rely solely on AI for important decisions. I always verify with trusted sources and often ask ChatGPT to cite its sources, which it doesn’t routinely do unless prompted. Another generative AI tool, Perplexity, typically includes sources by default.

It can even misread documents. When I fed it the results of a hearing test, it initially told me I had a significant loss at a particular frequency. But when I reviewed the chart myself, I realized it had misread the PDF. I asked ChatGPT to take another look, and it found and corrected the error. If you’re working with important documents, it’s crucial to double-check against the original or have a qualified professional review it.

Shopping and travel advice

I also use ChatGPT to compare products. It can pull together information from professional reviews and user feedback across multiple sources, giving me a comprehensive overview of different products’ strengths and weaknesses. It’s not perfect. It sometimes leans on training data that can be outdated, so I still verify the recommendations independently.

It’s an excellent resource for travel advice, particularly for planning road trips and finding interesting stops along your route. Whether you’re traveling by car, plane or ship, it can suggest points of interest and worthwhile side trips anywhere in the world. If applicable, don’t forget to ask about senior discounts or free travel options in the places you’re visiting.

As a professional writer, I never use ChatGPT to write my columns, but I have asked it to proofread and suggest minor tweaks to paragraphs I’ve written. I also use it for research and idea generation, always double-checking any information. Occasionally, I’ve used it to help craft emails, especially when I need to express something potentially sensitive in a thoughtful and respectful way.

Try it yourself

More often than not, I’ve been pleasantly surprised by ChatGPT’s responses, even when I wasn’t sure it could help. If you’re wondering whether generative AI can assist you, the best approach is to give it a shot. You might also want to experiment with other options like Claude.AI, Perplexity, Google Gemini and Meta AI to find the one that works best for you.

Generative AI may make mistakes, but it would be your mistake to think it’s going away. Like it or not, it’s here to stay and will have a major impact on our lives.

Disclosure: Larry Magid is CEO of ConnectSafely, a nonprofit internet safety organization that has received financial support from Google, Meta and OpenAI, which are mentioned in this article.