ChatGPT touches every aspect of our lives, and now its reach seems to be extending to our physical health and well-being. Given the controversial history of AI chatbots, that should give us pause.
Amazfit recently announced that ChatGPT is coming to its GTR 4 smartwatch (it's unclear at this stage whether it will be available on other Amazfit smartwatches) under the name "ChatGenius". The demo showed a user asking ChatGenius questions such as "how to improve running performance" and "how to improve sleep quality," with the results displayed in a readout accessible via the watch's crown.
The responses in the demo were fairly generic, with answers to the running performance question including "focus on proper nutrition and hydration" and "make sure you get enough sleep and take regular breaks after your run." While this seems harmless enough, it's a path that leads to a potentially dangerous destination.
Of course, anyone can already ask ChatGPT a health question on a computer or phone and receive an auto-generated response. But smartwatches are deeply tied to a user's health and well-being, and wearers are far more likely to use the feature as advertised and actively seek out health and fitness advice from it. And to me, that's a big mistake.
You don't have to look far online to find questionable health and fitness advice from Instagram, YouTube, and TikTok influencers. Whether someone is telling you to eat raw liver and animal organs, perform bizarre exercises, run barefoot, or take untested supplements for dramatic results, it's easy to get confused and misled.
When you've been training properly for months and seeing disappointing results, it's tempting to take advice from someone who looks like a Greek god on social media, regardless of their merit or lack thereof.
ChatGPT is no different. An AI scraping data from the internet and dispensing fitness advice without the oversight of qualified nutritionists and personal trainers should be cause for concern.
My colleagues have already tested the AI service to its limits. It struggled to code a simple game and even cheated at tic-tac-toe, exposing flaws in the current iteration of the technology. Health advice should come only from trained professionals, even when the questions seem harmless at first glance.
ChatGPT draws its data from scraped websites, and there is so much fitness hype online that some of this misinformation will inevitably be baked into its answers. I wouldn't trust ChatGPT, for example, to write an 18-week training plan for a first marathon, or a diet plan for building muscle. Such things should be properly regulated.
There are the obvious physical health considerations, with a large unregulated service potentially recommending dangerous or unhealthy practices, but mental health considerations come into play too. There is no way, for example, to know whether the user in question suffers from anorexia or bulimia.
If users ask about the fastest way to build muscle, is a steroid recommendation a risk? And how long would it take the folks at Amazfit and OpenAI to shut down that particular line of questioning?
For now, this video from Amazfit is all we have to go on regarding the smartwatch maker's plans for ChatGenius. Based on the questions asked in the short demo, Amazfit clearly intends for it to answer health and fitness-related questions. Hopefully there are some solid, restrictive guardrails in place, otherwise things could get ugly very quickly.
I wouldn't trust a medical diagnosis from someone without a medical degree, and I'm guessing you wouldn't either: without the degree, they simply aren't qualified. So why would you take fitness advice from something that doesn't even have a physical body? ChatGPT is not qualified to answer these questions, and stitching together answers from online health tips of wildly varying quality can only lead to disaster.