I swore that last piece I wrote would be one-and-done, but here I am pulling out my soapbox again. We’ve got to talk. About ChatGPT. Yes, we’re going to dive more into it. We’ve got a lot to unpack since I wrote last about it. Tuck in, this might be a long one.
No, ChatGPT isn’t AI
You’ll hear the word AI lobbed around for tools like ChatGPT and even image-generation models like DALL-E. But calling ChatGPT “AI” is for people who think Star Wars is a great way to study science or that SparkNotes is the same as reading the book. It’s a fun time, and it can help a little bit, but it’s not exactly true.
ChatGPT is actually a large language model (LLM), a type of machine learning (ML). Hang with me, because I promise this is important to ChatGPT’s shortcomings. Machine learning has to pull from existing content, creations or databases in order to put out something “new.” True artificial intelligence, according to many researchers and scientists in the field, replicates human synapses (and how we think) to solve complex problems. True AI is on its way, make no mistake about that.
But ChatGPT isn’t making something entirely new whenever it gives you a breakup letter draft or a workout routine. As I mentioned in the previous blog, it’s taking what already exists online and smushing it together in the way the algorithm thinks best addresses your question. It’s scouring through existing guides and loads of content to produce something semi-coherent that does the job.
It’s critical to remember: It’s not creating.
The current conflation of ChatGPT with AI is causing problems.
ChatGPT is not a person
Firstly, it’s causing people to treat ChatGPT like a person. It’s not a person. Despite the fears stoked by I, Robot or Wall-E, it’s not going to gain sentience. You don’t have to apologize to it. It’s an amalgamation of text written by real humans. However, treating it like a real person (or even a real creative) 1) diminishes the value of real creatives, 2) waters down the artistry, skill and talent needed to put art out into the world and 3) gives people an excuse not to hire more human labor for roles that humans are better suited for.
(Sidebar: I’ll still never understand why we don’t have machine learning in more safety-critical roles like warehouse labor or processing plants but instead it’s trying to do the things that fundamentally make us human — connect through creation.)
We’re using critical thinking less
The second issue is that a growing dependence on ChatGPT is causing us to skip steps in our own critical thinking. Let’s unpack one of the most famous “whoopsies” ever caused by ChatGPT. A few months ago, a legal team used ChatGPT to write a court filing. Not an outline, not a rough draft, not even ideation — they had it write the whole brief. Ethics aside, ChatGPT made one HUGE error: it invented cases that didn’t exist and claimed those cases supported the lawyers’ argument. When put before the judge and the opposition, the argument fell apart because the precedents cited were fake, and the lawyers who used ChatGPT were sanctioned and fined. The lawyers tried to blame ChatGPT for the error, but OpenAI itself warns that ChatGPT can simply make up information when it lacks evidence to draw on.
This everyday reliance on ChatGPT has led many people to stop questioning what they read. There’s a misunderstanding that because “AI wrote it,” it doesn’t need to be double-checked and the facts don’t need to be confirmed. I guess the logic is, “Well, it’s supposed to save me time, so why should I bother double checking? These look right.”
You might not be writing legal briefs with ChatGPT, sure. But you could be using it for ideas on brand taglines, company names, mottos or slogans for t-shirts. Because ChatGPT is pulling from existing content, there’s a good chance that what it creates already exists as-is. If you’re not willing to double and triple check what ChatGPT produces against what’s already out there, you run the risk of using trademarked or copyrighted language in your own branding.
People are trying to use ChatGPT to solve human-first problems
The third issue is that people are using ChatGPT to offset human interaction. This point is an extension of the first, but with some practical applications. A recent New York Times article mentioned that machine learning language models can help improve a doctor’s bedside manner.
However, it’s a patch — and one that wears off quickly if patients find out they’re talking to machine learning rather than a human. This is why many chatbot assistants on websites acknowledge when there’s no person behind them; it helps the end user manage expectations. Many therapists also caution against using AI-based language models to supplement human interactions. There are extreme cases of AI interactions gone wrong, but at the end of the day, if something you’re doing requires a human touch or sincerity, don’t let ChatGPT do the work.
So now for the real reason you clicked on this blog:
Really, Really Bad Ways to Use ChatGPT
- To Write Any Legal Documentation – See point #2 above if you only skimmed this article and missed the story.
- Company Apologies or Customer-facing Responses – We get it; dealing with customers can be frustrating, especially if they’ve had a bad interaction with your brand. However, thanks to the prevalence of ChatGPT responses, customers are getting really good at sniffing out bots. A great way to ensure your brand value takes a hit online (and possibly in-stores) is to send a meaningless, robo-generated response for all of Google to see.
- Creating Song Lyrics – This is especially bad if you’re trying to make a song to perform yourself…
- Pulling Data for a Report – ChatGPT’s training data has a cutoff, so its knowledge is roughly two years out of date. (Don’t believe me? Ask it about COVID. It still thinks COVID is a new disease.) There are plenty of verified, valid databases that offer real research you can use.
- Writing Heartfelt Sentiments of Any Kind
- Trying to Rewrite Important Company Documentation – When you share information with ChatGPT, it can be used to inform responses to other users. The more information you give it, the more it has to pull from. The last thing you need is to violate compliance regulations because you didn’t know how to reword something.
- Doing Relevant Industry Research for Your Company – Again, its training data is roughly two years out of date.
- Writing Target Audience Descriptions for Your Team – Any business owner should be able to explain who the target market is for their product or service. That should never be an AI answer.
Really Smart Ways to Use ChatGPT
- Creating content outlines that you’ll then flesh out with your own research.
- Giving you fun ideas for parties or events
- Kickstarting ideas for marketing themes
- Having ChatGPT explain really difficult concepts to you – But even still, consult with an expert in the respective field to make sure the explanation ChatGPT gave you is accurate.
- Coming up with WWE intros for your niece or nephew. (I say this from personal experience: it’s fun.)
So, what have we learned today?
- ChatGPT isn’t human.
- ChatGPT shouldn’t be used as a replacement for human-based interactions.
- ChatGPT does NOT create new content but compiles information from across the internet, and it should always be double checked for factual inaccuracies.
ChatGPT shouldn’t be the backbone of your marketing. If you’re struggling with ideas or concepts, ChatGPT isn’t the answer. You’ll just fade into the background, because, at the end of the day, you’re recycling someone else’s materials instead of addressing the needs of your own consumers.
At On Target, you’ll get a team of creative, passionate people who know that reaching your target audience is essential for making a difference in your business. We also know that Google’s algorithm is moving in the same direction we’re preaching — a human-first approach. If you’re ready to keep humanity in your marketing, let’s talk.