
ChatGPT’s Flub: How AI Chatbots Are Fueling Phishing Scams


Oops! Your AI Assistant Might Be Sending You to a Scammer’s Website

Hello everyone, John here! It feels like every day we find new and amazing ways to use Artificial Intelligence, or AI. We ask our favorite AI chatbots, like ChatGPT, for recipes, travel ideas, and quick answers to homework questions. They feel like super-smart assistants, right? But what if that helpful assistant, with all the confidence in the world, gave you the wrong directions and sent you straight into a trap? It sounds like a movie plot, but a recent discovery shows this is a very real, and very new, danger online.

My assistant, Lila, is here with me to help break it all down. Say hello, Lila!

Hi everyone! I’m excited to learn. This sounds a little scary, John.

It can be, Lila, but knowledge is our best defense. Let’s dive into what’s happening.

The Shocking Discovery: AI’s Big Website Blunder

Imagine you want to log in to your British Airways account to check a flight. You ask your AI chatbot, “What’s the official website for British Airways?” The AI quickly replies with a link. You click it, and it takes you to a website that looks exactly like the real one. You enter your username and password… and nothing happens. Or so you think.

A new report from a company called Netcraft, which is like a neighborhood watch for the internet, found something alarming. They discovered that AI chatbots are often making up website addresses for huge, well-known companies. For British Airways, one AI suggested a web address ending in “.ba”. For the mobile company T-Mobile, it suggested one ending in “.tm”.

The problem is, these aren’t the real websites! The real British Airways website is `britishairways.com`. These other addresses the AI suggested are either completely fake or, even worse, could be traps set by criminals.

Think of it like this: You ask a very confident, fast-talking tour guide for the way to the city’s most famous museum. They point you down a street with absolute certainty, but they’re actually sending you to a fake gift shop run by someone who wants to pick your pocket. The AI, in this case, is that confident but mistaken guide.

A Trap Called “Phishing”

Lila: John, you keep mentioning criminals and traps. You also used the word “phishing” in the title. What exactly is that? It sounds like going fishing.

That’s a great question, Lila, and your thinking is spot on! “Phishing” is a term for a specific kind of online scam, and it works a lot like real fishing.

Here’s how it works:

  • The Bait: A scammer creates a fake website that looks identical to a real one—like a fake banking site, a fake social media login page, or a fake online shop. This fake website is the bait.
  • Casting the Line: They send this bait to you, usually through a fake email or a text message that says something urgent, like “Your account has a problem, log in here to fix it!” In this new scenario, the AI is the one unintentionally “casting the line” by giving you the bad link.
  • The Catch: If you fall for the bait and click the link, you’re taken to their fake site. When you type in your username, password, or even your credit card number, you’re not sending it to the real company. You’re handing it directly to the criminals.

They “phish” for your personal information, and it’s one of the most common ways people get scammed online.
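The "catch" step above works because a fake page can copy the real site's look pixel for pixel, but it can never copy the real site's hostname. Here's a minimal Python sketch of that idea, using the standard library's `urlparse`; the `.ba` scam address is hypothetical, borrowed from the example earlier in this article.

```python
from urllib.parse import urlparse

# A fake login page can clone the real site's appearance, but the
# hostname in the address bar gives it away. (britishairways.ba is a
# hypothetical lookalike, not a confirmed scam site.)
real = urlparse("https://www.britishairways.com/travel/login")
fake = urlparse("https://britishairways.ba/travel/login")

print(real.hostname)                   # www.britishairways.com
print(fake.hostname)                   # britishairways.ba
print(real.hostname == fake.hostname)  # False: different sites
```

The paths look identical; only the hostname differs, and that's the one part a scammer can't forge.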

How Scammers Turn AI Mistakes into a Goldmine

So, how do criminals connect their phishing traps to the AI’s mistakes? It’s a clever, and nasty, new strategy.

The security experts at Netcraft realized that criminals have likely figured this out. Here’s their game plan:

  1. They “ask” the AI: Scammers can repeatedly ask chatbots for the websites of big companies, making a list of all the incorrect URLs the AI suggests.
  2. They buy the “land”: They check to see if these fake website addresses (which are called ‘domain names’) are available to purchase. Since the AI just made them up, they often are! Buying a domain name is cheap, like buying a small, empty plot of land on the internet.
  3. They build the trap: On that empty plot, they build a perfect copy of the real company’s website.
  4. They wait for victims: Now, they just wait. Sooner or later, an innocent person will ask their AI for that same company’s website, get the same wrong answer, and be led straight to the scammer’s fake site.

The AI is doing the criminals’ work for them! It’s creating a “phisher’s paradise,” as the original article calls it, where a steady stream of people are sent directly to these traps without the scammer even needing to send a fake email.
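One defensive habit follows directly from step 2 of that game plan: a hallucinated domain often doesn't exist at all, so you can at least check whether an AI-suggested address resolves in DNS before trusting it. Here's a small sketch using Python's standard `socket` module. Note the important caveat in the comments: a successful lookup is *not* proof a site is legitimate, because the scammer may have already bought that "plot of land."

```python
import socket

def resolves(domain: str) -> bool:
    """Return True if the domain has a DNS entry at all.

    Caution: resolving is necessary but NOT sufficient for safety.
    A scammer may have registered the hallucinated domain already,
    so a True result only means *something* is there.
    """
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

print(resolves("britishairways.com"))  # the airline's real domain
```

A made-up address that doesn't resolve is a strong hint the AI invented it; one that does resolve still needs the other checks described below.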

Why Is Our Super-Smart AI Getting This So Wrong?

Lila: This is all so strange. I thought AIs were supposed to be super-smart and have all the facts. Why would it just make things up? Is it lying to us?

Another excellent question. No, the AI isn’t “lying” in the way a person does. It’s not trying to deceive you. Instead, it’s experiencing something tech experts call a “hallucination.”

John explains AI Hallucinations: An AI like ChatGPT doesn’t understand information; it just recognizes patterns. It has been trained on a gigantic amount of text and images from the internet. It learns that certain words and phrases tend to go together. Think of it like a student who has memorized a million textbooks but hasn’t actually learned to think critically about the subjects. This student can write an essay that sounds incredibly smart, but they might accidentally mix up facts or invent details that sound correct because they fit the pattern.

When you ask for T-Mobile’s website, the AI knows “T-Mobile” is a company. It knows that websites often use abbreviations. It might see that `.tm` is the country code for Turkmenistan. In its pattern-matching “brain,” putting it all together as `t-mobile.tm` seems plausible, so it presents it to you as a fact. It’s not checking a database of real websites; it’s just generating the most probable-sounding answer based on the data it was trained on.
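To make that concrete, here is a deliberately silly toy in Python. Everything in it is invented for illustration; no real chatbot works this way. It just shows how "generate the most plausible-sounding pattern" can produce an address like `britishairways.ba` without ever checking whether that address is real.

```python
# Toy "hallucination" sketch: guess a URL by pattern-matching, with no
# lookup of real registrations. Purely illustrative, not a real model.
COUNTRY_TLDS = {"ba": "Bosnia and Herzegovina", "tm": "Turkmenistan"}

def plausible_url(company: str) -> str:
    initials = "".join(word[0] for word in company.lower().split())
    # If the company's initials happen to match a country-code TLD,
    # "name + .initials" *sounds* right, so the toy model emits it.
    if initials in COUNTRY_TLDS:
        return company.lower().replace(" ", "") + "." + initials
    return company.lower().replace(" ", "-") + ".com"

print(plausible_url("British Airways"))  # britishairways.ba (plausible, wrong)
print(plausible_url("Acme Tools"))       # acme-tools.com
```

The output *looks* authoritative, which is exactly the problem: pattern plausibility and factual accuracy are two different things.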

How You Can Stay Safe From AI-Powered Scams

Okay, we’ve talked a lot about the problem, but what’s the solution? The good news is that protecting yourself is pretty straightforward. It just requires a little bit of healthy skepticism.

Here are a few simple habits to build:

  • Treat AI as a Starting Point, Not a Final Answer: Use the AI’s response as a suggestion, not gospel. When it gives you a link, don’t just click it blindly.
  • Double-Check the URL: Before you enter any personal information, look at the web address in the top bar of your browser. Does it look official? For major companies, it’s almost always a simple `.com` address. Be suspicious of strange endings or weird hyphens (like `www.your-bank-login.net`).
  • Use a Trusted Search Engine: A safer bet is to go to Google or another trusted search engine and type in the company’s name. The official website is usually among the top results, but watch out for sponsored ads above it, and glance at the address before you click.
  • Bookmark Your Important Sites: For websites you use all the time, like your bank, email, or social media, visit them once, confirm you’re on the correct site, and then save it as a bookmark in your browser. That way, you can get there with one click every time, no AI or searching required.
  • Trust Your Gut: If a website looks slightly off, has spelling mistakes, or just gives you a weird feeling, close it. It’s better to be safe than sorry.
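The "bookmark" habit above can even be automated. Here's a minimal sketch, assuming you keep your own small list of hostnames you've already verified (the entries below are this article's examples, not an official allowlist): only trust a link if its hostname is on your list.

```python
from urllib.parse import urlparse

# Your personal "bookmarks": hostnames you have already confirmed are
# the real sites. (Example entries only; verify your own.)
KNOWN_GOOD = {"www.britishairways.com", "www.t-mobile.com"}

def looks_safe(url: str) -> bool:
    # Extract the hostname and check it against the verified list.
    host = urlparse(url).hostname or ""
    return host in KNOWN_GOOD

print(looks_safe("https://www.britishairways.com/login"))  # True
print(looks_safe("https://www.britishairways.ba/login"))   # False
```

This is exactly what your browser's bookmarks do for you: the comparison is against an address you verified once, not against whatever an AI or an email happens to suggest today.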

A Few Final Thoughts from Us

John’s View: To me, this is a powerful reminder that AI is a tool, and like any tool, it can be used improperly or have unexpected flaws. It’s not a magical box of truth. We are still in the driver’s seat, and we need to keep our critical thinking skills polished. This doesn’t mean we should be afraid of AI, but it does mean we need to be smart and careful users.

Lila’s View: As someone new to all this, I’ll admit this was a bit frightening at first! But learning how the trick works actually makes me feel more powerful, not less. Now I know what to look for. It’s like learning to spot a magician’s sleight of hand. I’ll definitely be double-checking my links from now on!

This article is based on the following original source, summarized from the author’s perspective:
ChatGPT creates phisher’s paradise by recommending the wrong URLs for major companies
