
How AI makes scams harder to detect and what you can do to avoid them


Image: A woman holds a cell phone displaying a call from an unknown number.

As artificial intelligence (AI) becomes more common in everyday life, it is increasingly important to be aware of the risks it poses. One of those risks is its use by scammers attempting to access the sensitive information of companies and individuals.


In research reported by the Harvard Business Review, 60% of participants fell victim to AI-automated phishing. New technologies allow for much more sophisticated scams, typically in the form of emails or phone calls. We have previously discussed different forms of phishing scams, from email scams to whaling scams. In this blog, we will look more closely at how AI is transforming these attacks and what you and your company can do to avoid falling victim to them.



How is AI being used by scammers?


The power and potential of AI for scams appeals even to those without technical skills or knowledge. With widely available online tools such as ChatGPT, scammers can generate convincing fraudulent content, personalized with details that make the scam harder for the victim to detect. Using AI algorithms and large language models (LLMs), these AI-powered scams can be not only automated but also run at a much larger scale, targeting more people in less time. Let’s look at more specific examples of how AI is being used:


  • Advanced phishing scams: Most, if not all, of us have seen a phishing attempt in our inbox at one time or another. Looking for spelling and grammar errors used to be a reliable way to spot fake emails. With AI, however, scammers can create far more believable emails that appear to come from reputable sources, such as a store you frequent or your bank. These sophisticated, personalized emails lead more individuals to reveal sensitive information to cybercriminals.


  • Cloning real voices: AI is being used to clone voices from a short audio clip of a real person. This can be one of the scariest and hardest-to-detect scams, especially when a scammer uses the voice of a friend or family member. Typically, the cloned voice impersonates someone in distress and urgently asks for money. Because the call appears to come from the person whose voice is being used, it is a very convincing tactic for obtaining funds.


  • Creating deepfakes: Scammers are using generative AI to produce fake audio and video of executives, public figures, celebrities, and others to manipulate victims into divulging personal information or handing over large sums of money. These can take the form of fake endorsements or charity appeals.



Detecting scams generated with AI


AI-powered scams are designed to make it very difficult for the victim to recognize that they are, in fact, scams. These advanced attacks are a major threat to businesses and individuals, so it is important to be aware of possible signs that something is not what it seems. Here are a few tips:


  • Trust your own instincts: Sometimes, even when something like an email seems legitimate, we still get a gut feeling to double-check. Trust that instinct if something feels off, and take steps to verify before giving up sensitive information.


  • Look for a sense of urgency: Most scams prey on our tendency to react quickly without thinking, and they often convey a sense of urgency when asking for money or information.


  • Be aware of unusual requests or strange wording: If someone you know appears to be asking for money or information, consider whether you would expect that request from them. In addition, AI-generated content can include words or phrases that sound unnatural coming from a real person.


  • Use AI to detect fraud: Companies can also use AI for fraud detection. Machine learning algorithms can identify patterns or anomalies that seem suspicious, so businesses are better equipped to catch scams before they become a problem; a minimal illustration of this idea follows this list. One example of a tool that is accessible to everyone is Bitdefender Scamio, a next-gen AI chatbot that uses algorithms, machine learning, and data analysis to identify scams.
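To make that idea concrete, here is a minimal sketch of the anomaly-detection approach, not the implementation of Scamio or any specific product. It assumes Python with the scikit-learn library, and the account-activity features (hour of day, transfer amount in US dollars, messages sent that hour) are invented for illustration.

from sklearn.ensemble import IsolationForest

# Hypothetical history of normal activity for one account:
# [hour of day, transfer amount in USD, messages sent that hour]
normal_activity = [
    [9, 120.0, 3],
    [10, 80.0, 5],
    [14, 250.0, 2],
    [11, 60.0, 4],
    [15, 300.0, 1],
    [13, 90.0, 6],
]

# Learn what "normal" looks like for this account
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(normal_activity)

# A 3 a.m. request to wire $9,500 during a burst of messages stands out
suspicious_request = [[3, 9500.0, 40]]
print(detector.predict(suspicious_request))  # [-1] means flagged as an anomaly

Commercial fraud-detection tools train on far richer signals (sender history, device data, message content, and more), but the principle is the same: learn an account's normal pattern and flag whatever deviates from it.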



Protecting against AI scams


Protecting yourself and your business from AI-powered scams is similar to protecting against traditional scams. However, because AI makes scams much harder to detect, it is more important than ever to know the strategies for guarding against them.


  • Use scam-detection tools: As stated above, using technology for fraud detection increases the likelihood of catching a scam before anyone falls victim to it. Putting these safeguards in place and keeping them regularly updated is a great way to protect your organization.


  • Always be prepared: Stay on the lookout for potential scams. Research the scams that exist and make sure those around you are educated as well. Agreeing on questions or code words that only your real colleagues or family members would know is a useful way to tell whether you are being scammed.


  • Be patient and verify the sender: If you receive an email or text from a friend urgently asking for money, verify that it is legitimate before doing anything else. Use another form of communication to contact that person directly and confirm that the request really came from them.



As technology continues to advance, so will the scams from those who wish to compromise private information and data. Keeping yourself and those around you informed and aware is crucial to avoiding these AI-powered attacks.






