AI is being used in online fraud: what is the way to avoid it?


Whether you are drafting an email, creating a piece of art, or even impersonating a friend or relative of someone in trouble, AI is ready to help. AI is multifaceted! But here we discuss how to avoid becoming a victim of online fraud committed with its help.


In the last few years, AI technology has made significant leaps in generating everything from text and audio to images and videos. As a result, such media content can now be produced cheaply and easily.





The same type of tool that helps artists visualize imaginary monsters or spaceships, or helps non-native speakers improve their English, can also aid malicious activity.


Fraud itself is a social problem that has existed for years, but generative AI is making such scams easier, cheaper and more persuasive. It is not possible to prepare a complete list of where and how AI can be used in such fraud, but there are some measures people can adopt.



1. Voice imitation of family members and friends


Imitating someone's voice is not new, but developments in technology over the last year or two have made it possible to create a convincing clone from just a few seconds of audio. This means that anyone whose voice has been publicly broadcast - in a news story, a YouTube video or even a short phone call - is at risk of being cloned.


Fraudsters can use this technique to fake the voice of a loved one or friend, and the fake voice can be made to say anything. A fraudster can, for example, prepare a voice clip pleading for help.


For example, a parent may receive a voicemail from an unknown number, in a voice that sounds like their son's, saying his belongings were stolen during a trip and asking the parent to send money to a certain address; the caller explains the unfamiliar number by claiming someone lent him a phone. Fraudsters can set up many such situations to trap people.


A similar case of fraud imitating the voice of US President Joe Biden has already come to light. Even though the person behind that incident was caught, fraudsters are likely to be more careful in the future.




How to deal with voice cloning, i.e. voice imitation?


The first thing to know is that fake voices are becoming difficult to identify. Fraudsters are improving day by day, and there are many ways to make a cloned voice sound genuine, so even people who insist "it could never happen to me" can be swayed.


Therefore, be suspicious of messages from unknown numbers, emails or social media accounts. If someone claims to be a friend or close relative, contact that person through another channel. Most likely, they will tell you they are fine and that you have run into a scammer.


Scammers don't stick around when ignored, while family members will likely try to make contact through other means as well. It is fine to leave a suspicious message on "read" while you verify it.




2. Targeted phishing and spam via email and messaging


We all receive spam messages now and will continue to receive them in the future. But AI that can generate written content has made it possible to send customized emails to huge numbers of people. With regular incidents of data theft, your personal data and information may already be in fraudsters' hands.


It doesn't take much effort to use leaked data to get someone to click a scam email that says "click here to view your invoice", because details such as your recent location, purchases and behavior can make the email look real and trustworthy.


In this case, by weaponizing a few personal facts, a language model can customize thousands of emails within seconds. Instead of spam that says "Dear Customer, Please Find Your Invoice Attached," it might say, "Hi Ram! I'm from Daraz's marketing team. The item you're looking for is now 50 percent off! If you click here to get the discount, delivery will also be free in your Khumaltar area."


From this simple example, you can see that spam emails and messages laced with details like your name, shopping habits (which are easy to find) and location can easily fool you. In the days to come, even messages and emails that feel personally targeted may well be spam.


Until now, such customized spam was prepared by content farms in foreign countries working for small sums of money. LLM technology can now do this work in better language than many skilled professionals.




How to deal with spam email?


In the case of traditional spam, vigilance is your main weapon. But now you can no longer easily determine from the language alone whether a message was written by AI.


Therefore, do not click or open any link until you are 100 percent sure of the sender's authenticity and identity. If you are even slightly unsure, don't click. If you have a knowledgeable person among your contacts, you can forward the message to them for confirmation.
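One simple check before clicking is to look at where a link actually points, not at the text it displays. The sketch below, using only Python's standard library, flags links whose real destination is not on a list of domains you trust; the trusted domains and URLs are illustrative assumptions, not from the original article.

```python
from urllib.parse import urlparse

# Illustrative list: domains you actually do business with.
TRUSTED_DOMAINS = {"daraz.com.np", "esewa.com.np"}

def is_suspicious(url: str) -> bool:
    """Flag links whose real host is not a trusted domain.

    Scam emails often display one address in the text but link
    elsewhere, or use look-alike hosts such as
    'daraz.com.np.login-verify.com'.
    """
    host = (urlparse(url).hostname or "").lower()
    # Accept an exact match or a genuine subdomain of a trusted domain.
    return not any(
        host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS
    )

print(is_suspicious("https://www.daraz.com.np/invoice"))         # False: real subdomain
print(is_suspicious("https://daraz.com.np.login-verify.com/x"))  # True: look-alike host
```

The key point the code makes is that "daraz.com.np" appearing somewhere in the address is not enough; the trusted name must be the actual host or its parent domain.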


Given the data thefts of the last few years, it is safe to say that almost all of us have a substantial amount of personal data available on the dark web. If you follow proper online security practices, change passwords regularly and enable multifactor authentication, you can mitigate many such risks.
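Multifactor authentication helps precisely because the one-time code is computed from a secret the scammer does not have, not from leaked personal details. As a minimal sketch, this is roughly how a TOTP authenticator app derives its six-digit codes (per RFC 6238, using only Python's standard library); real apps add clock-drift tolerance and secure secret storage.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", time = 59 s.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # prints 287082
```

Because the code changes every 30 seconds and depends on the shared secret, a fraudster who knows your date of birth and phone number still cannot produce it.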


Even so, generative AI may pose new and serious risks. With data available online and a clip or two of someone's voice, it is becoming increasingly easy to create an AI profile that can impersonate the targeted person.


For example, consider this situation: you are having trouble logging in to an account, you cannot properly configure your authentication app, or you have lost your phone. You call customer service, and they identify you based on personal details such as your date of birth, phone number or social security number.


A more advanced attack uses AI to generate a matching selfie for identity verification. In some places, AI already serves as the customer service agent; in that case, someone else could gain access to your account by "verifying" your identity with your leaked personal data.


To avoid such risks, use multi-factor authentication, and do not ignore any email or message warning that someone is trying to access your account.




3. AI-generated deepfakes and blackmail


Perhaps the most frightening of the latest AI scams is deepfake images and videos, and the blackmail that uses them. Widely available tools have made it easy to generate a naked body and attach someone else's face to it so that it looks real.


Previously, such blackmail required hacking someone's private pictures, or a third party obtaining them from an ex-boyfriend or ex-girlfriend. But AI can now create real-looking nude photos for scams without any of your intimate photos.


Anyone's face can be attached to an AI-generated body. Such a picture is not always convincing, but by pixelating it or keeping it at low resolution, scammers can still fool you or the people around you.


The blackmailer then demands money to keep such frightening pictures secret. If you pay under pressure, you may end up being asked for money again and again.

