Nude photos of one lakh women made public using deepfake technology


According to a new report, pictures that more than one lakh (100,000) women had posted on social networks such as Facebook and other online platforms have been manipulated with deepfake technology and made public.


A study released by Sensity, an Amsterdam-based company working on artificial intelligence, found that women's clothing had been digitally removed from their photos, which were then widely shared on the messaging app Telegram.




Deepfake is a technology that uses artificial intelligence and deep learning to transform a real person's photo or video into new, fabricated content.


Underground groups are reportedly using deepfake technology to manipulate photos that people have shared online, including those posted as part of various social media challenges, and to make the results publicly available.



The report also states that pictures of some minors have been made public. Giorgio Patrini, Sensity's chief executive, warned that anyone who shares photos publicly on social media could be targeted, since such images are being manipulated and widely shared on Telegram.



According to the report, the artificial-intelligence-powered bot operates inside a private Telegram messaging channel. Users can send the bot a photo of a woman, and it digitally removes her clothes within minutes.


According to the BBC, many of these photos were tested and found not to be fully realistic. Although a similar app was shut down last year, cracked versions of the software are still in circulation.


Those secretly involved in such activities have been targeting anyone who shares photos publicly online.


Sensity reports that, since July 2019, fake nude images of 104,852 women have been created and made public using deepfake technology.


Telegram did not respond to a request for comment. Similar incidents have reportedly occurred on the platform before, and the app has previously been officially banned in Russia.


What is Deepfake Technology?


Don't believe everything you see: it could be a deepfake.


Have you ever seen a video of Prime Minister KP Oli openly insulting an opposition leader? Or a pornographic video of Rekha Thapa? Or an offensive remark from someone holding an important position?


What do you do when you see such content on Facebook or another social network? Add to the sensation by sharing it immediately, or try to find out the truth?



On what basis do you judge whether such a scene in a video or picture is genuine or manipulated? How can you refuse to believe it when the voice, face, and expressions all appear to belong to the real person?


Algorithms and machine learning models built with AI have become so sophisticated that you can no longer believe everything you see and hear at face value.



Deepfake technology, developed by combining machine learning and artificial intelligence, can blur the line between real and fake content. The technique can map the face of any celebrity or other person onto your own.




The technology can also transfer one person's facial expressions onto another's face, leaving viewers unsure of what is genuine. With deepfake tools, a user can make any person appear to say whatever they want.
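As a purely conceptual illustration of how such face-swapping models are commonly structured, the sketch below shows the classic shared-encoder, two-decoder autoencoder in PyTorch. Every class name, layer size, and dimension here is a hypothetical example for explanation only, not code from any real deepfake tool.

```python
# Conceptual sketch: many face-swap deepfakes use one shared encoder and a
# separate decoder per identity. Illustrative example only; names are invented.
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder learns a person-independent representation of a face.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        )
        # One decoder per identity; each learns to reconstruct only that face.
        self.decoder_a = self._make_decoder()
        self.decoder_b = self._make_decoder()

    def _make_decoder(self):
        return nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid()  # 32 -> 64
        )

    def forward(self, x, identity):
        latent = self.encoder(x)
        decoder = self.decoder_a if identity == "a" else self.decoder_b
        return decoder(latent)

# Training reconstructs each person's own face; the "swap" happens at inference
# time by encoding person A's image and decoding it with person B's decoder,
# so A's pose and expression are rendered with B's face.
model = FaceSwapAutoencoder()
face_a = torch.rand(1, 3, 64, 64)        # placeholder 64x64 RGB face crop
swapped = model(face_a, identity="b")    # A's expression rendered as B's face
print(swapped.shape)                     # torch.Size([1, 3, 64, 64])
```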


You can insert yourself into any movie scene you like, and no one can tell the difference between what is real and what was fabricated. The most damaging use so far has been placing celebrities' faces into pornographic video content with deepfake technology.


Fabricated statements have also circulated as fake news using the cloned voices of high-ranking officials and decision-makers, and cases of large-scale banking fraud have been recorded in which individuals' voices were copied using artificial neural networks and machine learning.


Deepfake technology also allows a person's voice to be cloned and copied. In March of last year, the head of a British subsidiary of a German energy firm transferred around 220,000 euros to a Hungarian bank account after someone used voice-cloning technology to imitate the German parent company's chief executive and called to request the payment.


The company later said the incident involved a deepfake but could not prove it. At a time when fake news and cyber attacks are already affecting election campaigns in major countries, analysts have warned that misuse of deepfake technology could endanger democracy.


Technologies created for entertainment can easily be misused over time to create dangerous situations. In a country like Nepal, where digital literacy is low, such technology could cause serious problems.


If misused, such technology will harm democracy and people's everyday social lives. If we want to encourage new exploration and discovery, we must also prevent this kind of damage.


Giorgio Patrini, chief executive of the Amsterdam-based cybersecurity company Sensity, says researchers need to develop technology that can distinguish real content from deepfakes.
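To make concrete what such detection tooling could look like, here is a minimal, hypothetical sketch of a binary real-versus-fake face classifier in PyTorch. It is not Sensity's system; the class name, layer sizes, and input dimensions are illustrative assumptions only.

```python
# Minimal sketch of a real-vs-fake face classifier, the rough shape of many
# deepfake detectors. Hypothetical example only; not any real product's code.
import torch
import torch.nn as nn

class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 1),  # single logit: fake (1) vs real (0)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# In practice such a model would be trained on labelled real and synthetic
# face crops with a binary cross-entropy loss; here we only run a random
# placeholder tensor through it to show the input/output shapes.
detector = DeepfakeDetector()
face = torch.rand(1, 3, 64, 64)            # placeholder 64x64 RGB face crop
prob_fake = torch.sigmoid(detector(face))  # probability the crop is synthetic
print(float(prob_fake))
```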

https://www.bbc.com/news/technology-54584127?fbclid=IwAR1U4nptzL6zQRDQGOJAOWzT7_GqyqotevtXTlh8FE0eFRldg4QB0BFKMJk
