Sonit Jain, CEO, GajShield Infotech
The sight of spammy and fake comments on social media posts and other websites is not uncommon today. Any post on a public platform is likely to be flooded with repetitive, inappropriate, and hate-fuelled comments. This problem is particularly visible in the comment forums of social media websites when individuals, organisations, or social media companies do not ensure proper response management and moderation. As we know, social media is a major tool for marketing and public image today. When your organisation posts something on Facebook or Instagram, it wishes to gauge how its target audience reacts. For organisations, social media responses from the general public represent an important channel of communication for future research and development. So, if the comment sections of such posts are riddled with vulgar messages generated by pornbots, or with thousands upon thousands of other junk messages, then the purpose of an organisation's social media presence is entirely defeated. One of the main technologies used to generate fake comments is AI. AI-powered bots are capable of generating spam at scale, and misinformation spread through these bots can travel like wildfire across the web.
From a data security angle, many of these spam comments or emails received by individuals may contain malware that can damage user devices and corrupt the data stored in them.
AI systems’ ability to generate fake comments
AI-powered systems can be used to create comments that may seem human-generated to an uninitiated viewer. However, these thousands upon thousands of fake comments serve the purpose of overshadowing the views of actual people regarding a product, an incident, or the overall performance of a ruling government. It is well known that people in power can use social media platforms to drown out the opinions of those who do not agree with their views. As we know, two of the main principles of cybersecurity are fairness and transparency. The generation of fake likes, comments, and responses on an online platform goes against those principles and, by extension, against data security in general.
Cybercriminals, as well as big, powerful organisations, public bodies, and individuals, can enlist the services of IT and telecommunications companies to use AI and text-generating tools to create thousands of ‘public’ comments on social media posts. These comments are generated in every forum and place where online audience interaction is present. The service-providing companies not only create fake comments but also steal information such as usernames and profile pictures of individuals, duplicating this data to create fake accounts. Due to this, the concept of net neutrality is severely threatened. If one observes the fake comments closely, they are often copy-paste jobs with a few minute variations. Often, even spelling mistakes appearing in one comment will be replicated across several identical comments.
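Because such fake comments are near-identical copy-paste jobs, even a simple similarity check can surface them. The following is a minimal sketch of near-duplicate detection using Jaccard similarity over word trigrams ("shingles"); the sample comments and the 0.6 threshold are hypothetical assumptions, and a production moderation pipeline would use far more sophisticated techniques.

```python
# Sketch: flag near-duplicate comments via Jaccard similarity on word shingles.
# The example comments and the similarity threshold are hypothetical.

def shingles(text, n=3):
    """Return the set of n-word shingles for a comment."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_near_duplicates(comments, threshold=0.6):
    """Return index pairs of comments that are suspiciously similar."""
    flagged = []
    sets = [shingles(c) for c in comments]
    for i in range(len(comments)):
        for j in range(i + 1, len(comments)):
            if jaccard(sets[i], sets[j]) >= threshold:
                flagged.append((i, j))
    return flagged

comments = [
    "This prodcut changed my life, buy it now!",    # note the shared typo
    "This prodcut changed my life, buy it today!",  # minute variation only
    "Great article with useful insights on data security.",
]
print(flag_near_duplicates(comments))  # → [(0, 1)]
```

Note how the replicated spelling mistake ("prodcut") actually helps the check: copy-paste spam shares its errors, so the first two comments score well above the threshold while the genuine comment does not.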
So, how are the comments generated? Misinformation spreaders use powerful tools such as the Generative Pre-trained Transformer 3 (GPT-3), one of the most powerful natural language generators in the world, to create fake text and replicate it. As stated earlier, fake comments are created in such a way that they seem real. Organisations must assign a team of social media moderators to deal with this issue. Moreover, there are several AI tools that can be used to counter the problem of spamming and misinformation too. In a way, AI provides both the problem and the solution in such instances.
AI-generated spam can disrupt the internet
As we know, spam is not restricted to comments; it also appears in emails, instant messaging texts, and on connected mobile devices. Text-generating tools produce large volumes of data in the form of bogus blogs, spam videos, and even entire websites. Such content is built around specific keyword phrases and, as a result, often goes undetected by search engines such as Google Search.
As expected, the relentless generation of spam content causes excessive traffic online and can be debilitating for the speed and efficiency of search engines as well as your organisation’s online operations. Such platforms must employ tools that closely monitor all kinds of data and filter out spam. The internet will not function with its characteristic nimbleness if it keeps getting inundated with spam and fake AI-generated data. While this may not be a cybersecurity issue in the strictest sense, it does have a detrimental impact on the digital operations of your organisation.
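Since this spam content is driven by specific keyword phrases, the same property can be turned against it. Below is a minimal sketch of keyword-based filtering, assuming a hand-maintained blocklist; the keywords shown are hypothetical, and real moderation systems combine such lists with statistical classifiers and rate limits.

```python
# Sketch: keyword-driven spam filtering with a hand-maintained blocklist.
# The keyword list is a hypothetical example, not a recommended set.

SPAM_KEYWORDS = {"free money", "click here", "limited offer"}

def is_spam(text, keywords=SPAM_KEYWORDS):
    """Return True if the text contains any blocklisted phrase."""
    lowered = text.lower()
    return any(kw in lowered for kw in keywords)

def filter_comments(comments):
    """Split comments into (kept, filtered_out) lists."""
    kept, filtered = [], []
    for c in comments:
        (filtered if is_spam(c) else kept).append(c)
    return kept, filtered

kept, filtered = filter_comments([
    "Click HERE to claim your prize",
    "Thanks for sharing this update",
])
print(len(kept), len(filtered))  # → 1 1
```

The obvious weakness, worth noting, is that spammers vary their keyword phrases to evade static lists, which is why such filters are only one layer of a moderation pipeline.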
Ways to deal with fake accounts and comments
There have been several examples of big websites eliminating millions of fake accounts to reduce clutter and misinformation spread. On their end, organisations can also take certain measures to get rid of fake accounts and comments to reduce the possibilities of malware attacks and extensive spam infestation of their social media posts.
The measures to be taken to deal with fake accounts and comments are:
- Using social media management tools can help increase the traffic on your online pages in the right way.
- The second measure involves forming a rough idea of how many followers of your social media pages are real and how many are fake. While this can be challenging, there are specific web tools that can be used to differentiate real users from fake ones. Using these tools, fake accounts can be detected, reported, and blocked, and the balance tipped back towards ‘real’ followers. A periodic audit of your online pages is also useful for getting rid of fake comments on your posts.
- Additionally, users must be made to pass through digital gateways to scan whether they are humans or AI-generated bots. Tools such as Captcha are currently being used for the purpose.
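A periodic audit of the kind described above can start with simple heuristics. The sketch below scores accounts on a few bot-like signals; the account fields, weights, and threshold are all hypothetical assumptions for illustration, not the output of any specific platform's tooling.

```python
# Sketch: heuristic fake-account audit. Field names, weights, and the
# review threshold are hypothetical assumptions for illustration.

def suspicion_score(account):
    """Score an account dict; a higher score means more bot-like."""
    score = 0
    if account.get("default_avatar"):           # no real profile picture
        score += 2
    if account.get("followers", 0) == 0:        # nobody follows back
        score += 1
    if account.get("posts_per_day", 0) > 50:    # inhuman posting rate
        score += 3
    if account.get("account_age_days", 0) < 7:  # brand-new account
        score += 2
    return score

def audit(accounts, threshold=4):
    """Return names of accounts whose score meets the review threshold."""
    return [a["name"] for a in accounts if suspicion_score(a) >= threshold]

accounts = [
    {"name": "bot_482", "default_avatar": True, "followers": 0,
     "posts_per_day": 120, "account_age_days": 2},
    {"name": "jane_doe", "default_avatar": False, "followers": 80,
     "posts_per_day": 3, "account_age_days": 900},
]
print(audit(accounts))  # → ['bot_482']
```

Flagged accounts would then go to a human moderator for reporting and blocking, rather than being removed automatically, since any single heuristic can misfire on genuine new users.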
As stated earlier, fake accounts and comments threaten to take away the sovereignty of the internet and place it in the hands of a few powerful entities. On top of that, the presence of malware and Trojan horse viruses brings cyber threats into the mix. Therefore, apart from using anti-spam tools, your organisation needs to deploy the best data security products and services in order to prevent breaches and attacks from taking place.