Meta Publishes New Guidance on Security, But How Useful Is It Really?
As the world becomes more digital, concerns are being raised about how social media platforms regulate user safety. Although companies may introduce guidelines to protect accounts from cyberbullying, the way these translate across their different services may not protect users as effectively as they hoped.
Meta is one such company, having recently published a guide to the security options across its services to address these concerns. The guide covers topics ranging from password basics and setting up two-factor authentication on Facebook to an overview of how WhatsApp works, while also outlining further steps users can take to prevent harassment.
Although initially aimed solely at journalists, the guide is also useful to the general public in helping them protect themselves. Businesses, too, could benefit from its advice on how to run their pages online.
All of this suggests Meta hopes to limit the negative impact of its services. With Instagram specifically, Meta has already acknowledged that comments are a major source of the problem. But Meta seems to know more action is needed to protect its users: although the guide explains how users can filter comments on their profiles by keyword, a new system is also being introduced that may hide comments which, while not breaching community guidelines, are still considered bullying.
However, the way these apps evolve could shake everything up. Plans are in place to integrate WhatsApp with the new Workplace platform, which could change how people communicate with each other online. There is potential for a domino effect here, where small changes ultimately lead to bigger consequences than intended.
Despite the best intentions, introducing automated forms of protection may cause problems of its own. In Instagram's case, many are already aware of the potential for bias and abuse in such systems. Leaked internal reports have also shown that Meta is aware of the harmful effects Instagram has on its users, and these new systems will not be able to limit everything. The move also marks a shift in position: Instagram had previously said it would hold off on implementing this kind of system until it was sophisticated enough to avoid unintended consequences.
[Creds: Security.org]
Even with Meta addressing these headline issues, this does not really signal an improvement in overall security and experience on these platforms. Too much is still left for users to navigate themselves, so the systems being introduced will need careful scrutiny if they are to survive in the long term.
Finally, for our previous #SocialShort, click here.