Facebook Messenger can seem like the least social part of the social network: much of the time, you’re conversing with only one other person.
But it’s not just you and your chat partner on Messenger. Facebook itself automatically scans links and attached photos sent over its chat system for malware and child sexual-abuse imagery.
Facebook’s data-policy pages don’t explicitly describe this automated scanning, but the company has confirmed it to USA TODAY and other news outlets after comments by CEO Mark Zuckerberg drew attention to the practice. Facebook can also investigate user reports of Messenger content that violates its posted community standards. Bloomberg earlier reported on the scanning.
In this respect, Messenger is much like other major mail and messaging systems.
“Most services do some form of this,” said Joseph Lorenzo Hall, chief technologist at the Center for Democracy & Technology (the Washington non-profit derives 35% of its funding from corporate donors, Facebook among them). He noted the key benefit of checking links against blacklists of sites flagged for abusive behavior: Spammers and scammers can’t get as many people to click through to pages pushing dangerous malware or merely annoying “adware.”
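The blacklist check Hall describes can be sketched in a few lines. This is an illustration only: the domain names and the tiny blacklist below are made up, and real services rely on constantly updated lists such as Google Safe Browsing rather than a hard-coded set.

```python
from urllib.parse import urlparse

# Hypothetical blacklist of domains flagged for malware or scams.
BLACKLISTED_DOMAINS = {"malware-example.test", "scam-example.test"}

def is_link_safe(url: str) -> bool:
    """Return True if the link's host is not on the blacklist."""
    host = urlparse(url).hostname or ""
    # Block the listed domain itself and any subdomain of it.
    return not any(host == d or host.endswith("." + d)
                   for d in BLACKLISTED_DOMAINS)

print(is_link_safe("https://malware-example.test/win-a-prize"))  # False
print(is_link_safe("https://example.com/article"))               # True
```

A production filter would also follow redirects and check the full URL path, since abusive pages often hide behind innocent-looking domains.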
Screening e-mail for images of exploited children has also been an industry-standard practice for years. In this process, a mail or messaging service performs a mathematical check, usually employing a Microsoft-maintained system called PhotoDNA, for matches against a database maintained by the National Center for Missing and Exploited Children.
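In outline, that check amounts to fingerprinting each attachment and looking the fingerprint up in a database of known-bad values. The sketch below is a simplification under a big assumption: it uses SHA-256, which only matches byte-for-byte identical files, whereas PhotoDNA is a proprietary perceptual hash that still matches images after resizing or re-encoding. The blocklist contents here are placeholders, not real data.

```python
import hashlib

def image_fingerprint(data: bytes) -> str:
    """Fingerprint an attachment. Illustration only: a plain cryptographic
    hash, unlike PhotoDNA's perceptual hash, misses altered copies."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist of fingerprints for known abusive images,
# standing in for the NCMEC-maintained database.
BLOCKLIST = {image_fingerprint(b"known-bad image bytes")}

def scan_attachment(data: bytes) -> bool:
    """Return True if the attachment should be flagged for review."""
    return image_fingerprint(data) in BLOCKLIST

print(scan_attachment(b"known-bad image bytes"))    # True
print(scan_attachment(b"ordinary vacation photo"))  # False
```

The hash-lookup design matters for privacy: the service never needs to store or view the original abusive images, only their fingerprints.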
“It’s a clear example of how technology tools and [artificial intelligence] can work, as it were, behind the scenes to catch the most egregious content,” said Stephen Balkam, CEO of the Family Online Safety Institute (Facebook is among this Washington-based group’s member firms). “The way in which the major platforms have incorporated it is a huge win.”
If this is news to you, you’re not alone.
“I would imagine very few members of the public would be aware that this is going on,” Balkam said.
A year ago, some of Facebook Messenger’s mobile apps added a different sort of robot reading: The M digital assistant can pop up mid-conversation to suggest Messenger features such as stickers, polls and location sharing. You can mute M in Messenger’s settings.
In that respect, Facebook was only following the lead of Google, which introduced the option of Smart Reply in its Inbox app in 2015 and has since added it to Gmail’s mobile apps.
Facebook, Google and Microsoft do not, however, scan messages for ad-targeting purposes. Google did so for years in its free Gmail service, but stopped that last June.
That’s not the case with the Yahoo and Aol mail services of Verizon’s Oath media division, as a new privacy FAQ reminds users while linking to pages where they can decline this targeting.
Charles Stewart, an Oath spokesperson, said the company will also add a privacy dashboard that will let its users see and control how their data gets used across Oath’s various sites.
(Disclosure: I also write for Yahoo Finance, another Oath subsidiary.)
For maximum messaging privacy, you’ll have to use a service that encrypts your conversation from your screen to the recipient’s.
CDT’s Hall called the free and open-source mobile app Signal “the top-of-the-line and most secure messaging service out there.” He also suggested the Secret Conversations encryption option in Facebook Messenger and the Incognito mode of Google’s Allo messaging app.
The Facebook-owned WhatsApp is yet another option, although its end-to-end encryption only works in conversations where everybody uses that app.
Be aware, however, that end-to-end encryption strips you of service-level protection against dodgy links or attachments. As Hall said: “Using these tools also means you don’t have the protection from spam filtering or malware filtering of URLs and files, but that is the trade-off.”