24 Comments

Thanks for the thoughtful article, Matt.

I have a couple of concerns about your suggestion that we repeal Section 230 of the CDA (if that is in fact what you are suggesting).

First, what gives you the confidence that the current members of Congress will be able to effectively regulate speech on the internet? They are woefully ignorant about many details of technology (as evidenced by the Zuckerberg hearings), and standardized speech guidelines at the scale of a platform like Facebook are a totally open problem. It would be implausible to borrow from, say, how the FCC regulates radio, because of the sheer volume of online content.

Second, you use the example of strangers impersonating soldiers to attract women online. Would you feel very differently if the same type of con were done via mail? Do you think the postal service should be held responsible for the bad content it delivered? I'm sure you will disagree with the analogy, but I'd love to see why exactly, because to me (and I am not at all a fan of Facebook), social media is more akin to a public square than to, say, a radio broadcaster or a magazine. (And thus Section 230 makes sense.)

Thanks again.

Sep 7, 2020 · Liked by Matt Stoller

Hi Matt, I recently found this: back in 2012, Baen Books increased its ebook prices as a direct result of signing a contract with Amazon. http://teleread.com/baen-inks-deal-with-amazon-makes-major-changes-to-webscriptions-and-free-library/index.html

"Prices for backlist e-books will be going up, too; instead of $6, e-books of books whose print edition is currently hardcover will be $9.99, trade paperback $8.99, and mass market paperback $6.99."

The price increases did result in increased royalties for the authors (and presumably increased profits for Baen itself), so this wasn't solely due to Amazon's middleman take. The question, though, is whether the price increases would have been as significant without Amazon's requirement that Baen not be undersold elsewhere online.


Matt,

I see no reason internet providers should not be held to the same liability exposure as newspapers & magazines. The initial argument for 230 was that such restrictions would stymie the growth of the net. That's no longer a valid argument. Online news & information publishers should have no less exposure than their print counterparts - nor should their platforms, if those platforms choose to rent out their space. Wordpress, for example, should have exposure for the blogs it allows to be set up on its platform, and the bloggers should have exposure for the comments they allow posted to their blogs. The only burden is that the bloggers would need to verify the identity of their commenters (which would not be that difficult).

This would be a greater burden for a Facebook or Twitter, but that's the cost of offering a public platform. Putting them under such exposure would rein them in a bit, but not put them out of business. (And it seems to me to make more sense to bring these businesses under existing controls than to attempt to control the uncontrollable.) To cries of loss of rights to free expression: society conveys rights in exchange for duties to society - it's not a free ride.


If the problem is fraud, then why not handle it like any other fraud case? Platforms should direct users to file police reports. Those police reports can then be forwarded to the platform, which can then be required to investigate the claim.

To me the goal isn’t to make platforms liable. The goal should be to minimize harm. And getting the platform involved in that process should get that done.

At this point you could have a situation where the platform is liable for failing to follow through with an investigation of fraud, leaving Section 230 as is.


Hi Matt - hope you saw this recent story about Goodreads, and how Amazon's monopoly control suppresses the development of websites for book readers: https://www.newstatesman.com/science-tech/social-media/2020/08/better-goodreads-possible-bad-for-books-storygraph-amazon


I like the product liability angle, to the extent it focuses on indiligence and indifference around verifying the identities of account owners. Let’s know, like with campaign advertising, who is responsible for the message. And clearly display the verified account identity along with the message. Or, if someone wishes not to verify, clearly flag unverified accounts as such, to assist the credulous to avoid being inveigled by the unscrupulous. Unverified accounts, clearly visible as such, would receive the credence they deserved.

This requires authentication, which is a cost. Why require each operator to perform verification? This would be wasteful and error-prone. And why leave authentication to private operators? This is exactly the kind of public service that government is best situated to provide.

As much as people reflexively recoil from the idea of unambiguously establishing people’s identities (natural and corporate) at the national level, the time has come when the putative benefits of not doing so are outweighed by the damages and risks. I’d rather have my identity unambiguously certified by the federal government than have it left open to usurpation and theft. The current (monopolized, right?) credit-rating system is a poor substitute, open to abuse and error. The patchwork quilt of “official” authentication documents (driver’s licenses, passports, SSNs, voter registration cards, ...) leaves too many opportunities for scamming and impersonation.

We no longer live in villages where either everyone knew who you were or you were a stranger to be wary of. In our current society, most of those we interact with and live among are strangers, so whom to trust? Let us not let distrust of everyone become the default. Let’s authenticate societally to make trust a reasonable default.


Now that online platforms have facilitated slander, libel and defamation of character with impunity for twenty years while racking up considerable corporate value and social clout, perhaps it is high time publishers of the printed word receive the same blessing. Just imagine opening up the pages of the local daily to read whatever unedited calumny, screed or disambiguation was submitted with or without attribution by ‘users’ of the ‘platform’. It would be a free-for-all to be sure. It might even revive the beleaguered newspaper industry and nearly defunct magazine market. And think of the relief publishers would have in saving on legal fees. It will feel good to hide behind a Section 230 of the Communications Decency Act that extends to the printed word. No more lawsuits for publishing something a 'user' ‘posted’. Sure there was a typesetter involved. But it was de minimis involvement only at the level of cut and paste. And as for running the slander through the newspaper’s printing press, why, that’s really no different than an intangible platform web presence, when it comes right down to it. Oh think of the fun local weeklies will have finally getting that local gossip into print. I see a new day dawning. Is it any wonder that users canceled their fuddy duddy, constrained print subscriptions in favor of a milieu without boundaries, referees, or fact checkers? It is mouth watering, lip smacking and juicy. Mmm mmm. This will make matters a lot easier for reporters, too. No more need to get facts right or verify anything or screen out bias. Wait a minute, strike that. There won’t be any need for reporters. More good news for the bottom line.

But seriously, the procedures, standards and safeguards that have grown up with the publishers, advertisers, reporters and readers of the print industry should also apply to online platforms. Print publishers long ago learned, without coddling or special protection, how to walk and talk without landing in court. Publishers who print noxious content lose advertisers and readers and are soon out of business. Citizens who submit false information to reporters or in their letters to the editor or opinion pieces get screened out, and their voices are not amplified. Advertisers who dupe unsuspecting consumers lose the privilege of placing future ads, if the publisher is scrupulous. Publishers demand of their reporters honesty, integrity, accuracy. Culls are culled. Reporters scrutinize their sources for false and misleading information and learn to sniff out the con artists and manipulators through intrepid fact checking and verification. The same practices that keep publishers and reporters from being sued are what give citizens peace of mind and a sense that what they are reading is probably close to the truth, or true. The very act of committing to print something that is available for the public to pick apart, with the opportunity for a correction to be printed or an objection raised in a letter to the editor, is a cherished institution. The brick-and-mortar nature of the enterprise is another reason publishers do not go out of their way to offend readers: it only takes one whacko to burn the place down. Beyond that, there is the fact that every publisher is ultimately a member of a community, whose kids grow up there and whose parents and grandparents all knew each other and hold one another accountable. I wouldn’t want newspapers to devolve into the smarmy, slimy, manipulative, surveillance world the online platforms have become, or for ‘content’ to be ‘posted’ in newspapers unverified and unvetted.
I think a lot of early investors in those online platforms had a lot to gain, personally, by letting them hide behind Section 230. Now that those same platforms have grown too big to regulate and track our every search, purchase, click and movement, it’s high time we push the pot-bellied overgrown baby out of the highchair and tell it to walk and talk, come what may.


I'm an admin (but not a moderator, long story) of a little bbs, with perhaps 30 regular users.

I agree with you that Facebook, Grindr, and more extensively Amazon (acting as a marketplace for defective and fraudulent products) need to be dealt with, but I don't want to get sued because one of my users vents about a physical therapist. (That actually happened about a month ago; no one on the board is ever going to that PT after they sold out to a bunch of scammers.)

How do you thread this needle?

The original justification for section 230 was that a BBS like mine is like a book store, and you don't sue a bookstore for having a libelous book on the shelves.

Businesses like Facebook, however, are not a BBS with someone telling people to chill out when they get out of line; they are a publisher and an advertising agency, even if the law says they are not.

I'd love to see Zuckerberg, and Bezos, get what they deserve, and I'm willing to take an (admittedly small) chance of being sued in order to take these psychopaths off the street, but I'd prefer not to take that risk.

In summary, I support an immediate and full repeal of Section 230, but IF there is a way to leave real community groups protected, I'd like that too.


The only way to stop Facebook is to STOP using Facebook.


Very good article. Fortunately or unfortunately, not all moral obligations can be made into legal responsibilities. However, enormously profitable social media platforms can be legislated into having to respond to complaints from their customers as well as from those affected by their publications.

Certainly politicians cannot and will not be effective policemen of social media. But we have now seen that those people who own and control these platforms are even worse - they have no one to answer to for anything they do.

Comparing social media to the USPS is simply inapplicable - the USPS is a conveyor of private communication between individuals, whereas social media is a repeater and broadcaster of speech being actively utilized by bad actors to spread their incitements of violence, slander, half-truths or outright lies, and the like. In addition, they censor what they deem inappropriate. They know they will not be subject to the old observation of the impossible: who will censor the censors?

They know whatever they accept or reject is or can be broadcast or withheld from millions of people. All of this is well known and not just tolerated, but promoted for huge amounts of money made by these social media facilitators of unrest - be it individual as in the case of Mr. Herrick or community as in the case of riots with attendant violence, whether in Asia or here in the USA. They simply do not want to be bothered with the expense of having to deal with problems they have enabled others to create.

I am not advocating that all protections of Section 230 be ended. But we are in a new era of mass communication not envisioned by the laws and legal principles that developed, from the origin of the doctrines of defamation up to the advent of our current state of electronic information sharing, for the society and technology of their time. The bottom line for social media is that anything which advances their agenda or increases revenue is on the table, and anything which does not meet their agenda they take off the table.


Late for commenting on this, but still...

Asking SNS companies to be 'responsible' for their content sounds reasonable - or at least better than the current system. But what is reasonable? What is harmful? If there are specific legal definitions, that's one thing. For example, selling counterfeit products is illegal, so enabling the sale should maybe bear some consequences. 

But what if we haven't defined a specific action as illegal? If you can legally mimic someone else's identity online or release their personal information in order to harass them - isn't that a problem in and of itself? Asking Grindr to be accountable for something that isn't illegal, but that we consider 'harmful' is maybe a backwards way of fixing the problem... Shouldn't the stalker in that case be responsible for the harm they caused? 

If we're talking about actual speech on Facebook, Twitter, etc. - then it seems even more problematic to require them to censor based on the criteria of 'harmful'. I get that their current system promotes content that is probably bad for society. Not necessarily because that's what the platforms want, but because that's the result you get if the goal is maximum engagement. So I'm not unsympathetic to the idea... but again, it seems weird to me to hold them accountable if the content itself is not illegal. It seems like passing the responsibility to govern speech onto them, because it's too difficult to work it out through our elected representatives. Are we going back to a mercantilist system? 

If we make the criterion for liability anything that could be 'harmful', then the likely result will be censoring based on political expedience and a further increase in monopoly power (since surviving the potential lawsuits would require a lot of money and political power).


Amending the law to say that platforms can be held liable for facilitating "personation" (yes, without the prefix "im") would be generally useful, while still allowing people to be anonymous or pseudonymous, of course.


Thanks for the great article! I always look forward to reading your posts. I think the Grindr case can be tackled in a low-falutin way by considering the similarities between sending mail and using social media platforms: in both cases the platforms are (largely) controlled by large organizations which are responsible for moving information from one party to another. The USPS delivers messages for a fixed fee, while social media companies perform the same kind of function but with a much larger audience baked in. For example, whenever I post on Facebook, I am sending out information to my friends. By using Facebook, I am outsourcing the process of sorting the mail, but functionally, posting on Facebook is the same as sending out a bunch of letters to friends (colleagues, acquaintances, etc.) containing the same message.

With the Grindr example, you are now getting access to a privately owned list of people interested in hooking up with each other. Effectively, by creating an account, you are advertising yourself by sending a message out to anyone on Grindr looking at your profile. To relate it back to mail, it's analogous to hiring a dating service to send mail to its other clients with a list of your interests. Nothing controversial there: Grindr is just acting as this intermediary, automating the dating service through its matching algorithms.

Here's the difference, though: sending messages in the mail costs you a lot of time and money for each letter, although the time portion can be dealt with through outsourcing. The main point is the barrier to entry. Now, what would happen if a bad actor bought a mailing list of people interested in having casual sex in your local area, and then sent out a fake letter giving your address with a list of sexual desires? This is obviously harassment and could be dealt with through the courts. Should the USPS be responsible for this? I don't think so, and that's why we have 230. But there should be some kind of legal infrastructure for dealing with these kinds of problems on social media platforms.

Dealing with this kind of scenario legally involves getting a lawyer and contacting the (local) police. But since these platforms are (inter)national, at a bare minimum there should be a government office for policing these kinds of problems. Someone could submit their identification, links to the fake social media accounts, and any other information which could be used to identify themselves, such as their actual social media accounts. This has the benefit of letting an investigator review their credentials, verify their identity, and validate that the account they are reporting is impersonating them. Then, if a social media platform does not comply with removing the account, it can be held liable. I think 230 could be preserved, but adding this restriction would force platforms to remove bad actors.

Here are some analogous situations: suppose someone stole your identity, opened fake bank accounts in your name, and applied for loans with the stolen information. If the courts ordered the bank to shut down those accounts, but it refused to comply, what would happen? Or what if someone used a dating service to impersonate you, and it gave your info to would-be suitors who came to harass you at home? Wouldn't it be liable for the harassment?

Admittedly, rumor spreading is a much harder problem to deal with, and I'm skeptical this kind of issue could be regulated away without either destroying the possibility of having social media or handing Facebook regulatory capture. There are several examples of social networks which are pro-social and aren't creating such a noxious atmosphere for their users. For example, Pinterest and Ello aren't being criticized for enabling genocide, but (in some cases) they also haven't penetrated international markets as much and don't have a global spread. Part of Facebook's problem, I think, is the novelty of these kinds of issues, but I agree there should be changes made to prevent such problems. In this vein, how are weapons manufacturers liable for selling internationally? That's the closest analogous situation I can think of, unfortunately.


Fantastic piece. This subject doesn't get the attention it deserves in the United States. There is a fix of sorts to this debacle. The Spanish courts found for a person's right to be forgotten, and the EU enshrined it in law. Companies such as the Rip Off Report were formed basically to exploit this law; they are essentially running an extortion enterprise protected by Section 230. Anonymous posters are allowed to post anything, including false information, about an individual or a company. In other words, an enemy or competitor can defame you publicly. Several accusations have been laid at the door of Google, which has been accused of colluding with ROR to profit by selling reputation-repair advertisements next to unverified and defamatory postings. Furthermore, ROR has a program where you pay them several thousand dollars to move the reports down in priority - in other words, to bury the report on page 10 of a Google search. These are the unintended consequences of Section 230 of the Telecommunications Act. Tens of thousands of people have been maligned and in some cases destroyed with zero evidence.


You've made an excellent argument that we can't consider these cases of Grindr or Facebook as examples of free speech. I'm simple enough to believe that if harm is actively done but there is no one to blame, then there is a gap in the law. I appreciate the mention of Citizens United, as I suspect the search for a good answer may involve the nature of the personhood of corporations. Thanks for the article.


This is one of the best talks I have seen drilling into the attention economy, by a former Google consultant turned whistleblower:

https://www.youtube.com/watch?v=ElWGkGOcW2I&t=5s

Algorithms do not relieve their operators of liability. When Twitter moved away from a time-based feed, I was genuinely afraid of what was to come.

Great post as always, Matt. I don't think the legal particulars of her argument quite stick (at least in this particular case; there is absolutely a point to be made about the intentional malicious design of digital platforms), but there is something to be said for forcing internet companies to give their users the ability to seek meaningful recourse when wronged.
