Rebooting Internet Immunity
- URL: http://arxiv.org/abs/2306.02876v1
- Date: Mon, 5 Jun 2023 13:47:47 GMT
- Title: Rebooting Internet Immunity
- Authors: Gregory M. Dickinson
- Abstract summary: The internet operates not only as a medium for content distribution but also as a platform for the delivery of real-world goods and services.
This Article proposes refining online immunity by limiting it to claims that threaten to impose a content-moderation burden on internet defendants.
This approach empowers courts to identify culpable actors in the virtual world and treat like conduct alike wherever it occurs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We do everything online. We shop, travel, invest, socialize, and even hold
garage sales. Though we may not care whether a company operates online or
in the physical world, the question has dramatic consequences for the
companies themselves. Online and offline entities are governed by different
rules. Under Section 230 of the Communications Decency Act, online entities --
but not physical-world entities -- are immune from lawsuits related to content
authored by their users or customers. As a result, online entities have been
able to avoid claims for harms caused by their negligence and defective product
designs simply because they operate online.
The reason for the disparate treatment is the internet's dramatic evolution
over the last two decades. The internet of 1996 served as an information
repository and communications channel and was well governed by Section 230,
which treats internet entities as another form of mass media: Because Facebook,
Twitter and other online companies could not possibly review the mass of
content that flows through their systems, Section 230 immunizes them from
claims related to user content. But content distribution is not the internet's
only function, and it is even less so now than it was in 1996. The internet
also operates as a platform for the delivery of real-world goods and services
and requires a correspondingly diverse immunity doctrine. This Article proposes
refining online immunity by limiting it to claims that threaten to impose a
content-moderation burden on internet defendants. Where a claim is preventable
other than by content moderation -- for example, by redesigning an app or
website -- a plaintiff could freely seek relief, just as in the physical world.
This approach empowers courts to identify culpable actors in the virtual world
and treat like conduct alike wherever it occurs.
Related papers
- Toward Textual Internet Immunity [0.0]
Under Section 230 of the Communications Decency Act of 1996, online entities are immune from lawsuits related to content authored by third parties.
This Essay discusses how courts' zealous enforcement of the early internet's free-information ethos gave birth to an expansive immunity doctrine.
It explores what a narrower, text-focused doctrine might mean for the tech industry.
arXiv Detail & Related papers (2023-06-05T13:47:30Z) - A User-Driven Framework for Regulating and Auditing Social Media [94.70018274127231]
We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain informational content "similar" to that of their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement; a toy sketch of such a similarity check appears after this list.
arXiv Detail & Related papers (2023-04-20T17:53:34Z) - No Easy Way Out: the Effectiveness of Deplatforming an Extremist Forum to Suppress Hate and Harassment [4.8185026703701705]
We show that deplatforming an active community to suppress online hate and harassment can be hard.
The case study is the disruption of Kiwi Farms, the largest and longest-running harassment forum, in late 2022.
arXiv Detail & Related papers (2023-04-14T10:14:16Z) - Macroscopic properties of buyer-seller networks in online marketplaces [55.41644538483948]
We analyze two datasets containing 245M transactions that took place on online marketplaces between 2010 and 2021.
We show that transactions in online marketplaces exhibit strikingly similar patterns despite significant differences in language, lifetimes, products, regulation, and technology.
arXiv Detail & Related papers (2021-12-16T18:00:47Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction in questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content that may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Analyzing the "Sleeping Giants" Activism Model in Brazil [1.549278866004654]
Sleeping Giants Brasil (SGB) campaigned against media outlets by using Twitter to ask companies to remove their ads from the targeted outlets.
This work presents a thorough quantitative characterization of this activism model, analyzing the three campaigns carried out by SGB between May and September 2020.
arXiv Detail & Related papers (2021-05-16T21:47:30Z) - The Rise and Fall of Fake News sites: A Traffic Analysis [62.51737815926007]
We investigate the online presence of fake news websites and characterize their behavior in comparison to real news websites.
Based on our findings, we build a content-agnostic machine-learning classifier for the automatic detection of fake news websites.
arXiv Detail & Related papers (2021-03-16T18:10:22Z) - Detecting Harmful Content On Online Platforms: What Platforms Need Vs.
Where Research Efforts Go [44.774035806004214]
Harmful content on online platforms comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others.
Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users.
There is currently a dichotomy between the types of harmful content that online platforms seek to curb and the research efforts devoted to automatically detecting such content.
arXiv Detail & Related papers (2021-02-27T08:01:10Z) - Abusive Advertising: Scrutinizing socially relevant algorithms in a
black box analysis to examine their impact on vulnerable patient groups in
the health sector [0.0]
This thesis examines the display of advertisements for unapproved stem cell treatments for Parkinson's Disease, Multiple Sclerosis, and Diabetes on Google's results page.
Google announced a policy change in September 2019 that was meant to ban the practices in question.
A browser extension for Firefox and Chrome was developed and distributed to conduct a crowdsourced Black Box analysis.
arXiv Detail & Related papers (2021-01-04T19:28:19Z) - Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics [43.98568073610101]
We use a social media model to quantify the impacts of several adversarial manipulation tactics on the quality of content.
We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation.
These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
arXiv Detail & Related papers (2019-07-13T21:12:08Z)
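The user-driven auditing framework listed above requires only that a filtered feed carry informational content "similar" to a user-chosen baseline feed. As a rough, hypothetical illustration of what such a check could look like (not the procedure from the cited paper), the sketch below compares bag-of-words cosine similarity between a filtered feed and a baseline feed against a threshold; the text representation, the similarity metric, and the threshold value are all illustrative assumptions.

```python
# Toy sketch of a feed-similarity audit. The bag-of-words representation,
# cosine similarity, and the 0.5 threshold are illustrative assumptions,
# not the auditing procedure from the cited paper.
from collections import Counter
from math import sqrt


def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def audit_feed(filtered_feed: list, baseline_feed: list, threshold: float = 0.5) -> bool:
    """Return True if the filtered feed's content is 'similar enough' to the baseline."""
    bag_of_words = lambda feed: Counter(w for post in feed for w in post.lower().split())
    similarity = cosine_similarity(bag_of_words(filtered_feed), bag_of_words(baseline_feed))
    return similarity >= threshold


if __name__ == "__main__":
    baseline = ["city council votes on new transit plan", "local election results posted"]
    filtered = ["council approves transit plan after vote", "election results announced"]
    print("passes audit:", audit_feed(filtered, baseline))
```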