That is, it did not reduce the adverse effects of filters to percentages and statistics, as many other tests and studies attempted to do—in the process, often understating if not ignoring the actual content and value of the tens of thousands of websites that filters routinely block, even at their narrowest settings. Dr. Sutton’s work gives immediacy to the filtering story, and shows through real-world experiences how damaging filters can be for the educational process. It is cause for celebration that her work is now appearing in book form.
Internet filtering today suffers from two major flaws: stated bluntly, bias and absurdity. Bias arises because, like all of us, filter manufacturers have their own ideas about what kind of expression is valuable, acceptable, or inoffensive, and what kind of expression, by contrast, is offensive, unacceptable, or “harmful to minors.” In a free society, everybody is entitled to have a personal view on these matters, but government cannot enforce one view by silencing all others. When censorship decisions are made by private companies, however, the First Amendment does not ordinarily apply. Filters thus have the potential to suppress speech much more broadly than any law or government policy could.
Private biases are evident both in the blocking categories that filtering companies establish and in the specific blocking decisions that company employees make. Some filters block virtually all information about gay and lesbian issues, regardless of whether it has sexual content. Some have broad blocking categories for “alternative lifestyles,” “cults,” or “sex education”; what qualifies as an acceptable mainstream religion, and what merits “cult” status, of course, involves highly subjective judgments. Not surprisingly, one of the most frequently and deliberately blocked categories has been criticism of filtering software.
The absurdity of filtering results is an even more insidious problem. Reducing human expression to simplistic categories and sets of key words and phrases is bound to lead to large volumes of “false positives”—blocks that result from the inability of even the most sophisticated “artificial intelligence” algorithms to consider the context, meaning, and value of speech.