Disinformation and Misinformation: Screening via Social Media

Misinformation and, more insidiously, disinformation now circulate in enormous volumes. We argue that the last Presidential election and the political system may have been hijacked by it, that our health in the Covid-19 crisis, which may not have peaked, is jeopardized by it, and that people hardly filter it out before passing it along and magnifying the problem.

We are not saying the problem comes only from a few bad actors, but that millions of people, through fast technology and easy sharing, compound whatever the few bad actors instigate.

When we have a problem in society, we often blame an intermediary organization, accusing it of exploiting people to aggrandize itself and its leaders. Typically, the organization does play a role and deserves some of the blame, and blaming it is easier than blaming the people it organizes or serves.

Described in unloaded language, such an organization focuses attention on interests some people have in common; recruits those people into a base of support (call it a membership or whatever other name sticks); mobilizes that base around those interests; grows it by adding sympathizers and people with weaker versions of the same interests; reaches out to whoever it thinks can help fulfill those interests; strengthens itself as an organization; seeks solutions to its members' main concerns; and evolves with the times and demands.

Trade associations and labor unions do this. Religious organizations and scientific societies do this. Publishers with editorial views and politicians running for election do this.

Technology has risen to new heights, and people manage it, so these intermediaries now include Facebook and Twitter, the two currently at the center of attention over suspect information making the rounds. Both make passing information easy, quick, and cheap or free: one person, neither famous nor reliable, can send it to a few chosen people or to masses filling land visible from the moon. Both collect and provide data that can be used to craft and aim that information. There is so much of it that we call it big data, and people earn master's degrees in data science.

And we work it skillfully. We use big data to help create messages and send them to those most likely to be receptive. Apparently, we often do it well.
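As a purely illustrative sketch, not any platform's actual system, targeting of this kind boils down to scoring each user's likely receptivity from behavioral data and sending the message only to high scorers. All records, fields, weights, and the threshold below are invented:

```python
# Toy illustration of data-driven message targeting.
# Every user record, field name, and weight here is invented.

users = [
    {"name": "A", "clicks_on_topic": 12, "shares": 5, "follows_topic": True},
    {"name": "B", "clicks_on_topic": 1,  "shares": 0, "follows_topic": False},
    {"name": "C", "clicks_on_topic": 7,  "shares": 2, "follows_topic": True},
]

def receptivity(user):
    # Weight past engagement signals; the weights are arbitrary.
    score = 0.5 * user["clicks_on_topic"] + 1.0 * user["shares"]
    if user["follows_topic"]:
        score += 3.0
    return score

# Send the message only to users above a chosen threshold.
targets = [u["name"] for u in users if receptivity(u) >= 5.0]
print(targets)  # prints ['A', 'C']
```

The point of the sketch is only that the mechanism is simple: given enough behavioral data, selecting a receptive audience is a few lines of arithmetic, which is part of why it scales so cheaply.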

Pitfalls exist as well. Many messages compete with each other, weakening the effect of each. If you open your inbox and the emails nearly spill off the screen into your lap so you can't even see your lap, you may not click past the subject lines, and then the messages don't work. But, often enough, they do. Skillful people with their hands on the new technology make them work.

But in the days before computers, message-proponents used whatever means they had to get focused messages to whomever they thought could be favorably affected, and, with little competition in sophistication or numbers, those fewer messages tended to work. So it was when the first Crusades were organized; when a king sent a knight galloping into the countryside with an order; when a clothing-maker, tool-maker, or cook sold their wares; or when political support was needed (even by a king) and proponents spread word of their view. Some 200 years ago, for example, word went out that one candidate had unfortunately died, so that votes, at least to be meaningful, could go only to the other. The candidate had not died, but no one hearing the claim could check it without getting on a horse and being gone for hours or days, just as voting was about to start.

Disinformation and misinformation have been with us so long that we probably have no idea how long they've been around. Facebook et al. are today's vehicles, but they aren't the first, and many of the older tools are still with us.

Even thousands of years ago, we had data: we could read someone's body language and study the context. We had to be up close, but up close we often could read both. We still rely on in-person proximity for much of our communication, and we often consider it trustworthy. New methods help but are not the only thing available, so causes limited to modern methods cannot explain the problem.

The circulation of bad information may stem from a more fundamental issue. Speed and pervasiveness matter now but mean nothing when no one has them; in older days, no one did, so they're not at issue. Yet we had communication, and people who knew well how to use it, and some of it was doubtless bad.

We know something about communications that persuade at the expense of accuracy: they're commonplace in both propaganda and sales, and they might offer lessons. A sales copywriter or a propagandist tries to identify what will sound believable and whether that will believably support the central thesis, which likely varies with the audience selected.

The limits on those editorial practices are, at least, the audience's education and the audience's willingness to challenge authoritarianism as distinct from authoritativeness. Authoritarianism may serve as useful shorthand or as pretension, and it may have to be challenged to test for authoritativeness, for authentic authority. Both education and willingness to challenge are relevant to information old or new, however disseminated, and both vary by audience.

Education can take years, and willingness to challenge may not change quickly either, in part because many people like keeping their authoritarianism and expect everyone else to accept and respect it, even as shorthand; so the willingness to challenge it has inertia.

Against those schedules, much of the public is getting impatient. It's faster to get the platforms to crack down on bad information, if that's possible. And it is: Facebook and Twitter want to keep their leadership positions and grow, smaller platforms generally want to get big, and both depend on cultivating and keeping a good reputation. For reputation's sake, they'll self-regulate what users post on their platforms. But one platform usually can't regulate another; that would be a government function.

Opportunities for government regulation, however, are limited by the Constitution's powerful First Amendment, whose speech and press rights belong to a platform as well as to its users. A conflict between a platform and its users would resolve in favor of the platform, with users free to move to other platforms that can exercise their own legal rights in various ways.

That won't satisfy most people. The highly educated likely don't have much of the problem in the first place, since they already use channels that filter what gets passed around by the criteria of the more highly educated. And some people limit their communications to smaller groups, because they maintain little trust outside those groups. But that likely still leaves most people, especially most of the impatient ones.

People who are impatient don't have years to acquire an education and probably won't change their inclination toward authoritarianism, challenging it neither much more nor much less than they usually do. They want the platforms they respect to filter what they see, just as elementary school teachers once did. They want the platforms to become new authority figures, vetting information.

The open secret about vetting by the top platforms is that, big, well-known, well-funded, and popular as they are, they're not expert in most subjects.

Experts would vet more accurately. They, however, must be selected, and the general public would likely prefer that the main platforms select them, which seems to be what is happening. The experts' vetting is subject to guidance from their employers, and how constraining that guidance is has yet to be seen.
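In software terms, one minimal sketch of platform-side vetting, entirely hypothetical rules and posts, not any platform's actual process, might route posts matching flagged claims to an expert review queue rather than blocking them outright:

```python
# Toy sketch of platform-side vetting: posts matching flagged phrases
# go to an (imagined) expert review queue. Rules and posts are invented.

FLAGGED_PHRASES = ["miracle cure", "the election was stolen"]

def route(post):
    """Return 'review' if the post matches a flagged phrase, else 'publish'."""
    text = post.lower()
    if any(phrase in text for phrase in FLAGGED_PHRASES):
        return "review"
    return "publish"

posts = [
    "Try this miracle cure today!",
    "Lovely weather this afternoon.",
]
print([route(p) for p in posts])  # prints ['review', 'publish']
```

Even this caricature shows where the expertise actually lives: the code is trivial, while the hard judgments, which phrases to flag and what the reviewers decide, sit with whoever writes the rules and staffs the queue.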

That might be enough. Maybe not; in that case, maybe this boils down to a call for the consumer to beware. But people aren't happy with that; they want to be taken care of, and, often, they manage to be.

This final paragraph is supposed to offer a crisp solution for what everyone should do. I'm not sure I know one. Maybe the platforms will do well. And maybe next we'll improve our whole national and global educational systems. Both would be wonderful, and either one would help.