Does telling the truth matter anymore?

Today we are seeing a brush fire of lies and misinformation on social media sites. This rise of misinformation poses alarming challenges for our country. Fake facts often lead to fake news; there is no such thing as “alternative facts.”

Last month’s decision by Facebook parent Meta to end independent, third-party fact-checking may prove far more dangerous to our democracy and institutions than we ever anticipated.

Facebook and X (formerly Twitter) outpace all other social media sites as the places where Americans, especially young adults of Generation Z, regularly get their news.

According to a recent Pew Research Center survey, a large majority of U.S. adults (86%) get their news from smartphones, computers or tablets, including more than half (57%) who say they do so often.

Americans are turning to radio or print publications for news far less frequently. In 2024, just 26% of U.S. adults said they often or sometimes get news in print, the lowest share since Pew began tracking it.

Other sites such as X, Snapchat, LinkedIn and WhatsApp serve not only as sources of news but also as spaces for public discourse, debate and citizen journalism. Users increasingly turn to these platforms for real-time updates and diverse perspectives, often bypassing traditional news outlets. However, the spread of false news remains a challenge, prompting increased scrutiny and calls for more robust content moderation by these platforms, not less.

These social media sites have democratized information sharing and empowered individuals to voice their opinions. But they also spread misinformation (incorrect or misleading information shared without malicious intent) and disinformation (false information intended to deceive).

Thus, fact-checking is an essential tool to ensure online information is accurate and reliable. Social media spreads information faster and farther than ever before. But once false information is posted, the damage is often already done.

The consequences of fake facts are big and frightening. They can skew public perceptions, influence electoral outcomes, affect public health decisions, incite violence and erode trust in our democratic institutions.

Remember “Pizzagate,” the conspiracy theory that went viral during the 2016 election? Fake facts claimed that the New York City Police Department had uncovered a pedophilia ring linked to members of the Democratic Party.

The COVID-19 pandemic, for instance, showed how quickly health-related misinformation can spread, leading to real-world consequences such as vaccine hesitancy and resistance to public health measures.

California hadn’t even woken to the amber skies before social media began spreading false claims attacking the L.A. Fire Department.

Social media platforms now use algorithms that determine what content viewers see. These algorithms maximize engagement, often prioritizing sensational or emotionally charged news. Unfortunately, that can mean false or misleading information pushes aside more accurate content.

While this personalization ensures that readers receive news aligned with their interests, it also raises concerns about echo chambers and the reinforcement of existing biases. Despite these concerns, the convenience and customization offered by personalized news experiences make them appealing to a wide audience.

Social media platforms have a responsibility to ensure the accuracy of their content. While some argue that platforms should not act as arbiters of truth, the reality is that they already influence what information is seen and shared through their algorithms. So, they have both the power and the obligation to mitigate the spread of false information.

By verifying the accuracy of information before it is posted or widely disseminated, platforms can prevent the initial spread of falsehoods. Fact-checking can also help to correct misinformation that has already been shared.

By working with independent third-party fact-checkers, platforms can also demonstrate their commitment to transparency and accountability.

We must fight back against fake facts:

Our schools should teach and encourage critical thinking and skepticism toward all information. We must educate the public on how to verify facts using reliable fact-checking websites and other tools.

As users, stakeholders and citizens, we must demand and support these efforts, recognizing that the health of our information ecosystem depends on it. The time to act is now, before false information further erodes the foundation of trust upon which our country is built.

To navigate the complexities of the digital age, the commitment to truth and accuracy must remain paramount, ensuring that the information we rely on is credible and trustworthy. Only by upholding these principles can we hope to build a more informed, cohesive and resilient society.

Ultimately, Congress must revisit Section 230 of the Communications Decency Act, the nearly 30-year-old federal law that shields internet companies from legal liability for the content posted on their platforms (a topic I plan to cover in a future column).

While checking posts is complex and fraught with challenges, it must be done. Through a combination of technology, education, collaboration and regulation, social media platforms can rise to meet the challenge, fostering a digital environment where truth prevails.

Facebook’s decision to eliminate fact-checking can undermine its integrity. It is critical that we the people and our elected officials decide to better regulate social media sites.

Yes, truth matters as it is the cornerstone of effective communication. Truth is crucial to decision-making, problem-solving and the advancement of knowledge.

Facebook and other social media sites can ditch fact-checking, and their integrity along with it, and they will until we decide to better regulate them.

Jim Martin can be reached at jimmartinesq@gmail.com.