The character of a society is shaped by the information it consumes. Leaders and thinkers throughout history have recognized this principle, understanding that whoever controls the information environment shapes public consciousness.
Figures from antiquity who determined which books were included in the Bible helped form the culture of Western civilization. During the Middle Ages, priests censored specific texts, reinforcing the notion that those who control information shape a society's character. Historical culture wars were, at bottom, struggles over the flow of information.
The flow of information is influenced by two main actors: creators and editors. While history often emphasizes creators, editors have significant power. From ancient text editors to modern news directors, they not only select what to include but also decide the emphasis and duration of topics.
For instance, television editors might allocate differing amounts of airtime to political crises versus economic issues. Editors help shape public perception even though they may not be as well-known as creators.
The digital revolution changed our interaction with information. Instead of actively seeking information, it now seeks us. Algorithms, particularly those powered by artificial intelligence, curate content based on user behavior, determining exposure to various topics.
Consequently, artificial intelligence now acts as the principal editor of contemporary information, influencing both what is distributed and what is emphasized. Only recently has humanity begun to realize the extent of this shift: nonhuman intelligence has been shaping our information landscape for over 15 years.
These algorithms sort information mostly by what garners attention, prioritizing content likely to engage users. Just as the industrial revolution turned oil into wealth, the digital revolution has transformed human attention into a valuable commodity, which companies like Facebook and TikTok extract through their algorithms.
These algorithms adapt swiftly to human psychology, learning what engages or repels users. They found that fear and anger attract more attention, while calm or positive content tends to be overlooked. The result is a polarized environment in which each political camp receives tailored information that stokes its anger.
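In its simplest form, this kind of engagement-driven ranking amounts to sorting content by predicted reaction strength. The following is a purely illustrative sketch; the scoring weights and post fields are hypothetical assumptions for the example, not any real platform's formula:

```python
# Hypothetical sketch of engagement-based ranking. The weights and
# fields below are illustrative assumptions, not any platform's model.

def engagement_score(post):
    # Outrage-laden reactions are weighted more heavily than calm ones,
    # mirroring the observation that anger holds attention longer.
    return (3.0 * post["angry_reactions"]
            + 2.0 * post["shares"]
            + 1.0 * post["likes"])

def rank_feed(posts):
    # Sort the feed so the most reaction-provoking content comes first.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm",    "likes": 50, "shares": 5,  "angry_reactions": 0},
    {"id": "outrage", "likes": 10, "shares": 20, "angry_reactions": 30},
]
print([p["id"] for p in rank_feed(posts)])  # the outrage post ranks first
```

Under these assumed weights, the outrage post outranks the calm one despite having fewer likes, which is the dynamic the paragraph above describes: the optimization target is attention, not accuracy or civility.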
The resulting information landscape has intensified polarization globally. Incidents like the murder of Charlie Kirk illustrate this dynamic, with perceptions diverging sharply across political lines. Each side sees the other as a monolithic group, leading to deeper divides.
Increased anger and suspicion emerge as artificial intelligence inundates users with conspiratorial content. Polarization manifests both horizontally, between political camps, and vertically, between citizens and institutions. This rising tension diminishes trust, which is crucial for managing conflicts.
While artificial intelligence does not intentionally sow discord, its focus on attention extraction has led to significant societal consequences. A highly polarized society struggles to reach compromises, impacting its ability to tackle critical challenges like migration, climate change, and terrorism.
Although censorship by officials may not be appropriate, artificial intelligence should not amplify damaging content either. Several factors have shaped our current information environment: AI now determines what content is consumed, and attention-grabbing information, which is often hostile, predominates.
A society overwhelmed by such negativity becomes dysfunctional. The concern should no longer be that AI might someday dominate our information sphere, but the recognition that this influence has already taken hold. On the positive side, awareness of the issue is rising, with initiatives in Israel aimed at protecting human attention from digital distractions.
Youth movements are organizing tech-free retreats, and municipalities are reducing smartphone use in schools. These efforts reflect a growing recognition that mental freedom depends on mitigating algorithmic exposure.
However, addressing societal issues requires more than curtailing individual use of technology; it demands a restructuring of the algorithms themselves. Humanity faces AI in two waves, with the first characterized by social media and the second by more sophisticated systems like language models.
The risks of the second wave are real and need urgent attention, but they cannot be effectively addressed unless the cooperation that the first wave has eroded is restored. First-wave AI remains under human control, and its influence can be recalibrated.
Adjustments to algorithms can change the kind of information presented to users, but profit motives can lead platforms to revert to practices that foster polarization. This raises critical questions about whose interests dictate such decisions.
Regulations should focus on algorithmic influence, not on censoring content creators. A healthy information diet must offer diverse perspectives, moving beyond a narrow echo chamber. Additionally, it is essential that this diet incorporates a range of human emotions and insights.
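One way such recalibration could work, sketched here purely as an illustration (the viewpoint labels, scores, and penalty factor are assumptions for the example, not a proposed standard), is a re-ranker that progressively discounts viewpoints the user has already been shown, widening the feed beyond a single echo chamber:

```python
# Hypothetical sketch of a diversity-aware re-ranker. The viewpoint
# labels, scores, and penalty factor are illustrative assumptions.

def diversify(posts, penalty=0.5):
    """Greedily build a feed, discounting viewpoints already selected."""
    seen = {}        # viewpoint -> number of times already shown
    feed = []
    remaining = list(posts)
    while remaining:
        # Each candidate's score shrinks the more often its viewpoint
        # has already appeared in the feed.
        best = max(
            remaining,
            key=lambda p: p["score"] * penalty ** seen.get(p["viewpoint"], 0),
        )
        feed.append(best)
        remaining.remove(best)
        seen[best["viewpoint"]] = seen.get(best["viewpoint"], 0) + 1
    return feed

posts = [
    {"id": "a1", "viewpoint": "A", "score": 10},
    {"id": "a2", "viewpoint": "A", "score": 9},
    {"id": "b1", "viewpoint": "B", "score": 8},
]
print([p["id"] for p in diversify(posts)])  # a second "A" post is demoted
```

In this sketch the second post from viewpoint A, despite its higher raw score, is demoted below the first post from viewpoint B, because the penalty halves the effective score of any viewpoint already shown. The penalty strength is exactly the kind of calibration decision the surrounding text argues should be debated openly rather than set solely by profit motives.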
The discussion on calibrating algorithms to reduce divisiveness must begin, inviting diverse perspectives to explore solutions. Healing polarization represents not just an urgent challenge but a fundamental humanistic endeavor, aimed at liberating human intelligence from the constraining effects of artificial intelligence.
