Shhhh! Digital Media - Home of Butterfly Dragon, Tales Of The Sanctum: A Lady's Prerogative, The Two Dragons, The Two Butterflies, We Who Stand On Guard, Night Boat, The Legendary of Xarn. This content is written and created by Brian Joseph Johns and is produced entirely in Toronto, Ontario, Canada at 200 Sherbourne Street Suite 701.
Thursday, March 19, 2026
The Truth Is All There Is...
Wednesday, March 18, 2026
The Hand That Feeds: Weaponized Suggestion Feeds
Biomagnetism And Hormones
Online Connection
Suggested Content Feed
To finish the human part of this article, I'd like to emphasize that this is most certainly already happening, that it is in the hands of malicious ideologies, and that it may well be behind the recent risks discussed in the news and via other online sources. Our awareness of this possibility serves to insulate us from it.
When Insiders Weaponize the Algorithm: The Hidden Hands Behind Your Feed
Modern platforms already know that ranking algorithms can shift what we believe and prioritize. A recent field experiment on X (formerly Twitter) showed that turning on its algorithmic “For You” feed increased the share of conservative political content people saw and nudged their policy priorities in a more conservative direction over just seven weeks. Large election‑period experiments on Facebook and Instagram similarly show that small changes in how content is ordered can substantially change what political information users encounter, even when their formal party identity does not flip overnight. These are platform‑sanctioned experiments, run at scale, that demonstrate a hard truth: whoever controls the knobs on the feed controls the flow of political reality for millions of people.
Crucially, this influence does not require editing anyone’s vote or hacking a database; it only requires shifting the probabilities of what appears at the top of the screen. By quietly boosting posts from some actors and demoting others, a feed can make certain movements feel energetic and ubiquitous while their opponents look marginal, toxic, or strangely absent. Users experience this not as “manipulation,” but as “what my friends are talking about” or “what’s trending now,” never seeing the counterfactual world where different choices were made.
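The mechanics are easy to understate, so here is a deliberately toy sketch of the point above. This is not any platform's real ranking code; the source names, engagement values, and boost weights are all invented for illustration. A hidden per-source multiplier reorders a feed without touching, removing, or flagging any single post:

```python
# Toy illustration only: not any real platform's ranking code.
# "group_x"/"group_y" and all weights are invented for this sketch.

def rank_feed(posts, boost=None):
    """Order posts by engagement score, optionally multiplying each
    post's score by a hidden per-source factor (default 1.0)."""
    boost = boost or {}

    def score(post):
        return post["engagement"] * boost.get(post["source"], 1.0)

    return [p["id"] for p in sorted(posts, key=score, reverse=True)]

posts = [
    {"id": "a1", "source": "group_x", "engagement": 0.90},
    {"id": "b1", "source": "group_y", "engagement": 0.95},
    {"id": "a2", "source": "group_x", "engagement": 0.80},
    {"id": "b2", "source": "group_y", "engagement": 0.85},
]

print(rank_feed(posts))
# ['b1', 'a1', 'b2', 'a2']  (pure engagement order, sources interleaved)
print(rank_feed(posts, boost={"group_x": 1.15, "group_y": 0.9}))
# ['a1', 'a2', 'b1', 'b2']  (group_x now owns the top of the feed)
```

A 15 percent nudge deletes and hides nothing, yet the top of the feed flips from mixed to uniformly one side, which is exactly the kind of shift no individual user could detect from their own screen.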
Security research is increasingly blunt about insider threats: real organizations are seeing employees approached or recruited by outside attackers, not just for money but for political or ideological motives. Surveys of security professionals report that a significant fraction have seen employees contacted to assist in attacks, including ransomware and data exfiltration, and guidance from national cyber agencies explicitly notes that malicious insiders may knowingly abuse their access to cause harm. Insider‑threat frameworks now talk about skills, motives, and opportunities: a technically capable employee who believes the platform is morally corrupt or politically dangerous suddenly looks like a prime candidate for quiet sabotage or “course correction.”
We already have concrete examples of staff abusing privileged tools on major platforms. Facebook, for example, has investigated an employee accused of using internal admin systems to stalk women, exploiting data he could see but ordinary users could not. Security case‑studies describe administrators who used elevated access to spy on colleagues, steal sensitive information, or retaliate against perceived enemies. These incidents are often treated as isolated misconduct, but they prove a structural point: once someone is inside the perimeter with the right privileges, the difference between “routine work” and “targeted abuse” is often just intent.
Combine what we know, and a disturbing, yet technically mundane, scenario emerges. An ideological group decides that the most efficient “psychological operation” is not mass propaganda from the outside, but subtle control from the inside. Instead of just buying ads or running troll farms, it encourages or plants sympathizers into the trust‑and‑safety, data science, or recommender‑engineering teams of one or more platforms. These are precisely the roles that can adjust ranking parameters, define “quality,” tweak toxicity thresholds, and create or override internal labels that determine which posts are boosted, throttled, or flagged.
From there, the operation does not need a cartoonishly obvious “ban my enemies” button. It only needs to re‑weight certain engagement signals or topic scores so that content from rival figures is slightly more likely to be shown in hostile contexts, or more likely to be buried below emotionally charged replies. It can quietly add or adjust internal tags that mark specific communities, hashtags, or domains as “borderline,” “low quality,” or “safety sensitive,” making their posts less likely to reach the top of feeds or to be recommended to new audiences. And it can tune the friend‑ or page‑recommendation system so that people loosely adjacent to the group’s ideology are steered into ever denser networks of allied accounts, while critics are surrounded by content that makes them look unhinged, hateful, or irrelevant.
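The label mechanism described above can be sketched the same way. In this hypothetical snippet (the label names and cap values are invented, not any platform's real taxonomy), a post carrying a quiet internal tag is never blocked or visibly moderated; it is simply shown to a fraction of its potential audience:

```python
# Hypothetical sketch: internal labels that cap distribution rather
# than remove content. Label names and cap values are invented.

REACH_CAP = {"borderline": 0.3, "safety_sensitive": 0.5}

def eligible_audience(post, base_audience):
    """Return how many users a post may be shown to after applying
    hidden per-label reach caps; the strictest label wins."""
    cap = min((REACH_CAP.get(label, 1.0) for label in post.get("labels", [])),
              default=1.0)
    return int(round(base_audience * cap))

normal = {"id": "p1", "labels": []}
flagged = {"id": "p2", "labels": ["borderline"]}

print(eligible_audience(normal, 10_000))   # 10000
print(eligible_audience(flagged, 10_000))  # 3000
```

Because the flagged post still appears somewhere, its author sees no takedown notice and has nothing concrete to appeal; only aggregate reach statistics, which ordinary users never see, would reveal the cap.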
To the everyday user – or even to a popular influencer – this does not look like an “attack.” It looks like a slow, inexplicable shift: old followers stop seeing their posts, replies become more hostile, growth stalls, and their ideological opponents seem strangely boosted by the algorithm. Targets may suspect shadow‑banning, but without access to internal logs and parameter histories, they cannot prove that a particular human hand nudged the dials.
The danger for ordinary users is not just large‑scale political steering; it is micro‑targeted harm. Because modern recommendation systems rely on detailed behavioral profiles and graph data, an insider with the right access can quietly focus tampering on specific individuals or small clusters (for example, women activists, minority journalists, whistleblowers) by adjusting how often their content is shown and to whom. They can focus on vulnerable users who are already under stress, by promoting more extreme, self‑harming, or rage‑inducing material into their feeds, exploiting known patterns where variable rewards and sensational content keep people hooked. They can target social bridges – those users who connect different communities – by making them appear more volatile or less trustworthy, thereby fraying the edges where dialogue might otherwise cross ideological lines.
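The same pattern shrinks to a per-person scale with almost no extra code. In this invented sketch (the author IDs and the dampening factor are hypothetical, chosen only to illustrate the paragraph above), the ranking pipeline is untouched except for one multiplier applied when an author sits on a private watch list:

```python
# Hypothetical per-target tampering: nothing is deleted or moderated,
# so no per-post action ever appears in a moderation log.
# The author IDs and dampening factor below are invented.

TARGETED_AUTHORS = {"journalist_17", "activist_42"}
DAMPEN = 0.4  # hidden multiplier applied only to watch-listed authors

def adjusted_score(author, base_score):
    """Quietly downrank content from targeted authors; everyone
    else's scores pass through unchanged."""
    if author in TARGETED_AUTHORS:
        return base_score * DAMPEN
    return base_score

print(adjusted_score("someone_else", 0.9))   # 0.9, untouched
print(adjusted_score("journalist_17", 0.9))  # ~0.36, silently suppressed
```

Two lines of configuration and one conditional are enough to make a specific journalist's reach collapse while every per-post moderation record stays clean.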
None of this requires breaking the recommendation engine; it only requires repurposing it. Algorithmic systems that are good at maximizing engagement and keeping people on the platform are, by design, good at finding what will grab a specific person’s attention or anxiety. In a health‑oriented context, this could mean nudging users toward more exercise videos; in a malicious context, it could mean surfacing more humiliation, outrage, or fear for a chosen set of names.
It is tempting to say “algorithms are to blame,” but that lets human decision‑makers off the hook. The same research that shows feeds can sway political attitudes and behaviors also shows that these systems are adjustable: we can turn features on and off, tighten or loosen ranking rules, and offer non‑profiling alternatives. Regulatory studies in Europe note that platforms often choose manipulative defaults and overwhelming interfaces, not because they must, but because these designs maximize growth and data extraction. That is a business decision, not a technological inevitability.
Insider‑threat research reaches a parallel conclusion: technology provides the means, but motives and oversight determine whether those means are used responsibly. Organizations that treat political neutrality and feed integrity as afterthoughts, or that centralize immense power in small, poorly monitored teams, are effectively trusting that no one with strong ideological commitments will ever abuse their role. That is not a security model; it is wishful thinking.
In other words, recommendation engines are scalpels: they can heal or harm with great precision. The risk is not the existence of the scalpel, but the absence of safeguards on who wields it, under what supervision, and with what accountability when things go wrong.
If we take the possibility of ideological insider psyops seriously, several responses become obvious. Platforms need to treat feed‑ranking logic and internal override tools as critical infrastructure, with strong access controls, four‑eyes approval for changes, and auditable logs for every parameter tweak that can affect political or sensitive content. External researchers and regulators need more visibility into how ranking systems behave around elections and other high‑stakes events, including independent experiments like those already done on X, Facebook, and Instagram. Users deserve meaningful options: genuinely accessible non‑profiling feeds, clear labels when content is boosted or throttled, and explanations of why they are seeing particular posts.
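What "four-eyes changes and auditable logs" could mean in practice can itself be sketched in a few lines. This is an assumption-laden toy, not a description of any platform's real tooling: every change to a ranking parameter needs a second approver, and each approved change is appended to a hash-chained log so that silent edits to history become detectable:

```python
# Toy sketch of four-eyes change control for ranking parameters.
# Class, field, and parameter names here are invented for illustration.

import hashlib
import json

class ParameterAudit:
    def __init__(self):
        self.log = []  # append-only list of approved change records

    def propose(self, param, value, author):
        """Draft a change; it has no effect until a second person approves."""
        return {"param": param, "value": value, "author": author}

    def apply(self, change, approver):
        """Apply a change only with an independent approver, and
        hash-chain the record to the previous one."""
        if approver == change["author"]:
            raise PermissionError("author cannot approve their own change")
        record = dict(change, approver=approver,
                      prev=self.log[-1]["hash"] if self.log else None)
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.log.append(record)
        return record

audit = ParameterAudit()
change = audit.propose("toxicity_threshold", 0.82, author="alice")
audit.apply(change, approver="bob")  # accepted and logged
try:
    audit.apply(audit.propose("topic_boost.politics", 1.2, "mallory"),
                approver="mallory")  # self-approval rejected
except PermissionError as e:
    print(e)
```

The point of the hash chain is that an insider who later rewrites an old record also invalidates every record after it, so tampering with the audit trail is at least loud, even if the original tweak was quiet.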
None of these measures will eliminate the risk that a motivated insider might try to twist the system to hurt enemies or advance a cause. But they make it harder to act in total secrecy and easier to detect patterns that deviate from declared policies. They also shift the narrative away from blaming “the algorithm” and toward examining the people and institutions that design, deploy, and oversee these tools.
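Detecting "patterns that deviate from declared policies" can likewise be made concrete. As a hedged sketch, an external auditor with access to impression data could compare per-post exposure between two otherwise comparable cohorts; the cohort numbers and the 25 percent tolerance below are invented, and a real audit would need careful cohort matching and statistical testing:

```python
# Illustrative exposure-disparity check; thresholds and cohorts are
# invented, not any regulator's or platform's actual methodology.

def exposure_rate(impressions, posts):
    """Average impressions per post; zero if the cohort posted nothing."""
    return impressions / posts if posts else 0.0

def flag_disparity(cohort_a, cohort_b, tolerance=0.25):
    """Flag if one cohort's per-post exposure deviates from the
    other's by more than `tolerance` (fractional difference)."""
    ra = exposure_rate(*cohort_a)
    rb = exposure_rate(*cohort_b)
    if min(ra, rb) == 0:
        return True  # can't compare meaningfully; treat as suspicious
    return abs(ra - rb) / max(ra, rb) > tolerance

# (impressions, posts) for two otherwise similar groups of accounts
critics = (42_000, 700)  # about 60 impressions per post
aligned = (98_000, 700)  # about 140 impressions per post
print(flag_disparity(critics, aligned))  # True: the gap far exceeds 25%
```

A check this crude proves nothing on its own, but persistent, policy-unexplained gaps of this size are exactly the signal that auditable logs and external access would let someone investigate.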
The technology, in other words, is a mirror: it reflects the incentives and ethics of those who control it. The real danger is not that feeds can be manipulated, but that we continue to build systems where such manipulation is easy, profitable, and largely invisible – and then feign surprise when bad actors step through the door we left open.
Citations:
- The political effects of X's feed algorithm (Nature)
  https://www.nature.com/articles/s41586-026-10098-2
- Recommender system in X inadvertently profiles ... (preprint)
  https://arxiv.org/abs/2602.02624
- The role of recommendation algorithms in the formation of ...
  https://www.sciencedirect.com/science/article/pii/S0306457325001840
- How do social media feed algorithms affect attitudes and behavior in ...
  https://www.science.org/doi/10.1126/science.abp9364
- Insider Threats: Your employees are being used against you
  https://blog.talosintelligence.com/insider-threats-increasing/
- Live discussion: Insider threats and abuse of privilege
  https://clickarmor.ca/insider-threats-and-abuse-of-privilege/
- Facebook Investigates Accusation That Employee Used ... (admin tools to stalk)
  https://www.yahoo.com/entertainment/facebook-investigates-accusation-employee-used-admin-tools-stalk-234230758.html
- Insider threat mitigation: Systematic literature review
  https://www.sciencedirect.com/science/article/pii/S209044792400443X
- Careless employees behind the majority of insider threat ...
  https://www.cybersecuritydive.com/news/insider-threat-malicious-negligent-employee/617656/
- Understanding and Mitigating Insider Threats in Operational Technology (OT) Systems
  https://www.dragos.com/blog/understanding-and-mitigating-insider-threats-in-operational-technology-ot-systems
- Millennial Considerations on Insider Threat (PDF)
  https://georgetownsecuritystudiesreview.org/wp-content/uploads/2019/05/Millennial-Considerations-on-Insider-Threat-FINAL-PDF.pdf
- New research shows online platforms use manipulative design to influence users towards harmful choices
  https://edri.org/our-work/new-research-shows-online-platforms-use-manipulative-design-to-influence-users-towards-harmful-choices/
- Social sciences: X's algorithm may influence political attitudes (Nature press release)
  https://www.natureasia.com/en/info/press-releases/detail/9242
- Managing Insider Risk – Recent Best Practices Guidance (PDF)
  https://www.blg.com/-/media/legacy-news-and-publications/documents/publication_5735.pdf?la=en
- Social Drivers and Algorithmic Mechanisms on Digital Media
  https://pmc.ncbi.nlm.nih.gov/articles/PMC11373151/
- How to protect your organization from insider threats (ITSAP.10.003)
  https://www.cyber.gc.ca/en/guidance/how-protect-your-organization-insider-threats-itsap10003-0
- The unappreciated role of intent in algorithmic moderation of abusive content on social media
  https://misinforeview.hks.harvard.edu/article/the-unappreciated-role-of-intent-in-algorithmic-moderation-of-abusive-content-on-social-media/
- The Ultimate Guide to Insider Threats (ebook)
  https://www.exabeam.com/wp-content/uploads/EBOOK-The-Ultimate-Guide-to-Insider-Threats.pdf
- What are algorithms and how do they make social media more harmful?
  https://counterhate.com/blog/what-are-algorithms-and-how-do-they-make-social-media-more-harmful/
Very Happy About EU News...
Tuesday, March 17, 2026
Monday, March 16, 2026
Tales of the Sanctum: Era of the Spellbound - Episode 11: The Tarot Be Joined (Finished March 16, 2026 14:15 EST)
Despite this storyline taking place mostly in Shepperton on the Thames, United Kingdom, it is entirely written in Moss Park, Toronto, Ontario, Canada. Shepperton is close to my heart in ideas rather than kilometers.
I am Brian Joseph Johns and this is Shhhh! Digital Media at https://www.shhhhdigital.com or https://www.shhhhdigital.ca in Toronto, Ontario, Canada at 200 Sherbourne Street Suite 701.
[Spellbound - Siouxie And The Banshees]
Do you like enigmatic characters, an engrossing story, magic, and the ever-atemporal weave?
Play Baldur's Gate 3 [On Steam]
Chapters
- An Unveiled Past - A Veiled Future
- Change Comes in Small Packages
- Gillie's Cards Speak
- Sanctum Meet - Compare Notes
- LHR London, Heathrow
Thursday, March 12, 2026
Coming Soon...
Helayne's Return
or watch it on the Official Shhhh! Digital Media YouTube Channel...
Thursday, March 5, 2026
A Short Public Service Message from Shhhh! Digital Media
Hi. Brian Joseph Johns here with a short public service message:
First of all, I am not a Taurus (Sun), and I am not being remotely controlled by any Taurus, nor am I (attempting to) remotely control any Taurus (or Tauri?). I myself have a Taurus Moon, but my ascendant sign is not Taurus, and that kind of thing isn't hereditary, seasonal or affected by issues of health (sarcastic jest intended).
This content is entirely produced in Toronto, Ontario, Canada at 200 Sherbourne Street Suite 701 under the Shhhh! Digital Media banner.

Click image for contact information...
Wednesday, March 4, 2026
Shhhh! Digital Media Presents... The Butterfly Dragon: Heroes of our Own Reimagined: Episode 10 - The Burden of Proof (Finished and new artwork added Wednesday March 4, 2026 13:15 EST)
Chapters
- Team is a Four Letter Word
- Two weeks later
- Bridal Path Party
- Jail Break Broke
- The Economy of Second Chances
- Twelve years ago
- Sooner or Later?
- Like Father, Like Son
Saturday, February 21, 2026
Thursday, February 12, 2026
Help Women, Children And Men Fight The Disease Cancer! Support The National Breast Cancer Foundation or get in on the Rock The Road Raffle and help Men fight Prostate Cancer with The Canadian Cancer Society!
You read that right!
You can help Women and the National Breast Cancer Foundation in their fight against Breast Cancer, help Children in their fight against Childhood Cancer, or enter the Canadian Cancer Society's Rock The Road Raffle for a chance at one of five grand prizes (there are literally hundreds of other ways to win). Either way, you'd be helping real-life scientists and researchers in the fight against the disease, cancer!
Cancer is a leading cause of death worldwide, killing women, children and men and accounting for nearly 10 million deaths in 2020, or nearly one in six deaths, according to the World Health Organization. Nearly half of all Canadians are expected to develop cancer in their lifetime; it is the country's leading cause of death, with smoking, poor diet, and aging as major factors.


