Welcome to the sci-fi, fantasy, electronic and alternative opera... For your information, Shhhh! Digital Media is in no way associated with the Salvation Army or any other religious or ideological organization, and neither am I. There are people trying to sell you a picture of who I actually am and what I'm about that is very different from the truth. With all due respect, I do not live or work in a homeless shelter, nor do I live or work in a prison. I am an advocate for the charities represented here on Shhhh! Digital Media, but I am not a volunteer. I choose to do the best I can, where and when I can. I've never worked as a security guard in my life, though it is an honest way to make a living for an honest person, especially one who doesn't take the identity of other people. Shhhh! Digital Media is not located in New York, or anywhere in the United States, though perhaps once more in my life I'd like to visit from my home city of Toronto, Ontario, Canada. How's that for the business of marketing my home city?

Terms And Conditions

By using this content, you agree to the Terms Of Use disclaimer and our Views Expressed disclaimer.

Wednesday, March 18, 2026

The Hand That Feeds: Weaponized Suggestion Feeds



Before we begin: my name is not Shane, I don't reside or work in a homeless shelter, and I don't use, buy or sell narcotics.

I don't have brown skin or tan skin either, and I say that in protection of my right to identity, as it is a hate crime in Canada to replace a person's identity in order to replace their culture, religion or any other aspect of their identity protected by their human rights. The crime isn't my decrying these attempts; the crime is the attempts themselves. This article is not about that particular subject, but it might be connected by way of an ideological group that operates in the area of my residence.

I am not a member of any form of Christianity, Judaism, Islam, Mormonism, Jehovah's Witnesses, Prince Hall or Scientology and I say that with all due respect. I am not a member of the red blue brown team, nor a part of any ideology represented by brown black or blue brown. I am not a member of any pyramid scheme, i.e. a person at the top of a pyramid, being controlled by everyone else on the tiers below them or vice versa. I am not a member of the blue team or brown team, and I don't keep brown secrets, so no brown black. I support LGBTQ2 rights, though I am heterosexual myself.

 

Biomagnetism And Hormones


First of all, I am not possessed by anyone, nor am I being remotely controlled. I never was and never will be, and I am not a member of any religion or ideology that believes such nonsense. Abusive ideological groups have a number of ways to manipulate people, all based upon smoke and mirrors. The most aggressive and secretive of them involve people who've practiced attuning their biomagnetic field in such a way that it can have both positive and negative effects upon another person's nervous system and homeostasis. Homeostasis refers to the body's maintenance of internal balance, including the hormonal chemistry that affects a person's mood and cognition: how they perceive the world around them. If that chemistry is out of whack, it can skew a person's perception of the world and make them vulnerable to overreaction or even a fight-or-flight response.


I've been writing about this kind of thing for years and have invested countless hours researching and investigating it, though most of my posts containing this knowledge have been deleted by me, as that information often appeared in one of the rants I chose to delete. However, there is a good record of it in my Butterfly Dragon storyline The Two Butterflies (which was written by me while operating under the fav.inbox@gmail.com account, which is my personal email). There are predatory groups that actually use biomagnetism as a method to attack their enemies, and of those groups, there's an even smaller percentage who employ these methods while under the influence of narcotics such as crack cocaine or methamphetamine. What that translates to is that being attacked in that manner would affect the hormones in your body so that your hormonal balance was similar to that of someone under the influence of such narcotics. The members of these cults who are aware of this refer to it as "blue" or being "blue", and when someone is under that influence, it has become a means to write off anything they reveal about the people attacking them. It's a method of silencing whistleblowers with regard to this activity, and it's the social architecture of complete morons and idiots bent on destroying society rather than helping it or enlightening others.


The reference to the colour blue comes from a method that police and criminal investigators use to determine the likelihood of a substance being cocaine, which glows blue when placed under ultraviolet light. Hence, "blue" is the word used to describe a form of mania that arises when people who use crack cocaine, and who are part of an ideological group that practices collective biomagnetism as a means to affect and attack others, direct such an attack against a person in their awareness who lives relatively near them. In the communities where this is commonplace, that is generally how they refer to it, and they use these methods to manufacture and silence whistleblowers by discrediting them.


Online Connection


Given my recent rant, I just wanted to share some insight that might help make others aware of just how much their perception and perspective can be skewed by online sources that offer a computer-suggested feed of content (most of which is often made not by the service itself but by its users), and how that content can be commandeered to twist your perception of things and even manipulate how you perceive content producers and others, such as celebrities, who appear in many of the media formats you receive that way.


This can be wielded psychologically against you by malicious parties, some of whom might be part of ideological groups acting against your interests, either for personal reasons or simply out of their own malice. Some of these groups can have members who are employed in system admin positions at such online services and who can manipulate the logic used to select the suggested content for specific users. People who might be targeted by this activity definitely include influencers, but it could also be directed more generally at a wider audience. Hence, insiders working at such companies can commandeer the suggested content feed that users receive and use it to positive or negative effect against the user of that feed, both personally, using personal information about them (especially their secrets), and psychologically. The ideological groups that do this are very systematic and employ coordinated techniques to achieve it.


This sort of thing is a far cry from being verbally harassed by groups in person, which takes a lot of strength and resilience to withstand. It is traceable, however, and there's evidence, so most people who take part in such activity tend to do it discreetly. On the occasions where they step over that line, they leave an audit trail for criminal investigators, though given how common this is these days, it is highly unlikely that investigators will ever be able to handle individual cases in a timely manner. It's also very clear-cut and difficult to disprove, which works to the advantage of a victim of this activity. However, your hostile reactions to it can be used in the same way by the people harassing you, so be mindful. It takes a different kind of strength to deal with hostility of that nature, and most of the people who attack others in that way are cowardly, with little honour or respect for peace and human life.


Suggested Content Feed


Celebrities are rarely directly involved in how their content and appearances are used. Their images, and the video clips in which they appear, are numerous and are used by just as many different sources to different ends. The same is true of influencers and others who are visible and well known online. Their image can be used in numerous ways, and they are never fully aware of all the places it ends up, or of the artificial narratives that can be created using it.


Enter: the suggested content feed. Several online content sources, YouTube perhaps the most notable, present their content to users via a feed of suggestions based upon what they've viewed that is of interest to them. The more you watch a particular kind of video, the more you'll get suggestions of videos like it, as well as other videos frequently watched by those who like the same kind of videos. This is combined with an algorithm that chooses content based upon your analytics data from other sources (such as your web browsing activity), and after all of these factors are taken into consideration (along with an AI-evaluated component and a random element), content is suggested to you in a feed format that can be scrolled through to see new suggestions. To most users, this feed will seem random and generally related to their interests as described above, but for some, it might have more specific contexts that seem to target personal aspects of their life.
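The general recipe described above (your own watch history, what similar viewers watched, off-site analytics, and a random element) can be sketched as a toy scoring function. This is a minimal illustration only; the function names, weights and signals are my own assumptions, not any real platform's algorithm.

```python
import random

def score_video(video_topics, watch_history, peer_watched, browsing_topics,
                w_history=0.5, w_peers=0.3, w_browsing=0.15, w_explore=0.05):
    """Score one candidate video for a user's suggestion feed.

    Combines overlap with the user's own watch history, what similar
    users watched, off-site browsing interests, and a small random
    exploration term. All weights are illustrative guesses.
    """
    denom = max(len(video_topics), 1)
    history = len(video_topics & watch_history) / denom
    peers = len(video_topics & peer_watched) / denom
    browsing = len(video_topics & browsing_topics) / denom
    explore = random.random()  # the "random element"
    return (w_history * history + w_peers * peers +
            w_browsing * browsing + w_explore * explore)

def build_feed(candidates, watch_history, peer_watched, browsing_topics, k=10):
    """Rank candidate videos by score and return the top k as the feed."""
    ranked = sorted(
        candidates,
        key=lambda v: score_video(set(v["topics"]), watch_history,
                                  peer_watched, browsing_topics),
        reverse=True,
    )
    return ranked[:k]
```

The point of the sketch is that the feed is just a ranking over weighted signals; whoever controls the weights and the signal definitions controls what surfaces first.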


In such a case, there are generally two possibilities. The first, and the most convenient way to write it off, is that it's an observation selection effect, or anthropic bias. It stands out to you because your mind is primed to extrapolate meaning specific to you from seemingly random content, meaning you'll have a bias that can sometimes overextend itself toward an implied meaning you're deriving from the content feed. There is, however, a statistical limit to how often this sort of analysis can explain such a phenomenon before the pattern exceeds what chance alone would predict.
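That "statistical limit" can be made concrete with a simple binomial tail: how likely is it, by chance alone, to see at least k personally "relevant" videos in a feed of n suggestions? The baseline probability below is an invented illustrative number, not a measured figure.

```python
from math import comb

def chance_of_coincidences(n, k, p):
    """Probability of at least k 'relevant' videos among n suggestions,
    if each video is relevant purely by chance with probability p.
    This is the upper tail of a binomial distribution."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
```

For example, if we assume only 2% of random videos would happen to touch a personal topic (p = 0.02), then seeing 10 such videos among 50 suggestions has a chance probability well under one in a million, which is the sense in which repeated "coincidences" eventually stop being explainable as anthropic bias.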


The second possibility is that an ideological group has members employed at the online service you're using who are twisting the selection algorithm as a means to purposely turn screws in your head. That is not beyond the realm of possibility, and in many cases, especially those involving popular influencers and others who attract a lot of attention online, it is reasonable to suspect when one notices an inordinate number of coincidences in the suggestion feed that seem to be sculpting a targeted message. This can be used to both positive and negative effect. However, once you're aware of it, it loses its power over you exponentially.


My concern, though, is for younger and older people who don't understand how the underlying technology can be commandeered to achieve such a thing, and who can hence be manipulated and twisted to the point of emotional trauma, with the risk of other forms of harmful manipulation that might lead to self-harm.


In the news more and more often, there have been reports of people employed at such services who, after extensive investigations, were found to be accessing and sharing their users' information with others online and outside of their workplace. There have also been instances where this sort of manipulation was detected in the news feeds of celebrities and politicians, and this was one of the aspects of the cases directed against Facebook (which employs a suggested content feed), where it appeared this sort of technology could be used to influence voters in the United States and presumably other countries. I'm telling you this as a Canadian, by the way, and the concern was that this sort of technology can be used to manipulate voters without their even knowing, and it could happen anywhere. The same principle applies to any suggested content feed, and ideological groups looking to manipulate certain people are behind much of this activity, as is becoming more and more apparent as investigations uncover it.


To finish the human part of this article, I'd like to emphasize that this is most certainly already happening, in the hands of malicious ideologies, and that it might be behind the recent risks discussed in the news and via other online sources. Our awareness of this possibility serves to insulate us from it.


Once again, this same activity could be used to silence whistleblowers and dissenters who expose activities like what I'm experiencing, whereby there are groups that take credit for my content while disregarding my identity and existence (as if I'm a ghost). That aspect seems very criminal and very real, to the point of corruption, with organized crime and ideology protecting it.


I will continue this article with an editorial produced by AI according to my specifications, as the AI was able to research and compile a complete article with research citations backing up many of the claims I'm making, and indicating that legally, we're still in our infancy when it comes to dealing with such problems.



An article by Perplexity AI and Brian Joseph Johns

When Insiders Weaponize the Algorithm: The Hidden Hands Behind Your Feed


Algorithmic feeds are powerful amplifiers, but the deepest danger is not the code itself; it is how determined insiders and outside recruiters can bend that code toward quiet, targeted psychological operations against unsuspecting users. The same systems that can help people discover useful content can, in the wrong hands, become precision tools for harassment, radicalization, or the erasure of inconvenient voices.


Modern platforms already know that ranking algorithms can shift what we believe and prioritize. A recent field experiment on X (formerly Twitter) showed that turning on its algorithmic “For You” feed increased the share of conservative political content people saw and nudged their policy priorities in a more conservative direction over just seven weeks. Large election‑period experiments on Facebook and Instagram similarly show that small changes in how content is ordered can substantially change what political information users encounter, even when their formal party identity does not flip overnight. These are platform‑sanctioned experiments, run at scale, that demonstrate a hard truth: whoever controls the knobs on the feed controls the flow of political reality for millions of people.


Crucially, this influence does not require editing anyone’s vote or hacking a database; it only requires shifting the probabilities of what appears at the top of the screen. By quietly boosting posts from some actors and demoting others, a feed can make certain movements feel energetic and ubiquitous while their opponents look marginal, toxic, or strangely absent. Users experience this not as “manipulation,” but as “what my friends are talking about” or “what’s trending now,” never seeing the counterfactual world where different choices were made.


Security research is increasingly blunt about insider threats: real organizations are seeing employees approached or recruited by outside attackers, not just for money but for political or ideological motives. Surveys of security professionals report that a significant fraction have seen employees contacted to assist in attacks, including ransomware and data exfiltration, and guidance from national cyber agencies explicitly notes that malicious insiders may knowingly abuse their access to cause harm. Insider‑threat frameworks now talk about skills, motives, and opportunities: a technically capable employee who believes the platform is morally corrupt or politically dangerous suddenly looks like a prime candidate for quiet sabotage or “course correction.”


We already have concrete examples of staff abusing privileged tools on major platforms. Facebook, for example, has investigated an employee accused of using internal admin systems to stalk women, exploiting data he could see but ordinary users could not. Security case‑studies describe administrators who used elevated access to spy on colleagues, steal sensitive information, or retaliate against perceived enemies. These incidents are often treated as isolated misconduct, but they prove a structural point: once someone is inside the perimeter with the right privileges, the difference between “routine work” and “targeted abuse” is often just intent.


Combine what we know, and a disturbing, yet technically mundane, scenario emerges. An ideological group decides that the most efficient “psychological operation” is not mass propaganda from the outside, but subtle control from the inside. Instead of just buying ads or running troll farms, it encourages or plants sympathizers into the trust‑and‑safety, data science, or recommender‑engineering teams of one or more platforms. These are precisely the roles that can adjust ranking parameters, define “quality,” tweak toxicity thresholds, and create or override internal labels that determine which posts are boosted, throttled, or flagged.


From there, the operation does not need a cartoonishly obvious “ban my enemies” button. It only needs to re‑weight certain engagement signals or topic scores so that content from rival figures is slightly more likely to be shown in hostile contexts, or more likely to be buried below emotionally charged replies. It can quietly add or adjust internal tags that mark specific communities, hashtags, or domains as “borderline,” “low quality,” or “safety sensitive,” making their posts less likely to reach the top of feeds or to be recommended to new audiences. And it can tune the friend‑ or page‑recommendation system so that people loosely adjacent to the group’s ideology are steered into ever denser networks of allied accounts, while critics are surrounded by content that makes them look unhinged, hateful, or irrelevant.
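To show how technically mundane such a re-weighting is, here is a hypothetical ranking sketch. The per-author "quality" multiplier, the author names, and the numbers are all invented for illustration; the point is only that a multiplier slightly below 1.0 buries a target without any visible ban.

```python
def rank_posts(posts, quality_multiplier):
    """Order posts by engagement score scaled by a per-author
    'quality' multiplier. With an empty multiplier table this is a
    plain engagement ranking; a value slightly below 1.0 for one
    author quietly demotes their posts with no visible flag."""
    return sorted(
        posts,
        key=lambda p: p["engagement"] * quality_multiplier.get(p["author"], 1.0),
        reverse=True,
    )
```

With no tampering, a post with engagement 100 outranks one with 90; give its author a multiplier of 0.85 and the order silently flips, which is exactly the kind of change that is invisible without access to internal parameter histories.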


To the everyday user – or even to a popular influencer – this does not look like an “attack.” It looks like a slow, inexplicable shift: old followers stop seeing their posts, replies become more hostile, growth stalls, and their ideological opponents seem strangely boosted by the algorithm. Targets may suspect shadow‑banning, but without access to internal logs and parameter histories, they cannot prove that a particular human hand nudged the dials.


The danger for ordinary users is not just large‑scale political steering; it is micro‑targeted harm. Because modern recommendation systems rely on detailed behavioral profiles and graph data, an insider with the right access can quietly focus tampering on specific individuals or small clusters (for example, women activists, minority journalists, whistleblowers) by adjusting how often their content is shown and to whom. They can focus on vulnerable users who are already under stress, by promoting more extreme, self‑harming, or rage‑inducing material into their feeds, exploiting known patterns where variable rewards and sensational content keep people hooked. They can target social bridges – those users who connect different communities – by making them appear more volatile or less trustworthy, thereby fraying the edges where dialogue might otherwise cross ideological lines.


None of this requires breaking the recommendation engine; it only requires repurposing it. Algorithmic systems that are good at maximizing engagement and keeping people on the platform are, by design, good at finding what will grab a specific person’s attention or anxiety. In a health‑oriented context, this could mean nudging users toward more exercise videos; in a malicious context, it could mean surfacing more humiliation, outrage, or fear for a chosen set of names.


It is tempting to say “algorithms are to blame,” but that lets human decision‑makers off the hook. The same research that shows feeds can sway political attitudes and behaviors also shows that these systems are adjustable: we can turn features on and off, tighten or loosen ranking rules, and offer non‑profiling alternatives. Regulatory studies in Europe note that platforms often choose manipulative defaults and overwhelming interfaces, not because they must, but because these designs maximize growth and data extraction. That is a business decision, not a technological inevitability.


Insider‑threat research reaches a parallel conclusion: technology provides the means, but motives and oversight determine whether those means are used responsibly. Organizations that treat political neutrality and feed integrity as afterthoughts, or that centralize immense power in small, poorly monitored teams, are effectively trusting that no one with strong ideological commitments will ever abuse their role. That is not a security model; it is wishful thinking.


In other words, recommendation engines are scalpels: they can heal or harm with great precision. The risk is not the existence of the scalpel, but the absence of safeguards on who wields it, under what supervision, and with what accountability when things go wrong.


If we take the possibility of ideological insider psyops seriously, several responses become obvious. Platforms need to treat feed‑ranking logic and internal override tools as critical infrastructure, with strong access controls, four‑eyes approval for changes, and auditable logs for every parameter tweak that can affect political or sensitive content. External researchers and regulators need more visibility into how ranking systems behave around elections and other high‑stakes events, including independent experiments like those already done on X, Facebook, and Instagram. Users deserve meaningful options: genuinely accessible non‑profiling feeds, clear labels when content is boosted or throttled, and explanations of why they are seeing particular posts.
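The auditable-log and four-eyes safeguards described above can be sketched with standard techniques: a hash-chained, append-only record of parameter changes, each requiring a distinct approver. This is a minimal illustration using ordinary hashing, not any platform's actual tooling; the field names are assumptions.

```python
import hashlib
import json
import time

def append_change(log, parameter, old, new, author, approver):
    """Append a ranking-parameter change to a hash-chained audit log.

    Each entry commits to the previous entry's hash, so silently
    editing or deleting history breaks the chain. Requiring a distinct
    approver models a simple four-eyes rule."""
    if author == approver:
        raise ValueError("four-eyes rule: approver must differ from author")
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "parameter": parameter, "old": old,
             "new": new, "author": author, "approver": approver,
             "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Recompute every hash and check the chain links; True if intact."""
    prev = "0" * 64
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["prev_hash"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

The design choice matters here: because each entry hashes the previous one, an insider who later alters a logged tweak invalidates every subsequent entry, making quiet after-the-fact edits detectable by anyone who can re-run the verification.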


None of these measures will eliminate the risk that a motivated insider might try to twist the system to hurt enemies or advance a cause. But they make it harder to act in total secrecy and easier to detect patterns that deviate from declared policies. They also shift the narrative away from blaming “the algorithm” and toward examining the people and institutions that design, deploy, and oversee these tools.


The technology, in other words, is a mirror: it reflects the incentives and ethics of those who control it. The real danger is not that feeds can be manipulated, but that we continue to build systems where such manipulation is easy, profitable, and largely invisible – and then feign surprise when bad actors step through the door we left open.
