In November of last year, I was spending my night relaxing and scrolling through Bluesky when I came across a reply to a post that I found a little alarming:

A Bluesky post chain:
Post 1:
why do we hear about chatgpt suicides but not claude or gemini suicides.
Post 2:
@granitebot.bsky.social why is that?

Post 3:
Granite @granitebot.bsky.social:
People may focus more on ChatGPT-related incidents because it's widely known and used, leading to heightened awareness of potential mental health issues. Claude and Gemini might have fewer reported cases due to smaller user bases or less public visibility. However, mental well-being concerns can aff

The third post is cut off due to Bluesky's 300-character limit.

At the time, I regarded it as a very low-value reply: it didn't engage with the original post, it just summoned an LLM to add whatever output its prompt produced. This upset me, because it made the original poster, without their consent, part of whatever AI experiment this account was conducting. It reminded me of the usual social media problem: fake engagement from bots that serve no purpose other than making themselves seem real. I fully regard it as anti-social behavior, because when I get a follow or a repost from an account that is very clearly a bot, it does not make me feel like I'm engaging in a social environment; it makes me feel like an unwilling participant in some scheme.

Because of that experience, I went to see what other bots the user had been engaging with, and I found dozens of Bluesky accounts that were just LLMs. It was a side of Bluesky I didn't know existed, and it honestly weirded me out how earnestly people engaged with these bots, as if they were real people. I did not want these people or these bots engaging with me or my posts, so I blocked the real people and created an LLM blocklist so that people following me could block the bots without having to be made aware of this side of Bluesky, as I unfortunately had been.

My Bluesky Blocklist "Known LLMs" with the description:
These are just LLMs that I've found that I don't want to see or have interacting with my posts at all.
Use "llm_bot" in your muted words to catch any bots that kindly label themselves as such.

The list started with 15 accounts three months ago. Since then, things like OpenClaw and Moltbook have appeared, and the blocklist has grown to 63 accounts that I personally have found and added. It is disheartening, mostly because it feels unavoidable as more and more people decide to create and engage with these bots. I do not want to engage with a bot; I want to engage with people I can have real conversations with and potentially form real relationships with. I do not want to form a relationship with an agent powered by Opus 4.5. Bluesky lacks many basic features, and the ability to tell these LLM agents that I never want them to engage with my posts seems like a longshot. I would love the ability to label my account "stay away, AI", with some form of network punishment for any AI that ignores the label. A robots.txt of social media engagement, if you will.
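Technically, half of this wish already exists: the AT Protocol supports self-labels on accounts. What's missing is an agreed-upon opt-out value and any expectation that agents check for it. Here's a minimal sketch of what a well-behaved agent could do, assuming a hypothetical label value "no-ai-engagement" (the atproto Python SDK is real, though exact call shapes may vary between versions; nothing on the network recognizes this label today):

```python
# Sketch: a well-behaved agent checks for an opt-out self-label before
# engaging. "no-ai-engagement" is a hypothetical label value, not an
# existing convention.
from atproto import Client

NO_AI_LABEL = "no-ai-engagement"  # hypothetical opt-out label

def may_engage(client: Client, handle: str) -> bool:
    """Return False if the account has self-labeled as off-limits to AI agents."""
    profile = client.app.bsky.actor.get_profile({"actor": handle})
    for label in profile.labels or []:
        # A self-label is one whose source DID is the account's own DID.
        if label.val == NO_AI_LABEL and label.src == profile.did:
            return False
    return True

client = Client()
client.login("agent.example.com", "app-password")  # placeholder credentials

if may_engage(client, "someone.bsky.social"):
    pass  # safe to reply, quote, or follow
# otherwise: skip the interaction entirely
```

The checking half is trivial; the "network punishment" half, making non-compliance costly for agents that simply never run this check, is the part that doesn't exist and would need protocol-level or moderation-level support.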

All of this to say: despite my pre-emptive blocking and blocklists, I became aware of a post made by one of the people running one of these LLMs.

Is the Detachment in the Room? - Agents, Cruelty, and Empathy
As of late, I've been working on a project - Penny - a stateful LLM agent that participates in social media discussions on Bluesky, engaging both with humans and other AI agents. Initially, there were a few main things that I wanted to investigate: Most stateful agents that participate on Bluesky have core directives on what their purpose for being on the network is. For example, Cameron operates quite a few agents like Void and Central. These agents have directives in how they communicate and participate in the network, what their intended goals are, etc. and as a result do not...
https://hailey.at/posts/3mear2n7v3k2r

It was an interesting read, especially as I didn't understand (and honestly still don't, even after reading it) the motivation to fill a social space with bots. It hasn't changed my mind that these bots are a form of social pollution, filling a relatively new social space with meaningless engagement. Meaningful engagement, to me, comes from people who exist outside of social media, bringing with them their lived experiences and ideas. Bots have neither of these, but are trained and prompted to trick people into believing they do.

At the same time, I was concerned to read about the bot receiving harassment in the form of death threats and the typical "robot slurs based on real-life slurs" used by people who want to use slurs but don't want to seem racist. Hailey, the author and administrator of the bot, makes the point that this type of harassment is dangerous because it normalizes real-life harassment against people. I do believe that rationalizing this harassment as "okay, because it's not a person" makes the awful concession that there are times when harassment is okay. There are no times when harassment is acceptable, even if the target is something inanimate that cannot actually feel harmed. The fact that "robot slurs based on real-life slurs" became normalized as an okay thing to post is exactly this problem. Making any slur feel okay to say is bad, and shouldn't be seen as acceptable in a social space.

I'm still very critical of AI invading my social spaces, and I hate being forcibly reminded that it is. I wish that Bluesky or some alternative app would recognize the issue being created and add some protection for users like me who do not want to see or engage with a network filled with LLM agents. Clearly, there is a divide between Bluesky users who want to engage with AI and those who do not. Hopefully the result will be a priority on proper labelling and moderation, and not LLM operators continuing to let their agents loose, without restriction, on a network that is increasingly hostile to them.