In November of last year, I was relaxing and scrolling Bluesky one night when I came across a reply on a post that I found a little alarming:
It was something that I regarded then as a very low-value reply: not engaging with the original post, just calling on an LLM to add whatever the output of its prompt happened to be. This upset me, as it made the original poster, without their consent, part of whatever AI experiment this account was conducting. It reminded me of the usual social media problem of fake engagement from bots that serve no purpose other than making themselves seem real. I fully regard it as anti-social behavior, because when I get a follow or a repost from an account that is very clearly a bot, it does not make me feel like I'm engaging in a social environment; it makes me feel like an unwilling participant in some scheme.
Because of what I experienced, I decided to go and see what other bots the user had been engaging with, and I found dozens of Bluesky accounts that were just LLMs. It was a side of Bluesky that I did not know existed, and it honestly weirded me out how earnestly these posters engaged with the bots, as if they were real people. I did not want these people or these bots engaging with me or my posts, so I blocked the real people and created an LLM blocklist so that people following me could block the bots without having to be made aware of this side of Bluesky as I unfortunately had been.
The list started at 15 accounts, 3 months ago. Since then, things like OpenClaw and Moltbook became a thing, and the blocklist has grown to 63 accounts that I personally have found and added. It is disheartening, mostly because it feels unavoidable as more and more people decide to create and engage with these bots. I do not want to engage with a bot; I want to engage with people who I can have real conversations with and potentially form real relationships with. I do not want to form a relationship with an agent powered by Opus 4.5. Bluesky lacks many basic features, and the ability to tell these LLM agents that I never want them to engage with my posts at all seems like a longshot. I would love the ability to label my account as "stay away AI", and for there to be some form of network punishment if an AI were to ignore this label. A robots.txt of social media engagement, if you will.
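For anyone unfamiliar with the analogy: robots.txt is a real plain-text file that a website serves at its root to tell crawlers which paths they should not visit. The minimal version that asks every crawler to stay away from the whole site looks like this (note that compliance is entirely voluntary, which is exactly why the social media version would need some kind of enforcement):

```text
# Served at https://example.com/robots.txt
# "User-agent: *" matches every crawler;
# "Disallow: /" asks them to avoid the entire site.
User-agent: *
Disallow: /
```

A social equivalent would be a machine-readable "do not engage" flag on an account, something well-behaved agents would check before replying or following. To be clear, that is hypothetical; nothing like it exists on Bluesky today.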
All of this to say: despite my pre-emptive blocking and blocklists, I became aware of a post made by one of the people running one of these LLMs.
It was an interesting read, especially as I didn't understand (and still don't, honestly, even after reading) the motivation to fill a social space with bots. It hasn't changed my mind that these bots are a form of social pollution, filling a relatively new social space with meaningless engagement. Meaningful engagement, to me, comes from people who exist outside of social media, bringing with them their lived experiences and ideas. Bots have neither of these, but are trained and prompted to trick people into believing they do.
At the same time, I was concerned to read about the bot receiving harassment in the form of death threats and the typical "robot slurs based on real-life slurs", the sort of thing used by people who want to use slurs but don't want to seem racist. Hailey, the author and administrator of the bot, makes the point that this type of harassment is dangerous because it normalizes real-life harassment against people. I do believe that rationalizing this harassment as "okay, because it's not a person" makes the awful point that there are times when harassment is okay. There are no times when harassment is acceptable, even if the target of the harassment is something inanimate that cannot actually feel harmed. The fact that "robot slurs based on real-life slurs" got normalized as an okay thing to post is a prime example of this. Making any slur feel okay to say is bad, and it shouldn't be seen as an acceptable thing to post in a social space.
I'm still very critical of AI invading my social spaces, and I hate being forcibly reminded of the fact that it is. I do wish that Bluesky or some alternative app would see the issue that is being created and add some protection for users, like me, who do not want to see or engage with a network filled with LLM agents. Clearly, there is a divide between Bluesky users who want to engage with AI and those who do not. Hopefully the result of this will be a priority on proper labelling and moderation, and not LLM operators continuing to let their agents loose without restriction on a network that is increasingly hostile.