If you've signed up for Bluesky, you've signed up for offloadable moderation.

@Coyote ok, moved it here because I think this is an interesting discussion.

So, I know there are laws around content, and there are some here where I live as well. But I’m not interested in the legality for the moment; I was asking you purely from a personal perspective.

I’m curious what your intuition is when it comes to this topic.

@starbreaker is a smart man and has already found the parallel I was drawing. No recommendations and no algorithmic curation but also no moderation is just the web. And on the web the only moderation is—or at least should be—the legal system.

So I’d be curious to know if such an arrangement on a social media level would be acceptable in your opinion.

3 Likes

Link to previous thread. (I see you added it too, but I overlooked it at first so I figured I’d make it a little more prominent.)

From a personal perspective? “No moderation whatsoever” still sounds miserable. If my comment section were getting targeted with deliberate spam 24/7 for months on end by someone with an axe to grind who kept making new accounts for block evasion, and if I contacted staff to report it as harassment, I wouldn’t be satisfied with a response like “well, at least we didn’t algorithmically recommend him to you.”

1 Like

Individual, independently managed sites on the web are quite different from a shared platform/site with independent users.

If the web is an archipelago of islands, some islands being good actors, some being bad, our hypothetical platform is more like a bad neighbourhood with some good people trapped in it.

It’s easier to walk across the street and put in your neighbour’s window than it is to set sail and raid his village. And your neighbour doesn’t necessarily have the capability to strike back in kind; he just happens to live on the same street. The island you’re attacking, meanwhile, presumably has the technical know-how to sustain and defend itself to some extent; after all, it’s a self-established island.

@Coyote I’m gonna ask a bunch of random questions because this is one of those topics that I find incredibly interesting since it’s at the intersection of technology and human interactions.

Why is that? Keep in mind I’m asking in the context of a social platform that doesn’t show you literally anything you didn’t ask for. In the context of a “normal” social media platform I obviously agree that it sounds miserable.

Isn’t this a problem that’s easily solvable by a simple privacy setting type of thing? “Only allow replies from people I follow” and that problem is gone. Or am I missing something?
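Just to sketch what I mean, the whole setting is basically a one-line gate. (Hypothetical User type and followed_only_replies flag here, not any real platform’s API.)

```python
from dataclasses import dataclass, field

@dataclass
class User:
    handle: str
    follows: set = field(default_factory=set)   # handles this user follows
    followed_only_replies: bool = False         # the hypothetical privacy setting

def reply_allowed(author: User, target: User) -> bool:
    # Deliver a reply only if the target permits replies from this author.
    if not target.followed_only_replies:
        return True
    return author.handle in target.follows

alice = User("alice", follows={"bob"}, followed_only_replies=True)
print(reply_allowed(User("bob"), alice))      # True: alice follows bob
print(reply_allowed(User("mallory"), alice))  # False: stranger, dropped silently
```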

@Frankie

In the context of what I’m proposing as a thought experiment, so no algorithm, no curation done by the platform itself, why do you think it’s different? I’m talking from a conceptual standpoint; there are obvious structural differences, of course.

The structural differences inform the way you interact with others on the platform to such an extent that any concept that doesn’t account for them doesn’t make any sense. As I said: islands vs. a neighbourhood.

Social media is low barrier to entry, high exposure, high network effect, easy to gain lots of attention. There are tons of people on the site, and they have no problem finding each other. It’s easy to follow people, easy to get trends going. The user gives up much of their control over the structure in exchange for getting access to an easy-to-use platform with a large user base and little friction. None of this really has anything to do with algorithmic content curation or moderation. Every social media site has a problem with dogpiling, attention vortices and harassment, even the ones with viciously proactive moderation.

You have moved into an apartment in a new neighbourhood; you don’t have much control over whether it’s a good or bad one. If it’s a bad one, move out or get used to it, sucks to suck.

An example of a social media site with no real moderation and no algorithm shaping content was 8chan (image boards are definitely social media). We all know how 8chan went.

Go spin up a free-to-join Mastodon instance with no moderation other than the letter of the law and absolutely no discourse nudging from the mods, and see how long it lasts and how big it gets before you have to burn it down.

A social media site with no moderation will devolve, inexorably, to 8chan, algorithm or no.

Hosting a site is (relatively) high barrier to entry, low exposure, low network effect, difficult to grow an audience. You shoulder a lot of work that you don’t really have to if all you want to do is talk about stuff, including ongoing maintenance and updates, and the pace of communications with others is slow and intermittent. In exchange you get a huge amount of agency over what the site will look like, what it will be about, and how others get to interact with it. You are in control; you’ve struck out on your own to set up a little island settlement (hopefully without doing a cybercolonialism).

Now, part of that workload is security and privacy, and if you don’t do things right you’re hanging your ass out for the world to see. But technical attacks are uncommon, and the golden rules (keep business and pleasure separate, think before you post) apply to a website the same as to social media.

A website without moderation is just a website.

A practical example (warning, mucho texto):

Let’s say you and I are both on Twitter2 or whatever the next fad will be called. Twitter2 promises moderation by the law and nothing else, no algorithmic nonsense, and powerful user controls.

You love Glorbo (Glorbo’s great, so babygirl); I think Glorbo is shit (he is shit).
You stumble across my posts denigrating Glorbo, your God and Master.
Perhaps you’re searching for his name, maybe you’re browsing by tag; nothing algorithmic has led you to my post, and nobody at Twitter2 curated it into your daily bucket of ragebait. It’s shown up organically in the discovery mechanisms of the platform through some combination of user curation and your own searching.

You obviously get very upset at my blaspheming and call in your fellow GlorboGoons; it only takes a few seconds to post a comment under my post to summon them. They start doing all the usual social media things: spamming all my comments and posts, hashtagging me or whatever it’s called to draw more attention to my disgusting views, gore-bombing my DMs, and as more of them turn up, the relevant hashtags and search terms get saturated with me, drawing more of them in. Because Twitter2 is the everything app2, it’s very easy for your minions to see what else I’m into or who else I interact with and harass them as well.

There is very little I, a regular user, can do about this other than blocking individuals, but it takes 20 seconds to make a new Twitter2 account, so it’s a never-ending race. I can maybe block non-friends from leaving me comments, but then they just spam the comments of everyone else I interact with to make them understand how terrible a person I am instead. They know what tags I look at and what I search for (because they can still see my profile, either through a clean sock puppet or via archived versions of the site), so they spam those too. I leave the site for a while until things blow over; periodically over the next few years, someone stumbles on the old drama and decides to fuck with me a little again.

None of this is particularly unusual or extreme on poorly moderated forums or on Twitter, the site everyone is trying to replicate for some reason. Say something negative about K-pop sometime; there will be human waste in your letterbox by the end of the week.

If the platform had site-wide moderation I could ask them to step in and smack you and a few other ringleaders, maybe ban @ing me for a few weeks, pour water on the fire. But there isn’t any, so I can’t.

Now the website.
I run GlorboFuckingSucksdotcom (remember when he knocked those children down?), and you, the ever-intrepid Glorbo fan, find my site. Obviously those children had it coming, so you decide you don’t like my site and you’re going to leave a comment telling me to do something improbable with a cordless drill. Comments require approval before they appear, so yours doesn’t turn up, and it will be rejected whenever I get around to it.

You decide you’re going to send me a nasty message. You now have to actually sit down and write an email, like it’s 1999. It’s intimidating and hard; it takes more than 15 seconds. You send your email; I read it, get a little pissed off, delete it, and put your email address in my spam filter. You send another, realise it’s probably been filtered, make a couple of sock addresses and send me a bunch more messages (making one email account on a service that will let you actually send emails that won’t get bounced automatically by my mail server takes 10 times longer than making a new Twitter2 account). I roll my eyes and put a few keywords into my spam filter. Nobody but you and I know anything is going on. I check my logs and ban your IP address from my site.
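That escalation is about all the “moderation tooling” a personal site needs. A rough sketch of the idea (the addresses and keywords are made up for illustration):

```python
# Step 1: block the known address. Step 2: keyword rules catch the socks.
blocked_senders = {"troll@example.com"}
blocked_keywords = {"glorbo", "cordless"}

def route_mail(sender: str, body: str) -> str:
    if sender.lower() in blocked_senders:
        return "spam"        # known-bad address, filtered outright
    if any(kw in body.lower() for kw in blocked_keywords):
        return "spam"        # sock puppet, caught by content instead
    return "inbox"

print(route_mail("troll@example.com", "hello"))            # spam
print(route_mail("sock1@example.com", "Glorbo is great"))  # spam (keyword)
print(route_mail("friend@example.com", "nice site!"))      # inbox
```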

You go onto GlorboisGoddotcom, the GlorboFan megasite. You post about my site and how evil I am and get a posse going. The mods then take your post down, because witch-hunting is banned after the event and they’re on thin ice with Cloudflare.

But 10 or 15 fellow Glorbsters rally to the flag and begin attacking my site. They can do literally nothing, for the same reasons you weren’t able to do anything. Some of them have heard of this thing called DDoS, but being internet trolls and probably 13, they don’t really know how to do that, so they get bored and piss off. On the off chance one of them is a 30-year-old IT worker I might be in danger, and maybe the site will also get DDoSed for a bit. Who gives a shit. I can pay for protection or just let them get bored. In two months this obscure event will live on only in the memories of the participants.

Obviously, if I have posted IRL personal information on my Glorbo Hate site I’m going to get some presents in the mail, but that’s honestly on me, business and pleasure (hating) don’t mix, and I deserve a rap on the knuckles.

2 Likes

From your questions here, it sounds like you’re viewing automated recommendations as the primary reason that websites-with-user-uploaded-content need moderation. Is that impression off the mark?

It’s a lot more complicated than this.

To start, I think social media in the traditional sense is fundamentally incompatible with human nature and human behavior. That is because as soon as you put enough people in the same room, some will inevitably start behaving in fucked up ways that will result in everyone else having a bad time.

And I don’t think that’s a problem you can solve in any reasonable way. You can ban, shadow-ban, block, filter, you name it. People will still find ways to be annoying.

Now, there are ways around this issue, but those ways make no business sense. I hinted at this idea of social media with no proactivity on the platform’s side. So no curation, no recommendations.

That is a platform that’s doomed to fail. But it’s also basically RSS, in a way. You sign up and nothing happens unless you want it to happen.

If I decide to follow your site using RSS and then you start posting some nasty stuff, there’s no moderation that can be applied. I can stop following, of course; I can try to engage with you if I think it’s worth doing; or I can go straight to the actual authorities if your content is actually illegal, though that’s a massive gray area because of different laws in different places and all that, so it’s not really feasible.
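To make the RSS comparison concrete, here’s a minimal sketch of the pull-only model from the reader’s side (hypothetical feed URL; feedparser is a third-party library):

```python
import feedparser  # third-party: pip install feedparser

# The reader's entire "moderation" surface is this list. Nothing arrives
# unless the user put the feed here, and unfollowing is deleting a line.
subscriptions = ["https://example.com/feed.xml"]

for url in subscriptions:
    for entry in feedparser.parse(url).entries[:5]:
        print(entry.get("title"), entry.get("link"))

# The site that turned nasty simply stops existing for this reader;
# there is no platform in the middle to moderate, boost, or suppress.
subscriptions.remove("https://example.com/feed.xml")
```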

I believe that social groups are usually self-regulating if you leave them alone, but everything falls apart if you inject yourself into the mix and start doing curation, signal boosting and all that shit.

And to be clear, I know why platforms are doing it and I know it won’t change.

The reason I was interested in this topic is because, even though I see the point you’re making in your article, I do find Bluesky’s approach to moderation to be an interesting one from a technological but, more importantly, from a human point of view. Because it’s a step in the direction of asking people to be more involved in the process of keeping a community they’re part of sane.

Now, maybe that’s doomed to fail. Or maybe, as you said, it’s just them wanting to not do moderation, which I can also understand, because doing moderation with humans is just an impossible task, and also one that completely burns out the people who are doing it.

So IF we want to have social media (and I personally don’t; it should all crash and burn) we need to figure out ways to solve this issue of moderation.

This is why I’m intrigued by what other people think around this topic.

1 Like

Well yes, in its current form. But that’s literally not what I’m asking here. I’m asking in fact the exact opposite.

What if you make social media not “high exposure, high network effect, easy to gain lots of attention”? I know it doesn’t make sense from a business perspective. Again, I’m not an idiot, I know why social media is the way it is and I also know things won’t change anytime soon.

I’m asking these as a thought experiment to push my intuition on the subject.

As for your example, that’s an interesting one and I have thoughts! But I’ll answer that later when I’m not on my phone, because typing here is so annoying.


Ok, time to tackle your long example. Before I dive right in, let me just say that in the context of current social media your example is perfectly reasonable and valid.

In this example, aren’t you still subjected to the endless stream of messages, though? It doesn’t really matter if you have to approve them; you still have to go through them, so from the point of view of someone who wants to fuck with you it doesn’t really matter.

In fact I’d argue it’s probably even worse, because there might be legitimate comments from other people mixed in there, so you have to go through them; you can’t just ignore everything. Or, well, you obviously can do that, it’s just not ideal.

Let me ask you a potentially provocative question: isn’t this just the result of social media working as intended? The moment something is designed to maximise human interaction, isn’t it obvious that it will inevitably maximise all aspects of human behavior, including the shitty ones?

Being able to reach 10,000 people super easily also means 10,000 shitheads will be able to reach you just as easily, and that’s part of the game. I obviously have no sympathy for social media in general, and I think people should all stay away from it because it’s fucking useless.

That said, though, I think there are technological ways to make social media a bit different. The problem is that those make no financial sense and are not designed to generate “engagement”.

leaving users to do the cleanup work […] for things that are ordinarily the responsibility of site staff

Discord, Twitch, Reddit, YouTube, Facebook Groups, and others all devolve some moderation responsibility to users. So can we really say cleanup by site staff is how it is done “ordinarily”? We touched on this in another thread about Mastodon.

How would that be social media in any meaningful sense? What would this look like, at a high level? The point of social media, and the reason it’s popular, is to be those things; never mind corporate interest, I would argue those features are definitional.

Until I decide I’m bored of parsing them, or selectively turn them off, or limit comments to only a handful of approved, known-good IPs. I think the important bit is that the attacker does not get the validation of their comment showing up, and the comment doesn’t get read by other people. The herd mentality is real; negativity snowballs quickly if allowed to be too visible for too long.

Yeah I might miss a positive comment or two, but what’s the ratio? Would you go sit in a bar and let people scream insults at you all night in exchange for one or two compliments mixed in somewhere?

Just denying comments entirely is the quickest and most efficient way to get these kinds of people to go away; if I care that much, after the storm’s over I can go through the backlog and approve the few good ones.

Besides that, the collateral damage is reduced or eliminated; my perambulations through other sites aren’t going to be infected with hate towards me.
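Mechanically, a pre-moderation queue is dead simple. A sketch with made-up types, just to show why the attacker gets nothing out of it:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    body: str

queue: list[Comment] = []       # held comments, invisible to every visitor
published: list[Comment] = []   # the only thing the site actually renders

def submit(author: str, body: str) -> None:
    # The attacker never gets the validation of seeing their comment appear.
    queue.append(Comment(author, body))

def looks_ok(c: Comment) -> bool:
    # Hypothetical stand-in for the owner's judgment call.
    return "drill" not in c.body.lower()

def review() -> None:
    # Run whenever the owner feels like it; during a storm, maybe never.
    while queue:
        c = queue.pop()
        if looks_ok(c):
            published.append(c)
```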

Not really imo. It’s a failure of consequence.

If the purpose of social media is maximising human interaction by decreasing all the offline barriers to socialisation, part of the balancing act has to be controlling the impulse to cruelty that consequence normally tempers.

IRL there are real consequences to behaving terribly in social situations, so the threshold that has to be passed before people are comfortable doing it is quite high. Online, acting under a handle, there is far less blowback to being a dick, and everyone else is also text and JPGs and a bit dehumanised, so people act horribly to each other. This is why moderation is needed more often online than IRL: I can’t reach across the table and punch you when you cross the line, and you know it.

People online will always be more vicious than off, but smart services know there have to be rules and you have to be harsh about them. Plenty of services run with imperfect but consistent moderation quite happily for many years, whereas the best-run service will fall apart quickly if the moderation becomes inactive.

Those services devolve responsibility to select users, making them mods, with moderating authority over the wider pool of regular users they were drawn from.

They are in an elevated position with greater powers and a larger toolkit than the average user. If the corporate moderation team is SWAT, or the Intelligence services, volunteer mods are basically beat cops.

This is a pretty different setup from our hypothetical no moderation site where there is no mediating party on the ground and I am responsible for keeping my own peace.

Do you think this is an intrinsic property of our social dynamics, or just a byproduct of the fact that usually there aren’t enough people clustered together who will behave badly? Because from what I can observe, in certain circumstances people do end up behaving badly anyway because they have enough support. It’s just that in normal groups, if you’re a dickhead you’re simply pushed aside.

I agree with you, though, that anonymity is an important factor. One I’d personally ditch, because overall I think it does more harm than good, but that’s an entirely separate discussion.

You can certainly argue that. I’d personally not agree with it, though. I think those are just the characteristics of successful big social media platforms, but I don’t think you have to have them in order to create a social platform. If you create a platform people can use but where the expectation is to connect with maybe 100 people, and where there’s no built-in mechanism to go viral, you can still call that social media. It’s just focused on other social values.

As another example, things like Strava or AllTrails are, technically speaking, social networks. You have a profile and you post your content, but they’re clearly very oriented towards specific things, and so the end result is obviously very different from a Twitter or an Instagram.

The issue is that these terms are vaguely defined and they mean very little at this point. You can lump YouTube, Instagram, TikTok, Twitter, Discord, and Reddit all under the same “social media” umbrella, but those are clearly very distinct experiences with different dynamics, and so at that point the term itself becomes rather useless.

If we stay in the context of microblogging platforms, which is where the discussion started since that’s what Bluesky is, I think the point you’re making is probably more valid. But again, there are examples of similar platforms that are built very differently, for example this one: https://minus.social

I think most social media converges towards a similar set of issues because those usually emerge from the fact that the companies behind them want to maximise engagement and profit. But that doesn’t mean that alternatives aren’t possible.

Thoughtful replies. This type of conversation fascinates me as it is very close to one aspect of a large project that I am working on…Therefore, my reply will indirectly and discursively address the topic of the thread…

Before the Yesterweb forum shut down, I had written an article that briefly touched upon the subject of using different types of (polycentric) governance structures within online communities: Building Vital Communities Virtual & Actual

The few people who were still interested in focusing on similar ideas split off into another forum, The-Web-Raft. It was never very busy, and hardly anyone posts there anymore, but this subject of online governance cropped up in different ways when exploring why the Yesterweb shuttered as a whole, trying to figure out what artists who are stuck within “social media” might do instead, and conjecturing how online concerns can spill out into real life to effect massive changes within society. I am attempting to create a collaborative study tool that implements some of those ideas, but it is only one aspect of that large project that I’m talking about, so I haven’t really focused on programming it.

I have posted snippets of some of that research here, particularly within the threads Creation of Computers on the Local Level and Creation of Computer Networks on the Local Level, but many of my posts on most places are directed at the same overarching goal of furthering life for everyone.

Generally, I see that the frequency of posts regarding more fundamental social issues seems to be increasing. I think many people have an idea or feeling that massive changes are coming, but I am not sure how many know the extent or the significance, hence my rambling…It’s almost like WWII meets the 1960’s counter-cultural revolution amidst global ecological crisis…The ways that people communicate and the types of information that they share play a pivotal role in how all of this will unfold…

2 Likes

Yes, they do.

We can say that there is a portion of cleanup work that is done by staff ordinarily. In an earlier draft I went more into this, but I decided to make cuts to that section for the sake of being concise.

To elaborate – I’m conceptualizing problems-on-websites as roughly categorizable into two groups: 1) stuff handled on a user level, and 2) stuff handled on a site staff level. Not everything belongs in category two, but not everything belongs in category one, either.

What’s distinct about Bluesky’s ethos is that they’re looking at category two and saying they want to shrink that category more than what they themselves see as normal for other platforms.

On that note – I think it’s worth noting the factor of scope. If someone is a user-moderator of a subreddit, a Facebook group, a Discord server, a Twitch stream, a YouTube channel, or a Dreamwidth community, the scope of what they moderate is that community, not the entire website. As it should be.

Yeah “social media” is a vague cluster property that can encompass a lot of different things. For that reason I’m averse to big ontological generalizations about it.

@purelyconstructive Have you ever read any David Graeber? I think some of his work might appeal to you, given some of the aims you’ve described.

1 Like

I think it is partly intrinsic and partly because it’s much harder to create the right conditions IRL. I don’t think online behaviour maps to IRL behaviour well; the two are different worlds.

Online, empathy is basically absent: you aren’t a person, you’re text on a monitor; I can’t see the effect my words are having on you, and there is no tone, body language, or other soft markers to inform our interaction. On a scale of empathy running from “imaginary conversation in my head” to “talking to my Mum”, people online feel much closer to the imaginary end of the scale than a normal face-to-face conversation.

Offline I can see, hear and smell you; I am mostly operating externally; things are real. Offline I am acting in context as myself, not a projection. And arguing takes more energy and is more frictive; the stakes are higher and it’s continuous. If I start an argument or behave negatively, the effects cannot be deferred or escaped in the same manner as they can be online.

I don’t think the number of people is the metric that determines when people feel comfortable engaging in bad behaviour; it’s just a useful rule of thumb when talking about online communities.

Offline I think it’s more determined by the energy required to engage, motive, means, opportunity and likely response.

Go into a busy bar or to a public event: most people are genial even when they’re drunk or semi-anonymous. Even sports games, where big crowds, charged emotions, and copious amounts of alcohol intermix, and events like Halloween, where it’s socially acceptable to be outside in a disguise, are mostly peaceful.

Most arguments are small, between relatively few people, and end without major wider consequence for the society they happen in. Someone getting thrown out of a bar, people screaming at each other in a shop, a bust-up in a club; these are the IRL equivalents of commenters feuding or minor forum drama. They take far more energy, are less pleasant, and carry far more severe consequences for the individuals involved. Most people who will jump into social media drama would never do anything even approaching this level of confrontation offline. There are lasting consequences to engaging in this kind of thing even once, and they aren’t necessarily limited to direct response by the mods (cops, etc.); shame and social stigma are stronger motivators offline than online.

Major trouble is usually the result of a few instigators who feel empowered to flout norms or don’t care about being punished, and the trouble goes from limited to general when they are in a situation where other people are suggestible, likely to go along, and feel like they have safety in numbers; they aren’t just in a crowd, they’re in something together. Offline it’s quite rare for all those factors to line up, and it usually happens at relatively predictable times and places: two football teams with supporters that hate each other playing, an emotionally charged protest, etc.

What do you see at such events? Meat mods (security and cops), to contain the event and enforce the norm again.

Really very strongly disagree that anonymity is bad, but that is a separate discussion.

We need to actually set a meaningful, sufficiently-narrow-to-be-useful definition of social media then, because nobody on the street thinks of social media and thinks “ah yes, a platform where you talk to only 100 people”.

I mean, is IRC a social media platform? Are BBSes social media?
Is this forum?

These are media platforms, and they are social, but they are not “social media” in the common meaning of the phrase. They all also tend to need moderation after reaching a certain size, and the size isn’t all that large.
And 100 people is still too many to peacefully coexist without someone having a big stick and deploying it at least occasionally.

This is an art-house-experience model of social media; it’s more performance than actual platform. I understand it’s being brought up as an example, but I don’t think things with a very clear ulterior purpose are really worth looking at.

I think profit motive definitely makes certain choices more appealing, and a lot of those choices are likely to lead to negative behaviour once experienced at scale. But I think a lot of that is tied into the fact that certain things make mass participation far easier, and mass participation is necessary for a commercially viable social media platform. If you want to make a large, easy-to-use platform, you are going to make most of the same choices.

Again, I really think Mastodon is the canary in the coal mine here: it’s a non-profit, decentralized, all-singing, all-dancing “social good!” social media platform, and it’s as given to rancid behaviour and toxicity as anywhere else. Along similar lines, Matrix is basically Discord for the technically discerning paedophile at this point, and Lemmy would be as twee, grating and noxious as Reddit if anyone ever used it.

Profit motive exacerbates, but structure and moderation are determinate.

I’m not sure how you expect an unmoderated platform that allows users to engage and explore freely with each other to be anything but a shit-tip once it grows past a very, very small userbase.

Why do you think that’s the case? Is it simply because of the absence of physical proximity? Or do you think there’s something else pushing people towards behaving that way? What I can tell you is that my experience with online interactions definitely doesn’t show a lack of empathy. All the interactions I’ve had over the past several years have been nothing but cordial, but they also all happened inside private spaces.

All the people who connect with me do so either via email, which is private, 1-to-1 communication, or via Apple Messages, again private and 1-to-1. And I’m not sure if it’s because of the lack of spectators or what, but literally not a single person has ever behaved badly.

And I’m talking hundreds of people at this point, so it’s not an insignificant sample size. I think there must be something else at play here; it can’t just be a matter of people behaving like crazy people when online.

I don’t think the number of people is the determining factor but I do believe—or suspect—that it does play a role. Human psychology is bizarre and people do all sorts of weird shit when they’re in a group of people that behaves in a certain way, even things they’d never do when alone.

Definitely a discussion I’m happy to have, because it’s an interesting one. Just so we’re clear, I don’t think anonymity in general is bad, or even anonymity in the context of social media generally speaking. I think in the context of the specific set of problems that are affecting social media, anonymity is a net negative and a not-insignificant contributing factor to those problems.

I’d answer yes to all of those, personally. If we use the Wikipedia definition of social media, we get:

Social media are interactive technologies that facilitate the creation, sharing and aggregation of content (such as ideas, interests, and other forms of expression) amongst virtual communities and networks.

But if you want to keep this only in the context of mainstream platforms, then yeah, things are a lot different, because the scale of a project changes the dynamics in a very significant way.

This ties into my initial thought experiment nicely: what if you don’t? The Bluesky approach to moderation could totally work in the context of something that’s relatively small. Granted, it’s probably doomed to fail at large scale, and that’s fine.

Personally, I think all moderation is doomed to fail at large scale. The fact that real human beings in some poor countries are getting PTSD from moderating the cesspool that is Facebook, so that you and I can enjoy a nice social media experience, is clearly a failure imo.

And also, the fact that I have to deal with the moral judgment coming out of a country where you can’t drink in public and you can’t show a nipple is a fucking travesty. But I also know that there aren’t really good alternatives, because as soon as you place millions of people in the same room you’ll get a mess of some sort, no matter how hard you try.

Just so we’re clear: I don’t expect that. I’m of the opinion that 4 people IRL can’t really have a productive conversation. I cap all my actual interactions with people at 3 MAX, myself included, because I know from experience that you can’t have deep discussions about anything with more than 3 people in a casual, unstructured setting.

So I expect all social media to become a mess as soon as you pass a certain size, no matter what. And it’s also why I left traditional social media years ago and I have no interest in coming back.

1 Like

This is such a fascinating discussion and something I think about quite a lot both as a moderator of this platform + the Discord server and as somebody who reads and tries to follow anarchist ideals (that is, society without hierarchy).

I have experienced quite a lot of empathy online, I do not believe that the nature of being online deprives us of our empathy in any meaningful way, nor do I think it is completely absent. But I DO believe that it is much, much easier to ignore or simply not notice the consequences of your words/actions in a space where we cannot see or hear each other.

I have experienced quite a bit of shitty behavior online, both 1-on-1 and in group settings. This is mostly a result of me being queer, neurodivergent, having unconventional interests, etc. Things that also get me harassed IRL. I have many friends who have faced much, much worse online. I think with something like this, everyone’s experience varies so much, we cannot paint online interaction in broad strokes. I have also met some of my best, decade-long friends through Discord groups and MMOs!

There’s been a lot of discussion in the world about why people act like that online (you know, like how they do on harassment-based imageboards and such). I don’t think I have the full scope of context to give a concrete answer, but I do think the internet is just as much a mixed bag as the “real world”.

I think this scale is really important to consider when discussing social media in general. When I talk to most people about social media, I say I don’t really use it, because I do not have Twitter or Instagram or Bluesky or whatever. But I do, yknow? I’m here, I’m on Discord, I sure as hell use Youtube. I think the moderation responsibilities of a platform are very, very dependent on what that platform is and how it is run. This forum is moderated a bit differently than the Discord server, for example.

I think in general, with anything, online or offline, we hit a scale problem. If this forum had millions of active users and posts were being made so quickly and frequently that I couldn’t keep up, I probably would not want to do it lol. I understand why major platforms turn to bots to do their moderation for the most part, even if it seems to constantly backfire. I wonder if the internet has simply outgrown the need/capability for a ‘worldwide town square’ type of deal.

One thing I always keep in mind for as long as I have used the internet is that there is always someone on the other side of the screen. The internet has been a way for me to connect and make friends with people who I otherwise wouldn’t, and I have formed deeper connections due to the lower barrier for entry for conversation. But those very same things that caused it to be a net gain for me can isolate and destroy others.

2 Likes

Just a random question because I love random questions: if you only consume content and never engage in comments, would you still consider that a use of social media?

I’ll interject my two cents here.

I deactivated all of my Meta accounts in November of last year. I had been semi-active on Facebook between 2004 and 2012. I only posted four times to Instagram around 2014. I activated a Threads account, but never posted anything.

All three of these “social” platforms have lain dormant for years.

At no point did I not consider myself a user. In my opinion, active participation is unnecessary. The existence of an account undermines any declaration to the contrary; you’re implicitly a user just by grace of contributing to their ecosystem. Whether it’s through views or any other metrics unbeknownst to the layperson, your continued presence is a boon, and you serve their purpose as a user despite your inactivity.

2 Likes

IRC is notable for drawing the distinction between channel operator and IRC operator.

When a channel is initially created, the user who created it is granted operator status. That user may carry out the usual operator functions, including giving other users operator status, setting the channel topic, and changing the channel modes.
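To make that concrete, those operator functions map onto a handful of raw IRC commands from RFC 1459. A rough Python sketch against a hypothetical server (real networks differ in details, e.g. whether the first joiner of a fresh channel automatically gets +o):

```python
import socket

def send(sock: socket.socket, line: str) -> None:
    sock.sendall((line + "\r\n").encode())  # IRC messages are CRLF-terminated

sock = socket.create_connection(("irc.example.net", 6667))  # hypothetical server
send(sock, "NICK founder")
send(sock, "USER founder 0 * :Channel Founder")
send(sock, "JOIN #newchannel")                          # creator receives +o
send(sock, "TOPIC #newchannel :Be excellent to each other")
send(sock, "MODE #newchannel +o helper")                # op another user
send(sock, "MODE #newchannel +m")                       # moderated channel mode
send(sock, "KICK #newchannel troll :Please behave")     # operator-only action
```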

This claim-staking and community-moderation dynamic still exists on Reddit, Discord, and other services.

It might have originated on CompuServe where the subscribers who “lobbied for” the creation of a forum became its first sysops:

The real genius of the Forum system was CompuServe’s willingness to allow them to be driven by ordinary subscribers — a willingness that hearkens back in its way to the founding philosophy of the service. Recognizing that they couldn’t possibly administer such a diverse body of discussions, CompuServe’s employees didn’t even try. Instead they created a process whereby new Forums could be formed whenever enough subscribers had expressed interest in their proposed topics, and then turned over the administration to the experts, the people who knew best the topics they dealt with: the very same subscribers who had lobbied for them in the first place. Forum administrators — known as “sysops” in CompuServe parlance — were given free access, along with a cash stipend that was dependent on how active their domain was. For the biggest Forums, this could amount to a considerable amount of money. Jeff Wilkins has claimed that some sysops wound up earning up to $250,000 in the course of their CompuServe life.

Sysops enjoyed broad powers to go with their compensation. It was almost entirely they who wielded the censor’s pen, who said what was and wasn’t allowed. As their Forums grew, they were permitted to hire deputies to help them police their territory, rewarding them with gifts of free online time. By all accounts, the system worked remarkably well as an early example of the sort of community policing on which websites like Wikipedia would later come to depend. It was a self-regulating system; those few sysops who neglected their duties or abused their powers could expect their Forum’s traffic to dwindle away, until CompuServe shut the doors. Those Forums with particularly enthusiastic and active sysops, on the other hand, thrived, sometimes out of all seeming proportion to their esoteric areas of interest.

I’d like to add extra emphasis to “almost entirely” in the above. I didn’t personally experience CompuServe, but my expectation with modern services is the same. When a bot shows up in a public Discord and spams every channel, or when somebody comes in with a bad attitude, it is almost always the local volunteer moderators who take care of it.

1 Like