Human bias is a pervasive element in many online communities, and finding a platform entirely free from it can be akin to searching for the holy grail. Maybe look into self-hosting an instance and penalizing moderators who don't follow their own rules.
Regrettably, complaining tends to be a common pastime. I acknowledge your frustration with users who may seem entitled or unappreciative of the considerable effort you've dedicated to developing Lemmy. Still, shifting to a mindset that treats complaints as opportunities for improvement can be transformative. Establishing a transparent set of rules or guidelines for how you prioritize issues and feature requests would help manage expectations and foster a more collaborative relationship with the users in your community. While not every complaint is actionable, actively listening to feedback and explaining your prioritization criteria can go a long way in building trust and goodwill. Open communication and a willingness to consider diverse perspectives lead to a stronger, more user-centric product in the long run.
The philosophy of Complaint-Driven Development provides a simple, transparent way to prioritize issues based on user feedback: listen to everything your users complain about, fix the most frequently reported problems first, release, and repeat.
Following these straightforward rules allows you to address the most pressing concerns voiced by your broad user community, rather than prioritizing the vocal demands of a few individuals. It keeps development efforts focused on solving real, widespread issues in a transparent, user-driven manner.
Here's a suggestion that could help you implement this approach: consider periodically making a post like "What are your complaints about Lemmy? Developers may want your feedback." Encourage users to leave one top-level comment per complaint, so that others can reply with ideas or existing GitHub issues that might address those complaints. This will help you identify common complaints and potential solutions from your community.
Once you have a collection of complaints and suggestions, review them carefully and choose the top 3 most frequently reported issues to focus on for the next development cycle. Clearly communicate to the community which issues you and the team will be prioritizing based on this user feedback, and explain why you’ve chosen those particular issues. This transparency will help users understand your thought process and feel heard.
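The tallying step above is mechanical enough to sketch in code. This is a hypothetical illustration, not part of Lemmy: the topic tags and the hardcoded list are invented stand-ins for complaints you would actually collect from the feedback thread.

```python
from collections import Counter

# Hypothetical data: each entry is the topic tag of one top-level
# complaint comment. Real data would come from the feedback post,
# not this hardcoded list.
complaints = [
    "federation-lag", "moderation-tools", "federation-lag",
    "search", "moderation-tools", "federation-lag", "search",
    "onboarding",
]

# Count how often each issue is reported and keep the 3 most common,
# i.e. the issues to prioritize for the next development cycle.
top_three = [topic for topic, count in Counter(complaints).most_common(3)]
print(top_three)  # ['federation-lag', 'moderation-tools', 'search']
```

The point of counting frequency rather than volume of argument is exactly the one made above: the loudest individual complainer gets one vote, the same as everyone else.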
As you work on addressing those prioritized issues, keep the community updated on your progress. When the issues are resolved, make a new release and announce it to the community, acknowledging their feedback that helped shape the improvements.
Then, repeat the process: Make a new post gathering complaints and suggestions, review them, prioritize the top 3 issues, communicate your priorities, work on addressing them, release the improvements, and start the cycle again.
By continuously involving the community in this feedback loop, you foster a sense of ownership and leverage the collective wisdom of your user base in a transparent, user-driven manner.
On a basic level, the idea of some sandboxing (i.e., image and link posting restrictions, along with rate limits for new accounts and new instances) is probably a good idea.
If there were any limits for new accounts, I’d prefer if the first level was pretty easy to achieve; otherwise, this is pretty much the same as Reddit, where you need to farm karma in order to participate in the subreddits you like.
However, I do not think "super users" are a particularly good idea. I see it as preferable that instances and communities handle their own moderation with the help of user reports - and some simple degree of automation.
I don’t see anything wrong with users having privileges; what I find concerning is moderators who abuse their power. There should be an appeal process in place to address human bias and penalize moderators who misuse their authority. Removing their privileges could help mitigate issues related to potential troll moderators. Having trust levels can facilitate this process; otherwise, the burden of appeals would always fall on the admin. In my opinion, the admin should not have to moderate if they are unwilling; their role should primarily involve adjusting user trust levels to shape the platform according to their vision.
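The division of labor described above can be sketched as a small trust-level model. To be clear, this is a hypothetical design inspired by Discourse-style trust levels; none of these names or checks exist in Lemmy's codebase.

```python
from enum import IntEnum

class TrustLevel(IntEnum):
    """Hypothetical trust ladder; the admin only moves users up or down."""
    NEW = 0        # sandboxed: rate-limited, no image or link posts
    BASIC = 1      # normal posting privileges
    MEMBER = 2     # can flag content and assist moderation
    MODERATOR = 3  # privileges granted, and revocable, by the admin

def can_post_links(level: TrustLevel) -> bool:
    # Sandboxing for new accounts: links and images unlock at BASIC.
    return level >= TrustLevel.BASIC

def can_handle_appeals(level: TrustLevel) -> bool:
    # Appeals are reviewed by moderators, so the burden does not
    # fall on the admin, who only adjusts trust levels.
    return level >= TrustLevel.MODERATOR

print(can_post_links(TrustLevel.NEW))            # False
print(can_handle_appeals(TrustLevel.MODERATOR))  # True
```

Under this model, penalizing an abusive moderator is just a trust-level demotion, which matches the appeal process suggested above.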
An engaged user can already contribute to their community by joining the moderation team, and the mod view has made it significantly easier to have an overview of many smaller communities.
Even with the ability to enlarge moderation teams, Reddit relies on automod bots too frequently, and we are beginning to see that on Lemmy too. I never see that on Discourse.
I think that in a few years, using an AI for this kind of task will be much more efficient and simpler to set up. Right now, I think it would fail too often.
I very much doubt this kind of system would be implemented for Lemmy.
Yeah, an appeal process to mitigate human bias would be nice.
I don’t have any hope left for Lemmy in this regard, but hopefully, some other Fediverse projects, other than Misskey, will improve the moderation system. Reddit-style moderation is one of the biggest jokes on the Internet.
I'm surprised that only one platform in the Fediverse has copied Discourse; the rest copy Reddit instead, with the biggest joke of a moderation system on the Internet.
Lemmy was better before the Reddit exodus last year, when people started insulting others by calling them tankies and fascists. Before that, it was much more peaceful.