• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 17th, 2023

  • In short, Facebook are incentivised to increase conflict and hate, because it improves user engagement. They have also leveraged their large user base to boost numbers on Threads significantly. Threads is already a cesspit of bigotry and hate.

    Federating with them would be like connecting your house’s drinking water pipe with the sewage pipe of an industrial pig farm. It would pollute our community to the point of destruction.

    They might try to control this initially. Unfortunately, it would almost certainly be part of an embrace, extend, extinguish attempt ( https://en.m.wikipedia.org/wiki/Embrace,_extend,_and_extinguish ). They play nice until they have control of enough communities, then they drop the controls to increase profits.






  • And how would that work between instances? One advantage of tags is that they would make searching across instances a lot easier. Unfortunately, you would end up with a race to the bottom on the allowed number of tags. It would quickly become useless due to spam from instances with less interested admins. It also lumps a lot more work onto the mods and admins, for no good reason.

    Basically, your idea actively fights the Nash equilibrium, something I’ve never seen work well longer term. It’s better to change the underlying pattern and so shift the equilibrium. That makes the system a lot more self-correcting, even when people try to game it (and it will be gamed).


  • So who decides what is accurate vs inaccurate? Who does the banning?

    The main goal is to take human bias out of the loop. You can still throw a wide net with your tags; that might even make sense for a more niche topic. At the same time, a set of tags that closely matches your search should be rated higher than one that just happens to include it. A split weighting system provides soft pressure towards better behaviour, without being authoritarian about it.

    Tag spam would make this feature far less useful and would introduce an ordering bias.


  • Moderation requires someone to put in the effort to moderate. For some servers, this would make sense. We don’t have a central authority however, so the default should be more permissive. If communities want to limit tags, that’s fine. If they don’t, that’s also fine. What’s awkward is when both have to play together for searching purposes.


  • There wouldn’t be a hard-coded limit, and the tags would ultimately still be there, so alternative searches could be used. The main goal is to discourage excessive tagging, or spamming popular (and irrelevant) tags to get more visibility.

    Making the default favour concise tags over a hit buried in a wall of them seems a good way to encourage this, without heavy-handed forcing. Twitter’s walls of tags are a good example of what we (or at least I) don’t want.


  • The weighting means people won’t put every vaguely related tag on their post and so poison other searches. The mindset is that a post with a small number of tags is more likely to be what you want than a hit buried amongst dozens.

    E.g. a post focused on sci-fi books might be tagged #books #scifi. If it also includes #fantasy, then the ‘value’ of the tags gets diluted. If it also includes every other genre of story, then it’s watered down to almost nothing. If you then search for #books #scifi, the focused post will appear higher than the more general one. It’s more likely to match what you want.
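    A minimal sketch of that ranking idea, assuming a fixed per-post weight budget of 100 points (the budget, function names, and scoring are illustrative, not an existing Lemmy feature):

```python
def tag_weight(post_tags):
    """Each tag on a post shares a fixed 100-point budget,
    so every extra tag dilutes the value of the rest."""
    return 100 / len(post_tags)

def match_score(post_tags, search_tags):
    """Sum the diluted weight of each searched tag the post carries."""
    tags = set(post_tags)
    return sum(tag_weight(post_tags) for t in search_tags if t in tags)

focused = ["books", "scifi"]                                  # 50 points per tag
diluted = ["books", "scifi", "fantasy", "horror", "romance"]  # 20 points per tag
search = ["books", "scifi"]

# The focused post scores 100 (50 + 50); the diluted one only 40 (20 + 20),
# so it sorts below the focused post for the same search.
assert match_score(focused, search) > match_score(diluted, search)
```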



  • Useful, but there would be a big risk of tag spamming.

    It might be worth weighting tags. E.g. a single tag would get a 100-point weighting, but five tags would only get a 20-point weighting each. This would discourage tag spamming without compromising flexibility.

    Also, further to another comment, a global tag search should be a must-have feature. Tags could be EXTREMELY useful for reducing centralisation. If you could search tags across the whole fediverse, it would make it a lot easier to find small communities away from the big servers.
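    One shape a global tag search could take is a client-side fan-out: send the same tag query to every known instance and merge the hits by weighted score. A sketch under that assumption (`search_instance` is a hypothetical fetch function, not a real Lemmy API call):

```python
def merge_global_results(instances, search_tags, search_instance):
    """Fan one tag query out to each instance and merge the hits,
    sorted best-first by the weighted score each instance reports.

    search_instance(url, tags) is a hypothetical function that
    queries one instance and returns (score, post) pairs.
    """
    results = []
    for url in instances:
        results.extend(search_instance(url, search_tags))
    return sorted(results, key=lambda pair: pair[0], reverse=True)
```

    A real version would need per-instance timeouts and deduplication of federated copies of the same post, but the merge step itself stays this simple.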



  • When working on complex tasks, it’s easy to get sucked into it and not see the wood for the trees. One of the best solutions is to talk it through with someone. Often, as you are explaining it, you will realise that it’s not doing what you just said, but something different. You also sometimes realise that your solid logic is far less logical than it seemed, inside your own head.

    Critically, none of this actually requires the knowledge or interaction of the person you are talking to. Rather than explaining it to a colleague and wasting their time, some people use an inanimate object. A rubber duck has become a common choice: it’s small, easy to source, and can sit on top of a monitor etc, with a face to talk to. Other personified toys work just as well, as do pets, babies, or life partners.

    Basically, it’s a method of breaking a bad cycle by getting out of your own head, and so realising where you keep f*****g up.


  • The issue is that AI detection and AI training are very similar tasks. Anything that can reliably detect an AI-written article can also be used to improve the AI’s training, and so quickly becomes obsolete.

    Meanwhile, a lot of people write in a manner that “looks” like an AI wrote it. This leads to the FAR more serious problem of false positives. Missing an AI-written paper at school or university level isn’t a big deal. A false positive, however, could ruin a young person’s life. It’s the same issue the justice system faces.


  • It can be done, but it requires proper planning, forethought, and research. I could easily see a rushed, budget conversion leading to a ghetto-like environment.

    Such changes will take time. Right now, no-one is sure if WFH will stick. The last thing they want is to initiate a change, only to find it’s far less profitable than just waiting. Local government won’t push it yet, for similar reasons.

    The best thing right now would be to gather case studies and planning research into EXACTLY what is needed, both short term (1-10 years) and long (20-100 years). That can then both accelerate the process, once it gets going, as well as make it long term sustainable.


  • It’s the same game as email: an arms race between spam detection and spam-detector evasion. The goal isn’t to catch every bot, but to clear out the low-hanging fruit.

    In your case, if another server notices a large number of accounts working in lockstep, then that’s fairly obvious bot-like behaviour. If their home server also noticed the pattern and reported it (lowering the users’ trust ratings), then it won’t be dinged harshly. If it reports that all is fine, then it’s assumed the instance itself might be involved.

    If you control the instance, then you can make it lie, but this downgrades the instance’s score. If it’s someone else’s, then there is an incentive not to become a bot farm, or at least to report honestly to the rest.

    This is basically what happens with email. It’s FAR from perfect, but a lot better than nothing. I believe 99+% of all emails sent are spam. Almost all get blocked. The spammers have to work to get them through.


  • An automated trust rating will be critical for Lemmy longer term. It’s the same arms race email has to fight. There should be a linked trust system of both instances and users. The instance ‘vouches’ for its users’ trust scores. However, if other instances collectively disagree, then the instance’s own trust score is also hit. Other instances can then use this information to judge how much to accept from users on that instance.
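    That vouching-with-consequences loop can be sketched in a few lines. Everything here is an illustrative assumption (the 0–1 scores, the averaging, and the 0.1 penalty factor are made up to show the shape, not a proposed Lemmy implementation):

```python
def update_trust(instance_trust, vouched_score, peer_reports):
    """Blend an instance's vouched user score with what peer
    instances observed. Persistent disagreement also drags the
    vouching instance's own trust down.

    peer_reports: scores (0-1) other instances assigned to the
    same user based on behaviour they observed directly.
    Returns (new_instance_trust, effective_user_score).
    """
    if not peer_reports:
        return instance_trust, vouched_score
    observed = sum(peer_reports) / len(peer_reports)
    disagreement = abs(vouched_score - observed)
    # The score the rest of the network acts on leans on peers
    # whenever they disagree with the home instance.
    effective_user = (vouched_score + observed) / 2
    # An instance that vouches inaccurately slowly loses trust,
    # so there is a cost to lying for a bot farm.
    new_instance_trust = max(0.0, instance_trust - 0.1 * disagreement)
    return new_instance_trust, effective_user
```

    An honest instance that reports its own bots keeps its trust intact; one that vouches a perfect score for accounts everyone else flags takes the hit itself.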