archomrade [he/him]

  • 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: June 20th, 2023

  • the more popular a post gets, the more likely at least one person notices it’s an AI-laundered repost. As for false positives, the examples in the article are really obviously AI-adjusted copies of the original images; everything is the same except the small details, there’s no mistaking that.

    I just don’t think it bodes well for Facebook if a popular post or account is discovered to be fake, AI-generated drivel. And I don’t think it will remain obvious once active countermeasures are put into place. It really isn’t very hard to generate something that is mostly “original” with these tools with a little effort, and I frankly don’t think we’ve reached the top of the S-curve with these models yet. The authors of this article make the same point: outside of the slim chance that personally affected individuals recognize their own adapted content, these hoax accounts are unlikely to be caught before they reach viral popularity, especially as the models get better.

    Relying on AI content being ‘obvious’ is not a long-term solution to the problem. You have to assume it’ll only get more challenging to identify.

    I just don’t think there’s any replacement for shrinking social media circles and abandoning the ‘viral’ nature of online platforms. But I don’t even think it’ll take a concerted effort; I think people will naturally grow distrustful of large accounts and popular posts and fall back on what and who they are familiar with.


  • Couple problems:

    • the availability of the tools makes the potential scale of the problem pretty drastic. It may take teams of people to track this stuff down, and there’s no guarantee that you’ll catch all of it or that you won’t have false positives (which would really piss people off)

    • in a culture that seems obsessed with ‘free speech absolutism’, I imagine the Facebook execs would need to have a solid rationale to ban ‘AI generated content’, especially given how hard it would be to enforce.

    That said, Facebook does need to tamp this down, because engagement isn’t engagement when it’s taken over by AI, because AI isn’t compelled to buy shit from advertising, and it can create enough noise to make ad targeting less useful for real people.

    I personally think people will need to adapt to smaller, more familiar networks that they can trust, rather than trying to play whack-a-mole with AI content that continues to get better.

    We’re overdue for some degrowth, especially when it comes to social media.



  • There is so much work out there for free, with no copyright

    There’s actually a lot less than you’d think (since copyright lasts so long), and even less now that online, digitized sources are being locked down and charged for by domain owners. But even if such work were abundant, it would likely not satisfy the true concern here. If there were enough data to produce an LLM of similar quality without using copyrighted data, it would still threaten the livelihoods of those writers. What’s to stop a user from providing a sample of Stephen King’s writing to the LLM and having it produce derivative work, even though the model was never trained on copyrighted data? If the user had paid for that sample, are they allowed to use the LLM in the same way? If they aren’t, who is really at fault: the user or the owner of the LLM?

    The law can’t address the complaints of these writers, because interpreting copyright to that standard is simply too restrictive and sets an impossible bar. The best way to address the complaint is to reform copyright law (or regulate LLMs through some other mechanism). Frankly, I do not buy that LLMs are a competing product to the copyrighted works.

    The biggest cost in training is most likely the hardware

    That’s right for large models like the ones owned by OpenAI and Google, but given the amount of data needed to effectively train and fine-tune these models, if that data suddenly became scarce and expensive it could easily overtake the hardware cost. To say nothing of small consumer models that run on consumer hardware.

    capitalists just stealing whatever the fuck they want “move fast and break things”

    I understand this sentiment, but keep in mind that copyright ownership is just another form of capital.


  • Copyright is already just a band-aid for what is really an issue of resource allocation.

    If writers and artists weren’t at risk of losing their means of living, we wouldn’t need to concern ourselves with the threat of an advanced tool supplanting them. Never mind how the tool is created; it is clearly very valuable (otherwise it would not represent such a large threat to writers) and should be made as broadly available (and jointly owned and controlled) as possible. By expanding copyright like this, all we’re doing is gatekeeping the creation of AI models to the largest tech companies, and making them prohibitively expensive to train for smaller applications.

    If LLMs are truly the start of a “fourth industrial revolution,” as some have claimed, then we need to consider the possibility that our economic arrangement is ill-suited for the kind of productivity AI is said to bring. Private ownership (over creative works, over AI models, and over data) is getting in the way of what could be a beautiful technological advancement that benefits everyone.

    Instead, we’re left squabbling over who gets to own what and how.


  • Huh? Who’s denying the reality of the war? It seems like it’s you who has lost sight of what war really means; you should know the cost of extending the conflict indefinitely. Russia will not leave Ukraine without either a treaty or a defeat; one of those options is realistic, and the other will come at the cost of possibly millions of lives. If Ukraine pushes Russia out completely and is unable to secure a treaty, it must be ready to defend the retaken ground for the foreseeable future. You’ve abandoned diplomacy because you are (rightfully) outraged by Russia’s actions, but that outrage doesn’t change the calculus of the war.

    Honestly, it’s pointless arguing with you, because you’ve detached yourself from reason and nothing but blood will satisfy you. You are no better than the war hawks in DC, sending Ukrainians to the meat grinder in order to keep the gears of war turning and the defense contractors employed.

    Everyone who cares about the loss of lives should be looking toward a diplomatic end to the war. A desperate Russia is more likely to initiate direct conflict with NATO and the west. We must end the war now, before it escalates further.

    Anyone who says otherwise is dreaming.





  • I’m personally enraged and disgusted by Russia’s invasion and brutalizing of the Ukrainian people, and I’d be happy if this war ended with the death of Putin and his allies.

    But I’m not naive enough to believe that’s a likely outcome. I think it’s far more likely that this war will impoverish Ukraine for the next half-century and create a generation of orphans and refugees who will never know the country that existed before this war.

    I think it’s ethically abhorrent to cheer on its continuation, and even more, I think it’s childish and absurd to think continued US involvement would lead to anything beneficial for the Ukrainian people.

    And I think the people who are blinded by their rage are the least qualified to cast judgement on what legitimate leftist geopolitical policy should be. The same people who play-act outrage at leftists being too openly violent seem giddy to see Ukraine tossed near-limitless weapons to add to the already half a million dead, in a conflict almost certainly precipitated by US involvement in the first place and unlikely to end at all well for Ukraine. You’re advocating the sacrifice of Ukraine for the chance to destroy Russia, and I find that more heartless and disgusting than anything anyone on Hexbear has said so far.







  • Targeted ads are designed to make you feel inadequate or incomplete. Even if an ad doesn’t convince you to buy the product advertised, it can still shift your expectations and worldview just by normalizing a certain type of consumption (or attitude, or media, etc.).

    Just because you don’t spend money doesn’t mean ads aren’t subtly manipulating your expectations.

    It is a trillion dollar a year industry for a reason.