𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍

       🅸 🅰🅼 🆃🅷🅴 🅻🅰🆆. 
 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 𝖋𝖊𝖆𝖙𝖍𝖊𝖗𝖘𝖙𝖔𝖓𝖊𝖍𝖆𝖚𝖌𝖍 
  • 0 Posts
  • 33 Comments
Joined 2 years ago
Cake day: August 26th, 2022






  • We’re saying the same thing. AI can create art. My point was that we used to claim that art was a domain that was unassailable by machines, and this obviously is not true. So now, humans - or the particular human to whom I was replying - had a new goalpost: adaptability.

    We’ll keep coming up with new goalposts where “humans have an edge” that will keep us relevant and ascendant over machines, and irreplaceable. I believe we’ll run out of goalposts faster than many people would like.

    You know, there is one small other hope I have: that, despite how we’ve raised them, our children will be better than us, and will stop the cycle of wealth concentration. It’s unlikely, but it’s the only chance I see.



  • And, yet, I’ve been to an exhibit at the Philadelphia Museum of Fine Art that consisted of an installation that included a toilet, among other similarly inspired works of great art.

    On a less absurd note, I don’t have much admiration for Pollock, either, but people pay absurd amounts of money for his stuff, too.

    An art history class I once took posed the question: if you find a clearing in a wood with a really interesting pile of rocks that looks suspiciously man-made, but you don’t know if a person put it together or if it was just a random act of nature… is it art? Say you’re convinced a person created it and so you call it art, but then discover it was an accident of nature: does it stop being art?

    I fail to see any great difference. AI-created art is artificial, created with the intention of producing art; is it not art only because it wasn’t drawn by a human?


  • “This is not a foregone conclusion.”

    Sure, I agree. There’s many a slip twixt the cup and the lip. However, I’ve seen no evidence that it won’t happen, or that humans hold any inherent advantage over AI (as nascent as it may be, in the rude forms of LLMs and deep learning they’re currently in).

    If you want something to reflect upon, your statement about how humans have an advantage in adaptability sounds exactly like the previous generation of grasping at an inherent human superiority that would be our salvation: creativity. It wasn’t too long ago that people claimed that machines would never be able to compose a sonnet, or paint a “Starry Night,” and yet creativity has been one of the first walls to fall. And anyone claiming that ML only copies and doesn’t produce anything original has obviously never studied the history of fine art.

    Since no one would now claim that machines will never surpass humans in art, the goals have shifted to adaptability? This is an even easier hurdle. Computer hardware is evolving at speeds enormously faster than human hardware. With the exception of the few brief years at the start of our lives, computer software is more easily modified, updated, and improved than our poor connective neural networks. It isn’t even a competition: computers are vastly better equipped to adapt faster than we are. As soon as adaptability becomes a priority of focus, they’ll easily exceed us.

    I do agree, there are a lot of ways this future could not come to pass. Personally, I think it’s most likely we’ll extinct ourselves - or, at least, the society able to continue creating computers. However, we may hit hardware limits. Quantum computing could stall out. Or, we may find that the way we create AI cripples it the same way we are crippled, with built-in biases, inefficiencies in thinking, or resource demands simply too high for complexity much beyond what two humans can create with far less effort and very little motivation.


  • The sad thing is that no amount of mocking the current state of ML today will prevent it from taking all of our jobs tomorrow. Yes, there will be a phase where programmers, like myself, who refuse to use LLMs as a tool to produce work faster will be pushed out by those who will work with LLMs. However, I console myself with the belief that this phase won’t last even a full generation: even those collaborative devs will find themselves made redundant, and we’ll reach the same end without me having to eliminate the one enjoyable part of my job. I do not want to be reduced to being only a debugger for something else’s code.

    Thing is, at the point AI becomes self-improving, the last bastion of human-led development will fall.

    I guess mocking and laughing now is about all we can do.


  • You don’t get logged in to other accounts. Just follow people at their address, like you’d send an email. The server does the rest.

    If your question is about finding people to follow, that’s another matter. Folks on other instances won’t show up in your searches unless someone on your instance already follows them. For popular people, that’s usually no problem. For others, you might get their address from their web page. In any case, once you have their address, you just… follow them. No matter where they are, follow them from your instance and it just works. You don’t have to “log in” anywhere else; that’s the “federated” part of the fediverse. (There’s a rough sketch of what that address lookup actually does at the end of this comment.)

    What’s most fantastic about it is that you can often follow accounts on entirely different platforms. How well this works depends on how well each platform supports the ActivityPub (AP) protocol and on how compatible their underlying data models are. But you can easily follow PixelFed accounts from a Mastodon account, and it works pretty well. It’s as if you could follow Instagram accounts from your Twitter account; that’s the killer feature of the Fediverse, IMO. Discovery is still clunky, and how these things interoperate in “World” can be kludgy. But the possibilities are really very revolutionary.
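    For the technically curious, here’s a minimal sketch of what that address lookup does under the hood: a WebFinger query, which is how a server turns a handle into an ActivityPub actor URL. The handle alice@example.social is made up, and real servers return more fields than shown here.

    ```python
    # Sketch: resolve a fediverse address ("user@host") to an ActivityPub actor URL
    # via WebFinger. The handle used below is made up for illustration.
    import json
    import urllib.parse
    import urllib.request

    def resolve_handle(handle):
        """Return the actor URL for a handle like 'alice@example.social'."""
        user, host = handle.lstrip("@").split("@", 1)
        resource = urllib.parse.quote(f"acct:{user}@{host}")
        url = f"https://{host}/.well-known/webfinger?resource={resource}"
        with urllib.request.urlopen(url, timeout=10) as resp:
            doc = json.load(resp)
        # The "self" link with the ActivityPub media type points at the actor object.
        for link in doc.get("links", []):
            if link.get("rel") == "self" and "activity+json" in link.get("type", ""):
                return link["href"]
        raise ValueError(f"no ActivityPub actor found for {handle}")

    # print(resolve_handle("alice@example.social"))
    ```

    Once your instance has that actor URL, it fetches the actor document and delivers a Follow activity to the actor’s inbox; that’s really all the “it just works” part is.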


  • Really? I just spun up a gotosocial instance on a VPS and was up and running in a dozen minutes. Failing that, I’d have just joined mastodon.social. Why was it a hard decision for you? As a tech person, what about “federated” was confusing? I have a second account on a spoken-language-specific server, for kicks; I set both of these up within an hour of each other. I don’t understand how it could be considered a hard choice.

    Now, the finding-people part I could understand, but since I was not on Twitter to begin with, I had nobody I cared about following. I can understand how that would be challenging, although it has nothing to do with your home server selection.





  • Go is like snakes: you’re hatched from an egg and pretty much effective from the get-go. The older you get, the bigger prey you can eat, but otherwise things don’t change much from when you were hatched. Your species can thrive in almost any environment, you’re effective, and you have all the tools you need straight out of the egg.

    Rust is like humans. There’s a huge incubation period, and you’re mostly helpless when you’re born, but the older you get, the more effective you become with the tools nature graced you with. And you, like Thanos, are inevitable, even if it does mean the death of billions.

    Python is like a beaver. Everyone has an opinion about you: some think you’re cute, some think you’re weird. You’re perfectly suited to your environment, but things get awkward outside of your natural habitat - you can function, but not as well as when you’re in your comfort zone. And when people encounter you where they’re not expecting you, they can be unpleasantly surprised, and you can cause them trouble.

    C++ is like a platypus. You resemble some other more simple, some might say sane, animal, but you developed into a sort of Frankenstein’s-monster creature made from a jumble of parts, with a stinger that, when it kills someone, comes as a shock. Every part of you serves some purpose, even if it seems tacked-on and out of place.

    Then there’s Node. You are everywhere. You are legion. You fill up ecosystems. People try to defend you, claiming that you serve some purpose in the food chain, but there’s scant evidence. Attempts to eradicate you fail. You often spread deadly disease. You breed rapidly, persistently, relentlessly. You are widely hated, and yet everywhere.

    Edit: typo


  • So, to expound on this a little…

    There’s a password manager I use, but the CLI tooling sucks. Thankfully, there’s a third-party CLI tool in a language I know fairly well, and because I’m a little paranoid, I reviewed the code. Then I reviewed the code of the libraries it imported. And then the code of the libraries of the libraries it imported. Thankfully, that was as far as it went, and since I was mainly looking for any code that made network calls… it was manageable, just barely.

    And I made some improvements and submitted PRs, only some of which were accepted, but I used them all, so I maintained a fork. Which was lucky, because a few months later upstream changed their parseargs library to a framework, and the dependencies exploded. Six layers deep, and dozens of new dependencies - utterly unauditable without massive effort (a rough sketch of the kind of transitive audit I mean is at the end of this comment). I caught it only because of the rebase from upstream. I abandoned the rebase and now maintain a hard fork, of which I’m the only user AFAIK.

    The moral of the story is that introducing dependencies under the guise of “reuse” is a poisoned fruit, a Trojan Horse. It sounds good, but isn’t worth it in the long run. The Go team got it right with their proverb: a little copying is better than a little dependency.
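    To illustrate that kind of audit, here’s a rough sketch that walks the transitive dependencies of an installed Python package. The package name some-cli-tool is a placeholder (not the actual tool from the story), and the requirement-string parsing is deliberately crude.

    ```python
    # Sketch: print the transitive dependency tree of an installed Python package,
    # to show how much audit surface one "convenient" dependency can drag in.
    import re
    from importlib import metadata

    def dependency_tree(package, seen=None, depth=0):
        """Recursively print a package and everything it pulls in."""
        seen = set() if seen is None else seen
        name = package.lower()
        if name in seen:
            return
        seen.add(name)
        print("  " * depth + package)
        try:
            requires = metadata.requires(package) or []
        except metadata.PackageNotFoundError:
            print("  " * (depth + 1) + "(not installed)")
            return
        for req in requires:
            if "extra ==" in req:
                continue  # skip optional extras
            # Crude parse: take the leading distribution name from the requirement string.
            match = re.match(r"[A-Za-z0-9_.\-]+", req)
            if match:
                dependency_tree(match.group(0), seen, depth + 1)

    dependency_tree("some-cli-tool")  # placeholder name, substitute the tool you're auditing
    ```

    Run something like this before and after an upstream change like the one above, and the difference in output length tells you most of what you need to know.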



  • What OP said still stands: if only one of your users follows a high-traffic, heavy-content /c/, then the server is caching all of that content for one person.

    E.g., there’s this great bot on Mastodon that posts random fractals, and the highest-voted ones “breed” to create a new generation of child fractals. The bot posts a static image and an animated movie of each new child every 4 hours. The images are about 5 MB each; the movies are between 20 and 40 MB each. That works out to roughly 210 MB/day, or about 1.4 GB per week (the arithmetic is sketched at the end of this comment). That’s a lot of data. You might, as an admin offering a free service, not want to have to pay for that much storage just because one or two users are subscribed to /c/flamereactor (“FlameReactor” is the name, so you can find this mind-blowingly awesome bot). There are also bandwidth considerations, both on the pull and when users request the content.

    I like the idea, though, and will suggest a tweak, tried and true from Usenet days: offer the ability to unblock only to paying users. It’d give admins control, plus money to offset storage costs. Maybe provide three options to admins: full defederation; auto-block with any user able to unblock, for odious but low-impact sites; and auto-block with unblock available only to users in some group - close friends, paying users, whatever.

    Lemmy could also reduce such content to links back to the source, but that’s just punting the bandwidth costs onto someone else, and I wouldn’t be surprised if this is frowned upon within The Federation (although it’s common practice with Reddit and X (Twitter) content).
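    For what it’s worth, here’s the arithmetic behind those numbers as a quick back-of-the-envelope sketch; the sizes are the rough figures from my example above, not measurements.

    ```python
    # Back-of-the-envelope storage cost of federating one media-heavy bot,
    # using the rough per-post sizes quoted above (not measured values).
    image_mb = 5                # static image per post
    movie_mb = (20 + 40) / 2    # animated movie per post, midpoint of the 20-40 MB range
    posts_per_day = 24 / 4      # one post every 4 hours

    daily_mb = posts_per_day * (image_mb + movie_mb)
    weekly_gb = daily_mb * 7 / 1024   # treating 1 GB as 1024 MB

    print(f"~{daily_mb:.0f} MB/day, ~{weekly_gb:.1f} GB/week")  # ~210 MB/day, ~1.4 GB/week
    ```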