With easily accessible technology, people don’t retain the skills that the technology supplements.
Isn’t this the point of technology?
Sorry about that.
We don’t even know how these models arrive at the output they produce, and it takes lengthy research just to find out how, say, an LLM picks the next word in an arbitrarily chosen sentence fragment. And that’s for the simpler models, like GPT-2!
That’s pretty crazy when you think about it.
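To be clear about which part is understood and which isn’t: the final sampling step is simple and well known, and can be sketched in a few lines. What takes all that research is explaining *why* the model assigns the scores it does. A toy sketch of just the sampling step, with made-up words and scores (not a real model):

```python
import math
import random

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores the model might assign to candidate next words
# after "The cat sat on the ..." — these numbers are invented.
vocab = ["mat", "floor", "roof", "banana"]
logits = [3.0, 2.0, 0.5, -1.0]

probs = softmax(logits)
next_word = random.choices(vocab, weights=probs, k=1)[0]
print(list(zip(vocab, [round(p, 3) for p in probs])), "->", next_word)
```

The opaque part is everything upstream of `logits`: billions of learned weights produce those scores, and tracing why is exactly the interpretability research mentioned above.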
So, I don’t think it’s fair to suggest they’re just “a new type of app”. I’m not sure what “revolutionary” really means but the technology behind the generative AI is certainly going to be applied elsewhere.
It’s anecdotal, but I have found that the people who are “skeptical” (to use your word) about generative AI often turn out to be financially dependent on something that generative AI can do.
That is to say, they’re worried it will replace them at their job, and so they very much want it to fail.
What does this mean? What rules are you referencing?
The whole point of ActivityPub is so this kind of thing can happen, isn’t it?
I think I see the problem: 99% of the site wasn’t dark. That reddark site was showing a hand-curated list of subs that had announced they were going dark, and tracking how many of those actually did. The exact numbers are impossible to pin down, but reddit claims it has “100k+” active communities, so conservatively speaking, less than 10% of reddit actually went dark.
Of course, all subs are not created equal, so just comparing sub numbers doesn’t tell the whole story, but even anecdotally, my sub list was mostly intact during the blackout.
I have only a weak, high-level grasp of how LLMs work, but what you say in this comment doesn’t seem correct. No one is really sure why LLMs sometimes make things up, and a corollary is that no one knows how difficult (up to impossible) it might be to fix.