• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: June 14th, 2023

  • dave@feddit.uk to Programmer Humor@lemmy.ml · Off by one solitude · 2 months ago

    Function/Method names, on the other hand, should be written so as to make the most sense to the humans reading and writing the code

    Of course—that’s why we have such classics as stristr(), strpbrk(), and stripos(). Pretty obvious what the differences are there.

    But to your point, the ‘intuitive’ counterpart to ‘zeroth’ is the item with index zero. What we have is a mishmash of accurate and colloquial terms for the same thing.



  • Partially. The summary isn’t quite in line with the detail (a sketch of the option 121 mechanism follows the quote):

    Android is the only operating system that fully immunizes VPN apps from the attack because it doesn’t implement option 121. For all other OSes, there are no complete fixes. When apps run on Linux there’s a setting that minimizes the effects, but even then TunnelVision can be used to exploit a side channel that can be used to de-anonymize destination traffic and perform targeted denial-of-service attacks.
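    For context on the mechanism (my summary, not from the article): DHCP option 121 carries classless static routes (RFC 3442), so a rogue DHCP server on the local network can push routes that steer selected traffic to its own gateway instead of through the VPN interface. A minimal Python sketch of how such a route is encoded (hypothetical helper, example addresses), purely illustrative:

        import ipaddress

        def encode_option_121(routes):
            # Encode (destination CIDR, gateway) pairs as the payload of DHCP
            # option 121, the classless static route option from RFC 3442.
            payload = bytearray()
            for dest, gateway in routes:
                net = ipaddress.ip_network(dest)
                payload.append(net.prefixlen)
                # Only the significant octets of the destination are sent.
                payload += net.network_address.packed[:(net.prefixlen + 7) // 8]
                payload += ipaddress.ip_address(gateway).packed
            return bytes(payload)

        # A rogue server pushing this (example) route pulls traffic for
        # 198.51.100.0/24 out of the tunnel and through its own gateway.
        print(encode_option_121([("198.51.100.0/24", "192.168.1.254")]).hex())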



  • LLMs have definitely been overpromised and/or misrepresented in mainstream media, but even in the last few months their utility has been increasing. I’m a big advocate of finding ways to use them to enhance people (a thinking partner, not a replacement for thinking). They are most certainly a tool, and you need to know their limitations and how to use them.

    In my experience working with naive end users, they anthropomorphise the models based on how they have been reported, and that’s definitely not helpful.

    As the models get more and more capable (and I’m pretty happy to make that prediction), will they reach a point where their output is indistinguishable from that of a real person? That will give us some challenges. But the interesting thing for me is that when that happens, and the AI can write that report you were paying someone to write, what was the point of the report? You could argue such jobs were some kind of terrible UBI, and we’ll end up with just the pointless output without the marginal benefit of someone’s livelihood. That needs a bigger rethink.



  • Ok, I’m not going to go point by point, as this is getting too long. All I’d say is remember where the model for ML came from (McCulloch & Pitts), and that this is the worst AI will ever be.

    If this is truly a jump across S-curves in utility, it’s bound to be slightly worse than other methods to begin with. Many of the arguments against the current approach sound like the owners of a hot air balloon business arguing with the Wright brothers.




  • I totally agree that LLMs don’t currently have a ‘concept’ of what they’re being asked. But it’s also pretty easy to ask questions of young kids that go beyond what they have a mental map of, and get nonsense back.

    Here’s what ChatGPT has to say in response to your question:

    An orange is a citrus fruit, typically sweet and juicy, while a potato is a starchy vegetable. They have different flavors, textures, and nutritional profiles. Oranges are typically eaten raw, while potatoes are commonly cooked before consumption, such as boiled, baked, or fried.

    Me: But what about their behaviour?

    I apologize for the confusion in my previous response. Fruits and vegetables do not exhibit behavior in the same way animals do. Behavior is a characteristic of living organisms that involves actions and responses to stimuli. Fruits and vegetables are plant-based and do not display behavior as animals do. They follow natural processes of growth, ripening, and decay but do not behave in the same sense as living creatures.

    So, a smarter parrot? How far until it’s as good as any young kid? For the record, I’ve worked in this field for a long time and am far from claiming any kind of strong AI is even close. The tests the OP did are fragile by design, so their results aren’t surprising.

    And you still didn’t say anything about not using the data ‘well’. What would you like to see them doing?



  • The big pre-training run is pretty much fixed. The fine-tuning is continuously being tweaked and, as shown, can have dramatic effects on the results.

    The model itself just does what it does. It is, in effect, an ‘internet completer’. But if you don’t want it to just happily complete what it found on the internet (homophobia, racism, and all), you have to put extra layers in to avoid that. And those layers are somewhat hand-crafted, sometimes conflicting, and therefore unlikely to give everyone what they consider to be excellent results.
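    A toy sketch of that layering problem (my own framing, hypothetical code, not how any vendor actually implements it): each hand-written rule patches one complaint, and later rules can quietly undo earlier ones.

        # Stand-in for the base "internet completer": it just completes the prompt.
        def raw_complete(prompt: str) -> str:
            return f"(completion of: {prompt!r})"

        REFUSAL = "Sorry, I can't help with that."

        def guarded_complete(prompt: str) -> str:
            text = raw_complete(prompt)
            lowered = prompt.lower()
            if "pick a lock" in lowered:       # rule added after one kind of complaint
                text = REFUSAL
            if "locksmith" in lowered:         # rule added after another complaint...
                text = raw_complete(prompt)    # ...which overrides the first refusal
            return text

        print(guarded_complete("As a locksmith, explain how to pick a lock"))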