Linux gamer, retired aviator, profanity enthusiast

  • 1 Post
  • 55 Comments
Joined 1 year ago
Cake day: June 20th, 2023

  • So this has bothered me since I was a teenager.

    In Empire Strikes Back, Yoda talked like this: “Put the cart before the horse, I have.” And he mostly did it while he was pretending to be a dingus early on to test Luke’s patience. Some actual movie quotes: “I cannot teach him. The boy has no patience.” “No. Do, or do not. There is no try.” “Judge me by my size, do you?”

    In the prequel trilogy, it’s like Lucas bought into the meme that Yoda talks funny, so all of a sudden Yoda talks like this: “Before the horse, the cart, I have put.” “Around the survivors, a perimeter, create!”

    Anyway.







  • If you go buy one of those laser engravers off of eBay, for some reason their data-in ports are USB-A, and they come with USB-A-to-A cables. My understanding is you can either plug the machine into a PC and run it kind of like a printer, where you click Print and it jumps to life, or plug in a USB key with tool path profiles on it to use it standalone. Why they didn’t put a USB-B port on it for device mode and a USB-A port for host mode is beyond me; I don’t live in Shenzhen.







  • As a small-time backyard gardener, I can say from experience that four plants made more cherry tomatoes than I could reasonably eat. I was giving ziplock bags of cherry tomatoes away to people at work for a couple of months. They probably did produce a year’s worth of cherry tomatoes, but cherry tomatoes don’t refrigerate or freeze particularly well, and they’re not a great choice for making tomato sauce because of their liquid-to-pulp-to-skin ratio.

    Similarly, I’ve found that I can grow a year’s supply of red pepper flakes with a whopping two cayenne plants. At the rate I consume red pepper flakes, I’m about out by the time this year’s peppers start ripening.

    I’m able, in my tiny little garden, to grow more of any single kind of food than I can reasonably eat. I cannot grow enough to sustain my entire diet, though; I’d need more land than I own just to grow grain.


  • You know, I think I agree with the spirit of that assertion but not the letter of that assertion.

    There are people who are kind of at their limit just knowing that on their phone there’s a Facebook app, but on a computer you have to open the browser and go to the website. These folks will hear dial tones and TV static in their heads if you say “secure socket layer” to them. These folks have probably also sat through NordVPN ads and heard words like “secure” and “encrypted” used together, and will probably make understandable mistakes like “How’d someone steal bitcoins? I thought it was encrypted?”


  • Yeah, I’ve seen people riff on xkcd comics before, but they usually do a bad job of matching the handwriting/font (I don’t know if Randall hand-letters these or types them in a handwritey font). It’s often a deliberately bad job, because signaling that they’re changing the original is part of the message/artistic expression. Like when a word is covered with a black bar with white letters in a different font, an obvious revision: it’s like hearing a different voice interrupt.


  • There is a post circulating around the Lemmyhood where someone asks an LLM to solve the “there’s a goat, a wolf, and a cabbage that need to cross a river” problem, and it returns grammatically correct but logically impossible nonsense. I think this is instructive as to how LLMs work and how useless they really are.

    Presented with a logic problem, it doesn’t attempt to solve anything or apply any logic. What it does is search through the sum total of all human communication, find dozens if not hundreds of cases where this or a similar problem has been asked, and average the answers. Because those answers might be phrased in different orders or with different sentence structures, and because some people published wrong or joke answers it has no means of detecting, everything gets averaged in with equal weight, and so the answer it puts out begins with “Take the wolf and the goat, leave the boat behind. Take the boat back.” It has a fascinating ability to output seemingly relevant, grammatically impeccable, worthless noise. Just like everything I say.
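    If you wanted to caricature that process in code, it looks something like the toy sketch below: an equal-weight “poll the audience” over whatever answers got scraped, with nothing in the pipeline able to tell a joke answer from a real one. (The answer strings and their counts are invented for illustration; this is obviously not how any real model is implemented.)

    ```python
    from collections import Counter

    # Invented stand-in "answers" scraped from the internet, including a popular
    # joke answer that nothing downstream can identify as a joke.
    scraped_answers = [
        "Take the goat across first, then come back alone.",
        "Take the goat across first, then come back alone.",
        "Take the wolf and the goat, leave the boat behind. Take the boat back.",
        "Take the wolf and the goat, leave the boat behind. Take the boat back.",
        "Take the wolf and the goat, leave the boat behind. Take the boat back.",
        "Put the cabbage in the wolf.",
    ]

    # Equal-weight averaging: every answer counts the same, right or wrong, so the
    # most-repeated phrasing wins even if it is physically impossible.
    answer, votes = Counter(scraped_answers).most_common(1)[0]
    print(f"{answer}  ({votes}/{len(scraped_answers)} votes)")
    ```

    Real models do statistical next-token prediction rather than literal vote counting, but the point stands: nothing in that process checks the answer against the actual constraints of the puzzle.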

    The only compelling use case I’ve seen for these things is writing frameworks for fictional stories. There was an episode of the WAN Show, back when LMG still existed, where Linus gave ChatGPT a prompt to create a modern take on the premise of the movie Liar Liar, and it came up with an actually compelling outline; I’d go see the movie made out of that outline. Because it’s fictional, it doesn’t have to conform to reality.

    I doubt it could write an entire acceptable movie script, though. It would have gaping plot holes and no theme or cohesive narrative structure, but every individual line of dialog would make grammatical sense, and some conversations might even seem coherent.

    As a research or information-gathering tool, they’re worse than useless, because they have no way of detecting whether information is up to date or obsolete, serious or farcical, correct or incorrect; they just average it all together, basically on the same theory as the Poll the Audience lifeline on Who Wants to Be a Millionaire: most of the crowd is almost always right. Except what happens with this approach is that they’ll cite a completely fake, made-up paper and attribute it to a genuinely real scientist who works in the relevant field, allegedly published in a real, reputable scientific journal. It looks right; it passes the sniff test. It’s also completely useless.

    And that’s when they’re not throwing weird emotional tantrums.