
  • Maybe it’s just me, but the majority of programmers I’ve worked with don’t even know how to quit vim, let alone use it for programming. I wonder whether the demographic that completed the survey accurately represents all the people who use Rust, or only those most passionate about the language. It’s also possible that ~30% of Rust programmers do actually use vim (and friends) and represent a different group of programmers than the ones I’ve worked with (who use more traditional programming languages).

    Nothing against vim of course. vim is a great editor.



  • For shared, immutable lists, you can also use Arc/Rc in place of Box! I guess what’s surprising is that [T] (and str/Path/OsStr/etc) are traditionally associated with shared borrows (like &[T]), but they’re really just unsized types that can be used anywhere there’s a ?Sized bound.
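
    For example, a rough sketch of the difference (nothing here is specific to the original post):

    use std::rc::Rc;
    use std::sync::Arc;

    // Box<[T]>: uniquely owned, immutable slice
    let boxed: Box<[i32]> = vec![1, 2, 3].into_boxed_slice();

    // Rc<[T]>/Arc<[T]>: shared ownership of the same kind of immutable slice;
    // cloning bumps a refcount instead of copying the data
    let shared: Rc<[i32]> = Rc::from(vec![1, 2, 3]);
    let shared2 = Rc::clone(&shared);

    // and the same works for other unsized types like str
    let name: Arc<str> = Arc::from("hello");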

    Also, overuse of lists (as opposed to other data structures) seems to be a common problem in all programming languages among less experienced devs since they’re simple enough to use in most places. Not saying that’s what happened to the author though - these types are way less obvious than something like a hash set or queue.





  • > although it uses your biometric data, it’s still a single factor of authentication

    Speaking from my own experience, I use my phone for biometric authentication. I see that as two factors (something I have and something I am), since the biometric authentication only works on my phone.

    > I am not sure I understood you here. What do you mean by “instead of having each service do their own thing”? Each website using their own method of delivering OTPs?

    Basically, yes - having multiple places where codes may be generated. With everything in one place, you can use a single location to get OTPs instead of having them delivered via SMS or generated by a different app/service per site. It ends up being easier and more convenient for the end user (which of course increases adoption).

    I guess this has more to do with services adopting OTP generators than sending them via SMS though.

    > From the perspective of OTPs it makes much more sense to use a separate application (like Google Authenticator or Aegis Authenticator), preferably on a separate device, to generate the OTPs.

    If logging into the password manager to get the password is sufficiently secure (locked behind MFA), then I don’t see the benefit of using a separate OTP generator (aside from maybe your password manager having a data breach or something, which should be a non-issue, except it clearly isn’t, thanks to LastPass…)

    I’m starting to wonder if phones (or other auth-specific devices) should just become dedicated authentication devices and passwords should be phased out entirely, tbh. Passwords have always had issues because of their static nature: if someone learns your password without your knowledge, that method of authentication becomes worthless. The main concern would be what happens when you lose your phone, I suppose.




  • There are a disproportionately large number of people who get one pretty demo and think LLMs are the solution to everything. Even for translations, I’d be interested to see how accurate the major models are in real world scenarios. We’ve been struggling hard to find any practical usage of LLMs that doesn’t require the user to be able to verify the output themselves.


  • I don’t think there’s anything wrong with using an HTML/XML-ish format for describing a UI (although having a standardized presentation format that all “viewers/browsers” render exactly the same way would be nice). I’m just sad that websites have come to be described as UIs rather than as well-structured documents.





  • In this case, I don’t think Sink will let you selectively remove sources (although you can clear the sink if you want), but whenever you want to play a click you could clear the sink and append the clicking source to it. Alternatively, you could build a source that chains the clicking sound with something like Zero (silence) and repeats indefinitely, letting the Zero part play until you receive a new signal to play the click audio (and stopping the source entirely once you’re done clicking).
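
    For the simple fixed-tempo version of that second idea (leaving out the signaling part), something like this might work - click_sound and sink are assumed from your existing setup, and the 400ms gap is arbitrary:

    use std::time::Duration;
    use rodio::Source;

    // delay prepends silence to the source, so this is "gap + click" repeated forever
    let metronome = click_sound
        .clone()
        .delay(Duration::from_millis(400))
        .repeat_infinite();
    sink.append(metronome);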

    I think how you should approach this depends on your architecture, so I’ll give a couple approaches I would consider if I were trying to do this myself:

    1. For a blocking approach: I’d play the click sound once (using .append on the sink, for example), measure how long the iteration took using Instant, and pass whatever is left of the click interval to thread::sleep. It would look something like this (pseudo-ish code):

      use std::time::{Duration, Instant};

      let click_interval = Duration::from_millis(500); // time between clicks; pick your tempo
      for _click_idx in 0..num_clicks {
          let click_started = Instant::now();
          // you can buffer the click_sound source and clone the buffer using .buffered() if needed
          sink.append(click_sound.clone());
          // sleep for whatever part of the interval is left (saturating_sub avoids a panic if we overran)
          std::thread::sleep(click_interval.saturating_sub(click_started.elapsed()));
      }
      
    2. For a non-blocking approach where you have a separate thread managing the audio, I’d use a channel or similar as a signal for when to play the click. Your thread would then wait until that signal is received and append the click sound to the sink - you’d basically have a thread dedicated to managing the audio (see the sketch after this list). If you want a more complicated version of this as an example, here’s a project where we used rodio with tauri (like you) to queue up audio sources to be played on demand whenever the user clicks certain buttons in the UI. The general architecture is the same - just a for loop listening for events from a channel, and then using those events to add sources to our output stream (though I believe you can just use a Sink to keep things simple).
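
    A minimal sketch of that second, channel-driven approach (the AudioEvent enum, the file path, and the stream setup are placeholders for whatever your app already has):

    use std::fs::File;
    use std::io::BufReader;
    use std::sync::mpsc;
    use std::thread;
    use rodio::{Decoder, OutputStream, Sink, Source};

    enum AudioEvent {
        PlayClick,
        Shutdown, // e.g. sent when the app closes
    }

    let (tx, rx) = mpsc::channel::<AudioEvent>();

    thread::spawn(move || {
        // the OutputStream must stay alive for as long as audio should play
        let (_stream, handle) = OutputStream::try_default().unwrap();
        let sink = Sink::try_new(&handle).unwrap();
        // decode once and buffer so every click is a cheap clone
        let click_sound = Decoder::new(BufReader::new(File::open("click.wav").unwrap()))
            .unwrap()
            .buffered();
        for event in rx {
            match event {
                AudioEvent::PlayClick => sink.append(click_sound.clone()),
                AudioEvent::Shutdown => break,
            }
        }
    });

    // from the UI side (e.g. a tauri command handler), whenever the user clicks:
    tx.send(AudioEvent::PlayClick).unwrap();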


  • If you want the source to repeat indefinitely, you can try calling repeat_infinite on it. Combine that with pausable/stoppable, and use periodic_access to occasionally check whether the audio source should be paused/stopped, probably via an Arc[AtomicBool] (using square brackets because Lemmy hates angle ones).

    It could look something like this:

    use std::sync::Arc;
    use std::sync::atomic::{AtomicBool, Ordering};
    use std::time::Duration;
    use rodio::Source; // for repeat_infinite/stoppable/periodic_access

    let src = ...; // your audio source
    let stop = Arc::new(AtomicBool::default());
    let stop2 = stop.clone();
    let src = src
        .repeat_infinite()
        .stoppable()
        .periodic_access(Duration::from_millis(50), move |src| {
            // src here is the Stoppable wrapper, so we can stop it
            if stop2.load(Ordering::Relaxed) {
                src.stop();
            }
        });

    // later on...
    stop.store(true, Ordering::Relaxed);
    

    periodic_access is also how Sink controls the source when you want to pause/stop/etc. it. You could probably just use Sink directly if you want more control over the source.


  • A lot of nice QoL changes (checking for missing feature flags, for example), but the thing that stood out to me the most was impl Sync for mpsc::Sender. This has always been a pain point of the standard library’s channels in my opinion, but now that they’re using crossbeam-channel’s implementation internally, there’s no need to add it as a dependency anymore.
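
    For example (a quick sketch): sharing a &Sender across scoped threads like this only works now that Sender is Sync:

    use std::sync::mpsc;
    use std::thread;

    let (tx, rx) = mpsc::channel::<i32>();

    thread::scope(|s| {
        for i in 0..4 {
            let tx = &tx; // a shared reference, no clone - this is what requires Sender: Sync
            s.spawn(move || tx.send(i).unwrap());
        }
    });
    drop(tx); // close the channel so the receiver loop below terminates

    for value in rx {
        println!("got {value}");
    }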

    I think some people will be upset by them dropping support for older Windows versions. I can see why they wouldn’t want to keep supporting them, though: it takes extra work to maintain compatibility with those old OS versions, and the vast majority of users (by percentage) are on 10/11 now. Still, a shame.


  • > OK, sure, we probably don’t disagree then.

    We probably don’t here, but like I said I’m not really interested in discussing the political feasibility of it.

    > I am obviously NOT arguing that every resource should be public. This discussion is about AI, which was publicly funded, trained on public data, and is backed by public research. This sleight of hand to make my position sound extreme is, frankly, intellectually dishonest.

    I don’t think I ever disagreed that the models themselves should be public, and there are already many publicly available models (although it would be nice if GPT-N were among them). What I disagree with is the service being free: the service costs a company real money and resources to maintain, just like any other service. If it were free, the only entity that could reasonably run the models is the government, but at that point we might as well also have the government run public git servers, public package registries, etc. Honestly, I’m not sure what impression you expected me to get, considering the claim that a privately run service using privately paid-for resources should be free to the public.

    > There’s a shortage, but it’s not “extreme”. ChatGPT is running fine. I can use it anytime I want instantly. You’d be laughed out of the room if you told AI researchers that ChatGPT can’t scale because we’re running out of GPUs.

    Actually no - I work directly with AI researchers who regularly use LLMs, and this is the exact impression I got from them.


  • > Finally, there are positive economic externalities to public AI availability.

    There are positive economic externalities to public everything availability. We don’t live in that kind of world though; someone will always try to claim a larger share, due to human nature. That being said, I’m not really interested in arguing about the political feasibility (or lack thereof) of making every resource public.

    > With how many people are already using AI, it’s frankly mind-boggling that they’re only losing $700k a day.

    There are significant throttles in place for people who are using LLMs (at least GPT-based ones), and there’s also a cost people pay to use them. Sure, you can go use ChatGPT for free, but the APIs cost real money; they aren’t free to use. And the figure you’re seeing is the money they lost after accounting for all the money they made, too.

    > You’re also ignoring the fact that costs don’t scale proportionally with usage. Infrastructure and labor can be amortized over a greater user base. And these services will get cheaper to run per capita as time goes on and technology improves.

    I don’t disagree that the services will get cheaper or that costs don’t scale proportionally; generally speaking, you’re most likely right. What you’re missing, though, is that there’s an extreme shortage of components. Scaling in this manner only works if you actually have the means to scale, and as things stand, companies are struggling to get their hands on the GPUs needed for inference.