• 0 Posts
• 27 Comments
Joined 1 year ago · Cake day: July 31st, 2023

  • Recursion makes it cheaper to run in the dev’s mind, but more expensive to run on the computer.

    Maybe for a Haskell programmer, divide-and-conquer algorithms, or walking trees. But for everything else, I’m skeptical of it being easier to understand than a stack data structure and a loop.
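    To make the comparison concrete, here's a sketch of the same tree walk done both ways. All names here are illustrative (TypeScript chosen just for the example), not from the thread:

```typescript
interface TreeNode {
  value: number;
  left?: TreeNode;
  right?: TreeNode;
}

// Recursive: cheap to "run" in the dev's mind, but each call consumes
// machine stack and can overflow on a deep enough tree.
function sumRecursive(node: TreeNode | undefined): number {
  if (!node) return 0;
  return node.value + sumRecursive(node.left) + sumRecursive(node.right);
}

// Iterative: an explicit stack data structure and a loop. Slightly more
// ceremony, but the memory use is visible and lives on the heap.
function sumIterative(root: TreeNode | undefined): number {
  let total = 0;
  const stack: TreeNode[] = root ? [root] : [];
  while (stack.length > 0) {
    const node = stack.pop()!;
    total += node.value;
    if (node.left) stack.push(node.left);
    if (node.right) stack.push(node.right);
  }
  return total;
}

const tree: TreeNode = {
  value: 1,
  left: { value: 2 },
  right: { value: 3, left: { value: 4 } },
};
// Both walks agree: 1 + 2 + 3 + 4 = 10
```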


  • pivot_root@lemmy.world to Programmer Humor@lemmy.ml ("Stop") · edited · 3 months ago

    In a single one-off program, or something that’s already fast enough to not take more than a few seconds—yeah, the time is better spent elsewhere.

    I did mention a compiler specifically, though. Compilers are CPU-bottlenecked, with a huge number of people or CI build agents waiting on them, which makes them a good candidate for squeezing extra performance out in places where it doesn’t impact maintainability. 0.02% here, 0.15% there, etc etc, and even a 1% total improvement is still a couple extra seconds of not sitting around and waiting per Jenkins build.
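    A back-of-the-envelope sketch of how those tiny relative speedups compound (all numbers here are illustrative, not measurements from any real compiler or Jenkins setup):

```typescript
// Each entry is a fractional runtime reduction: 0.02%, 0.15%, 0.3%, 0.5%.
const improvements = [0.0002, 0.0015, 0.003, 0.005];

// Relative speedups compound multiplicatively on the remaining runtime.
const remainingFraction = improvements.reduce((r, f) => r * (1 - f), 1);

const buildSeconds = 300; // hypothetical 5-minute compile step
const savedSeconds = buildSeconds * (1 - remainingFraction);
// savedSeconds is roughly 2.9 s per build; multiply by builds per day
// across a CI fleet and the tiny wins stop looking tiny.
```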

    Also keep in mind that adding features or making large changes to a compiler is likely bottlenecked by bureaucracy and committee, so there’s not much else to do.


  • Not necessarily. It depends on what you’re optimizing, the impact of the optimizations, the code complexity tradeoffs, and what your goal is.

    Optimizing many tiny pieces of a compiler by 0.02% each? It adds up.

    Optimizing a function called in an O(n²) algorithm by 0.02%? That will be a lot more beneficial than optimizing a function called only once.

    Optimizing some high-level function by dropping into hand-written assembly? No. Just no.
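    The middle case comes down to call counts. A toy sketch (names are mine, purely for illustration):

```typescript
// Count how often a helper runs inside a quadratic algorithm.
function callsInQuadraticAlgorithm(n: number): number {
  let calls = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      calls++; // the hot helper would run here, n * n times
    }
  }
  return calls;
}

const n = 1000;
const hotCalls = callsInQuadraticAlgorithm(n); // 1,000,000 invocations
const coldCalls = 1; // e.g. a setup function that runs once

// Shaving a hypothetical 10 ns per call saves ~10 ms in the hot path,
// versus a meaningless 10 ns in the once-called one.
const hotSavingsNs = hotCalls * 10;
const coldSavingsNs = coldCalls * 10;
```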





  • Moore’s Law is Dead shared an interesting video yesterday about these chips. Supposedly, leaks from his sources at Intel say that high voltages being pushed through the ring bus cause degradation. The leaks claim it shares the same power rail as the P and E cores, meaning it’s influenced by the voltage requested by the cores.

    For context, the ring bus is responsible for communication between cores, peripherals, and the platform. This includes memory accesses, which means that if the ring bus fails and does something incorrectly, it could appear normal but result in errors far down the line.

    Going beyond the video specifically, and considering what others have suggested as workarounds, it seems like ring bus degradation might be a decent candidate for the actual root cause of these issues.

    Some observations around chips degrading were:

    • High memory pressure exacerbates the issue.
    • Chips with more cores deteriorate faster.

    Some of the suggestions to work around the issue were:

    • Lower the memory speed.
    • Lower the voltage and clock speeds.
    • Disable E cores.

    All of those can be related to stress being put on the ring bus:

    • Higher voltage being put through the bus -> higher likelihood of physical damage
    • More memory pressure -> more usage of the bus, more opportunity for damage to accumulate
    • More cores -> more memory pressure
    • Slower memory speeds -> less maximum throughput -> less stress

    I’m not claiming anything definitive, but I think my money is on this one.






  • To offer a differing opinion, why is null helpful at all?

    If you have data that may be empty, it’s better to explicitly represent that possibility with an Optional<T> generic type. This makes the API clearer, and if implicit null isn’t allowed by the language, prevents someone from passing null where a value is expected.

    Or if it’s uninitialized, the data can be stored as Partial<T>, where all the fields are Optional<U>. If the type system were nominal, it would ensure that an uninitialized or partially-initialized value can’t accidentally be used where T is expected, since Partial<T> != T. When the object is finally ready, have a function that converts the Partial<T> into a T.
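    A sketch of the idea in TypeScript, which has a built-in Partial<T>. Note that TypeScript's typing is structural, not nominal, so the "Partial<T> != T" guarantee described above isn't fully enforced here; the type and function names below are mine, not a real API:

```typescript
interface Config {
  host: string;
  port: number;
}

// A draft may be missing fields; the type system records that.
type Draft = Partial<Config>;

// The only route from Draft to Config: validate every field.
// Returning `undefined` plays the role of Optional<Config>.
function finalize(draft: Draft): Config | undefined {
  if (draft.host === undefined || draft.port === undefined) return undefined;
  return { host: draft.host, port: draft.port };
}

const draft: Draft = { host: "localhost" };
const incomplete = finalize(draft);              // undefined: port missing
const done = finalize({ ...draft, port: 8080 }); // a complete Config
```

    Callers are then forced to handle the "not ready yet" case explicitly instead of discovering a null at runtime.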