  • A-to-B made more sense in a world where devices couldn’t negotiate either role. When I got my Android phone, its data-transfer method was to plug my iPhone’s charge port into the Android’s charge port; the Android then initiated the connection as the host device.

    The true crime isn’t that the cable is bidirectional; the true crime is that there’s little to no proper distinction or error checking between USB, Thunderbolt, and DisplayPort modes, which are all simply carried on the same connector. I have no issue with the port supporting tunneled connections - that is in fact how docking stations work - just with the minimal labeling we get on modern devices.

    I’d be fine with a type-A to type-A cable if both devices had a reasonable chance of operating as both the initiator and the target - but that kind of behavior starts with USB-OTG and continues in type-C.
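    For what it’s worth, on Linux you can actually watch that role negotiation happen: the kernel’s typec class (when present) exposes each USB-C port’s negotiated data and power roles in sysfs. A minimal sketch, assuming a reasonably recent kernel; the bracketed value marks the active role:

    ```python
    from pathlib import Path

    # The typec class publishes one directory per USB-C port; the active
    # role in each attribute is wrapped in brackets, e.g. "[host] device"
    # or "host [device]", depending on which end won the negotiation.
    for port in sorted(Path("/sys/class/typec").glob("port*")):
        if not port.name[len("port"):].isdigit():
            continue  # skip partner/cable entries like "port0-partner"
        data_role = (port / "data_role").read_text().strip()
        power_role = (port / "power_role").read_text().strip()
        print(f"{port.name}: data={data_role!r} power={power_role!r}")
    ```

    On a dual-role port you’d see the bracket jump between host and device as the two ends renegotiate.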







  • I get the statement you’re trying to make here - giving the name of a platform you dislike the same he-who-must-not-be-named treatment Voldemort gets in Harry Potter - but all you’ve done is hide yourself from search engines. Now if someone is skimming for information on the platform via search, you’ve hidden your comments and posts from someone who might find your perspective useful. No one is going to try 15 ways of spelling a platform name (except maybe trying stackoverflow with and without the space). Internet users are pretty lazy.





  • That’s the nifty thing about Unix: stuff like this works. When you say “locked up”, I’m assuming you mean logging in to a graphical environment like GNOME, KDE, XFCE, etc. To an extent, this even applies to some heavy server processes: just replace most of the references to “graphical” with “application access”.

    Even lightweight graphical environments can take a decent amount of muscle to run, or else they lag. Plus even at a low level, they have to constantly redraw the cursor as you move it around the screen.

    SSH and plain terminals (Ctrl-Alt-F#; which number maps to what varies by distro) take almost no resources to run: SSH/getty (which are already running anyway), a quick call out to the password system, then a shell like bash or zsh. A single GUI application can take more standing RAM at idle than that entire stack - there’s a rough sketch at the end of this comment if you want to measure it yourself. Also, if you’re out of disk space, the graphical stack may not even be able to start.

    So when you’re limited on resources, whether by a low-spec system or a resource-exhaustion issue, an extra shell adds almost no overhead. It can squeeze into a tiny corner of whatever’s left on your resource-starved computer.

    Additionally, from a user-experience perspective, if you press a key and it takes a beat to show up, that doesn’t feel as bad as the same beat spent redrawing your cursor (which also burns extra CPU cycles you may not be able to spare).
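    Here’s that rough sketch - a hypothetical way to compare the standing RAM of the text-login stack against a single GUI app by summing VmRSS out of /proc. The process names are illustrative; swap in whatever your distro actually runs:

    ```python
    from pathlib import Path

    def rss_kib(pid: str) -> int:
        """Resident memory of one process in KiB, from /proc/<pid>/status."""
        for line in (Path("/proc") / pid / "status").read_text().splitlines():
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
        return 0  # kernel threads report no VmRSS

    def total_rss(names: set) -> int:
        """Sum resident memory over all processes whose name is in `names`."""
        total = 0
        for proc in Path("/proc").iterdir():
            if not proc.name.isdigit():
                continue
            try:
                if (proc / "comm").read_text().strip() in names:
                    total += rss_kib(proc.name)
            except (FileNotFoundError, PermissionError):
                continue  # process exited mid-scan or is off-limits
        return total

    # Illustrative process names; your box may run getty, zsh, a different
    # GUI app, etc.
    text_stack = total_rss({"sshd", "agetty", "login", "bash"})
    gui_app = total_rss({"firefox"})
    print(f"text login stack: {text_stack} KiB vs one GUI app: {gui_app} KiB")
    ```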




  • IMO the joke is more “timeless” because it uses state names instead of company names.

    Imagine if it had instead mentioned Xerox computers, DEC terminals*, IPX, and Ethernet hubs. We’d say “wow, that comic didn’t age well.” Even something as recent as “EVGA GPU” will end up in the history books rather than in common use.

    *Yes, I am aware that the VT100 terminal spec is from DEC. But nobody makes DEC terminals anymore.

    Ten years down the road, we don’t know what tech will look like. But there’s a high likelihood that the state of Pennsylvania will still exist and still be relevant.



  • We run on-prem and do all our upgrades by burning the OS and moving the data, with the exception of the hypervisor OS (which has a pretty resilient bulk self-upgrade built in, and we have a burn-the-OS plan documented in case one does crash). Even system file corruption on a random pet server? New VM, reattach the data disk (rough sketch below). Need high availability? Throw F5 or HAProxy at the problem (assuming L7 protocol support).

    Both cloud and on-prem can work equally well when done right. The most important part is to understand that each carries different kinds of cost (human, machine, developer) and to make the right choice based on your/your customer’s needs and any applicable laws or regulations about data locality. And yeah, sometimes one will be better for one shop and not another.

    Seven figures of cloud engineering can’t solve stupid, but neither can seven figures of datacenter. This isn’t some Sith/Jedi concept with hard definitions of dark and light or good and evil - though sometimes the two camps see each other as the enemy, and they are, in a way, competitors.
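    For the “new VM, reattach the data disk” move, here’s a minimal sketch on a libvirt hypervisor. The domain names, disk path, and vdb target are placeholders; the virsh subcommands themselves (detach-disk / attach-disk) are real:

    ```python
    import subprocess

    # Hypothetical "burn the OS, keep the data" recovery on a libvirt host.
    OLD_VM, NEW_VM = "app01-old", "app01-new"  # placeholder domain names
    DATA_DISK = "/var/lib/libvirt/images/app01-data.qcow2"  # placeholder path

    def virsh(*args):
        """Run one virsh subcommand, raising if it exits non-zero."""
        subprocess.run(["virsh", *args], check=True)

    # Detach the data disk (by its guest target name) from the corrupted VM...
    virsh("detach-disk", OLD_VM, "vdb", "--persistent")
    # ...then attach it to the freshly provisioned replacement.
    virsh("attach-disk", NEW_VM, DATA_DISK, "vdb", "--persistent")
    ```

    The point is that the data disk never gets rebuilt - only the disposable OS disk does.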