• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 9th, 2023




  • You still might want to do something like alias pbtar='tar --use-compress-prog=pbzip2' to easily use pbzip2 - unless you have an ancient system, that’ll speed things up significantly. And even if you don’t, it’d be nice to use it for creation - for extraction to utilize more than one core, the archive needs to have been created with pbzip2 in the first place.
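
    A rough sketch of what that can look like in practice - the alias as above, with made-up file names:

        # pbzip2 compresses on all cores; archives it created also decompress in parallel
        alias pbtar='tar --use-compress-prog=pbzip2'
        pbtar -cf backup.tar.bz2 /srv/data         # parallel creation
        pbtar -xf backup.tar.bz2 -C /srv/restore   # parallel extraction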



  • Ethernet is awesome. Super fast, doesn’t matter how many people are using it,

    You wanted to say “Switched Ethernet is awesome”. The big problem of Ethernet before that was the large collision domain, which made things miserable under high load. What Ethernet had going for it back then was the low price - which is why you commonly saw 10base2 setups in homes, while companies often preferred something like Token Ring.


  • It wasn’t really a replacement - Ethernet was never tied to specific media, and various cabling standards coexisted for a long time. For about a decade you had 10baseT, 10base2, 10base5 and 10baseF deployments in parallel.

    I guess when you mention coax you’re thinking about 10base2 - the thin black cables with T-pieces and end terminators common in home setups - which only arrived shortly before 10baseT. The first commercially available cabling was 10base5 - those thick yellow cables you’d attach a system to with AUI transceivers - which were still around as backbone cables in some places until the early 00s.

    The really big change in network infrastructure was the introduction of switches instead of hubs - before that you had a collision domain spanning the complete network, afterwards the collision domain was reduced to two devices. That improved the responsiveness of loaded networks to the point where many started migrating over from Token Ring - which in later years was also commonly run over twisted pair, so in many cases the migration was possible without touching the cables.


  • I do have a bunch of the HPs for work-related projects - they are pretty nice, and the x86 emulation works pretty well (and at least feels better than the x86 emulation in macOS) - but a lot of other stuff is problematic, like pretty much no support in Microsoft’s deployment/imaging tools. So far I haven’t managed to create answer files for unattended installation.

    As for Linux - they do at least offer disabling Secure Boot, so you can boot other stuff. It’d have been nicer to be able to load custom keys, though. It is nice (yet still feels a bit strange) to have an ARM system with UEFI. A lot of the bits required to make it work have either made it into upstream kernels or are on the way there, so I hope it’ll be usable soon.

    Currently, for the most stable setup, I need to run it from an external SSD, as that specific kernel does not have support for the internal NVMe devices. Booting that thing is a bit annoying: I couldn’t get the grub on the SSD to play nice with UEFI, so I boot from a different grub and then chainload the grub on the SSD.
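
    For illustration, a minimal sketch of what such a chainload entry can look like in the first grub’s configuration - the filesystem UUID and EFI path are placeholders, not my actual setup:

        menuentry "Chainload grub on external SSD" {
            insmod part_gpt
            insmod fat
            insmod chain
            # placeholder UUID of the SSD's EFI system partition
            search --no-floppy --fs-uuid --set=root 1234-ABCD
            # placeholder path to the grub EFI binary on the SSD
            chainloader /EFI/ubuntu/grubx64.efi
        }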



  • I’m running both physical hardware and cloud stuff for different customers. The problem with maintaining physical hardware is getting a team of people with the relevant skills together, not the actual work - the effort is small enough that you can’t justify hiring a dedicated network guy, for example, and the same applies to other specialities, so you need people capable of debugging and maintaining a wide variety of things.

    Getting those always was difficult - and (partially thanks to the cloud stuff) it has become even more difficult by now.

    The actual overhead - even when you’re racking the stuff yourself - is minimal. “Put the server in the rack and cable it up” is not hard - my last rack was filled by a high school student in part of an afternoon, after I explained once how to cable and label everything. I didn’t need to correct anything - which is a better result than I’ve had with many highly paid people I’ve worked with…

    So paying for remote hands in the DC, or - if you’re big enough - just ordering complete racks with pre-racked and pre-cabled servers, gets rid of the “put the hardware in” part.

    The next step is firmware patching and bootstrapping - that happens automatically via network boot (a rough sketch of that part follows at the end of this comment). After that it’s provisioning the containers/VMs to run on there - which at this stage isn’t any different from how you’d provision them in the cloud.

    You do have some minor overhead for hardware monitoring - but you hopefully have a monitoring solution anyway, so adding the hardware to it, and maybe having the DC guys walk past and report any red LEDs, isn’t much extra. If hardware fails you can just fail over to a different system - the cost difference to the cloud is so big that just keeping those spare systems around is worth it.

    I’m not at all surprised by those numbers - about two years ago somebody was considering moving our stuff into the cloud, and asked us to do some math. We’d have ended up paying roughly our yearly hardware budget (including the hours spent working on hardware, which we wouldn’t have in the cloud) to host just one of our largest servers in the cloud - and we’d have to pay that every year again, while with our own hardware and proper maintenance planning we can let old servers we paid for years ago slowly age out naturally.
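
    Purely as an illustration of the network-boot bootstrapping mentioned above - a provisioning host handing out addresses and a network bootloader via dnsmasq; the interface, address range and file names are made up:

        # hand out leases on the provisioning network and point new machines at a
        # network bootloader, which then pulls the firmware/install images
        dnsmasq --interface=eth1 \
                --dhcp-range=10.0.0.100,10.0.0.200,12h \
                --enable-tftp --tftp-root=/srv/tftp \
                --dhcp-boot=bootloader.efi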




  • I switched from an IBM M13 to a Tex Shinobi with Box Navy switches a few months ago. It is not as good as buckling spring, but good enough - and the more compact keyboard, full programmability and the better trackpoint make up for it.

    I initially tried Cherry MX Blues, but they’re horrible. Never understood the Cherry hype in the 90s, and still don’t understand it now.