…you have my condolences
Yep, being familiar with the data model is 98% of the effort.
The remaining 2% is the query.
I disagree unless the tests are reasonably high level.
Half the time the thing you’re testing is so poorly defined that the only way to tighten that definition is to iterate.
In this sense, you’re wasting time writing tests until you’ve iterated enough to have something worth testing.
At that point, a couple of regression tests offer the biggest bang for buck, so you can sanity-check that things are still working when you move on to another function and forget all about this one.
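A minimal sketch of that workflow, assuming pytest-style tests; `parse_duration` is a made-up stand-in for whatever function you just finished iterating on:

```python
# Hypothetical example: once an iterated-on function has settled, pin its
# current behaviour with a couple of high-level regression tests rather
# than an exhaustive spec. parse_duration is an invented stand-in.
def parse_duration(text: str) -> int:
    """Parse strings like '2h30m' into a number of seconds."""
    units = {"h": 3600, "m": 60, "s": 1}
    total, number = 0, ""
    for ch in text:
        if ch.isdigit():
            number += ch
        elif ch in units and number:
            total += int(number) * units[ch]
            number = ""
        else:
            raise ValueError(f"bad duration: {text!r}")
    return total

# Regression tests: a few known-good answers that will break loudly if a
# later refactor changes the behaviour you settled on.
def test_parse_duration_regressions():
    assert parse_duration("2h30m") == 9000
    assert parse_duration("45s") == 45
    assert parse_duration("1h1m1s") == 3661
```

The point is not coverage; it is cheap insurance for the moment, weeks later, when you touch something nearby and have forgotten how this one works.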
Wow, and here I was trying to set breakpoints using the devtools debugger and faffing around with sourcemaps.
Wish I knew about this 10 years ago!
“Security reasons” is the classic cop-out for making users’ lives more miserable.
Like what are you gonna do, argue that you don’t care about security?
A good project manager also returns 403 Forbidden to middle managers trying to inflate a project’s scope or poach project members
Only the ones that work for the BBC
Don’t avoid JVMs for “security” reasons; the security industry will raise CVEs against anything they think will look good on their CV.
Avoid JVMs because they are synonymous with overengineered, bloated, overly verbose code with crazy memory requirements for what they deliver.
They have played us for absolute fools
Refer to the meme - “Linux users and other Linux users”
I use Arch btw
No way, Debian stable is completely useless as a distro unless you’re into time machines and like the feeling of being stuck 5 years behind the curve
endSegmentation fault. Core dumped
No. Markup languages are configuration for an interpreter.
inb4 code is configuration for a compiler and binary is configuration for a processor
Is that a factorial yes?
NaN
Err, no? At what point did I claim to be an expert?
It doesn’t take a genius to realise that serving 100-record chunks of a billion-record dataset using LIMIT 100 OFFSET 582760200 is never gonna perform well.
Or that converting indexed time columns to strings and doing string comparisons on them makes every query perform an entire table scan, which is obvious if you actually take the time to look at the query plan (spoiler: they don’t)
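Both failure modes are easy to reproduce. A minimal sketch using Python’s bundled SQLite (the table, index, and numbers are made up for illustration):

```python
# Hypothetical demo of the two anti-patterns above: deep OFFSET pagination,
# and wrapping an indexed column in a function so the index can't be used.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, ts INTEGER)")
conn.execute("CREATE INDEX idx_events_ts ON events (ts)")
conn.executemany(
    "INSERT INTO events (ts) VALUES (?)",
    [(i * 60,) for i in range(10_000)],  # one event per "minute"
)

# Anti-pattern 1: OFFSET pagination. The engine still walks and discards
# every skipped row, so the cost of page N grows with N.
slow_page = conn.execute(
    "SELECT id, ts FROM events ORDER BY id LIMIT 100 OFFSET 9000"
).fetchall()

# Keyset (seek) pagination: remember the last id served and start from it.
# That is an index seek instead of a scan-and-discard, and returns the
# same page of rows.
last_seen_id = 9000
fast_page = conn.execute(
    "SELECT id, ts FROM events WHERE id > ? ORDER BY id LIMIT 100",
    (last_seen_id,),
).fetchall()

# Anti-pattern 2: converting the indexed column to a string before
# comparing. The planner can no longer use idx_events_ts and falls back
# to a full table scan, which EXPLAIN QUERY PLAN makes visible.
plan_bad = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events "
    "WHERE strftime('%s', ts, 'unixepoch') = '540'"
).fetchall()

# Comparing against the raw column lets the index do the work.
plan_good = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE ts = 540"
).fetchall()

print(plan_bad[0][-1])   # full scan, e.g. "SCAN events"
print(plan_good[0][-1])  # index seek, e.g. "SEARCH events USING ... idx_events_ts"
```

The exact EXPLAIN QUERY PLAN wording varies by SQLite version, but the scan-versus-search distinction is the thing to look for, and the same principle applies to any relational engine.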
“Why can’t the system handle more than 2 queries per second? This database sucks”
Devs who don’t understand how SQL or relational databases work write absolute abortions of queries.
9 times out of 10 - yes, it is absolutely the devs. I say that as the dev who gets tasked with analysing why these shitty queries from our low-budget outsourced labour are so slow.
What are text files if not binary blobs on disk?
What is SQL if not a query layer over a bunch of binary blobs on disk?
ngl, this meme tickled me
And then managers go “why does shadow IT exist?”