I want my NKRO.
Which can be done over USB; cheap keyboards just aren’t wired for it.
made you look
Mercurial and Darcs had a rather fatal flaw though: they were so much slower than git. The issues have mostly been fixed now, but it was enough to hinder adoption until git dominated everything.
Git also has a rather big flaw: it’s “good enough”. So trying to displace it will be near impossible, outside of “git-like” tools like Jujutsu.
Ahh, yep it turns out ARM actually removed Thumb support with their 64-bit transition, so their instruction length is fixed now, and Thumb never made it into the M* SoCs.
one of the problems with CISC is that it has variable-length instructions
RISC systems also have variable-length instructions; they’re just a bit stricter with the implementation, which alleviates a lot of the issues (ARM instructions are always either 16 or 32 bits, while RISC-V instructions are always a multiple of 16 bits and self-describing, similar to UTF-8).
Edit: Oh, and ARM further restricts instruction length based on a CPU flag, so you can’t mix and match at an instruction level. It’s always one or the other, or it’s invalid.
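To make the “self-describing, similar to UTF-8” part concrete, here’s a minimal sketch (my own illustration, not from any spec text above) of how a decoder can tell a RISC-V instruction’s length from just its first 16-bit parcel:

```python
# Rough sketch of RISC-V's length encoding: the low bits of the first
# 16-bit parcel say how long the instruction is, much like a UTF-8 lead
# byte. Function name and example values are mine.
def rv_length_bytes(parcel: int) -> int:
    if parcel & 0b11 != 0b11:
        return 2   # compressed (C extension) instruction
    if parcel & 0b11100 != 0b11100:
        return 4   # standard 32-bit instruction
    if parcel & 0b111111 == 0b011111:
        return 6   # 48-bit encoding
    if parcel & 0b1111111 == 0b0111111:
        return 8   # 64-bit encoding
    raise ValueError("longer or reserved encoding")

print(rv_length_bytes(0x4501))  # c.li a0, 0            -> 2
print(rv_length_bytes(0x0513))  # low half of addi a0, x0, 0 -> 4
```

The length falls out of the first few bits, so a decoder never has to guess where the next instruction starts.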
Well that’s disappointing.
a nonprofit owned by a for-profit company
It’s the other way around, the foundation owns the corporation.
Still feels like the corporation is the one making decisions though.
The funny thing is that for the longest time Intel actually had the majority share of GPUs, just by counting the ones embedded in laptop motherboards and the like. No idea if that’s still the case, or if Nvidia or AMD has been eating into it with their new models (e.g. what powers the Steam Deck).
They’ve tried to break into the discrete market a few times, most recently with their Arc cards, but the way they approach things is just so odd. It’s like they assume the first attempt will be a smash hit and dominate, and when it doesn’t they just flounder? The Arc cards launched to a lot of fanfare and then there was just silence and delays from Intel.
Bad management, bad luck, and the usual market stuff. They’re going to do anything they can to cut costs.
Their R&D for new fab work is falling behind competitors (being technically better doesn’t matter if nobody is buying it), they’ve had a bunch of bad CPU releases with hardware failures, and they’ve got next to no market presence in GPUs, which are currently making money hand over fist (mostly for dumb AI reasons, which is going to bite Nvidia hard when the bubble pops, because their new datacenter hardware is hyper-tuned for LLMs at the expense of general compute, unlike AMD’s).
And the reason you’ll want to do this is that it exposes FS mounts in the service dependency tree, so e.g. you can delay starting PostgreSQL until after you’ve mounted the network share that it’s using as a backing store, while letting unrelated tasks start concurrently.
If all you want to do is pass some special mount flags (e.g. x-systemd.automount), then fstab is the way; after all, it’s still systemd that’s parsing and managing it.
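As a rough illustration (the share, mount point, and drop-in file name below are made up), the fstab route plus an explicit service dependency might look like:

```
# /etc/fstab — hypothetical network share, mounted on demand via systemd's automount
nas.example.lan:/export/pgdata  /srv/pgdata  nfs  noauto,x-systemd.automount,_netdev  0  0

# /etc/systemd/system/postgresql.service.d/mounts.conf — don't start until the share is mounted
[Unit]
RequiresMountsFor=/srv/pgdata
```

systemd’s fstab generator turns that line into a mount unit, which is what ends up in the dependency tree.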
I mean yeah, there’s extra stuff layered on top of the underlying protocols that is badly designed. Docker was built with a hard dependency on IPv4, as was the Dat protocol. If these things had been designed properly from the start we wouldn’t be having these issues.
Apple was smart here: they mandate that iOS apps must support single-stack, IPv6-only networks, and they perform functional testing of that as part of the App Store review process. Devs can’t get away with pretending it’s not necessary and not wiring up support for it.
IPv6 is too complex, error-prone and unsupported to deploy without shooting yourself in the foot, even now, a few decades after its introduction.
Which is purely down to people not testing things before releasing them, because the support is there but there are layers of unnecessary stuff put in the way. Like, I had an old ISP-provided router that ran Linux, but the management UI was only ever tested against v4 networks, so none of the v6 stuff was actually hooked up correctly.
Support in desktops and mobile devices is effectively 100%, but even in embedded hardware there’s often full support, just not enabled correctly or tested.
If you don’t have IPv6 internally, you probably can’t access IPv6 externally. 6to4 gateways are a thing. 4to6? Not so much.
I’m pretty sure stateful gateways do exist, but it’s a massive ball of complexity that would be entirely avoided if people just used native v6.
e.g. if one monitor is 96 dpi and the other is 192 dpi, moving a window from one to the other shouldn’t result in the window becoming a different physical size, and it should render at its natural resolution on both (i.e. scaling it to half size for display on the 96 dpi monitor doesn’t count).
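A tiny sketch of the arithmetic (the 4-inch window is just an illustrative number):

```python
# Keeping the same physical size across monitors means re-rendering at a
# different pixel size, not rescaling one bitmap. Numbers are illustrative.
def pixels_for(inches: float, dpi: int) -> int:
    return round(inches * dpi)

print(pixels_for(4, 96))   # 384 px wide on the 96 dpi monitor
print(pixels_for(4, 192))  # 768 px wide on the 192 dpi monitor
```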
Only exception I got is global warming. We’ve never played this particular game before.
We also never had nukes before.
The conditions have been worse in the past, but the risks are so much worse these days.
It depends on the type of input validation you’re doing; a bunch of it is built into the browser, and you don’t need JS for it.
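For instance (a minimal sketch, the attribute choices are mine), the browser enforces all of this on its own, no JS needed:

```html
<!-- The browser blocks submission and shows its own error messages here -->
<form>
  <input type="email" required>
  <input type="number" min="1" max="100" step="1">
  <input type="text" pattern="[A-Za-z0-9]{3,16}" title="3-16 alphanumeric characters">
  <button>Submit</button>
</form>
```

Anything beyond per-field shape checks (cross-field rules, server lookups) still needs script, though.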
This is a problem that Nvidia is capable of solving, but they haven’t been interested in it for over a decade, so I don’t see them starting now.
They actually recently open-sourced a bunch of the required infrastructure and hired a number of the OSS driver maintainers.
It’s all still pretty crap, but there’s more hope now.
UUIDs are essentially random numbers; crypto schemes are not. They’re not comparable.
They’re also not using requests very efficiently, so who knows.
Ideally they’d be set to not be running unless they’re actively needed.
It’s a peaceful life.