• 0 Posts
  • 14 Comments
Joined 2 years ago
Cake day: July 2nd, 2023


  • You’re not wrong.

    Realistically, there’s some nuance. Many modern web apps are built from components that aren’t defined in HTML; a component doesn’t need HTML at all. Those non-HTML components can provide the consistency an app needs: sometimes consistency in how the data is fetched, sometimes in how it’s displayed. For display, each component effectively carries its own styling, but it doesn’t have to route that through a shared CSS class. A CSS class isn’t required.

    Tailwind isn’t meant to be a component system; it’s meant to supplement one. If you’re writing CSS-based components, it looks horrible. If you’re writing components on top of CSS that just need a foundation of styling best practices, it works pretty decently. There’s still consistency. There are still components. They’re just not centered around HTML/CSS anymore, and they don’t have to be.

    Semantically, it’s still worse HTML. Realistically, it’s often faster to iterate on and easier to keep from breaking, especially as the project grows larger. Combine that with code that’s easier to copy and paste, and it can be a tough combo to beat. It’s probably just a stepping stone to whatever comes next.
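    As a rough sketch of what that looks like (a hypothetical React/TSX button; the component name and classes are made up for illustration), the reuse lives in the component rather than in a shared CSS class:

    ```tsx
    import React from "react";

    // Hypothetical component: Tailwind utilities style it once, and every
    // call site stays consistent without defining a shared CSS class.
    type ButtonProps = {
      label: string;
      onClick?: () => void;
    };

    export function PrimaryButton({ label, onClick }: ButtonProps) {
      return (
        <button
          onClick={onClick}
          className="rounded bg-blue-600 px-4 py-2 font-semibold text-white hover:bg-blue-700"
        >
          {label}
        </button>
      );
    }

    // Call sites never repeat the utility classes:
    // <PrimaryButton label="Save" onClick={save} />
    ```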



  • Universities were already locking down their PCs in the 90’s, at least those with competent IT departments - BIOS password, locked boot menu, Windows 2000 with restricted user accounts.

    You need to make up your mind about which time period you’re using. The 90s? 2000? Before, you were talking about Windows 95.

    But notice: you’re talking about universities, while we’re talking about children under 18. Those computers were not nearly as locked down, and that has changed since the 90s. Security in the 90s (especially before TCP/IP was standard) was different from 2000-2010 security, which was different from 2010s+ security. Yet you’re trying to claim it hasn’t changed? That’s so inaccurate it’s laughable.

    Even in the Linux world, the pre-IP vs. slow-internet vs. fast-internet vs. post-sudo security models have changed a lot. I’d be skeptical of anyone arguing that the security and lockdown of these computers has not changed in 30 years. Is that your argument? If not, why did you start with “Windows 95”?

    If you don’t do that, your every PC will have 15 copies of Counter Strike and a bunch of viruses in one week.

    And? People still get viruses. People still install games if they can. The tools to stop that on PCs are far better than they were 30, 20, or even 10 years ago. Chromebooks are even more effective than those tools, locking machines down to the point of being unusable.

    Chromebooks (and laptops in general) are way cheaper now than PCs were back then, so again, you need to buy your own and install a proper OS, the situation did not really change.

    Before: if you wanted to do work at home, you or your family had to buy a computer. Kids might have needed to convince their parents before experimenting, but that was far easier than convincing a school administration.

    Today? What families have a “family computer?”

    Kids get a phone, they might get a tablet, and if they get a computer, it’s the school one. The need for a family computer has basically gone away. All of those computers are locked down, and Google happens to make the locked-down OSes for their replacements: Chromebooks, phones, and tablets. Yet, according to you, the situation hasn’t changed. From a child’s perspective, though: they’ll probably never get the opportunity to play with a non-locked-down computer.


  • You seem to have missed their argument. Those were the standard in 1995, before OSes had really integrated the internet. Ditching the floppy disk, adopting WiFi, and having drivers auto-loaded/discovered automatically (or not needed at all) are independent developments. Even by the time Chromebooks started becoming standard, installing drivers from physical disks was rare, Windows could automatically find and update drivers (how well, eh), and WiFi existed and was faster than most internet connections. You could install Linux and it would mostly work, provided your hardware wasn’t too new.

    The actual argument is that Chromebooks are contributing to tech illiteracy because they’re:

    • Locked down: devices that most users can’t repair or customize, especially when given out by a school or organization. Locking them down is a feature.
    • Below cost: they’re the cheapest devices available because Google makes more money from the data.

    Organizations buy these devices because they’re cheap (below cost), lock them down, and those locked-down devices become the only computer for most students. While it’s technically possible to install Linux, these users can’t: they aren’t their devices. The organizations bought them precisely because they were cheap and easily locked down for kids. If these are their main devices, and they’re not allowed (either technically or by policy) to install another OS, where will they learn tech literacy? Not on their phone, not on their tablet, and not on their school-issued laptop.

    They’ve been locked into a room, and people wonder why they don’t know how to interact outside. You’re arguing that the room today is better than the one in 1995. That’s true, but it doesn’t change the argument:

    1. Maybe they shouldn’t be locked into the room.
    2. Maybe it shouldn’t be cheaper to lock the room than to let them go outside.
    3. Maybe we need to do more to help them see outside the room.

  • They also fired all their park workers during covid and gave themselves 10 million bonuses while their workers were surviving on food stamps. Some workers had even signed non compete clauses so they literally could not use their talents elsewhere to feed themselves.

    There are plenty of things to hate Disney for, especially as they approach super-monopoly status, ruin nearly every franchise they touch, and have trouble telling what’s good or not. As a company, Disney’s morals and decisions grow more concerning every month. Disney is basically a disaster in progress.

    However, this specific complaint misses: it’s at the wrong scale. Many companies were in the wrong during COVID, but it’s hard to look at these numbers and say the layoffs were bad decisions because of $10M in bonuses. The scales are just too different.

    Disney laid off 32,000 park workers. At a measly 40 hours per week at their “minimum wage” (formerly $15/hr, now $24/hr), that’s $83.2 million PER MONTH, or $998M a year. A $10M “bonus” is about 1% of that, and even smaller compared to the $6.4B in park revenue they had lost.
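    For the back-of-envelope math (assuming the old $15/hr figure and a 52-week year), a quick sketch:

    ```ts
    // Back-of-envelope payroll for 32,000 laid-off park workers.
    const workers = 32_000;
    const hourlyWage = 15;   // USD/hr, the former "minimum wage"
    const hoursPerWeek = 40;

    const weekly = workers * hourlyWage * hoursPerWeek; // $19,200,000
    const yearly = weekly * 52;                         // ~$998,400,000
    const monthly = yearly / 12;                        // ~$83,200,000

    // A $10M bonus next to that payroll:
    console.log(`${((10_000_000 / yearly) * 100).toFixed(1)}%`); // ~"1.0%"
    ```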

    The former CEO “gave up” his salary ($3M) and “bonus” ($45M in 2019), the executive staff took 20-30% pay cuts, and there were a few other items. The CEO did get “$10M” in stock awards, but stock awards don’t get anyone off food stamps. Those stocks become nothing if the company posts bad financials, which would hurt far more than just the execs.

    The $1.5B dividend payout in April 2020 looks much worse. Abigail Disney ranted about it on Twitter (now X). Her rant is at the appropriate scale: Disney paid out billions before choosing to save millions. The execs got quite a bit of that dividend payout. That’s the greed.


  • Did you purposely miss the first and last questions? Which laptop is the good value?

    I never said people need to run LLMs. I said Apple dominates high-end laptops, and I wanted a good high-end alternative to compare to the high-end MacBooks.

    Instead of just complaining about Apple, can you do what I asked? What’s the best cheaper laptop alternative that checks the non-LLM boxes I mentioned:

    If you want good cooling, good power (CPU and GPU), good screen, good keyboard, good battery, good WiFi, etc., the options get limited quickly.


  • Is there a particular model you’re thinking of? Not just the line. I usually find that Windows laptops don’t have enough cooling or make other sacrifices. If you want good cooling, good power (CPU and GPU), good screen, good keyboard, good battery, good WiFi, etc., the options get limited quickly.

    Even the RAM cost misses part of the picture. Apple Silicon’s RAM is available to the GPU and can run local LLMs and other machine-learning models. Pre-AI-hype Macs from 2021 (maybe 2020) already had this hardware; compare that to PC laptops from the same era. Even in this era, try getting Apple’s 200-400GB/s RAM bandwidth on a PC laptop.
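    As a rough illustration of why that bandwidth matters (a simplified rule of thumb with assumed numbers, not a benchmark): bandwidth-bound token generation reads roughly every weight once per token, so throughput is about bandwidth divided by model size.

    ```ts
    // Simplified rule of thumb (assumed numbers, not a benchmark):
    // tokens/sec ≈ memory bandwidth / bytes of weights read per token.
    const bandwidthGBps = 400;  // top-end Apple Silicon laptop figure
    const paramsBillions = 7;   // e.g. a 7B-parameter model
    const bytesPerParam = 2;    // fp16/bf16 weights

    const modelGB = paramsBillions * bytesPerParam; // 14 GB of weights
    const tokensPerSec = bandwidthGBps / modelGB;   // ≈ 28.6 tokens/sec

    console.log(tokensPerSec.toFixed(1));
    ```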

    PC desktop hardware is the most flexible option for any budget and is cost-effective for most budgets. For laptops, Apple dominates their price points, even pre-Apple-silicon.

    The OS becomes the final nail in the coffin. Linux is great, but a lot of software still only supports Windows and macOS, and Linux support for the latest/current hardware can be hit or miss (my three-year-old, 12th-gen ThinkPad just started running well). If the choice is between macOS and Windows 11, is there much of a choice? Does that change if a company wants to buy, manage, and support it? Which model should we be looking at? It’s about time to replace my ThinkPad.



  • You should probably read/know the actual law, rather than just getting close. You’re probably referring to [18 USC 922(d)(10)](https://uscode.house.gov/view.xhtml?req=(title:18 section:922 edition:prelim)), which includes any felony-- not just shooting. That’s one of 11 listed requirements in that section, all of which assume the first requirement, (a)(1), is met: that it’s not an interstate or foreign transaction. There’s a lot more to it than just “as long as you don’t have good evidence they’re going to go shoot someone.”

    Even after the sale, ownership is still illegal under section (g)-- it just isn’t the seller’s fault anymore.

    This is basic information that any gun-safety advocate should know. “Responsible” gun owners must know those laws, plus others, backward and forward. One small slip-up means a felony, jail, and permanent loss of gun ownership/use. Are they really supposed to listen to people who can’t even describe current law correctly?

    The law can be better, but you won’t do yourself any favors by misrepresenting it.


  • It seems you are mixing up the concepts of voting systems and candidate selection. Neither FPP nor FPTP should sound scary. As a voting system, FPP works well enough more often than many want to admit. The name just describes it in more detail: First Preference Plurality.

    Every voting system is only as bottom-up or top-down as its candidate selection process. The voting system itself doesn’t really affect whether it is top-down or bottom-up. Requiring approval/voting from the current rulers would be top-down. Only requiring ten signatures on a community petition is more bottom-up.

    The voting systems don’t care about the candidate selection process. Some require pre-coordination for a “party,” but that could also be a party of one. A party of one might not get as much representation as one with more people, but that’s also the case for every voting system that selects the same number of candidates.

    Voting systems don’t even need to be used for representation. If a group of friends is voting on where to eat, one problem might be selecting the places to vote on, but that happens before the vote. Within the vote, FPP might have 70% preferring pizza over Indian food, yet the Indian option can still win because the pizza voters split across different first choices, as in the sketch below. Having more candidates often leads to minority rule/choice, and that’s not very good for food choices or community representation.
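    A tiny tally sketch with made-up numbers shows the split:

    ```ts
    // Made-up first-preference counts: 72% prefer pizza overall,
    // but the pizza vote splits three ways, so Indian wins the plurality.
    const firstPreferences: Record<string, number> = {
      "Pizza Place A": 24,
      "Pizza Place B": 24,
      "Pizza Place C": 24,
      "Indian": 28,
    };

    const [winner] = Object.entries(firstPreferences).reduce((best, next) =>
      next[1] > best[1] ? next : best
    );

    console.log(winner); // "Indian", despite 72% preferring pizza
    ```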




  • Do you use Android? AI was the last thing on their minds for AOSP until OpenAI got popular. They’ve been refining the UIs, improving security/permissions, catching up on features, bringing WearOS and Android TV up to par, and making Google Assistant incompetent. Don’t take my word for it; you’ll rarely see any AI features before OpenAI’s popularity: [v15](https://developer.android.com/about/versions/15/summary), v14, v13, and [v12](https://developer.android.com/about/versions/12/summary). As an example of the benefits: Google and Samsung collaborating on WearOS allowed more custom apps and integrations for nearly all users. Still, there was a major drop in battery life and in compatibility with non-Android devices compared to Tizen.

    There are plenty of other things to complain about with their Android development. Will they continue to change or kill things like they do with all their other products? Did WearOS need to require Android and exclude iOS? Do advertising APIs belong in the base OS? Should vendors be allowed to lock down their devices as much as they do? Should so many features be limited to Pixel devices? Can we get Google Assistant to say “Sorry, something went wrong. When you’re ready, give it another try” less often, instead of it encouraging stupidity? (It’s probably not going to work if you try again.)

    Google does a lot wrong, even in Android, but AI on Android isn’t one of those things yet. Most other commercially developed operating systems are proprietary rather than open to users and OEMs. The collaboration leaves much to be desired, but Android is, unfortunately, one of the best examples of large-scale development of a more open and libre/free system. A better solution than trying to break Android up is taking Android and making it better than Google seems capable of.


  • I’m still rocking a Galaxy Watch 4, one of the first Samsung watches with WearOS. It has a true always-on screen, as most should. The always-on display was essential to me; I generally notice within 60 minutes if an update or some “feature” tries to turn it off. Unfortunately, that’s the only thing off about your comment.

    It’s a pretty rough experience. The battery is hit or miss. At the best of times, I could get 3 days. Keeping it locked (like after charging) used to kill it within 60 minutes (thankfully fixed after a year). Bad updates can kill the battery life, even when new: from 3 days of life down to 10 hours, then back to 3 days. Now, after almost 3 years, it’s probably about 30 hours rather than 3 days.

    In general, the battery life with always-on display should last more than 24 hours. That’d be pretty acceptable for a smartwatch, but is it a smartwatch?

    It can’t play music on its own without overheating. It can’t hold a phone call on its own without overheating. App support is limited, and the processor seems to struggle most of the time. Actually-smart features seem rare, especially for something that needs charging so often.

    Most would be better off with a Pebble or a less “smart” watch: better water resistance, better battery, longer support, 90% of the usable features, and other perks that help make up for any differences.