• 0 Posts
  • 95 Comments
Joined 2 years ago
Cake day: June 14th, 2023

  • If we had the technology to freely form diamond, it would be enormously useful: it’s exceptionally hard, has incredible chemical resistance, has among the very best thermal conductivities of any material, and it isn’t particularly heavy.

    Being able to coat the inside of chemical vessels and pipes with diamond would hugely increase their lifespan, and a heat exchanger made out of it would be incredible. Great for food processing, since you’d be able to clean it easily; great for abrasive or highly acidic / alkaline materials that corrode everything else. Probably awesome as a base layer for semiconductors too, as it would be great for heat dissipation.

    But we are probably talking about nanotechnology to lay it down in sheets, which we don’t have (yet).






  • You are not joking. Compare a $2000 Purism Liberty with, e.g., a $200 HMD Fusion. The Fusion has a somewhat better screen and battery, and a much better processor and camera. It has more RAM, the option of more storage, and NFC. It’s also designed to be easy to maintain, yet is somewhat thinner and lighter despite having a larger screen area. Are ‘made in USA’ and ‘open-source drivers’ worth paying 10x as much for a noticeably worse phone? (It’s not really ‘made in USA’ either - it’s a mix of US, Chinese and Indian parts assembled in the USA.)

    I think that the people who believe a US-made iPhone would also cost $2k are kidding themselves - economies of scale and all that, but it would have to be substantially more.



  • Enough of that crazy talk - plainly WheeledDeviceServiceFactoryBeanImpl is where the dependency injection annotations are placed. If you can tell what the code does without stepping through it with a debugger, and any backtrace doesn’t have at least two hundred lines of Spring Boot in it, then plainly it isn’t enterprise enough.

    Fair enough, though. You can write stupid overly-abstract shit in any language, but Java does encourage it.
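
    For anyone who hasn’t had the pleasure, here’s a tongue-in-cheek sketch of the pattern in question - every name below is made up for illustration, in the spirit of WheeledDeviceServiceFactoryBeanImpl - with three layers of indirection standing between you and a bicycle:

    ```java
    // Hypothetical illustration of the over-abstracted enterprise style being mocked;
    // none of this is from a real codebase.

    /** The thing we actually care about. */
    interface WheeledDevice {
        int wheelCount();
    }

    /** The only concrete behaviour in the whole edifice. */
    class Bicycle implements WheeledDevice {
        @Override
        public int wheelCount() {
            return 2;
        }
    }

    /** A service wrapping the device, because using it directly would be too easy. */
    interface WheeledDeviceService {
        WheeledDevice provideWheeledDevice();
    }

    /** The factory for the service - this is where the DI annotations would live. */
    class WheeledDeviceServiceFactoryBeanImpl {
        // In a real Spring application this would be wired up with @Bean / @Autowired
        // rather than instantiated by hand.
        public WheeledDeviceService createWheeledDeviceService() {
            return Bicycle::new;
        }
    }

    public class EnterpriseDemo {
        public static void main(String[] args) {
            int wheels = new WheeledDeviceServiceFactoryBeanImpl()
                    .createWheeledDeviceService()
                    .provideWheeledDevice()
                    .wheelCount();
            System.out.println(wheels); // prints 2, eventually
        }
    }
    ```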



  • Well now. My primary exposure to Go would be using it to take first place in my company’s ‘Advent of Code’ several years ago, in order to see what it was like, after which I’ve been pleased never to have to use it again. Some of our teams have used it to provide microservices - REST APIs that do database queries, some lightweight logic, and conversion to and from JSON - and my experience of working with that is that they’ve inexplicably managed to scatter all the logic among dozens of files, for what might be done with 80 lines of Python. I suspect the problem in that case is the developers, though.

    It has some good aspects - I like how easy it is to do a static build that can be deployed in a container.

    The actual language itself I find fairly abominable. The lack of exceptions means that error handling is smeared all through everything, and isn’t necessarily any better than in other modern languages. The lack of overloads means that you’ll have multiple definitions of e.g. math.Min cluttering things up. I don’t think the container classes are particularly good. And pointers seem to exist solely to let you have nil-pointer panics; they’re a pointless wart.

    If what you’re wanting to code is the kind of thing that Google do, in the exact same way that Google do it, and you have a team of hipsters who all know how it works, then it may be a fine choice. Otherwise I would probably recommend using something else.


  • I feel that Python is a bit of a ‘Microsoft Word’ of languages. Your own scripts are obviously completely fine, using a sensible and pragmatic selection of the language features in a robust fashion, but everyone else’s are absurd collections of hacks that fall to pieces at the first modification.

    To an extent, ‘other people’s C++ / Bash scripts’ have the same problem. I’m usually okay with ‘other people’s Java’, which to me is one of the big selling points of the language - the slight wordiness and lack of ‘really stupid shit’ makes collaboration easier.

    Now, a Python script that’s more than about two pages long? That makes me question its utility. The ‘duck typing’ everywhere makes any code that you can’t ‘keep in your head’ very difficult to reason about.


  • Frezik has a good answer for SQL.

    In theory, Ansible should be used for creating ‘playbooks’ listing the packages and configuration files which are present on a server or collection of servers, and then ‘playing the playbook’ arranges it so that those servers exist and are configured as you specified. You shouldn’t really care how that is achieved; it is declarative.

    However, in practice it has input, output, loops, conditional branching, and the ability to execute subtasks recursively. (In fact, it can be quite difficult to stop people from using those features, since ‘declarative’ doesn’t necessarily come easily to everyone, and it makes for very messy config.) I think those are all the features required for Turing equivalence?

    Being able to deploy a whole fleet of servers in a very straightforward way comes as close to the ‘infinite memory’ requirement as any programming language can get, although you do need basically infinite money to do that on a cloud service.





  • Yeah, Fark used to be great. That bear headline is a beast.

    And then they got rid of the ‘foobies’ (i.e. nudity) links off of the main page in order to appeal to advertisers, then they got rid of lots of other stuff that upset advertisers, then they started shadow-banning paying subscribers if their posts didn’t fit the narrative. And then all the users got fed up with it all and moved over to Reddit, where the mods were more transparent and there was more of a sense of community. How ironic.

    If your core site content is users posting links and commenting on them, then there’s probably a lesson to be learned about how important it is to treat your users well and have a welcoming, inclusive community. Probably a lesson that Lemmy users have already learned, mind.


  • Is that Windsurf? My lot have just added that. It keeps suggesting making the path to every target in the build pipeline the same, so that they’d overwrite each other, or perhaps implementing the worst null-checking code I’ve ever seen.
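
    To give a flavour of the null-checking - this is a hypothetical reconstruction, not the actual suggestion, and the names are invented - it was along the lines of the first method below, where any reviewer would rather see the second:

    ```java
    import java.util.Objects;

    public class NullCheckFlavour {

        record Widget(String name) { }

        // Hypothetical reconstruction of the suggested style: every reference
        // checked in its own nested branch, and every failure silently swallowed.
        static String describeSuggested(Widget widget) {
            if (widget != null) {
                if (widget.name() != null) {
                    if (!widget.name().isEmpty()) {
                        return widget.name().trim();
                    } else {
                        return "";
                    }
                } else {
                    return "";
                }
            } else {
                return "";
            }
        }

        // Roughly the same result, but failing fast on genuinely broken input
        // instead of hiding it.
        static String describe(Widget widget) {
            Objects.requireNonNull(widget, "widget must not be null");
            String name = widget.name();
            return name == null ? "" : name.trim();
        }

        public static void main(String[] args) {
            System.out.println(describe(new Widget("  bell  ")));
        }
    }
    ```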

    The problem with suggesting 99% stupid shit is that I’m going to ignore the 1% that it identified correctly. If it limited itself to trivial syntax errors then it might have quite a useful hit rate, but we already have tools that do that.



  • You’ve got that a bit backwards. Integrated memory on a desktop computer is more “partitioned” than shared - there’s a chunk for the CPU and a chunk for the GPU, and it’s usually quite slow memory by the standards of graphics cards. The integrated memory on a console is completely shared, and very fast. The GPU works at its full speed, and the CPU is able to do a couple of things that are impossible to do with good performance on a desktop computer:

    • load and manipulate models which are then directly accessible by the GPU. When loading models, there’s no need to read them from disk into the CPU memory and then copy them onto the GPU - they’re just loaded and accessible.
    • manipulate the frame buffer using the CPU. Often used for tone mapping and things like that, and a nightmare for emulator writers. Something like RPCS3 emulating Dark Souls has to turn this off; a real PS3 can just read and adjust the output using the CPU with no frame hit, but a desktop would need to copy the frame from the GPU to main memory, adjust it, and copy it back, which would kill performance.

  • Google did claim “half their new code” was AI-generated; obviously, take that with a pinch of salt, since they’ve a vested interest in promoting LLMs.

    Speaking as a professional dev, about half of my lines of code consist of whitespace, opening-and-closing marks for the javadocs, and such things as function, method and class definitions with their matching curly close-brackets. My IDE generates all of that for me, but I dare say I could use an LLM to do it instead, and then “half my code” would be AI-generated too.
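
    As a made-up but representative illustration of that ratio, here’s a perfectly ordinary little class; count the lines, and well over half of them are javadoc, braces and blank lines rather than anything resembling logic:

    ```java
    /**
     * A temperature reading. (A hypothetical example, but typical of the
     * line-count breakdown: most of this file is javadoc, braces and whitespace,
     * and the IDE generated nearly all of it.)
     */
    public class Temperature {

        private final double celsius;

        /**
         * Creates a reading.
         *
         * @param celsius the temperature in degrees Celsius
         */
        public Temperature(double celsius) {
            this.celsius = celsius;
        }

        /**
         * Converts the reading.
         *
         * @return the temperature in degrees Fahrenheit
         */
        public double toFahrenheit() {
            return celsius * 9.0 / 5.0 + 32.0;
        }
    }
    ```

    Roughly four of those lines actually do anything.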

    My colleagues who are most enthusiastic about AI do turn in some right shit for code review; even the best of it is over-complex and has confused error handling. They also tend to put about a hundred lines describing what they’ve changed in the pull request description, and little or nothing about why. GitHub already shows me what you’ve changed; I’m only interested in why you’ve done it, so all that text actually provides negative value by wasting my time having to read it.