• 0 Posts
  • 21 Comments
Joined 2 years ago
Cake day: August 2nd, 2023

  • backgroundcow@lemmy.world to Fuck AI@lemmy.world: On Exceptions

    Copyright law is more or less always formulated as limits on the right to redistribute content, not on how it is used. Hence, it isn’t a particularly strange position to take that one should be allowed to do whatever one wants with gen AI in the private confines of one’s home, and that it is only at the moment you start to redistribute content that we have to start asking the difficult questions: what is, and what is not, a derivative work of the training data? What ethical limitations, if any, should apply when we use an algorithm to effortlessly copy “a style” that another human has spent a great deal of effort developing?



  • The only reason this is “click bait” is because someone chose to do this, rather than their own mental instability bringing this out organically.

    This is my point. The case we are discussing now isn’t noteworthy, because someone doing it deliberately is about as “impressive” as writing out a disturbing sentence in MS Paint. One cannot create a useful “answer engine” that is incapable of producing something weird, provocative, or offensive when taken out of context, any more than one can create a useful drawing program that blocks all offensive content. Nor is that a worthwhile goal.

    The cases to care about are those where the LLM takes a perfectly reasonable conversation off the rails. Clickbait like the one in the OP is actually harmful in that it drowns out such real cases, and therefore deserves ridicule.




  • If someone is trying to do the most good with their money, it seems logical to give via an organization that distributes the funds according to a plan. Handing out money to whoever happens to be closest at hand instead seems motivated more by making myself feel good than by actually making a difference.

    Furthermore, there are larger-scale systemic issues. Begging takes up a lot of time. That becomes a problem if it pays well enough to outcompete more productive uses of that time, which could in some cases pay and in other cases at least be more useful: childcare, teaching kids, home maintenance, cooking, cleaning, etc. In contrast, state welfare programs and aid organizations usually do not condition help on the recipient sitting idle for long stretches. Add to this that begging really only works in crowded areas, which may limit the possibility of relocating somewhere life might be more sustainable. Hence, in the worst case, handing out money to those who beg for it could actually make it harder for people stuck in a very difficult situation to get out of it.

    This “analysis” of course skips over the many, many individual circumstances that get people into a situation where begging seems the right choice. What we should be doing is investing public funds even more heavily in social programs and other aid to (1) prevent, as far as possible, people ending up in these situations; and (2) get people out of them as effectively as possible.


  • I don’t get this. Why are so many countries willing to play Trump’s game? It seems a horrible long-term strategy to allow one country to hold global trade hostage this way. Shouldn’t we negotiate between ourselves, i.e., between the affected countries?

    The idea should be: for us, exports of X, Y, and Z are taking a hit; for you, A, B, and C. So let’s lower our tariffs in these respective areas to soften the blow to the affected industries. That way, we would partly make up for, say, lost car exports to the US, at the cost of additional competition on the domestic market for, say, soybeans, and vice versa, evening out the effects as best we can.

    With such agreements in place, we can return to Trump from a stronger position and say: we are willing to negotiate, but not under threat. We will do nothing until US tariffs are back to the levels before this started. But, at that point, we will be happy to discuss the issues you appear to see with trade imbalances and tariffs, so that we can find a mutually beneficial agreement going forward.

    Something like this would send a clear message and do far more good for future trade stability.


  • What we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data

    Prove to me that this isn’t exactly how the human mind – i.e., “real intelligence” – works.

    The challenge in asserting how “real” the intelligence-mimicking behavior of LLMs is lies not in convincing us that it is “just” the result of cold, deterministic statistical algorithms running on silicon. That we know, because we created them that way.

    The real challenge is to convince ourselves that the wetware electrochemical neural machinery embedded in our skulls, which evolved through a fairly straightforward process of natural selection to improve our odds of survival, isn’t relying on statistical models whose inner workings are, essentially, the same.

    All these claims that human creativity is so outstanding that it “obviously” will never be recreated by deterministic statistical models that “only” interpolate, into new contexts, knowledge picked up from observing human knowledge: I just don’t see it.

    What human invention, work of art, or idea was so truly, undeniably, completely new that it cannot have sprung out of something that came before it? Even the bloody theory of general relativity, held as one of the pinnacles of human intelligence, has clear connections to what came before. If you read Einstein’s works, he is actually very good at explaining how he worked it out in increments from earlier models and ideas (“what happens with a meter stick in space”, etc.): i.e., he was very good at using the tools we have to systematically carry our understanding from one domain into another.

    To me, the argument in the linked article reads a bit like “LLM AI cannot be ‘intelligence’, because when I introspect I don’t feel like a statistical machine”. That seems about as sophisticated as the “I ain’t no monkey!” counterargument against evolution.

    All this is NOT to say that we know that LLM AI = human intelligence. It is a genuinely fascinating scientific question. I just don’t think we have anything to gain from the “I ain’t no statistical machine” line of argument.





  • These two are not interchangeable or really even comparable though?

    For GNU Make, yes they are. These are fully comparable tools for writing sophisticated dynamic build systems; see the sketch at the end of this comment for the kind of thing I mean. “Plain make”, not so much.

    [cmake] makes your build system much, much more robust, far easier to maintain, much more likely to work on other systems than your own, and far easier to integrate with other dependent projects.

    This is absolutely incorrect. I assume (although I have never witnessed it) that a true master of cmake could use it to create a robust, maintainable, transferable build system. Very much like there are people who are able to carve delicate ice sculptures with a chainsaw. But in no way do these properties follow from the choice of cmake as a build system (as your post insinuates); rather, the phrase we are looking for here is: despite using cmake.

    I apologize for my inflammatory language. I may just have a bit of PTSD from having to build a lot of other people’s software through multiple layers of meta build systems. And cmake keeps coming up, time and time again, as the source of loads of obstacles.
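
    To make concrete what I mean by a sophisticated, dynamic build in GNU Make, here is a minimal sketch; the layout (src/, build/, a single “app” binary) and the flags are hypothetical, not something from this thread. It discovers sources with wildcard, compiles them through a pattern rule, and lets the compiler track header dependencies via generated .d files: exactly the kind of thing people often assume requires a meta build system.

        # Minimal sketch, assuming C sources in src/ and objects in build/.
        # Note: recipe lines must be indented with a tab character.
        CC     := cc
        CFLAGS := -O2 -Wall -MMD -MP        # -MMD/-MP make the compiler emit .d dependency files
        SRCS   := $(wildcard src/*.c)       # discover sources dynamically
        OBJS   := $(patsubst src/%.c,build/%.o,$(SRCS))

        app: $(OBJS)
        	$(CC) $(CFLAGS) -o $@ $^

        build/%.o: src/%.c | build          # pattern rule; order-only prerequisite creates build/
        	$(CC) $(CFLAGS) -c -o $@ $<

        build:
        	mkdir -p $@

        -include $(OBJS:.o=.d)              # pull in the compiler-generated header dependencies

        .PHONY: clean
        clean:
        	rm -rf build app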