• 0 Posts
  • 12 Comments
Joined 2 years ago
Cake day: July 5th, 2023



  • Interesting points; maybe a book I’ll have to give a read. I’ve long thought that information overload on its own leads to a kind of subjective compression, and that we’re seeing the consequences of this, plus late-stage capitalism.

    Basically, if we only know about 100 people and 10 events and 20 things, we have much more capacity to form nuanced opinions, like a vector with lots of values. We don’t just have an opinion about the person, our opinion toward them is the sum of opinions about what we know about them and how those relate to us.

    Without enough information, you think in very concrete ways. You don’t build up much nuance, and you have clear, or at least self-evident, logic for your opinions that you can point at.

    Hit a sweet spot, and you can form nuanced opinions based on varied experiences.

    Hit too much, and now you have to compress the nuances to make room for coarser comparisons. Now you aren’t looking at the many nuances and merits; you’re abstracting things. Necessary simulacrum.

    I’ve wondered if this is why we’ve seen so much social regression, or at least so much openness about it. There are so many things to care about, to know, to attend to, that the only way to approach it all is to apply a compression, and everyone’s worldview is their compression algorithm. What features does a person classify on?
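
    To make that concrete, here’s a toy sketch of the metaphor; every feature and number below is invented purely for illustration.

    ```python
    # Toy illustration of "worldview as compression algorithm": a nuanced
    # opinion vector collapses to the few features a person classifies on.
    # All features and weights here are made up for the metaphor.

    nuanced_opinion = {
        "kind_to_strangers": 0.8,
        "shared_hobby": 0.9,
        "political_label": -0.4,
        "taste_in_music": 0.6,
        "disagreed_with_me_once": -0.2,
    }

    def compress(opinion: dict, kept: list) -> float:
        """Lossy worldview: drop every nuance except the kept features."""
        return sum(opinion.get(feature, 0.0) for feature in kept)

    # Two compression algorithms, two very different verdicts on one person.
    print(compress(nuanced_opinion, ["political_label"]))                    # -0.4
    print(compress(nuanced_opinion, ["shared_hobby", "kind_to_strangers"]))  # ~1.7
    ```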

    I feel like we just aren’t equipped to handle the global information age yet, and we need specific ways of being to handle it. It really is a brand new thing for our species.

    Do we need to see enough of the world to learn the nuances, then transition to tighter community focus? Do we need strong family ties early with lower outside influence, then melting pot? Are there times in our development when social bubbling is more ideal or more harmful than otherwise? I’m really curious.

    Anecdotally, I feel like I benefited a lot from tight-knit, largely anonymous online communities growing up: learning from groups of people from all over the world, of different ages and beliefs, engaging in shared hobbies and learning about different ways of life. But eventually the neurons aren’t as flexible for breadth, and depth becomes the drive.


  • PixelProf@lemmy.ca to Programmer Humor@programming.dev · Excel · edited 4 months ago

    Oh yeah, the 365 version is terrible. And most of the time, it could have been a Python Gradio interface or a similarly simple implementation, without having to fight so much to make basic things work. Most of what I want Excel to do, it just isn’t efficient enough for. Particularly with LETs and LAMBDAs, it’s gotten quite powerful as a programming paradigm where you can visualize and manipulate your data spatially, in a kind of Logo / NetLogo style, which is really interesting; but the second you reference a few thousand cells a few times, even a solid CPU starts screaming.
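
    For what I mean by a simple Gradio interface, here’s a minimal sketch; the summing function is just a stand-in for whatever the sheet was actually doing.

    ```python
    # Minimal sketch of a Gradio app replacing a small Excel workflow.
    # The function is a placeholder example, not a real use case.

    import gradio as gr

    def total(costs_csv: str) -> float:
        # Sum a comma-separated list of numbers, e.g. "1.5, 2, 3".
        return sum(float(x) for x in costs_csv.split(",") if x.strip())

    demo = gr.Interface(
        fn=total,
        inputs=gr.Textbox(label="Comma-separated values"),
        outputs=gr.Number(label="Total"),
    )

    if __name__ == "__main__":
        demo.launch()
    ```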

    I use Excel for a decent number of tasks and can do some magic with it, but only ever really for work, where it’s easier to share a weird Excel sheet than it is to pass around a Python script (which, given I teach Python, isn’t actually as often as most people experience).


  • But what about those of us in R1C1 mode using lambdas to do recursive cell operations across data pulled from multiple sheets? Am I anywhere near the kinds of Eldritch horrors discussed? I’ve also written indirect references based on sheet name to populate filters from web-scraped tables. I just don’t know how deep the pit goes at this point.


  • For those undiagnosed and wondering about the accuracy of this, let’s play real ADHD bingo. Gather 5 of these, having experienced some form of each for most of your life:

    • Losing and misplacing things very frequently
    • Restlessness, squirming, seeming like you’re motorized
    • Blurting out answers to questions before the questions are completed
    • Lots of thoughtless mistakes, not focusing on details
    • Avoids tasks requiring extended concentration
    • Struggle to wait your turn
    • Overly talkative
    • Forgetting daily activities

    I’ll note, as someone who took a long while to really accept my diagnosis, the unwritten qualifier: and to a distressing degree.

    Like, I didn’t just forget where I put my phone regularly, I’d lose expensive electronics on my ride home from school. I’d regularly forget my backpack on my way to school. I regularly needed replacement keys for my dorm.

    I wasn’t just overly talkative; I’d miss buses constantly because I couldn’t stop talking. I don’t even like people all that much, I just can’t stop. Unless it’s a topic I’m not interested in. Then it’s agony.

    I didn’t just avoid unnecessary things that needed my focus; my heart would race and I’d get aggressive because I needed to (checks notes) copy information from one page over to another… Carefully.

    I wouldn’t just cut someone off to answer them before they finished; I’d get this ringing in my ears and internal screaming, digging my nails into my hands to try and be nice… before cutting them off to answer before they finished anyway, but later than I intended.

    Every day.

    It’s not fun. I’ve spent tens of thousands of dollars on late fees, extensions to my degree because of missed deadlines, and procrastinated dental bills. It’s agonizing. It’s pain. You will know what it is to talk to other people, have them go, “Oh my God, me too! Like sometimes, I clean, and I just don’t stop,” and when you say, “I know, and then I’m just on the ground sweating and crying and feeling like throwing up because I’ve been there for like 3 hours and missed my appointment,” you get the “What’s wrong with you?” look. The ADH is often relatable; the Disorder, I’ve been surprised to learn over the years, often isn’t. I assumed people hid this distress, too.

    Positive note for anyone concerned: medication, therapy, and education are huge helpers. It isn’t perfect; things are just harder and that’s how it is, but they improve. I’m a professor: I have nearly 1000 students and 50 teaching assistants, and every 4 months I need to schedule, effectively, 120+ meetings and put out around 400 documents that must all line up. It’s not hopeless, it’s just hard.


  • Insane compute wasn’t everything. Hinton helped develop the techniques that allowed more data to be processed in more layers of a network without totally losing coherence. Before then it was more of a toy, because it capped out on how much data could be used and how many layers of a network could be trained, and I believe even on whether GPUs could be used efficiently for ANNs, but I could be wrong on that one.

    Either way, after Hinton’s research in ~2010–2012, problems that had seemed extremely difficult to solve (e.g., classifying images and identifying objects in them) became borderline trivial, and in under a decade ANNs went from an almost fringe technology, which many researchers saw as a toy useful for a few problems, to basically dominating all AI research and CS funding. In almost no time, every university suddenly needed machine learning specialists on payroll, and now, about 10 years later, every year we pump out papers and tech that seemed many decades away… every year… across a very broad range of problems.

    The GTX 580 and CUDA made a big impact, but Hinton’s work was absolutely pivotal in being able to utilize them, and in even making ANNs seem feasible at all, and it was an overnight thing. Research very rarely explodes this fast.

    Edit: It’s also worth clarifying that Hinton was one of the few researching these techniques in the 80s and has continued to be a force in the field, so these big leaps are the culmination of a lot of old, but also very recent, work.


  • Lots of good comments here. I think there are many reasons, but AI in general is being quite hated on. It’s sad to me - pre-GPT, I literally researched how AI can be used to help people be more creative and support human workflows, but our pipelines around the AI are lacking right now. As for the hate, here are a few perspectives:

    • Training data is questionable/debatable ethics,
    • Amateur programmers don’t build up the same “code muscle memory”,
    • It’s being treated as a sole author (generate all of this code for me) instead of like a ping-pong pair programmer,
    • The time saved writing code isn’t being used to review and test the code more carefully than it was before,
    • The AI is being used for problem solving, where it’s not ideal, as opposed to code-from-spec where it’s much better,
    • Non-Local AI is scraping your (often confidential) data,
    • Environmental impact of the use of massive remote LLMs,
    • Can be used (according to execs, anyways) to replace entry level developers,
    • Devs can have too much faith in the output because they have weak code review skills compared to their code writing skills,
    • New programmers can bypass their learning and get an unrealistic perspective of their understanding; this one is the most egregious to me as a CS professor, where students and new programmers often think the final answer is what’s important and don’t see the skills they strengthen along the way to the answer.

    I like coding with local LLMs and asking occasional questions of larger ones, but the code on larger code bases (with these small, local models) is often pretty nonsensical, though it improves with the right approach: provide it documented functions and examples of a strong, consistent code style, write your test cases in advance so you can verify the outputs, and use it as an extension of IDE capabilities (like generating repetitive lines) rather than a replacement for your problem solving.
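
    As a sketch of that test-first flow (slugify here is a hypothetical example, not from any real project): write the tests yourself before prompting the model, then run them against whatever it generates.

    ```python
    # Test-first workflow for LLM-generated code: the tests are written by
    # hand in advance; imagine the function body came from the model.

    import re

    def slugify(title: str) -> str:
        # Hypothetical LLM-generated implementation under review.
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
        return slug.strip("-")

    # Hand-written cases, defined before generating the implementation.
    def test_slugify():
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  spaces   everywhere ") == "spaces-everywhere"
        assert slugify("Already-a-slug") == "already-a-slug"

    if __name__ == "__main__":
        test_slugify()
        print("all tests passed")
    ```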

    I think there are a lot of reasons to hate on it, but I think that’s because the ways to use it effectively are still being figured out.

    Some of my academic colleagues still hate IDEs because, to them, tab completion, fast compilers, in-line documentation, and automated code linting mean you don’t really need to know anything or follow any good practices, since your editor will do it all for you, so you should just use vim or Notepad. It’ll take time to adopt and adapt.


  • As someone who researched AI pre-GPT to enhance human creativity and aid creative workflows, it’s sad for me to see the direction it’s been marketed in, but I’m not surprised. I’m personally excited by the tech because I see a really positive place for it where the data usage is arguably justified, but we need to break through the current applications, which seem more aimed at stock prices and wow-factoring the public than at using these models for what they’re best at.

    The whole exciting part of these models was that they could convert unstructured inputs into natural language and structured outputs: translation tasks (broad definition of translation), extracting key data points from unstructured data, language tasks. They’re outstanding for the NLP tasks we struggled with previously, and these tasks are highly transformative of any inputs, relying purely on structural patterns. I think few people would argue NLP tasks infringe on the copyright owner.
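
    As a sketch of the structured-extraction case (call_llm and extract_contact are hypothetical stand-ins, not any particular library’s API):

    ```python
    # Minimal sketch of "unstructured input -> structured output".
    # call_llm is a placeholder; wire it to a local model or hosted API.

    import json

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("route this to your model of choice")

    def extract_contact(free_text: str) -> dict:
        """Pull key data points out of unstructured text as JSON."""
        prompt = (
            "Extract the person's name, email, and requested meeting date "
            "from the text below. Reply with JSON only, using the keys "
            '"name", "email", and "date".\n\n' + free_text
        )
        return json.loads(call_llm(prompt))
    ```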

    But I can at least see how moving the direction toward using Q&A data to support generating Q&A outputs, media data to support generating media outputs, and code data to support generating code (particularly with MoE approaches) moves into the territory of affecting sales and using someone’s IP to compete against them. From a technical perspective, I understand how LLMs are not really copying, but the way they are marketed and tuned seems more and more intended to use people’s data to compete against them, which is dubious at best.


  • Not to fully argue against your point, but I do want to push back on the citations bit. Given the way an LLM is trained, it’s not really equivalent to me citing papers researched for a paper. It would be more akin to asking me to cite every piece of written or verbal media I’ve ever encountered, as they all contributed in some small way to the way the words were formulated here.

    Now, if specific data were injected into the prompt, or maybe if the model were fine-tuned on a small subset of highly specific data, I would agree those sources should be cited, as they are being accessed more verbatim. The whole “magic” of LLMs was that they needed to cross a threshold of data, combined with the attention mechanism, before the network was pretty suddenly able to maintain coherent sentence structure. It was only with loads of varied data from many different sources that this really emerged.
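
    To sketch the injected-data case, where citation is straightforward because the sources are known at query time (retrieve and call_llm are hypothetical stand-ins):

    ```python
    # Minimal sketch of citing data injected into the prompt. Training
    # data can't be attributed this way, but retrieved context can.

    def retrieve(question: str) -> list:
        # Placeholder: return (source_id, passage) pairs from your own store.
        return [("smith2021", "passage text..."), ("jones2023", "passage text...")]

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("route this to your model of choice")

    def answer_with_citations(question: str) -> str:
        context = "\n".join(f"[{sid}] {text}" for sid, text in retrieve(question))
        prompt = (
            "Answer using only the passages below, citing source IDs in "
            "brackets.\n\n" + context + "\n\nQuestion: " + question
        )
        return call_llm(prompt)
    ```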


  • My guess was that they knew gaming was niche and were willing to invest less in this headset and more in spreading the idea that “Spatial Computing” is the next paradigm for work.

    I VR a decent amount, and I really do like it a lot for watching TV and YouTube, and I’m toying with using it a bit for work-from-home, where the shift in environment is surprisingly helpful.

    It’s just limited. Streaming apps aren’t very good, there’s no great source for 3D movies (which are great, when Bigscreen had them anyway), headsets are still a bit too hot and heavy for long-term use, the game library isn’t very broad, there haven’t been many killer-app games/products that distinguish it from other modalities, and it’s going to need a critical amount of adoption to get used in remote meetings.

    I really do think it’s huge for giving a sense of remote presence, and I’d love to research how VR presence affects remote collaboration, but there are so many factors keeping it tough to buy into.

    They did try, though, and I think they’re on the right track. Facial capture for remote presence and hybrid meetings, extending laptop monitors to give more privacy and flexibility, strong AR to reduce the need to take the headset off - but they’re selling the idea first, and then maybe there will be a break. I’ll admit the industry is moving much slower than I anticipated back in 2012 when I was starting VR research.


  • My two cents: after years of Markdown (and md-to-PDF solutions) and LaTeX, and a full two years of trying to commit to bashing my head against Word for work purposes, I’m really enjoying Typst. It didn’t take long to convert my themes, and having docs I can import, which are basically just variables to share across documents in a folder, has been really helpful. I haven’t gone too deep into it yet, but I’m excited to give it a deeper test run over the next little bit.