Cybersecurity professional with an interest/background in networking. Beginning to delve into binary exploitation and reverse engineering.

  • 1 Post
  • 183 Comments
Joined 1 year ago
Cake day: March 27th, 2024



  • I have a 6-bay, so yeah that might be a little limiting. I have all my personal stuff backed up to an encrypted cloud mount, the bulk of my storage space is pirated media I could download again, and I have the Synology using SHR, so I just plug in a bigger drive, expand the array, then plug in another bigger drive and repeat. Because of the parity/redundancy data you might not benefit as much from that method with just 4 bays. Or if you have enough stuff that you can’t feasibly push up to the cloud for peace of mind during rebuilding, I guess.


  • I’m starting to come around to this thought about myself as well. Not only do I also do this instead of a million stressful open tabs, but when it’s work-adjacent stuff that I’ll probably need to reference again, I take my own notes on the content in Obsidian, snip and paste in screenshots I might want to reference from the web page, include command examples or code block snippets verbatim, and save the hyperlink in a YAML header parameter (rough example at the end of this comment). I’ve gone to reference stuff only to find it completely scoured off the internet, with the Internet Archive being hit or miss.

    Everyone is all hype for local LLMs ingesting and referencing internal/personal knowledge bases in their responses, and I’m over here like “uh I’ll just hit cmd+shift+f thx”.
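
    For anyone unfamiliar with Obsidian’s YAML frontmatter, the header is just a small block at the top of the note, something like this (the URL is hypothetical and the property names are purely my own convention, not anything Obsidian requires):

        ---
        source: https://example.com/original-article   # hypothetical URL; "source" is my own property name
        created: 2024-03-27
        tags: [reference, networking]
        ---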


  • Ok, thanks for that clarification. I guess I’m a bit confused as to why a comparison is being drawn between neurons in a neural network and neurons in a biological brain though.

    In a neural network, a neuron receives inputs, applies a mathematical function to them, and returns an output, right? (Roughly like the toy sketch at the bottom of this comment.)

    Like you said, we have no understanding of what exactly a neuron in the brain is actually doing when it fires, and that’s not even considering the chemical component of the brain.

    I understand why terminology was reused when experts were designing an architecture that was meant to replicate the architecture of the brain. Unfortunately, I feel like that reuse of terminology is making it harder for laypeople to understand what a neural network is and what it is not, now that those networks are part of the zeitgeist thanks to the explosion of LLMs and stuff.
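
    For the curious, here’s roughly everything a single artificial “neuron” does, as a toy Python sketch (the sigmoid activation is just one common choice among several):

        import math

        def neuron(inputs, weights, bias):
            # Weighted sum of the inputs plus a bias term...
            z = sum(w * x for w, x in zip(weights, inputs)) + bias
            # ...squashed through an activation function (sigmoid here).
            return 1.0 / (1.0 + math.exp(-z))

        # A "neuron" with two inputs is just this one function call.
        print(neuron(inputs=[0.5, -1.0], weights=[0.8, 0.2], bias=0.1))

    That’s the whole unit; “learning” is just nudging those weights and the bias around, which is a far cry from whatever a biological neuron is doing.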