What is the point of AI safety if these systems have no intent to pursue goals? Why would we need to align them with our goals if they weren’t able to form goals and subgoals of their own?
Saying it’s just a “stochastic parrot” reflects an outdated understanding of how modern LLMs actually work. I obviously can’t convince you of something you don’t believe, but I’m hoping you can keep an open mind in future instead of rejecting the premise outright, the way early proponents of the scientific method like Descartes rejected the idea that animals could ever be considered intelligent or conscious because they were merely biological “machines”.
human rights are not just based on what the law says…
if a community had an unwritten custom that everyone could access the village well, and then a new ruler passed a law restricting certain people from the well, would you say no rights were violated since well access was never legally codified?
unless of course you’re a feudalist/capitalist who supports the enclosure of the commons or the Highland “clearances”…