One of the problems with the ‘alignment problem’ is that one camp disregards a large share of the possible alignment issues, caring only about theoretical extinction-level events rather than harms that are already occurring, such as bias. This also generates massive amounts of critihype.
https://en.wikipedia.org/wiki/AI_alignment
I genuinely think the alignment problem is a really interesting philosophical question worthy of study.
It’s just not a very practically useful one when real-world AI is still very far from any meaningful AGI.