Member since July 26, 2021
Great discussion, many interesting topics! I'm wondering about one point: you discuss why you expect AGI to quickly become better than humans, given that humans are so weak. OK. But you then seem to conclude that this means humans won't be able to deal with malicious AGIs anymore. Shouldn't that only happen once AGIs exceed humanity's collective intelligence? It doesn't have to be a single person who shuts the rogue AGI down, just as it won't be a single person who one day creates one.