JuliusCaesar wrote: Lol, there are so many things wrong with that statement.
Ethics: I'm not even going to get into it. Check Scientific American's website; I'm pretty sure they ran a piece on the difficulties of programming ethics.
Practicality: the technology won't make this doable for a while yet.
Social Implications: it took the Catholic Church hundreds of years to apologize to Galileo. It will take even longer before anyone capable of implementing this actually tries it.
I don't have much else to say, other than that we will long since have been subjugated by China or Nazis, or sent back to the Stone Age by a WWIII fought over oil, before this happens.
You (of course, what did anyone expect) misunderstand.
I'm trying to explain that the government I'm advocating has no personal bias and is completely and utterly rational. Or at least as rational as is possible using the most modern technology.
If you have the "perfect" self-correcting, adapting system (possibly even containing humans) that will carry out a set goal in the most efficient way possible, then it is simply a matter of giving this system a goal. The system won't have the obvious flaws we associate with any known governmental system: no actual politics, no backstabbing, no favoring the rich, the old, the powerful, ... Just one will, one goal.
We actually already have a few elements that will be useful in this process. Meritocracies and companies have shown us how to regulate the human element. Not sufficiently, but it's a matter of fine-tuning.
BTW, robots have made scientific discoveries, are about to run armies, design bridges, run financial and social simulations, can be soldiers, are more efficient at manufacturing than humans, ...
If you want something done right, you extend the human will by artificial means. Yet strangely enough, we're still in the Bronze Age with our political system.
BTW, stop arguing that I'm wrong simply because it won't happen. That doesn't make any sense.