Humans are naturally fallible, and even smart ones are prone to snap judgments based on partial or biased information, or to emotional entanglement
So let's create a decision-making AI - we can teach it how to govern fairly and it will make the best decisions
But wait - any AI we create is naturally going to be programmed from our own limited point of view, and may end up making a weird choice due to a programming flaw or an edge case it wasn't trained to handle
So what we do is create two more AIs, with slightly different parameterizations and randomized training scenarios
The three AIs will act as checks and balances on one another
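Here's a minimal sketch of how that checks-and-balances step could work, assuming each "AI" is just a stand-in object with its own random seed and parameterization, and a decision only goes through when a majority independently agrees - every name and number below is hypothetical:

```python
import random
from collections import Counter

class GovernorAI:
    """One decision-maker with its own seed and parameterization (hypothetical stand-in)."""
    def __init__(self, seed, quirkiness):
        self.rng = random.Random(seed)
        self.quirkiness = quirkiness  # how much this AI's scoring deviates from the baseline

    def decide(self, options):
        # Placeholder policy: score each option, with noise standing in for this AI's quirks
        scored = {name: value + self.rng.gauss(0, self.quirkiness)
                  for name, value in options.items()}
        return max(scored, key=scored.get)

def govern(ais, options):
    """Checks and balances: act only when a majority of the AIs independently agree."""
    votes = Counter(ai.decide(options) for ai in ais)
    choice, count = votes.most_common(1)[0]
    return choice if count >= 2 else None  # no consensus -> defer the question instead of acting

ais = [GovernorAI(seed=s, quirkiness=q) for s, q in [(1, 0.2), (2, 0.5), (3, 0.8)]]
print(govern(ais, {"raise_taxes": 0.4, "cut_spending": 0.6}) or "no consensus")
```

The point isn't the voting math, it's that when the three disagree the question gets kicked back rather than decided by any single flawed system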
We can even try to have each one embody a different aspect of how humans approach problems and make decisions, using something like Jungian archetypes to help choose among difficult tradeoffs
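As a toy illustration of the archetype idea: imagine giving each AI a different weighting over a few decision dimensions, loosely echoing an archetype, and scoring tradeoffs through each lens - the archetype names, dimensions, and weights here are entirely made up:

```python
# Hypothetical archetype "lenses": each weights the same tradeoff dimensions differently.
ARCHETYPES = {
    "Ruler":     {"stability": 0.6, "fairness": 0.3, "innovation": 0.1},
    "Caregiver": {"stability": 0.2, "fairness": 0.7, "innovation": 0.1},
    "Explorer":  {"stability": 0.1, "fairness": 0.2, "innovation": 0.7},
}

def score(option_traits, weights):
    """Weighted sum of an option's traits as seen through one archetype's lens."""
    return sum(weights[k] * option_traits.get(k, 0.0) for k in weights)

def choose(options):
    """Each archetype picks its favorite; where they split is where the hard tradeoff lives."""
    return {name: max(options, key=lambda o: score(options[o], w))
            for name, w in ARCHETYPES.items()}

options = {
    "policy_a": {"stability": 0.9, "fairness": 0.4, "innovation": 0.2},
    "policy_b": {"stability": 0.3, "fairness": 0.5, "innovation": 0.9},
}
print(choose(options))  # a split vote here makes the tradeoff explicit
```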