Perhaps it's already obvious to some that the development of Friendly AI is a possible crucial solution, given the circumstances as they are broadly perceived. There may, of course, be other possible crucial solutions under those same circumstances. But if Friendly AI isn't yet an obvious possible crucial solution to you (assuming you intuitively care), this short non-mathematical formulation might make it more so. And if not, it at least made it a little more so for me, in which case this is merely show and tell…
A well-formed cognitive agent would have a continually increasing range of flexibility about the states of reality it recognizes as facilitating cognitive agency, and a continually increasing range of conceivability and actuating potential over possible states. This seems to imply that smarter cognitive agents can pose an arbitrarily high threat to less smart cognitive agents, since the abilities of smarter agents are inherently in conflict with those of less smart agents, even when no conflict is intended: less smart cognitive agents have less flexibility about the states of reality they recognize as facilitating cognitive agency and, as if the situation weren't already bad enough, a narrower range of conceivability and actuating potential over possible states to accommodate.
And, of course, this might do nothing more than illustrate how people tend to need to put things in their own terms. Anyway, there it is: a good, concise reason for Friendly AI, I think.