I think it would be an exaggeration to say Eliezer doesn't accept the possibility of a non-volitional Friendly AI. If I understand him correctly, the volition scenario only applies if there is no objective morality; and even if there isn't, and even if he believes (for reasons I don't fully understand and don't fully agree with) that volitional ethics would be justified in that case, he certainly comes across to me as willing to accept that he might be wrong. (In fact, his is the only strategy for creating a moral AI that I know of that is explicitly built around that possibility.)
Reading his earlier stuff, which he no longer agrees with, should make it clear that he's at least capable of thinking such thoughts.
He does de-emphasize non-volitional possibilities for ethics and morality a lot, but that seems only logical considering the reactions you mention (and the fact that he does not believe those possibilities to be true).
OK time for some quotes methinks:
The Sysop Scenario also makes it clear that individual volition is one of the strongest forces in Friendliness; individual volition may even be the only part of Friendliness that matters - death wouldn't be intrinsically wrong; it would be wrong only insofar as some individual doesn't want to die. Of course, we can't be that sure of the true nature of ethics; a fully Friendly AI needs to be able to handle literally any moral or ethical question a human could answer, which requires understanding of every factor that contributes to human ethics. Even so, decisions might end up centering solely around volition, even if it starts out being more complicated than that.
(Note also that an AI with shaper semantics cannot nonconsensually change the programmer's brain in order to satisfy a shaper. Shapers are not meta-supergoals, but rather the causes of the current supergoal content. Supergoals satisfy shapers, and reality satisfies supergoals; manipulating reality to satisfy shapers is a non-sequitur. Thus, manipulating the universe to be "morally symmetric", or whatever, is a non-sequitur in the first place, and violates the volition-based Friendliness that is the output of moral symmetry in the second place.)
All that is required is that the initial shaper network of the Friendly AI converge to normative altruism. Which requires all the structural Friendliness so far described, an explicit surface-level decision of the starting set to converge, prejudice against circular logic as a surface decision, protection against extraneous causes by causal validity semantics and surface decision, use of a renormalization complex enough to prevent accidental circular logic, a surface decision to absorb the programmer's shaper network and normalize it, plus the assorted injunctions, ethical injunctions, and anchoring points that reduce the probability of catastrophic failure. Add in an initial, surface-level decision to implement volitional Friendliness so that the AI is also Friendly while converging to final Friendliness...
And that is Friendly AI.
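(As an aside: the shaper/supergoal relationship in the second quote is concrete enough to sketch in code, and doing so may make my point below clearer. The following is purely my own toy illustration in Python, not anything from CFAI; every name in it is invented. What it tries to encode: shapers cause supergoal content, but evaluation of reality only ever references supergoals, so "manipulating reality to satisfy a shaper" isn't even expressible in the system.)

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Supergoal:
    name: str
    # Scores how well a world-state satisfies this goal (higher is better).
    satisfaction: Callable[[dict], float]

@dataclass
class Shaper:
    """A cause of supergoal content, e.g. 'moral symmetry'."""
    name: str

    def generate_supergoals(self) -> List[Supergoal]:
        # In this toy, moral symmetry outputs volition-based Friendliness:
        # a supergoal penalizing world-states containing nonconsensual acts.
        return [Supergoal(
            name="volitional Friendliness",
            satisfaction=lambda world: -float(world.get("nonconsensual_acts", 0)),
        )]

class ToyAgent:
    def __init__(self, shapers: List[Shaper]):
        # Supergoal content is *caused by* the shaper network...
        self.supergoals = [g for s in shapers for g in s.generate_supergoals()]

    def evaluate(self, world: dict) -> float:
        # ...but evaluation of reality only ever consults supergoals.
        # There is deliberately no code path from a shaper to this method,
        # which is the structural sense in which "manipulating reality to
        # satisfy shapers is a non-sequitur".
        return sum(g.satisfaction(world) for g in self.supergoals)

agent = ToyAgent([Shaper("moral symmetry")])
print(agent.evaluate({"nonconsensual_acts": 2}))  # -2.0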
The second quote clearly states that volition-based Friendliness is the output of moral symmetry. There is no 'might be' here. More importantly, the third quote seems to say that volitional Friendliness is to be included as an initial component of the system. This does require definition and coding.
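For what it's worth, here is roughly what that initial, surface-level implementation might look like. Again, this is a toy sketch of my own in Python with invented names (Action, violates_volition, choose_action), not a real definition; the hard part, deciding who counts as affected and what counts as consent, is exactly what gets elided here.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Action:
    description: str
    affected_people: List[str] = field(default_factory=list)
    consenting_people: List[str] = field(default_factory=list)

def violates_volition(action: Action) -> bool:
    # Surface-level definition: an action is impermissible if it affects
    # anyone who has not consented to it.
    return any(p not in action.consenting_people
               for p in action.affected_people)

def choose_action(candidates: List[Action]) -> Action:
    # While the shaper network is still converging to final Friendliness,
    # this interim volition filter applies to every decision the AI makes.
    permitted = [a for a in candidates if not violates_volition(a)]
    if not permitted:
        raise RuntimeError("no volition-respecting action available")
    return permitted[0]  # real selection among permitted actions elided

Even as a placeholder while the system converges, volitional Friendliness has to exist as actual decision-filtering code, which is the point.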
Q