This may be an Eliezer-type question; it may be unanswerable, it may not. You guys tell me.
Everyone involved in Seed AI-style projects seems to assume that these computers will be able to make moral or ethical distinctions. Yes/no?
If so, what are the mores or ethics they will use? How are they applied?
I understand that Bayes' Theorem is the core mathematical concept, but can you walk me through an example of how it would be used? For instance: is it beneficial for a family of two drivers to have three vehicles, with all the subtexts and underlying considerations that can arise from such a question?
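To be concrete about what I mean by "used": here is a minimal sketch of a single Bayesian update on a yes/no question like the one above. Every number in it is invented for illustration; the hypothesis H and evidence E are my own made-up placeholders, not anything from a real system.

```python
# A single application of Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E).
# H: "a third vehicle is beneficial for this family"
# E: "the two drivers frequently have scheduling conflicts over the cars"
# All probabilities below are invented for illustration only.

def bayes_update(prior, likelihood, evidence_prob):
    """Return the posterior P(H|E) given P(H), P(E|H), and P(E)."""
    return likelihood * prior / evidence_prob

prior = 0.5              # P(H): belief before seeing the evidence
p_e_given_h = 0.8        # P(E|H): conflicts are likely if a third car would help
p_e_given_not_h = 0.3    # P(E|~H): conflicts are less likely otherwise

# Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

posterior = bayes_update(prior, p_e_given_h, p_e)
print(round(posterior, 3))  # the evidence raises the belief above the 0.5 prior
```

Is that roughly the kind of calculation a Seed AI would be running over moral questions, just with many more hypotheses and evidence streams?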
Thanks in advance,
Discarnate