Allow me to propose some thoughts on the matter. I believe, as was previously mentioned, that the web of biological and non-biological systems in our lives will become increasingly interwoven. Uploading in the truest sense of the word is something I am quite interested in, and I will participate as soon as technology allows. If the opportunity presents itself in no other fashion save guinea-pig status, then so be it; I will offer myself as fodder for scientific progress, while hopefully becoming one of the first individuals to experience multiple forms of consciousness [:o]
My ultimate goal is omnipotence, and uploading and various other forms of non-biological augmentation are a beginning to my path of exploration into the deepest realms of the cosmos. It only seems reasonable to start with the most profound inner machinations of my own consciousness.
You're wonderfully honest. I like that. The power of a transhuman future is incredibly appealing, and the thought of being able to supplement your intelligence with vast knowledge and spread out across the stars is amazing. I think that to many people, though, omnipotence is a pretty threatening term. I personally want to live safely backed up and redundantly stored in an uploaded form thousands of years from now... but omnipotence? That would imply, among many other things, the capability to hurt or destroy others. That's one thing I don't support, though I'm not saying you do.
On that note, one thing I don't think is very well mapped out is how an uploaded society would function. Though it's definitely getting ahead of ourselves, I'd like to see what we could come up with. It would, of course, have to combine elements of hardware, software, and userspace management. To design a system robust enough to support millions of people would be quite a challenge.
I'm assuming two things. One, artificial intelligence produces the capability for sentient copies of sentient creatures to be made, and an accurate test for sentience exists that, by checking certain points in a complex program, can determine whether it's sentient or not. (That's a big assumption, but one I hope is true.) Two, artificial intelligence will produce non-sentient but highly intelligent 'robots' that can be made to perform tasks (without suffering or tedium, with no more sentience than a word processor) that we'd rather not do, or just aren't fit to do.
I think first every upload would start in their own basic environment, with almost unlimited privileges within that area and their computing allowance. Each realm or reality or whatever you want to call it would have to be completely customizable by the user. There would be one limit on what you could do within your world. More on that in a bit.
I think first off there would be a distinction between the sensorium and the consciousness of the uploaded mind. The sensorium would be basically like a monitor of very high resolution with multiple senses. People should have the capability to program their own user interface modifications to the 'monitor', and control their inputs and outputs completely. To visit another person, you would give them a knock or a message or, if prior permissions had been worked out, just pop right into their sensorium. Your sensorium would draw their environment relative to yours, much as the real world works right now when you're in a room with someone. While the projection and the senses, and thus the consciousness, of the person move around, it would 'really' always be safe within the 'home' directory or environment of the process of your mind.
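The knock-or-pop-in visiting model could be sketched as a simple permission check. This is purely illustrative; every name here (Sensorium, knock, grant, enter) is invented for the sketch, not part of any existing system.

```python
# Hypothetical sketch of the visiting-permissions model described above.
# All class and method names are invented for illustration.

class Sensorium:
    """A citizen's sensory interface; the mind itself never leaves its home environment."""

    def __init__(self, owner):
        self.owner = owner
        self.granted = set()   # citizens with standing permission to pop right in
        self.pending = []      # knocks awaiting a response from the owner

    def knock(self, visitor):
        """Request entry; visitors with prior permission skip straight to entering."""
        if visitor in self.granted:
            return self.enter(visitor)
        self.pending.append(visitor)
        return "knock delivered"

    def grant(self, visitor):
        """Owner gives standing permission, so future visits skip the knock."""
        self.granted.add(visitor)
        if visitor in self.pending:
            self.pending.remove(visitor)

    def enter(self, visitor):
        # The visitor's sensorium draws this environment relative to theirs;
        # neither mind's process is ever exposed to the other.
        if visitor not in self.granted:
            return "access denied"
        return f"{visitor} projected into {self.owner}'s environment"
```

The key design point is that entry is always mediated: there is no call a visitor can make that reaches the owner's process directly, only requests the owner's side chooses to honor.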
This is very telling philosophically of an uploaded world. It would probably be impossible to damage another person in the virtual world. If all else fails, even if you are shouting at them, they can just ignore you. If they don't give you access to whatever environment they are in, you can't get to them. This prevents any violent crime from being perpetrated against anyone. Some days, you may not even feel like doing collision detection. Just walk straight through the crowd. Maybe that would be a faux pas, in the future.
Objects, in a digital environment, could possibly be hyperlinked much in the same way links are drawn nowadays; just attach a text address to the object and it can jump you right to it. You could program (or merely install) certain aspects into your sensorium. Say an object you come across has a book linked to it; your user interface could be set up to automatically clone a real paper-bound-feeling book, with the text. Or to drop it in your knapsack, or to just put it up on a panel invisible to everyone else in the environment, a few feet in front of your head.
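The hyperlinked-object idea amounts to attaching a text address to anything and letting each person's interface decide how to render what it points to. A minimal sketch, with every name (LinkedObject, UserInterface, the `.book` address) invented for illustration:

```python
# Hypothetical sketch of object hyperlinking: attach a text address to an
# object, and let the user's own interface decide what activating it does.

class LinkedObject:
    def __init__(self, name, link=None):
        self.name = name
        self.link = link              # a text address, much like a URL

class UserInterface:
    def __init__(self):
        self.handlers = {}            # content type -> personal rendering choice

    def set_handler(self, content_type, handler):
        """e.g. render linked books as paper-feeling clones, or shelve them."""
        self.handlers[content_type] = handler

    def activate(self, obj):
        if obj.link is None:
            return None               # ordinary object, nothing linked
        content_type = obj.link.rsplit(".", 1)[-1]
        # Fall back to a plain jump if the user set no preference for this type.
        handler = self.handlers.get(content_type, lambda link: f"jump to {link}")
        return handler(obj.link)
```

The point of the indirection is that the same linked object behaves differently per viewer: one person's interface clones a paper book, another's drops it in a knapsack, a third just jumps.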
Users should be able to tinker with their brains however they want, though this would be viewed as extremely dangerous. Users should also be able to reprogram their basic sensory apparatus to operate however they want.
Knowledge bases should be ready for direct import or more traditional learning.
The most important part is that no other citizen in the whole society should have any power over any other citizen in any real, physical way. No one should be able to delete anybody else, no one should be able to force sensorium elements onto anyone else.
The only limitations I can see are to ensure that all citizens get an equal amount of processing time. This is a toughie. While many worlds probably wouldn't require much simulation short of what needs to be broadcast to the sensorium, I guess it really depends on how much processing power is available. If the computer continually builds itself as it needs more distributed processing, then maybe it wouldn't be an issue. As it stands, everyone would have to deal with absolutely the same resources, in the absence of any life-threatening resource concerns. Of course, before the whole world goes into computers and people are still buying their way into digital environments, they'll buy their rights and powers and spaces in specialized systems. But when all the resources members of the computing world could ever want are produced by 9 redundant fusion reactors maintained basically for free by robots and non-sentient but intelligent-enough-to-run-it artificial intelligence? Without power to exert over anybody or need for anything, I think economic systems fall apart too. This means war, murder, governments, economics (at least as it stands today; I suppose there would still be a knowledge economy, or an experience economy... I think a lot of study would have to go into it to really detail this one), involuntary death, taxes, rape... all would be 'obsolete' and impossible.
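The equal-processing rule itself is the simple part: divide whatever capacity exists evenly, so shares rise in lockstep as the computer builds itself out. A minimal sketch, with the function name and numbers invented for illustration:

```python
# A minimal sketch of the equal-processing-time rule: every citizen gets an
# identical slice of total capacity, and no one can buy or seize more.

def allocate(total_cycles, citizens):
    """Split available cycles exactly equally among all citizens."""
    if not citizens:
        return {}
    share = total_cycles // len(citizens)
    return {citizen: share for citizen in citizens}

# As the hardware expands, everyone's share grows together:
#   allocate(9_000, ["a", "b", "c"])  -> 3_000 cycles each
#   allocate(18_000, ["a", "b", "c"]) -> 6_000 cycles each
```

The hard questions the paragraph raises (new citizens diluting shares, idle worlds donating cycles, bought-in powers during the transition era) are policy on top of this, not changes to the baseline rule.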
There are a couple of opportunities for crime, though... hopefully solutions can be found.
A citizen could, within his own environment, take the code that created him and produce another person, whom he could exert complete control over. This is madness and terribly frightening. This is why a comprehensive test for sentience would have to be developed; it could test programs being run in userspace for sentience at predefined intervals and automatically grant citizenship to any new citizens created. Possibly, if resources are very limited, it could keep a program designed in the same way as a program that could develop sentience from running at all (provided reproduction in the first place is under current restriction). This kind of reeks of the halting problem, but maybe a specialized system for solving this particular case could pick out certain actions and systems behaviour necessary for it. It would probably be a huge list of criteria.
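The audit loop around that test is simple to sketch, even though the test itself is the big open assumption. Everything here is invented for illustration, and `is_sentient` is exactly the assumed-to-exist oracle from earlier, passed in as a function:

```python
# Hypothetical sketch of the periodic sentience audit described above. It
# leans on the post's big assumption: a reliable is_sentient() test exists.

def audit(processes, citizens, is_sentient):
    """Scan userspace at a predefined interval; automatically grant citizenship
    to any program found sentient, so no creator can hold power over it."""
    newly_granted = []
    for proc in processes:
        if proc not in citizens and is_sentient(proc):
            citizens.add(proc)       # citizenship = full rights + an equal share
            newly_granted.append(proc)
    return newly_granted
```

The word processor from the earlier assumption never trips the test, while a copied mind is pulled out of its creator's control on the very next scan; the halting-problem worry lives entirely inside `is_sentient`, not in this loop.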
The second problem is expansion. A colony under these rules would certainly have to have a window to the outside world. It may have to migrate galaxies to survive, or establish redundant backup systems in a different solar system, or any number of things. It would have nano-factories connected to it, but how do we decide who runs them? Everyone can't at once, and we wouldn't want everybody to be able to produce anything as a window into the real world. I imagine the colony would already have quite a few inlets into the real world, with telepresence robots and telescope views set up as environments to walk around in. Reaching outside of the internal world of the machine carries inherent risks. While the colony would be redundant and somehow protected, we wouldn't want every resident creating a nanomachine to try to vandalize the computing machines. But I am not comfortable with just making a list of people who are free to get out of the simulation; this seems horribly unfair. I don't see an adequate solution to this problem yet. Perhaps a non-sentient but intelligent Systems Administrator would have access to the upper level of everyone's minds, and be able to tell without a doubt what a person's intentions are: the ultimate Truth Machine. People who want to destroy the colony would still be allowed to use the machines, but not to produce anything that could be used to destroy it. It would have to be one hell of an expert system, though. I would have my doubts for a long time.
Any other suggestions or solutions? I would love to put together a document detailing what work (that we can immediately understand and work out) needs to be done. After all, it is new ground: the society of the future. We need a way for people to find each other, and community areas to socialize... I begin to worry about it as the population in concept approaches the millions and millions of personalities.