  LongeCity
              Advocacy & Research for Unlimited Lifespans





Why Friendly AI?


54 replies to this topic

#1 Kalepha

  • Guest
  • 1,140 posts
  • 0

Posted 30 March 2006 - 09:07 PM


Perhaps it's already obvious to some that the development of Friendly AI is a possible crucial solution, given the circumstances as they are perceived from some broad perspectives. Of course, there may be other possible crucial solutions, given the circumstances… etc. However, if Friendly AI isn't yet an obvious possible crucial solution (to you, if you intuitively care), this short non-mathematical formulation might make it more so in some cases. If not, it at least made it a little more so for me, and this would merely be show and tell…

A well-formed cognitive agent would have a continually increasing range of flexibility about the states of reality it recognizes as facilitating cognitive agency, and a continually increasing range of conceivability and actuating potential over possible states. This seems to imply that smarter cognitive agents can pose an arbitrarily high threat to less smart cognitive agents, since the abilities of smarter cognitive agents are inherently in conflict with those of less smart cognitive agents, even if this is not the intention: less smart cognitive agents have less flexibility about the states of reality they recognize as facilitating cognitive agency and, as if the situation weren't already bad enough, a smaller range of conceivability and actuating potential of possible states to accommodate.

And, of course, this might be nothing more than an instance of how people tend to need to put things in their own terms. Anyway, there it is: a good, concise reason for Friendly AI, I think.

#2 mitkat

  • Guest
  • 1,948 posts
  • 13
  • Location:Toronto, Canada

Posted 30 March 2006 - 10:01 PM

I don't read much on AI, mainly because I am engrossed in my own studies and it is very far removed from what I do. Why is there even a term "friendly AI"? It seems redundant in a way; it should be an absolute given that it would be benign. Unless it was developed by the military, I suppose... would you say that exists at the other end of the spectrum of AI possibilities, or is it too multi-faceted to explain in terms of polarization?

The only thing I'm sure of in this respect is that intelligent creatures always hurt those less intelligent than them, no matter how unintentional or arbitrary those actions may seem. Are there actual alternatives to unfriendly AI worth discussing?


#3 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 30 March 2006 - 11:11 PM

Okay, so that is a non-obvious reason for Friendly AI. ;))

More intelligent creatures don't always hurt less intelligent creatures, e.g., many humans. The sketch was intended to show that a well-formed cognitive agent doesn't necessarily have human motivations and could have a random way of choosing how it operates. After all, it is very flexible, with a lot of actuating potential. To those who are less smart, it can do whatever the heck it wants (to those who are as smart, whatever the heck it chooses), including being mindful of the goals, or the extrapolated volition (i.e., what we would do if we were more the persons we wished we were, etc.), of humans, or at least getting completely out of their way, all without crying for civil rights.

Since humans presumably would be less smart, or less able to keep up with AGI developments even with IA (intelligence augmentation), they presumably would not want to experience the inherent, possible conflict. This desire is a very tiny point in an unFriendly AI's inherently negligent (maliciously purposive, if that's what causal motivations, like those in humans, intended before succeeding) possible-state space. Presumably, it's easier to create unFriendly AI than it is to create Friendly AI. Friendly AI is more complex because, while it might have the same capabilities as unFriendly AI, somehow it needs to hit the sub-atomic bull's-eye of human desires.

The term seems necessary because 'AI' connotes narrow AI. Friendly and unFriendly connote Artificial General Intelligence, or perhaps strong AI. And Friendly AI is, unfortunately, far from being an absolute given.

#4 Live Forever

  • Guest Recorder
  • 7,475 posts
  • 9
  • Location:Atlanta, GA USA

Posted 30 March 2006 - 11:21 PM

Why is there even a term "friendly AI"? It seems redundant in a way; it should be an absolute given that it would be benign. Unless it was developed by the military, I suppose


Some reasons why Friendly AI could be critically important:
http://www.singinst.org/friendly/

;)

#5 mitkat

  • Guest
  • 1,948 posts
  • 13
  • Location:Toronto, Canada

Posted 30 March 2006 - 11:26 PM

Thanks Nate, that really cleared a lot up for me. ;)

By me saying intelligent creatures hurt those less intelligent... well, I know that's not totally true, but I think when given a state of intellectual supremacy (i.e., man over animals), that happens. And I'm using "hurt" as a broad term. I read, I believe on Imminst, that AI would possibly see our laws, morals, and culture as we see those of chimpanzees - interesting, but not proper. That was the only analogy I could see.

And you're right about the term friendly being important to denote; I can see it's too important to risk it not being cool like that.

#6 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 30 March 2006 - 11:26 PM

liveforever22, just for the record, this thread implies the non-obviousness of the reasons SIAI gives for Friendly AI. Obviously, if I thought SIAI's reasons were obvious, I would have either made a thread whose only contents were "www.singinst.org" or "Google Michael Anissimov," or simply avoided making a superfluous thread. ;)

#7 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 30 March 2006 - 11:34 PM

np, mitkat. And I see what you mean. I exploited the overgeneralization. [tung]

#8 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 31 March 2006 - 03:07 AM

it should be an absolute given that it would be benign.

Oh man.

Not even close. It's almost an absolute given that the AGI would be UNfriendly.

Friendliness is an extremely, extremely narrow goal specification. Just a little bit off and you are likely to annihilate the entire Universe.

Don't try this at home.

#9 Infernity

  • Guest
  • 3,322 posts
  • 11
  • Location:Israel (originally from Amsterdam, Holland)

Posted 31 March 2006 - 01:51 PM

Hmm interesting. We cannot design an AI, which is much smarter than us and expect it to be to our liking, meaning subdued, less smart. Friendly? well, AI will let us know if friendliness is an intelligent thing or not. On one hand, they might find us useless and take over; on the other hand, they might find it useless to do so. It's hard to say. However, I expect that a new world war will begin when the AI robots are complete. There is always a nut-case who wants it to be the end.

Hmm I don't think they need us to survive, so I can't see them being friendly. You'll have to upgrade yourself to be smarter; then you may control them. They will obviously know you are doing so, so they might try to kill you too. I can't tell exactly what will happen; I don't know what the smart thing to do is, because I have no AI. I do what I think is smart, and since I cannot be sure, I hope we'll live to see.

-Infernity

#10 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 31 March 2006 - 05:14 PM

Hmm interesting. We cannot design an AI, which is much smarter than us and expect it to be to our liking, meaning subdued, less smart

I'm not sure what you are saying here. It seems like you said we cannot design an AI that is both smarter than us and less smart than us.

Friendly? well, AI will let us know if friendliness is an intelligent thing or not.

Whether Friendliness is directly correlated with intelligence isn't really the question. In humans, maybe. In an AI, there are infinitely many other possibilities, and whether or not Friendliness in an AI is correlated with ITS intelligence is entirely dependent on the construction of its goal system. Meaning, there are many ways to build an AI, Friendly or not, intelligent or not, without any implicit correlation between the two.

On one hand, they might find us useless and take over; on the other hand, they might find it useless to do so. It's hard to say

What the AI wants depends on what you program it to want, and how it changes itself over time with respect to the programmed "seed".

However, I expect that a new world war will begin when the AI robots are complete

This is actually unlikely, based on the range of possibility. If AI comes into conflict with humanity, it is far more likely that it would design extremely complex molecular nanotechnology (or other exotic quantum devices) and simply bend reality to its will. Perhaps there will be robots roaming the Earth fighting soldiers. This is possible, although given the rate of Moore's Law, the AI will be more and more likely to use nanotechnological means to accomplish its goals rather than more macro-scale means as time goes on.

Hmm I don't think they need us to survive

That depends on how its seed goal system evolves.

The essential point is to avoid anthropomorphising non-human intelligence. Human intelligence is the product of billions of years of evolution of DNA code in the physical world. The AI will be a product of many years of evolution of abstract code in human minds. Humans have a million years of history that has been passed down from each other about our environment, people, and our minds themselves. The AI will be the first of its kind for its environment, its people (or ... person, perhaps), and its mind. Humans have never had access to manipulating or optimizing the hardware of our intelligence, and very limited means of altering or optimizing the software. The AI will have full capability to manipulate and optimize every level of its own being. And this all depends on how the humans who create it write the program. It will be profoundly different from anything you or I are aware of right now, Infernity.

But on the other hand, it is extremely urgent to get this done properly. Eventually someone can and will do this, and a very small mistake can lead to existential disaster. We need to make sure the smartest, nicest people we can find are handling this problem, and finish it FIRST.

#11 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 31 March 2006 - 07:28 PM

Thanks, Hank. But, no offense intended whatsoever, I don't think any of that helps very much to show the difference between less smart and more smart, which is probably where the acute explanations begin. There tend to be meaningless concrete examples to show the difference. This is the purpose behind the 'well-formed cognitive agent' concept: to abstract away meaningless concrete examples and give a plausible, sweeping generalization of smartness to work with. The majority of people, I think, are more frequently top-down rather than bottom-up information processors, even if their 'tops' aren't always very high (I'm not talking about anyone here though). That is to say, the majority of people are both, but top-down information processing is used most frequently.

Adi, the gray area represents the overall capacity, within a given time period, of a well-formed cognitive agent. The green area represents the overall capacity, within the same given time period, of a less smart well-formed cognitive agent. The yellow area inside the green area (if you can see it) represents the rationalized morals of humans, if we let the green area be humans. The yellow area is where we humans most prefer well-formed cognitive agents to behave, like rigid religious persons. The green area is what we can live with, like good ol' fashioned flexible SL4 singularitarians. The rest of the gray area is what we cannot live with.

[Image: nested gray, green, and yellow regions as described above]

Unlike humans, most kinds of possible, smarter well-formed cognitive agents don't necessarily have or require a mechanism for choosing particular points of operation in the gray area. For a well-formed cognitive agent, the area is what it can do, strictly by the definition and specifications of a well-formed cognitive agent, not what it should do. Remember, a well-formed cognitive agent, without redundant emotions, can do a lot in the harshest and most complicated conditions we can imagine (and more), which makes more smartness probabilistically more dangerous than less smartness (observe the gray area) and also possibly, but with lower probability, divine-like (as in getting a gray-area well-formed cognitive agent to concentrate on small, particular areas like the green one).
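A toy numerical sketch of that probabilistic claim, under assumptions of my own (the cube model, the number of dimensions, and the uniform-selection behaviour are all invented for illustration, not part of Nate's post): the more capability an unconstrained agent has relative to a fixed human-acceptable region, the less likely any given operating point falls inside that region.

```python
# Toy model (illustrative assumption): treat an agent's capability region
# ("gray area") as a d-dimensional cube of side `capability`, and the
# human-acceptable "green area" as a fixed unit cube inside it. An agent with
# no Friendliness constraint picks operating points uniformly over everything
# it can do.

def p_acceptable(capability: float, green_side: float = 1.0, dims: int = 3) -> float:
    """Probability that a uniformly chosen operating point lands in the green region."""
    return (green_side / capability) ** dims

for capability in [1.0, 2.0, 10.0, 100.0]:
    print(f"capability {capability:>6}: P(acceptable) = {p_acceptable(capability):.2e}")

# capability    1.0: P(acceptable) = 1.00e+00  (no more capable than the green region)
# capability    2.0: P(acceptable) = 1.25e-01
# capability   10.0: P(acceptable) = 1.00e-03
# capability  100.0: P(acceptable) = 1.00e-06  (the gray area dwarfs the green area)
```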

[Edit: tried to clarify last statement.]

Edited by Nate Barna, 31 March 2006 - 10:30 PM.


#12 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 31 March 2006 - 08:08 PM

Hmm, I just realized that also wasn't a direct answer to Adi's points. Anyway, I've already done my best for now, so Hank's assistance is certainly welcome.

#13 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 31 March 2006 - 08:14 PM

Thanks, Hank. But, no offense intended whatsoever, I don't think any of that helps very much to show the difference between less smart and more smart, which is probably where the acute explanations begin


I don't think your explanation was very clear, though [tung].

#14 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 31 March 2006 - 08:40 PM

Oy, I make more sense than anyone I know.

[Edit: (That was intended to be subtle intellectual humor... hehehe and hahaha.)]

Edited by Nate Barna, 31 March 2006 - 09:58 PM.


#15 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 31 March 2006 - 08:52 PM

Oy, I make more sense than anyone I know.

We can't see what's in your head, only what's on the screen ;)

#16 mitkat

  • Guest
  • 1,948 posts
  • 13
  • Location:Toronto, Canada

Posted 31 March 2006 - 11:03 PM

Oh man.

Not even close. It's almost an absolute given that the AGI would be UNfriendly.

Friendliness is an extremely, extremely narrow goal specification. Just a little bit off and you are likely to annihilate the entire Universe.

Don't try this at home.


Hank, you misunderstood me. I am fully aware of how UNfriendly AI will be, and naturally so. I am saying that "designing" friendly AI should be so obvious for us to do that it almost need not be said.

And like I said in my first post, I am not trying this at home. [thumb] *throws away "grey goo @ home" DIY kit*

#17 MichaelAnissimov

  • Guest
  • 905 posts
  • 1
  • Location:San Francisco, CA

Posted 31 March 2006 - 11:40 PM

In an attempt at simplification, I came up with the following rewording of Nate's paragraph:

A well-formed mind with open-ended hardware will tend to have a better imagination over time, and an increasing ability to put its thoughts into actions. This seems to imply that smarter agents can pose an arbitrarily high threat to dumber agents, since even when it is not their intention to pose a nuisance, they can easily do so inadvertently. Dumber minds have a smaller imagination and a lesser ability to put their thoughts into actions. This puts them at the mercy of their more intelligent counterparts.

This itself, I think, can be distilled into the even shorter following sentence:

Smarter agents will always overwhelm dumber agents' goals, frequently even when it is not their direct intention.

However, if these agents are sufficiently intelligent, then won't they be able to focus their actions such that they do not conflict with the goals of dumber agents? Maybe Nate means "given typical (i.e., non-Friendly) motivation systems" as a qualifier for the above. And that then would be the reason to create Friendly AI.

This whole thing is very difficult to explain well. Statements like "it should be an absolute given that it would be benign" show that the intuition is that Friendly AI is automatic, and much persuasion is necessary to show otherwise. The other common line of thinking is that AI will act like humans, which is also wrong. The correct line of thinking is that an AI's goals could be anything in the space of all goals. When a randomly selected goal is coupled with high levels of optimization pressure, the result is that all local matter is optimized in accordance with that goal. The optimization of all local matter in accordance with random goals (or most goals) precludes the existence of non-optimized structures, such as human beings.
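As a rough sketch of that last step (a toy model of my own, not Michael's): treat a goal as a point in a d-dimensional goal space and call it human-compatible only if every coordinate lies within a small tolerance of an assumed "human values" point. The fraction of randomly selected goals that qualify collapses as the space grows.

```python
import random

def fraction_compatible(dims: int, tolerance: float = 0.01, trials: int = 100_000) -> float:
    """Fraction of uniformly random goals in [0,1]^dims whose every coordinate
    falls within `tolerance` of an assumed human-value point at 0.5."""
    hits = 0
    for _ in range(trials):
        goal = [random.random() for _ in range(dims)]
        if all(abs(g - 0.5) <= tolerance for g in goal):
            hits += 1
    return hits / trials

for dims in (1, 2, 3, 5):
    print(f"{dims} goal dimensions: ~{fraction_compatible(dims):.1e} of random goals are compatible")

# The expected fraction is (2 * tolerance) ** dims: 2e-02, 4e-04, 8e-06, 3.2e-09, ...
# For any realistic number of dimensions, a randomly selected goal essentially never
# lands on the human-compatible target, and strong optimization pressure toward it
# leaves no room for non-optimized structures such as us.
```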

#18 mitkat

  • Guest
  • 1,948 posts
  • 13
  • Location:Toronto, Canada

Posted 01 April 2006 - 03:52 AM

This whole thing is very difficult to explain well. Statements like "it should be an absolute given that it would be benign" show that the intuition is that Friendly AI is automatic, and much persuasion is necessary to show otherwise. The other common line of thinking is that AI will act like humans, which is also wrong.


Again, you misunderstood me. I would never, ever assume the AI to be friendly (that seems very illogical, actually), only that to me, and maybe I'm being optimistic, it seems an absolute given that we should do our best to make it friendly. I suppose I could have worded that differently...

Thanks for the [airquote] simplification [/airquote], but I don't see Nate's explanation as that complex. ;)

#19 prometheus

  • Lurker
  • 1

Posted 01 April 2006 - 08:34 AM

Two questions:

1. Has the evolution of intelligence (on earth, and using the animal kingdom as a model) been positively correlated with the traits of aggressiveness, domination, etc? At least in humans we observe that in a socioeconomic context individuals that manifest sociopathic tendencies seem to enjoy greater success.

2. In an AI context, if one were to seek to induce the manifestation of intelligence on a man-made substrate, would it not be a methodological requirement that the evolution of intelligence be simulated, but at a highly accelerated rate, and, if that were to be the approach, that powerful constraints to inhibit certain traits (i.e. non-friendly ones) from emerging be incorporated? If such a developmental constraint strategy were used, and given the necessarily highly accelerated state of evolution, is it not likely that not all evolutionary contingencies could be anticipated? Also, supposing that certain traits of aggressiveness are developmentally required, albeit transiently, in order for intelligence to manifest, would that then also make it more difficult to ensure that non-friendly trait inhibitors were successful in their application?

#20 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 01 April 2006 - 09:39 AM

1. Has the evolution of intelligence (on earth, and using the animal kingdom as a model) been positively correlated with the traits of aggressiveness, domination, etc? At least in humans we observe that in a socioeconomic context individuals that manifest sociopathic tendencies seem to enjoy greater success.

Prometheus, we might also observe, using the same model, that the evolution of intelligence is positively correlated with complex systems of cooperation. However, there doesn't seem to be a strong correlation with the evolution of intelligence other than the magnitude of adaptability plus pattern recognition, quantified over sets of dichotomous traits.

2. In an AI context, if one were to seek to induce the manifestation of intelligence on a man-made substrate, would it not be a methodological requirement that the evolution of intelligence be simulated, but at a highly accelerated rate, and, if that were to be the approach, that powerful constraints to inhibit certain traits (i.e. non-friendly ones) from emerging be incorporated? If such a developmental constraint strategy were used, and given the necessarily highly accelerated state of evolution, is it not likely that not all evolutionary contingencies could be anticipated? Also, supposing that certain traits of aggressiveness are developmentally required, albeit transiently, in order for intelligence to manifest, would that then also make it more difficult to ensure that non-friendly trait inhibitors were successful in their application?

Recursive self-improvement doesn't require code for aggression. That's a lot of redundancy. You're right, however, to indicate that it's much more difficult to ensure a Friendly AI than to create an unFriendly AI. The implication is that it's crucial for genuine Friendly AI developers to not only be quicker than AGI bootstrappers but actually build Friendly AI: a double whammy.

#21 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 01 April 2006 - 09:43 AM

Thanks, mitkat. And I understand what you're saying even better now: it should be obvious to all AGI researchers and developers that Friendly-oriented seed AI is the title and guideline of their project. Surprisingly, it's not.

#22 Infernity

  • Guest
  • 3,322 posts
  • 11
  • Location:Israel (originally from Amsterdam, Holland)

Posted 01 April 2006 - 10:49 AM

hehehe and hahaha.


........do you want me to punch you?




OK, back on course. I do not think I got answers from any of you here, but I cannot say I expected any... I mean, don't try to find me answers when there are no questions here.

Bah, I think I had too much wine last night, because I read what you wrote several times and it came out as it got in.
However, I think friendliness will not necessarily apply to the AI robots if they are smarter than us. It just helps us survive, and if they are smarter, they are stronger. If they are stronger and can devastate us, there is no reason to be friendly, not to us at least. We are the only threat to them, so they will one day try to kill us.

-Infernity

Edited by infernity, 01 April 2006 - 12:27 PM.


#23 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 01 April 2006 - 11:35 AM

........do you want me to punch you?

I shall ponder this exquisite signifier of its artificer's licentious métier.

I mean, don't try to find me answers when there are no questions here.

But I think answers are like sometimes the same thing as responses, like in conversations, like when people talk to each other and they hear speech sounds with their ears or see speech acts with their eyes.

However, I think friendliness will not necessarily apply to the AI robots if they are smarter than us. It just helps us survive, and if they are smarter, they are stronger. If they are stronger and can devastate us, there is no reason to be friendly, not to us at least. We are the only threat to them, so they will one day try to kill us.

The capabilities of a greater-than-human intelligence can include being nice and being not nice. Since it's so "great," it doesn't bother it whether it's being nice or being not nice. For such greatness, it's easier to be not nice only because there are a lot more ways to be not nice than there are to be nice, at least from the point of view of human cry babies. Such greatness won't be not nice because it "wants" to be, but because its control unit has a better chance of fetching 'not nice' code than 'nice' code if the control unit doesn't have specific instructions for special fetching for human cry babies.
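A pseudo-Python rendering of that 'fetching' metaphor, with the action space, the function names, and the 10-to-100,000 ratio all invented for illustration: an unconstrained selector that draws uniformly from everything the agent can do almost never draws from the tiny 'nice' subset, while a selector given specific instructions to filter for niceness always does.

```python
import random

# Hypothetical action space: 'nice' actions are a vanishingly small subset of
# everything a highly capable agent could do (illustrative numbers only).
ACTIONS = [("nice", i) for i in range(10)] + [("not_nice", i) for i in range(100_000)]

def fetch_unconstrained():
    """No special instructions: any capable action is as likely to be fetched as any other."""
    return random.choice(ACTIONS)

def fetch_with_friendliness_filter():
    """'Specific instructions for special fetching': only nice actions qualify."""
    return random.choice([a for a in ACTIONS if a[0] == "nice"])

draws = [fetch_unconstrained()[0] for _ in range(1_000)]
print("unconstrained nice rate:", draws.count("nice") / len(draws))  # expected ~1e-4, usually 0.0
print("filtered fetch:", fetch_with_friendliness_filter()[0])        # always 'nice'
```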

#24 Infernity

  • Guest
  • 3,322 posts
  • 11
  • Location:Israel (originally from Amsterdam, Holland)

Posted 01 April 2006 - 12:29 PM

I shall ponder this exquisite signifier of its artificer's licentious métier.

That's it, you get the punch.

( [lol] )

The capabilities of a greater-than-human intelligence can include being nice and being not nice. Since it's so "great," it doesn't bother it whether it's being nice or being not nice. For such greatness, it's easier to be not nice only because there are a lot more ways to be not nice than there are to be nice, at least from the point of view of human cry babies. Such greatness won't be not nice because it "wants" to be, but because its control unit has a better chance of fetching 'not nice' code than 'nice' code if the control unit doesn't have specific instructions for special fetching for human cry babies.

Yes, yes, I tend to agree, Nate.

-Infernity

#25 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 01 April 2006 - 03:51 PM

prometheus said:

if that were to be the approach


Evolving an intelligence is a dangerous route to go down. It is hard to guarantee anything about Friendliness if you just evolve a complex system, because you still don't know anything new about what you are dealing with. I think a necessary condition of verifying Friendliness is *really understanding* intelligence. Like having a causal flow chart, or something. You have to actually know what's going on well enough to write the code from scratch.

#26 mitkat

  • Guest
  • 1,948 posts
  • 13
  • Location:Toronto, Canada

Posted 02 April 2006 - 09:25 PM

Thanks, mitkat. And I understand what you're saying even better now: it should be obvious to all AGI researchers and developers that Friendly-oriented seed AI is the title and guideline of their project. Surprisingly, it's not.


Nate, are you talking about military applications, or just, dare I say, "misguided" research? Who would want it to be unfriendly? I can't honestly see a reason for that, although I'm sure I'm being quite naive about the whole thing ;)

#27 Kalepha

  • Topic Starter
  • Guest
  • 1,140 posts
  • 0

Posted 02 April 2006 - 09:48 PM

Oh no, not naive, mitkat. It's just that no one is publicly saying, "I want to destroy my fellow humans with AGI."

While most known projects express good intentions, few projects express a sufficient theoretical understanding of the potential power of AGI. And then there are those projects that "just want to build AGI," without articulating implicit assumptions and attitudes for critical examination, such as perhaps 'In any possible case, we would be in control' or, worse, 'I just want to be the one who changes the world, for better or for worse.'

#28 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 02 April 2006 - 11:25 PM

Who would want it to be unfriendly?

First, AGI is easier to build than Friendly AI. Second, virtually nobody is trying to build an FAI; rather, they are trying to build an AGI. Third, even fewer people are aware of the potential consequences.

It's really easy for whatever AGI you build to actually be an unFriendly AI, despite what it may try to tell you (also, it will convince you it's Friendly, even if it's not, or else it will find some other way to escape whatever controls you attempt to put on it).

That's why people are kind of crazy about the idea of "Friendliness". It is a serious existential risk that is growing bigger every day, and this risk is preventable only with a strong start on a Friendly AGI design.

#29 mitkat

  • Guest
  • 1,948 posts
  • 13
  • Location:Toronto, Canada

Posted 02 April 2006 - 11:58 PM

Okay Hank, I'm feelin' you on that one... so AGI, which would be easier and less intensive to program, would be unfriendly in nature, as it has no moral capacity to be constructed. How could such a mammoth undertaking not have its seemingly most basic complications, and thus dangers to its creators, illuminated? Is it the simple ease of making AGI that accounts for its existence, above FAI?

Again, I'm new at this... [tung]


#30 RighteousReason

  • Guest
  • 2,491 posts
  • -103
  • Location:Atlanta, GA

Posted 03 April 2006 - 02:37 AM

would be unfriendly in nature, as it has no moral capacity to be constructed.

There very well may be a capacity specifically for determining goal referents, and those in turn may specify particular goals in reference to humans, if that's what you mean by morality. This goal system may or may not yield Friendly results.

How could such a mammoth undertaking not have its seemingly most basic complications, and thus dangers to its creators, illuminated?

See, that's the thing: the danger to the creators is the least basic "complication". Friendliness is an emergent property of an AI system. It's probably one of the most chaotic emergent properties conceivable, seeing as how it is embodied in a recursively self-improving intelligence. A verifiably Friendly AI is a system designed such that its seed necessarily entails that its actions converge to Friendliness through recursive self-improvement. A (lucky) Friendly AI is an AGI with a non-verifiably Friendly goal system in which we get astronomically lucky and it turns out to converge to Friendliness. An unFriendly AI is an AGI with a particular goal system whose emergent properties diverge from Friendliness (and we are screwed).

The problem of Friendliness is a problem of pre-programming the attributes necessary into a seed system that will evolve as a recursively self-improving intelligence and, in doing so, converge on external actions that follow something like humanity's coherent extrapolated volition. [edit: this is oversimplified]
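To make the "verifiably Friendly" case a bit more concrete, here is a heavily simplified sketch of my own; the Agent fields and the verifier are placeholders, and the verifier stands in for exactly the hard, unsolved part. A seed only adopts a self-modification when the Friendliness invariant is verified to survive it, so the property is conserved across every step of recursive self-improvement.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    capability: float
    goal_spec: str  # stands in for the seed's goal system

def preserves_friendliness(current: Agent, successor: Agent) -> bool:
    """Placeholder verifier: proving this for arbitrary self-rewrites is the real,
    unsolved problem behind 'verifiably Friendly'."""
    return successor.goal_spec == current.goal_spec

def self_improve(agent: Agent, steps: int) -> Agent:
    for _ in range(steps):
        # The agent proposes a more capable successor version of itself.
        candidate = Agent(capability=agent.capability * 2, goal_spec=agent.goal_spec)
        # Adopt the rewrite only if the invariant verifiably survives; a 'lucky' FAI
        # skips this check and merely hopes, an unFriendly AI diverges from it.
        if preserves_friendliness(agent, candidate):
            agent = candidate
    return agent

seed = Agent(capability=1.0, goal_spec="coherent extrapolated volition (placeholder)")
print(self_improve(seed, steps=10))  # capability 1024.0, goal_spec unchanged
```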

Are you starting to see the importance of Friendly AI?



