Saw this today and thought it was quite interesting.
From these two facts it follows that the evolutionary path to life-forms capable of space colonization leads through a "Great Filter," which can be thought of as a probability barrier. (I borrow this term from Robin Hanson, an economist at George Mason University.) The filter consists of one or more evolutionary transitions or steps that must be traversed at great odds in order for an Earth-like planet to produce a civilization capable of exploring distant solar systems. You start with billions and billions of potential germination points for life, and you end up with a sum total of zero extraterrestrial civilizations that we can observe. The Great Filter must therefore be sufficiently powerful--which is to say, passing the critical points must be sufficiently improbable--that even with many billions of rolls of the dice, one ends up with nothing: no aliens, no spacecraft, no signals. At least, none that we can detect in our neck of the woods.
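To put rough numbers on the "billions of rolls of the dice" point, here is a back-of-the-envelope sketch (the figure N ≈ 10^11 candidate planets is my own illustrative assumption, not from the text). If each of N planets independently produces a detectable civilization with probability p, then

$$
\Pr(\text{zero civilizations}) = (1 - p)^N \approx e^{-Np}.
$$

For this probability to be near 1, matching the silence we actually observe, we need $Np \ll 1$, i.e. $p \ll 1/N \approx 10^{-11}$. In other words, the Great Filter's steps must be jointly improbable to at least that degree.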
The Great Filter, then, would have to be something more dramatic than run-of-the-mill societal collapse: it would have to be a terminal global cataclysm, an existential catastrophe. An existential risk is one that threatens to annihilate intelligent life or permanently and drastically curtail its potential for future development. In our own case, we can identify a number of potential existential risks: a nuclear war fought with arms stockpiles much larger than today's (perhaps resulting from future arms races); a genetically engineered superbug; environmental disaster; an asteroid impact; wars or terrorist acts committed with powerful future weapons; superintelligent general artificial intelligence with destructive goals; or high-energy physics experiments. These are just some of the existential risks that have been discussed in the literature, and considering that many of these have been proposed only in recent decades, it is plausible to assume that there are further existential risks we have not yet thought of.
What I think is a fatal, or at least very serious, flaw in the Great Filter hypothesis is that none of the existential risks we can currently conceive of would necessarily put an end to evolution. Evolution would continue after a nuclear war. Evolution would most likely accelerate after a superintelligent artificial general intelligence with destructive goals took over the planet. The only events that would guarantee an end to evolution would be something like the accidental creation of a black hole, or a Type II civilization accidentally blowing up its home star. Of course, by the time a civilization is capable enough to become a solid Type I or Type II civilization, it should be detectable, so some of the "far-out" existential risks become moot. Thus, nearly every catastrophic event we can envision would be only a temporary setback to the evolution of intelligent beings.