On Wed, 14 Jul 1999 eugene.leitl at lrz.uni-muenchen.de wrote:
> From: ohgs at chatham.demon.co.uk (Oliver Sparrow)
>
> (Arthur T. Murray) scribet
>
> > I take comfort from Vinge's persuasive argument of inevitability.
>
> Vernor V. poses a sensible question: if life is an inevitable outcome of
> chemistry, then where is everyone? Five possibilities, which I have
> extended:
>
> 0: We are wrong about everything and our hypotheses are meaningless.
> 1: We are wrong about the chemistry: it actually requires very fine tuning.
> Life is rare.
> 2: Life is common, but unique civilisations are not. Thus 'do not
> interfere' is the key rule. We are presently in a cosmic game park (or
> visited by machinery built of the dark matter that makes up 30-50% of
> the universal mass, but which we cannot yet detect). Or whatever.
> 3: It is dangerous out there. Keep silent or you attract attention.
> As a subset, societies of a certain density and technical complexity
> always destroy themselves.
> 4: Aware biological life has a very short time span of technical
> civilisation before it finds a 'better way to be'. A few hundred
> years, perhaps? Thus brief flickers of radio emissions, thus the
> failure of SETI.
>
> My bets are on (4), although (1) may be the case. We cannot yet know.
>
> Vinge's singularity consists of the projection of exponential increases in
> knowledge and capability to a point at which the realization of potential
> takes one into a realm that we cannot now foresee. At least one element of
> this situation consists of finding non-biological, unlimited frameworks
> in which to be aware, interacting and sensual beings. That this is to occur
> "in a computer" gives quite the wrong contextual messages. One might - for
> example - be able to create an unlimited number of unique sub-spaces to
> our universe, that have the properties allowing them individually (a) to
> carry out the operations which supported unlimited awareness and (b) to
> abut with and interact with conventional space when and how this was
> appropriate. Not exactly a mind in a box, more in a portable, private
> universe with its own time-like analogues and its own rule set designed to
> support mentation and little else. General Motors' chief product for 2050,
> perhaps.
>
> _______________________________
>
> Oliver Sparrow
It's also likely that as civilizations reach the point where they could
expand to every corner of the galaxy, they realize that expansion just
compounds the resource problem. You could argue that some civilizations
would have an ethic to maximize their numbers (be fruitful and multiply).
Because of relativistic limits and energy costs, though, it doesn't work
to ship bulk materials from one star to another, barring some unforeseen
FTL technology. Because of this, each colony is effectively on its own,
with the exception of information, which is easy to pass and travels at
the maximum possible speed, c. The other incentives for a civilization to
travel would be to gather information and to protect itself from a single
catastrophe (a nearby supernova or gamma-ray burst) wiping out all its
members. Subtle, efficient technologies like nanotech and robotic probes
could gather reconnaissance around other stars without being detectable
to a civilization at our level, because such small probes would not be
huge energy emitters, in contrast to large ships carrying live beings and
therefore traveling at higher speeds.
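To put a rough number on the energy-cost point above, here is a
back-of-the-envelope sketch. The payload mass, cruise speed, and the
reference power-plant output are my own illustrative assumptions, not
anything claimed in the discussion:

    import math

    C = 2.998e8               # speed of light, m/s
    GW_YEAR = 1e9 * 3.156e7   # output of a 1 GW plant over one year, J

    def relativistic_ke(mass_kg: float, beta: float) -> float:
        """Kinetic energy (J) needed to accelerate mass_kg to beta * c."""
        gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
        return (gamma - 1.0) * mass_kg * C ** 2

    # Illustrative assumption: a modest 1000 kg payload cruising at 0.1 c.
    payload = 1000.0
    beta = 0.1
    energy = relativistic_ke(payload, beta)

    print(f"Energy to reach {beta:.1f} c: {energy:.2e} J")
    print(f"...about {energy / GW_YEAR:.1f} years of a 1 GW plant's output,")
    print("ignoring deceleration at the target and propellant inefficiency.")

    # By contrast, the same payload's worth of *information* can be radioed
    # at c for a tiny fraction of that energy, which is the asymmetry above.

Even at a leisurely tenth of lightspeed, a single small payload costs on
the order of a decade of a large power plant's output, which is why bulk
shipping between stars looks like a losing proposition compared to
sending bits.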
It would seem likely an ETI at that level could
1) model and predict all astrophysical risks in its area, and thus know
centuries in advance when stars are going to blow up; 2) redesign its
form to be resistant to damage (like Oliver's subspace idea); 3) realize
that unbridled expansion could lead to conflict with other civilizations,
some of which might not be detectable to it beforehand;
4) realize that, unlike in some SciFi (Star Trek, for example), two
civilizations that randomly meet would almost certainly be hugely
different in sophistication.
Either way puts the civilization in a tough position: if it englobes a
young civilization, then someday it may have to "put it down" to protect
its interests as the younger one grows, which would be a difficult moral
situation. Conversely, it might have to fight a superior civilization. The
only good result would be some kind of cooperation, or a Niven-style
disinterest due to radical differences in desired environments (which
could later change).
It's possible some mixture of these tends to keep ETIs low-observable,
though I would think the laws of physics and logistics are the best
explanation, since those restrict everyone the same way. It's also
likely, in my opinion, that intelligence tends toward maximizing the
processing of information; even in our civilization, virtually everything
that is done is directly or indirectly the processing of information.
Someone buys a car for the experience of driving it, to arrive at a
distant place and experience that, or to make a living and support their
lifestyle. It seems likely to me that higher civilizations will do the
same, but in much more efficient ways.
As we know, improving or increasing the processing of information would
require:
1) minimizing the space-time volume of the processing
2) minimizing the energy loss per operation (reversible processing,
quantum computing, etc.; see the sketch below)
3) increasing storage
4) increasing the efficiency of algorithms
5) doing the processing in N > 3 spatial dimensions
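On point 2, there is a well-known floor on the energy cost of
*irreversible* operations: Landauer's bound of kT ln 2 per erased bit.
A small sketch (the operating temperatures and the bit-erasure rate are
illustrative assumptions on my part) shows why reversible or very cold
computing would matter to a civilization trying to minimize waste heat:

    import math

    K_B = 1.380649e-23   # Boltzmann constant, J/K

    def landauer_joules_per_bit(temp_kelvin: float) -> float:
        """Minimum energy dissipated per irreversibly erased bit."""
        return K_B * temp_kelvin * math.log(2)

    # Illustrative assumptions: room temperature vs. something close to
    # the cosmic microwave background, at an arbitrary 1e30 erasures/s.
    for label, temp in [("room temperature (300 K)", 300.0),
                        ("near the CMB (3 K)", 3.0)]:
        per_bit = landauer_joules_per_bit(temp)
        power = per_bit * 1e30
        print(f"{label}: {per_bit:.2e} J/bit -> {power:.2e} W "
              f"at 1e30 erasures/s")

    # Reversible or quantum-coherent operations are not subject to this
    # bound, which is why item 2 in the list above singles them out.

Running colder buys two orders of magnitude here, and logically
reversible designs escape the bound entirely, so a mature civilization's
computing need not show up as an obvious waste-heat signature.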
In contrast to an obvious Kardashev Type II or III civilization, something
doing the above would be difficult to see.
Mark
---