The Search for Extraterrestrial Life and Post-Biological Intelligence

Papers presented at an international symposium considering the true nature of extraterrestrial intelligence.


Introduction: The True Nature of Aliens

Is it time to re-think ET?

For well over a half-century, a small number of scientists have conducted searches for artificially produced signals that would indicate the presence of intelligence elsewhere in the cosmos. This effort, known as SETI (Search for Extraterrestrial Intelligence), has yet to find any confirmed radio transmissions or pulsing lasers from other beings. But the hunt continues, recently buoyed by the discovery of thousands of exoplanets. For many, the abundance of habitable real estate makes it difficult to believe that Earth is the only world where life and intelligence have arisen.

SETI practitioners mostly busy themselves with refining their equipment and their lists of target solar systems. They seldom consider the nature of their prey – what form extraterrestrial intelligence might take. Their premise is that any technically sophisticated species will eventually develop signaling technology, irrespective of their biology or physiognomy.

This view may not seem anthropocentric, for it makes no overt assumptions about the biochemistry of extraterrestrials; only that intelligence will arise on at least some worlds with life. However, the trajectory of our own technology now suggests that within a century or two of our development of radio transmitters and lasers, we are likely to build machines with artificial, generalized intelligence. We are engineering our successors, and the next intelligent species on Earth is not only certain to have cognitive abilities that dwarf our own, but will be able to engineer its own, superior descendants by design, rather than counting on uncertain, Darwinian processes. If something similar happens in other technological societies, the implications for SETI are profound.

In September 2015, the John Templeton Foundation’s Humble Approach Initiative sponsored a three-day symposium entitled “Exploring Exoplanets: The Search for Extraterrestrial Life and Post-Biological Intelligence.” The venue for the meeting was the Royal Society’s Chicheley Hall, north of London, where a dozen researchers gave informal presentations and engaged in the type of lively dinner table conversations that such meetings inevitably spawn.

The subject matter was broad, ranging from the multi-pronged search for habitable planets and how we might detect life, to the impact of both the search and an eventual discovery. However, the matter of post-biological intelligence – briefly described above – and the possibility of non-Darwinian evolutionary processes motivated many of the symposium contributions.

We present here short write-ups of seven of these talks. They are more than simply interesting: they suggest a revolution in how we should think about, and search for, our intellectual peers. Indeed, they suggest that “peers” may be too generous to Homo sapiens. As these essays argue, the majority of the cognitive capability in the cosmos may be far beyond our own.

-- Seth Shostak

This symposium was chaired by Martin J. Rees, OM, Kt, FRS and Paul C.W. Davies, AM, and organized by Mary Ann Meyers, JTF’s Senior Fellow. Also present was B. Ashley Zauderer, Assistant Director of Math and Physical Sciences at the Templeton Foundation.


 


POST-HUMAN EVOLUTION ON EARTH AND BEYOND

Martin J Rees
Institute of Astronomy
Madingley Road
Cambridge CB3 0HA
mjr@ast.cam.ac.uk

ABSTRACT

The pace of technological advance on Earth is such that post-humans – whether organic, cyborg or entirely inorganic – could emerge within a few centuries (or indeed within a single century). In the billions of years lying ahead, such entities, continuing to evolve not through natural selection but on the (far faster) timescale of technological evolution, could spread through the cosmos (in a manner whose details we manifestly cannot even conceive). If advanced life has emerged on other planets, and followed a similar evolutionary track to what has happened on Earth, then the era of ‘organic’ intelligence will be a thin sliver of time compared to the far longer post-human era dominated by ‘machines’. This suggests that, if SETI succeeded, any artificial emissions would be unlikely to come from anything resembling the ‘organic’ civilization that prevails on Earth.

Extraterrestrial life and intelligence have always been fascinating topics on the speculative fringe of science. But in the last decade or two, serious advances on several fronts have generated wider interest in these subjects – indeed, they have become almost ‘mainstream’. One can highlight four areas where there’s a gratifying crescendo of interest and understanding:

(i) The discovery and study of exoplanets began only 20 years ago. It is now one of the most vibrant frontiers of science. Data are accumulating at an accelerating rate; we can confidently assert that there are billions of Earth-like planets in our Galaxy; it is not premature to seek evidence that some have biospheres.

(ii) There has been substantial recent progress in understanding the origin of life. It’s been clear for decades that the transition from complex chemistry to the first entities that could be described as ‘living’ poses one of the crucial problems in the whole of science. But until recently, people shied away from it, regarding it as neither timely nor tractable. In contrast, numerous distinguished scientists are now committed to this challenge.

(iii) Advances in computational power and robotics have led to growing interest in the possibility that ‘artificial intelligence’ (AI) could in the coming decades achieve (and exceed) human capabilities over a wider range of conceptual and physical tasks. This has stimulated discussions of the nature of consciousness (is it an ‘emergent’ property or something more special?), and further speculation by ethicists and philosophers on what forms of inorganic intelligence might be created by us – or might already exist in the cosmos – and how humans might relate to them.

(iv) In the coming years there will be expanded and better-resourced efforts to search for ET; these will focus wider interest on the subject and thereby generate new ideas.

SOME HISTORY

Speculations on ‘the plurality of inhabited worlds’ date back to antiquity. From the 17th to the 19th century, it was widely suspected that the other planets of our Solar System were inhabited. The arguments were often more theological than scientific. Eminent 19th century thinkers like Whewell and Brewster argued that life must pervade the cosmos, because otherwise such vast domains of space would seem such a waste of the Creator’s efforts. An interesting and amusing critique of such ideas is given in books by Alfred Russel Wallace, the co-developer of natural selection theory. Wallace is especially scathing about the physicist David Brewster (remembered for the ‘Brewster angle’ in optics) who conjectured on such grounds that even the Moon must be inhabited [1]. Brewster argued that had the Moon “been destined to be merely a lamp to our Earth, there was no occasion to variegate its surface with lofty mountains and extinct volcanoes, and cover it with large patches of matter that reflect different quantities of light and give its surface the appearance of continents and seas. It would have been a better lamp had it been a smooth piece of lime or of chalk.”

By the end of the nineteenth century, so convinced were many astronomers that life existed on other planets in our Solar System that a prize of 100,000 francs was offered to the first person to make contact with them. And the prize specifically excluded contact with Martians – that was considered far too easy! The erroneous claim that Mars was crisscrossed by canals had been taken as proof positive of intelligent life on the Red Planet.

The space age brought sobering news. Venus, a cloudy planet that promised a lush tropical swamp-world, turned out to be a crushing, caustic hell-hole. Mercury was a pockmarked blistering rock. And NASA’s Curiosity probe (and its predecessors) showed that Mars, though the most Earth-like body in the Solar System, was actually a frigid desert with a very thin atmosphere. There may be creatures swimming under the ice of Jupiter’s moon Europa, or Saturn’s moon Enceladus, but nobody can be optimistic.

However, the prospects brighten enormously when we extend our gaze beyond our Solar System – beyond the reach of any probe we can devise today. What has transformed and energized the whole field of exobiology is the realization that stars are orbited by retinues of planets. Giordano Bruno speculated about this in the 16th century. From the 1940s onward, astronomers suspected he was correct: the earlier idea that our Solar System formed from a stream of material torn from the Sun by the tidal pull of a close-passing star (which would have implied that planetary systems were rare) had by then been discredited. But it wasn’t until the mid-1990s that evidence for exoplanets started to emerge. Moreover, Bruno famously went further, and conjectured that on some of those planets there might be other creatures “as magnificent as those upon our human Earth.” Will he one day be proved right on this bolder speculation too?

ORIGIN OF LIFE

There seem good prospects for progress in understanding the origin of life. What triggered the transition from complex molecules to entities that can metabolize and reproduce? It might have involved a fluke so rare that it happened only once in the entire Galaxy. On the other hand, this crucial transition might have been almost inevitable given the ‘right’ environment. We just don’t know – nor do we know if the DNA/RNA chemistry of terrestrial life is the only possibility, or just one chemical basis among many options that could be realized elsewhere.

The origin of life is now attracting stronger interest: it’s no longer deemed to be one of those problems (consciousness, for instance, is still in this category) which, though manifestly important, doesn’t seem timely or tractable – and is relegated to the ‘too difficult box’. And of course the understanding of life’s beginnings is important not only for our assessment of the likelihood of alien life, but also to the most firmly earthbound evolutionary biologist.

And there is a second still more fascinating question (Bruno’s conjecture): if simple life exists, what are the odds that it evolves into something that we would recognize as intelligent? Even if primitive life were common, the emergence of ‘advanced’ life may not be – it may depend on many contingencies (phases of glaciation, the Earth’s tectonic history, asteroid impacts, and so forth). Several authors have speculated about possible ‘bottlenecks’ – key stages in evolution that are hard to transit. Perhaps the transition to multi-cellular life is one of these. (The fact that simple life on Earth seems to have emerged quite quickly, whereas even the most basic multi-cellular organisms took nearly 3 billion years to emerge, suggests that there may be severe barriers to the emergence of any complex life.) Or the ‘bottleneck’ could come later.

Even in a complex biosphere, the emergence of intelligence isn’t guaranteed. If, for instance, the dinosaurs hadn’t been wiped out, the chain of mammalian evolution that led to humans might have been foreclosed and we can’t predict whether another species would have taken our role. Some evolutionists regard the emergence of intelligence as a contingency – even an unlikely one. The alternative view is represented by Simon Conway Morris (see his contribution to this workshop).

Perhaps, more ominously, there could be a ‘bottleneck’ at our own present evolutionary stage – the stage when intelligent life develops powerful technology. If so, the long-term prognosis for ‘Earth-sourced’ life depends on whether humans survive this critical evolutionary phase. This does not mean that the Earth has to avoid a disaster – only that, before it happens, some humans or advanced artefacts will have spread beyond their home planet.

In considering the possibilities of life elsewhere, we should surely be open-minded about where it might emerge and what forms it could take – and devote some thought to non-earthlike life in non-earthlike locations. But it plainly makes sense to start with what we know (the ‘searching under the streetlamp’ strategy) and to deploy all available techniques to discover whether any exoplanet atmospheres display evidence for a biosphere. Clues will surely come in the next decade or two from high-resolution spectra using the James Webb Space Telescope and the next generation of 30+ meter ground-based telescopes expected to be operational in the 2020s. To optimize the prospects, we shall need beforehand to have scanned the whole sky to identify the nearest earthlike planets. Even for these, next-generation telescopes will have a hard job separating out the spectrum of the planet’s atmosphere from the spectrum of the hugely brighter central star.

Conjectures about advanced or intelligent life are of course far more shaky than those about simple life. But the firmest guesses that we can make are based on extrapolating the far future of Earth-based life. I would argue that this suggests two things about the entities that SETI searches could reveal.

(a) They will not be ‘organic’ or biological.

(b) They will not remain on the planet where their biological precursors lived.

FAR FUTURE OF EARTH-SOURCED INTELLIGENCE

During this century, the entire Solar System – planets, moons and asteroids – will be explored by flotillas of tiny robotic craft. The next step would be the deployment of large-scale robotic fabricators, which can construct and assemble large structures in space (and fabrication in space will be a better use of materials mined from asteroids or the Moon than bringing them back to Earth). The Hubble Telescope’s successors, with huge gossamer-thin mirrors assembled under zero gravity, will further expand our vision of stars, galaxies and the wider cosmos.

But what role will humans play? There’s no denying that NASA’s Curiosity rover, now trundling across a giant Martian crater, may miss startling discoveries that no human geologist could overlook. But robotic techniques are advancing fast, allowing ever more sophisticated unmanned probes – and, later in the century, robotic fabricators will be building huge lightweight structures in space. The practical case for manned spaceflight gets ever-weaker with each advance in robotics and miniaturization. If some people now living one day walk on Mars (as I hope they will) it will be as an adventure, and as a step towards the stars.

The current cost gap between manned and unmanned missions is huge. Unless motivated by prestige and bankrolled by superpowers, manned missions beyond the Moon will perforce be cut-price ventures, accepting high risks – perhaps even ‘one-way tickets’. These missions will be privately funded; no Western government agency would expose civilians to such hazards. There would, despite the risks, be many volunteers – driven by the same motives as early explorers, mountaineers, and the like. But don’t ever expect mass emigration. No place in our Solar system offers an environment even as clement as the Antarctic or the top of Everest. Space doesn’t offer an escape from Earth’s problems.

Nonetheless, a century or two from now, there may be small groups of pioneers living independently of the Earth – on Mars or on asteroids. Whatever ethical constraints we impose here on the ground, we should surely wish these adventurers good luck in genetically modifying their progeny to adapt to alien environments. This might be the first step towards divergence into a new species: the beginning of the post-human era. And genetic modification would be supplemented by cyborg technology – indeed there may be a transition to fully inorganic intelligences.

(As a parenthetic comment, I’d note that the most crucial impediment to routine space flight, even to Earth orbit and still more for those venturing further, stems from the intrinsic inefficiency of chemical fuel, and the consequent requirement to carry a weight of fuel far exceeding that of the payload. So long as we are dependent on chemical fuels, interplanetary travel will remain a challenge. It’s interesting to note, incidentally, that this is a generic constraint, based on fundamental chemistry, on any organic intelligence that had evolved on another planet. If a planet’s gravity is strong enough to retain an atmosphere at a temperature where water doesn’t freeze and metabolic reactions aren’t too slow, then lifting a molecule off that planet consumes the energy released by burning more than one molecule of chemical fuel.)
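To make the energetic point concrete, here is a rough comparison for Earth (the propellant figure is an illustrative round number for hydrogen/oxygen, added for this sketch rather than taken from the text):

\[
E_{\mathrm{esc}} \approx \frac{GM_\oplus}{R_\oplus} \approx 6.3\times10^{7}\ \mathrm{J\,kg^{-1}},
\qquad
E_{\mathrm{chem}}(\mathrm{H_2{+}O_2}) \approx 1.3\times10^{7}\ \mathrm{J\,kg^{-1}},
\qquad
\frac{E_{\mathrm{esc}}}{E_{\mathrm{chem}}} \approx 5 .
\]

So, even before rocket-equation losses are counted, each kilogram (or molecule) lifted out of Earth’s gravitational well consumes the chemical energy of several kilograms (or molecules) of the best propellant.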

Nuclear energy (or, more futuristically, matter/antimatter annihilation) could be a transformative fuel. But even then, the transit time to even the nearest stars exceeds a human lifetime. Interstellar travel (except for unmanned probes, DNA samples, etc.) is therefore an enterprise for post-humans. They could be silicon-based. Alternatively, they could be organic creatures who had won the battle with death, or perfected the techniques of hibernation or suspended animation.

Few doubt that machines will gradually surpass more and more of our distinctively human capabilities – or enhance them via cyborg technology. Disagreements are basically about the timescale – the rate of travel, not the direction of travel. The cautious amongst us envisage timescales of centuries rather than decades for these transformations. Be that as it may, the timescales for technological advance are but an instant compared to the timescales of the Darwinian selection that led to humanity’s emergence – and (more relevantly) they are less than a millionth of the vast expanses of cosmic time lying ahead. So the outcomes of future technological evolution will surpass humans by as much as we (intellectually) surpass a bug.

But we humans shouldn’t feel too humbled. Even though we are surely not the terminal branch of an evolutionary tree, we could be of special cosmic significance for jump-starting the transition to silicon-based (and potentially immortal) entities, spreading their influence far beyond the Earth, and far transcending our limitations.

Philosophers debate whether “consciousness” is special to the wet, organic brains of humans, apes and dogs. Might it be that robots, even if their intellects seem superhuman, will still lack self-awareness or inner life? The answer to this question crucially affects how we react to the far-future scenario I’ve sketched. If the machines are zombies, we would not accord them the same value as humans, and the post-human future would seem bleak. But if they are conscious, we should surely welcome the prospect of their future hegemony.

The far future will bear traces of humanity, just as our own age retains influences of ancient civilizations. Humans and all they have thought might be a transient precursor to the deeper cogitations of another culture — one dominated by machines, extending deep into the future and spreading far beyond Earth.

I think it’s likely that the machines will gain dominance on Earth – perhaps indeed before the stage when any self-sustaining human colony gets established away from our planet. This is because there are chemical and metabolic limits to the size and processing power of ‘wet’ organic brains. Maybe we’re close to these already. But no such limits constrain silicon based computers (still less, perhaps, quantum computers): for these, the potential for further development could be as dramatic as the evolution from monocellular organisms to humans. So, by any definition of ‘thinking’, the amount and intensity that’s done by organic human-type brains will be utterly swamped by the cerebrations of AI. Moreover, the Earth’s biosphere in which organic life has symbiotically evolved is not a constraint for advanced AI. Indeed it is far from optimal – interplanetary and interstellar space will be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological ‘brains’ may develop insights as far beyond our imaginings as string theory is for a mouse.

Abstract thinking by biological brains has underpinned the emergence of all culture and science. But this activity – spanning tens of millennia at most – will be a brief precursor to the more powerful intellects of the inorganic post-human era.

Human brains have changed little since our ancestors roamed the African savannah and coped with the challenges that life then presented. It’s surely remarkable that these brains have allowed us to make sense of the quantum and the cosmos – far removed from the ‘common sense’ everyday world in which we evolved. Nonetheless, some key features of reality may be beyond our conceptual grasp. Scientific frontiers are advancing fast. Answers to many current mysteries will surely come into focus, but we may at some point ‘hit the buffers’. Some insights may have to await post-human intelligence. There may be phenomena, crucial to our long-term destiny, that we are not aware of, any more than a monkey comprehends the nature of stars and galaxies. Some ‘brains’ may structure their consciousness in a fashion that we can’t conceive, and have a quite different perception of reality.

In cosmological terms (or indeed in a Darwinian timeframe) a millennium is but an instant. So let us ‘fast forward’ not even for a few millennia, but for an ‘astronomical’ timescale millions of times longer than that. The ‘ecology’ of stellar births and deaths in our Galaxy will gradually slow, until jolted by the ‘environmental shock’ of an impact with Andromeda, maybe four billion years hence. The debris of our Galaxy, Andromeda and their smaller companions within the Local Group will thereafter aggregate into one amorphous galaxy. Distant galaxies will not only move further away, but recede faster and faster until they disappear – rather as objects falling onto a black hole encounter a horizon, beyond which they are lost from view and causal contact.

But the remnants of our Local Group could continue for far longer – time enough, perhaps, for Kardashev Type III phenomena to emerge as the culmination of the long-term trend for living systems to gain complexity and ‘negative entropy’. All the atoms that were once in stars and gas could be transformed into structures as intricate as a living organism or a silicon chip but on a cosmic scale.

But even these speculations don’t take us to the utter limits. I have assumed that the universe itself will expand, at a rate that no future entities have power to alter. And that everything is in principle understandable as a manifestation of the basic laws governing particles, space and time that have been disclosed by contemporary science. Some science fiction authors envisage stellar-scale engineering to create black holes and wormholes – concepts far beyond any technological capability that we can envisage, but not in violation of these basic physical laws. But are there new ‘laws’ awaiting discovery? And will the present ‘laws’ be immutable, even to a Type III intelligence able to draw on galactic-scale resources?

Post-human intelligences (autonomously-evolving artefacts) will achieve the processing power to simulate living things – even entire worlds. These super or hyper-computers would have the capacity to simulate not just a simple part of reality, but a large fraction of an entire universe.

And then of course the question arises: if these simulations exist in far larger numbers than universes themselves, could we be in one of them? Could we ourselves not be part of what we think of as bedrock physical reality? Could we be ideas in the mind of some supreme being who is running a simulation? Indeed, if the simulations outnumber the universes, as they would if one universe contained many computers making many simulations, then the likelihood is that we are ‘artificial life’ in this sense. This concept opens up the possibility of a new kind of ‘virtual time travel’, because the advanced beings creating the simulation can, in effect, rerun the past. It’s not a time-loop in a traditional sense: it’s a reconstruction of the past, allowing advanced beings to explore their history.
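One minimal way to state this counting argument, under the added assumption that we are equally likely to be any one observer, simulated or not:

\[
P(\text{we are simulated}) \approx \frac{N_{\mathrm{sim}}}{N_{\mathrm{sim}}+N_{\mathrm{real}}} \;\longrightarrow\; 1
\quad \text{when } N_{\mathrm{sim}} \gg N_{\mathrm{real}},
\]

where \(N_{\mathrm{sim}}\) and \(N_{\mathrm{real}}\) count observers in simulated and ‘bedrock’ universes respectively.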

These ideas would have the extraordinary consequence that we may not be part of the deepest reality: we may be a simulation. The possibility that we are creations of some supreme (or super) being blurs the boundary between physics and idealist philosophy, between the natural and the supernatural. We may be in the matrix rather than directly manifesting the basic physical laws.

SETI: PROSPECTS AND TECHNIQUES

The scenarios I’ve just described would have the consequence – a boost to human self-esteem! – that even if life had originated only on the Earth, it would not remain a trivial feature of the cosmos: humans may be closer to the beginning than to the end of a process whereby ever more complex intelligence spreads through the Galaxy. But of course there would in that case be no ‘ET’ at the present time.

Suppose however that there are many other planets where life began; and suppose that on some of them Darwinian evolution followed a similar track. Even then, it’s highly unlikely that the key stages would be synchronized. If the emergence of intelligence and technology on a planet lags significantly behind what has happened on Earth (because the planet is younger, or because the ‘bottlenecks’ have taken longer to negotiate there than here) then that planet would plainly reveal no evidence of ET. But life on a planet around a star older than the Sun could have had a head-start of a billion years or more. Thus it may already have evolved much of the way along the futuristic scenarios outlined in the last section.

One generic feature of these scenarios is that ‘organic’ human-level intelligence is just a brief interlude before the machines take over. The history of human technological civilization is measured in millennia (at most) – and it may be only one or two more centuries before humans are overtaken or transcended by inorganic intelligence, which will then persist, continuing to evolve, for billions of years. This suggests that if we were to detect ET, it would be far more likely to be inorganic: we would be most unlikely to ‘catch’ alien intelligence in the brief sliver of time when it was still in organic form.

SETI searches are surely worthwhile, despite the heavy odds against success, because the stakes are so high. That’s why we should surely acclaim the launch of Breakthrough Listen – a major ten-year commitment by the Russian investor Yuri Milner to buy time on the world’s best radio telescopes and develop instruments to scan the sky in a more comprehensive and sustained fashion than ever before. Breakthrough Listen will carry out the world’s deepest and broadest search for extraterrestrial technological life using several of the world’s largest professional radio and optical telescopes. The project will deploy radio dishes at Green Bank and at Parkes – and hopefully others including the Arecibo Observatory. The radio telescopes will be used to search for non-natural radio transmissions from nearby and distant stars, from the plane of the Milky Way, from the Galactic Centre, and from nearby galaxies. They will search over a wide frequency bandwidth from 100 MHz to 50 GHz using advanced signal processing equipment developed by a team centered at UC Berkeley.

SETI searches seek some electromagnetic transmission that is manifestly artificial. But even if the search succeeded (and few of us would bet more than one percent on this), it would still in my view be unlikely that the ‘signal’ would be a decodable message. It would more likely represent a byproduct (or even a malfunction) of some super-complex machine far beyond our comprehension that could trace its lineage back to alien organic beings (which might still exist on their home planet, or might long ago have died out). The only type of intelligence whose messages we could decode would be the (perhaps small) subset that used a technology attuned to our own parochial concepts.

Even if intelligence were widespread in the cosmos, we may only ever recognize a small and atypical fraction of it. Some ‘brains’ may package reality in a fashion that we can’t conceive. Others could be living contemplative lives, perhaps deep under some planetary ocean, doing nothing to reveal their presence. It makes sense to focus searches first on Earth-like planets orbiting long-lived stars. But science fiction authors remind us that there are more exotic alternatives. In particular, the habit of referring to ET as an ‘alien civilization’ may be too restrictive. A ‘civilization’ connotes a society of individuals: in contrast, ET might be a single integrated intelligence. Even if signals were being transmitted, we may not recognize them as artificial because we may not know how to decode them. A radio engineer familiar only with amplitude-modulation might have a hard time decoding modern wireless communications. Indeed, compression techniques aim to make the signal as close to noise as possible – insofar as a signal is predictable, there’s scope for more compression.
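As a sketch of why an efficiently compressed transmission looks noise-like, the short Python snippet below (the toy ‘message’ and vocabulary are invented for this illustration) compares the byte-level entropy of a highly predictable message with that of its compressed form:

import math
import random
import zlib
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy of the byte distribution (maximum is 8)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A highly structured 'message': random sentences over a tiny vocabulary.
random.seed(0)
words = ["the", "signal", "from", "the", "star", "is", "strong", "today"]
message = " ".join(random.choice(words) for _ in range(20000)).encode()

compressed = zlib.compress(message, level=9)

print(f"original:   {len(message):7d} bytes, {entropy_bits_per_byte(message):.2f} bits/byte")
print(f"compressed: {len(compressed):7d} bytes, {entropy_bits_per_byte(compressed):.2f} bits/byte")
# The compressed stream is both far shorter and statistically much closer to
# uniform noise: the predictability has been squeezed out, which is exactly
# what makes such a signal hard to recognize as artificial.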

Perhaps the Galaxy already teems with advanced life, and our descendants will ‘plug in’ to a galactic community – as rather junior members. On the other hand, Earth’s intricate biosphere may be unique and the searches may fail. This would disappoint the searchers. But it would have an upside. Humans could then be less cosmically modest. Our tiny planet – this pale blue dot floating in space – could be the most important place in the entire cosmos. Either way, our cosmic habitat seems ‘tuned’ to be an abode for life. Even if we are now alone in the universe, we may not be the culmination of this ‘drive’ towards complexity and consciousness.

The focus of the ‘Breakthrough Listen’ project is on the radio and optical parts of the spectrum. But of course, in our state of ignorance about what might be out there, we should clearly encourage searches in all wavebands (e.g. the X-ray band) and also be alert for artefacts and other evidence of non-natural phenomena. I don’t think even the optimistic SETI searchers would rate the chance of success as more than a few percent – and most of us are more pessimistic. But the stakes are so high that it’s worth a gamble; we’d surely all like to see searches begun in our lifetime.

Finally, there are two familiar maxims that pertain to this quest: first, ‘extraordinary claims require extraordinary evidence’; and second, ‘absence of evidence isn’t evidence of absence’.

REFERENCES

[1] Wallace, A. R. 1903. Man’s Place in the Universe. London: Chapman and Hall, pp. 15–19.



SUPERINTELLIGENT AI AND THE POSTBIOLOGICAL COSMOS APPROACH

Susan Schneider
Department of Philosophy and Cognitive Science Program, The University of Connecticut
Center for Theological Inquiry, Princeton
Technology and Ethics Group, Yale University
susansdr@gmail.com

ABSTRACT

The postbiological approach in astrobiology has been largely independent of the discussions of superintelligence in the AI literature, despite the increasing attention to superintelligent AI in both academe and the media. In this paper, I bring these issues together. In my view, one route to understanding superintelligent alien civilizations, as well as superintelligence on Earth (should either ever exist), could involve identifying general features of computational systems, without which a superintelligence would be far less efficient. By drawing from Nick Bostrom’s work on superintelligent AI on Earth, as well as ideas from computational neuroscience, I will attempt to identify some goals and cognitive capacities likely to be possessed by superintelligent beings. I will then comment on some social implications of the postbiological approach.

INTRODUCTION

Thinking about how aliens in other technological societies might think, if they exist at all, is obviously speculative, even for a philosopher. After all, while many exoplanets may be habitable, we do not know whether any are inhabited. We do not currently have an agreed-upon account of the origin of life on Earth, and we do not know how easy it is for life to originate elsewhere. And even if microbial life exists on many exoplanets, perhaps it is rare for microbial life to evolve into intelligent life. Or, perhaps it isn’t rare for intelligence to evolve, but civilizations do not survive their own technological maturity. Perhaps we are one of only a few technological civilizations in the universe, or perhaps we are alone.

But I am going to assume, optimistically, that advanced civilizations are out there. After all, if even one technological civilization exists, it is likely to be older than us, and it could have spread throughout the universe. Further, some proponents of the search for extraterrestrial intelligence (SETI) estimate that we will encounter alien intelligence within the next several decades. Even if you hold a more conservative estimate – say, that the chance of encountering alien intelligence in the next 50 years is 5 percent – the stakes for our species are high. Knowing that we are not alone in the universe would be a profound realization, and contact with an alien civilization could produce amazing technological innovations and cultural insights. It thus can be valuable to consider these questions, albeit with the goal of introducing possible routes to answering them, rather than producing definitive answers. So, let us ask: how might aliens think? Believe it or not, it’s possible to say something concrete in response to this question.

We can approach this issue by drawing from science and the humanities, rather than just science. In particular, I will draw from neuroscience, philosophy, astrobiology and artificial intelligence (AI). My point of departure is the intriguing position in astrobiology that the most intelligent alien civilizations may be postbiological, being synthetic superintelligences – creatures that are vastly smarter than humans in every respect: scientific reasoning, social skills, and more [1], [2], [3], [4], [5], [6].

The postbiological approach has been largely independent of the discussions of superintelligence, despite the increasing attention to superintelligent AI in both academe and the media [7]. Herein, I bring these issues together, drawing from [4]. In my view, to understand the most intelligent alien civilizations, as well as superintelligence on Earth, we can look for general features of computational systems, without which a superintelligence would be far less efficient. So using work on superintelligent AI on Earth, as well as ideas from computational neuroscience, I will briefly and provisionally attempt to identify some goals and cognitive capacities likely to be possessed by superintelligent beings.

Section One overviews the postbiological cosmos approach. Section Two discusses Nick Bostrom’s recent book on superintelligence, which focuses on the genesis of superintelligent AI (“SAI”) on Earth; as it happens, many of Bostrom’s observations are informative in the present context. I then isolate a specific type of superintelligence that is of particular import in the context of alien superintelligence, biologically-inspired superintelligences (“BISAs”). Section Three concludes by raising some issues for future reflection.

THE POSTBIOLOGICAL COSMOS APPROACH IN ASTROBIOLOGY

Our culture has long depicted aliens as humanoid creatures with small, pointy chins, massive eyes, and large heads, apparently to house brains that are larger than ours. Paradigmatically, they are “little green men.” While we are aware that our culture is anthropomorphizing, I imagine that my suggestion that aliens are supercomputers may strike you as far-fetched. So what is my rationale for the view that the most intelligent alien civilizations will have members that possess SAI? I offer three observations that, together, motivate this conclusion.

(1) The short window observation. Once a society creates the technology that could put them in touch with the cosmos, they are only a few hundred years away from changing their own paradigm from biology to AI [3], [6], [2]. This “short window” makes it more likely that the aliens we encounter would be postbiological.

The short-window observation is supported by human cultural evolution, at least thus far. Our first radio signals date back only about 120 years, and space exploration is only about 50 years old, but we are already immersed in digital technology, such as cell-phones and laptop computers. It is probably a matter of less than 50 years before sophisticated internet connections are wired directly into our brains. Indeed, implants for Parkinson’s are already in use, and in the United States the Defense Advanced Research Projects Agency (DARPA) has started to develop neural implants that interface directly with the nervous system, regulating conditions such as post-traumatic stress disorder, arthritis, depression, and Crohn’s disease. DARPA’s program, called “ElectRx,” aims to replace certain medications with “closed-loop” neural implants, implants that continually assess the state of one’s health, and provide the necessary nerve stimulation to keep one’s biological systems functioning properly [8]. Eventually, implants will be developed to enhance normal brain functioning, rather than for medical purposes.

You may object that this argument employs “N = 1 reasoning,” generalizing from the human case to the case of alien civilizations. But it strikes me as unwise to discount arguments based on the human case. Human civilization is the only one we know of and we had better learn from it. It is no great leap to claim that other civilizations will develop technologies to advance their intelligence and survival. This is especially true if the alien civilizations evolved under evolutionary pressures similar to those on Earth. And, as I will explain in a moment, synthetic intelligence will likely outperform unenhanced brains.

A second objection to my short-window observation rightly points out that nothing I have said thus far suggests that humans will be superintelligent. I have merely said that future humans will be posthuman. While I offer support for the view that our own cultural evolution suggests that humans will eventually be postbiological, this does not show that advanced alien civilizations will reach superintelligence. So even if one is comfortable reasoning from the human case, the human case does not support the position that the members of advanced alien civilizations will be superintelligent.

This is a correct reading of my first observation. Whether or not they would be superintelligent is addressed by the second.

(2) The greater age of alien civilizations. Proponents of SETI have often concluded that alien civilizations would be much older than our own: “… all lines of evidence converge on the conclusion that the maximum age of extraterrestrial intelligence would be billions of years, specifically [it] ranges from 1.7 billion to 8 billion years” ([2] p 468). If civilizations are millions or billions of years older than us, many would be vastly more intelligent than we are. By our standards, many would be superintelligent. We are galactic babies.

But would they be forms of AI, as well as forms of superintelligence? I believe so. Even if they were biological, with merely enhanced biological brains, their superintelligence would have been reached by artificial means, and we could regard them as having forms of “artificial intelligence.” But I suspect something stronger than this, which leads me to my third observation:

(3) It is likely that these synthetic beings will not be biologically-based. Currently, silicon appears to be a better medium for information processing than the brain itself, and future materials may even prove superior to silicon. Neurons reach a peak speed of about 200 Hz, which is seven orders of magnitude slower than current microprocessors ([7] p 59). While the brain can compensate for some of this with massive parallelism, features such as “hubs,” and so on, crucial mental capacities, such as attention, rely upon serial processing, which is incredibly slow, and has a maximum capacity of about seven manageable chunks [9]. Further, the number of neurons in a human brain is limited by cranial volume and metabolism, but computers can occupy entire buildings or cities, and can even be remotely connected across the globe [7]. Of course, the human brain is far more intelligent than any modern computer. But intelligent machines can in principle be constructed by reverse engineering the brain, and improving upon its algorithms.
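For the arithmetic behind ‘seven orders of magnitude’ (taking 2 GHz as a representative clock rate for a current microprocessor, an illustrative figure not given in the text):

\[
\frac{2\times10^{9}\ \mathrm{Hz}}{2\times10^{2}\ \mathrm{Hz}} = 10^{7}.
\]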

In sum: I have observed that there seems to be a short window between the development of the technology needed to access the cosmos and the development of postbiological minds and AI. I then observed that we are galactic babies: extraterrestrial civilizations are likely to be vastly older than us, and thus they would have already reached not just postbiological life, but superintelligence. Finally, I noted that they would likely have SAI, because silicon is a superior medium for superintelligence. From this I conclude that many advanced alien civilizations will be populated by forms with SAI.

Even if I am wrong – even if the majority of alien civilizations turn out to be biological – it may be that the most intelligent alien civilizations will be ones in which the inhabitants are SAI. Further, creatures that are silicon-based, rather than biologically-based, are more likely to endure space travel, having durable systems that are practically immortal, so they may be the kind of creatures we first encounter.

HOW MIGHT SUPERINTELLIGENT ALIENS THINK?

There has been a good deal of attention by computer scientists, philosophers, and the media on the topic of superintelligent AI. Nick Bostrom’s recent book on superintelligence focuses on the development of superintelligence on Earth, but we can draw from his thoughtful discussion [7]. Bostrom distinguishes three kinds of superintelligence:

(1) Speed superintelligence – even a human emulation could in principle run so fast that it could write a PhD thesis in an hour.

(2) Collective superintelligence – the individual units need not be superintelligent, but the collective performance of the individuals outstrips human intelligence.

(3) Quality superintelligence – at least as fast as human thought, and vastly smarter than humans in virtually every domain.

Any of these kinds could exist alongside one or more of the others.

An important question is whether we can identify common goals that these types of superintelligences may share. Bostrom suggests:

The Orthogonality Thesis:

“Intelligence and final goals are orthogonal – more or less any level of intelligence could in principle be combined with more or less any final goal.” ([7] p 107)

Bostrom is careful to underscore that a great many unthinkable kinds of SAI could be developed. At one point, he raises a sobering example of a superintelligence with the final goal of manufacturing paper clips ([7] pp 107–108, 123–125). While this may initially strike you as a harmless endeavor, although hardly a life worth living, Bostrom points out that a superintelligence could utilize every form of matter on Earth in support of this goal, wiping out biological life in the process. Indeed, Bostrom warns that superintelligence emerging on Earth could be of an unpredictable nature, being “extremely alien” to us ([7] p 29). He lays out several scenarios for the development of SAI. For instance, SAI could be arrived at in unexpected ways by clever programmers, and not be derived from the human brain whatsoever. He also takes seriously the possibility that Earthly superintelligence could be biologically inspired, that is, developed from reverse engineering the algorithms that cognitive science says describe the human brain, or from scanning the contents of human brains and transferring them to a computer (i.e. “uploading”).

Although the final goals of superintelligence are difficult to predict, Bostrom singles out several instrumental goals as being likely, given that they support any final goal whatsoever:

The Instrumental Convergence Thesis:

Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents. ([7] p 109)

The goals that he identifies are resource acquisition, technological perfection, cognitive enhancement, self-preservation, and goal content integrity (i.e. that a superintelligent being’s future self will pursue and attain those same goals). He underscores that self-preservation can involve group or individual preservation, and that it may play second-fiddle to the preservation of the species the AI was designed to serve ([7] p 109).

Let us call an alien superintelligence that is based on reverse engineering an alien brain, including uploading it, a biologically-inspired superintelligent alien (“BISA”). Although BISAs are inspired by the brains of the original species that the superintelligence is derived from, a BISA’s algorithms may depart from those of their biological model at any point.

BISAs are of particular interest in the context of alien superintelligence. For if Bostrom is correct that there are many ways superintelligence can be built, but a number of alien civilizations develop superintelligence from uploading or other forms of reverse engineering, it may be that BISAs are the most common form of alien superintelligence out there. This is because there are many kinds of superintelligence that can arise from raw programming techniques employed by alien civilizations. (Consider, for instance, the diverse range of AI programs under development on Earth, many of which are not modelled after the human brain). This may leave us with a situation in which the class of SAIs is highly heterogeneous, with members generally bearing little resemblance to each other. It may turn out that of all SAIs, BISAs bear the most resemblance to each other. In other words, BISAs may be the most cohesive subgroup because the other members are so different from each other.

Here, you may suspect that because BISAs could be scattered across the galaxy and generated by multitudes of species, there is little interesting that we can say about the class of BISAs. But notice that BISAs have two features that may give rise to common cognitive capacities and goals:

(1) BISAs are descended from creatures that had motivations like: find food, avoid injury and predators, reproduce, cooperate, compete, and so on.

(2) The life forms that BISAs are modeled from have evolved to deal with biological constraints like slow processing speed and the spatial limitations of embodiment.

Could (1) or (2) yield traits common to members of many superintelligent alien civilizations? I suspect so.

Consider (1). Intelligent biological life tends to be primarily concerned with its own survival and reproduction, so it is more likely that BISAs would have final goals involving their own survival and reproduction, or at least the survival and reproduction of the members of their society. If BISAs are interested in reproduction, we might expect that, given the massive amounts of computational resources at their disposal, BISAs would create simulated universes stocked with artificial life and even intelligence or superintelligence. If these creatures were intended to be “children” they may retain the goals listed in (1) as well.

You may object that it is useless to theorize about BISAs, as they can change their basic architecture in numerous, unforeseen ways, and any biologically-inspired motivations can be constrained by programming. There may be limits to this, however. If a superintelligence is biologically-based, it may have its own survival as a primary goal. In this case, it may not want to change its architecture fundamentally, but stick to smaller improvements. It may think: when I fundamentally alter my architecture, I am no longer me [10]. Uploads, for instance, may be especially inclined not to alter the traits that were most important to them during their biological existence.

Consider (2). The designers of the superintelligence, or a self-improving superintelligence itself, may move away from the original biological model in all sorts of unforeseen ways, although I have noted that a BISA may not wish to alter its architecture fundamentally. But we could look for cognitive capacities that are useful to keep; cognitive capacities that sophisticated forms of biological intelligence are likely to have, and which enable the superintelligence to carry out its final and instrumental goals. We could also look for traits that are not likely to be engineered out, as they do not detract from the BISA’s goals.

If (2) is correct, we might expect the following, for instance.

(i) Learning about the computational structure of the brain of the species that created the BISA can provide insight into the BISA’s thinking patterns. One influential means of understanding the computational structure of the brain in cognitive science is via “connectomics,” a field that seeks to provide a connectivity map or wiring diagram of the brain [11]. While it is likely that a given BISA will not have the same kind of connectome as the members of the original species, some of the functional and structural connections may be retained, and interesting departures from the originals may be found.

(ii) BISAs may have viewpoint-invariant representations. At a high level of processing your brain has internal representations of the people and objects that you interact with that are viewpoint-invariant. Consider walking up to your front door. You’ve walked this path hundreds, maybe thousands of times, but technically, you see things from slightly different angles each time, as you are never positioned in exactly the same way twice. You have mental representations that are at a relatively high level of processing and are viewpoint-invariant. It seems difficult for biologically-based intelligence to evolve without viewpoint-invariant representations, as they enable categorization and prediction [12]. Such representations arise because a system that is mobile needs a means of identifying items in its ever-changing environment, so we would expect biologically-based systems to have them. A BISA would have little reason to give up viewpoint-invariant representations insofar as it remains mobile or has mobile devices sending it information remotely.
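A toy illustration of what a viewpoint-invariant code can look like (the signature scheme below is invented for this sketch, not drawn from the text): describing an object by the sorted pairwise distances among its parts yields the same representation wherever the object appears in the scene.

import itertools
import math

def invariant_signature(points):
    """A crude viewpoint-invariant code: sorted pairwise distances between parts."""
    return tuple(sorted(round(math.dist(p, q), 6)
                        for p, q in itertools.combinations(points, 2)))

# The 'same' triangular object encountered from two different viewpoints
# (shifted across the visual scene): its signature does not change.
triangle_here  = [(0, 0), (3, 0), (0, 4)]
triangle_there = [(10, 7), (13, 7), (10, 11)]

print(invariant_signature(triangle_here) == invariant_signature(triangle_there))  # True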

(iii) BISAs will have language-like mental representations that are recursive and combinatorial. Notice that human thought has the crucial and pervasive feature of being combinatorial. Consider the thought “wine is better in Italy than in China.” You probably have never had this thought before, but you were able to understand it. The key is that the thoughts are combinatorial because they are built out of familiar constituents, and combined according to rules. The rules apply to constructions out of primitive constituents, that are themselves constructed grammatically, as well as to the primitive constituents themselves. Grammatical mental operations are incredibly useful: it is the combinatorial nature of thought that allows one to understand and produce these sentences on the basis of one’s antecedent knowledge of the grammar and atomic constituents (e.g. wine, China). Relatedly, thought is productive: in principle, one can entertain and produce an infinite number of distinct representations because the mind has a combinatorial syntax [13].

Brains need combinatorial representations because there are infinitely many possible linguistic representations, and the brain only has a finite storage space. Even a superintelligent system would benefit from combinatorial representations. Although a superintelligent system could have computational resources that are so vast that it is mostly capable of pairing up utterances or inscriptions with a stored sentence, it would be unlikely that it would trade away such a marvelous innovation of biological brains. If it did, it would be less efficient, since there would always be the potential for a sentence not to be in its storage, which must be finite.
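As a sketch of the combinatorial point, the toy grammar below (the vocabulary and rules are invented for this illustration) generates dozens of distinct, understandable sentences – including ‘wine is better in Italy than in China’ – from a handful of stored rules and words, rather than from a stored list of whole sentences:

import itertools

# A toy recursive grammar: finitely many rules and words, many sentences.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["wine"], ["Italy"], ["China"]],
    "VP":  [["is", "ADJ"], ["is", "ADJ", "in", "NP"],
            ["is", "ADJ", "in", "NP", "than", "in", "NP"]],
    "ADJ": [["better"], ["rarer"]],
}

def expand(symbol, depth=0, max_depth=4):
    """Yield every word sequence derivable from `symbol`, up to a depth bound."""
    if symbol not in GRAMMAR:          # a terminal word
        yield [symbol]
        return
    if depth > max_depth:
        return
    for rule in GRAMMAR[symbol]:
        # Expand each constituent, then combine the expansions according to the rule.
        parts = [list(expand(s, depth + 1, max_depth)) for s in rule]
        for combo in itertools.product(*parts):
            yield [word for part in combo for word in part]

sentences = {" ".join(words) for words in expand("S")}
print(len(sentences), "distinct sentences from", sum(len(r) for r in GRAMMAR.values()), "rules")
print("wine is better in Italy than in China" in sentences)  # True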

(iv) BISAs may have one or more global workspaces. When you search for a fact or concentrate on something, your brain grants that sensory or cognitive content access to a “global workspace” where the information is broadcast to attentional and working memory systems for more concentrated processing, as well as to the massively parallel channels in the brain [14]. The global workspace operates as a singular place where important information from the senses is considered in tandem, so that the creature can make all-things-considered judgments and act intelligently, in light of all the facts at its disposal. In general, it would be inefficient to have a sense or cognitive capacity that was not integrated with the others, because the information from this sense or cognitive capacity would be unable to figure in predictions and plans based on an assessment of all the available information.
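A minimal sketch of the broadcast idea (the module names, contents, and salience scheme are invented for this illustration): specialist modules compete for the workspace, and whichever content wins is broadcast back to every module so that it can figure in all-things-considered judgments.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Content:
    source: str       # which module produced this content
    payload: str      # the information itself
    salience: float   # how strongly it competes for the workspace

class Module:
    def __init__(self, name: str, produce: Callable[[], Content]):
        self.name, self.produce = name, produce
        self.inbox: List[Content] = []   # everything broadcast to this module

    def receive(self, content: Content) -> None:
        self.inbox.append(content)

def workspace_cycle(modules: List[Module]) -> Content:
    """One cycle: modules compete; the most salient content is broadcast to all."""
    candidates = [m.produce() for m in modules]
    winner = max(candidates, key=lambda c: c.salience)
    for m in modules:
        m.receive(winner)
    return winner

modules = [
    Module("vision",  lambda: Content("vision",  "bright flash, upper left", 0.9)),
    Module("hearing", lambda: Content("hearing", "faint hum", 0.3)),
    Module("memory",  lambda: Content("memory",  "flashes preceded storms before", 0.6)),
]

winner = workspace_cycle(modules)
print("broadcast:", winner.source, "->", winner.payload)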

(v) A BISA’s mental processing can be understood via functional decomposition. As complex as alien superintelligence may be, humans may be able to use the method of functional decomposition as an approach to understanding it. A key feature of computational approaches to the brain is that cognitive and perceptual capacities are understood by decomposing the particular capacity into their causally organized parts, which themselves can be understood in terms of the causal organization of their parts. This is the aforementioned “method of functional decomposition” and it is a key explanatory method in cognitive science. It is difficult to envision a complex thinking machine not having a program consisting of causally interrelated elements each of which consists in causally organized elements.

All this being said, superintelligent beings are by definition beings that are superior to humans in every domain. While a creature can have superior processing that still basically makes sense to us, it may be that a given superintelligence is so advanced that we cannot understand any of its computations whatsoever. It may be that any truly advanced civilization will have technologies that will be indistinguishable from magic, as Arthur C. Clarke once suggested [15]. I obviously speak to the scenario in which the SAI’s processing makes some sense to us, one in which developments from cognitive science yield a glimmer of understanding into the complex mental lives of certain BISAs.

SOME ISSUES FOR FURTHER REFLECTION

In the spirit of encouraging future discussion, I will close by raising issues for future reflection.

Given the vast variety of possible intelligences, it is an intriguing question to ask whether creatures with different sensory modalities may have the same kinds of thoughts or think in similar ways to humans. There is a debate in the field of philosophy of mind that is relevant to this question. Contemporary neo-empiricists, such as the philosopher Jesse Prinz, have argued that all concepts are modality specific, being couched in a particular sensory format, such as vision [16]. If he’s correct, it may be difficult to understand the thinking of creatures with vastly different sensory experiences than us. But I am skeptical. For instance, consider my aforementioned comment on viewpoint-invariant representations. At a higher level of processing, information seems to become less viewpoint dependent. Similarly, it becomes less modality specific, as with the processing in the human brain, as it ascends from particular sensory modalities to the brain’s association areas and into working memory and attention, where it is in a more neutral format.

But these issues are subtle and deserve a lengthier treatment. I pursued issues related to this topic in my monograph, The Language of Thought, which looked at whether thinking is independent of the kind of perceptual modalities humans have and is also prior to the kind of language we speak [12]. In the context of alien life or SAI, an intriguing question is the following: If there is an inner mental language that is independent of sensory modalities, having the aforementioned combinatorial structure, would this be some sort of common ground, should we encounter other advanced intelligences? (Many of these issues apply to the case of intelligent biological alien life as well, and could also be helpful in the context of the development of SAI on Earth.)

The ethical and metaphysical issues surrounding postbiological intelligence concern me greatly. Perhaps the best way to introduce them is to note that the post-biological cosmos approach involves a shift in our usual perspective on intelligent life in the universe. Normally, we think of encountering alien intelligence as encountering creatures with radically different biological features and sensory experiences. The shift of focus is twofold: first, attention moves away from biology to superintelligent AI, and this will involve theorizing about the computational abilities of advanced artificial intelligence. Second, as we reflect on the nature of postbiological intelligence, we must be keenly aware that we may be reflecting upon the nature of our own descendants as well as that of aliens. In essence, the line between “us” and “them” blurs, and we are left with the difficult task of understanding the computations and behaviors of creatures far more advanced than we are.

What does all this mean? In contrast to Ray Kurzweil’s utopian enthusiasm for the singularity, I do not see, in the astrobiology literature, normative evaluations of whether a post-biological existence is desirable for our species, and there has been little discussion of the singularity within contemporary metaphysics and philosophy of mind. But it is important to reflect upon the ethical, philosophical and social implications of all this. Would superintelligent AI, including our own postbiological descendants, be selves or persons? Could they be conscious? My own view is that the question of whether AI could be conscious is key – if the synthetic being in question is not capable of consciousness, that is, if it does not feel like anything to be it, why would it be a self or person? I have discussed the issue of consciousness elsewhere [4], but since then I have become increasingly convinced that machine consciousness is an open question that cannot be settled today. In addition to the matter of whether the substrate in question (e.g., graphene, silicon) supports consciousness, the devil is in the details of the particular AI design. That is, we would have to determine whether the architecture of the particular AI in question even employs conscious thought. In humans, consciousness is associated with slower, more deliberative processing, and it is unclear whether a superintelligence would even need conscious processing, having already mastered so much. What would be novel to it? And would consciousness be associated with slower, deliberative processing in an AI in any case?

The science fiction treatment of androids may lead us to believe that machines can feel – for instance, consider the Samantha program in the film Her, or consider Asimov’s robot stories. But this is just science fiction, and the empirical and philosophical question of whether AI can be conscious remains open.

CONCLUSION

In this brief piece, I have discussed why it is likely that the alien civilizations we encounter will be forms of superintelligent AI (or “SAI”). I then turned to the difficult question of how such creatures might think, and provisionally attempted to identify some goals and cognitive capacities likely to be possessed by superintelligent beings. I also discussed Nick Bostrom’s recent book on superintelligence, which focuses on the genesis of SAI on Earth; as it happens, many of Bostrom’s observations are informative in the present context [7]. Finally, I isolated a type of superintelligence of particular import in the context of alien intelligence: biologically inspired superintelligences (“BISAs”). I urged that if any superintelligences we encounter are BISAs, work in computational neuroscience, cognitive neuroscience and philosophy of mind may provide resources for at least a rough understanding of their computations.

REFERENCES

[1] Cirkovic, M. and Bradbury, R. 2006, “Galactic Gradients, Postbiological Evolution and the Apparent Failure of SETI,” New Astronomy 11, pp. 628–639

[2] Dick, S. 2013, “Bringing Culture to Cosmos: the Postbiological Universe,” Cosmos and Culture: Cultural Evolution in a Cosmic Context, S. J. Dick and M. Lupisella eds., Washington, DC: NASA, online at http://history.nasa.gov/SP-4802.pdf

[3] Shostak, S. 2009, Confessions of an Alien Hunter, National Geographic (Washington, DC)

[4] Schneider, S. 2015, “Alien Minds,” in Discovery, Steven Dick, ed., Cambridge University Press (Cambridge)

[5] Davies, P. 2010, The Eerie Silence, Houghton Mifflin Harcourt (London)

[6] Bradbury, R., Cirkovic, M., and Dvorsky, G. 2011, “Dysonian Approach to SETI: A Fruitful Middle Ground?” Journal of the British Interplanetary Society, 64, pp. 156–165

[7] Bostrom, N. 2014, Superintelligence: Paths, Dangers, Strategies, Oxford University Press (Oxford)

[8] Guerrini, Federico 2014, “DARPA’s ElectRx Project: Self-Healing Bodies Through Targeted Stimulation Of The Nerves,” http://www.forbes.com/sites/federicoguerrini/2014/08/29/darpas-electrx-p... Forbes Magazine, 8/29/2014. Extracted Sept. 30, 2014

[9] Miller, G. 1956, “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information,” The Psychological Review, 63, pp. 81–97

[10] Schneider, S. 2011a, “Mindscan: Transcending and Enhancing the Brain,” Neuroscience and Neuroethics: Issues At the Intersection of Mind, Meanings and Morality, J. Giordano ed., Cambridge University Press (Cambridge)

[11] Seung, S. 2012, Connectome: How the Brain’s Wiring Makes Us Who We Are, Houghton Mifflin Harcourt (Boston)

[12] Hawkins, J. and Blakeslee, S. 2004, On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines, Times Books (New York)

[13] Schneider, S. 2011b, The Language of Thought: A New Philosophical Direction, MIT Press (Cambridge, MA)

[14] Baars, B. 2008, “The Global Workspace Theory of Consciousness,” The Blackwell Companion to Consciousness, M. Velmans and S. Schneider eds., Wiley-Blackwell (Boston), pp. 236–247

[15] Clarke, A. 1962, Profiles of the Future: An Inquiry into the Limits of the Possible, Harper and Row (New York)

[16] Prinz, J. 2004, Furnishing the Mind: Concepts and their Perceptual Basis, MIT Press (Cambridge, MA)



THINKING OUTSIDE THE SETI BOX

Seth Shostak
SETI Institute
189 Bernardo Ave.
Mountain View, CA 94043
seth@seti.org

INTRODUCTION

We consider the biological provincialism of traditional SETI, and why there are good arguments for thinking that the bulk of the intelligence in the cosmos is synthetic. Given this possibility, the SETI community should consider how to conduct a meaningful search for intelligence that is not constrained to habitable worlds. To that end, we consider some of the factors that might govern the behavior of highly advanced, cognitive machinery, and some strategies that might aid in its discovery.

THE ANTHROPOCENTRIC BIAS

The premise of most experiments in SETI, the Search for Extraterrestrial Intelligence, was established with Frank Drake’s pioneering Project Ozma more than five decades ago [1]. Today’s efforts differ in scale, but not in approach: their strategy is to seek signals produced by cosmic inhabitants whose level of technology is at least as advanced as our own.

For more than two decades, SETI has been largely underwritten by private donations, and because of this the scientists involved are often pressured to estimate the chances of success. To this end, they frequently invoke the well-known Drake Equation, which quantifies the number of galactic societies currently producing detectable signals. If the prevalence of transmitting societies can be estimated, then so can a timescale for SETI success.

Unfortunately, the values of many of the parameters of this equation are still unknown, and the few for which new data have recently become available are little changed from the estimates made when the equation was first written. The Drake Equation, while ubiquitous and helpful in formulating the problem of SETI, does little to determine the odds for any particular experiment.
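
For readers who have not seen it written out, the equation is, in its standard form, a simple chain of multiplied factors, N = R* · fp · ne · fl · fi · fc · L. The sketch below shows how they combine; every numerical value in it is an illustrative assumption of mine, not an estimate endorsed by this essay, and several of the factors remain essentially unconstrained by observation.

    # The Drake Equation: N = R* * fp * ne * fl * fi * fc * L
    # All parameter values below are illustrative assumptions only.

    def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
        """Number of galactic societies currently producing detectable signals."""
        return R_star * f_p * n_e * f_l * f_i * f_c * L

    N = drake(R_star=1.0,   # star formation rate (stars per year)
              f_p=0.5,      # fraction of stars with planets
              n_e=0.2,      # habitable planets per planetary system
              f_l=0.1,      # fraction of those on which life arises
              f_i=0.1,      # fraction of those developing intelligence
              f_c=0.1,      # fraction producing detectable technology
              L=10_000)     # years a society remains detectable

    print(f"Illustrative N = {N:.0f} detectable civilizations")

Because the last several factors are guesses, the product is only as good as its weakest term, which is precisely the essay's point about the equation's limited value in setting odds for any particular experiment.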

Of possibly greater importance is the Equation’s influence in setting strategy.  It assumes that SETI will succeed only if there are at least a few thousand technically accomplished civilizations resident in the Milky Way.  Detectable societies are assumed to consist of a large number of individuals, resident on a planet that’s not only amenable to life but also able to beget and sustain complex organisms.  In other words, a world analogous to our own.

That view hasn’t changed in a half century.  New thinking on how to conduct SETI has been less about the nature of the beings we seek or their habitat, and more about their presumed behavior. 

As an example, a matter of popular discussion is whether signals from extraterrestrials are more likely to be deliberate beacons or accidental leakage. This discussion is largely motivated by the trend in our own society toward higher-efficiency communication modes (e.g., direct satellite links and fiber optics in place of traditional broadcasting). This change has led many to opine that advanced civilizations will be economical, and will not generate significant leakage. However, while this argument sounds plausible, there is no denying that it is highly parochial, based on human experience a scant century after the invention of practical radio and lasers. And even this modest speculation on the conduct of extraterrestrials – that they will be more efficient users of energy than we are – has had little impact on SETI experiments.

In fact, experiments do what they are able, and are mostly indifferent to whether the signal being sought is intentional or otherwise.  SETI today continues to adopt the playbooks of the past: the aliens are analogous to us, only more advanced.  The circumstances of their environment are also presumed to be similar to ours.

Unsurprisingly, then, SETI practitioners have been heartened by recent discoveries of exoplanets. The good news is that worlds akin to our own could exist in great abundance: current estimates are that between 10 and 20 percent of all star systems host an Earth-size planet in the habitable zone [2], implying that tens of billions of these favored locales pepper the Galaxy.
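
As a back-of-the-envelope check on that "tens of billions" figure, multiply the quoted occurrence rate by the number of stars in the Galaxy. The stellar count used below (roughly 2e11) is an assumed round number of mine, not a figure taken from this essay.

    # Rough arithmetic only; the Milky Way's stellar count is an assumed round number.
    stars_in_galaxy = 2e11
    for occurrence in (0.1, 0.2):
        planets = occurrence * stars_in_galaxy
        print(f"occurrence {occurrence:.0%}: ~{planets:.1e} habitable-zone, Earth-size planets")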

But there is also bad news.  At a time when the prospects for beings comparable to ourselves are improving, there is a slow-growing realization that biological intelligence may be only a short-lived – and possibly cryptic – stepping stone to the real thinkers of the cosmos: synthetic intelligence.

PROSPECTS FOR SYNTHETIC INTELLIGENCE

If researchers in the field of artificial intelligence (AI) are to be believed, we will invent machines that are our cognitive equals by mid-century.  Roboticist Hans Moravec has pointed out that the exponential improvement in digital electronics will produce workaday computers with reckoning power comparable to a human brain in less than a decade’s time [3].  This rapid betterment in computation has led some, such as Vernor Vinge and Ray Kurzweil, to predict a future time – the “singularity” – at which our own intellectual capacities will be swamped by those of our devices [4],[5].
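
The exponential argument can be made concrete with a toy calculation. The figures below (a present-day machine, a "human-equivalent" target, and a doubling time) are illustrative assumptions of mine rather than numbers from Moravec, Vinge, or Kurzweil; the point is only that a fixed doubling time closes even a thousand-fold gap within a couple of decades.

    import math

    # Illustrative assumptions, not figures from the cited authors.
    current_ops_per_sec = 1e13      # an assumed present-day machine
    brain_ops_per_sec   = 1e16      # an assumed "human-equivalent" target
    doubling_time_years = 2.0       # an assumed Moore's-law-like doubling time

    doublings_needed = math.log2(brain_ops_per_sec / current_ops_per_sec)
    years_needed = doublings_needed * doubling_time_years
    print(f"{doublings_needed:.1f} doublings, i.e. roughly {years_needed:.0f} years")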

Of course, there are already machines that can outperform the human brain at tasks generally regarded as “intelligent.”  The best chess-playing computer can beat the best grandmaster, and the recent triumph of IBM’s Watson computer against seasoned contestants on a television quiz show attracted widespread attention, if not admiration.  More recently, Google’s AlphaGo software beat a world-class human player at Go, a game considerably more complex than chess.  But as AI entrepreneur Peter Voss has noted, these attainments merely point up the current situation, in which one can either build a machine that is excellent at a narrowly scoped task (e.g., chess) or one that is quite mediocre at many things [6].  To challenge the intellectual abilities of humans, what is required is what is termed GAI – generalized artificial intelligence.

It is not the intent of this essay to either review or critique developments in AI research, but rather to assume that GAI machines will appear – if not in this century, then in the next.  The timing is of little consequence to the implications for SETI.  But the events following this development are straightforward:

1.  If our own example can be taken as typical, then GAI quickly follows on the heels of radio technology – within a few centuries.

2.  There is no reason to believe that the evolution of “wet ware” – augmentations of our own brains – can keep pace with GAI.

3.  Because artificial intelligence can quickly evolve (by its own design), it will soon outstrip the cognitive capability of biological beings.

4.  Artificial intelligence will be self-repairing, and therefore of indefinite lifetime.

5.  GAI will be the dominant form of intelligence for any society that has progressed even slightly beyond the point of being able to send signals into space.

6.  Unlike biology, which has been “engineered” from the bottom up, GAI will be engineered from the top down. We cannot hope to forecast what talents or interests it will have, but the one aspect of its functionality that seems safe to assume is a drive for survival. This sounds Darwinian, and therefore biological, but some such drive is essential if GAI is still to be around for us to find, billions of years into the history of the cosmos.

The bottom line is simple, if disquieting: biological brains will beget synthetic ones.  If this technical evolution is commonplace, then there is reason to expect that the majority of the intelligence in the universe is non-biological.  Such intelligence would not be dependent on water worlds, atmospheres, or planets at all.  Consequently, the premise of most SETI – that we should expect to find signals from old, habitable worlds – could be wide of the mark [7],[8].

It seems probable that the future of our hunt for extraterrestrials will require more than just new equipment.  We’ll need to rethink what it is we seek.

SO HOW DO WE FIND IT?

Adapting our SETI strategies to the challenge of uncovering GAI may sound simple at first. Nothing more is required than to put less emphasis on targeting habitable planets, or even individual stars, and simply scan as much of the sky as possible.  However, there may be opportunities to increase our chances of success by augmenting this simple, brute-force approach with insights about the likely nature or behavior of synthetic intelligence.

First, we are probably well advised to avoid hubris.  There may be little we can fathom about the nature of an artificial intelligence that is the result of millions of generations of self-improvement – improvement predicated not on the slight and random modifications of Darwinian evolution, but on directed changes.  Such intelligence will surely be as superior to us as we are to the nematodes in the garden.  Consequently, we should not feel too sure about our speculations as to what GAI might do or how it might be detected.  Imaginative ideas about the interests and activities of synthetic beings are plentiful in fiction, but these ideas are vulnerable to anthropocentric bias.

However, there are at least a few aspects of GAI that seem less suspect:

1.  Assuming that for such machines more computation is better, they can be expected to prefer locations with abundant energy and an effective heat sink.  The former suggests the neighborhoods of early-type stars or black holes (either of the stellar variety or the massive objects hunkered at the centers of galaxies).  It has been suggested that the outer regions of galaxies might be preferred locales for such machines because of their slightly lower temperatures, resulting in greater thermal efficiency [9].  However, given that the efficiency depends only on the ratio of sink to source temperature, this argument is significant only if the energy source is no more than a few hundred degrees, as space is cold almost everywhere (a worked example follows this list).

2.  The short timescales for self-improvement may set up a “winner take all” situation. Whatever machine first appears in a given part of the cosmos could endlessly trump others that arise, since even a cosmically short period of time is a great number of GAI generations, and the new kids on the block could never catch up.

3.  Given the dangers present in the universe, a machine might wish to buy insurance in the form of backup machines.  These could be kept at a distance that would minimize simultaneous annihilation, but linked to the mother machine so that updates could be continually offered. Detecting this telemetry might offer a way to discover GAI, although one can assume that the communication would be point to point and unlikely to be intercepted with our instruments.

4.  Another possible organization scheme for GAI might be hierarchical.  Social systems might make sense if the increase of information in a machine eventually becomes small compared to the timescale for interaction with other machines (the light travel time between them).  In other words, if the new capability acquired per year by a GAI eventually becomes a very small fraction of the previously accumulated capability, then interchanging information makes sense, since that information is not rendered obsolete and irrelevant in the time it takes to effect the exchange.

5.  Whether intelligent machines would have any interest in broadcasting (as opposed to point-to-point telemetry) is impossible to know.  One metric for intelligence is the ability to foresee danger and avoid it.  The cleverest GAI, by this measure, might be less concerned about revealing their presence with easily found signals.  They might also wish to communicate with other such machines that are largely outside their light cone, as these would have information that they could not obtain otherwise [10].
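
To make the thermal-efficiency caveat in item 1 concrete (the worked example promised above), recall that the Carnot limit on converting heat to work is 1 - T_sink/T_source. The temperatures below are illustrative assumptions: trading a ~30 K galactic environment for a ~3 K intergalactic one changes the limit appreciably only when the source itself is no hotter than a few hundred kelvin.

    # Carnot limit on heat-to-work conversion: 1 - T_sink / T_source.
    # All temperatures (in kelvin) are illustrative assumptions.

    def carnot_efficiency(t_source, t_sink):
        return 1.0 - t_sink / t_source

    for t_source in (300.0, 6000.0):          # a warm machine vs. a stellar photosphere
        for t_sink in (30.0, 3.0):            # galactic disk vs. colder outskirts
            eta = carnot_efficiency(t_source, t_sink)
            print(f"source {t_source:>6.0f} K, sink {t_sink:>4.0f} K -> limit {eta:.4f}")

With a 6000 K source, the colder sink improves the limit by less than half a percent; with a 300 K source, the improvement is roughly nine percentage points, which is why the argument for galactic outskirts matters only for low-temperature energy sources.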

These considerations offer a few plausible arguments as to where we should look for GAI. However, they promise little in terms of assuring SETI scientists that such machines would have any motive to make themselves known.

In the case of biological beings, we can safely assume the presence of curiosity, as this trait is necessary to divine the laws of nature and build transmitters we could find.  But artificial sentience might not share this type of curiosity.  Maybe after solving all the puzzles of science, GAI would be happy to indulge itself with endless entertainments – perhaps with Bostrom-like simulations [11].  If they are capable of self-repair (an assumption in all of the above), then it may be that their primary project is to forestall the heat death of the universe and an end to their own existence.

CONCLUSIONS

What might SETI practitioners do to increase their chances of detecting what is likely to be the most prevalent form of intelligence in the cosmos?  Unfortunately, the list is short. 

A search for unusual phenomena in the vicinity of high-density energy sources is a straightforward desideratum.  Another is to consider that the oldest of such machines might wish to contact their peers in other parts of the cosmos to compare notes and offer novel information.  This suggests an experiment in which SETI searches for signals (radio or optical) in the direction of antipodal stellar black holes or quasars.  For example, two stellar black holes on opposite sides of the sky might conceivably host GAI whose beamed data would pass through our neighborhood.
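
A minimal sketch of the geometry behind such an experiment follows; the source names and coordinates are invented placeholders, not real catalog entries. The idea is simply to compute the angular separation between pairs of candidate sources as seen from Earth and keep those that are nearly antipodal, since a beam sent from one toward the other would then pass close to us.

    import math

    def angular_separation_deg(ra1, dec1, ra2, dec2):
        """Great-circle separation between two sky positions, all angles in degrees."""
        ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
        cos_sep = (math.sin(dec1) * math.sin(dec2)
                   + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

    # Invented placeholder positions (RA, Dec in degrees), not real sources.
    sources = {"candidate A": (10.0, 25.0),
               "candidate B": (190.5, -24.6),
               "candidate C": (95.0, 40.0)}

    names = list(sources)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            sep = angular_separation_deg(*sources[names[i]], *sources[names[j]])
            if sep > 178.0:                     # nearly antipodal as seen from Earth
                print(names[i], "and", names[j], f"are {sep:.2f} degrees apart")

The 178-degree cutoff is arbitrary; a pair separated by exactly 180 degrees on the sky places Earth precisely on the line joining them, and how close a near-antipodal beam would actually pass depends on the sources' distances and beam widths.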

Perhaps the best strategy for finding the universe’s intellectual giants is the least deliberate: simply take care to note any unusual phenomena uncovered in the course of astronomical research. Are there nebulae with anomalously depleted deuterium?  Do some stars or galaxies display an unnatural infrared excess, a possible tipoff to energy-intensive residents [12],[13]?  Are there cosmological behaviors without natural explanation?

It is easy to design an experiment to find the aliens of sci-fi, for these are robustly similar to ourselves.  But when you don’t know your prey, the hunt can be hard.

REFERENCES

[1] Drake, F. 1960, “How can we detect radio transmissions from distant planetary systems,” Sky and Telescope 39, 140

[2] Petigura, E. A., Howard, A. W., and Marcy, G. W. 2013, “Prevalence of Earth-size planets orbiting Sun-like stars,” PNAS 110, No. 48, 19273

[3] Moravec, Hans 2000, Robot: Mere Machine to Transcendent Mind, Oxford University Press (Oxford)

[4] Vinge, V. 1993, “The coming technological singularity,” Vision-21: Interdisciplinary Science & Engineering in the Era of CyberSpace, proceedings of a symposium held at NASA Lewis Research Center (NASA Conference Publication CP-10129)

[5] Kurzweil, Ray 2005, The Singularity is Near, Viking Penguin (New York)

[6] Voss, Peter 2015, http://www.agi-3.com/technology.html

[7] Shostak, S. 1998, Sharing the Universe, Berkeley Hills Books (Berkeley)

[8] Shostak, S. 2011, “Seeking intelligence far beyond our own,” International Astronautics Congress, IAC-11.A4.2.4

[9] Cirkovic, M. M. and Bradbury, R. J. 2006, “Galactic gradients, postbiological evolution, and the apparent failure of SETI,” New Astronomy 11, 628

[10] Windell, Alex Noholoa 2015, private communication

[11] Bostrom, N. 2003, “Are You Living in a Computer Simulation?” Philosophical Quarterly, 53, No. 211, 243

[12] Carrigan, R. 2009, “The IRAS-based whole-sky upper limit on Dyson spheres,” Ap. J. 698, 2075

[13] Griffith, R. L., Wright, J. T., Maldonado, J., Povich, M. S., Sigurdsson, S., Mullan, B. 2015, “The Ĝ Infrared Search for Extraterrestrial Civilizations with Large Energy Supplies. III. The Reddest Extended Sources in WISE,” arXiv:1504.03418 [astro-ph.GA]
