Max Tegmark's Life 3.0: Being Human in the Age of Artificial Intelligence is an accessible book that surveys some of the most important ideas being debated among AI researchers as they conceive and implement this novel technology. This article reviews some of the themes that Life 3.0 addresses.

What is Life?

The question of how to define “life” has been a source of debate among scientists and philosophers for a very long time. Tegmark does not wade into these debates; instead, he decides from the outset to define “life,” in its broadest sense, as “a process that can retain its complexity and replicate.” He adds that what is replicated “isn’t matter (made of atoms) but information (made of bits) specifying how the atoms are arranged.” Concerning this “information-processing” definition of life, he writes:

[W]e can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.

With this definition in mind, Life 3.0 stipulates that life admits of three levels of sophistication, sketched in code after the list below.

  • Life 1.0: Can survive and replicate; the designs of both its hardware and software are determined by its DNA and change only through evolution over many generations (simple biological)
  • Life 2.0: Can design its software but must evolve its hardware; humans can learn complex new skills, including languages, sports, and professions, and can fundamentally update their worldview and goals (cultural)
  • Life 3.0: Can design both its software and its hardware; it does not yet exist on Earth, but it could dramatically redesign not only its software but also its hardware, rather than having to wait for either to evolve gradually over generations (technological)
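
Read this way, the taxonomy reduces to two yes/no questions: can a life form redesign its own software within its lifetime, and can it redesign its own hardware? The Python sketch below is not from the book; the class name, fields, and example instances are purely illustrative and simply encode that reading.

    from dataclasses import dataclass

    @dataclass
    class LifeForm:
        # Toy, hypothetical encoding of Tegmark's taxonomy: which layers can
        # this form of life redesign itself, rather than waiting for evolution?
        designs_software: bool  # can it learn/rewrite its own behavior within its lifetime?
        designs_hardware: bool  # can it re-engineer its own physical substrate?

        @property
        def version(self) -> str:
            if self.designs_software and self.designs_hardware:
                return "Life 3.0"
            if self.designs_software:
                return "Life 2.0"
            return "Life 1.0"

    # Hypothetical examples, not taken from the book's text
    bacterium = LifeForm(designs_software=False, designs_hardware=False)
    human = LifeForm(designs_software=True, designs_hardware=False)
    agi = LifeForm(designs_software=True, designs_hardware=True)

    print(bacterium.version, human.version, agi.version)  # Life 1.0 Life 2.0 Life 3.0

The point of the framing is that only Life 3.0 turns both layers into design decisions rather than evolutionary outcomes.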

Intelligence

Just as he defines “life” broadly, Tegmark defines “intelligence” in a very broad way: it amounts, he claims, to the “ability to accomplish complex goals.” Accordingly, there can be many possible types of intelligence.

When it comes to artificial general intelligence, or AGI, Tegmark accepts the view that this would be human-level intelligence, or the “ability to accomplish any goal at least as well as humans.”

Chapter 3 of Life 3.0 reviews different ways in which Tegmark believes that artificial intelligence is being, or could be, used by humanity. These include:

  • Space exploration
  • Finance
  • Manufacturing
  • Transportation
  • Energy
  • Healthcare
  • Secure communication
  • Legal system
  • Weapons

AGI Aftermath

Among the most engaging and creative sections of Life 3.0 are those which consider in detail scenarios that might emerge for humanity on the path to, or in the aftermath of, the development of artificial general intelligence (should this ever occur).

Libertarian utopia

Tegmark describes this as a situation in which humans, cyborgs, uploads, and superintelligences coexist peacefully thanks to property rights. It is the scenario he finds articulated by futurists like Hans Moravec and Ray Kurzweil, in which many humans will have upgraded their bodies or even uploaded their minds into new hardware. The majority of interactions would occur in virtual environments, “for convenience and speed,” even if many minds would continue to enjoy interactions and activities involving physical bodies as well. Tegmark imagines different “zones,” including human-only zones, machine zones, and mixed zones. In the final analysis, he considers this scenario unlikely, partly because it is not obvious to him that cyborgs or uploads will ever be created.

Benevolent dictator

In this scenario, a “benevolent superintelligence” is established which “runs the world and enforces strict rules designed to maximize its model of human happiness.” Humanity would be free from poverty and disease, and all necessary goods and services would be provided. Crime would be eliminated, since the superintelligence would be omniscient and efficient in enforcing social rules. It would be left to the superintelligence to figure out what a human utopia would amount to (and it would not bring one about through, say, mass chemical intoxication). Tegmark considers one downside of this scenario to be that some people might feel a lack of freedom in shaping their society and destiny, given that a single overall path had been chosen in order to maximize human happiness in the aggregate.

Gatekeeper

In this scenario, a superintelligent AI is created with the goal of interfering as little as necessary to prevent the creation of another superintelligence. Tegmark imagines, however, that critics of this scenario would object to its blocking future technological progress that would depend on further superintelligence.

1984

The “1984” scenario that Tegmark imagines is one in which, for reasons such as managing technological risk, progress in artificial intelligence is brought to a halt by human intervention. According to Tegmark, this would require an Orwellian surveillance state in which certain forms of AI research are prohibited.

Conclusion

In addition to these, Tegmark considers several other scenarios, each with its potential benefits and drawbacks. One of the virtues of the book is that, without becoming too technical, it helps readers envision different paths human history might take in light of artificial intelligence technology, and asks them to begin thinking about which path is most aligned with their own preferences and values.

Bradley Murray is a psychoanalyst and author of a book on Kant’s philosophy and articles on the impact of future technology on human life.