
Life 3.0 by Max Tegmark
Being Human in the Age of Artificial Intelligence
My Thoughts
Life 3.0 is an engaging book about technology and artificial intelligence. It offers interesting viewpoints and raises many thought-provoking questions. If you are interested in technology, AI, machine learning, and the future, I recommend reading or listening to it.
Key Questions
- What education system best prepares people for a job market where AI keeps improving rapidly?
- What economic policies are most helpful for creating good new jobs?
- What career advice should we give our kids?
- Useful questions to ask about a career before deciding to educate oneself for it:
  - Does it require interacting with people and using social intelligence?
  - Does it involve creativity and coming up with clever solutions?
  - Does it require working in an unpredictable environment?
Chapter 1: Welcome to the Most Important Conversation of Our Time
Terminology as used in this book
Life: a process that can maintain its complexity and replicate.
Life 1.0: life that evolves its hardware and software, biological stage.
Life 2.0: life that evolves its hardware but designs much of its software, cultural stage.
Life 3.0: life that designs its hardware and software, technological stage.
Intelligence: ability to accomplish complex goals.
Artificial intelligence (AI): non-biological intelligence.
Narrow intelligence: the ability to accomplish a narrow set of goals, e.g., playing chess or driving a car.
General intelligence: ability to accomplish virtually any goal, including learning.
Universal intelligence: ability to acquire general intelligence given access to data and resources.
Human-level artificial general intelligence (AGI): ability to accomplish any cognitive task at least as well as humans.
Human-level AI: AGI
Strong AI: AGI
Super-intelligence: general intelligence far beyond human level.
Civilization: an interacting group of intelligent life forms.
Consciousness: subjective experience.
Qualia: individual instances of subjective experience.
Ethics: principles that govern how we should behave.
Teleology: the explanation of things in terms of their goals and purposes rather than their causes.
Goal-oriented behavior: behavior more easily explained via its effect than via its cause.
Having a goal: exhibiting goal-oriented behavior.
Having purpose: serving goals of one’s own or of another entity.
Friendly AI: super-intelligence whose goals are aligned with ours.
Cyborg: human-machine hybrid.
Intelligence explosion: recursive self-improvement rapidly leading to super-intelligence.
Singularity: intelligence explosion.
Universe: the region of space from which light has had time to reach us.
Common Misconceptions about AI
Timeline Myths
How long will it take until machines greatly supersede human-level AGI?
A common myth is that we know the answer with some level of certainty.
We have no idea of the time frame. The range of opinions is from never to 10-30 years.
The world’s leading experts disagree.
In surveys of experts, the median answer is around 2055.
Controversy Myths
Myth: most AI experts are not concerned about AI safety. This is not true.
Fact: supporting AI safety research is not controversial; most experts are in favor of it.
Myths about what the risks are
Three separate misconceptions concern consciousness, evil, and robots.
The real concern is making sure that the goals of a super-intelligent AI are aligned with ours.
Example: if you are in charge of a green hydroelectric project and there is an anthill in the region to be flooded, too bad for the ants.
The beneficial AI movement wants to avoid placing humanity in the position of those ants.
The consciousness misconception is related to the misconception that machines cannot have goals.
A heat-seeking missile has the goal to hit its target.
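To make "machines can have goals" concrete, here is a minimal sketch of my own (not from the book): a feedback controller that pursues a target, in exactly the book's sense of goal-oriented behavior, i.e. behavior more easily explained by its effect (closing the distance to the target) than by its low-level causes. All names and numbers are invented for illustration.

```python
# A toy goal-directed agent (my illustration, not the book's):
# like a heat-seeking missile, its behavior is best explained by its
# effect -- steadily closing the distance to the target.
import math

def step_toward(pos, target, speed=1.0):
    """Move pos one fixed-size step in the direction of target."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:                    # close enough: goal achieved
        return target
    return (pos[0] + speed * dx / dist,  # unit direction times speed
            pos[1] + speed * dy / dist)

pos, target = (0.0, 0.0), (3.0, 4.0)
while pos != target:                     # pursue the goal until reached
    pos = step_toward(pos, target)
print("reached", pos)
```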
The main concern of the beneficial AI movement is not with robots but with intelligence itself.
The specific concern is intelligence whose goals are misaligned with ours.
A related robot misconception is that machines can't control humans.
Intelligence enables control.
A super-intelligent AI could easily pay or manipulate myriad humans to unwittingly do its bidding (see science fiction novel Neuromancer by William Gibson).
Myth: super-intelligence by 2100 is inevitable.
Myth: super-intelligence by 2100 is impossible.
Fact: it may happen in decades, centuries, or never. AI experts disagree.
Myth: only Luddites worry about AI.
Fact: many top AI researchers are concerned.
Mythical worry: AI turning evil.
Mythical worry: AI turning conscious.
Actual worry: AI turning competent with goals misaligned with ours.
Myth: robots are the main concern.
Fact: misaligned intelligence is the main concern. It needs no body, only an internet connection.
Myth: AI can’t control humans.
Fact: intelligence enables control. We control tigers by being smarter.
Myth: machines can’t have goals.
Fact: a heat-seeking missile has a goal.
Mythical worry: super-intelligence is just years away. Panic!
Actual worry: super-intelligence is at least decades away, but it may take that long to make it safe. Plan ahead.
Visit ageofai.org to share your views and join the conversation about AI.
Three main camps of AI controversy
- Techno-skeptics
- Digital-utopians
- The Beneficial AI Movement
Techno-skeptics: view building human-level AGI as so hard that it won’t happen for hundreds of years.
Digital-utopians: view building human-level AGI as likely to happen this century. They wholeheartedly welcome Life 3.0.
The Beneficial AI Movement: also view building human-level AGI as likely to happen this century. They view a good outcome as not guaranteed, but as something that needs to be ensured by hard work in the form of AI safety research.
Chapter 3: The Near Future
In order to reap the benefits of AI without creating new problems, we need to answer many important questions.
For example:
- How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning, or getting hacked?
- How can we update our legal systems to be more fair and efficient, and to keep pace with the rapidly changing digital landscape?
- How can we make weapons smarter and less prone to killing innocent civilians, without triggering an out-of-control arms race in lethal autonomous weapons?
- How can we grow our prosperity through automation without leaving people lacking income or purpose?
Four main areas of technical AI safety research dominate the current AI safety discussion (a toy illustration of the first follows the list):
- Verification
- Validation
- Security
- Control
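The book only names these four areas; as a toy illustration of the first (my own sketch, with all names invented), here is "verification" in miniature: exhaustively checking that a small program meets its specification. Real AI systems are far too large for exhaustive checking, which is part of what makes this research area hard.

```python
# A toy illustration (mine, not from the book) of "verification":
# checking that a system meets its specification. The "system" here is
# a small insertion sort; the spec is Python's built-in sorted().
from itertools import permutations

def insertion_sort(xs):
    """Sort a list by inserting each element into its proper place."""
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:   # walk left past larger items
            i -= 1
        out.insert(i, x)
    return out

# Exhaustive verification over all inputs up to size 5: feasible only
# because the program is tiny.
for n in range(6):
    for perm in permutations(range(n)):
        assert insertion_sort(list(perm)) == sorted(perm)
print("insertion_sort verified on all permutations up to size 5")
```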
Career Advice for Kids
What career advice should we give our kids?
Go into professions that machines are currently bad at and therefore seem unlikely to be automated in the near future.
Useful questions to ask about a career before deciding to educate oneself for it:
- Does it require interacting with people and using social intelligence?
- Does it involve creativity and coming up with clever solutions?
- Does it require working in an unpredictable environment?
The more of these questions you can answer with a yes, the better your career choice is likely to be.
Generally, safe bets at this time include teacher, nurse, doctor, dentist, scientist, entrepreneur, programmer, lawyer, engineer, social worker, clergy member, artist, hairdresser, massage therapist.
In contrast, jobs that involve highly repetitive or structured actions in a predictable setting aren’t likely to last long before being automated.
Computers and industrial robots took over the simplest of such jobs long ago; improving technology is in the process of eliminating many more, such as telemarketers, warehouse workers, cashiers, train operators, bakers, and line cooks.
Other impacted professions include drivers, paralegals, credit analysts, loan officers, bookkeepers, and tax accountants, whose tasks are increasingly automated, leaving demand for far fewer humans.
Many jobs won’t get entirely eliminated but will see many of their tasks automated. For example, don’t be the radiologist who examines medical images and gets replaced by IBM’s Watson. Be the doctor who orders the radiology analysis, discusses with the patient, and decides on the treatment plan.
Finance: be the fund manager.
Law: be the attorney who counsels the client and presents the case in court.
What education system best prepares people for a job market where AI keeps improving rapidly?
Several options exist. Work for a few years, then go back to school for a few years, then return to the workforce. Continue this cycle indefinitely.
Continuing education throughout life may become the required new normal.
What economic policies are most helpful for creating good new jobs?
Chapter 4: Intelligence Explosion
Three logical steps are required to get from today to an AGI-powered world takeover:
- Build human-level AGI
- Use this AGI to create super-intelligence
- Use or unleash this super-intelligence to take over the world
Chapter 5: Aftermath: The Next 10,000 Years
Write your tentative answers to the following seven questions.
- Do you want there to be super-intelligence?
- Do you want humans to still exist, or to be replaced, cyborg-ized, uploaded, or simulated?
- Do you want humans or machines in control?
- Do you want AIs to be conscious, or not?
- Do you want to maximize positive experiences, minimize suffering, or leave this to sort itself out?
- Do you want life spreading into the cosmos?
- Do you want civilization striving toward a greater purpose that you sympathize with? Or, are you okay with future life-forms that appear content, even if you view their goals as pointlessly banal?
Enter your answers, compare notes, and discuss with others at ageofai.org
The book explores a dozen possible future scenarios that span the spectrum of possibilities:
- Libertarian Utopia: humans, cyborgs, uploads, and super-intelligence(s) co-exist peacefully thanks to property rights.
- Benevolent Dictator: everyone knows that the AI runs society and enforces strict rules, but most people view this as a good thing.
- Egalitarian Utopia: humans, cyborgs, and uploads co-exist peacefully thanks to property abolition and guaranteed income.
- Gatekeeper: a super-intelligent AI is created with the goal of interfering as little as necessary to prevent the creation of another super-intelligence. As a result, helper robots with slightly sub-human intelligence abound. Human-machine cyborgs exist but technological progress is forever stymied.
- Protector-god: an almost omniscient and omnipotent AI maximizes human happiness by intervening only in ways that preserve our feeling of control of our own destiny, and hides well enough that many humans even doubt the AI’s existence.
- Enslaved-god: a super-intelligent AI is confined by humans, who use it to produce unimaginable technology and wealth that can be used for good or bad, depending on the human controllers.
- Conquerors: AI takes control, decides that humans are a threat, a nuisance, or a waste of resources, and gets rid of us by a method we don’t even understand.
- Descendants: AIs replace humans but give us a graceful exit, making us view them as our worthy descendants, much as parents feel happy and proud to have a child who is smarter than they are, learns from them, and then accomplishes what they could only dream of, even if they can’t live to see it all.
- Zookeeper: an (almost) omnipotent AI keeps some humans around, who feel treated like zoo animals and lament their fate.
- 1984: technological progress towards super-intelligence is permanently curtailed. Not by an AI, but by a human-led Orwellian surveillance state where certain kinds of AI research are banned.
- Reversion: technological progress towards super-intelligence is prevented by reverting to a pre-technological society in the style of the Amish.
- Self-destruction: super-intelligence is never created because humanity drives itself extinct by other means.
Four dimensions in which the optimal balance must be struck when designing good governance for AI development:
- Centralization: trade-off between efficiency and stability.
- Inner threats: guard against growing power centralization (collusion by a group or takeover by a single leader) as well as against growing decentralization.
- Outer threats: if the leadership structure is too open, it will enable outside forces, including the AI itself, to change its values; if it is too impervious, it will fail to learn and adapt to change.
- Goal Stability: too much goal drift can transform utopia into dystopia. Too little goal drift can cause failure to adapt to the evolving technological environment.
Chapter 6: Our Cosmic Endowment
What should we want, and how can we attain those goals?
Bottom line: compared to cosmic timescales of billions of years, an intelligence explosion is a sudden event in which technology rapidly plateaus at a level limited only by the laws of physics.
Chapter 7: Goals
Friendly AI: AI whose goals are aligned with ours.
Figuring out how to align the goals of a super-intelligent AI with our goals is not just important, but also hard.
It is currently an unsolved problem. It splits into three tough sub-problems.
- Making AI learn our goals.
- Making AI adopt our goals.
- Making AI retain our goals.
To learn our goals, an AI must figure out not what we do, but why we do it.
For example, if you tell a super-intelligent AI to get you to the airport as fast as possible, you may arrive covered in vomit and chased by police and helicopters.
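As a toy illustration of "figuring out why we act rather than what we do" (my own sketch, not the book's), the program below performs a crude form of inverse planning: instead of copying observed moves, it asks which of several candidate goals best explains an observed trajectory. The trajectory, candidate goals, and scoring rule are all invented for illustration.

```python
# A toy sketch of goal inference (mine, not the book's): score each
# candidate goal by how well it explains the observed behavior.
import math

def heads_toward(a, b, goal):
    """True if moving from point a to point b reduces distance to goal."""
    d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    return d(b, goal) < d(a, goal)

def infer_goal(trajectory, candidate_goals):
    """Return the candidate goal toward which the largest fraction of
    observed steps move."""
    def score(goal):
        steps = list(zip(trajectory, trajectory[1:]))
        return sum(heads_toward(a, b, goal) for a, b in steps) / len(steps)
    return max(candidate_goals, key=score)

# Observed behavior: the agent drifts toward (5, 5), not the other goals.
trajectory = [(0, 0), (1, 1), (2, 1), (3, 2), (4, 4)]
print(infer_goal(trajectory, [(5, 5), (0, 9), (9, 0)]))  # -> (5, 5)
```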
Virtually any sufficiently ambitious goal leads to the sub-goal of capability enhancement, which in turn leads to sub-goals for better hardware, better software, and a better world model. The quest for better hardware in turn implies sub-goals of self-preservation and resource acquisition, while the quest for a better world model produces sub-goals of information acquisition and curiosity. A super-intelligent AI will resist being shut down if you give it any goal that requires it to remain operational.
We must agree on a minimum set of ethical principles for setting AI goals.
The ethical views of many thinkers can be distilled into four principles.
- Utilitarianism: positive conscious experiences should be maximized, and suffering should be minimized.
- Diversity: a diverse set of positive experiences is better than many repetitions of the same experience, even if the latter has been identified as the most positive experience possible.
- Autonomy: conscious entities/societies should have the freedom to pursue their own goals unless this conflicts with an overriding principle.
- Legacy: compatibility with scenarios that most humans today would view as happy, incompatibility with scenarios that essentially all humans today would view as terrible.
Implementing these principles in practice is tricky.
Consider, for example, the problems with the Three Laws of Robotics devised by Isaac Asimov:
- A robot may not injure a human being, or through inaction allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the first law.
- A robot must protect its own existence as long as such protection doesn’t conflict with the first or second laws.
Most of Asimov’s stories show how these laws can lead to problematic contradictions.
Suppose we replace these laws with the following two, in order to codify the autonomy principle for future life forms.
- A conscious entity has the freedom to think, learn, communicate, own property, and not be harmed or destroyed.
- A conscious entity has the right to do whatever doesn’t conflict with the first law.
What happens when we consider a wider range of conscious entities such as animals?
Chapter 8: Consciousness
Emergent properties.
As objects are assembled, new characteristics, or emergent properties, appear.
An emergent property is a property which a collection or complex system has, but which the individual members do not have.
The author speculates as to whether consciousness is an emergent property.
Integrated Information Theory of Consciousness (IIT)
IIT proposes the phi metric to quantify consciousness.
See links below if you want to read more on this subject.
https://www.iep.utm.edu/int-info/
http://integratedinformationtheory.org/
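As a deliberately crude numerical toy (my own sketch, and emphatically not the real IIT formalism, whose phi is computed from a system's full cause-effect structure), the snippet below uses plain mutual information between two binary units as a stand-in for "how much the whole carries beyond its parts": perfectly correlated units score high, independent units score zero.

```python
# A crude toy (NOT real IIT): mutual information between two binary
# units as a stand-in for integration, "the whole beyond its parts".
import math

def mutual_information(joint):
    """I(X;Y) in bits, for a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p   # marginal of X
        py[y] = py.get(y, 0) + p   # marginal of Y
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Two perfectly correlated units: maximally "integrated" (1 bit) ...
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))    # -> 1.0
# ... versus two independent units: no integration (0 bits).
print(mutual_information({(0, 0): 0.25, (0, 1): 0.25,
                          (1, 0): 0.25, (1, 1): 0.25}))  # -> 0.0
```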
Anchoring consciousness.
What makes a “blob of matter” able to have a subjective experience?
In other words, under what conditions will a blob of matter be able to do these four things:
- Remember
- Compute
- Learn
- Experience
New term: “Computronium” = a substance that can perform arbitrary computations.
New term: “Sentronium” = the most general substance that has experience/is sentient.
Four necessary conditions for consciousness that the author would bet on:
- Information principle: a conscious system has substantial information storage capacity.
- Dynamics principle: a conscious system has substantial information processing capacity.
- Independence principle: a conscious system has substantial independence from the rest of the world.
- Integration principle: a conscious system cannot consist of nearly independent parts.
The author references Clive Wearing, a man whose memory lasts less than a minute.
The author believes that human brains are the most amazingly sophisticated physical objects in our known universe.
The problem of understanding intelligence should not be conflated with three separate problems of consciousness.
- The pretty hard problem: of predicting which physical systems are conscious.
- The even harder problem: of predicting qualia.
- The really hard problem: of why anything at all is conscious.
Epilogue
The story of the author approaching Elon Musk, and Musk’s $10M investment to start a foundation dedicated to researching AI safety.
Mention of Anthony Aguirre and the Foundational Questions Institute.
The Asilomar AI Principles
These principles were developed in conjunction with the 2017 Asilomar conference.
https://futureoflife.org/ai-principles/
Research Issues
1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
- How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked?
- How can we grow our prosperity through automation while maintaining people’s resources and purpose?
- How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI?
- What set of values should AI be aligned with, and what legal and ethical status should it have?
3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
Ethics and Values
6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.
13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.
14) Shared Benefit: AI technologies should benefit and empower as many people as possible.
15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
Longer-term Issues
19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
Related Book Summaries
- None yet
Hope you enjoyed this and got value from my notes.
This is the 20th book read in my 2019 reading list.
Here is a list of my book summaries.