As Sam Harris so eloquently put it, there are really only two possibilities to consider.
- Continued improvement of information technology, automation and data processing is halted completely by a catastrophic event such as nuclear war, total economic collapse or natural disaster.
- Or … the human race continues to develop and improve upon information technology, at whatever pace, in perpetuity.
Assuming the first is unlikely to occur on a global scale any time soon, it follows that advancements in information processing and computing will continue to be made. There is no reason for us to stop developing our machine intelligence, and much to gain from continuing. The rate of these advancements doesn't matter: logically, if left unimpeded, we will eventually reach a point where machine intelligence meets and then surpasses human intelligence. From that point, self-improving intelligent machines will improve exponentially, and in ways that we cannot now, and may never, fully understand.
The question really is not "if" but "when". Yet when you consider the implications of such an event, the "when" hardly matters. Suppose there were an infectious disease that, if left unchecked, would at some point spread to the extent that there was no hope of recovery for life on the planet. Would it really matter when that tipping point occurred? If we knew it was likely to arrive in the next 100, 200 or 1,000 years, at what point would we decide to act to avoid the annihilation of life on earth? We would already be devoting considerable research to halting its progress or eliminating it completely.
Part of the reason we don't act is that the risks of machine intelligence aren't as widely understood as death by disease, at least on a personal level. The risks of AI are far broader in nature, touching many different facets of our existence. Is that why we are not taking them as seriously as we should?
Take just one example: our modern society is built largely on information systems. Our systems of economy and finance, government, military security, business and industry, healthcare and education, to name a few, are all highly dependent on, and in some cases highly automated by, information processing.
At the same time, the Internet of Things, ubiquitous network communications and the general interconnectedness of everything provide broad reach across all of these domains.
Imagine, then, a superintelligent and autonomous machine able to out-hack and outsmart any safeguard humans could construct, with the freedom to muster resources from any system on the planet to meet its own goals.
It sounds like the stuff of science fiction, except that AI is already the foundation of cybersecurity applications built to defeat the hackers and viruses that threaten our systems today. It's not such a leap to imagine the gamekeeper turning poacher.
Pick just one of these domains and imagine a total shutdown of the systems involved. The destabilisation of nations would swiftly follow. The machine would not need to wage war on mankind; nation would battle nation in the ensuing chaos.
The videos curated here provide some insight into the risks and issues of artificial or machine intelligence. There are more questions than answers, but it's time we gave those questions a broader audience.
We believe this is the number one existential risk facing mankind today. It's time to sit up and take notice.