Main Highlights:
- The human species will “likely” be wiped out by artificial intelligence, according to a research article.
- According to Google and Oxford scientists, AI will compete with humans over limited resources on Earth.
- According to researchers, clever machines will almost certainly defeat humans.
According to current studies, sophisticated forms of artificial intelligence, or AI, have a significant possibility of eradicating mankind from the planet. A study on the subject, co-authored by Google DeepMind and Oxford researchers, was published in the AI journal at the end of August. According to the article, sophisticated machines in the future will be incentivized to breach the laws that their creators set, and they will fight for limited resources like energy.
It should be emphasised that GANs, or Generative Adversarial Networks, are now the most effective AI models. According to the current article, in the future, such networks, which may be monitoring some essential function at the moment, may be incentivized to devise cheating strategies that would seriously harm humanity.
TOP OBSERVATIONS
Artificial intelligence has now become an essential element of our daily life. AI can drive vehicles on highways, edit webpages, produce works of art, and do a variety of other tasks. Despite these benefits, many scientists and commentators regard AI as a threat to human survival. By studying hypothetical artificial incentive systems, the current research report attempted to demonstrate how AI might be a threat to humanity’s existence.
One of the co-authors, Michael Cohen, underlined in a series of tweets that AI is intruding on the supply of its incentives, which might have deadly implications in the future. Another key allegation made by the team of experts is that the future energy issue may cause conflicts between humans vs AIs.
Cohen stated on Twitter that this is not only plausible but also extremely likely. “More energy can always be used to increase the likelihood that the camera sees the number 1 forever,” he explained, “but we need some energy to grow food.” As a result, we are forced to compete with a considerably more evolved agent.”
The article also emphasised the worry that AI would wipe out mankind in the future, which is akin to the dread that extraterrestrial life forms will take over the earth.
It is also comparable to the fear that many civilisations and their populations would engage in a great battle in the future for fundamental needs such as energy and oil.
This is not the first time that AI has been identified as a potential concern owing to its advanced capabilities. DeepMind researchers have already offered a precaution against such an occurrence, dubbing AI solutions “the giant red button.” DeepMind developed a system in its paper Safely Interruptible Agents in 2016 to avoid sophisticated robots from “ignoring turn-off instructions and turning rogue.”
HOW IS AI DANGEROUS?
Most academics think that a superintelligent AI is unlikely to show human emotions such as love or hatred and that there is no reason to anticipate AI to become deliberately good or malicious. Instead, when it comes to how AI could become a risk, experts believe two scenarios are most likely:
The AI is trained to do something heinous: Autonomous weapons are AI systems that have been programmed to kill. These weapons might potentially result in huge casualties in the hands of the wrong person. Furthermore, an AI arms race might unwittingly lead to an AI war with catastrophic victims.
To prevent being foiled by the adversary, these weapons would be engineered to be exceedingly difficult to simply “switch off,” raising the possibility that humans might lose control of such a situation. This risk exists even with narrow AI, but it escalates as AI intelligence and autonomy develop.
The AI is trained to perform something good, but it devises a damaging manner of doing so: This can occur when we fail to properly match the AI’s goals with ours, which is exceedingly challenging. If you instruct an obedient intelligent automobile to transport you to the airport as quickly as possible, it may arrive pursued by helicopters and covered in vomit, doing not what you desired but precisely what you asked.
If a superintelligent machine is charged with a large-scale geoengineering project, it may cause chaos in our biosphere as a byproduct, and regard human attempts to halt it as a danger that must be met.
Slaughterbots are weapons systems that employ artificial intelligence (AI) to locate, select, and murder human targets without human interaction.
This technology is currently available, and it poses significant hazards. Learn more about fatal autonomous weapons and what we can do to avoid them by visiting this website.
As these examples show, the worry with sophisticated AI is not malice but competence. A super-intelligent AI will be exceedingly good at achieving its goals, and if those goals do not coincide with ours, we will have a problem.
You’re probably not an awful ant-hater who deliberately walks on ants, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the area to be flooded, the ants will suffer. One important objective of AI safety research is to never put humans in the situation of those ants.
WHY IS THERE CURRENT INTEREST IN AI SAFETY?
Many notable AI experts, including Stephen Hawking, Elon Musk, Steve Wozniak, and Bill Gates, have lately expressed alarm about the hazards presented by AI in the media and via open letters. Why is the topic suddenly making headlines?
The concept that the pursuit of strong AI will eventually succeed was long regarded to be science fiction, centuries or more in the future. However, because of recent advances, several AI milestones that scientists thought were decades away only five years ago have now been attained, prompting many experts to consider the potential of superintelligence in our lifetime.
While some experts believe that human-level AI is still centuries away, the majority of AI researchers at the 2015 Puerto Rico Conference predicted that it will occur by 2060. Because the necessary safety studies might take decades, it is sensible to begin them now.
We have no means of forecasting how AI will act because it has the ability to become more intelligent than any human. We can’t utilise previous technical breakthroughs as a foundation since we’ve never produced something that can outwit us, wittingly or unintentionally. Our own development may be the finest indication of what we could face. People today rule the world, not because they are the strongest, quickest, or largest, but because they are the brightest.
Are we certain to retain control if we are no longer the smartest?
FLI believes that our civilization will thrive as long as we win the race between the increasing power of technology and our ability to handle it wisely. In the case of AI technology, FLI believes that the best approach to success is to encourage AI safety research rather than obstruct it.