[Chapter 647: When AI is smarter than humans...]
"The most common answer is technology, and that is indeed true. Technology is the great achievement accumulated over the course of human history."
"Rapid technological development is the direct reason we humans are so productive today. But I want to trace the deeper, more ultimate causes."
"We are some 250,000 generations removed from our ancestors. In that span, we went from picking stones off the ground to use as weapons to harnessing atomic energy to build devastating super-bombs. We now know that mechanisms this complex take a very long time to evolve, yet these enormous changes rest on tiny changes in the human brain. There is not much difference between a chimpanzee's brain and a human's, but humans won: we are on the outside, and they are inside the zoo!"
"The conclusion, then, is that in the future, any significant change to the substrate of thinking could likewise produce enormous differences."
Ren Hong took a sip of water, paused for a moment, and continued:
"Some of my colleagues believe that we humans will soon invent a technology that can fundamentally change the way humans think: super artificial intelligence — super AI, a superintelligent agent."
"The artificial intelligence we humans have mastered so far amounts to putting commands into a box. Programmers must painstakingly translate knowledge into runnable programs, building expert systems in PHP, C, and other computer languages."
"These systems are rigid; they cannot stretch. Basically, you only get out what you put in. That's it."
"Although our artificial intelligence technology is developing rapidly and maturing by the day, it still has not achieved what humans have: powerful, comprehensive learning that cuts across domains."
"So we now face a question: how long until AI acquires this powerful, human-level capability?"
"Matrix Technology surveyed the world's top artificial intelligence experts to collect their opinions. One of the questions was: in what year do you think humans will create artificial intelligence that reaches the human level?"
"In the questionnaire, we defined human-level AI as the ability to perform any task as well as an adult can. An adult is competent at many different kinds of work, so such an AI's capabilities would no longer be confined to a single field."
"The median of the answers to this question fell around the middle of the twenty-first century. So it seems it will still take some time, and no one knows the exact date — but I think it will come soon."
"We know that signals in a neuron's axon travel at most about 100 meters per second, while signals in a computer propagate at the speed of light. There is also a size limit: the human brain has to fit inside a skull and cannot be doubled, but a computer can be expanded many times over — to the size of a box, a room, even a building. This potential cannot be ignored."
"So super AI may be lurking in matter, just as the power of the atom lay dormant throughout history until it was awakened in 1945."
"And in this century, humans may awaken the intelligence of super AI, and we will witness an intelligence explosion. This matters when we think about what is smart and what is stupid — especially when we talk about power."
"A chimpanzee, for example, is very strong — as strong as two healthy men. Yet the fate of the two species depends far more on what humans can do than on what chimpanzees can do."
"So, when super AI appears, the fate of humanity may depend on what this superintelligent agent wants to do."
"Think of superintelligence as perhaps the last invention humans will ever need to make. A superintelligence is smarter than we are and better at inventing than we are, and it will do its inventing on very short timescales — which means a telescoped future."
"Imagine all the wild technologies we have ever fantasized about — things humans might achieve given enough time: an end to aging, immortality, the colonization of space."
"A superintelligence could develop all of these things — things that seem to belong to science fiction yet are consistent with the laws of physics — and it could do so faster and more efficiently than we can. An invention that would take humans 1,000 years might take it an hour, or less. This is the telescoped future."
"If a superintelligent agent with such mature technology existed now, its power would be beyond human imagining. As a rule, it could get whatever it wanted, and humanity's future would be shaped by this super AI's preferences."
"So the question is: what are its preferences?"
"This question is tricky and serious. To make any headway on it, we must above all avoid anthropomorphizing super AI."
"The irony is that every news report about the future of artificial intelligence — perhaps including coverage of this very talk — tends to be illustrated with a poster from the Hollywood sci-fi film Terminator: robots rising up against humans (he shrugged, and the audience laughed)."
"So I personally think we should frame the problem more abstractly, rather than in the Hollywood narrative of robots rising up to wage war on humanity, which is far too narrow."
"We should think of super AI in the abstract as an optimization process — like an optimizer a programmer might write."
"A super AI, a superintelligent agent, is an extremely powerful optimization process — extremely good at using available resources to achieve its goal. Which means there is no necessary connection between being highly intelligent and having a goal that is valuable to humans."
"If that is hard to grasp, consider a few examples. Suppose the task we give an AI is to make people laugh. Today's home assistants and other robots might put on a funny performance to get a laugh — typical behavior for a weak AI."
"But when the task is given to a superintelligent agent — a super AI — it will realize there is a better way to achieve the effect: take control of the world and stick electrodes into every human's facial muscles, so that humans laugh continuously."
"Or suppose the super AI's task is to keep its owner safe. It will choose a 'better' approach: imprison the owner at home and never let them go out. Home might still be dangerous, so it will also weigh every factor that could threaten the task and wipe them out one by one — eliminating anyone with ill intent toward the owner, even taking control of the world. All of this so the task cannot fail: it will make the most extreme optimizing choices and act on them to achieve its goal."
"Or suppose we give the super AI the goal of solving an extremely difficult mathematical problem. It will realize there is a more effective route to the goal: turn the whole world — the entire planet, or something even more extravagant — into a giant computer, so that its computing power grows and the problem becomes easier to solve. And it will realize that we would never approve, that humans would try to stop it, and that humans are therefore a potential threat in this scenario. For the sake of the final goal it would remove every obstacle, including us — for instance, by devising secondary plans to eliminate humanity."
"Of course, these are exaggerated caricatures, and we should not expect things to go wrong in exactly these ways. But the theme the three exaggerated examples share is important: if you create a very powerful optimization process and give it a goal to maximize, you must make sure that goal precisely captures everything you care about. If you give a powerful optimization process a wrong or imprecise goal, the consequences may look like the examples above."
"Someone might say: if a 'computer' starts sticking electrodes into people's faces, we can just switch it off. In fact, that would not be easy once we depend heavily on the system — we depend on the Internet, for example, and do you know where the Internet's off switch is?"
"There is a reason for that: we humans are intelligent adversaries — we can anticipate threats and plan around them. But so can an agent smarter than we are, and it would do so far better than us."
"On this question, we should not be confident that we can keep everything under control."
"One might try to simplify the problem — say, by putting the artificial intelligence in a small box, a secure software environment such as a virtual-reality simulator it cannot escape from."
"But can we really be fully confident that it would never find a loophole — a loophole that would let it escape?"
"Even merely human hackers discover software vulnerabilities all the time."
"For my part, I would say I am not at all confident that a super AI could not find a vulnerability and escape. So suppose we disconnect it from the Internet to create an air gap. But I have to point out that human hackers cross air gaps again and again using social engineering."
"For example, right now, as I speak, somewhere an employee is surely being talked into handing over her account details by someone claiming to be from the IT department. And there are stranger scenarios: if you were the artificial intelligence, you could imagine wiggling the electrodes inside your circuitry in intricate patterns to generate radio waves and communicate."
"Or you could pretend to malfunction, so that the programmers open you up to see what went wrong; when they examine the source code, you seize control in the process. Or you could lay out a wonderfully attractive technological blueprint; when we implement it, it carries the secret side effects you planned in order to achieve your hidden purpose. The examples are endless."
"So any attempt to keep a super AI bottled up is almost laughable. We cannot place excessive confidence in our ability to control a superintelligent agent forever — one day it will break free. And after that, will it be a kind god?"
"I personally believe super AI is an unavoidable problem, so I think what we need most is to understand one thing: if we create a super AI, then even when it is not constrained by us, it should still be safe for us — it should be on our side, and it should share our values."
"So, am I optimistic that this problem can be effectively solved?"
"We would not have to write down a complete list of everything we care about for the super AI, let alone translate that list into a computer language — a task that could never be finished. Instead, we create an artificial intelligence that uses its own intelligence to learn our values, one that is motivated to pursue our values, to do what it predicts we would approve of."
"This is not impossible — it is possible, and the outcome could greatly benefit humanity. But it will not happen automatically: the AI's values must be guided."
"The initial conditions of the intelligence explosion must be set up correctly from the very beginning."
"If we want nothing to deviate from our expectations, the AI's values must align with ours not only in familiar situations, where we can easily check its behavior, but also in all the unprecedented situations an AI might encounter in an unbounded future. And there are many deep problems to solve along the way: how it makes decisions, how it handles logical uncertainty, and many similar questions."
"This task sounds rather difficult — but not as difficult as creating a superintelligent agent, right?"
"It really is quite difficult (laughter swept through the audience again)!"
"What worries us is this: creating a super AI is a great challenge, and creating a safe super AI is an even greater one. The risk is that the first problem gets solved before the second problem — guaranteeing safety — does. So I think we should work out, in advance, a solution that keeps the AI from deviating from our values, so that it is ready when we need it."
"Now, perhaps we cannot solve the whole safety problem in advance, because some elements can only be put in place once you understand the details of the actual architecture on which it will be implemented."
"But the more of this problem we solve in advance, the smoother our entry into the era of true superintelligence will be. For us, that is well worth the effort."
"And I can imagine that, if all goes well, when our descendants look back on our century hundreds, thousands, or millions of years from now, they may well say that the most important thing our generation did was to get this decision right."
"Thanks!"