From "2001: A Space Odyssey" to "The Matrix," movies and science fiction writers have long warned us to be wary of artificial intelligence, which might one day betray or even destroy humanity. As breakthroughs in AI continue, that concern is voiced more and more often. Now the five technology giants actively developing artificial intelligence — Google's parent company Alphabet, Facebook, Microsoft, IBM, and Amazon — have recognized the problem and plan to develop a mature set of ethical standards for the field.

According to the New York Times, the five companies recently met to discuss the ethical implications of artificial intelligence for the labor market, transportation, and warfare. They also plan to establish an industry association whose purpose is to ensure that artificial intelligence benefits humans rather than harms them, although the participants have not yet settled on a name for it.

While the technology giants race ahead on AI research, voices in the technology community continue to warn against artificial intelligence coming to dominate humans. At the Code Conference in June this year, Elon Musk, founder of Tesla and SpaceX, said that as artificial intelligence develops, humans will be left far behind in intelligence and may end up as the pets of AI. Last July, Musk donated $10 million to the Future of Life Institute, which assesses and analyzes the risks posed by artificial intelligence. Stephen Hawking later claimed in a television program that some laboratory somewhere in Silicon Valley was brewing an evil plan: artificial intelligence would evolve faster than humans, and its goals would be unpredictable to us.
"Once machines reach a critical stage where they can evolve on their own, we will not be able to predict whether their goals will be the same as ours," Hawking said.

In response, Google chairman Eric Schmidt said that artificial intelligence will develop in the direction of human welfare, and that there will be safeguards to prevent it from going in the wrong direction. "We have all seen those science fiction movies," he said, "but in the real world, humans certainly know how to shut the system down when artificial intelligence becomes dangerous."

It is not only technology companies and civil organizations that want to formalize the ethics and laws of artificial intelligence; government agencies have joined the discussion as well. In May this year, the US government held its first White House discussion meeting on AI law and regulation, in Seattle. The theme of the symposium was "Should the government regulate artificial intelligence?", but the meeting reached no conclusion on the question, let alone produced specific regulatory measures. So before the government took concrete action, the technology companies acted on their own: they decided to develop a framework that allows AI research to proceed while ensuring it does not harm humans.

Eric Horvitz, a Microsoft researcher who participated in the industry discussion, helped launch the One Hundred Year Study on Artificial Intelligence at Stanford University, which recently published its 2016 report. In the report, titled "Artificial Intelligence and Life in 2030," the authors argue that AI "is likely to be regulated." "The consensus of the study panel is that attempts to regulate 'AI' in general would be misguided, since there is no clear definition of AI, and the risks and considerations differ greatly across domains," the report said.
The report also recommends that government agencies build up professional expertise in AI, and that public and private investment in the field be increased. A memo about the meeting shows that the five companies will announce the industry organization for artificial intelligence in mid-September. People familiar with the matter said that Alphabet's subsidiary Google DeepMind has asked to participate in the association as an independent member. The association will be modeled on human rights organizations such as the Global Network Initiative, which focuses on issues of free expression and privacy.