Google has made one of the most substantive changes to its AI Principles since first publishing them in 2018. As spotted by The Washington Post, the search giant edited the document to remove pledges it had made promising it would not "design or deploy" AI tools for use in weapons or surveillance technology. Previously, the guidelines included a section titled "Applications we will not pursue," which is absent from the current version of the document.
Instead, there is now a section titled "Responsible development and deployment." There, Google says it will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights."
That commitment is far broader than the specific ones the company made as recently as the end of last month, when the previous version of its AI principles was still live on its website. For example, as it relates to weapons, the company previously said it would not design AI for use in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people." As for AI surveillance tools, the company said it would not develop technology that violates "internationally accepted norms."
When asked for comment, a Google spokesperson pointed Engadget to a blog post the company published on Thursday. In it, DeepMind CEO Demis Hassabis and James Manyika, senior vice president of research, labs, technology and society at Google, write that the emergence of AI as a "general-purpose technology" necessitated a policy change.
"We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," the two wrote. "… Guided by our AI principles, we will continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights, always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks."
When Google first published its AI principles in 2018, it did so in the aftermath of Project Maven. That was a controversial government contract which, had Google decided to renew it, would have seen the company provide AI software to the Department of Defense for analyzing drone footage. Dozens of Google employees left the company in protest of the contract, and thousands more signed a petition in opposition. When Google eventually published its new guidelines, CEO Sundar Pichai reportedly told employees he hoped they would stand "the test of time."
By 2021, however, Google began pursuing military contracts again, with what was reportedly an "aggressive" bid for the Pentagon's Joint Warfighting Cloud Capability contract. At the start of this year, The Washington Post reported that Google employees had repeatedly worked with Israel's Defense Ministry to expand the government's use of AI tools.