Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124
Google has removed its long-standing ban on using AI for weapons and surveillance systems, marking a major shift in the company's ethical stance on AI development — one that former employees and industry experts say could reshape how Silicon Valley approaches AI safety.
The change, quietly implemented this week, eliminates key provisions of Google's AI principles that explicitly barred the company from developing AI for weapons or surveillance. Those principles, created in 2018, had served as an industry benchmark for responsible AI development.
"That was the last bastion," said Tracy Pizzo Frey, who spent five years helping implement Google's original AI principles as a senior director for outbound product management, engagements and responsible AI at Google Cloud, in a Bluesky post. "Google stood alone in this level of clarity about its commitments for what it would build."
The revised principles remove four specific prohibitions: technologies likely to cause overall harm, weapons applications, surveillance systems, and technologies that violate international law and human rights. In their place, Google now says it will "mitigate unintended or harmful outcomes" and align with "widely accepted principles of international law and human rights."
The shift comes at a particularly sensitive moment, as AI capabilities advance rapidly and debates intensify over what guardrails the technology needs. The timing has raised questions about Google's motives, though the company maintains these changes were long in development.
"We're in a state where there's not much trust in big tech, and every move that even appears to remove guardrails creates more distrust," Pizzo Frey said in an interview with VentureBeat. She emphasized that clear ethical boundaries had been crucial to building trustworthy AI systems during her tenure at Google.
The original principles emerged in 2018 amid employee protests over Project Maven, a Pentagon contract that involved using AI to analyze drone footage. While Google ultimately declined to renew that contract, the new changes could signal openness to similar military partnerships.
The revision retains some elements of Google's previous ethical framework but shifts from prohibiting specific applications to emphasizing risk management. This approach aligns more closely with industry standards such as the NIST AI Risk Management Framework, though critics argue it provides less concrete restrictions on potentially harmful applications.
"Even if the rigor is not the same, ethical considerations are no less important to creating good AI," Pizzo Frey noted, highlighting how ethical considerations improve the effectiveness and accessibility of AI products.
Industry observers say the policy change could influence how other technology companies approach AI ethics. Google's original principles had set a precedent for corporate self-regulation in AI development, with many enterprises looking to Google for guidance on responsible AI implementation.
The modification of Google's AI principles reflects broader tensions in the technology industry between rapid innovation and ethical constraints. As competition in AI development intensifies, companies face pressure to balance responsible development against market demands.
"I worry about how fast things are getting out there into the world, and whether more and more guardrails are being removed," Pizzo Frey said, expressing concern about competitive pressure to launch AI products quickly without adequate assessment of the potential consequences.
The revision also raises questions about Google's internal decision-making processes and how employees will navigate ethical considerations without explicit prohibitions. During her time at Google, Pizzo Frey established review processes that brought together diverse perspectives to assess the potential impacts of AI applications.
While Google maintains its commitment to responsible AI development, the removal of specific prohibitions marks a significant departure from its former leadership role in setting clear ethical boundaries for AI applications. As artificial intelligence continues to advance, the industry is watching to see how this shift will affect the broader landscape of AI development and regulation.