Google makes u-turn on military AI
Having fueled much debate and protest, Google has announced a commitment to restrict the use of its AI technology in military applications. Just over a month ago, Google announced a plan to use machine-learning tools to analyse drone footage in a bid to boost the drones' effectiveness.
The announcement came as a shock to many, particularly Google's staff, who had little knowledge of this controversial agenda. It seems the brand has changed course, informing employees last week that it would not renew its contract with the US Department of Defense when it expires next year.
With large tech companies like Boston Dynamics and Google achieving significant steps in the development of AI, the debate has been gathering pace as to what restrictions should be placed on the use of this research. Drone strikes across the Middle East have often grabbed the headlines as the battlefield becomes increasingly automated.
Google has now taken the stance that it will not develop AI technology that causes harm to people, a position one hopes would go without saying. It announced a whole host of new guidelines, with chief executive Sundar Pichai stating the restrictions will extend to:
> technologies that cause or are likely to cause overall harm
> weapons or other technologies whose principal purpose is to cause or directly facilitate injury to people
> technology that gathers or uses information for surveillance violating internationally accepted norms
> technologies whose purpose contravenes widely accepted principles of international law and human rights
These guidelines will take effect at the start of 2019, but quite what purpose such technologies are being developed for in the meantime is anyone's guess.