Last July, 13 US military commanders and technology executives met at the Pentagon’s Silicon Valley outpost, two miles from Google headquarters. It was the second meeting of an advisory board set up in 2016 to counsel the military on ways to apply technology to the battlefield. Milo Medin, a Google vice president, turned the conversation to using artificial intelligence in war games. Eric Schmidt, Google’s former boss, proposed using that tactic to map out strategies for standoffs with China over the next 20 years.
A few months later, the Defense Department hired Google’s cloud division to work on Project Maven, a sweeping effort to enhance its surveillance drones with technology that helps machines think and see. The pact could generate millions in revenue for Alphabet Inc’s internet giant.
But inside a company whose employees largely reflect the liberal sensibilities of the San Francisco Bay Area, the contract is about as popular as President Donald Trump. Not since 2010, when Google retreated from China after clashing with state censors, has an issue so roiled the rank and file. Almost 4,000 Google employees, out of an Alphabet total of 85,000, signed a letter asking Google Chief Executive Officer Sundar Pichai to nix the Project Maven contract and halt all work in “the business of war.”
The petition cites Google’s history of avoiding military work and its famous “Don’t be evil” slogan. One of Alphabet’s AI research labs has even distanced itself from the project. Employees against the deal see it as an unacceptable link with a US administration many oppose and an unnerving first step toward autonomous killing machines. About a dozen staff are resigning in protest over the company’s continued involvement in Maven, Gizmodo reported on Monday.
The internal backlash, which coincides with a broader outcry over how Silicon Valley uses data and technology, has prompted Pichai to act. He and his lieutenants are drafting ethical principles to guide the deployment of Google’s powerful AI tech, according to people familiar with the plans. Those principles will shape its future work. Google is one of several companies vying for a Pentagon cloud contract worth at least $10 billion. A Google spokesman declined to say whether the internal strife over military work has changed the company’s pursuit of that contract.
Pichai’s challenge is to find a way of reconciling Google’s dovish roots with its future. Having spent more than a decade developing the industry’s most formidable arsenal of AI research and abilities, Google is keen to wed those advances to its fast-growing cloud-computing business. Rivals are rushing to cut deals with the government, which spends billions of dollars a year on all things cloud. No government entity spends more on such technology than the military. Medin and Alphabet director Schmidt, who both sit on the Pentagon’s Defense Innovation Board, have pushed Google to work with the government on counter-terrorism, cybersecurity, telecommunications and more.
To dominate the cloud business and fulfill Pichai’s dream of becoming an “AI-first company,” Google will find it hard to avoid the business of war.
Inside the company there is no greater advocate of working with the government than Google Cloud chief Diane Greene. In a March interview, she defended the Pentagon partnership and said it’s wrong to characterize Project Maven as a turning point. “Google’s been working with the government for a long time,” she said.
The Pentagon created Project Maven about a year ago to analyze mounds of surveillance data. Greene said her division won only a “tiny piece” of the contract, without providing specifics. She described Google’s role in benign terms: scanning drone footage for landmines, say, and then flagging them to military personnel.
“Saving lives kind of things,” Greene said. The software isn’t used to identify targets or to make any attack decisions, Google says. Many employees deem her rationalizations unpersuasive. Even members of the AI team have voiced objections, saying they fear working with the Pentagon will damage relations with consumers and Google’s ability to recruit. At the company’s I/O developer conference last week, Greene told Bloomberg News the issue had absorbed much of her time over the last three months.
Googlers’ discomfort with using AI in warfare is longstanding. AI chief Jeff Dean revealed at the I/O conference that he signed an open letter back in 2015 opposing the use of AI in autonomous weapons. Providing the military with Gmail, which has AI capabilities, is fine, but it gets more complex in other cases, Dean said. “Obviously there’s a continuum of decisions we want to make as a company,” he said.
Last year, several executives—including Demis Hassabis and Mustafa Suleyman, who run Alphabet’s DeepMind AI lab, and famed AI researcher Geoffrey Hinton—signed a letter to the United Nations outlining their concerns. “Lethal autonomous weapons … [will] permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the letter reads. “We do not have long to act.” London-based DeepMind assured staff it’s not involved in Project Maven, according to a person familiar with the decision. A DeepMind spokeswoman declined to comment.
Richard Moyes, director of Article 36, a non-profit focused on weapons, is cautious about pledges from companies that humans—not machines—will still make lethal decisions. “This could be a stepping stone to giving those machines greater capacity to make determination of what is or what’s not a target,” he said.
Moyes, a partner of the DeepMind Ethics & Society group, hasn’t spoken to Google or DeepMind about the Pentagon project. AI military systems have already made mistakes. Nighat Dad, director of the Digital Rights Foundation, cites the case of two Al Jazeera reporters who filed legal complaints that they were erroneously placed on a drone “kill list” by the US government’s Skynet surveillance system. Dad sent a letter in April to Pichai asking Google to end the Project Maven contract, but says she hasn’t received a reply.
Source : http://indianexpress.com