
Should Google’s algorithms make life-and-death decisions?


About a dozen Google employees are resigning over the company’s decision to provide artificial intelligence to Project Maven, a US Defense Department pilot program that aims to speed up the analysis of drone footage by using machine learning to automatically categorize the objects and people that appear in the images. The resignations raise several questions about Google’s ethics and about how the future of machine learning and AI should be directed.

AI and machine learning can be used for an endless variety of consumer and commercial applications that seem harmless enough, but as the technology develops, more troubling use cases are starting to appear. Project Maven has brought the issue, and Google along with it, into the spotlight.

When it comes to drone strikes, the stakes are life and death, so the ethics of Google’s decision to get involved with the US military have been called into question, and rightly so. Should algorithms be making life-and-death decisions? Could further development of this technology pave the way toward autonomous weapons systems?

Google has a responsibility to consider the implications of its technologies for its users. In the case of Project Maven, the results could be lethal for the company’s users, who are located all around the globe. Drones also have serious implications for privacy, even here in the US.

If you think you have nothing to worry about, consider that the US Department of Transportation, with the participation of Google, Qualcomm, and Microsoft, will be testing drones in several American cities for applications not currently allowed by law, citing potential economic and safety benefits. But what is the trade-off for those benefits? A future full of AI-powered drone delivery services sounds cool, but what new threats to privacy would it introduce?

Google isn’t subject to public accountability for its decisions, but given that users across the world entrust the company with their data, perhaps more scrutiny is in order.

We should be asking more questions about large tech companies’ decisions, and we should be ready to protest when a company that promises not to be evil, as Google’s old motto put it, fails to deliver on that promise. Otherwise, we as users will have no say in directing the future of technologies like AI and machine learning, which could have grave consequences for privacy and even human lives.

Were the Google employees right to resign? Let us know what you think in the comments!


