This year, employees at Google have very publicly protested against the company’s involvement in projects they are concerned about. The first protest was over Project Maven, a programme developed for the US military to speed up the analysis of drone footage by automatically classifying images of objects and people. The fallout saw thousands of employees sign a petition and about a dozen resign, citing ethical concerns over the use of AI (artificial intelligence) technology in drone warfare as well as worries about the company’s political decisions. The outcry ultimately led to Google announcing that it would not seek another contract in that arena.
The latest controversy revolves around Dragonfly, a project to provide China with search and personalised mobile news services that comply with the government’s censorship and surveillance requirements. In a letter to Google management, employees lamented that “currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment. That the decision to build Dragonfly was made in secret, and progressed even with the AI Principles in place, makes clear that the Principles alone are not enough. We urgently need more transparency, a seat at the table, and a commitment to clear and open processes: Google employees need to know what we’re building.”
The AI Principles referred to are essentially a high-level construct akin to Isaac Asimov’s famed ‘Three Laws of Robotics’. In a nutshell, the principles Google engineers seek to uphold are that AI should: 1) Be socially beneficial; 2) Avoid creating or reinforcing unfair bias; 3) Be built and tested for safety; 4) Be accountable to people; 5) Incorporate privacy design principles; 6) Uphold high standards of scientific excellence; 7) Be made available for uses that accord with these principles. The principles also explicitly exclude AI development for technologies that cause or are likely to cause overall harm or personal injury, and for technologies that are unfairly used for surveillance.
On the one hand, it’s encouraging to see engineers take an ethical stand like this. Far too often, I think, there is a tendency to just do the engineering and leave the ethics to somebody else. It also says a lot about Google’s company culture that employees feel empowered to take a stand like this, and about the power of social media and the Internet as a medium for bringing their concerns to public attention.
On the other hand, maybe ethics shouldn’t be the domain of engineers, or at least shouldn’t be their problem alone. Maybe Google will simply go out and find engineers who don’t have the same moral qualms, or who are willing to sign away their right to object when they sign their employment contract. If the company were clever about it, it could probably compartmentalise a project’s development in such a way that the individual components all seem innocent but can be integrated into a nefarious whole. But as we’ve seen time and time again, nowadays it takes just one leak about subterfuge of that kind and the scandal can bring even the biggest company to its knees (or at least give it a solid kick to the shins).
On an emotional level, it gives me a warm fuzzy feeling to see these guys taking a stand. A culture in which people disengage their work from their ethical concerns is not one I want to encourage or accept, and not the kind of world I want to live in. It also bodes well for the future of AI and for oversight of its application, given the technology’s potential for so much harm.
AI might one day reach the point where the ‘intelligence’ part is indistinguishable from a human’s, but we must make sure the ‘artificial’ part never gets overlooked. We are not artificial, we are the real thing and it is important that we care about ethics and take a stand to protect them.
Brett van den Bosch
Editor
Tel: +27 11 543 5800
Email: [email protected]
www: www.technews.co.za