Microsoft CEO Satya Nadella today shared his thoughts on the future of artificial intelligence. With much debate these days over whether A.I. is a good thing or a bad thing, he wrote that we should build intelligence that augments human abilities and experiences.
First, and ultimately, it’s not going to be about human vs. machine. We humans have creativity, empathy, emotion, physicality, and insight that can be mixed with powerful A.I. computation—the ability to reason over large amounts of data and do pattern recognition quickly—to help move society forward. Second, we have to build trust directly into our technology. We must infuse technology with protections for privacy, transparency, and security. A.I. devices must be designed to detect new threats and devise appropriate protections as they evolve. And third, all of the technology we build must be inclusive and respectful to everyone.
He also shared the following principles and goals that, as an industry and a society, we should discuss and debate.
- A.I. must be designed to assist humanity: As we build more autonomous machines, we need to respect human autonomy. Collaborative robots, or co-bots, should do dangerous work like mining, thus creating a safety net and safeguards for human workers.
- A.I. must be transparent: We should be aware of how the technology works and what its rules are. We want not just intelligent machines but intelligible machines. Not artificial intelligence but symbiotic intelligence. The tech will know things about humans, but the humans must know about the machines. People should have an understanding of how the technology sees and analyzes the world. Ethics and design go hand in hand.
- A.I. must maximize efficiencies without destroying the dignity of people: It should preserve cultural commitments, empowering diversity. We need broader, deeper, and more diverse engagement of populations in the design of these systems. The tech industry should not dictate the values and virtues of this future.
- A.I. must be designed for intelligent privacy: sophisticated protections that secure personal and group information in ways that earn trust.
- A.I. must have algorithmic accountability so that humans can undo unintended harm. We must design these technologies for the expected and the unexpected.
- A.I. must guard against bias, ensuring proper and representative research so that the wrong heuristics cannot be used to discriminate.
Read Satya’s full post here.