"Just because we don't know quite what will go wrong doesn't mean we shouldn't think about it.

Originally shared by Wayne Radinsky

"Just because we don't know quite what will go wrong doesn't mean we shouldn't think about it. That's the basic idea of safety engineering: You think hard about what might go wrong to prevent it from happening. Lots of people conflate safety engineering with alarmism. But when the leaders of the Apollo program carefully thought through everything that could go wrong when you sent a rocket with astronauts to the moon, they weren't being alarmist. They were doing precisely what ultimately led to the success of the mission. This is how we should think about AI."

So says Max Tegmark, an MIT physics professor and author of "Life 3.0: Being Human in the Age of Artificial Intelligence," in which he first "explains how today's AI research will likely lead to the creation of a superintelligent AI, then goes further to explore the possible futures that could result from this creation."
https://www.spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/interview-max-tegmark-on-superintelligent-ai-cosmic-apocalypse-and-life-3-0
