Predicting the social and behavioral impact of future technologies, before they are achieved, would allow us to guide their development and regulation before these impacts become entrenched. Traditionally, such prediction has relied on qualitative, narrative methods. Science fiction science instead applies quantitative, experimental methods to technologies that do not yet exist. We suggest that the reason this approach has not been fully embraced, despite its potential benefits, is that experimental scientists may be reluctant to engage in work facing such serious validity threats. To address these threats, we consider possible constraints on the kinds of technology that science fiction science may study, as well as the unconventional, immersive methods it may require.
People believe that effort is valuable, but what kind of value does it confer? We find that displays of effort signal moral character. Eight studies (N = 5,502) demonstrate that the exertion of effort is deemed morally admirable and is monetarily rewarded, even in situations where effort does not directly generate additional product, quality, or economic value.
With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and people disagree about which moral algorithms should be used. To address this concern, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. The platform gathered 39.61 million decisions from 233 countries and territories. We found cross-cultural variation in moral preferences, such as the degree to which respondents favored sparing humans over animals, avoiding intervention, or sparing the young, and this variation fit a three-cluster structure of countries.
Autonomous vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils, such as running over pedestrians or sacrificing themselves and their passenger to save the pedestrians. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge. We found that participants approved of utilitarian AVs (that is, AVs that sacrifice their passengers for the greater good) and would like others to buy them, but they would prefer not to ride in such AVs themselves, and they would disapprove of their government mandating utilitarian AVs.