Analysis of the implications of the Moral Machine project as an implementation of the concept of coherent extrapolated volition for building clustered trust in autonomous machines


Krzysztof Sołoducha
https://orcid.org/0000-0003-1351-5487

Abstract

In this paper, we analyse Eliezer Yudkowsky’s concept of “coherent extrapolated volition” (CEV) as a response to the need for a post-conventional, persuasive morality that meets Anthony Giddens’s criteria of active trust and could be applied to autonomous machines. Drawing on the results of the Moral Machine project, we formulate guidelines for transforming the idea of coherent extrapolated volition into the concept of a coherent, extrapolated and clustered volition. The argument is intended to show that CEV, in its clustered version, can be used to build a technically and socially efficient database of decision-making patterns for autonomous machines.

Article Details

How to Cite
Sołoducha, K. (2022). Analysis of the implications of the Moral Machine project as an implementation of the concept of coherent extrapolated volition for building clustered trust in autonomous machines. Philosophical Problems in Science (Zagadnienia Filozoficzne W Nauce), (73), 231–255. Retrieved from https://zfn.edu.pl/index.php/zfn/article/view/616
Section: Articles
Author Biography

Krzysztof Sołoducha, Military University of Technology

Krzysztof Sołoducha, PhD, prof. WAT

krzysztof.soloducha@wat.edu.pl

https://www.researchgate.net/profile/Krzysztof-Soloducha-2


References

Arrow, K.J., 1973. Some Ordinalist-Utilitarian Notes on Rawls’s Theory of Justice. The Journal of Philosophy [Online], 70(9), pp.245–263. https://doi.org/10.2307/2025006.

Asimov, I., 2004. Runaround. In: I, Robot. New York: Bantam Books, pp.30–55.

Awad, E. et al., 2020. Universals and variations in moral decisions made in 42 countries by 70,000 participants. Proceedings of the National Academy of Sciences [Online], 117(5), pp.2332–2337. https://doi.org/10.1073/pnas.1911517117.

Bigman, Y.E. and Gray, K., 2020. Life and death decisions of autonomous vehicles. Nature [Online], 579(7797), E1–E2. https://doi.org/10.1038/s41586-020-1987-4.

Bostrom, N., 2016. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press.

Burrell, J., 2016. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society [Online], 3(1), pp.1–12. https://doi.org/10.1177/2053951715622512.

Czyżowska, D., Niemczyński, A. and Kmieć, E., 1993. Formy rozumowania moralnego Polaków w świetle danych z badania metodą Lawrence’a Kohlberga. Kwartalnik Polskiej Psychologii Rozwojowej, 2(1), pp.19–38.

Dignum, V., 2022. Responsible AI: From Principles to Action. Available at: <https://www.youtube.com/watch?v=LwKDOwwJpL4> [visited on 11 January 2023].

Edmonds, E., 2017. Americans Feel Unsafe Sharing the Road with Fully Self-Driving Cars. Available at: <https://newsroom.aaa.com/2017/03/americans-feel-unsafe-sharing-road-fully-self-driving-cars/> [visited on 11 January 2023].

Eschenbach, W.J.v., 2021. Transparency and the Black Box Problem: Why We Do Not Trust AI. Philosophy & Technology [Online], 34(4), pp.1607–1622. https://doi.org/10.1007/s13347-021-00477-0.

Foot, P., 2002. The Problem of Abortion and the Doctrine of the Double Effect. In: P. Foot, ed. Virtues and Vices: And Other Essays in Moral Philosophy [Online]. Oxford: Oxford University Press, pp.5–15. https://doi.org/10.1093/0199252866.003.0002.

Fukuyama, F., 1995. Trust: The Social Virtues and the Creation of Prosperity. New York: Free Press.

Fürnkranz, J. and Hüllermeier, E., 2011. Preference Learning: An Introduction. In: J. Fürnkranz and E. Hüllermeier, eds. Preference Learning [Online]. Berlin; Heidelberg: Springer, pp.1–17. https://doi.org/10.1007/978-3-642-14125-6_1.

Gellner, E., 2005. Words and Things: An Examination of, and an Attack on, Linguistic Philosophy. London: Routledge.

Giddens, A., 1991. Modernity and Self-Identity: Self and Society in the Late Modern Age. Stanford, CA: Stanford University Press.

Górnicka, J., 1980. Rozwój moralny w koncepcji Lawrence’a Kohlberga. Człowiek i Światopogląd, 6, pp.113–123.

Greene, J.D., 2013. Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. London: Atlantic Books.

Gryz, J., 2021. Sztuczna Inteligencja: powstanie, rozwój, rokowania. Available at: <https://www.youtube.com/watch?v=3ZDfVgC897k> [visited on 11 January 2023].

Harsanyi, J.C., 1975. Can the Maximin Principle Serve as a Basis for Morality? A Critique of John Rawls’s Theory. The American Political Science Review [Online], 69(2), pp.594–606. https://doi.org/10.2307/1959090.

Hofstede, G.H., Hofstede, G.J. and Minkov, M., 2010. Cultures and Organizations: Software of the Mind: Intercultural Cooperation and Its Importance for Survival. 3rd ed. New York: McGraw-Hill.

Hofstede, G.J., 2011. Geert Hofstede on Culture. Available at: <https://www.youtube.com/watch?v=wdh40kgyYOY> [visited on 11 January 2023].

Holstein, T. and Dodig-Crnkovic, G., 2018. Avoiding the intrinsic unfairness of the trolley problem. Proceedings of the International Workshop on Software Fairness [Online]. Gothenburg Sweden: ACM, pp.32–37. https://doi.org/10.1145/3194770.3194772.

Inglehart, R. and Welzel, C., 2005. Modernization, Cultural Change, and Democracy: The Human Development Sequence [Online]. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511790881.

Jörgensen, J., 1937. Imperatives and logic. Erkenntnis [Online], 7(1), pp.288–296. https://doi.org/10.1007/BF00666538.

Karpus, J. et al., 2021. Algorithm exploitation: Humans are keen to exploit benevolent AI. iScience [Online], 24(6), p.102679. https://doi.org/10.1016/j.isci.2021.102679.

Kohlberg, L., 1958. The Development of Modes of Moral Thinking and Choice in the Years 10 to 16 [Online]. PhD thesis. Chicago, IL: University of Chicago. Available at: <https://www.proquest.com/openview/c503bf59d762abe5818e1b24c484d41a/1?pq-origsite=gscholar&cbl=18750&diss=y> [visited on 11 January 2023].

Lo, T., 2019. "My Amazon Alexa Went Rogue and Ordered Me to Stab Myself in the Heart". Available at: <https://www.mirror.co.uk/news/uk-news/my-amazon-echo-went-rogue-21127994> [visited on 11 January 2023].

Maroto-Gómez, M. et al., 2022. An adaptive decision-making system supported on user preference predictions for human–robot interactive communication. User Modeling and User-Adapted Interaction [Online], pp.1–45. https://doi.org/10.1007/s11257-022-09321-2.

Miłaszewicz, D., 2016. Zaufanie jako wartość społeczna. Studia Ekonomiczne [Online], (259), pp.80–88. Available at: <http://cejsh.icm.edu.pl/cejsh/element/bwmeta1.element.cejsh-d64b921a-5db7-4208-9eb7-73baaa05f7e4> [visited on 11 January 2023].

Mirnig, A.G. and Meschtscherjakov, A., 2019. Trolled by the Trolley Problem: On What Matters for Ethical Decision Making in Automated Vehicles. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems [Online]. Glasgow: ACM, pp.1–10. https://doi.org/10.1145/3290605.3300739.

Nagel, T., 1986. The View from Nowhere. 1st ed. Oxford: Oxford University Press.

Nozick, R., 2013. Anarchy, State, and Utopia. New York: Basic Books.

Pasquale, F., 2016. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.

Peruzzi, N., Aseron, R. and Bhaskaran, V., 2015. A Beginner’s Guide to Conjoint Analysis. Available at: <https://www.youtube.com/watch?v=RvmZG4cFU0k> [visited on 11 January 2023].

Rawls, J., 1971. A Theory of Justice [Online]. Cambridge, MA: The Belknap Press of Harvard University Press. Available at: <http://www.gbv.de/dms/bowker/toc/9780674880146.pdf> [visited on 11 January 2023].

Searle, J.R., 1964. How to Derive “Ought” from “Is”. The Philosophical Review [Online], 73(1), pp.43–58. https://doi.org/10.2307/2183201.

Wysocki, I., 2021. The problem of indifference and homogeneity in Austrian economics: Nozick’s challenge revisited. Philosophical Problems in Science (Zagadnienia Filozoficzne w Nauce) [Online], (71), pp.9–44. Available at: <https://zfn.edu.pl/index.php/zfn/article/view/554> [visited on 24 January 2022].

Yudkowsky, E., 2004. Coherent Extrapolated Volition. San Francisco, CA: Singularity Institute for Artificial Intelligence.