Is AI a case that is explainable, intelligible, or hopeless?
Abstract
Wrocław University of Science and Technology, Poland

This article is a review of the book Making AI Intelligible: Philosophical Foundations, written by Herman Cappelen and Josh Dever and published in 2021 by Oxford University Press. The authors of the reviewed book address the difficult problem of interpreting the results produced by AI systems and the links between human-specific handling of content and the internal mechanisms of these systems. Weighing the potential usefulness of various frameworks developed in philosophy for solving this problem, they analyse a wide spectrum of them, from applications of Saul Kripke's work to a critical assessment of the explainable AI movement.
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
References
Cappelen, H. and Dever, J., 2021. Making AI Intelligible: Philosophical Foundations [Online]. 1st ed. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780192894724.001.0001.
Krzanowski, R. and Polak, P., 2022. The Meta-Ontology of AI Systems with Human-Level Intelligence. Philosophical Problems in Science (Zagadnienia Filozoficzne w Nauce), (73), pp.23–24.
Murez, M. and Recanati, F., 2016. Mental Files: an Introduction. Review of Philosophy and Psychology [Online], 7(2), pp.265–281. https://doi.org/10.1007/s13164-016-0314-3.
Polak, P., 2015. Bezgłośna komputerowa rewolucja w naukach eksperymentalnych [The silent computer revolution in the experimental sciences; book review]. Philosophical Problems in Science (Zagadnienia Filozoficzne w Nauce), (58), pp.151–157.
Spence, E., 2021. Stoic Philosophy and the Control Problem of AI Technology: Caught in the Web (Values and Identities). Lanham: Rowman & Littlefield Publishers.