A self-help guide for autonomous systems

Author unknown

Abstract

When things go badly, we notice that something is amiss, figure out what went wrong and why, and attempt to repair the problem. Artificial systems depend on their human designers to program in responses to every eventuality and therefore typically don't even notice when things go wrong, following their programming over the proverbial, and in some cases literal, cliff. This article describes our work on the Meta-Cognitive Loop, a domain-general approach to giving artificial systems the ability to notice, assess, and repair problems. The goal is to make artificial systems more robust.
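
The notice-assess-repair cycle described in the abstract can be read as a small control loop wrapped around an agent. The Python sketch below is only an illustration of that reading, not the paper's implementation; every name in it (Expectation, notice, assess, repair, and the diagnosis labels) is a hypothetical placeholder introduced here.

```python
# A minimal sketch of a notice-assess-repair loop in the spirit of the
# Meta-Cognitive Loop described above. All names and rules here are
# hypothetical placeholders, not taken from the paper.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Expectation:
    """A prediction the system makes about its own behaviour or performance."""
    name: str
    holds: Callable[[dict], bool]  # returns True while the expectation is met


def notice(expectations: list[Expectation], observation: dict) -> Optional[Expectation]:
    """NOTICE: return the first violated expectation, or None if all hold."""
    for expectation in expectations:
        if not expectation.holds(observation):
            return expectation
    return None


def assess(violated: Expectation, observation: dict) -> str:
    """ASSESS: map a violated expectation to a coarse diagnosis (toy rules)."""
    if observation.get("sensor_noise", 0.0) > 0.5:
        return "noisy-perception"
    return "stale-model"


def repair(diagnosis: str, system_state: dict) -> None:
    """REPAIR: apply a response chosen for the diagnosis."""
    if diagnosis == "noisy-perception":
        system_state["filter_strength"] = system_state.get("filter_strength", 1.0) * 2
    else:
        system_state["needs_retraining"] = True


def meta_cognitive_step(expectations: list[Expectation],
                        observation: dict,
                        system_state: dict) -> None:
    """One pass through the loop: do nothing unless an expectation is violated."""
    violated = notice(expectations, observation)
    if violated is not None:
        repair(assess(violated, observation), system_state)
```

Read this way, the loop sits alongside the agent's ordinary control code and stays inert until one of its expectations about its own performance is violated.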

Similar books and articles

On a Cognitive Model of Semiosis. Piotr Konderak - 2015 - Studies in Logic, Grammar and Rhetoric 40 (1): 129-144.
Consciousness and Rationality: The Lesson from Artificial Intelligence. Philip Woodward - 2022 - Journal of Consciousness Studies 29 (5-6): 150-175.
Is life as a multiverse phenomenon? Claus Emmeche - 1993 - In Christopher G. Langton (ed.), Artificial Life III (Santa Fe Institute Studies in the Sciences of Complexity, Proceedings Volume XVII). Reading, Massachusetts: Addison-Wesley Publishing Company.
The missing G. Erez Firt - 2020 - AI and Society 35 (4): 995-1007.

Citations of this work

Paraconsistent logic. Graham Priest - 2008 - Stanford Encyclopedia of Philosophy.
