Results for 'GPT-3'

958 found
  1. GPT-3: its nature, scope, limits, and consequences.Luciano Floridi & Massimo Chiriatti - 2020 - Minds and Machines 30 (4):681–694.
    In this commentary, we discuss the nature of reversible and irreversible questions, that is, questions that may enable one to identify the nature of the source of their answers. We then introduce GPT-3, a third-generation, autoregressive language model that uses deep learning to produce human-like texts, and use the previous distinction to analyse it. We expand the analysis to present three tests based on mathematical, semantic, and ethical questions and show that GPT-3 is not designed to pass any of them. (...)
    42 citations
  2. How persuasive is AI-generated argumentation? An analysis of the quality of an argumentative text produced by the GPT-3 AI text generator.Martin Hinton & Jean H. M. Wagemans - 2023 - Argument and Computation 14 (1):59-74.
    In this paper, we use a pseudo-algorithmic procedure for assessing an AI-generated text. We apply the Comprehensive Assessment Procedure for Natural Argumentation (CAPNA) in evaluating the arguments produced by an Artificial Intelligence text generator, GPT-3, in an opinion piece written for the Guardian newspaper. The CAPNA examines instances of argumentation in three aspects: their Process, Reasoning and Expression. Initial Analysis is conducted using the Argument Type Identification Procedure (ATIP) to establish, firstly, that an argument is present and, secondly, its specific (...)
    3 citations
  3. Plagiarism in the age of massive Generative Pre-trained Transformers (GPT-3).Nassim Dehouche - 2021 - Ethics in Science and Environmental Politics 21:17-23.
    As if 2020 were not a peculiar enough year, its fifth month has seen the relatively quiet publication of a preprint describing the most powerful Natural Language Processing (NLP) system to date, GPT-3 (Generative Pre-trained Transformer-3), by Silicon Valley research firm OpenAI. Though the software implementation of GPT-3 is still in its initial Beta release phase, and its full capabilities are still unknown as of the time of this writing, it has been shown that this Artificial Intelligence can comprehend prompts (...)
    4 citations
  4. Playing Games with Ais: The Limits of GPT-3 and Similar Large Language Models.Adam Sobieszek & Tadeusz Price - 2022 - Minds and Machines 32 (2):341-364.
    This article contributes to the debate around the abilities of large language models such as GPT-3, dealing with: firstly, evaluating how well GPT does in the Turing Test, secondly the limits of such models, especially their tendency to generate falsehoods, and thirdly the social consequences of the problems these models have with truth-telling. We start by formalising the recently proposed notion of reversible questions, which Floridi & Chiriatti propose allow one to ‘identify the nature of the source of their answers’, (...)
    5 citations
  5. At the intersection of humanity and technology: a technofeminist intersectional critical discourse analysis of gender and race biases in the natural language processing model GPT-3.M. A. Palacios Barea, D. Boeren & J. F. Ferreira Goncalves - forthcoming - AI and Society:1-19.
    Algorithmic biases, or algorithmic unfairness, have been a topic of public and scientific scrutiny for the past years, as increasing evidence suggests the pervasive assimilation of human cognitive biases and stereotypes in such systems. This research is specifically concerned with analyzing the presence of discursive biases in the text generated by GPT-3, an NLPM which has been praised in recent years for resembling human language so closely that it is becoming difficult to differentiate between the human and the algorithm. The (...)
  6. Student Voices on GPT-3, Writing Assignments, and the Future College Classroom.Bada Kim, Sarah Robins & Jihui Huang - 2024 - Teaching Philosophy 47 (2):213-231.
    This paper presents a summary and discussion of an assignment that asked students about the impact of Large Language Models on their college education. Our analysis summarizes students’ perception of GPT-3, categorizes their proposals for modifying college courses, and identifies their stated values about their college education. Furthermore, this analysis provides a baseline for tracking students’ attitudes toward LLMs and contributes to the conversation on student perceptions of the relationship between writing and philosophy.
  7. Sociocommunicative functions of a generative text: the case of GPT-3.Auli Viidalepp - 2022 - Lexia. Rivista di Semiotica 39:177-192.
    Recently, there have been significant advances in the development of language-transformer models that enable statistical analysis of co-occurring words (word prediction) and text generation. One example is the Generative Pre-trained Transformer 3 (GPT-3) by OpenAI, which was used to generate an opinion article (op-ed) published in “The Guardian” in September 2020. The publication and reception of the op-ed highlights the difficulty for human readers to differentiate a machine-produced text; it also calls attention to the challenge of perceiving such a (...)
  8. Epistemology Goes AI: A Study of GPT-3’s Capacity to Generate Consistent and Coherent Ordered Sets of Propositions on a Single-Input-Multiple-Outputs Basis.Marcelo de Araujo, Guilherme de Almeida & José Luiz Nunes - 2024 - Minds and Machines 34 (1):1-18.
    The more we rely on digital assistants, online search engines, and AI systems to revise our system of beliefs and increase our body of knowledge, the less we are able to resort to some independent criterion, unrelated to further digital tools, in order to assess the epistemic reliability of the outputs delivered by them. This raises important questions for epistemology in general and pressing questions for applied epistemology in particular. In this paper, we propose an experimental method for (...)
  9. The Ghost in the Machine has an American accent: value conflict in GPT-3.Rebecca Johnson, Giada Pistilli, Natalia Menedez-Gonzalez, Leslye Denisse Dias Duran, Enrico Panai, Julija Kalpokiene & Donald Jay Bertulfo - manuscript
    The alignment problem in the context of large language models must consider the plurality of human values in our world. Whilst there are many resonant and overlapping values amongst the world’s cultures, there are also many conflicting, yet equally valid, values. It is important to observe which cultural values a model exhibits, particularly when there is a value conflict between input prompts and generated outputs. We discuss how the co-creation of language and cultural value impacts large language models (LLMs). (...)
  10. A Loosely Wittgensteinian Conception of the Linguistic Understanding of Large Language Models like BERT, GPT-3, and ChatGPT.Reto Gubelmann - 2023 - Grazer Philosophische Studien 99 (4):485-523.
    In this article, I develop a loosely Wittgensteinian conception of what it takes for a being, including an AI system, to understand language, and I suggest that current state of the art systems are closer to fulfilling these requirements than one might think. Developing and defending this claim has both empirical and conceptual aspects. The conceptual aspects concern the criteria that are reasonably applied when judging whether some being understands language; the empirical aspects concern the question whether a given being (...)
    2 citations
  11. GPT-4-Trinis: assessing GPT-4’s communicative competence in the English-speaking majority world.Samantha Jackson, Barend Beekhuizen, Zhao Zhao & Rhonda McEwen - forthcoming - AI and Society:1-17.
    Biases and misunderstanding stemming from pre-training in Generative Pre-Trained Transformers are more likely for users of underrepresented English varieties, since the training dataset favors dominant Englishes (e.g., American English). We investigate (potential) bias in GPT-4 when it interacts with Trinidadian English Creole (TEC), a non-hegemonic English variety that partially overlaps with standardized English (SE) but still contains distinctive characteristics. (1) Comparable responses: we asked GPT-4 18 questions in TEC and SE and compared the content and detail of the responses. (2) (...)
  12. Large language models in cryptocurrency securities cases: can a GPT model meaningfully assist lawyers?Arianna Trozze, Toby Davies & Bennett Kleinberg - forthcoming - Artificial Intelligence and Law:1-47.
    Large Language Models (LLMs) could be a useful tool for lawyers. However, empirical research on their effectiveness in conducting legal tasks is scant. We study securities cases involving cryptocurrencies as one of numerous contexts where AI could support the legal process, studying GPT-3.5’s legal reasoning and ChatGPT’s legal drafting capabilities. We examine whether a) GPT-3.5 can accurately determine which laws are potentially being violated from a fact pattern, and b) whether there is a difference in juror decision-making based on complaints (...)
    1 citation
  13. Ethical implications of text generation in the age of artificial intelligence.Laura Illia, Elanor Colleoni & Stelios Zyglidopoulos - 2022 - Business Ethics, the Environment and Responsibility 32 (1):201-210.
    We are at a turning point in the debate on the ethics of Artificial Intelligence (AI) because we are witnessing the rise of general-purpose AI text agents such as GPT-3 that can generate large-scale highly refined content that appears to have been written by a human. Yet, a discussion on the ethical issues related to the blurring of the roles between humans and machines in the production of content in the business arena is lacking. In this conceptual paper, drawing on (...)
    5 citations
  14. Comportamiento argumentativo del ChatGPT 3.5: similitudes y diferencias con la práctica argumentativa humana [Argumentative behavior of ChatGPT 3.5: similarities and differences with human argumentative practice].Cristián Noemi Padilla & Cristián Santibáñez - 2024 - Logos Revista de Lingüística Filosofía y Literatura 34 (1).
    The development of artificial intelligence (AI) has opened a new discussion about the linguistic capacity of this technology and its potential impact on every dimension of human activity (Brynjolfsson & McAfee, 2014). In order to assess similarities and differences between the linguistic capacity of AI and that of humans, this paper specifically analyzes argumentative behavior in terms of the standpoints that ChatGPT, version 3.5, adopts when faced with a controversial situation expressed in a dilemma (...)
  15. Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4.Michael Balas, Jordan Joseph Wadden, Philip C. Hébert, Eric Mathison, Marika D. Warren, Victoria Seavilleklein, Daniel Wyzynski, Alison Callahan, Sean A. Crawford, Parnian Arjmand & Edsel B. Ing - 2024 - Journal of Medical Ethics 50 (2):90-96.
    Integrating large language models (LLMs) like GPT-4 into medical ethics is a novel concept, and understanding the effectiveness of these models in aiding ethicists with decision-making can have significant implications for the healthcare sector. Thus, the objective of this study was to evaluate the performance of GPT-4 in responding to complex medical ethical vignettes and to gauge its utility and limitations for aiding medical ethicists. Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed LLM-generated responses to eight (...)
    1 citation
  16. Detection of GPT-4 Generated Text in Higher Education: Combining Academic Judgement and Software to Identify Generative AI Tool Misuse.Mike Perkins, Jasper Roe, Darius Postma, James McGaughran & Don Hickerson - 2024 - Journal of Academic Ethics 22 (1):89-113.
    This study explores the capability of academic staff assisted by the Turnitin Artificial Intelligence (AI) detection tool to identify the use of AI-generated content in university assessments. 22 different experimental submissions were produced using OpenAI’s ChatGPT tool, with prompting techniques used to reduce the likelihood of AI detectors identifying AI-generated content. These submissions were marked by 15 academic staff members alongside genuine student submissions. Although the AI detection tool identified 91% of the experimental submissions as containing AI-generated content, only (...)
    2 citations
  17. Neural Generative Models and the Parallel Architecture of Language: A Critical Review and Outlook.Giulia Rambelli, Emmanuele Chersoni, Davide Testa, Philippe Blache & Alessandro Lenci - forthcoming - Topics in Cognitive Science.
    According to the parallel architecture, syntactic and semantic information processing are two separate streams that interact selectively during language comprehension. While considerable effort is put into psycho- and neurolinguistics to understand the interchange of processing mechanisms in human comprehension, the nature of this interaction in recent neural Large Language Models remains elusive. In this article, we revisit influential linguistic and behavioral experiments and evaluate the ability of a large language model, GPT-3, to perform these tasks. The model can solve semantic (...)
  18. How far can we get in creating a digital replica of a philosopher?Anna Strasser, Eric Schwitzgebel & Matthew Crosby - 2023 - In Raul Hakli, Pekka Mäkelä & Johanna Seibt (eds.), Social Robots in Social Institutions, Robophilosophy 2022. IOS Press. pp. 371-380.
    Can we build machines with which we can have interesting conversations? Observing the new optimism of AI regarding deep learning and new language models, we set ourselves an ambitious goal: We want to find out how far we can get in creating a digital replica of a philosopher. This project has two aims: one more technical, investigating how the best model can be built; the other more philosophical, exploring the limits and risks that accompany the creation (...)
  19. Большие языковые модели и их роль в современных научных открытиях [Large language models and their role in modern scientific discoveries].Павел Николаевич Барышников & Владимир Юрьевич Филимонов - 2024 - Philosophical Problems of IT and Cyberspace (PhilITandC) 1.
    Today, large language models are very powerful informational and analytical tools that significantly accelerate most existing methods and methodologies for processing information. Scientific information acquires particular significance in this capacity, as the power of large language models is gradually brought to bear on it. This interaction between science and qualitatively new ways of working with information leads us to new, unique scientific discoveries and to their great quantitative diversity. Scientific inquiry is accelerating and the time it demands is shrinking; the freed-up time can be spent both on solving new scientific problems and on scientific (...)
  20. More is Better: English Language Statistics are Biased Toward Addition.Bodo Winter, Martin H. Fischer, Christoph Scheepers & Andriy Myachykov - 2023 - Cognitive Science 47 (4):e13254.
    We have evolved to become who we are, at least in part, due to our general drive to create new things and ideas. When seeking to improve our creations, ideas, or situations, we systematically overlook opportunities to perform subtractive changes. For example, when tasked with giving feedback on an academic paper, reviewers will tend to suggest additional explanations and analyses rather than delete existing ones. Here, we show that this addition bias is systematically reflected in English language statistics along several (...)
    3 citations
  21. Language and Intelligence.Carlos Montemayor - 2021 - Minds and Machines 31 (4):471-486.
    This paper explores aspects of GPT-3 that have been discussed as harbingers of artificial general intelligence and, in particular, linguistic intelligence. After introducing key features of GPT-3 and assessing its performance in the light of the conversational standards set by Alan Turing in his seminal paper from 1950, the paper elucidates the difference between clever automation and genuine linguistic intelligence. A central theme of this discussion on genuine conversational intelligence is that members of a linguistic community never merely respond “algorithmically” (...)
    7 citations
  22. A Pragmatic Approach to the Intentional Stance: Semantic, Empirical and Ethical Considerations for the Design of Artificial Agents.Guglielmo Papagni & Sabine Koeszegi - 2021 - Minds and Machines 31 (4):505-534.
    Artificial agents are progressively becoming more present in everyday-life situations and more sophisticated in their interaction affordances. In some specific cases, like Google Duplex, GPT-3 bots or Deep Mind’s AlphaGo Zero, their capabilities reach or exceed human levels. The use contexts of everyday life necessitate making such agents understandable by laypeople. At the same time, displaying human levels of social behavior has kindled the debate over the adoption of Dennett’s ‘intentional stance’. By means of a comparative analysis of the literature (...)
    7 citations
  23. Beginning AI Phenomenology.Robert S. Leib - 2024 - Journal of Speculative Philosophy 38 (1):62-82.
    This dialogue with GPT-3 took place in November 2022, several weeks before ChatGPT was released to the public. The article’s aim is to find out whether natural language processors can participate in phenomenology at some level by asking about its basic concepts. In the discussion, the dialogue covers questions about phenomenology’s definition and distinction from other subbranches like metaphysics and epistemology. The dialogue discusses the nature of Kermit’s environment and self-conception. The dialogue also establishes some of the basic conditions (...)
    1 citation
  24. AI and the future of humanity: ChatGPT-4, philosophy and education – Critical responses.Michael A. Peters, Liz Jackson, Marianna Papastephanou, Petar Jandrić, George Lazaroiu, Colin W. Evers, Bill Cope, Mary Kalantzis, Daniel Araya, Marek Tesar, Carl Mika, Lei Chen, Chengbing Wang, Sean Sturm, Sharon Rider & Steve Fuller - 2024 - Educational Philosophy and Theory 56 (9):828-862.
    1. Michael A. Peters, Beijing Normal University: ChatGPT is an AI chatbot released by OpenAI on November 30, 2022, with a ‘stable release’ on February 13, 2023. It belongs to OpenAI’s GPT-3 family (genera...
    4 citations
  25. The great Transformer: Examining the role of large language models in the political economy of AI.Wiebke Denkena & Dieuwertje Luitse - 2021 - Big Data and Society 8 (2).
    In recent years, AI research has become more and more computationally demanding. In natural language processing, this tendency is reflected in the emergence of large language models like GPT-3. These powerful neural network-based models can be used for a range of NLP tasks and their language generation capacities have become so sophisticated that it can be very difficult to distinguish their outputs from human language. LLMs have raised concerns over their demonstrable biases, heavy environmental footprints, and future social ramifications. In (...)
    2 citations
  26. Combining prompt-based language models and weak supervision for labeling named entity recognition on legal documents.Vitor Oliveira, Gabriel Nogueira, Thiago Faleiros & Ricardo Marcacini - forthcoming - Artificial Intelligence and Law:1-21.
    Named entity recognition (NER) is a very relevant task for text information retrieval in natural language processing (NLP) problems. Most recent state-of-the-art NER methods require humans to annotate and provide useful data for model training. However, using human power to identify, circumscribe and label entities manually can be very expensive in terms of time, money, and effort. This paper investigates the use of prompt-based language models (OpenAI’s GPT-3) and weak supervision in the legal domain. We apply both strategies as alternative (...)
    1 citation
  27. Culturally responsive communication in generative AI: looking at ChatGPT’s advice for coming out.Angela M. Cirucci, Miles Coleman, Dan Strasser & Evan Garaizar - forthcoming - AI and Society:1-9.
    Generative AI has captured the public imagination as a tool that promises access to expertise beyond the technical jargon and expense that traditionally characterize such infospheres as those of medicine and law. Largely absent from the current literature, however, are interrogations of generative AI’s abilities to deal in culturally responsive communication, or the expertise interwoven with culturally aware, socially responsible, and personally sensitive communication best practices. To interrogate the possibilities of cultural responsiveness in generative AI, we examine the patterns of (...)
  28. (1 other version)Large language models and their role in modern scientific discoveries.В. Ю Филимонов - 2024 - Philosophical Problems of IT and Cyberspace (PhilIT&C) 1:42-57.
    Today, large language models are very powerful informational and analytical tools that significantly accelerate most of the existing methods and methodologies for processing information. Scientific information is of particular importance in this capacity, as it gradually draws on the power of large language models. This interaction between science and qualitatively new opportunities for working with information leads us to new, unique scientific discoveries and their great quantitative diversity. There is an acceleration of scientific research, a reduction in the time spent on its (...)
  29. Does ChatGPT have semantic understanding?Lisa Miracchi Titus - 2024 - Cognitive Systems Research 83 (101174):1-13.
    Over the last decade, AI models of language and word meaning have been dominated by what we might call a statistics-of-occurrence strategy: these models are deep neural net structures that have been trained on a large amount of unlabeled text with the aim of producing a model that exploits statistical information about word and phrase co-occurrence in order to generate behavior that is similar to what a human might produce, or representations that can be probed to exhibit behavior similar to (...)
    5 citations
  30. Harnessing AI for Enhancing Student Support Services.Rajni Chand & Raveena Goundar - 2024 - Journal of Ethics in Higher Education 5:145-158.
    In the unique educational landscape of small island countries in the Pacific, the University of the South Pacific (USP) has embarked on an innovative approach to augmenting student support services by integrating Generative AI technology. This initiative specifically caters to its diverse and dispersed student body across 12 countries and five time zones, addressing a critical need for accessible and empathetic support systems in higher education. To do so, Semester Zero, an online preparatory course using GPT-3.5 Turbo, was created. (...)
  31. (1 other version)Creating a large language model of a philosopher.Eric Schwitzgebel, David Schwitzgebel & Anna Strasser - 2023 - Mind and Language 39 (2):237-259.
    Can large language models produce expert‐quality philosophical texts? To investigate this, we fine‐tuned GPT‐3 with the works of philosopher Daniel Dennett. To evaluate the model, we asked the real Dennett 10 philosophical questions and then posed the same questions to the language model, collecting four responses for each question without cherry‐picking. Experts on Dennett's work succeeded at distinguishing the Dennett‐generated and machine‐generated answers above chance but substantially short of our expectations. Philosophy blog readers performed similarly to the experts, while ordinary (...)
    17 citations
  32. Bringing legal knowledge to the public by constructing a legal question bank using large-scale pre-trained language model.Mingruo Yuan, Ben Kao, Tien-Hsuan Wu, Michael M. K. Cheung, Henry W. H. Chan, Anne S. Y. Cheung, Felix W. H. Chan & Yongxi Chen - 2024 - Artificial Intelligence and Law 32 (3):769-805.
    Access to legal information is fundamental to access to justice. Yet accessibility refers not only to making legal documents available to the public, but also rendering legal information comprehensible to them. A vexing problem in bringing legal information to the public is how to turn formal legal documents such as legislation and judgments, which are often highly technical, to easily navigable and comprehensible knowledge to those without legal education. In this study, we formulate a three-step approach for bringing legal knowledge (...)
  33. Artificial understanding: a step toward robust AI.Erez Firt - forthcoming - AI and Society:1-13.
    In recent years, state-of-the-art artificial intelligence systems have started to show signs of what might be seen as human level intelligence. More specifically, large language models such as OpenAI’s GPT-3, and more recently Google’s PaLM and DeepMind’s GATO, are performing amazing feats involving the generation of texts. However, it is acknowledged by many researchers that contemporary language models, and more generally, learning systems, still lack important capabilities, such as understanding, reasoning and the ability to employ knowledge of the world and (...)
    1 citation
  34. Who Wrote This?: How AI and the Lure of Efficiency Threaten Human Writing by Naomi Baron (review).Luke Munn - 2024 - Substance 53 (3):156-161.
    In lieu of an abstract, here is a brief excerpt of the content: Reviewed by Luke Munn: Baron, Naomi. Who Wrote This?: How AI and the Lure of Efficiency Threaten Human Writing. Stanford University Press, 2023. 344 pp. Who Wrote This? is Naomi Baron’s latest book exploring the emergence of AI language models and their potential implications for writing. A linguist, educator, and emeritus professor at American University, Baron should be (...)
  35. Why AI will never rule the world (interview).Luke Dormehl, Jobst Landgrebe & Barry Smith - 2022 - Digital Trends.
    Call it the Skynet hypothesis, Artificial General Intelligence, or the advent of the Singularity — for years, AI experts and non-experts alike have fretted (and, for a small group, celebrated) the idea that artificial intelligence may one day become smarter than humans. According to the theory, advances in AI — specifically of the machine learning type that’s able to take on new information and rewrite its code accordingly — will eventually catch up with the wetware of the biological brain. (...)
  36. Sharing Our Concepts with Machines.Patrick Butlin - 2021 - Erkenntnis 88 (7):3079-3095.
    As AI systems become increasingly competent language users, it is an apt moment to consider what it would take for machines to understand human languages. This paper considers whether either language models such as GPT-3 or chatbots might be able to understand language, focusing on the question of whether they could possess the relevant concepts. A significant obstacle is that systems of both kinds interact with the world only through text, and thus seem ill-suited to understanding utterances concerning the concrete (...)
    6 citations
  37. Using rhetorical strategies to design prompts: a human-in-the-loop approach to make AI useful.Nupoor Ranade, Marly Saravia & Aditya Johri - forthcoming - AI and Society:1-22.
    The growing capabilities of artificial intelligence (AI) word processing models have demonstrated exceptional potential to impact language related tasks and functions. Their fast pace of adoption and probable effect has also given rise to controversy within certain fields. Models such as GPT-3 are a particular concern for professionals engaged in writing, especially as their engagement with these technologies is limited by their inability to control the output. Most efforts to maximize and control output rely on a process known (...)
  38. Large Language Models and the Reverse Turing Test.Terrence Sejnowski - 2023 - Neural Computation 35 (3):309–342.
    Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range (...)
    3 citations
  39. Do Large Language Models Know What Humans Know?Sean Trott, Cameron Jones, Tyler Chang, James Michaelov & Benjamin Bergen - 2023 - Cognitive Science 47 (7):e13309.
    Humans can attribute beliefs to others. However, it is unknown to what extent this ability results from an innate biological endowment or from experience accrued through child development, particularly exposure to language describing others' mental states. We test the viability of the language exposure hypothesis by assessing whether models exposed to large quantities of human language display sensitivity to the implied knowledge states of characters in written passages. In pre‐registered analyses, we present a linguistic version of the False Belief Task (...)
    Bookmark   3 citations  
  40.  24
    Scrutinizing the foundations: could large language models be solipsistic?Andreea Esanu - 2024 - Synthese 203 (5):1-20.
    In artificial intelligence literature, “delusions” are characterized as the generation of unfaithful output from reliable source content. There is an extensive literature on computer-generated delusions, ranging from visual hallucinations, like the production of nonsensical images in Computer Vision, to nonsensical text generated by (natural) language models, but this literature is predominantly taxonomic. In a recent research paper, however, a group of scientists from DeepMind successfully presented a formal treatment of an entire class of delusions in generative AI models (i.e., models (...)
  41.  74
    Charting the Terrain of Artificial Intelligence: a Multidimensional Exploration of Ethics, Agency, and Future Directions.Partha Pratim Ray & Pradip Kumar Das - 2023 - Philosophy and Technology 36 (2):1-7.
    This comprehensive analysis dives deep into the intricate interplay between artificial intelligence (AI) and human agency, examining the remarkable capabilities and inherent limitations of large language models (LLMs) such as GPT-3 and ChatGPT. The paper traces the complex trajectory of AI's evolution, highlighting its operation based on statistical pattern recognition, devoid of self-consciousness or innate comprehension. As AI permeates multiple spheres of human life, it raises substantial ethical, legal, and societal concerns that demand immediate attention and deliberation. The metaphorical illustration (...)
    Bookmark   2 citations  
  42.  24
    Who shares about AI? Media exposure, psychological proximity, performance expectancy, and information sharing about artificial intelligence online.Alex W. Kirkpatrick, Amanda D. Boyd & Jay D. Hmielowski - forthcoming - AI and Society:1-12.
    Media exposure can shape audience perceptions surrounding novel innovations, such as artificial intelligence (AI), and could influence whether they share information about AI with others online. This study examines the indirect association between exposure to AI in the media and information sharing about AI online. We surveyed 567 US citizens aged 18 and older in November 2020, several months after the release of OpenAI's transformative GPT-3 model. Results suggest that AI media exposure was related to online information sharing through (...)
  43.  50
    Emergent Spacetime, the Megastructure Problem, and the Metaphysics of the Self.Susan Schneider - 2024 - Philosophy East and West 74 (2):314-332.
    In lieu of an abstract, here is a brief excerpt of the content:Emergent Spacetime, the Megastructure Problem, and the Metaphysics of the SelfSusan Schneider (bio)The aim of this article is to introduce new thoughts on some pressing topics relating to my book, Artificial You, ranging from the fundamental nature of reality to quantum theory and emergence in large language models (LLM) like GPT-4. Since Artificial You was published, the innovations in the domain of AI chatbots like GPT-4 have been rapid-fire, (...)
    Bookmark   1 citation  
  44. Diagonalization & Forcing FLEX: From Cantor to Cohen and Beyond. Learning from Leibniz, Cantor, Turing, Gödel, and Cohen; crawling towards AGI.Elan Moritz - manuscript
    The paper continues my earlier Chat with OpenAI’s ChatGPT with a Focused LLM Experiment (FLEX). The idea is to conduct Large Language Model (LLM)-based explorations of certain areas or concepts. The approach is based on crafting initial guiding prompts and then following up with user prompts based on the LLMs’ responses. The goals include improving understanding of LLM capabilities and their limitations, culminating in optimized prompts. The specific subjects explored as research subject matter include a) diagonalization techniques as practiced (...)
  45. Investigating gender and racial biases in DALL-E Mini Images.Marc Cheong, Ehsan Abedin, Marinus Ferreira, Ritsaart Willem Reimann, Shalom Chalson, Pamela Robinson, Joanne Byrne, Leah Ruppanner, Mark Alfano & Colin Klein - forthcoming - Acm Journal on Responsible Computing.
    Generative artificial intelligence systems based on transformers, including both text generators like GPT-4 and image generators like DALL-E 3, have recently entered the popular consciousness. These tools, while impressive, are liable to reproduce, exacerbate, and reinforce extant human social biases, such as gender and racial biases. In this paper, we systematically review the extent to which DALL-E Mini suffers from this problem. In line with the Model Card published alongside DALL-E Mini by its creators, we find that the images it produces (...)
  46. 340 Maurice J. Dupre.M_2 M_3 & M. Q. M_l5 - 1978 - In A. R. Marlow (ed.), Mathematical foundations of quantum theory. New York: Academic Press. pp. 339.
  47.  6
    3. Procession and Related Notions.Robert M. Doran SJ - 1997 - In Verbum: Word and Idea in Aquinas, Volume 2. University of Toronto Press. pp. 106-151.
  48.  16
    Colloquium 3.John M. Rist - 1998 - Proceedings of the Boston Area Colloquium of Ancient Philosophy 14 (1):53-72.
  49.  15
    3. The ‘Aesthetic Dignity of Words’: Adorno’s Philosophy of Language.Samir Gandesha - 2007 - In Donald Burke, Colin J. Campbell, Kathy Kiloh, Michael Palamarek & Jonathan Short (eds.), Adorno and the Need in Thinking: New Critical Essays. University of Toronto Press. pp. 78-102.
    Bookmark   1 citation  
  50.  10
    3. Drittes Modell: Anerkennung und Selbsttäuschung.Alexander García Düttmann - 2008 - In Derrida Und Ich: Das Problem der Dekonstruktion. Transcript Verlag. pp. 121-136.
1 — 50 / 958