Results for 'Bayesian learning theory'

967 found
  1. Editors' Introduction: Why Formal Learning Theory Matters for Cognitive Science.Sean Fulop & Nick Chater - 2013 - Topics in Cognitive Science 5 (1):3-12.
    This article reviews a number of different areas in the foundations of formal learning theory. After outlining the general framework for formal models of learning, the Bayesian approach to learning is summarized. This leads to a discussion of Solomonoff's Universal Prior Distribution for Bayesian learning. Gold's model of identification in the limit is also outlined. We next discuss a number of aspects of learning theory raised in contributed papers, related to both (...)
  2. Beyond the discussion between Learning Theory of Piagetian Propositional Logic and that of Bayesian Causational Inference. 은은숙 - 2019 - Journal of the New Korean Philosophical Association 97:247-266.
    The Bayesian model of probabilistic inference, which emerged some fifteen years ago, has become a powerful core thesis dominating research in statistics, philosophy of science, psychology, cognitive science, computer science, and neuroscience, and is now making a major impact in education and even in logic. Owing to this diversification, according to Good, Bayesianism can be classified into 46,656 varieties. Among these various models, scholars who apply the Bayesian probabilistic model to learning theory claim that a learner's learning process always follows exactly the process of Bayesian probabilistic inference. It is clear, however, that this probabilistic model inherits Piaget's constructivist outlook, for the scholars who support this model themselves describe their scholarly movement as a "new rational constructivism" (...)
  3. General properties of bayesian learning as statistical inference determined by conditional expectations.Zalán Gyenis & Miklós Rédei - 2017 - Review of Symbolic Logic 10 (4):719-755.
    We investigate the general properties of general Bayesian learning, where “general Bayesian learning” means inferring a state from another that is regarded as evidence, and where the inference is conditionalizing the evidence using the conditional expectation determined by a reference probability measure representing the background subjective degrees of belief of a Bayesian Agent performing the inference. States are linear functionals that encode probability measures by assigning expectation values to random variables via integrating them with respect (...)
    18 citations
  4. General properties of general Bayesian learning.Miklós Rédei & Zalán Gyenis - unknown
    We investigate the general properties of general Bayesian learning, where ``general Bayesian learning'' means inferring a state from another that is regarded as evidence, and where the inference is conditionalizing the evidence using the conditional expectation determined by a reference probability measure representing the background subjective degrees of belief of a Bayesian Agent performing the inference. States are linear functionals that encode probability measures by assigning expectation values to random variables via integrating them with respect (...)
    2 citations
  5. A Bayesian approach to (online) transfer learning: Theory and algorithms.Xuetong Wu, Jonathan H. Manton, Uwe Aickelin & Jingge Zhu - 2023 - Artificial Intelligence 324 (C):103991.
  6. A higher order Bayesian decision theory of consciousness.Hakwan Lau - 2008 - In Rahul Banerjee & Bikas K. Chakrabarti (eds.), Models of brain and mind: physical, computational, and psychological approaches. Boston: Elsevier.
    It is usually taken as given that consciousness involves superior or more elaborate forms of information processing. Contemporary models equate consciousness with global processing, system complexity, or depth or stability of computation. This is in stark contrast with the powerful philosophical intuition that being conscious is more than just having the ability to compute. I argue that it is also incompatible with current empirical findings. I present a model that is free from the strong assumption that consciousness predicts superior performance. (...)
    33 citations
  7. Bayes or Bust?: A Critical Examination of Bayesian Confirmation Theory.John Earman - 1992 - MIT Press.
    There is currently no viable alternative to the Bayesian analysis of scientific inference, yet the available versions of Bayesianism fail to do justice to several aspects of the testing and confirmation of scientific hypotheses. Bayes or Bust? provides the first balanced treatment of the complex set of issues involved in this nagging conundrum in the philosophy of science. Both Bayesians and anti-Bayesians will find a wealth of new insights on topics ranging from Bayes’s original paper to contemporary formal (...) theory. In a paper published posthumously in 1763, the Reverend Thomas Bayes made a seminal contribution to the understanding of "analogical or inductive reasoning." Building on his insights, modern Bayesians have developed an account of scientific inference that has attracted numerous champions as well as numerous detractors. Earman argues that Bayesianism provides the best hope for a comprehensive and unified account of scientific inference, yet the presently available versions of Bayesianism fail to do justice to several aspects of the testing and confirming of scientific theories and hypotheses. By focusing on the need for a resolution to this impasse, Earman sharpens the issues on which a resolution turns. John Earman is Professor of History and Philosophy of Science at the University of Pittsburgh.
    384 citations
  8. Teaching-Learning Model of Structure-Constructivism Based on Piagetian Propositional Logic and Bayesian Causational Inference. 은은숙 - 2020 - Journal of the New Korean Philosophical Association 99:191-217.
    The purpose of this study is to develop a new convergent teaching-learning model grounded in the convergence, pursued over the past twenty years, of Piaget's propositional-logic theory of learning and the Bayesian probabilistic theory of learning. The author names this new model the "Bayesian structure-constructivist Model of Teaching-learning" (BMT). From a historical-critical and formalizing perspective, the paper first analyzes the theory of learning as interpreted through Piaget's propositional-logic model and as interpreted through the Bayesian probabilistic inference model; in the latter half, on this basis, it presents in detail the key features of the BMT, a new teaching-learning model that integrates both theories of learning from a pedagogical point of view. To mention a few core points: first, the BMT (...)
  9. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.Hongjing Lu, Randall R. Rojas, Tom Beckers & Alan L. Yuille - 2016 - Cognitive Science 40 (2):404-439.
    Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely (...)
    3 citations
  10. Theory-based Bayesian models of inductive learning and reasoning.Joshua B. Tenenbaum, Thomas L. Griffiths & Charles Kemp - 2006 - Trends in Cognitive Sciences 10 (7):309-318.
  11. Is Trust the result of Bayesian Learning?Bernd Lahno - 2004 - Jahrbuch Für Handlungs- Und Entscheidungstheorie 3:47-68.
  12. Bayesian Teaching Model of image Based on Image Recognition by Deep Learning. 은은숙 - 2020 - Journal of the New Korean Philosophical Association 102:271-296.
    Synthesizing the principles of image recognition in deep learning with those of image recognition in young children, this paper proposes a new teaching-learning model for image-concept learning: the "Bayesian Structure-constructivist Teaching-learning Model" (BSTM). In other words, it aims to construct a new teaching model for young children's image-concept learning, building on the synergy gained by comparing the principles of machine learning with those of human learning. In this context, the paper proceeds on three levels. First, it examines, from a historical-critical perspective, the historically important theories of children's image learning: the "whole-object hypothesis," the "taxonomic hypothesis," the "exclusivity hypothesis," and the "basic-level category hypothesis." Second, (...)
  13. Learning Bayesian networks from data: An information-theory based approach.Jie Cheng, Russell Greiner, Jonathan Kelly, David Bell & Weiru Liu - 2002 - Artificial Intelligence 137 (1-2):43-90.
  14. Predicting Learning: Understanding the Role of Executive Functions in Children's Belief Revision Using Bayesian Models.Joseph A. Colantonio, Igor Bascandziev, Maria Theobald, Garvin Brod & Elizabeth Bonawitz - forthcoming - Topics in Cognitive Science.
    Recent studies suggest that learners who are asked to predict the outcome of an event learn more than learners who are asked to evaluate it retrospectively or not at all. One possible explanation for this “prediction boost” is that it helps learners engage metacognitive reasoning skills that may not be spontaneously leveraged, especially for individuals with still-developing executive functions. In this paper, we combined multiple analytic approaches to investigate the potential role of executive functions in elementary school-aged children's science (...). We performed an experiment that investigates children's science learning during a water displacement task where a “prediction boost” had previously been observed—children either made an explicit prediction or evaluated an event post hoc (i.e., postdiction). We then considered the relation of executive function measures and learning, which were collected following the main experiment. Via mixed effects regression models, we found that stronger executive function skills (i.e., stronger inhibition and switching scores) were associated with higher accuracy in Postdiction but not in the Prediction Condition. Using a theory-based Bayesian model, we simulated children's individual performance on the learning task (capturing “belief flexibility”), and compared this “flexibility” to the other measures to understand the relationship between belief revision, executive function, and prediction. Children in the Prediction Condition showed near-ceiling “belief flexibility” scores, which were significantly higher than among children in the Postdiction Condition. We also found a significant correlation between children's executive function measures and our “belief flexibility” parameter, but only for children in the Postdiction Condition. These results indicate that when children provided responses post hoc, they may have required stronger executive function capacities to navigate the learning task. Additionally, these results suggest that the “prediction boost” in children's science learning could be explained by increased metacognitive flexibility in the belief revision process.
  15. Bayes and Darwin: How replicator populations implement Bayesian computations.Dániel Czégel, Hamza Giaffar, Joshua B. Tenenbaum & Eörs Szathmáry - 2022 - Bioessays 44 (4):2100255.
    Bayesian learning theory and evolutionary theory both formalize adaptive competition dynamics in possibly high‐dimensional, varying, and noisy environments. What do they have in common and how do they differ? In this paper, we discuss structural and dynamical analogies and their limits, both at a computational and an algorithmic‐mechanical level. We point out mathematical equivalences between their basic dynamical equations, generalizing the isomorphism between Bayesian update and replicator dynamics. We discuss how these mechanisms provide analogous answers (...)
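    The isomorphism between Bayesian updating and replicator dynamics that this abstract mentions can be shown in a few lines. The following is an illustrative toy (my own numbers, not code from the paper): if each hypothesis is read as a replicator type and its likelihood as its fitness, one step of discrete replicator dynamics coincides exactly with Bayes' rule.

    ```python
    # Toy sketch (assumed illustration, not from Czégel et al.):
    # one discrete replicator step with fitness = likelihood is Bayes' rule.
    priors = [0.5, 0.3, 0.2]        # population shares / prior P(h)
    likelihood = [0.9, 0.5, 0.1]    # fitness f_i / likelihood P(d | h_i)

    # Replicator update: x_i' = x_i * f_i / (mean fitness)
    mean_fitness = sum(x * f for x, f in zip(priors, likelihood))
    replicator = [x * f / mean_fitness for x, f in zip(priors, likelihood)]

    # Bayesian update: P(h_i | d) = P(h_i) * P(d | h_i) / P(d)
    evidence = sum(p, := 0) if False else sum(p * l for p, l in zip(priors, likelihood))
    posterior = [p * l / evidence for p, l in zip(priors, likelihood)]

    # The two updates are term-by-term identical.
    assert all(abs(a - b) < 1e-12 for a, b in zip(replicator, posterior))
    print(posterior)
    ```

    The mean fitness plays the role of the marginal likelihood P(d), which is what makes the two normalizations agree.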
  16. Conditional Learning Through Causal Models.Jonathan Vandenburgh - 2020 - Synthese (1-2):2415-2437.
    Conditional learning, where agents learn a conditional sentence ‘If A, then B,’ is difficult to incorporate into existing Bayesian models of learning. This is because conditional learning is not uniform: in some cases, learning a conditional requires decreasing the probability of the antecedent, while in other cases, the antecedent probability stays constant or increases. I argue that how one learns a conditional depends on the causal structure relating the antecedent and the consequent, leading to a (...)
    7 citations
  17. The Structure and Dynamics of Scientific Theories: A Hierarchical Bayesian Perspective.Leah Henderson, Noah D. Goodman, Joshua B. Tenenbaum & James F. Woodward - 2010 - Philosophy of Science 77 (2):172-200.
    Hierarchical Bayesian models (HBMs) provide an account of Bayesian inference in a hierarchically structured hypothesis space. Scientific theories are plausibly regarded as organized into hierarchies in many cases, with higher levels sometimes called ‘paradigms’ and lower levels encoding more specific or concrete hypotheses. Therefore, HBMs provide a useful model for scientific theory change, showing how higher‐level theory change may be driven by the impact of evidence on lower levels. HBMs capture features described in the Kuhnian tradition, (...)
    37 citations
  18. Bayesian theories of conditioning in a changing world.Aaron C. Courville, Nathaniel D. Daw & David S. Touretzky - 2006 - Trends in Cognitive Sciences 10 (7):294-300.
  19. Confidence biases and learning among intuitive Bayesians.Louis Lévy-Garboua, Muniza Askari & Marco Gazel - 2018 - Theory and Decision 84 (3):453-482.
    We design a double-or-quits game to compare the speed of learning one’s specific ability with the speed of rising confidence as the task gets increasingly difficult. We find that people on average learn to be overconfident faster than they learn their true ability and we present an intuitive-Bayesian model of confidence which integrates confidence biases and learning. Uncertainty about one’s true ability to perform a task in isolation can be responsible for large and stable confidence biases, namely (...)
  20. A Unified Account of General Learning Mechanisms and Theory‐of‐Mind Development.Theodore Bach - 2014 - Mind and Language 29 (3):351-381.
    Modularity theorists have challenged that there are, or could be, general learning mechanisms that explain theory-of-mind development. In response, supporters of the ‘scientific theory-theory’ account of theory-of-mind development have appealed to children's use of auxiliary hypotheses and probabilistic causal modeling. This article argues that these general learning mechanisms are not sufficient to meet the modularist's challenge. The article then explores an alternative domain-general learning mechanism by proposing that children grasp the concept belief through (...)
    4 citations
  21. Learning as Hypothesis Testing: Learning Conditional and Probabilistic Information.Jonathan Vandenburgh - manuscript
    Complex constraints like conditionals ('If A, then B') and probabilistic constraints ('The probability that A is p') pose problems for Bayesian theories of learning. Since these propositions do not express constraints on outcomes, agents cannot simply conditionalize on the new information. Furthermore, a natural extension of conditionalization, relative information minimization, leads to many counterintuitive predictions, evidenced by the sundowners problem and the Judy Benjamin problem. Building on the notion of a `paradigm shift' and empirical research in psychology and (...)
  22. Learning Concepts: A Learning-Theoretic Solution to the Complex-First Paradox.Nina Laura Poth & Peter Brössel - 2020 - Philosophy of Science 87 (1):135-151.
    Children acquire complex concepts like DOG earlier than simple concepts like BROWN, even though our best neuroscientific theories suggest that learning the former is harder than learning the latter and, thus, should take more time (Werning 2010). This is the Complex- First Paradox. We present a novel solution to the Complex-First Paradox. Our solution builds on a generalization of Xu and Tenenbaum’s (2007) Bayesian model of word learning. By focusing on a rational theory of concept (...)
    3 citations
  23. Bayesian Argumentation and the Value of Logical Validity.Benjamin Eva & Stephan Hartmann - unknown
    According to the Bayesian paradigm in the psychology of reasoning, the norms by which everyday human cognition is best evaluated are probabilistic rather than logical in character. Recently, the Bayesian paradigm has been applied to the domain of argumentation, where the fundamental norms are traditionally assumed to be logical. Here, we present a major generalisation of extant Bayesian approaches to argumentation that (i)utilizes a new class of Bayesian learning methods that are better suited to modelling (...)
    25 citations
  24. Artificial Grammar Learning Capabilities in an Abstract Visual Task Match Requirements for Linguistic Syntax.Gesche Westphal-Fitch, Beatrice Giustolisi, Carlo Cecchetto, Jordan S. Martin & W. Tecumseh Fitch - 2018 - Frontiers in Psychology 9:387357.
    Whether pattern-parsing mechanisms are specific to language or apply across multiple cognitive domains remains unresolved. Formal language theory provides a mathematical framework for classifying pattern-generating rule sets (or “grammars”) according to complexity. This framework applies to patterns at any level of complexity, stretching from simple sequences, to highly complex tree-like or net-like structures, to any Turing-computable set of strings. Here, we explored human pattern-processing capabilities in the visual domain by generating abstract visual sequences made up of abstract tiles differing (...)
    2 citations
  25. Majority Rule, Rights, Utilitarianism, and Bayesian Group Decision Theory: Philosophical Essays in Decision-Theoretic Aggregation.Mathias Risse - 2000 - Dissertation, Princeton University
    My dissertation focuses on problems that arise when a group makes decisions that are in reasonable ways connected to the beliefs and values of the group members. These situations are represented by models of decision-theoretic aggregation: Suppose a model of individual rationality in decision-making applies to each of a group of agents. Suppose this model also applies to the group as a whole, and that this group model is aggregated from the individual models. Two questions arise. First, what sets of (...)
     
  26. Logical ignorance and logical learning.Richard Pettigrew - 2020 - Synthese 198 (10):9991-10020.
    According to certain normative theories in epistemology, rationality requires us to be logically omniscient. Yet this prescription clashes with our ordinary judgments of rationality. How should we resolve this tension? In this paper, I focus particularly on the logical omniscience requirement in Bayesian epistemology. Building on a key insight by Hacking (1967: 311–325), I develop a version of Bayesianism that permits logical ignorance. This includes: an account of the synchronic norms that govern a logically ignorant individual at any given (...)
    17 citations
  27. Decision Theory with a Human Face.Richard Bradley - 2017 - Cambridge University Press.
    When making decisions, people naturally face uncertainty about the potential consequences of their actions due in part to limits in their capacity to represent, evaluate or deliberate. Nonetheless, they aim to make the best decisions possible. In Decision Theory with a Human Face, Richard Bradley develops new theories of agency and rational decision-making, offering guidance on how 'real' agents who are aware of their bounds should represent the uncertainty they face, how they should revise their opinions as a result (...)
    86 citations
  28. Deep learning technology of Internet of Things Blockchain in distribution network faults.Chuncheng Shi, Rui Li & Hong Zhang - 2022 - Journal of Intelligent Systems 31 (1):965-978.
    Nowadays, the development of human society and daily life are inseparable from the power supply. Therefore, people also put forward higher requirements for the reliability of distribution network, but power companies can only passively deal with distribution network failures, which is a bottleneck for the improvement of distribution network reliability. The Internet of Things is the best solution for online equipment status monitoring and basic data sharing for large, widely distributed, relatively fixed, and large numbers of equipment. The construction of (...)
  29. Cognitive Biases, Linguistic Universals, and Constraint‐Based Grammar Learning.Jennifer Culbertson, Paul Smolensky & Colin Wilson - 2013 - Topics in Cognitive Science 5 (3):392-424.
    According to classical arguments, language learning is both facilitated and constrained by cognitive biases. These biases are reflected in linguistic typology—the distribution of linguistic patterns across the world's languages—and can be probed with artificial grammar experiments on child and adult learners. Beginning with a widely successful approach to typology (Optimality Theory), and adapting techniques from computational approaches to statistical learning, we develop a Bayesian model of cognitive biases and show that it accounts for the detailed pattern (...)
    11 citations
  30. A Pragmatic Bayesian Platform for Automating Scientific Induction.Kevin B. Korb - 1992 - Dissertation, Indiana University
    This work provides a conceptual foundation for a Bayesian approach to artificial inference and learning. I argue that Bayesian confirmation theory provides a general normative theory of inductive learning and therefore should have a role in any artificially intelligent system that is to learn inductively about its world. I modify the usual Bayesian theory in three ways directly pertinent to an eventual research program in artificial intelligence. First, I construe Bayesian inference (...)
     
    1 citation
  31. The Probabilistic Foundations of Rational Learning.Simon M. Huttegger - 2017 - Cambridge University Press.
    According to Bayesian epistemology, rational learning from experience is consistent learning, that is learning should incorporate new information consistently into one's old system of beliefs. Simon M. Huttegger argues that this core idea can be transferred to situations where the learner's informational inputs are much more limited than Bayesianism assumes, thereby significantly expanding the reach of a Bayesian type of epistemology. What results from this is a unified account of probabilistic learning in the tradition (...)
    12 citations
  32. Generalization, similarity, and bayesian inference.Joshua B. Tenenbaum & Thomas L. Griffiths - 2001 - Behavioral and Brain Sciences 24 (4):629-640.
    Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a (...)
    118 citations
  33. Wisdom of the Crowds vs. Groupthink: Learning in Groups and in Isolation.Conor Mayo-Wilson, Kevin Zollman & David Danks - 2013 - International Journal of Game Theory 42 (3):695-723.
    We evaluate the asymptotic performance of boundedly-rational strategies in multi-armed bandit problems, where performance is measured in terms of the tendency (in the limit) to play optimal actions in either (i) isolation or (ii) networks of other learners. We show that, for many strategies commonly employed in economics, psychology, and machine learning, performance in isolation and performance in networks are essentially unrelated. Our results suggest that the appropriateness of various, common boundedly-rational strategies depends crucially upon the social context (if (...)
    13 citations
  34. Bayesian Surprise Predicts Human Event Segmentation in Story Listening.Manoj Kumar, Ariel Goldstein, Sebastian Michelmann, Jeffrey M. Zacks, Uri Hasson & Kenneth A. Norman - 2023 - Cognitive Science 47 (10):e13343.
    Event segmentation theory posits that people segment continuous experience into discrete events and that event boundaries occur when there are large transient increases in prediction error. Here, we set out to test this theory in the context of story listening, by using a deep learning language model (GPT‐2) to compute the predicted probability distribution of the next word, at each point in the story. For three stories, we used the probability distributions generated by GPT‐2 to compute the (...)
  35. Too Many Cooks: Bayesian Inference for Coordinating Multi‐Agent Collaboration.Sarah A. Wu, Rose E. Wang, James A. Evans, Joshua B. Tenenbaum, David C. Parkes & Max Kleiman-Weiner - 2021 - Topics in Cognitive Science 13 (2):414-432.
    Collaboration requires agents to coordinate their behavior on the fly, sometimes cooperating to solve a single task together and other times dividing it up into sub‐tasks to work on in parallel. Underlying the human ability to collaborate is theory‐of‐mind (ToM), the ability to infer the hidden mental states that drive others to act. Here, we develop Bayesian Delegation, a decentralized multi‐agent learning mechanism with these abilities. Bayesian Delegation enables agents to rapidly infer the hidden intentions of (...)
    4 citations
  36. Learning from others: conditioning versus averaging.Richard Bradley - 2017 - Theory and Decision 85 (1):5-20.
    How should we revise our beliefs in response to the expressed probabilistic opinions of experts on some proposition when these experts are in disagreement? In this paper I examine the suggestion that in such circumstances we should adopt a linear average of the experts’ opinions and consider whether such a belief revision policy is compatible with Bayesian conditionalisation. By looking at situations in which full or partial deference to the expressed opinions of others is required by Bayesianism I show (...)
    14 citations
  37. Sticking to the Evidence? A Behavioral and Computational Case Study of Micro‐Theory Change in the Domain of Magnetism.Elizabeth Bonawitz, Tomer D. Ullman, Sophie Bridgers, Alison Gopnik & Joshua B. Tenenbaum - 2019 - Cognitive Science 43 (8):e12765.
    Constructing an intuitive theory from data confronts learners with a “chicken‐and‐egg” problem: The laws can only be expressed in terms of the theory's core concepts, but these concepts are only meaningful in terms of the role they play in the theory's laws; how can a learner discover appropriate concepts and laws simultaneously, knowing neither to begin with? We explore how children can solve this chicken‐and‐egg problem in the domain of magnetism, drawing on perspectives from computational modeling and (...)
    2 citations
  38. Predictive Minds Can Be Humean Minds.Frederik T. Junker, Jelle Bruineberg & Thor Grünbaum - forthcoming - British Journal for the Philosophy of Science.
    The predictive processing literature contains at least two different versions of the framework with different theoretical resources at their disposal. One version appeals to so-called optimistic priors to explain agents’ motivation to act (call this optimistic predictive processing). A more recent version appeals to expected free energy minimization to explain how agents can decide between different action policies (call this preference predictive processing). The difference between the two versions has not been properly appreciated, and they are not sufficiently separated in (...)
  39. Rawlsian “Justice” and the Evolutionary Theory of Games: Cultural Evolution and the Origins of the Natural Maximin Rule.Mantas Radžvilas - 2011 - Problemos 80:35-53.
    This paper is dedicated to the analysis of the maximin principle, which is one of the key theoretical concepts of John Rawls’s theory of justice, and the problem that this principle creates for any attempt to provide a naturalistic interpretation of Rawls’s concept of fairness . Analysis shows that maximin principle is, in fact, incompatible with the Bayesian decision theory. This paper is intended to show that recent breakthroughs in evolutionary game theory could help to reconcile (...)
     
  40. Coherentism, reliability and bayesian networks.Luc Bovens & Erik J. Olsson - 2000 - Mind 109 (436):685-719.
    The coherentist theory of justification provides a response to the sceptical challenge: even though the independent processes by which we gather information about the world may be of dubious quality, the internal coherence of the information provides the justification for our empirical beliefs. This central canon of the coherence theory of justification is tested within the framework of Bayesian networks, which is a theory of probabilistic reasoning in artificial intelligence. We interpret the independence of the information (...)
    93 citations
  41. Bayesian chance.William Harper, Sheldon J. Chow & Gemma Murray - 2012 - Synthese 186 (2):447-474.
    This paper explores how the Bayesian program benefits from allowing for objective chance as well as subjective degree of belief. It applies David Lewis’s Principal Principle and David Christensen’s principle of informed preference to defend Howard Raiffa’s appeal to preferences between reference lotteries and scaling lotteries to represent degrees of belief. It goes on to outline the role of objective lotteries in an application of rationality axioms equivalent to the existence of a utility assignment to represent preferences in Savage’s (...)
    1 citation
  42.  8
    A Hierarchical Bayesian Model of Adaptive Teaching.Alicia M. Chen, Andrew Palacci, Natalia Vélez, Robert D. Hawkins & Samuel J. Gershman - 2024 - Cognitive Science 48 (7):e13477.
    How do teachers learn about what learners already know? How do learners aid teachers by providing them with information about their background knowledge and what they find confusing? We formalize this collaborative reasoning process using a hierarchical Bayesian model of pedagogy. We then evaluate this model in two online behavioral experiments (N = 312 adults). In Experiment 1, we show that teachers select examples that account for learners' background knowledge, and adjust their examples based on learners' feedback. In Experiment (...)
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark  
  43.  96
    Rational Learners and Moral Rules.Shaun Nichols, Shikhar Kumar, Theresa Lopez, Alisabeth Ayars & Hoi-Yee Chan - 2016 - Mind and Language 31 (5):530-554.
    People draw subtle distinctions in the normative domain. But it remains unclear exactly what gives rise to such distinctions. On one prominent approach, emotion systems trigger non-utilitarian judgments. The main alternative, inspired by Chomskyan linguistics, suggests that moral distinctions derive from an innate moral grammar. In this article, we draw on Bayesian learning theory to develop a rational learning account. We argue that the ‘size principle’, which is implicated in word learning, can also explain how (...)
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark   23 citations  
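    The “size principle” invoked in this abstract has a standard formalization in Bayesian concept learning: under strong sampling, the likelihood of n examples all consistent with hypothesis h is proportional to (1/|h|)^n, so smaller, more specific hypotheses gain support rapidly. A minimal sketch, with a made-up hypothesis space (the names `posterior`, `hyps`, and the example items are illustrative, not from the paper):

```python
# Illustrative sketch of the size principle in Bayesian concept learning.
# Assumes a uniform prior over hypotheses and "strong sampling": examples
# are drawn uniformly from the true concept's extension, so each consistent
# example multiplies the likelihood by 1/|h|.

def posterior(hypotheses, examples):
    """hypotheses: dict name -> set of items (the hypothesis extension).
    Returns the normalized posterior over hypothesis names."""
    n = len(examples)
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in examples):
            scores[name] = (1.0 / len(extension)) ** n  # size principle
        else:
            scores[name] = 0.0  # inconsistent hypotheses are ruled out
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

hyps = {
    "dalmatians": {"fido", "rex"},             # small, specific hypothesis
    "dogs": {"fido", "rex", "spot", "lassie"}  # larger, more general one
}
print(posterior(hyps, ["fido", "rex"]))
```

After two examples consistent with both hypotheses, the smaller hypothesis already dominates (0.25 vs. 0.0625 before normalization), which is the rapid tightening of generalization the size principle is meant to explain.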
  44.  36
    Probabilistic models as theories of children's minds.Alison Gopnik - 2011 - Behavioral and Brain Sciences 34 (4):200-201.
    My research program proposes that children have representations and learning mechanisms that can be characterized as causal models of the world (...) “Bayesian Fundamentalism.”
    Direct download (4 more)  
     
    Export citation  
     
    Bookmark   3 citations  
  45.  36
    The psychology of dynamic probability judgment: order effect, normative theories, and experimental methodology.Jean Baratgin & Guy Politzer - 2007 - Mind and Society 6 (1):53-66.
    The Bayesian model is used in psychology as the reference for the study of dynamic probability judgment. The main limit induced by this model is that it confines the study of the revision of degrees of belief to situations in which the universe is static (revising situations). However, it may happen that individuals have to revise their degrees of belief when the message they learn specifies a change of direction in the universe, which is considered as (...)
    Direct download (5 more)  
     
    Export citation  
     
    Bookmark   10 citations  
  46. At the Core of Our Capacity to Act for a Reason: The Affective System and Evaluative Model-Based Learning and Control.Peter Railton - 2017 - Emotion Review 9 (4):335-342.
    Recent decades have witnessed a sea change in thinking about emotion, which has gone from being seen as a disruptive force in human thought and action to being seen as an important source of situation- and goal-relevant information and evaluation, continuous with perception and cognition. Here I argue on philosophical and empirical grounds that the role of emotion in contributing to our ability to respond to reasons for action runs deeper still: The affective system is at the core of the (...)
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark   17 citations  
  47.  25
    Five Ways in Which Computational Modeling Can Help Advance Cognitive Science: Lessons From Artificial Grammar Learning.Willem Zuidema, Robert M. French, Raquel G. Alhama, Kevin Ellis, Timothy J. O'Donnell, Tim Sainburg & Timothy Q. Gentner - 2020 - Topics in Cognitive Science 12 (3):925-941.
    Zuidema et al. illustrate how empirical AGL studies can benefit from computational models and techniques. Computational models can help clarify theories, and thus delineate research questions, but also facilitate experimental design, stimulus generation, and data analysis. The authors show, with a series of examples, how computational modeling can be integrated with empirical AGL approaches, and how model selection techniques can indicate the most likely model to explain experimental outcomes.
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark   1 citation  
  48.  8
    Critique on the Formal Validity and Pedagogical-Epistemological Implication of Bayesian Model for “Pedagogical Inference”. 은은숙 - 2021 - Journal of the New Korean Philosophical Association 105:181-204.
    This study critically examines the formal validity of the “Bayesian model for pedagogical inference” and the pedagogical and epistemological implications this model carries. According to Bayesian learning theorists, to best achieve pedagogical goals the teacher must select “data” (d) that maximize the learner’s belief in the “correct hypothesis” (h). In other words, the teacher must provide the student with examples that bring the hypothesis (concept) the student is entertaining as close as possible to the very hypothesis (concept) the teacher is aiming at. To this end, the “distribution of data” produced by the teacher (p_teacher(d|h)) should be evenly distributed around the data that maximize the “learner’s posterior belief in the hypothesis h” (p_learner(h|d)). (...)
    No categories
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark  
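    The coupled equations this abstract discusses — the teacher samples data in proportion to the learner’s posterior, while the learner’s posterior treats the teacher’s choice as the likelihood — define a fixed point that can be computed by iteration. The sketch below is an illustrative implementation under simplifying assumptions (finite hypothesis and data sets, a 0/1 consistency relation, uniform initialization); the names `pedagogical_fixed_point`, `prior`, and `consistent` are our own, not the paper’s:

```python
# Illustrative sketch of the pedagogical-inference fixed point:
#   p_teacher(d|h) ∝ p_learner(h|d)
#   p_learner(h|d) ∝ p_teacher(d|h) * p(h)
# Assumes each hypothesis is consistent with at least one data point and
# retains positive learner posterior there, so normalizers stay nonzero.

def pedagogical_fixed_point(prior, consistent, iters=100):
    """prior: dict hypothesis -> prior probability.
    consistent: dict hypothesis -> list of 0/1 flags over data points.
    Returns the teacher's distribution p(d|h) as dict -> list."""
    hyps = list(prior)
    n_d = len(next(iter(consistent.values())))
    # Start from a teacher uniform over each hypothesis's consistent data.
    teacher = {}
    for h in hyps:
        row = [float(c) for c in consistent[h]]
        s = sum(row)
        teacher[h] = [r / s for r in row]
    for _ in range(iters):
        # Learner's posterior p(h|d) ∝ p_teacher(d|h) * p(h).
        learner = {}
        for d in range(n_d):
            z = sum(teacher[h][d] * prior[h] for h in hyps)
            for h in hyps:
                learner[(h, d)] = teacher[h][d] * prior[h] / z if z else 0.0
        # Teacher's choice p(d|h) ∝ p_learner(h|d), over consistent data.
        for h in hyps:
            row = [learner[(h, d)] * consistent[h][d] for d in range(n_d)]
            s = sum(row)
            teacher[h] = [r / s for r in row]
    return teacher

prior = {"rectangle": 0.5, "square": 0.5}
consistent = {"rectangle": [1, 1, 1], "square": [1, 1, 0]}
teacher = pedagogical_fixed_point(prior, consistent)
# The distinguishing example (index 2, consistent only with "rectangle")
# ends up carrying most of the teacher's probability mass.
print(teacher["rectangle"])
```

The qualitative behavior matches the pedagogical intuition under discussion: at the fixed point the teacher overweights the example that best discriminates the target hypothesis, rather than sampling its consistent data uniformly.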
  49.  15
    (1 other version)Innateness and (bayesian) visual perception: Reconciling nativism and development.Brian J. Scholl - 2005 - In Peter Carruthers, Stephen Laurence & Stephen P. Stich (eds.), The Innate Mind: Structure and Contents. New York, US: Oxford University Press on Demand. pp. 34.
    This chapter explores a way in which visual processing may involve innate constraints and attempts to show how such processing overcomes one enduring challenge to nativism. In particular, many challenges to nativist theories in other areas of cognitive psychology have focused on the later development of such abilities, and have argued that such development is in conflict with innate origins. Innateness, in these contexts, is seen as antidevelopmental, associated instead with static processes and principles. In contrast, certain perceptual models demonstrate (...)
    Direct download (6 more)  
     
    Export citation  
     
    Bookmark   7 citations  
  50. A dynamic interaction between machine learning and the philosophy of science.Jon Williamson - 2004 - Minds and Machines 14 (4):539-549.
    The relationship between machine learning and the philosophy of science can be classed as a dynamic interaction: a mutually beneficial connection between two autonomous fields that changes direction over time. I discuss the nature of this interaction and give a case study highlighting interactions between research on Bayesian networks in machine learning and research on causality and probability in the philosophy of science.
    Direct download (7 more)  
     
    Export citation  
     
    Bookmark   4 citations  
1 — 50 / 967