Connell, Louise and Lynott, Dermot (2024) What Can Language Models Tell Us About Human Cognition? Current Directions in Psychological Science, 33 (3). pp. 181-189. ISSN 0963-7214
Abstract
Language models are a rapidly developing field of artificial intelligence with enormous potential to improve our understanding of human cognition. However, many popular language models are cognitively implausible on multiple fronts. For language models to offer plausible insights into human cognitive processing, they should implement a transparent and cognitively plausible learning mechanism, train on a quantity of text that is achievable in a human’s lifetime of language exposure, and not be assumed to represent all of word meaning. When care is taken to create plausible language models within these constraints, they can be a powerful tool in uncovering the nature and scope of how language shapes semantic knowledge. The distributional relationships between words, which humans represent in memory as linguistic distributional knowledge, allow people to represent and process semantic information flexibly, robustly, and efficiently.
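The article itself contains no code; purely as an illustrative sketch of the abstract's core idea, the toy Python example below shows one simple way that "distributional relationships between words" can be derived from text: words are characterised by the contexts they occur in, and words that share contexts come out as more similar. The corpus, window size, and cosine measure are all assumptions for illustration, not the authors' model.

```python
# Illustrative sketch only (not from the article): a toy count-based
# distributional model in which a word's linguistic distributional
# knowledge is approximated by its co-occurrence profile.
from collections import Counter
from math import sqrt

corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the mouse ate the cheese",
]

window = 2  # hypothetical context-window size
cooc = {}   # word -> Counter of co-occurring context words

for sentence in corpus:
    tokens = sentence.split()
    for i, word in enumerate(tokens):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        cooc.setdefault(word, Counter()).update(context)

def cosine(a, b):
    """Cosine similarity between two co-occurrence profiles."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Words that appear in similar linguistic contexts receive higher similarity.
print(cosine(cooc["cat"], cooc["dog"]))
print(cosine(cooc["cat"], cooc["cheese"]))
```

This kind of transparent, small-scale model is in the spirit of the constraints the abstract describes: the learning mechanism is fully inspectable and the training text is trivially within a human's lifetime of language exposure, though a realistic model would of course use a far larger corpus.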
Item Type: Article
Keywords: language models; linguistic distributional knowledge; semantics; cognitive plausibility
Academic Unit: Faculty of Science and Engineering > Psychology
Item ID: 19109
Identification Number: https://doi.org/10.1177/09637214241242746
Depositing User: Louise Connell
Date Deposited: 29 Oct 2024 11:38
Journal or Publication Title: Current Directions in Psychological Science
Publisher: Sage
Refereed: Yes
URI:
Use Licence: This item is available under a Creative Commons Attribution Non Commercial Share Alike Licence (CC BY-NC-SA).