What Is It Like to Be a Bot?: The world according to GPT-4
The recent explosion of Large Language Models (LLMs) has provoked lively debate about “emergent” properties of the models, mainly concerning their purported “sparks” of General Intelligence. Here, I examine another potentially emergent capacity, namely, consciousness. Using OpenAI’s GPT-4 as exemplar and interlocutor, I argue that the blanket dismissal of LLM sentience is unwarranted and is undermined by a three-way analogy among bats, humans, and GPT-4. Drawing on philosophical phenomenology and cognitive ethology, I examine the pattern of errors made by GPT-4 and propose that they originate in the absence of any subjective awareness of time. This deficit suggests that GPT-4 ultimately lacks the capacity to construct a stable perceptual world: the temporal vacuum undermines any ability to maintain a consistent, continuously updated model of its environment. Accordingly, none of GPT-4’s statements are epistemically secure. Because the anthropomorphic illusion is so strong, I conclude by suggesting that GPT-4 works with its users to construct improvised works of fiction.