19 Comments
Jun 20, 2022 · Liked by Justin Smith-Ruiu

Fascinating stuff. Not much to add at this moment, but just wanted to say, this 'stack is among the best reads on the internet and has been a source of enjoyment and curiosity since I discovered it in spring 2021. Thank you.

Jun 19, 2022 · Liked by Justin Smith-Ruiu

Hi Justin. I think you're going a bit fast with the etymology-history-inherited meaning of "conscious"? There's a real problem with the history and meaning of conscius, conscientia; also Greek συνείδησις will be involved. It's not at all obvious how this came to be so common and so loaded a word, and the word may well mask a lot of confusion, a lot of different things being lumped under the same label "consciousness."

I think with both the Greek and the Latin words the original application is to knowing something *along with someone else*. There's often a legal use: a conscius is an eyewitness but especially an accomplice, someone who had inside knowledge, especially if they then testify against the criminal. When conscientia/συνείδησις then get applied to something you do by yourself, it is likely to be a metaphorical extension of that, both where it's what we would call moral conscience (something inside you which may testify against you) and in other cases.

There has certainly been scholarly work on this, but I'm not sure whether anyone has sorted out the whole history--perhaps another reader will know a good reference. Συνείδησις is important in St. Paul, and so there will be work by NT scholars, some of whom will have tried to sort out the historical background. Philo of Alexandria also uses συνείδησις, and the participle συνειδός. Among Latin writers Cicero and especially Seneca use conscientia. I think these uses tend to conform to the pattern I suggested, but there is one fragment of Chrysippus, SVF III,178 (from Diogenes Laertius VII,85), that speaks of our συνείδησις, maybe something like awareness, of our psychosomatic constitution; perhaps this connects to the modern use in the sense of "consciousness" rather than "conscience." Other writers use συναίσθησις in what may be the same sense, so this is another word whose history would be worth exploring. In συναίσθησις, and in συνείδησις in the Chrysippus fragment rather than in the "accomplice, state's witness" use, the sense of the συν- ("with, together") is not obvious to me--it would be worth finding out.

Jun 20, 2022 · Liked by Justin Smith-Ruiu

Perhaps it wouldn't go amiss to point out that machine learning models of language are basically correlation machines, connecting inputs and outputs on the basis of vast amounts of data. In that sense alone they are not "doing" natural language at all (neither in terms of having 'knowledge of language', à la Chomsky, nor in terms of engaging in conversations), let alone exhibiting consciousness, sentience, theory-of-mind abilities, or any of the other typical features of human cognition.

You mention Fodor in the piece, who also wrote a fair amount against connectionist models of cognition (basically, machine learning models). One important point he used to make (along with Pylyshyn, who in fact wrote about it more often than Fodor did) is that such models can only aspire to be 'weakly equivalent' to human cognition: that is, from the same input they may converge on the same output, though in practice they don't actually do that either. But weak equivalence isn't really an appropriate theory of human cognition, nor is it even a good model of it (as in 'computer modelling'). What is required is 'strong equivalence', a model that does human cognition in roughly the same way that humans do, and one suspects that sentience and consciousness necessitate such an underlying substrate to begin with.

If there is any lesson to be drawn from the study of cognitive science, I think it is that human cognition involves richly structured mental systems of various kinds, and the language faculty is the best understood of them at this stage.
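To make the "correlation machine" point concrete, here is a minimal sketch, with the caveat that it describes no real system: a toy Python bigram model whose corpus, names, and sampling choices are purely illustrative. It strings words together from nothing but co-occurrence counts, which is the sense in which such models connect inputs to outputs with no knowledge of language behind them.

```python
from collections import Counter, defaultdict
import random

# Illustrative toy corpus; real language models are trained on billions of tokens.
corpus = "the dog chased the cat and the cat chased the mouse".split()

# Record how often each word follows each other word (pure co-occurrence counting).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=6):
    """Emit words by sampling continuations in proportion to how often they co-occurred."""
    word, out = start, [start]
    for _ in range(length):
        counts = following.get(word)
        if not counts:
            break
        word = random.choices(list(counts), weights=list(counts.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # fluent-looking output, but nothing behind it except counted correlations
```

A modern large language model is of course a vastly larger neural network rather than a lookup table of counts, but the objection stands: scaling up the correlations may improve the outputs without ever supplying the 'strong equivalence' Fodor and Pylyshyn were asking for.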

Jun 19, 2022 · Liked by Justin Smith-Ruiu

Thanks, Justin.

This may well be tied up with the capacity of the machine to model other minds.

I fondly recall that my 1970 undergraduate AI course textbook, composed of the early papers of AI (working on chess- and checkers-playing and machine vision), closed with an essay by Marvin Minsky (?) in which he suggested that machines must model not just other minds but others in the world. Based on that nugget of thought, I have long held the opinion that pattern recognition may be necessary, but is certainly not sufficient, to claim what is now called AGI.

Just a quick fan note. Glad to hear that apparently a lot of people appreciate your deeply humane, therapeutically stimulating work, as I do. Cheers!

There is a push by large corporations that create software code to claim that it is now "conscious" and, therefore, they are not liable for what it does. The code, they say, is "self-training" and out of their control.

Let's pretend that's true for a moment. The fact is this: I have 2 dogs. They are clearly conscious beings, especially when I am eating steak and they are eating kibble. If one of them bites someone else, I am liable for its actions even though I trained them not to bite.

We can't let companies off the hook when the software code that they developed "trains itself" to bite, or to plough a car into a group of people, or to set itself on fire and burn a house down. When we allow ourselves to play with the notion of machines having "consciousness," we're moving in the direction of letting the corporations off the hook.

I think those who are loudest about their fear of superintelligent AIs ruining everything share the same flaw: they tend to be in the "rationalist" community, and therefore dwell somewhere on the severe side of the autism spectrum. They have a massive handicap when it comes to comprehending the necessity of embodied experience and in-person social interaction, and so it naturally finds little place in their thinking about AI.

Your thoughts on the social mind reminded me of Tomasello. If we, and not just our minds, are by nature (or fundamentally) social and relational beings, then will artificial intelligence always be artificial?

I second Roy's "My Octopus Teacher" rec. When I watched it recently I had your excellent essay banging around my head a lot of the time. What *seems* to be happening in the film is that a completely solitary creature, used to doing nothing but eat and avoid being eaten, develops an attitude towards a particular human that progresses from fear through curiosity to outright affection. It's easy to anthropomorphise in these situations of course, but it's remarkable in itself that the octopus recognises the filmmaker every time it sees him and remembers he's not a threat. And it certainly *looks* affectionate when it's stroking a guy and then sitting on his stomach like a house cat...

As you say yourself, how is it possible that creatures with an octopus' (undeniable) intelligence and (sure-seems-that-way) affective potential don't ever bother to socialise? If they're capable of enjoying hanging out for hanging out's sake, why do they never hang out with each other? And is it really possible that an entire species' latent potential for sentience/consciousness can be triggered within the lifespan of a single one of its members, by no more sophisticated means than another intelligent creature visiting it every day?

Might this be analogous to the human potential for language, in that if a human child were raised by wolves they'd still carry the software for understanding words, which could be activated at a later point in their life via consistent contact with other people?

P.S. You've convinced me that AI sentience is most likely a parlour trick for now, and will probably remain so until we can get bots to (1) exist within an embodied environment, (2) be meaningfully aware of it, and (3) want things from it. Thank you.

"Pain is bad intrinsically, for the utilitarians, even if it is only a flash of experience in a being that has barely any episodic memory or any ability at all to regret the curtailment of its future thriving."

But the fact is that pain is usually not just a "flash" of experience. Animals (all vertebrates, at least, but almost certainly some invertebrates as well) remember unpleasant experiences, and learn to anticipate them. It's ridiculously easy to traumatize a dog. If a dog experiences something painful or scary – or just something really "weird" – in some particular place or at the hands of some particular human, the dog is quite likely going to be terrified of that place or that human for the rest of its life. And not just that human or that place, but all humans and places that remind the dog of that unpleasant experience.

The unpleasant experience doesn't just "disappear" once the initial shock is over. The experience has turned the world into a scarier place, in some deep and permanent way. Even in humans, post-traumatic stress doesn't necessarily have much to do with any abstract or integrated sense of self. It is "animal" fear.

And when it comes to utilitarian arguments for animal rights – even if we accept the (probably mistaken) assumption that an animal, say a chicken, exists in a perpetual state of amnesia, and its subjective world is just a series of "flashes" of experience, the truth is that for a chicken living in a cage on a factory farm, nearly all of those flashes of experience are going to be more or less excruciating. So regardless of its capacity to remember or anticipate pain or form an integrated sense of self, the bird is in hell.

In any case, I really love your thinking and writing. I'm truly glad I found this place.

Interesting reading this. I was just reading Jonathan Israel's very emotional account of Spinoza's three levels of "knowing":

1. Affective input (what you call "sentient").
2. The rational ordering of this sensory first-level input (Reason doing its reasoning about the sentient "knowing").
3. Reason knowing about itself: what it knows, how it knows, and "why" it knows, and probably also the big "so what."

The clearer the third level becomes, the more the mind bursts with blessedness, because it is so enraptured by the love (and ecstasy) of virtue that it has no appetite to spare for anything else. Venturing to use this sublime model, I would wager that humans are still mostly struggling with the second level of reason, trying to be reasonable about our chaos of sensory inputs. We invented machines to help us manage the chaos. The kind of AI imagined to be capable of integrating disembodied "consciousness" for self-awareness would assume that machines have become self-aware of their reasoning operations in the first place; that would put them somewhere between 2 and 3 on Spinoza's ladder... I'm highly skeptical that a man-made machine intelligence could surpass our own capacity for knowing and beat us to blessedness.

Besides the very strong "lacking sentient foundations" argument, I think we have a very stark case of Pygmalion deceiving himself. Artifice is "faking it"... and only in mythology and fairy tales does "making it" come true. Either way we argue, though, we're going to be guilty of colossal hubris...

So I agree with you: let’s keep watching.

Human/rat-brained cyborgs, on the other hand...

Very good arguments, but I'm wondering whether in fact you have a way to tell. That is, can we propose a reliable test for conscious intelligence? Otherwise, how will we ever know? The machines have in fact passed the Turing test, no?

So long as AI machines are reducible to digital orchestrations of 1s and 0s, there can be nothing resembling sentience, let alone consciousness, performative resemblances notwithstanding. To borrow from Giulio Tononi's Integrated Information Theory (IIT, and I hope I am not bowdlerizing too much, since it has been a while since I familiarized myself with the theory): the integration within the "wetware" of biological life-forms encompasses much wider swaths of information than the very limited, one-or-zero, step-by-step progressions of digital computing (even with the parallel processing of neural networks).

With the development of quantum computing, however, well, then maybe we can talk... For we (life forms) are ourselves quantum computers, or at least a growing body of research suggests as much. It is the quantum that makes the integration possible.

Is a company conscious but not sentient?

This may be sort of beside the point (but, like Justin, I certainly can't think so as I'm writing it), but isn't developing an AI that at least 51% of a voting populace considers "sentient" a losing proposition in the end? Kind of like the atomic bomb: once you develop it, and it's clear to everyone that you *have* developed it, you can't use it. There's a strong enough Rawlsian, rights-based political wing, at least in the West, that would very quickly pass laws saying that sentient AIs cannot be "compelled" to do work; and subsidiary to that, the question of *paying* them (if that even meant anything to them) would be almost equally out of the question, because then we'd be in the approximate area of child labor (insofar as developers are "parents" of the AI). It's unclear to me what strong AI could do so well that it would convince a majority of the politically effective class to ignore these uncomfortable dynamics. (With actual chattel slavery, the answer was that there was no technology that could replace human beings used as farm equipment on such an industrial scale.) I'm sure some foundation-endowed AI ethicist has addressed this, but it's not my wheelhouse and I haven't yet encountered rebuttals to the argument.

May I commend the Netflix documentary "My Octopus Teacher" (https://www.netflix.com/title/81045007)?

Thanks for your consistently valuable and engaging pieces.

Roy
