One of the more entertaining aspects of the current conversations around AI is how much it challenges what we've believed, perhaps for centuries, about the nature of thinking.
We've discovered that we don't currently possess objective and testable definitions for words like sentience and consciousness. For reference, here are a couple of current definitions...
Sentience - feeling or sensation as distinguished from perception and thought. (Merriam-Webster)
Consciousness - the state of understanding and realizing something. (Cambridge Dictionary)
Words like feeling, sensation, perception, thought, and realizing don't add much value and tend to make definitions circular, but if you distill them you end up with understanding as a component of each. "Understanding" is a word worth exploring.
Just for reference, here are some of Merriam-Webster's definitions of the word "understand"...
- to grasp the meaning of
- to grasp the reasonableness of
- to have thorough or technical acquaintance with or expertness in the practice of
- to be thoroughly familiar with the character and propensities of
Of course, these definitions don't really get you down the path of being objective and testable, but there are hints.
Leaping way ahead of the description of my metaphysics, I'll assert the following definition for understanding...
In the presence of an assertion made by a teacher, a learner can independently reproduce the assertion using a learner-maintained model.
Where...
Assertion - some observation about an attribute of one or more physical or informational components.
Teacher - the source and possibly an evaluator of an assertion.
Learner - the recipient of an assertion tasked with the objective of "understanding" it.
Model - an arrangement of informational components.
Here's the process...
- A teacher generates an assertion about some aspect of the universe. Note that "teacher" is a broad term that identifies the source of something to potentially be learned.
- The assertion is communicated to a learner. A learner is a mechanism that can create and modify a model.
- The learner takes the assertion and incorporates it into its model. Note that this may require a modification to the learner's model.
- The model may or may not be tested by the learner to ensure that it's consistent with other models that it maintains.
- The learner either inspects or executes the model with the intent of reproducing the teacher's assertion.
- The learner echoes the teacher's assertion back to the teacher, along with a collection of adjacent assertions generated by its model.
- The teacher compares the learner's assertions with the results of evaluating its models.
- The teacher declares either understanding or lack of understanding on the part of the learner.
The implication is that, having gone through this process, the model maintained by the learner can reproduce results acceptable to the teacher in similar circumstances, even in the teacher's absence.
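As a toy illustration, the teacher/learner loop above could be sketched in Python. The class names, the key-value "model," and the sample facts are my own invented stand-ins, not a formalization from the post:

```python
# Toy sketch of the teacher/learner "understanding" loop described above.
# The Teacher/Learner classes and the key-value "model" are illustrative
# assumptions, not a definitive formalization.

class Teacher:
    """Source and evaluator of assertions about some aspect of the universe."""
    def __init__(self, facts):
        self.facts = facts  # ground truth: attribute -> value

    def assert_fact(self, attribute):
        # Generate an assertion to communicate to a learner.
        return (attribute, self.facts[attribute])

    def evaluate(self, learner_assertions):
        # Declare understanding if every echoed assertion matches the facts.
        return all(self.facts.get(attr) == value
                   for attr, value in learner_assertions)

class Learner:
    """Recipient of assertions; maintains and inspects its own model."""
    def __init__(self):
        self.model = {}  # an arrangement of informational components

    def incorporate(self, assertion):
        # Incorporating an assertion may modify the existing model.
        attribute, value = assertion
        self.model[attribute] = value

    def reproduce(self):
        # Inspect the model to reproduce the assertions it now supports.
        return list(self.model.items())

# One pass through the process:
teacher = Teacher({"water boils at (C)": 100, "ice melts at (C)": 0})
learner = Learner()
for attr in teacher.facts:
    learner.incorporate(teacher.assert_fact(attr))
understood = teacher.evaluate(learner.reproduce())
print(understood)
```

The point of the sketch is only that "understanding" here is a testable loop: the learner's model, not the teacher, produces the assertions that the teacher then evaluates.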
A variation on this separates the assertion role of the teacher from the evaluation role of the teacher. This is basically the process of science, where the universe takes on the role of assertor and a peer group takes on the role of evaluator.
So what happens to "sentience" and "consciousness"? Sentience basically means the ability to incorporate externally supplied assertions into a testable model, with an emphasis on receiving the assertion. Consciousness is the same thing with an emphasis on modifying and/or internally evaluating the model.
The extension of this is that asserting that humans are the only sentient and conscious entities in the known universe is silly and ignorant. All of life exhibits these behaviors and characteristics in some fashion. This likely makes for a good definition of "life" in contrast to non-life.
Not all of the definitions of consciousness include "understanding" as a key aspect. That seems especially true for the ones that are trying to explore consciousness as a "hard problem", as opposed to the "easy problems" that do include things like understanding. Isn't it the harder aspects that seem most challenging to think about regarding AI and non-human life?
What are these harder aspects?
Mainly its subjective, “what it’s like” nature, and things related to that, such as the fundamentally difficult (perhaps impossible) problem of treating it scientifically when it is strictly subjective. See, for example: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness or https://plato.stanford.edu/entries/consciousness/#ProExpSuc