The AI Deep Dive
In a recent essay, “The Ultimate Labor-Saving Device,” I recounted my response to a student’s questions about artificial intelligence, related to my previous piece, “The Intellectual Welfare State.” A central theme of that earlier discussion was the likely, or perhaps inevitable, retarding effects of increasing AI dependency on human intellectual development. This time, prodded by the same thoughtful student’s subsequent correspondence, we cut a little closer to the ultimate meaning and significance of civilization’s precipitous and unreflective dive into the sea — or the abyss — of artificial intelligence. In what follows, I include the substance of two e-mail exchanges, with cosmetic modifications, presenting my student’s challenges and reservations in italics, followed by my own answers and speculations.
Part One: AI As Learner and Teacher
As for the use of AI, you suggested its harmful effect by comparing it to calculators: just as our calculating skills diminished due to calculators, we will lose our intellectual ability because of AI. We are going to rely more and more on AI for thinking, understanding, summarizing, analyzing, questioning, etc. Therefore, the use of it will not lead to self-improvement.
However, what if we use the machine in another way? Not treating it as an answerer, but as a personal trainer. In reading comprehension, for example, I could ask, “I want to confirm whether I understand this or that paragraph. Will you ask me some questions to challenge my knowledge?” Or in practicing English speaking, I could ask, “Can you identify and correct grammatical mistakes in my speech?” In terms of effectiveness, it works, regardless of its harm, and that’s why it is intimidating and considered a real alternative or replacement.
“It works.” As I always ask of anyone who uses that popular expression in our modern pragmatic world: Works for what?
Let’s begin by considering your specific example of AI as a “personal trainer” in reading comprehension. (By the way, a personal trainer is by definition a trainer who interacts with you personally, which is exactly what a non-person cannot do, no matter how well it is programmed to respond to your input in a mock-personalized way, such as by responding to your questions in language that seems to roughly correspond to your way of asking. Socrates might be the “personal trainer” par excellence, the interlocutor whose every conversation, every question, every analogy or example, is carefully measured to suit the soul of the particular human being in front of him at this moment. What he does, no computer program will ever be able to do, for reasons that should be relatively obvious from reading Plato’s dialogues, or even from remembering any of the conversations you have shared with your own somewhat Socratic teacher.)
What kind of questions test reading comprehension? If the reading you are using is strictly informational or explains basic facts or opinions, then I assume AI could prompt you with questions good enough to determine whether you had understood the information or basic implications of the reading adequately. This might be useful for doing reading comprehension exercises in second language learning, for instance.
I say “might be useful,” but it still probably would not be as useful as getting similar questions from a real teacher, who would have a specific (personal) idea of what he expected you to understand, or what he regarded as the most important points on which to test your comprehension, and why. In general, then, I would say this kind of (non-)personal training from AI would be essentially self-directed by the student, since the student would have to know whether the AI’s assessments of his answers should count as adequate comprehension or not, and why – unless he is simply putting all his faith in the computer program to tell him what should count as comprehension and why. Again, you might be able to trust the computer reasonably well if you limited your idea of reading comprehension to the realm of basic facts and inferences from a straightforward informational reading. But of course that is far from the most important type of reading, or the most significant sense of reading comprehension.
To see the problem more clearly, consider a paragraph like the following from Beyond Good and Evil (§292):
A philosopher: that is a man who constantly experiences, sees, hears, suspects, hopes, and dreams extraordinary things; who is struck by his own thoughts as if they came from the outside, from above and below, as a species of events and lightning-flashes peculiar to him; who is perhaps himself a storm pregnant with new lightnings; a portentous man, around whom there is always rumbling and mumbling and gaping and something uncanny going on. A philosopher: alas, a being who often runs away from himself, is often afraid of himself—but whose curiosity always makes him “come to himself” again.
Now, I know that AI could respond to a request for reading comprehension questions for a paragraph like this, and its questions would presumably have a form similar to the kind of questions a human teacher of Nietzsche’s philosophy might ask his students on a class test. The form of the questions would be similar, yes. But what about the content? To ask reading comprehension questions about such a paragraph, and to judge the answers, requires first possessing a comprehensive interpretation of the paragraph oneself – a questioner with a very specific perspective on what Nietzsche is talking about here, and what he means in this context, in order to say that this or that answer shows good, mediocre, or poor reading comprehension. In other words, the AI program offering the questions and then judging the answers would have to be pre-programmed (or self-programmed through its algorithmic compilation system) with one coherent way of “interpreting” Nietzsche’s thought in general, Beyond Good and Evil specifically, and this passage in particular. And for you, as the “trainee,” to have faith in that program’s perspective and judgment, you would already need some reason for trusting that its programmed perspective was the result of a serious and proper reading of Nietzsche. But you would have no way of knowing whether its algorithm was favoring a good reading, a trendy but inadequate reading, or a silly reading. Wouldn’t you therefore be in the position of either feeling that you cannot know whether your comprehension of the paragraph is good, based on the computer’s judgment of your answers, or simply accepting the AI’s programmed way of interpreting Nietzsche on faith, as though the computer were a trustworthy expert on a notoriously difficult philosopher’s thoughts?
Here, you see, we return to the problem of “consensus truth” that I described in “The Ultimate Labor-Saving Device.” The best way of reading Nietzsche, or of explaining the beginning of the cosmos, or of understanding the categories of Being, is not a matter of finding a general compromise among the various popular answers of the moment. Furthermore, even to absorb and use the various answers to such questions in its consensus-generating algorithm, the AI system would already have to be making similar simplifying assumptions about each of those various interpretations themselves, since any serious opinion on each of those topics is already subject to interpretation itself. So to form a coherent picture of how to judge your comprehension of a reading on any such topic, AI would already have to have made many leaps of faith (blind assumptions) regarding the alternative perspectives on the meaning, even before putting them together to form its own non-personal consensus.
Now, what about your second example, practicing English speaking? Here is the question you might ask the AI program, as you suggested: “Can you identify and correct grammatical mistakes in my speech?”
Yes, it can do that – by applying this or that textbook’s idea of correct grammar. Which textbook? If you knew that, why couldn’t you just consult that textbook yourself? This is not a small point, because it is relevant to the second half of your hypothetical request: “and correct grammatical mistakes in my speech.” But to correct the mistakes in your speech is merely to tell you what you should have said, rather than to activate your own intellect to try to find out whether you are right, and how you might correct yourself when you are wrong. In other words, the most important part of learning is figuring out where you have erred, and then making the effort to seek out a better way. AI correction (much like MS Word’s automated editing, which has existed for many years) smothers the learning process, rather than enhancing it, by prompting you to fix the “mistakes” it finds, rather than teaching you how the language works so that you can learn to apply those skills for yourself. Whatever noble intentions a learner might have when using such algorithmic correction programs – “I will just use it to improve my own knowledge of grammar” – the truth is that, like the calculator, it will inevitably become a shortcut for active thinking and understanding, until we all rely on it instead of on our own intellects.
But I think there is a much deeper problem here that cuts closer to the core of what I believe is the biggest danger of AI for language.
Grammar textbooks – the good ones, by the best and most trustworthy grammar scholars – can be very helpful for understanding what has become the accepted norm for proper English in this or that place, in this or that time. Remember, though, that grammar rules really are only that – accepted norms. Such norms are important and useful for establishing proper communication in a society, but they are not laws of nature or unbreakable truths. The accepted norms therefore change over time. Which set of norms will AI apply to decide what constitutes good or bad grammar? It will very likely apply some consensus version of the latest accepted norms. But why are any of the latest norms (let alone a consensus amalgamation of various latest norms) better or truer than the earlier, now-abandoned norms? AI cannot tell you why. Or rather, of course it can “tell you why” if you ask it the question. But its answer will be as unfounded and untrustworthy as any of its other consensus truths, so what would be the point of asking?
So here is the deeper problem I mentioned above: If a new Shakespeare appeared in the world today, and wrote any of his thousand violations of standard English usage, created for the sake of dramatic or poetic effect, what would the AI algorithm say about it? Wouldn’t it be correcting him continuously, telling him he needs to fix his grammar, his spelling, his word order, his word choices, and his prepositions? He violates the norms of ordinary English (of his time) continually, as do most other original writers who are trying to say something that only they have thought, or that no one else has found a fully satisfying way to say before.
Yes, of course it would have to tell this new Shakespeare that he is using English incorrectly. Now you might object, reasonably, that a truly great thinker and writer like William Shakespeare would know better than to allow himself to be restricted by AI’s judgments of correct English, just as the real historical Shakespeare knowingly ignored the rules of Elizabethan English when he wished to say something new and striking. He would know better indeed – if he were a great thinker and writer like William Shakespeare. But in a world of AI editing (as opposed to self-correction), AI composition (as opposed to the slow and stressful process of personal writing), everyone molded to think and speak according to the consensus-based demands of algorithms, and even human teachers (as long as any are left) relying on AI to do basic corrections of students’ writing for them to save effort, how could an independent spirit like Shakespeare come into existence? Remember, every Shakespeare or Nietzsche or Plato was once a young boy with a good mind but no learning or practice to develop the skills he finally showed as an adult. Developing those skills requires a lot of intellectual independence, unforced experimentation, and good teachers prepared to recognize and encourage the original mind and linguistic rule-breaker for the sake of his self-development. How can AI do that? Who could program it to do that? How would the program be applied to individual cases of writing to distinguish genuine “bad language skills” from original brilliance, discouraging the former while encouraging the latter?
This is what I mean when I speak of the (in my opinion) inevitable and (in my judgment) intentional narrowing of thought and communication with AI, until people all sound more or less the same, using more or less the same range of expression and the same norms of accepted meaning. This has been the trend in education for generations now. AI is accelerating the catastrophe, and the acceleration will only continue, as a whole generation of “educators” gives in to government and corporate pressure to use AI in schools. The next generation, having been raised in AI-dependent school systems, and having therefore already lost the thread of genuine education and language (which is individual minds with their own pre-linguistic thoughts seeking an almost impossible spiritual contact through a shared form of verbal communication), will simply land where one has to expect such a dehumanized learning process to end up: our brave new world, complete.
Part Two: Replies to Objections
Not everyone using computers is a game addict; likewise, not everyone using AI will lose their intelligence. It depends on how we are using it, and there can be a benefit even if the cons heavily outweigh the pros. If we deny the pros, we are coloring the truth and demonizing the machine.
Perhaps your analogy is a little generous to AI. “Not everyone using computers is a game addict.” In this analogy, you imply that computer use is to game addiction as AI use is to losing one’s intelligence. I think game addiction and losing intelligence are appropriate analogues, because both represent a specific negative outcome that follows from some cause. But as for the two causes in your analogy, “computer use” is much broader than “AI use,” which makes the comparison uninformative. After all, we can use computers without ever playing a game at all, so there is no intrinsic risk of game addiction in using a computer, whereas using AI is precisely the action that carries the risk of AI’s negative effect (losing intelligence). That is, you can use a computer without ever playing a game (and therefore without risking addiction), but you cannot use AI without using AI (and therefore risking the loss of intelligence).
Therefore, I think that to make your analogy more informative in this context, we should change it to this: Using AI is to losing one’s intelligence as playing computer games is to game addiction. Reframed in this way, we can easily see both the danger of AI and also, according to your thinking, the non-inevitability of the negative outcome.
And the analogy, written this way, may be extended to many other activities that have potentially harmful effects, but that could be judged as potentially beneficial in other ways. One question we should ask, then, is whether the potential benefits are valuable enough (or even real enough) to justify the risk of an activity that carries a significant potential for harm.
As you go on to say in the passage I quoted above, “It depends on how we are using it.” That is true, I suppose. To substitute another example for our analogue (playing computer games), consider alcohol. I know from personal experience that some alcoholic drinks have a very nice flavor, and some families (including mine) usually included an alcoholic drink during and/or after a holiday dinner purely to add to the “specialness” of a holiday mood. And yet I stopped drinking alcohol completely for many years in Canada, after I had moved away from home; then, during my early years in Korea, I agreed to accept a rare drink due to (perceived) pressures of social custom (not wanting to offend a senior, etc.); and I have now returned to not drinking at all under any circumstances. I know that having a drink once in a blue moon would not kill me, nor would it lead to alcoholism. But I see the “benefits” of drinking as so infinitesimal and purely material, compared to the potential for harm, that I regard the cons (which include some immaterial issues, beyond mere concerns about drunkenness) as more than enough reason to forego the “pros,” whatever they may be.
As you then note: “If we deny the pros, we are coloring the truth and demonizing the machine.” That’s true, so I am willing, as with alcohol, computer games, watching TV, or perhaps even gambling, to admit that there might be some possible benefits to such activities, however inessential these benefits may be. I know that AI is already being used by millions (billions?) of people, often without their even realizing they are using it, to enhance certain practical pursuits in some ways.
I am willing to go ahead, for the sake of argument, and use the word “enhance” here, even though I believe the question of whether something is really enhancing our lives is more complicated than merely asking, “Does it make an activity easier or more efficient?” Ease and efficiency are only beneficial if the activities that are being done more easily and efficiently are (a) good activities in the first place, and (b) activities that will retain their positive effects to the same degree (or higher) even in their easier or more efficient forms. To return to my typical example, the use of electronic calculators certainly makes some arithmetical calculations easier and more efficient for people – especially since our reliance on calculators over the past fifty years has basically made us weaker and weaker at solving math problems in our heads. That is, the apparent benefits of calculators are partly a result of our over-reliance on calculators. So they have helped us to do some things better and more quickly. But in the process, they have made us dependent on them to a degree that indicates a genuine lowering of basic human brain power in an essential area of human reasoning. (I say that as someone who does indeed use a calculator often, during grading periods, for the sake of feeling confident about the scores I am giving people. In other words, I am not refusing to join the modern world outright; I just believe in maintaining a healthy skepticism about all supposed progress, and refusing to have blind faith in anything merely because it seems to make life easier – as though easier were always better.)
And the most harmful effect, or ultimate side effect, of that personalized training is that it cuts off human interaction. As of now, it has other problems – its responses are not totally credible, and it is mere elaborate plagiarism – but in my opinion, these are not insurmountable and will eventually be patched up by some means. However, it is inevitable that AI will diminish human interaction, because that’s what it does: replacement.
Let’s break this down piece by piece.
And the most harmful effect, or ultimate side effect, of that personalized training is that it cuts off human interaction.
I am willing to accept that this might be the “most harmful effect,” but as I explained in Part One of this reply, I do not agree that this is the only harmful effect, because I do not believe that even the “personal training” function can be accepted as a genuine enhancement of anything humans can do, unless we simply reduce all notions of “good vs. bad” to “speed and ease vs. slowness and effort.” But why are speed and ease better than slowness and effort? Are they always better, in all cases? If not, how do we distinguish which cases are benefitted by speed and ease and which are not? And how do we maintain the human belief in the benefits of slowness and effort – which is a moral premise in a sense – in a world that has become increasingly infatuated with the idea of speed and ease as ultimate goods?
As I have tried to explain previously, the personal training functions of AI depend for their acceptability on a lot of assumptions about the nature and quality of the algorithm’s answers or “judgments,” and ultimately a reliance on what I call “consensus truth,” which is by definition non-truth. In other words, the benefits you are assuming here involve much bigger questions than whether AI can “learn” how to give useful or reliable answers or corrections. There is a more fundamental issue about where those answers or corrections come from, and what effect our reliance on AI is likely to have on our human understanding of what truth is, what knowledge is, and what language is.
Next point:
…it has other problems – its responses are not totally credible, and it is mere elaborate plagiarism – but in my opinion, these are not insurmountable and will eventually be patched up by some means.
The responses will get “better,” yes. And this will happen quickly, thereby reassuring the doubters with more acceptable responses. And then everyone will quickly forget how they decided that the AI responses were better – in other words, they will forget that AI can only be judged better by human (non-AI) standards – and will start to defer more and more to AI answers as though they represent superior answers or truer answers, which is inherently nonsensical, at least in any area of reasoning that is essential to human existence.
As for surmounting the “elaborate plagiarism” problem, I do not see how this can happen, other than by the plagiarism becoming ever more elaborate, until it is so well obscured that we barely notice it anymore. The way AI systems are developed in the first place is by feeding into them everything anyone ever writes in digital form, so that it can become part of the AI’s “knowledge base.” This is intellectual property theft, plain and simple, and a serious violation of any coherent conception of property law. Hence China and other countries that deny private property on political principle are always at the forefront of these algorithmic developments, because such countries have no internal debates about the legitimacy of just taking everything from everyone to use however they want, privacy and property be damned. In so-called democratic countries, the thieves have to be more subtle, such as by means of legalistic clauses tucked away in “Section 32, subsection f” of a company’s user agreement, which now typically says something like, “Everything you produce using this application belongs to you, the content producer; however, your use of this application is interpreted as consent for the company to use, copy, sell, distribute, or otherwise exploit the content you produce in any way it sees fit, without any legal restrictions whatsoever.”
In other words, “You own your content, but we own you.” The level of political and economic revolution implied by this sudden and publicly undiscussed shift into the digital world of universal implied consent is absolute and ought to be terrifying. But people barely notice, and hardly care. And this is how AI is being developed and “enhanced” so quickly. “Plagiarism” is certainly applicable, but in fact it doesn’t begin to describe the level of human violation being perpetrated here, whether directly by tyrants (as in China) or indirectly by corporate opportunists assisted by passive approval from “democratic” governments ignoring their primary responsibility to their citizens.
However, it is inevitable that AI will diminish human interaction, because that’s what it does: replacement.
Yes. And it will be as effective at diminishing real human interaction as it is effective at changing men’s perception of what knowledge, truth, and language are, and what life is ultimately about. The modern obsessions with comfort and amusement have already been leading in this direction for two centuries. AI is merely speeding up the final stages of the process.
Finally, I arrive at the last stage of your comments, which includes many interesting questions.
Non-Knowledge and Non-Art
What if people using AI are not looking for knowledge and art in the first place? What if they don’t expect AI to function in the same way our minds do? Most people don’t really want AI to create an artwork or understand knowledge. Don’t they just want a painting/music or a piece of information/fact? Judging their quality is another thing, so I’d rather talk about the process. When AI can generate results similar to humans’, what does the difference (that AI cannot think, understand, and create) substantively mean? What’s the point of the conclusion that we humans are the only ones who can think and create?
A series of related questions, but some of them may be discussed together.
What if people using AI are not looking for knowledge and art in the first place? What if they don’t expect AI to function in the same way our minds do? Most people don’t really want AI to create an artwork or understand knowledge.
Most people in history were never looking for real knowledge (The Truth) or true art (The Beautiful). However, they probably all thought they were. That is, they (most people in all times and places) looked for answers to their questions or problems that they could believe were the truth, even if their standard of truth was very low; and they looked for objects (pictures, books) that they believed were beautiful, even if they had no reasonable standard of art or beauty. In this way, although there has always been a clear distinction between the search for practical needs or amusement on the one hand, and the search for deeper understanding or inspiration on the other, both types of search (high and low) rested on certain shared underlying premises.
For example, all people have always tried to find out what they wanted to “know” from other people, assuming that some person out there knew the answers they needed, or could show them how to find those answers. Likewise, all people who listened to stories, sang songs, or looked at pictures assumed that those stories, songs, or pictures were produced by other humans. So there was a kind of continuity or natural hierarchy from the lowest forms of practical information to the highest forms of wisdom, and from the most vulgar forms of mass entertainment to the noblest forms of high art. The continuity was grounded in the fact that all of these desired goods (real or fake goods) were human products or discoveries, and the people seeking or enjoying them understood that what they were seeking and enjoying was coming from other humans, and in an important sense derived its value from being a human achievement.
Consider people who enjoy the dumbest and most computer-dependent popular movies in the world, such as the Marvel superhero movies, which have almost no plot and even less real writing or serious directing. Even people enjoying such garbage still get excited about Robert Downey Jr.’s way of playing Iron Man, or argue about whether this or that representation of Spider-Man or Captain America is truer to the original comic book writer’s “vision” of the character. In other words, they still believe they are watching a product of human creators, which can be appreciated and judged as a human creation, in spite of all the CGI and so on.
But when AI has taken over the production of mass entertainment, and even, eventually, the production of “art,” the continuity that makes the search for beauty meaningful will have been broken: people will no longer be enjoying or seeking evidence of human thought, vision, or skill, but merely looking at flashing images or collections of sounds for their own sakes, without any connection to the human moral realm that was both the source and the essential purpose of art (from the highest to the lowest) throughout all of human existence until now. With AI, we are truly reduced not merely to the lowest form of art, but to anti-art, in the sense that without the emotional element that Rousseau calls the “moral impression,” we will simply be looking at colors and shapes, or hearing notes and harmonic chords. But, as Rousseau (who was a composer as well as a philosopher) explains very clearly, colors and shapes are the material of painting, but not its essence, and notes and harmonies are the material of music, but not its essence. AI will make the matter the essence, obviously. The moral impression that comes from experiencing and recognizing a representation resulting from another human mind and hand will be entirely (or almost entirely) eliminated. This was the idea that inspired the essay question I assigned for my writing class exam last month: “If you found out that your favorite music had actually been produced by AI, would you still enjoy it the same way?”
This leads me to your next, related question:
Don’t they just want a painting/music or a piece of information/fact?
As for painting/music, I would ask, based on the above discussion, what is a painting? What is music? Can these two things be separated from human intention without ceasing to be painting and music at all? Or, since what AI is doing is simulating human painting and music, we might ask: Is a simulation of a human intellectual and emotional process the same thing, or of the same value to us, as the actual human intellectual and emotional process itself? In other words, will even the simple-minded people you refer to feel satisfied with a non-human simulation of what is essentially the product of a complex human process?
I agree with you that most people “just want a painting/music,” without knowing or caring anything about the meaning of art and beauty. But that does not mean they do not desire beauty. Let’s take Socrates seriously when he insists that everyone desires the good, so the only real difference between good and bad choices is based on how ignorant or knowledgeable people are about what the good is. In other words, good choices (and good men) come from knowledge of the good, whereas bad choices (and bad men) come from ignorance. Thus, as people sometimes summarize Socrates’ view, “virtue and vice” are really just “wisdom and ignorance” described with reference to action.
As you know from The Symposium, Socrates applies this principle to the realm of the beautiful as well, since the relation between the beautiful and the good is intimate. And in the end, he shows that the truest beauty is true virtue, which is wisdom. In other words, the true, the beautiful, and the good are all interwoven in his view of the meaning of desire, and ultimately in his description of the erotic ascent.
My point in mentioning this is to remember that we are talking about human nature, which means we should be focused on what humans really want, rather than merely what they think they want. Yes, AI attracts people with the promise of endless self-generated pictures and tunes, along with immediately generated answers to a million questions. But does any of this answer to our true desires, any more than the Marvel movies answer to the true human desire for beauty, or than simplistic sophistry answers to our true desire for wisdom? The human condition has ignorance and wrong turns built into it, and the wonder of the soul is that any of us can make any progress at all in digging our way out of the confusion – the Tower of Babel – and begin to climb toward the proper goals of our nature. Doesn’t the false promise of “objectivity” and “personalized answers,” coming from a technology that in truth represents the very death of genuine objectivity and personalization, represent a particularly serious new challenge to the already infamously difficult human problem of knowledge and beauty?
When AI can generate results similar to humans’, what does the difference (that AI cannot think, understand, and create) substantively mean?
“Similar results” in what way? And can you really separate the results from the process? Are the results really “results,” if there is no relevant process? And if not, then the definition of the result itself is inseparable from our awareness of the process that led to it. Can people be tricked into believing in a result derived from no natural process, or even completely lose their natural awareness of the inherent significance of the process in judging any result? Yes, probably, and the popularity of so much computer-related learning and entertainment today seems to prove it. But if humans can be so tricked and abused, this does not provide any answer to the objections to AI, but rather only highlights the reason for the objections. We are going to lose our natural awareness of ourselves and our proper aims as a living species.
Consider the example of a machine that can fire a baseball ten times farther than any human baseball player can hit it. At a science fair, this machine might be interesting, but at a baseball stadium it would be intrinsically worthless. The reason it is interesting to see a ball hit as far as a real human being can hit it is precisely that we, the viewers, are also human beings like that player, and can therefore appreciate his accomplishment (a) by comparison with our own understanding of ordinary human capabilities, and (b) as an inspiring example of how much a human being – a being like ourselves – can accomplish with a lot of hard work and determination. There is a kind of virtue, not just physical but also character virtue, implied in that great athletic ability, so we admire it and applaud it as a kind of celebration of human excellence. The ball-firing machine can make the ball travel much farther, but that is boring, or merely a curiosity without moral relevance, because we are not ball-firing machines (or ball-firing machine manufacturers). The machine’s “achievement” means nothing to us as baseball fans; if anything, it seems like an insult to human players, who must overcome the natural limits of the body they (and we) are given in order to achieve their great feats.
What’s the point of the conclusion that we humans are the only ones who can think and create?
The point is only this: What AI offers us, ultimately, is a super-duper, hyper-deluxe, speed-of-light communication simulator. Simulator = Mimicker = Faker = non-communicator. Because a machine cannot communicate. Communication is not the sharing of language symbols. It is a sharing of souls, achieved by means of language symbols. Communication without intention, mutual understanding, emotional attachment, and the pre-linguistic possession of something that one wishes to communicate – something that can be communicated due to an essential (i.e., natural) likeness between the sender and the receiver – is not communication, and can never be. A person who believes he is communicating with a machine that simulates his own methods of communication is merely deluding himself, and more specifically – like the prisoner alone with his robot girl on a penal asteroid in The Twilight Zone episode “The Lonely” – he is actually only communicating with himself. That is to say, he is living in an illusion of communication, however comforting or corruptive that illusion might be under various circumstances.
For a community of humans living together, with at least the potential for real contact with truly likeminded souls available to them, to be sucked into this mass delusion in the name of speed and efficiency, or “progress,” would be the saddest death possible for the human spirit. If it happens – and I suppose it looks almost unavoidable now – it will be a fantastic final proof for all those great thinkers of the past who theorized that progress is an illusion, that human development is always and inevitably devolution, and that the natural cycle of human existence (and civilization) leads downward, not upward, perhaps so that the next time around the circle may begin again from the point of nothingness.
And that’s all the more reason for “we few, we happy few” to enjoy the wonder of natural attachment and mutual understanding with one another to the extent possible for us, since understanding – the philosophic perspective – is not only the highest life possible, but also the only reasonable option left when civilization has collectively chosen death as its preferred trajectory.
The last man is here. The last man, as Nietzsche says, lives longest – but even that longest life comes to an end eventually, and mercifully.