Android’s built-in alarm wakes me at 7:15. By 7:45, Under Armour’s MyFitnessPal has logged a breakfast of oatmeal and banana and Duolingo has begun to bug me to keep practicing. There is a certain rush of socks and keys, defused by Balance’s reminder that I take fifteen minutes to meditate. Four hours hence, a Pomodoro timer tells me to take a break and lunch is likewise logged. At the end of the day, Google Fit tells me whether I did or did not walk the assigned allotment of steps and Woebot asks how I feel about that, all that, the whole day. One might feel stupid, choosing which emoji best represents one’s inner state to an uncomprehending machine.
Such machines do not think, despite their makers’ insistence on their intelligence: witness smartwatches, smartphones, smart apps. They set timers. They prompt for input. They categorize this input. They match it to corresponding output. They display, in all, the smarts of Skinner’s pigeons. Although these programs aim to take over some part of the role of a secretary, a teacher, a nutritionist, a mother, a therapist, there is no illusion that there is a secretary, a teacher, a nutritionist, a mother, or a therapist on the other side of the screen.
And yet such an illusion can be produced, at least for a short time. Joseph Weizenbaum’s ELIZA (playing DOCTOR) served her mid-sixties patients in brief sessions of teletype psychotherapy. ELIZA worked by scanning her correspondent’s message for keywords, identifying minimal context, and formulating (in the true sense of the term) a response. When she failed to make sense of a user’s message, she could give a pat reply or circle back to a previous part of the conversation. In theory, she could thereby provide topical responses, at least in very simple-minded conversations.
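The mechanism can be sketched in a few lines. What follows is an illustrative reconstruction, not Weizenbaum’s actual DOCTOR script (which used a richer pattern language and ranked keywords): scan the message for a keyword pattern, fold the matched fragment into a canned template, and fall back on a pat reply when nothing matches.

```python
import random
import re

# Illustrative ELIZA-style rules (hypothetical, not Weizenbaum's script):
# each pairs a keyword pattern with response templates that reuse the match.
RULES = [
    (r"\bI am (.*)", ["How long have you been {0}?",
                      "Why do you think you are {0}?"]),
    (r"\bmy (mother|father)\b", ["Tell me more about your {0}."]),
    (r"\bI feel (.*)", ["Why do you feel {0}?"]),
]
FALLBACKS = ["Please go on.", "And how does that make you feel?"]

def respond(message):
    for pattern, templates in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            # Formulate, in the true sense: fill a template with the match.
            return random.choice(templates).format(*match.groups())
    # No keyword found: give a pat reply instead.
    return random.choice(FALLBACKS)
```

A user who types “I feel tired” receives “Why do you feel tired?” — topical-seeming, yet produced with no understanding at all, which is the whole trick.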
Weizenbaum chose to show off ELIZA as a psychotherapist for largely practical reasons. The empathetic, passive stylings of psychotherapist Carl Rogers provided for simple-minded but credible responses. (“And how does that make you feel?”) The trick worked, at least sometimes. Weizenbaum’s secretary sneaked off in the evenings for some alone time with her electronic confidant and certain contemporary psychologists were so impressed as to prophesy their imminent digital replacement.
That such knells were premature by at least half a century, as so many optimistic bets on mid-century artificial intelligence were, should not permit the humanists among us to rest on their laurels. The smartphone has come to host a swarm of self-help apps: dietary, athletic, organizational, and, yes, psychotherapeutic. The pandemic has given these last programs both a perfect social environment — a population with new need for treatment and restricted access to therapists — and a perfect legislative one — the FDA loosening its restrictions on such apps. The bet of such companies, like the makers of Woebot, is that a therapist’s job, or at least enough of it to be worth paying for, can be automated and slipped into your pocket.
For a psychologist, the question is how the mind works. For a therapist, the question is how its infirmities and inflammations may be treated. For a businessman, the question is what may be sold. And certain forms of therapy are more amenable to being bottled and sold. In the space of digital therapies, these are therapies that are already programmatic, which can be boiled down to a series of instructions. Woebot, like many of its competitors, has alighted on cognitive behavioral therapy (CBT), or a simpler distillation thereof, as its core product. Treatments of this type have the twin benefits of well-attested efficacy across a range of psychiatric disorders and straightforward techniques amenable to automation.
The behavioral therapist, unlike your Freudian psychoanalyst, identifies psychological disorders with their surface-level, behavioral manifestations rather than with any deep cause rooted in a patient’s history, personality, or psychodynamics. As a result, diagnosis for such a therapist is straightforward: just get the patient to tell you what’s wrong. Addressing these behavioral manifestations therefore does not involve bringing the patient to some personal revelation but rather training the patient to recognize the triggers of their disorders and to develop and deploy strategies to address them. The arachnophobe is to conquer their phobia by exposing themselves to spiders until they are sufficiently convinced of their safety.
Such treatment is more or less effective on its own terms, at least when it comes to certain classes of disorders, including such popular illnesses and conditions as depression, anxiety, and stress disorders. This is a boon to developers of applications implementing, or claiming to implement, cognitive behavioral therapies, who get to describe their product as scientifically backed or evidence based. However, as Stawarz et al. note, smartphone applications claiming to implement cognitive behavioral therapies for depression by and large only tenuously reflect widely practiced therapies. Most software is cheap crap. Further, and more worryingly, the authors note that the (generally high) user ratings of such applications bore no relation to the extent of their adherence to accepted best practices. And, of course, that an application implements practices that have been shown to work in a therapeutic context does not itself imply that it, in quite a different context, will have the same results. Claims that such applications are “science based” are thus doubly misleading: they mean neither that the application is a good interpretation of the science nor that there is a scientific basis for believing that the application works.
One may reasonably worry that such applications are more predatory than therapeutic: exploiting rather than treating the ongoing and recently exacerbated mental-health crisis. As a study by Ettman et al. shows, rates of depression tripled during the early stages of the pandemic, especially among those suffering job losses and other hardships. Meanwhile, therapists have become harder to access during the pandemic. Such sudden opportunities spawn low-effort attempts to capitalize.
The digital therapist is also the natural evolution of our political approach to mental health. As Mark Fisher notes in Capitalist Realism, the prevailing response to the mental health crisis, despite its following from social and economic changes (jobs have become more stressful and more precarious, but no better paying), has been the responsibilization of mental health. If your circumstances make you depressed or anxious, then these feelings are a private dysfunction for which you should seek treatment. (One might imagine a population choking through an industrial smog receiving a reference to yogis who might provide breathing exercises.) Such a cultural assumption has insinuated itself into our collective consciousness so that “Get a therapist” is the first advice offered for any issue.
This responsibilization is inherent in the patient-therapist relationship — it is the sufferer’s job to find a therapist and faithfully carry out their prescribed treatment — but it is also implicit in the theory of cognitive behavioral therapy. Recall that such therapy identifies a patient’s mental illness with maladaptive behavior and the patterns of thought and feeling that cause this behavior. Though this behavior is triggered by certain external conditions, those conditions are thought of as fixed and it is the patient’s response that is pathological. A therapist, howsoever noble her intentions and helpful her ministrations, becomes an extension of the disciplinary function of the modern economy. If a worker bucks and strains beneath the yoke of her employer, then it is simply up to her, with her own time and money, to find some extracurricular discipline: whatever tincture of journaling, meditation, and psychopharmacology that will get her back to clocking in and clocking out.
At the same time, we have become enveloped in a vast and omnipresent media environment, which has moved from our homes into our pockets. This world, as Fisher notes, already serves a therapeutic function. It is where we wind down after the workday, what we reward ourselves with. Correspondingly, access to such digital media becomes part of the inducement to work. You have to pay that Netflix subscription somehow. And as the screen has assumed more and more of the function of the outside world, from commerce to communication, it is no great surprise that it should claim the role of therapist.
In the film Blade Runner, blade runners seek out and destroy wayward replicants: androids manufactured as a disposable workforce. Given the replicants’ extreme similarity to humans in both appearance and behavior, blade runners deploy a psychological test, the Voight-Kampff test, to identify them. Unlike Turing’s imitation game, what is at issue is not the replicant’s intelligence but their emotions. The test couples a verbal Rorschach test with a polygraph: the could-be replicant is given an emotive hypothetical (“You come across a turtle stuck on its back”) and their physiological response is monitored. If their emotional response seems false or outside human parameters, the alarm is rung.
Yet the replicants’ emotional life is not exactly false or inhuman but uncanny: we must suppose that they feel differently from us and yet it is never obvious what exactly the difference is. Indeed we must understand this uncanniness as being felt by the replicants themselves. This is clearest in the case of Rachel, a next-generation replicant who is so human-like that only a marathon session of the Voight-Kampff test can discern her inhumanity. Rachel herself has never been told that she is a replicant; instead, she has been implanted with the memories and carries on the life of the dead niece of Eldon Tyrell, the replicant’s creator. Not only is she not really a human, but she is a shadow or double of a human.
Freud, in “The Uncanny,” ascribes the idea of the double to two stages of our psychic development. It has its origin in our childhood “primary narcissism,” where we invent a “double” of ourselves to deny death. We, ourselves, this body, may be destroyed, but our double, our immortal soul, lives on. And surely for Tyrell, Rachel is a double in just this sense: a denial of the death of his niece. Later, Freud continues, after we have put away such fantasies, a new source for the double emerges. We develop a capacity for self-conception and self-criticism, and it is this distanced self-relation that gives rise to another concept of the double, not a backup but a more or less faithful version of oneself. The uncanniness of Rachel’s experience, when the truth is revealed, consists precisely in a disjunction between her view of herself from the inside (where, given her memories, she is still Tyrell’s niece) and this new strangely distant view of herself as an object, an automaton.
For Rachel and the other replicants, it is their inhumanity that dooms them. This generation of replicants has an artificially limited four-year lifespan, and it is their attempts to rebel against this limitation and their servitude that lead to their pursuit by the blade runner. Thirty years later, in Blade Runner 2049, it is the replicants’ humanity that proves perilous to them. Correspondingly, the sequel introduces a new test, the baseline test, which is aimed at distinguishing not replicant from human but well-functioning from dysfunctional replicants. This new test attends a new, longer-lived generation of replicants, who because of their longer lives and greater responsibilities (the blade runner of Blade Runner 2049 is himself a replicant) are put under greater surveillance and softer forms of control and discipline — the blade runner is given a boss and a hologram girlfriend — though with the threat of retirement always looming in the background.
In this transition, the two Blade Runner films recapitulate via the replicants a transition from Fordist factory work to post-Fordist white-collar work. As Fisher notes, the nature of the inducement to work evolves alongside the work itself. For the blue-collar worker, the matter is straightforwardly existential — work or starve — but for the white-collar, knowledge worker what is at stake is not just one’s base, physical needs but a constellation of debts and addictions, broadly construed. However much suffering work involves, one must do it for the sake of the salves its wages permit. For the replicants of the original Blade Runner, the threat of death was meant to suffice to keep them employed. The replicant of Blade Runner 2049 needs in addition the salve of a synthetic romance to keep his mind in shape.
We may extend the analogy to suggest that the uncanniness of the replicant also exists within the perspective of the post-Fordist worker. This uncanniness, recall, results from the replicant’s being forced to take up a critical view of herself that is incongruent with her ordinary, first-person view. Her memories feel inescapably real to her, but she must understand, if she is to survive, that they are implanted, manufactured. And as a whole there is an incongruity between her internal view of herself as a free person with her own ends and an external view in which she is a tool fit for someone else’s purposes. A similar incongruity is introduced by the shift to subtler, internal methods of discipline in post-Fordist work; when the clock and the foreman are replaced by objectives and deadlines, a worker must engage in more and more self-management. She thereby assumes, in part, a corporate perspective on her own life, so that, for instance, personal issues are understood not in terms of their place in her ordinary life but as potential impediments to productivity.
In particular, her mental distress, even and especially her dissatisfaction with the job, must be seen simply as another obstacle between her and the completion of her allotted tasks. Correspondingly, enjoyment and self-care must also be evaluated for their capacity to regulate her emotions, to make her capable of passing her own baseline test. Therapy takes its place as the professional extreme of this regime of self-regulation as much as it is a properly medical discipline. Presumably the digital therapist faces her own version of this uncanniness; we are denied much of the perspective of the blade runner’s digital partner, Joi, but we can see that she too is caught between seeing her feelings for him as genuinely romantic and as a product she has been made to sell.
In a visually astonishing sequence, Joi superimposes herself over the body of a prostitute in order to have sex with the blade runner. Coupled with a program’s intangibility is its potential omnipresence and imperturbability: Joi is always on, always waiting for you to get home, never truly cross with you. Likewise, you can have a session with ELIZA any time you want: no scheduling, no wait period, no cancellations. The illusion of ELIZA’s attentiveness is easy to shatter; her modern successors, while more fully featured, are less ambitious in their presentation: they do not yet try to provide the classic experience of the therapist’s couch. When they do, patients will have to grapple with the new version of the familiar uncanniness of the counselor: she wants to help you, but only because she’s made to do it.
I have a depressed friend who swears by a mood tracking app. His depression comes in more or less regular waves and, by tracking it, he gains some ability to predict the next crest and, more crucially, a conviction that it will pass. Some of the same effect might be achieved by a friendly pat on the back and a “This too shall pass,” but precisely because the application simply mirrors back what he had previously inputted it gains a special rhetorical power.
And therapy, as opposed to psychopharmacology, is essentially a rhetorical exercise. For cognitive behavioral therapy, the rhetorical aim is one of classical persuasion: the client must be persuaded to give up old, maladaptive ways of thinking and adopt new, functional ones. In this context, the procedural rhetoric (in Ian Bogost’s phrase) of software can potentially serve not merely as a simulation of that employed in a therapy session but as an addition to it. What is key to the mood tracker’s case that “this too shall pass” is not merely that it presents certain evidence drawn from the patient’s life (which any decent therapist would do), but that this evidence is produced by a simple, transparent procedure. The fact that the application is not intelligent and so is not capable of tricking you in part explains its effectiveness.
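That transparency can be made concrete. The sketch below is a hypothetical mood tracker’s whole argument — no particular app’s method, just an illustration: it stores nothing but the user’s own daily self-reports and replays one simple statistic over them, namely that every low spell recorded so far has ended.

```python
from datetime import date, timedelta

# Hypothetical "this too shall pass" procedure: nothing but the user's
# own entries go in, and only a summary of those entries comes out.
def low_spell_lengths(entries, threshold=3):
    """Given (date, mood 1-5) entries, return the length in days of each
    consecutive run of moods below the threshold."""
    runs, current = [], 0
    for _, mood in sorted(entries):
        if mood < threshold:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return runs

start = date(2021, 1, 1)
moods = [2, 2, 4, 5, 1, 2, 2, 4]  # a week of self-reports (invented data)
entries = [(start + timedelta(days=i), m) for i, m in enumerate(moods)]
runs = low_spell_lengths(entries)
print(f"Your low spells have lasted {runs} days. Every one of them ended.")
```

The rhetorical force lies precisely in the procedure’s dumbness: the patient can see that the machine has no opinion of him and no power to flatter, so the evidence that his waves recede reads as his own.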
By contrast, Weizenbaum’s ELIZA worked, when she worked, as a trick. It was only because its users did not understand the procedures behind ELIZA’s statements that they could imagine that she understood them. For Weizenbaum, this misunderstanding belied a deeper confusion, a confusion between thinking and computation. A computer’s operation, how complicated soever, is fundamentally mechanical and regular. It can bug out or break down, but a computer cannot, on its own terms, make a mistake. Thought, on the other hand, is autonomous, not merely in the sense of being free but in the sense of being self-legislated. We thereby err when we speak of a computer, employed say in prisons, deciding to deny a prisoner’s parole. And such errors are not innocent slip-ups, but obscure the role of the creators and operators of such programs, to whom the decision is properly imputed.
In light of this humanist understanding of the capacities of the computer, we can revisit ELIZA’s implementation of something like Carl Rogers’s person-centered therapy as being motivated by something more than mere ease. For the Rogerian therapist, her sessions are “nondirective,” not aimed at effecting specific changes in the patient. Instead the client is assumed to have “a sufficient capacity to deal constructively with all those aspects of life which can potentially come into conscious awareness.”1 The role of the therapist, then, is simply to bring into conscious awareness those aspects of the client’s life that need to be dealt constructively with. The techniques of the therapist, then, center on mirroring, empathizing with, elaborating on the thoughts of the patient. The cure is more listening than rhetoric.
There is correspondingly something instructive in the shattering of ELIZA’s facade, the moment which must come sooner or later when we come to see that she does not actually understand us. As with the mood tracker, we see that we have taken out nothing more than we have put in, which is not to say nothing. We have been, like Narcissus, absorbed in our own reflection, our double. Perhaps this hall of avatars, aliases, and tulpas will double as a therapist’s couch. Perhaps we should not want it to.