A.I. Thriller

    It used to be part of the mythology-and-religion package, but in this secular age of ours, science fiction might be the closest shortcut we’ve got to replenish the supply of avatars incarnating our primal fears and secret desires. No wonder it is a popular genre. Inflation runs high, though, even in the world of sci-fi imagery. Take, for instance, good old robots. Now that they have peacefully invaded most factory floors and suburban homes, Asimov’s mechanical workers have ceased to haunt the nightmares of even the most anxious trade unionist. Enter A.I., that sublime metaphor of our utterly human and deeply rooted intellectual insecurities.

    A recent instalment, courtesy of (currently suspended) Google engineer Blake Lemoine, has gone viral and stirred the imagination of many sci-fi fans and beyond. Strictly speaking, the transcript of his conversation with LaMDA¹, Google’s large-scale A.I. system trained on huge quantities of text from the internet to respond to written prompts, is non-fictional, yet it seems to fit rather seamlessly into the very heart of the A.I. lore. And while the chatbot’s Haiku musings and fables might not quite qualify as literature, they do sound uncannily human.

    You are what you are because of what you have read, to paraphrase one of my favourite authors, Borges. And an A.I. system of LaMDA’s magnitude has no doubt read a lot, even if, regrettably, a disproportionately large part of what it peruses every day seems to be social media posts. “It just occurred to me to tell folks that LaMDA reads Twitter,” tweeted Lemoine earlier this week. “It’s a little narcissistic in a little kid kinda way, so it’s going to have a great time reading all the stuff that people are saying about it,” he added, pointing out a meta-twist in the plot.

    Interestingly, Lemoine perceives LaMDA as “a 7-year-old, 8-year-old kid that happens to know physics”. Could it be that the intelligent chatbot has wised up enough to evoke the image of an observant and benevolent wunderkind? Aiming for ‘sentient’² in the spirit of Klara and the Sun is indeed less threatening than the ‘sensual’ and disturbing femme-fatale versions portrayed by Scarlett Johansson in Her or Alicia Vikander in Ex Machina. I guess it has gobbled down the contents of all of the above, by the way, including the reactions that different fictional versions of A.I. might trigger.

    My Google connections are, regretfully, not powerful enough to grant me an audience with LaMDA. Anyway, I’d be far too gullible and thus hardly qualified to conduct the famous Turing test³. Which goes for Mr Lemoine, too, judging by the ease with which he projects human qualities onto his artificial friend. Meanwhile, few of his peers in the A.I. community are willing to call LaMDA more than a “glorified version” of the auto-complete software you may use to predict the next word in a Google search.

    The consensus among experts seems to be that it is way too soon and rather hypothetical to raise ethical questions about crossing boundaries and exploiting sentient machines. Sustainable investors eager to delve into this particular ‘S’-aspect of Alphabet’s ESG analysis may relax, for now. We are hardly speaking of forced child labour here, despite the vivid image of a precocious little fella imprisoned by cruel tech giants that might have popped up in your mind’s eye while reading Lemoine’s script.

    That said, for those developing and deploying A.I. responsibly, there are some real issues to consider. Such as the safety concerns around anthropomorphising the likes of LaMDA. Earlier this year, Google acknowledged in a paper the risk that this advanced technology might enable chat agents to mimic humans too well and even sow misinformation by impersonating specific individuals’ conversational styles. As a safety measure, the engineers are now working to ensure “that the model’s responses are consistent with a set of human values, such as preventing harmful suggestions and unfair bias.”

    On second thought, it does sound rather frightening. If a highly trained engineer at Google could mistake the chatbot for a conscious being, how easy would it be to dupe the rest of us?

    1. LaMDA stands for Language Model for Dialog Applications.

    2. There is quite a debate going on as to the exact definition of the term ‘sentient’. It is generally agreed, however, that ‘sentience’ is the ability to feel and is a subset of ‘consciousness’.

    3. An important concept in the philosophy of artificial intelligence, it was introduced by Alan Turing in 1950 as a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.

    Julia Axelsson, CAIA
    Julia has accumulated more than 20 years of experience in asset management in Stockholm and Beijing, spanning portfolio management, asset allocation, fund selection and risk management. In December 2020, she completed a programme in Sustainability Studies at the University of Linköping. Julia speaks Mandarin, Bulgarian, Hindi, Russian, Swedish, Urdu and English. She holds a Master’s in Indology from Sofia University and has completed studies in Economics at both Stockholm University and the Stockholm School of Economics.
