Flowers For Anthropic

What if Daniel Keyes' work is in fact a cautionary tale for artificial intelligence?


A few years ago, I read the science fiction novel Flowers for Algernon, written by Daniel Keyes in 1966 - a birthday gift to me from Candy. In it, a breakthrough cognitive treatment, initially tested on a lab mouse called Algernon, allows the animal to attain a high level of intelligence. The technology is then tested on Charlie, a young man with an intellectual disability who is also the narrator of the story. We follow Charlie's growing intellectual capacity through the improving language of the narrative itself, and his social ascent as he gains intellectual admiration and even a romantic relationship. We then learn that the scientists responsible for developing the treatment record a full regression in Algernon's capacities, and this is subsequently reflected in Charlie's tragic mental decline.

Over the last year, Candy and I have spent time using the tools provided by the hyperscaler AI companies, notably Anthropic, to help us build many of our coding projects. These tools have allowed us to construct applications, to make decisions about appropriate hardware and services, and to accelerate the debugging of the inevitable system errors we have been confronted with. They have helped us understand what is possible, and explained in clear language how the technologies work. This has given us a sense of empowerment: we now have a way of gaining knowledge more quickly, which makes certain projects, hitherto inaccessible, realistic prospects. Similarly, we see colleagues using AI coding plugins in their professional work to carry out developments and increase their productivity.

But what if Daniel Keyes' work is in fact a cautionary tale for all technologies that make the promise of intelligence? By becoming reliant on the 'empowering' LLM models to carry out our work, we as individuals might stop doing the deep thinking, the mental equivalent of going to the gym, and consequently our own capacity for reasoning begins to decline even as we are lulled into a false sense of superiority. An investment in AI could then paradoxically lead an organisation or a society to experience collective stupidity if we simply become dependent on it.

But there is perhaps another, shorter-term lesson to be drawn: that the technology itself might not be sustainable. The costs of running the AI infrastructure owned by the hyperscalers, according to recent reports, might never be covered by their current revenues. The rollout of new resource-hungry hyperscale AI data centres in the US to meet demand has been hindered by limited access to electrical grid connections and supplies of water, with many now being cancelled. At the same time, component costs for equivalent RAM modules have increased by up to 400% over the last two years, according to a recent tracker by Tom's Hardware - to a large extent driven by the AI boom. As for GPUs, a decline reminiscent of the one Keyes describes in Algernon and Charlie seems to be happening to one of the most expensive AI components: Fortune magazine reported this month that GPUs which were supposed to be depreciated over 5 years have in reality a shorter life of 3 years, after which a negative return on investment is recorded.

Right now, it is hard to know whether OpenAI or Anthropic are truly profitable enterprises, as neither is (yet) a public company and their financial statements remain relatively opaque. Nonetheless, it seems probable that the price we as consumers are paying today is only a fraction of the true costs (financially, not to mention environmentally). In the last month, Anthropic has started to limit its consumers' access during peak hours, and there have been reports of degraded performance, poorer results and even outages as infrastructure capacity appears not to keep up with demand. If prices rise to match the true costs, will we necessarily follow, given that we don't really know what those costs will turn out to be?

For many of us, the AI promise that we have tasted over the last couple of years might be taken away. The human tragedy that Keyes imagined 60 years ago could now be playing out in real time.
