I, Robot, by Isaac Asimov
This is possibly the best known of Asimov’s titles, but the book is, in fact, a collection of nine short stories, published individually between 1940 & 1950 and bound together by a fictitious framing introduction; it is also one of five ‘robot’ books written by Asimov, and the epithet ‘seminal’ can surely and safely be ascribed to it within the science fiction genre. Younger readers might initially associate the title with the 2004 film of the same name, directed by Alex Proyas and starring Will Smith; from what I can remember (it is a few years since I watched it), the film bears little resemblance to Asimov’s original: the Wikipedia ‘blurb’ tells us that the original screenplay, Hardwired, was “suggested by Isaac Asimov’s 1950 short-story collection of the same name.” The underlying message of the film might not be too far removed from the original, however, because Asimov’s portmanteau collection essentially uses the technology of robotics as a vehicle for psychology, philosophy and, possibly, even morality: how much autonomy can we, and should we, give to machines endowed with a positronic brain (a term coined by Asimov, and now very well known in science fiction); and if we do give it, how far would we be able to trust them, in view of their likely superiority, both mental & physical?
Of course, AI (Artificial Intelligence: “founded as an academic discipline in 1955”, according to Wikipedia, so very much springing out of, if not necessarily inspired by, Asimov’s thinking) is now a very widely known, if not necessarily understood, concept, used in a plethora of applications, from internet search engines to what are now referred to as ‘smart’ devices. The worry, which some technologists are probably quite happy to dismiss as ‘conspiracy theory’, is that much of the work AI does goes on unseen, in the background, so it is virtually impossible to monitor its activity and its repercussions for society, especially where privacy & human rights are concerned. Perhaps these wider implications weren’t obvious to Asimov when he was writing the stories in the white heat of post-war American technological development, although it is pretty clear that he was aware of the dangers that intelligent, autonomous robots could present.
These creations, initially of mankind’s making but, before very long, self-reproducing, can be made to be beneficent (probably the best-known example being the android Data, from Star Trek: The Next Generation) just as easily as they can be made bellicose, as they would be if (or rather when) the military were allowed to dominate their development. The difference would be governed by the primary programming of the neural net (another name for the positronic brain), and it must be assumed that the military’s killing machines would not be given the fundamental & inescapable guidance of Asimov’s wonderfully precise & concise Three Laws of Robotics, “designed to protect humans from their robotic creations”: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence, as long as such protection does not conflict with the First or Second Law. Hence the clear & present danger, which would be obvious to all, including (but expediently ignored by) the military.
The stories share three main characters, the primary one being, to Asimov’s credit, a female ‘robopsychologist’, Dr. Susan Calvin; the other two are the engineers Gregory Powell and Mike Donovan, who have to deal ‘on the ground’ with different situations involving robots over the chronological course of the narrative. The book is structured as a series of interviews with Calvin by an unnamed journalist of that future era (he is only ever referred to by Calvin as “young man”: he is thirty-two), who is gathering background information on her for his “feature articles for Interplanetary Press”: he already “had her professional ‘vita’ in full detail.” The year is 2062, and over the course of the interviews Calvin gives the journalist her thoughts on her life to that point and sketches in the scenarios involving the main & supporting characters, which are then narrated in the third person, Calvin herself included.
There are many interesting aspects to this series. The first is the obviously, and occasionally (in our terms) comically, antiquated manifestation of future technology as it could be conceived in the late 1940s. Another is the way that everybody across this future society is quite comfortable with the anthropomorphising of robots, primarily through their nomenclature: “Dave”, from DV-5; “Cutie”, for the QT series; although the first robot mentioned, rather prosaically, has only a human name, Robbie, and ‘he’ cannot vocalise, having been “made and sold in 1996. Those were the days before extreme specialization [sic], so he was sold as a nursemaid…” Also, and somewhat depressingly for me, it is taken for granted that capitalism will still be operating in this technological future; but it doesn’t have to be so: there is at least one highly developed ‘alternative’ system, the Resource Based Economy, embodied in the work of Jacque Fresco and his collaborators in The Venus Project. It is difficult to pin down exactly when his work first achieved some prominence, but he was born in 1916 (and died in 2017!) and, according to the project’s website, “Fresco’s lifelong project stems from his firsthand experience of the Great Depression, which instilled in him the urge to reevaluate how many of the world’s systems work.” It is therefore possible that Asimov was aware of this concept; whether he chose to ignore it is a moot point.
The impression given by Dr. Calvin’s reminiscences, for all her obvious professional genius, is that she is distinctly ambivalent about the advisability of humanity’s inexorable & irrevocable reliance upon robots and AI; and her empathy, even though she can come across as occasionally cold & arrogant, is presumably the vehicle by which Asimov conveys his own reservations. Any tool, or weapon, has no impetus other than the autonomy bestowed upon it, so an inert tool is subject to whatever use a human being might put it to; it appears that Asimov wanted to warn us of the dangers of opening Pandora’s box. Thankfully, those concerns are now being addressed to some extent; but, inevitably, the secrecy associated with humanity’s protectionism, embodied by global military forces, means that wider society may have no inkling of how far the development of autonomous AI has progressed before it passes the point of no return. Perhaps the best we can do is hope, and work, for peace wherever possible. The paperback edition I read was published by HarperVoyager, London, in 2018, ISBN 978-0-00-827955-4.