
My first software engineering role, after graduating, was in 1979. I began working with a telecommunications company on stored program control systems for digital private branch exchanges (PBXs). This probably sounds very boring compared to what many of the world’s software engineers are currently working on in the rapidly growing and all-encompassing field of artificial intelligence. It did, however, provide me with a grounding in the software engineering profession and taught me many principles in developing real-time, mission-critical software.
A few years after starting in this role, in 1982, much of the computer press of the time began reporting on an initiative from Japan called ‘Fifth Generation Computing Systems’ (FGCS). These systems were seen as fifth-generation because they would utilise parallel computing techniques (thereby succeeding the previous four generations of vacuum tubes, transistors, integrated circuits and microprocessors). The project was also to establish a platform for artificial intelligence systems. The FGCS was both revolutionary and ahead of its time. Although largely seen as a failure today, the FGCS initiative made several contributions, particularly in the field of parallel processing software. It also showed that AI research need not be confined to academia but could benefit from collaboration between the public and private sectors. The AI that FGCS concentrated on was so-called ‘expert systems’, the great hope for this field of research at the time. Not much later, around 1987, however, the field entered what is now termed the second major ‘AI winter’, when interest in, and funding for, AI significantly declined because progress was not being made quickly enough.
Expert systems, AI winters and the importance of collaboration between the public and private sectors – not just in developing AI systems but also in addressing significant issues like the privacy and ethical impacts of AI – are where this important, newly translated book, Machines That Think, by Norwegian physicist and AI researcher Inga Strümke, shines. Strümke is known for her work in AI ethics, and it is in this area that her book excels and differentiates itself from many others. With the canon of AI books seemingly increasing at the same rate as the technology itself – many of them covering very similar ground – it is refreshing to find a book that brings a different perspective to the field by not just telling us how AI works, but also what it means for us.
Following a brief history, not just of AI but of computers in general – including mention of pioneers such as Kurt Gödel, Alan Turing and John von Neumann, as well as some of the early AI successes such as IBM’s chess-playing computer, Deep Blue, and Google’s Go-playing machine, AlphaGo – Strümke dives into the details of what intelligence is and what “making computers intelligent” means.
In his book Life 3.0, Max Tegmark defines intelligence as “the ability to accomplish complex goals”. He then goes on to say that intelligence is “substrate-independent” – it doesn’t matter whether intelligence runs on biological neurons (humans, animals) or silicon chips (computers). Strümke, however, neatly avoids falling into the philosophical rabbit hole of defining intelligence by saying that it is “an intuition of what you consider intelligence to be”. In other words, it’s hard to define, but you know it when you see it. So, how do we make machines intelligent? By telling them exactly what to do. But how do we do that?
Strümke explains that there are fundamentally two approaches to making machines that are intelligent, but these are so different that they form “a deep philosophical divide within the academic discipline”. Symbolic AI is an approach to artificial intelligence that involves defining symbols for a machine and creating explicit rules for how that machine should process those symbols. This is the approach that expert systems use, where the rules are based on human expertise and are created by interviewing specialists and encoding their knowledge into rules. Two problems exist with this approach. The first is that it is not always possible to explain clearly what the expert knows. The second is that when things change, for example, new knowledge is discovered, then new rules must be written, or old ones changed. In other words, expert systems are a hassle to keep up to date. They are incapable of learning new knowledge themselves. This brings us to the second approach for making machines intelligent, subsymbolic AI.
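The rule-based approach Strümke describes can be sketched in a few lines. The rules and symptoms below are invented purely for illustration – a real expert system would encode hundreds of rules painstakingly elicited from specialists – but the sketch shows both the appeal of the approach and its weakness: any new knowledge means a human must write a new rule.

```python
# A minimal sketch of symbolic AI: an "expert system" whose knowledge
# lives entirely in explicit, human-authored rules. All rules and
# symptom names here are hypothetical, for illustration only.

def diagnose(symptoms: set[str]) -> str:
    """Apply hand-written rules to a set of symptom symbols."""
    if {"fever", "cough"} <= symptoms:
        return "possible flu"
    if "rash" in symptoms:
        return "possible allergy"
    # The system cannot learn: unmatched cases need a human to add rules.
    return "no rule matched"

print(diagnose({"fever", "cough"}))  # possible flu
print(diagnose({"headache"}))        # no rule matched
```

The brittleness is visible in the final branch: the system has no way to generalise beyond what its authors anticipated.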
Subsymbolic AI enables intelligent behaviour by simulating biological systems using artificial neural networks without the need for explicit rules or algorithms. Neural networks are a form of machine learning that receive data, perform their computations, and output a prediction. They have been around since the early 1950s but were revived in the 1980s by, amongst others, Geoffrey Hinton and Yann LeCun. Hinton, known as “the godfather of AI”, was awarded the Nobel Prize in Physics in 2024 for his contributions to machine learning using artificial neural networks. It is this form of artificial intelligence that underpins the large language models (LLMs) being built by OpenAI (ChatGPT), Anthropic (Claude) and Google (Gemini).
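The contrast with the symbolic approach is easiest to see in code. Below is a minimal sketch of a single artificial neuron – the building block of the networks Strümke describes – with the weights fixed by hand purely for illustration; in a real network they would be learned from data rather than written by a human.

```python
# A minimal sketch of subsymbolic AI: a single neuron computes a weighted
# sum of its inputs and squashes it through a sigmoid activation.
# There are no rules here, only numbers - which is exactly why such
# models are hard for humans to interpret.
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Illustrative weights only; training would adjust these automatically.
prediction = neuron([0.5, 0.8], weights=[1.2, -0.7], bias=0.1)
print(round(prediction, 3))  # a value between 0 and 1
```

Stacking thousands of such neurons in layers, and adjusting their weights from data, is what produces the behaviour of modern networks – and also what makes their reasoning opaque, a point the book returns to later.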
Neural networks are only as good as the data they ingest during training, and require data in sufficient quantities with the appropriate relevance to the problem they need to solve. Strümke spends some time explaining the importance of good data distribution (and what can go wrong if this is not done, when bias can creep in), as well as how supervised and unsupervised learning work. However, as she points out, the most significant advances in artificial intelligence haven’t come from putting human-generated data into a machine or developing clever algorithms, but from amassing enough computing power to enable large-scale search and learning.
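Supervised learning, as Strümke describes it, shapes a model entirely from labelled examples – which is also why skewed training data produces a skewed model. A toy nearest-mean classifier, with invented data, makes the dependence on the training set concrete:

```python
# A minimal sketch of supervised learning: the model is nothing but a
# summary of its labelled training data. All data here is invented
# for illustration.

def train(examples):
    """Learn one mean value per label from (value, label) pairs."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, value):
    """Assign the label whose learned mean is closest to the input."""
    return min(model, key=lambda label: abs(model[label] - value))

model = train([(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")])
print(predict(model, 1.5))  # low
print(predict(model, 8.5))  # high
```

If the training examples under-represent one group, the learned means shift and the predictions shift with them – a toy version of the bias problem the book warns about.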
Having provided us with some of the history of AI and how we arrived at today’s LLMs, part II of Machines That Think delves more deeply into how neural networks actually work – and what happens when they don’t. It examines some of the most recent advances, including computer vision and convolutional networks (which were inspired by how the brain’s visual cortex processes information), as well as how AI is being applied and the related privacy and ethical concerns these uses raise.
One of the more concerning areas she focuses on is the use of LLMs in the education sector, in particular when students use ChatGPT to write their essays. Not only is this plain and simple cheating, but it is also likely to weaken the students’ ability to express themselves. The goal of assigning an essay to a student is not just to produce as many essays as possible, but to practice research, analysis and enunciation of ideas. As the author says, “What is the goal of assigning students an essay? The purpose is not to produce as many essays as possible, but for young people to practice gathering information and expressing themselves. If a student uses ChatGPT instead of struggling through an essay on their own, their use of technology means that the core purpose of the essay is not achieved. The student’s ability to express themselves ends up weaker than it would have been without the ‘aid’ of technology.” The danger of an over-reliance on tools like ChatGPT in weakening our ability to express ourselves, as well as the danger of reducing our critical thinking skills, is, I believe, one of the big issues we face today with the use of AI, and it is good to see authors like Strümke facing these issues head-on. This, as well as other examples which highlight such issues, is one of the strengths of Machines That Think.
Strümke’s area of research is explainable artificial intelligence (XAI). Somewhat unfortunately, this abbreviation is easily confused with the name of Elon Musk’s AI firm, xAI; he seems to be on a mission to capitalise on the letter ‘X’ for all his companies. As AI becomes more advanced, humans are increasingly challenged to understand how an algorithm arrived at a decision. The whole process is a black box that resists interpretation. As Strümke points out, “machine learning models don’t describe the world in a way that humans can naturally understand. They’re made up of numbers and mathematical operations we can read, but that don’t reveal the insights the numbers encode”. This is where XAI comes in. It is used to verify an AI model’s biases, accuracy, fairness, transparency and outcomes. It never ceases to amaze me how humans seem so intent on building technologies that we are unable to control or even understand that we have to spin off a whole new area of research to do so! As Strümke points out, “the need to know what machine learning models have understood about us is urgent if we want to protect our ability to make free choices”.
When it comes to the ethical and privacy issues that AI creates, the eternal problem always seems to be that of technology developing more rapidly than laws can be written or enacted. Furthermore, some uses of AI that are perfectly justified and of value to society may end up being inhibited if such laws are not written carefully, meaning we miss out on the benefits that could accrue. As an example, Strümke cites the use of AI to detect diseases earlier and make more accurate diagnoses. These efforts could be seriously impaired if, for example, privacy laws prevent certain sharing of such medical data. Facing this problem, the EU, when drafting its General Data Protection Regulation (GDPR), decided to write deliberately broad rules in which the intention is clear, but which must be interpreted for each new context in which they may apply. This means it is the responsibility of those who interpret such laws to apply them carefully and try to understand the given context. It is this kind of interesting and insightful perspective from Strümke that makes this book so valuable for someone like me who is not a lawyer but has had to interpret GDPR and been frustrated at its vagueness.
GDPR also has an interesting connection to XAI. In an effort to protect the privacy of EU citizens, the GDPR grants them a right to explanation: anyone who processes personal data is obliged to provide the person to whom the data belongs with an explanation of the logic involved. Unless XAI can come up with a way of doing this (and it is still currently an “open research question”), then a lot of big (mainly American) corporations are going to fall foul of GDPR if they want to sell their wares in the EU.
Machines That Think examines several other ethical concerns of AI, including autonomy and control, who decides when something goes wrong, and the “collective action problem” of AI development, i.e. what do its developers do if they become concerned about aspects of the technology they are working on. Some of these problems have been discussed in books like Sarah Wynn-Williams’ Careless People (about Facebook/Meta) and Karen Hao’s Empire of AI (about OpenAI), but Strümke has an interesting and engaging way of explaining issues and often provides new insights.
In the final part of Machines That Think, the author examines artificial general intelligence (AGI), or superintelligence, and asks what will happen the day machines achieve intelligence comparable to our own. Currently, there is an arms race between LLM providers, who must acquire more data and build bigger data centres if they are to compete and reach the singularity, the point at which machine intelligence will exceed that of humans. There is a danger that as machines become more intelligent, we will lose control of them, as they will focus on solving the problems assigned to them, potentially ignoring the cost to anyone or anything else. The fact that artificial intelligence is being developed in a commercial market where short-term goals may conflict with long-term benefits is a huge challenge. With AI, we are releasing onto the world an incredibly powerful technology whose consequences we can barely comprehend, let alone legislate for. As Strümke quite rightly asks, “How could AI researchers [before us] allow the world to become so polluted by AI-generated content?”
Machines That Think makes an important contribution to the discussion of ethics and privacy in the field of AI. Rather than discussing these topics in an abstract way, it addresses the issues through real-world problems and highlights what is being done, as well as what needs to be done, to address some of the looming problems AI is unleashing on the world. The book is a good primer, not just on how current AI works; it also provides an essential discussion of how humans should deal with this technology if we are to coexist with it rather than let it rule our lives. Anyone who cares about their children’s and grandchildren’s future should read this book.
