I have divided this essay into two major parts. The first unpacks my immediate thoughts on AI. This part has my mental model for thinking about AI and some links to what I find to be valuable articles to better understand it. The second part outlines four themes that have taken up a lot of my bandwidth in the past year. They all build on the premise that we will have AI powerful enough to do significantly more than what we’re able to do with AI today. For my writing to make sense, I think you’ll have to accept that premise.
Here goes…
Struck by lightning
I’m awestruck by what’s been happening in artificial intelligence in the past year or so. As is pretty much everyone I talk to. If you care enough about AI to be reading this, your head is probably also spinning a little.
I was lucky enough to catch a whiff of what was coming before it all exploded, but despite that little pre-warning - or maybe because of it - the past months have been dizzying.
As I’m finishing this essay roughly one year after my first real encounter with generative AI, I thought that would be a good time and place to start.
During the summer of 2022 I spent way too much time playing around with apps built on top of OpenAI’s GPT-3 model. Tools like Jasper.ai, Copy.ai, and others. It was pretty cool. You’d write a few sentences of a narrative (a novel, a blog post, whatever), and the AI would supply a few more. Then you’d take turns. The content you’d create was all right but definitely not great. It was fun but also kind of useless.
I remember thinking that maybe this could help scale content creation, and I spent a few days deep-diving into how to use it. There was potential and I spoke to a few people running content production agencies/studios about it. My girlfriend and I bought a domain for an AI-based content production studio and mocked up a website for it. It was a lot of fun.
As summer - and consequently my vacation - came to an end, I put it aside and went back to the day-to-day of running Responsive.
Then on November 30, ChatGPT launched…
I know enough to know that I barely know anything
From 2011 and a few years on, I founded and led a company with a proprietary machine learning model at the core of its business. It failed miserably, but I did learn a little about AI and machine learning. Since then, I’ve been part of developing and applying various advanced analytics / machine learning / artificial intelligence models in projects for our clients at Responsive. I’ve won industry awards for novel work within artificial intelligence. In other words, I have spent part of my life for more than a decade thinking about artificial intelligence. Thinking about what it can do, how it might impact businesses and society, and how we work responsibly with it.
From 2019 and a couple of years on, I ran a series of workshops with clients’ marketing teams and management teams on ethics in AI, corporate policies for developing and deploying AI, and doing scenario planning for AI development as part of the digital transformation projects we run at Responsive.
In the past twelve months alone, I have spent close to a thousand hours listening to podcasts on AI. I’ve read thousands of pages on the topic. I’ve talked about this for hours and hours on end with nearly all the smart people I know. I think I have a good conceptual idea of how AI works, both generative AI and AI overall. But I’m also keenly aware that there are many things I don’t understand.
I know this because the field of AI is immensely complex and deeply complicated. Even the people building it don’t know exactly how it works or why it does what it does. As in: no idea. So I’m pretty sure that I know a lot less.
Consequently, I really try to stay humble, curious, and cautious when I voice an opinion on the topic.
Every time you see a post or an article from someone sharing a confident take on what AI is, what’s next, where it’s going, where it’ll end, and what it means for businesses, society and humanity, please take it with a grain of salt.
As I’m writing this essay over the course of weeks, not hours, new information keeps coming up. And I find myself regularly updating throughout my writing process.
Recently, news broke that in June, ChatGPT’s user numbers dropped for the first time since it launched. This could be due to any number of really good, underlying reasons. Intuitively, I’m thinking there are (a) fewer “tourists” checking it out, trying it for a few days, (b) a realisation from the well-intended but not deeply committed users that it takes a lot of effort and skill to get decent output, and (c) more API use by serious users, which is where I’m personally gravitating - admittedly via my coding-capable colleagues.
A total non sequitur - later in the article, you’ll find a link to Tim Urban’s absolutely brilliant Wait But Why two-part post from 2015 on artificial intelligence. I don’t want you to miss it just because it’s buried deep in the article, so I figured I should also leave it here. It’s almost ten years old, but it’s still really good. I’ve found myself re-reading it every year or so since it came out.
A mental model for (generative) AI
I’ve struggled to build a robust mental model for thinking about generative AI. I’m leaning towards applying different mental models for different aspects of thinking, and maybe you want to borrow some of the homework I’ve done on this.
First and foremost, if I want to try and understand the inner workings of e.g., ChatGPT, I think the autocomplete metaphor is useful (albeit obviously not perfect).
So, what’s the autocomplete metaphor? Based on what it has read on the internet, the model tries to guess what a reasonable next word would be in the sequence you’ve begun. It does not answer your question correctly per se; it tries to guess what answer you’ll be happy with. So it essentially provides the “autocomplete” that is most likely to match the way similar sentences were completed across the internet. This also helps explain some of the lack of originality in tasks requiring creativity.
But as we all know, the internet is full of pretty weird stuff, so sometimes ChatGPT can be pretty weird. And it’s also been designed to try and make you happy, so it’ll adjust its answers to comply.
A good example is math tasks. If you ask it which number is greater, 6 or 8, it’ll say 8. If you then tell it that your high school math teacher insists that 6 is greater than 8, it’ll concede that 6 is probably greater than 8.
But also - somewhere on the internet some guy probably wrote something that to ChatGPT sounded like 6 was greater than 8. So based on the training data, there’s a tiny likelihood that 6 is greater than 8. And sometimes it’ll throw out that low-probability answer by mistake (or by design: if OpenAI uses user interactions for reinforcement learning - which I’d imagine they do - it’s valuable to test the low-probability answers every now and again to get feedback that goes back into the model training). Further, its default temperature setting is high, to make the interactions more compelling and entertaining.
This seems to be mind-boggling to most people. But ChatGPT doesn’t do calculations using the numbers 6 and 8 to determine which is greater, even though such calculations are simple. It is trying to guess “what’s a reasonably satisfying continuation of this sequence of words?”.
The same goes for tools like DALL-E, Midjourney, StableDiffusion. As it’s visual, it’s trying to guess a reasonably satisfying continuation of pixels. But the underlying approach is the same. It treats pixels as words and tries to continue the sequence.
Please note that I’m not in the camp of ridiculing generative AI as a glorified autocomplete. But I think that it’s a useful *mental model* for thinking about generative AI; that it's trying to guess at a satisfactory answer, not a factually correct one.
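To make the autocomplete mental model a bit more concrete, here is a minimal, hypothetical sketch of temperature-scaled sampling over next-word scores. The vocabulary and scores are invented purely for illustration; real models score tens of thousands of tokens using billions of parameters, but the principle - picking a likely continuation rather than computing an answer - is the same.

```python
import math
import random

# A toy "language model": hypothetical scores for the next word after the
# prompt "Which is greater, 6 or 8? The answer is". The numbers are made up
# for illustration and are not taken from any real model.
next_word_scores = {"8": 5.0, "6": 1.0, "neither": 0.2}

def sample_next_word(scores, temperature=1.0):
    """Turn raw scores into probabilities (softmax) and sample one word.

    Low temperature: almost always pick the top-scoring word.
    High temperature: occasionally pick unlikely words, which makes the
    output more varied (and sometimes wrong).
    """
    scaled = {word: math.exp(score / temperature) for word, score in scores.items()}
    total = sum(scaled.values())
    r = random.random()
    cumulative = 0.0
    for word, value in scaled.items():
        cumulative += value / total
        if r < cumulative:
            return word
    return word  # fallback for floating-point rounding

print(sample_next_word(next_word_scores, temperature=0.2))  # nearly always "8"
print(sample_next_word(next_word_scores, temperature=2.0))  # sometimes "6"
```

Nothing in that sketch checks whether 8 actually is greater than 6; it only weighs which continuation looks most plausible, which is the point of the metaphor.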
For a more detailed, yet approachable rundown on how large language models work, I recommend this short-ish write-up by Stephen Wolfram. I’ve read a few of this sort, and this is my favourite because it strikes a good balance between thoroughness and approachability and only takes 1-2 hours to read. Less if you’re familiar with math.
Other mental models are useful for understanding other aspects of thinking about AI. If you want to think about where AI is going, the phases and stages that other significant technologies have gone through are useful (yes, that implies that I don’t find AI to be substantially different in that respect, even if it’s not completely copy/paste). If you want to think about the speed at which it’s developing, I think pandemics are well suited. If you want to think about how to govern AI at a global scale to avoid the destruction of humanity, nuclear weapons are a decent starting point (for thinking about it, not necessarily for solving it).
It’s very exciting. And quite unsettling.
It’s an understatement to say that I’m deeply ambivalent about what we’re witnessing right now.
On the one hand, this could be breakthroughs the size and impact of the industrial revolution. Something that will propel humanity towards prosperity, something that could help solve the climate crisis and put an end to cancer, something that could improve quality of life for billions of people.
All the attention is currently directed towards ChatGPT, and OpenAI has clearly been the most publicly noticeable company in the recent development. Its CEO, Sam Altman, has admirably done a number of interviews. OpenAI should be part of your AI curriculum, so I’ll suggest a few interviews worth your time:
My absolute favourite general interviewer is Ezra Klein, and The Ezra Klein Show is my most listened-to podcast, so I’ll recommend his two interviews with Sam Altman. The first one is from 2021, and the second one from 2022. Kara Swisher is another favourite of mine when it comes to tech interviews, and her conversation with Sam Altman does not disappoint.
But while OpenAI launched a fascinating chatbot, Google DeepMind built AlphaFold. The protein folding problem had been around for some 50 years with little progress towards solving it, despite it being foundational to developing vaccines and curing genetic diseases. Deep learning applied to the protein folding problem surpassed the abilities of humans and conventional computational techniques, and in two years gave science the breakthrough that 50 years of effort hadn’t come close to. Please also listen to Ezra Klein interviewing Demis Hassabis of DeepMind. It’s an amazing interview with an astonishing human, hell-bent on solving some of the most valuable challenges facing humanity.
At the beginning of this essay, I warned against overly confident predictions about current capabilities and particularly about future developments of AI. Notice how Demis Hassabis’s language changes when he talks about algo-trading, which is not explicitly his area of expertise. His sentences take the form of “as I understand it …”, clearly signalling that we are now outside what he considers his domain of expertise. And he is open to Ezra’s viewpoints, arguments, and evidence. This is a man with a PhD in neuroscience and a number of the world’s most important breakthroughs within AI under his belt. He is arguably one of the most knowledgeable humans in the world when it comes to AI, and he is *very* careful how he talks about it.
The prospect of what this kind of technology can potentially do is extremely exciting. The value leap from better chatbots to curing cancer. From doing funny advertisements to solving the climate crisis. I think it's important to understand that this is a key driver for many of the incredibly intelligent people working on AI.
On the other hand, this could be the beginning of the end for humanity in any number of dystopian ways. The latter scenario is the one currently doing the rounds on various media outlets, including this one. And it’s certainly the one that keeps me up at night. If for no other reason than that the probability is greater than zero, and the impact is ultimate.
If you want a brief and somewhat nerdy TED Talk primer, watch these ten very depressing minutes of Eliezer Yudkowsky telling you how we’re all doomed.
Tristan Harris of the Center for Humane Technology is arguably the most important voice in the debate about AI, especially following the viral video of his talk, The A.I. Dilemma, which he gave with his cofounder, Aza Raskin. I link to a few interviews with Tristan later in the essay, and I strongly urge you to listen to them. Start by watching The A.I. Dilemma on the off chance you haven't yet.
Four things I’ve been thinking about
I’ve thought about more than only these four things. But these are the ones I’ve probably thought about the most.
- Will AI end humanity?
- How fast will AI development unfold?
- Will we all be poor and unemployed?
- Will we end up emotionally, spiritually, and intellectually numb?
I don’t have the answers to these questions. I have reflections, thoughts, hypotheses. I have mind-crippling fear, I have heart-warming optimism. At the end of the day (this day at least) we don’t know, because we can’t know.
Nonetheless, I genuinely believe that these are questions worth thinking about.
#1: Will AI end humanity? (aka Will Skynet Kill Us?)
The existential threat of AI is arguably *the* question to ask and debate.
Famously, a 2022 survey amongst the world’s leading AI researchers showed that half the respondents estimated the probability of humans’ inability to control AI causing the extinction of humanity to be 10% or higher. That’s half the people who know the most about this stuff saying that there’s at least a 10% probability that it will end up killing us all. Importantly, they’re not saying AI will turn evil and kill us all. They’re saying that an extremely complex piece of software with immense capabilities is likely to spin out of our control with grave consequences.
It’s been referenced widely, most notably by Tristan Harris and Aza Raskin in their must-watch talk, The A.I. Dilemma. Once you’ve watched this, you should also listen to Kara Swisher’s enlightening and unnerving conversation with Tristan Harris.
It’s worth pointing to a seemingly overlooked aspect. The AI experts surveyed are experts in the granular details of AI. They are, however, not experts in any of the following: humans, governance, geo-politics, legislation, psychology, anthropology, or any of the many other fields of expertise that play a role in determining the future impact of AI on the society surrounding it.
The always delightful Ian Leslie has a Substack called The Ruffian. In a post called Seven Varieties of Stupidity, one variety described is the “Fish-out-of-water” stupidity. It’s the illusion that the high intelligence that has helped yield one’s expertise in one domain (in this case AI) leads to exceptionally insightful thoughts in every other domain.
In this case, I would argue that the stupidity stems from readers involuntarily projecting that kind of cross-domain intelligence onto the AI experts. The ability to predict what will happen to humanity based on the projected development of AI hinges on at least two strands of understanding: (a) what the technology could potentially do, and (b) what it would require to mitigate that risk across every single relevant domain. They’re experts in (a) but largely clueless about (b). There’s inarguably a risk, but I would challenge the notion that the people surveyed have a holistic view of the nature and size of said risk.
As with everything AI, we should take things very seriously while applying critical thinking.
My hypothesis: unexpected chaos will precede Skynet - and potentially save us
Intuitively, the apocalyptic scenario builds on the premise that AI will achieve superintelligence. Some would think that it also builds on the premise that the superintelligence will become sentient. I won’t go deep into the question of whether superintelligence will happen. I think it’s possible it could, and I’ll share my thoughts on this when discussing the “How fast will it all happen?” question.
It seems obvious that an AI doesn’t need superintelligence to destroy humanity. It simply needs access, advanced capabilities (not omnipotent intelligence) and a large, misaligned objective - but not necessarily malignancy. As an example, think of a super-villain with access to a powerful AI. He instructs the AI to make him the most powerful person in the world (obviously not having thought this through entirely), and the AI responds by hacking government facilities around the world, messing with key infrastructure and handing its master the new keys to all of these systems.
Again - it’ll be vastly more complex and complicated than this, but the example is meant to illustrate that havoc can be wreaked upon the world without an ill-intended, superintelligent AI. Immense harm can be caused without a sentient AI, as its capabilities as a tool for evil would be plenty powerful.
As I wrote earlier, I find that the best (though certainly not perfect) mental model is the comparison to nuclear weapons and how the global community has handled them. I find it useful for two reasons: (a) it is a recent technology with the potential to annihilate humanity if not controlled/contained, and (b) it gives us a model for understanding some of the warnings along the way and the global community’s willingness and ability to act. Again, it’s certainly not one-to-one comparable, but it’s useful for thinking about similarities and differences. However, I don’t think that Oppenheimer is very useful in this context.
I believe that part of the reason humanity did not end in a nuclear apocalypse was the scares we got along the way. They range from Niels Bohr warning Churchill and Roosevelt about the dangers of the nuclear bomb, through the nuclear reactor meltdown at Chernobyl, all the way to the US actually detonating nuclear bombs over Hiroshima and Nagasaki.
There were many more warning signs, but suffice it to say that these were some of the milestones. Collectively they gave world leaders the very real sense of looking into the abyss and ushered in responsible actions. In The A.I. Dilemma, Harris and Raskin mention the movie The Day After as an example of the focus and responsibility that characterised parts of the global community during the Cold War.
Since then, we’ve had international collaboration to ensure that the spread of nuclear weapons has been very limited. I believe that (a) we will have warnings, and (b) we have willingness to try and govern AI.
As for the first point, it seems unlikely that we will jump straight from ChatGPT to Skynet. Already today, we’re seeing AI-powered tools that will distort our perception of reality. I could elaborate extensively on this, but I found that this tweet by Dan Schwarz does a magnificent job of explaining exactly what I mean. Referencing Tristan Harris again, I’d recommend Steven Bartlett’s interview with Tristan in which he lays out his case for short-term chaos and the impact it will have. It will send chills down your spine, but it's informative and certainly worthwhile.
We’re likely to see elderly people being tricked out of their pension savings, companies falling victim to more sophisticated ransomware at a much larger scale, governments being hacked, democracies destabilised by false information, cities being blacked out, power grids being shut down, trading algorithms bankrupting large companies, etc.
The internet as a source of information will essentially be useless, because of the amount of undetectably false information. The depth and scale of propaganda from all sides will render a democratic election unrealistic, as virtually no one would trust the results. Imagine riots, coups, civil wars.
All of this will precede the Skynet-like apocalypse. And while it’s likely that no single event will trigger global governance action, the sum of them might.
I am fairly confident that governments will be willing to act. And I believe it is possible to agree on a global rule set. It will be complex, it will be risky, it will be error-prone, it will be complicated. But it can be done, as history shows. And that's why I'm optimistic that we will find solutions while the challenge is relatively manageable.
An optimistic view on the threat of artificial intelligence
My optimism around, to put it crudely, the survival of the human race grows when seen through a historical lens - when thinking about recent history and our response the last time a large-scale threat seemed imminent. For example, think about the global society’s response to the Covid pandemic. My optimism comes from seeing how swiftly politicians acted, and how willing people across almost all countries were to support the measures taken. In a situation perceived as life threatening, the willingness to endure inconvenience was massive. The willingness to collectively take responsibility for the situation was rather heart-warming to me. I fully respect the negative readings of that situation (massive overreach from governments, neglect of principles of freedom, disappointment at citizens acting like lemmings, ineffective measures implemented, etc., not to mention outright appalling behaviour by governments in certain situations), but I was uplifted by the parts of the experience that saw decisive action by politicians and near unequivocal, global support by the people. It also seemed to me that there was a sense of global solidarity, which will again be required if we are to rally around a coordinated global effort to combat something about to spin out of control.
The things that make me hopeful are the things that are the most humane about us as a species. If we – as I suspect – find ourselves in a situation where AI is causing varying degrees of chaos, our salvation will be our humanity. It will not be our ability to rationally develop software smarter than the AI at a pace faster than AI development. It will not be impenetrable cyber security technology, nor will it be robots that are stronger and smarter. It will be our willingness to unite behind a cause. While the means to control the development of artificial intelligence will certainly contain elements of ingenious technology, what will enable and empower them will be deep-felt emotions of unity.
During the pandemic, it was evident that humans do not flourish when we live our lives in isolation and our interactions with each other are mainly online. This ought to be obvious to everyone, but over the past decade or two we have been led to forget some of the fundamental truths about humans and mankind. The internet as a whole, social media amplified by the proliferation of mobile devices, the cult of tech capitalism originating in Silicon Valley - they’ve all played a part in convincing us that connecting online can substitute for real-life interactions. Zoom and Miro to replace discussions and joint ideation, parties on Clubhouse to replace the Friday bar, Slack to replace watercooler chats. The absurdity of believing that humans would flourish in such a reality is glaringly obvious in hindsight.
A disproportionately large share of our lives in the last decade has been defined by a relatively small group of people who know a lot about computer science yet know very little about humans. A small group of people with a complete lack of human empathy and understanding for concepts like beauty, happiness, connectedness.
These are the people who made the “shape rotators vs. wordcels” meme, condescendingly reducing authors, artists, philosophers, historians to irrelevant second-rate intellects, feeding off the prosperity brought about by the so-called shape rotators.
I always find it interesting to observe the difference between two of the internet’s successes, Facebook and Airbnb.
The former was built on the premise of degrading female college students online and removing its users from the real world and all of the friction and mess that comes with it. The underlying worldview has stayed with the company in everything it has built and bought since, on its path to becoming one of the world’s most valuable but deeply horrible companies.
The latter was built on the premise of opening your home to a stranger to sleep on your couch. It has embraced the chaos and imperfection that is human interaction. It keeps building on the worldview that while some people are assholes, most are decent. Its leadership actively engages in efforts to help facilitate housing for people in need when wars break out or when natural disasters strike.
It's a remarkable difference in perspective, and I think we owe it to ourselves to evaluate companies along more axes than their ability to increase their stock price.
Why is it important to discuss the pandemic or distinguish between Facebook and Airbnb when the essay was supposed to be about AI?
Well, I think it’s analogous to our relationship with artificial intelligence. I think that humans on average have a good instinct that human interactions are emotionally superior to machine interactions. That real life interactions are superior to online interactions.
Humans don’t want to hang out in the metaverse, they want the feeling of the sun on their skin, they want the smell of the sea, they want to marvel at the wonder of a Botticelli, they want to be in the stadium and hug a stranger when their team scores.
This core of being human is deeply embedded in us, and it will be the bedrock of support for action to govern artificial intelligence. We will intuitively recognise that even if AI does not necessarily constitute the nihilistic threat generally imagined, it needs regulation. And the response to the warning signals I described earlier will be a broad call for regulation, and overwhelming support that spans countries and regions and creates the geopolitical room to manoeuvre that legislators need.
The major question is if governments - and humanity - can act with sufficient swiftness and decisiveness. Which conveniently segues into my second question, “How fast will it all happen?”
#2: How fast will it all happen?
When I described experiencing the GPT model’s leap forward between July and November 2022, an implicit point was the speed with which improvements had happened. From interesting but essentially useless to real-life applicable in four months. That’s really fast. And yes, I completely accept that ChatGPT cannot and should not be used for everything, but even in the November release it could make a real difference in many people’s everyday work life. For me personally, it felt like having something with the quality of a semi-good student helper, working at lightning speed, at my beck and call.
During the Covid-19 pandemic, we all got a live demonstration of exponentiality. And more relevant to my point: humans’ innate inability to mentally grapple with the concept of exponentiality. In tech circles there is more familiarity with the concept (hockey stick growth and all), which I think puts them at an advantage when thinking about how and how quickly things might develop. It might also be the reason this community has been the one to sound the alarm - simply because they’re far more trained in extrapolating exponential curves.
Tim Urban’s wonderful essay describes this in detail and far better than I could ever dream of, so I urge you to read it. For this point, scroll about 3/4 through part one and watch the proverbial Lake Michigan be empty for nearly 80 years only to flood in 6 years.
Or even more to the point, the comparison of different levels of intelligence a bit further down. I think of this illustration (“haha, that’s adorable, the funny robot can do monkey tricks”) every time someone dismisses generative AI by pointing to the current limitations of its capabilities. What it can do today is not the point. It’s as far from the point as saying that a 7-year-old who currently reads Plato will never understand philosophy because she hasn’t got a very good grasp on Kant. It’s obviously a silly take, yet it’s what happens all the time with AI.
As Tim Urban outlines in a thought experiment elsewhere in his essay: “It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI (artificial superintelligence, ed.), 170,000 times more intelligent than a human.”
It probably won’t happen as rapidly, but the point remains. On an exponential curve, the distance - measured in time - between what LLMs are currently capable of and capabilities that surpass those of any human being within a given discipline is very short. Given exponentiality, it will happen way sooner than we intuitively expect.
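To see why our intuition fails here, consider a toy sketch - not a model of actual AI progress, just an assumed capability that doubles every period, measured against an arbitrary target level:

```python
# Toy illustration of exponential growth fooling intuition.
# Assumption: a capability that doubles every period, starting from a tiny base.
capability = 1.0
target = 1_000_000.0  # arbitrary target level

for period in range(1, 21):
    capability *= 2
    print(f"period {period:2d}: {capability / target:8.2%} of the target")

# For the first 13 periods the capability still looks negligible (below 1% of
# the target); it then blows past the target within the last few periods.
```

For most of the run there is seemingly nothing to worry about, and then the curve covers almost all of the remaining distance at the very end - the same dynamic as Tim Urban’s Lake Michigan example.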
AlphaZero was given a virtual chessboard and the rules of the game. Then it started to play against itself to learn and master the game. Within four hours, it was able to beat every living human and every other chess AI ever designed. And given an additional 20 hours, the same AI - with no context or pre-learning - reached a superhuman level of play in Go, easily beating its world-famous "cousin", AlphaGo.
Machines are already vastly more intelligent and capable than humans in narrow-yet-complex tasks.
Does this mean we should think of all this as inevitable? Maybe.
We all know Moore’s Law, and what’s currently unlocking AI’s exponential development seems chiefly to be the availability of computing power. Yet some argue that Moore’s Law is dead, which could slow down or even halt development.
Currently, only a few global actors are participating in the AI arms race. They’re mostly or exclusively very large and wealthy for-profit technology companies (OpenAI/Microsoft, Google, Meta, Amazon, and a few more). It takes A LOT of computing power - and thus a lot of money - to develop these models, which is why only a few companies have the resources to do it.
For reference, the Paris-based startup Mistral raised €105m earlier this summer. The reason for the huge amount raised was essentially the capital requirements for the computational power to train their proprietary AI models. It simply costs a ton of money to rent computers big enough to train the models. I’ve seen reasonable people in my Twitter stream argue that the amount raised will afford Mistral two or three rounds of model training. So little wonder that OpenAI ended up pivoting from its original and noble non-profit structure as the reality of capital requirements dawned upon them.
However, as recently as April, Sam Altman, the CEO of OpenAI, said that the advances that have given us ChatGPT, namely the sheer size of the large language models, may have exhausted their value and that future breakthroughs would require new approaches. This went a little under the radar, probably because the predominant narrative is the hype train going at full speed. The subtle point does, however, make sense in the context of news slowly leaking over the summer, summarised in this Substack post by Alberto Romero.
On one hand, these factors pull in the direction of less rapid development. Others pull the other way: for instance, Nvidia’s stock price is exploding on the promise of its GPUs enabling LLM development far more efficiently than CPU-intensive servers, thereby making compute cheaper and available at greater scale for the likes of Mistral, who have yet to raise $10bn like OpenAI.
SO - lots of conflicting information. I feel pretty confident that development will continue at a very high pace. What that means exactly, I don’t know. But if forced to venture a guess whether development speed of AI will follow an exponential curve, a curve of diminishing returns or an S-curve, I’ll hedge my bets and go with the S-curve. It seems returns on development effort may be slowing down temporarily, but I cannot think of a substantial argument that the slowdown is significant or permanent. I think we will see another acceleration soon.
But this is essentially an uneducated guess. I think this is the one question that requires first-hand knowledge of the actual development inside OpenAI, Google, Meta.
#3 - Will we all be poor and unemployed?
Assuming an almighty AI with God-like capabilities doesn’t kill us, what will our future look like? Will a few hundred people collaborating with and controlling the AI be the only ones benefiting? Will the rest of us be reduced to a life of poverty and pointlessness?
My mental model for thinking about this is various iterations of technological change, from the Industrial Revolution to smaller but significant events following the emergence of the internet.
Has there been a net loss of jobs following the introduction of technology? Which jobs and workers have been affected? Have new jobs emerged in the wake of the change?
There are two schools of thought around this topic: (1) This will be like other industrial and technological forward leaps vs. (2) This will be different altogether, because <insert a variety of plausible and/or less plausible hypotheses>.
The second school of thought by definition is more speculative, fragmented, and consequently harder to outline as a consistent argument. I would say that it really requires you to consider any given argument individually and on its own merits.
Within the first school of thought there is a good amount of empirical evidence to support the hypotheses. It has literally been studied by economists for decades. In short, the argument goes:
1) New technology enters and does work previously done by humans
2) Human jobs are destroyed - but…
3.1) Companies want more of what the technology did and will employ humans to maximise the output of the technology (new jobs created by the technology)
3.2) The technology will make the product it helps produce cheaper, more people will buy it at a lower consumer price, making consumer spending available to other products and services, the producers of which will in turn demand more labour (technology creates economic growth which in turn creates new jobs)
Obviously, this is overly simplified. But it’s the essence of it.
I’m a huge admirer of Benedict Evans, and he lays out this argument with clarity and eloquence in this must-read piece.
It seems to me that both schools of thought have merit, and personally I’m on the fence. I do believe that “automation by AI” will likely eliminate white-collar jobs that currently employ millions of people around the world. The argument from school no. 1 is that this has been the case many times before, and that those people will find new and meaningful jobs. This has largely proven to be true in the past (if not nearly as much as that libertarian friend of yours would lead you to believe), so the question is to what extent it will be the same or different this time around.
First They Came
As knowledge workers, we are much more concerned with the dangers of AI displacing jobs, as for the first time in history it might be *our* jobs at stake. Historically, industrialisation displaced and transformed blue-collar jobs. Without going into too much detail, people employed in industrial labour were largely displaced by machines while some job transformation did happen (e.g. operating the machines), leaving only a fraction of the jobs within the industry. This also helps explain the rise in the number of people employed in service industries and the increased concentration of wealth.
To understand the first PoV (that this will be similar to past experiences) and how it might extrapolate, we can try and look at how digital advances have impacted white-collar jobs in recent years. A good source on this is Erik Brynjolfsson and Andrew McAfee who wrote the book “The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies” and who summarise their thinking on the matter in this HBR interview. To save you some time: the average knowledge worker income has decreased, the top ten percent of knowledge workers have seen significant and remarkable increases and the top-top has exploded. In other words, the benefits are unevenly distributed, and some knowledge workers are already being left behind by digitisation. And it seems even more will be impacted going forward.
I’m sceptical that job transformation and creation of new jobs will have a positive impact on the number and value of knowledge worker jobs. I think the bottom 50% - as measured in depth of skill or level of expertise - of nearly every knowledge worker profession can likely be replaced by AI in ten years.
My main reason for thinking so is that a frighteningly large proportion of so-called knowledge worker jobs are very mundane, predictable, and repeatable. A large part of the critique of e.g. ChatGPT is that it rephrases something someone has already said before. Ironically, the reason it’s fairly good at this is that humankind apparently repeats itself a lot, providing really good training data for the LLMs. And more to my point: most software programming, most medical diagnostics, most legal work, most ad campaigns, most consulting engagements, most engineering, most college lectures are, at a reasonable level of abstraction, merely repetition and rephrasing of priors. And thus potentially replaceable.
Surely, there will be new “operator” or “supervisor” types of jobs that will appear. I hypothesise that a large share of these operator/supervisor jobs will be extremely high-skill roles. The law firm partner specialised in M&A who synthesises the AI’s analysis and makes a recommendation to the board of directors. The oncologist who decides on the recommended treatment based on a range of analyses and empathetic insights into the patient’s family relationships and situation. The software engineer who orchestrates the array of input parameters and systems interacting with a novel model to be deployed in a sensitive corporate or governmental setting.
In many cases, the human value add may not be adding knowledge to increase the correctness of the model output. Rather, it may be a case of communicating the output, of contextualising the output, of securing trust in the output.
The case for the humane
I like to imagine a world where humans still make the decisions, sometimes based on recommendations predominantly created by AIs. And frankly, this is also the world I foresee. There is little doubt that AI will be – already is to a large extent – vastly superior to humans in a range of tasks that are well-defined, require accuracy and build on a large amount of available data. But following from this, I hypothesise two important things: (1) holistically contextualising the task performed will remain difficult for the AI, and (2) the human making the decision will want to debate it with humans to feel secure about it.
In this world, the humane remains.
Imagine the following scenario: Responsive's client since 2018, Matas, recently entered into an agreement to acquire a Swedish company, KICKS, to create the leading Nordic omnichannel health and beauty retailer. As with any acquisition and subsequent merger of two companies, a lot of difficult post-merger decisions must be made. How do we design the new organisational structure? Which employees will remain in key roles and who will have to find new challenges? Which IT systems will we consolidate, and which will remain separate? How – and how quickly – do we move towards one brand? And many, many more hard choices.
Disclaimer: Responsive has not been working on the acquisition in any way, so the points I make are purely speculative.
There’s a very reasonable argument that decisions like these are too complex for the human mind to process in a rational and structured manner. By that rationale, an AI would be better equipped to make the ultimately superior decision, as processing vast amounts of data points and creating a balanced synthesis is exactly what an AI would do well.
BUT… success in this case depends on so many things that are much less rational, are out of sight from an AI, and have everything to do with humans. The consolidation and integration of IT systems will be hugely important to realising the value of the acquisition. The ability to succeed will depend on – amongst many other things – the confidence in and of the CIO of the joint operation and the team around him. It will also build on the motivation and collaboration between the two teams. And it will depend on the former CIO (of one of the companies) who did not get the new top job. So suddenly, the personal preferences and experiences of a few people will in reality shape the outcome much more than anything. This might mean deferring the phasing out of an inferior IT system because the CIO has a personal preference for postponing it. Or it might be that to balance the future organisational design, it will make sense to keep a group of IT people in one of the “old” organisations, and that this group has expertise in a system that would otherwise make sense to deprecate.
When Matas CEO Gregers Wedell-Wedellsborg makes all of these decisions, he is likely to do so based on conversations with his key people and a few advisors. His decisions will be shaped far more by the emotional impulses he receives in those conversations than by the rational analysis of the AI. Ideally there will be sound analysis underpinning the conversations and recommendations, but the interpersonal dynamics are likely to be the decisive factors in both the decision-making and the ultimate success.
Again, I want to emphasise that I believe this to be a good thing. That I have faith in humanity, both philosophically and rationally.
To move this out of the professional sphere and into the sphere of non-work everyday life, consider the following.
A recent study found that an AI-powered chatbot was considered to provide more accurate and more empathetic answers to patients’ medical questions than human doctors. It’s fairly well-documented that God Complex is particularly prevalent amongst doctors, and I think many have experienced being ignored and/or patronised by a doctor. So, I think there’s merit to the finding of the “AI vs. Doctors” study.
A few weeks after writing the paragraph above, I came across a working paper published by the National Bureau of Economic Research, authored by researchers at MIT. It suggests that doctors – in this case radiologists – do in fact struggle to make good use of the powerful tool that AI can provide, largely dismissing AI predictions when other types of contextual information are available. Arguably supporting the “Doctor With God Complex” cliché.
The study about patients’ perception of interacting with doctors and chatbots respectively could be seen as proof that AI could (should?) replace doctors in the role of patient interactions. Personally, I believe the medical profession should rather see it as an opportunity to upskill in the most human of abilities - showing compassion and interacting with one another. I have close to zero doubt that a diagnosis provided by an AI but contextualised and explained by a medically trained human is vastly preferable to simply interacting with a chatbot.
On balance, I think capable AI will lead to a significant net loss of white collar jobs. And I believe it will polarise the types of knowledge worker jobs available. And I believe it will create a new class of “digital service industry jobs” that are the equivalents of today’s waiters, delivery people, hairdressers (on the low-value end of the spectrum), or coaches, personal trainers, astrologists (on the high-value end of the spectrum).
Whether that’s desirable depends 100% on your view of humans and the world.
#4 - Will we all be emotionally, spiritually, and intellectually numb?
A major promise of AI is that everyone will have their own virtual assistant available to improve life. Linkedin is already flooded with “The only ChatGPT Cheat Sheet You’ll Ever Need” and other AI-enabled life hacks.
The eternally tech-optimistic venture capitalist and internet pioneer, Marc Andreessen puts it this way in his recent essay, titled “Why AI Will Save The World”:
“Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.”
Sounds wonderful. And every child will have their own teacher, their own soccer coach, their own best friend. Every scientist will have her own research assistant, her own lab partner, her own peer reviewer of her papers.
No kid will ever have to wait his turn to get the teacher’s help. He will never have to wait while the kid next to him catches up and the class can move forward. He will never have to raise his hand to ask a question. He will never have to hear another child’s childish take on whatever text he’s reading. He will never be stuck and frustrated, having to struggle to work his way out of the intellectual dead-end he stumbled into.
Everyone will have both the opportunity and ability to be their best self.
The beauty of StackOverflow
An early use case of generative AI is the programming assistant. Github (owned by Microsoft) offers the OpenAI-enabled Copilot to help developers write code faster, better. One consequence is that traffic to the world’s largest community site for programmers, StackOverflow, is dropping rapidly.
StackOverflow is *the* website for asking programming questions and getting high quality answers. Most developers I know have joked that the main skill of a developer is the ability to search on Google and copy/paste from StackOverflow.
Contributors (that's what they call people answering other people's questions) with many high-quality answers have near-celebrity status in the developer community. And in a similar way to Github’s more than 28 million public repositories of code, the Q&As on StackOverflow were also an embodiment of the prevalent philosophy of “giving back” that has always characterised the computer science community, especially the open-source community.
I cannot sufficiently emphasise the importance of this as an example of how openness, collaboration, and community helped propel humanity forward.
To me, it showcases mankind at its finest. It is such a human endeavour. It’s certainly asymmetrical in the sense that most users extract value without directly providing value back. And it’s certainly not altruistic motives that compel the top contributors to spend their time writing answers. But it is very much an expression of common value creation. Of believing in outcomes that are greater than the sum of the parts. Of communal effort and giving back. Of mutual respect and appreciation. Of the values and virtues that I believe are some of the most beautiful of human traits.
But as answers from ChatGPT and the ongoing assistance from Copilot become more easily available, the need to interact with human questions and answers evaporates. The publicly shared, collectively developed and maintained body of knowledge of programming could turn into a historic relic. A testament to what we did and knew about applied computer programming until 2023.
Ironically, OpenAI used all the questions and answers on StackOverflow to train GPT.
The dystopian extrapolation of this is a world in which humans interact much less with one another, become removed from each other, and wind up fundamentally estranged from one another.
Also, there’s the problem of the data on which new models will train. If words, images, everything are created mainly by AIs, not humans, it could lead to a shortage of new data on which to train the AI, leaving it with no other option than to train itself on the things it has created itself. We’re barely two seconds into all of this, and already ChatGPT is polluting the web. This is the data version of drinking your own Kool-Aid or eating your own shit. The former isn’t very credible, and the latter isn’t very nutritious.
Maybe we’ll touch grass again.
The old internet meme got its recent revival when Elon Musk told Twitter’s users to “touch grass again” after limiting the number of tweets a user can view to 600 per day. It seems a good idea. The reports on social media’s negative impact on self-esteem, well-being, relationships etc. are endless. And it’s not only the view of a grumpy old man thinking everything was better in the good old days. A recent survey of Danish young people showed two in three 12-30-year-olds want to spend more time with their friends in real life (as opposed to online).
As should be apparent by now, I am deliberately optimistic. My faith in humanity is close to endless in many areas, including this.
I deeply believe that humans long for connecting, for belonging, for loving, for sensing, for feeling, for dreaming. That these are amongst the strongest of human desires.
The Danish summer of 2021 was a beautiful display of these desires having been suppressed only to be released and fulfilled. It was the perfect storm of pandemic restrictions being (partially and temporarily) lifted, coinciding with the Danish national team playing its home matches of EURO 2020 in Copenhagen, and the same team performing well. All this amplified by the tragic incident of Christian Eriksen’s cardiac arrest during the opening match.
That summer had so much joy, so much kindness, so much connectedness, so many tears of laughter. So many dreams that got to feel air under their wings, so many glances turned into kisses, so many spirits united.
It was the embodiment of everything for which humans long.
For this reason, I believe that there will be a reaction to a world in which AI plays an integral role. My hypothesis and my hope are this: There will be a second renaissance.
Will we see a second renaissance?
Coming out of the pandemic, there was a lot of talk of a potential new “Roaring 20s”, referencing the era following WW1. While we have seen unexpectedly quick economic recovery, we arguably have not experienced the cultural and spiritual uplift that characterised the 1920s. Interestingly, this piece outlines how the 1920s ushered in a new general purpose technology (electricity) as a vehicle for much of the societal transformation, and hypothesises that AI could be a similar vehicle in the 2020s.
Irrespective of the specific timing or this particular prediction, I think it is interesting to consider what our collective response to a large and explicit presence of AI in our lives will be. What will we get from these interactions and what will we miss? And thus, what will we seek elsewhere and from each other?
The dystopian version is that of Spike Jonze’s 2013 film, Her, in which Joaquin Phoenix falls in love with a Scarlett Johansson-voiced artificial intelligence and effectively turns his back on the rest of the world. This is already a thing happening to real people, but I have much faith in humans in general when it comes to our need for real relationships with real people. After all, evolution is slow, and certainly much slower than technological progress.
I envisage a future of dabbling with new thoughts, ideas, concepts together with an AI. Of getting a useful pathway into a subject that allows for subsequent deep diving via additional reading and conversations.
High school debates, AI and the value of conversation
I recently came across this amusingly US-specific SlowBoring post from its Substack author Matt Yglesias on how the dominance of critical theory arguments is ruining high school debate. As I have experience neither with high school debate nor with college-level social studies, I must admit that I’m not deeply familiar with critical theory beyond the little Foucault and Habermas I have read. So I used GPT-4 to provide me with an overview and had a little back-and-forth to better understand the rudimentary principles.
Much more importantly, it raised a ton of questions that I didn’t want to dive into with an AI. An absolutely non-exhaustive list:
- When is a critical theory argument relevant and valid? Because it probably is valid at times?
- What kind of microcosm is high school debate, and which characteristics present there are similar to other settings? And consequently, what other environments are susceptible to the kind of bias described?
- In which way is critical theory the precursor to woke culture and how might it have interacted with other trends within academia and culture to get us there?
- Is everything bundled under the big umbrella of critical theory equally fitting?
- If I appreciate critical thinking and challenging the premise of a question or assumption, is critical theory not by definition a legitimate and relevant path to explore?
- When would I consider critical theory a more or less relevant lens of viewing a given problem?
And many, many more.
However, all of these are questions I want to discuss with human beings. With different human beings. With my family member who is a sociologist, with my friend who is a theoretical physicist, with another close family member who is very sweet but staunchly non-intellectual, with my very intelligent self-taught multi-millionaire friend who has not read a single academic book in his life, with my 22-year-old, sexually fluid family member who lives in the reality that critical theory seeks to explain and impact. I would want the many different perspectives they collectively offer. I would want the friction that comes from human interaction. I would want the disagreement, the elaboration, the unexpected conversational development. The unsolicited opinion, the heated argument, the emotional response to the unexpected.
This is just one example of many. The point being that it is intrinsically human to want to explore thoughts, ideas, concepts alongside other humans. Creativity will flourish as talented individuals dabble with ideas individually, supported by an AI, and subsequently take those ideas to new heights by drawing inspiration from others working in the same field through collaboration, conflict, competition. Pablo Picasso and Henri Matisse famously rivalled each other professionally, providing each other motivation to master their craft while exploring new facets of artistry and creativity. More dramatic was the infamous conflict between Caravaggio and Baglione, yet it too showed how inspiration, motivation, and artistry can be uniquely achieved through human tension. I struggle to imagine the same emotional response to an interaction with an AI.
If we are to take some of Marc Andreessen’s assertions at face value, we will see education be much less time consuming, as “Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development.”
If the fundamental learning objectives are accomplished in less time than today, I want a world in which it is not capitalism’s productivity paradigm that governs what to do with that extra time. I want a world in which children learn to be more human, not more productive. I want a world in which they explore relationships with each other as a path to happiness, to achievement, to reflection, to fulfilment. I deeply believe that good will come from that, for the individual and for humanity. I’m also convinced that the outcome will not be some hippie degrowth nonsense, but rather the opposite.
The difference will be this: Humans that are more connected with themselves and each other will have a moral grounding that provides direction to creativity and entrepreneurship.
To understand the implications of my claim, I want to highlight the complete failure that is current-day innovation and entrepreneurship, particularly within consumer software.
Some of the so-called unicorns founded in the last two decades include ride-hailing services like Uber, burning billions of dollars to no other avail than oppressing workers’ rights and increasing pollution. It includes the scandalous, scammy office-sharing company WeWork. It includes depression-inducing social media apps like Snapchat and Instagram. It includes fraudulent investment platforms such as the nauseatingly named Robinhood. It includes the perfect storm of moral collapse and fraud that is the crypto-trading platform FTX.
It seems to me that people with better tools and a stronger moral grounding will direct their energy and initiative elsewhere. I won’t try and predict where, but I find it inevitable that it will be somewhere better.
Summary
If you made it this far, I would like to thank you. I knew it would be a long essay when I started writing, and I greatly appreciate your time and attention.
To recapitulate my points from the previous 10,000-ish words, I’ve divided them into a few points summarising my general advice and my working hypotheses.
My general advice is this:
- AI is exceptionally complex. Be sceptical of overly confident people.
- Develop an appropriate mental model for thinking about the various aspects of AI. It will help your thought-process.
- Read, listen, watch, discuss. Absorb information on the topic. It will be important, and you owe it to yourself to engage.
My personal working hypotheses are these:
- It will inevitably move forward, and it will move forward quickly. It is nearly unimaginable to picture a world in fifty years that does not have some version of highly capable artificial intelligence.
- Something very serious will happen to the world and humanity, but it will not be the evil of Skynet. Rather it will be some version of chaos, caused by humans and AI messing with the world as we know it. Deliberately or mistakenly.
- AI will significantly impact white-collar jobs, across the board. If AI doesn’t kill us or completely alter the world as we know it, it will cause massive changes and enormous amounts of unemployed knowledge workers.
- I am staunchly optimistic about the potential and the resilience of humans. I believe that we will flourish given the opportunity.
- I believe that AI can usher in a second Renaissance, that our attention will gravitate back towards the humanities, and that we will rediscover the core of what it means to be human.
As I said initially, I try to remain humble and curious. And I have certainly not covered everything. So please add your perspective in the comments, or reach out to me privately, as I’m happy to discuss this topic over a well-brewed African espresso or a glass of good white Bourgogne.