By kameronsprigg

What’s the Big Deal, Anyway? (Part 1)

Updated: Apr 7



“Oh come on, I’ve heard it a thousand times before - AI is getting ‘better’. Sure, I can talk to an AI, which is kinda neat, but is that really important? I mean, it can’t even count how many hands or fingers are supposed to be there. It forgets things all the time, and makes weird stuff up too! It’s not like it’s that big of a deal.


It’s just another tech fad, and like everything else it’s probably just gonna disappear in a few months or years anyway. Just look at how crazy people went over 3D movies, or how useless cryptocurrencies have been. It’s just predicting the next token; it’s not like it actually understands anything. How would it even take my job? It’s just a machine, and what I do is way too complicated.


. . . Right?”


Let’s step back and forget what we think we know about the world for a minute, as we explore the future of AI and why we should all care. 


First and foremost - if you haven’t already, I strongly urge you to read Tim Urban’s post from 2015: The AI Revolution. It’s easy to understand, engaging, and well written. I won’t try to reinvent the wheel here. Seriously, the man’s a genius writer. Not only that, but he warned us about today a decade ago.


So that we’re on the same footing, we need to make sure that we all clearly understand one of the crucial points of Tim’s work. 


As society gets more advanced, the rate at which we are able to bootstrap our own advancements increases. This is called the Law of Accelerating Returns. Despite being consistently borne out, the concept is deeply counterintuitive to the human mind, which is why even experts routinely get predictions about groundbreaking technology wrong.
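To feel why this trips us up, here’s a toy sketch in Python. Every number in it is invented purely for illustration - it’s a cartoon of compounding progress, not a model or forecast of anything real:

```python
# A cartoon of the Law of Accelerating Returns (all numbers invented):
# capability improves at some yearly rate, and the rate of improvement
# itself keeps creeping upward as each advance enables the next.

capability = 1.0
rate = 0.05                 # start at 5% improvement per year

for year in range(1, 51):
    capability *= 1 + rate  # this year's progress builds on last year's
    rate *= 1.05            # ...and the pace of progress itself speeds up
    if year % 10 == 0:
        print(f"year {year}: capability {capability:,.1f}, rate {rate:.1%}")
```

Run it and capability barely moves for the first couple of decades, then explodes - exactly the kind of curve our linear intuitions misjudge. It’s easy to find examples of this. In the case of AI, let’s look at one very simple statistic.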


In surveys run between 2016 and 2022, AI experts consistently predicted that AI systems would be able to perform any mental task at or above human level - a rough description of Artificial General Intelligence (AGI) - by around 2060. Just one year later, in a 2023 survey of nearly 2,800 experts taken after the release of ChatGPT and other Large Language Models (LLMs), that estimate dropped to 2047. A 13-year shift, driven largely by one model that is already not just outdated but pedestrian compared to the current state of the art. How far will the prediction move by next December, I wonder?


David Shapiro is a machine learning and automation expert who has pivoted his entire career toward sharing his insights about AI with a wider audience. He argues - and I find myself agreeing with him - that:


"The core cognitive biases that humans have is a very high fear reaction to things that are going to disrupt the status quo and it makes us very uncomfortable. So what you're actually measuring when you ask people 'how far away is AGI', what this is actually measuring is how far away they are emotionally comfortable with these changes happening."

The problem, of course, is that whether or not we’re emotionally ready for AGI, it is certainly not just another tech fad. Let’s dive in.


“How would it even take my job?”


The first thing for us to wrap our heads around is that AI disrupting the economy is not a distant future. It’s not something that perhaps might affect some people, maybe. It’s here today, right now, and is probably more widespread than a lot of people realize. I already discussed part of this in my post “Why I Use AI Art”, but there’s a lot more to it than that. 


Starting with the obvious: IBM famously laid off roughly 3,900 employees in early 2023, and its CEO went on to announce a hiring pause for the back-office roles AI can replace - around 7,800 jobs, or 30% of those roles, over five years. BlackRock Inc., the world’s largest asset manager, said in January 2024 that it was laying off approximately 600 employees. For both companies, these are jobs in communications, marketing, software engineering, and more.


The less obvious part, however, is that ever since IBM faced a storm of media attention for announcing those layoffs, many companies have been playing their cards close to the chest. It is less of a headache to simply lay off staff without an announcement, so the estimate of 4,600 layoffs in the USA since May 2023 due to AI is “certainly under-counting” the true number. Total USA layoffs in the tech industry in 2023 alone exceeded 262,000, and it’s not unreasonable to assume that AI advances played a part in a meaningful share of them.


"Okay, but most of these AI related job cuts so far have been in the tech industry. So what?" 


Last month, Devin was announced and released to selected individuals for further testing and small-scale implementation. Devin is a fully autonomous AI software engineer: you or I can talk to Devin like we would any other person, give it a task (build me a website, solve this problem, automate my inbox), and it will go about creating a solution for that task right in front of us, with no further input.
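To make “fully autonomous” concrete, here’s a heavily simplified, hypothetical skeleton of what such an agent loop looks like. To be clear, this is not Devin’s actual code or API - llm() and run_shell() below are scripted stand-ins I’ve invented so the example runs on its own - but the shape of the loop is the point: decide, act, observe, repeat, with no human in between:

```python
# A toy skeleton of an autonomous coding agent. The "model" and the
# "terminal" are scripted fakes so this runs standalone; in a real
# system they would be a frontier LLM and a sandboxed shell.

SCRIPTED_ACTIONS = iter([
    "python -m pytest",   # try the tests, see what fails
    "edit app.py",        # apply a fix based on the failure
    "python -m pytest",   # re-run to confirm the fix worked
    "DONE",
])

def llm(context: str) -> str:
    # Stand-in for a language-model call that chooses the next action.
    return next(SCRIPTED_ACTIONS)

def run_shell(command: str) -> str:
    # Stand-in for executing the command in a sandbox.
    return f"(simulated output of `{command}`)"

def autonomous_agent(task: str, max_steps: int = 50) -> str:
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        action = llm(history)                 # the model decides...
        if action == "DONE":
            break
        output = run_shell(action)            # ...acts in the world...
        history += f"$ {action}\n{output}\n"  # ...and observes the result
    return history

print(autonomous_agent("fix the failing test in my repo"))
```

Swap the scripted stand-ins for a real model and a real sandbox, and you have the basic architecture these agents are built on.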


Today, Devin resolves around 13% of the real-world GitHub issues in the SWE-bench benchmark, which doesn’t sound that great. But let’s not forget the law of accelerating returns. We saw it play out last year in the rapid advance from GPT-3.5 to GPT-4: despite being released just a few months apart, GPT-4 scores significantly higher across every single benchmark. As more people get access to this software, and as the developers (and, more importantly, the AI) learn from mistakes in the real world, that 13% will rise very quickly.


"But this is still the tech industry, isn’t it?" 


Sure is! Let’s branch out. 


Figure AI last month (just like Devin, this happened less than 30 days ago) announced and demonstrated its partnership with OpenAI: a humanoid robot that responds to users in real time while simultaneously executing tasks accurately in the physical world.


NVIDIA, also last month (so much happening in a single month is itself an example of the law of accelerating returns), announced Project GR00T: a foundation model for training humanoid robots, backed by new software and hardware, designed to understand natural language and to use simulated reinforcement learning to rapidly speed up training.


Reinforcement learning, in essence, is where you give an AI a task within a set of rules and tell it to figure things out for itself. It runs through millions of possible actions until it finds consistent, useful strategies for solving the task. That might sound like a slow process, but the approach has been famous since 2016, when AlphaGo defeated world Go champion Lee Sedol and shocked audiences around the world.
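Here’s what the simplest flavour of that idea, tabular Q-learning, looks like in code. The toy corridor below is my own invention for illustration (AlphaGo’s actual method combined deep neural networks with tree search), but the act-observe-update loop is the essence of reinforcement learning:

```python
import random

# Tabular Q-learning on a toy corridor: the agent starts at position 0
# and is rewarded only for reaching position 4. Nobody tells it the
# rule "move right"; it has to discover that through trial and error.

N_STATES = 5                           # positions 0..4; 4 ends the episode
ACTIONS = [-1, +1]                     # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Q[state][i] estimates the long-run value of taking ACTIONS[i] there.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Sometimes explore at random; otherwise exploit the best-known action.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[state][i])

        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0

        # Nudge the estimate toward the reward plus the discounted value
        # of the best action available from the next state.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print(Q)  # after training, "move right" has the higher value in every state
```

After a few hundred episodes of trial and error, “move right” dominates the table everywhere - the agent was never told the rule; it found it.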


"Okay, so what does all of that mean for me then?"


It means a few things. 


There are plenty more companies working in this field than the ones I’ve mentioned here; these are just a few of the most promising and biggest players to know about. In these examples, we’ve seen a program that can do a software engineer’s job - a highly technical and logically complex role - and we’ve seen companies building robots that put these advanced AI systems to work on real-world physical tasks.


Not only are they working on it, though - it’s working.


Let’s take a step back from robots, though, and come back to the law of accelerating returns. OpenAI, Anthropic, Google, Meta, and others are now at a point where they can use the AI systems available today to create high-quality data which can then be used to train future systems. One of the challenges we expected to struggle with was having enough data available for this training process.


GPT-4, for example, was trained on a huge swath of the public internet - there isn’t much more where that came from.


This means that we’re at a point where AI is taking an active role in jump-starting its own progress.
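To see the shape of that loop, here’s a deliberately tiny sketch. The “teacher” below is just a hand-written rule standing in for a strong existing model, and the “student” is a bare-bones perceptron - a made-up toy, not any lab’s actual pipeline - but it shows one model learning entirely from another model’s machine-generated labels:

```python
import random

# Step 1: a "teacher" (stand-in for a strong existing model) manufactures
# a synthetic labelled dataset. No human labels anything.
def teacher(x: float, y: float) -> int:
    return 1 if y > x else 0   # the rule the teacher "knows"

data = [(x, y, teacher(x, y))
        for x, y in ((random.uniform(-1, 1), random.uniform(-1, 1))
                     for _ in range(1000))]

# Step 2: train a tiny "student" (a perceptron) purely on that data.
w1, w2, b = 0.0, 0.0, 0.0
for _ in range(20):                         # a few passes over the data
    for x, y, label in data:
        pred = 1 if w1 * x + w2 * y + b > 0 else 0
        err = label - pred                  # -1, 0, or +1
        w1 += 0.1 * err * x
        w2 += 0.1 * err * y
        b += 0.1 * err

# The student recovers the teacher's rule (roughly y > x: w2 positive,
# w1 negative) without ever seeing a human-written label.
print(w1, w2, b)
```

Scale the teacher up to a frontier model generating text, and the student up to the next model in line, and you have the self-reinforcing loop the labs are now running.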


As Tim so eloquently put it in The AI Revolution:


“I hope you enjoyed normal time, because this is when this topic gets unnormal and scary.”





When I started Syntelligence, the plan was to write one post per week and release it on Sundays. But this topic is too important to delay any longer than absolutely necessary. In the follow-up to this post, I discuss what all of this means for you and me more concretely, and why we shouldn’t bury our heads in the sand.
