Don’t worry about AI eating your job. It may be software itself that’s eating it.


Summary

There is a great deal of hype about recent developments in artificial intelligence, especially large language models like ChatGPT.

The substantive impact of such technologies on work also deserves attention.

But most observers miss two very important things.

Marc Andreessen’s famous claim that “software is eating the world” is a useful way of thinking about how big software’s appetite is, but it is worth asking why it has taken software so long to eat the world.

If we dig into why, two points emerge:

1. Every wave of technological innovation is triggered by expensive things becoming cheap enough to be wasted.

Because software has been too complex and expensive to produce, we have underdelivered it for decades, leading to a huge accumulated technical debt across society.

2. This technical debt will be paid back dramatically and across the economy as the cost and complexity of software production collapses, unleashing a wave of innovation.

Software is misunderstood.

Software feels like a discrete thing, something we interact with. 

But in reality, software is something very exotic, an intrusion into our world.

It’s a strange interaction with electricity, semiconductors, and instructions that magically controls everything from screens to robots to phones, medical devices, laptops, and a dizzying array of other things.

It is almost infinitely malleable, able to slide, twist and deform, so much so that this adaptability opens the door to new worlds for us.

The exotic nature of this software has surfaced recently, as true conversational AI, that is, chatbots based on large language models (LLMs) like ChatGPT, has gone from science fiction to something people can play with as easily as searching the internet.

01. Why hasn’t software eaten the world yet?

This reminds me of Marc Andreessen’s famous statement from the last technology cycle that software is eating the world.

Andreessen’s words invite us to think about when software will finally eat the world: what will be the catalyst, and what will a world eaten by software look like?

To answer these questions, we must first look to history and then look to the future. 

To do this, we need to consider a range of factors, including complexity, factor costs, and the economic model of software supply and demand.

We start by thinking about the economic model of software supply and demand.

Software has a cost, and there is a buying and selling market.

Some of these markets are internal to the organization, but most of them are external markets.

People buy software in the form of apps, cloud services, or games, or even embedded in other objects, ranging from doorbells to endoscopic cameras for cancer detection.

All of these things are software, in countless forms. 

With these characteristics in mind, you can think about software using the basic supply and demand curve diagram from introductory economics. 

There is a quantity demanded at each price and a quantity supplied at each price, and the two settle into a rough equilibrium of price and quantity, as shown in the figure below.

Of course, for a variety of reasons, this equilibrium point may shift, causing the P/Q crossover point to be at a higher or lower level of total demand.

If the price is too high, software will be under-produced (leaving behind technical debt); if the price is too low, well, we will come back to that.

 

This leads to a basic question that is sometimes asked in economics courses: 

How do we know that this combination of price and quantity is the optimal combination?

The answer is that the optimal combination of price and quantity should be at the intersection of the supply and demand curves. 

Demand curves are typically downward sloping because people tend to demand less of most goods when prices increase. 

The supply curve is typically upward sloping because producers are generally willing to supply more of a good when the price rises.

If we raise prices at this point, then consumers will buy less and manufacturers will increase production, which will eventually lead to excess inventory or a price collapse.

Conversely, if we lower prices at this point, then manufacturers will reduce production even if consumers want to buy more, which will cause a supply shortage that may eventually push prices up again until supply and demand reach equilibrium.
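This supply-and-demand logic can be made concrete with a toy linear model (the coefficients below are illustrative assumptions, not estimates for the actual software market):

```python
# Toy linear market model. Demand slopes down, supply slopes up.
# Demand: Qd = a - b*P; Supply: Qs = c + d*P (illustrative coefficients).
def demand(p, a=100.0, b=2.0):
    return a - b * p

def supply(p, c=10.0, d=1.0):
    return c + d * p

def equilibrium(a=100.0, b=2.0, c=10.0, d=1.0):
    # Setting Qd = Qs gives a - b*P = c + d*P, so P* = (a - c) / (b + d).
    p_star = (a - c) / (b + d)
    return p_star, demand(p_star, a, b)

p_star, q_star = equilibrium()
print(p_star, q_star)  # 30.0 40.0
```

Raising the price above P* yields excess supply (supply(p) > demand(p)); lowering it yields a shortage, matching the adjustment story in the text.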

02. How technology makes economics more complex and unpredictable

These are all economics that you learn in undergraduate courses, and they are simple and easy to understand. 

But technology has a habit of muddying economics.

When it comes to technology, how do we know those supply and demand curves are right?

The answer is we don’t know.

This is when the fun begins. 

For example, sometimes an increase in the supply of something can lead to an increase in demand, which causes the curve to shift. 

This has happened many times in technology as various core components of technology have moved down the cost curve as power (or storage, or bandwidth, etc.) has increased. 

In the case of CPUs, this has long been attributed to Moore’s Law, the observation that CPU capability roughly doubles every 18 months or so.

While these laws are more like heuristics than physical laws like F = ma, they do help give us a glimpse into how the future might differ from the past.
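As a back-of-the-envelope sketch of how such a heuristic compounds (the doubling interval and starting point are illustrative assumptions):

```python
# An 18-month doubling heuristic, Moore's-Law style. Illustrative only:
# real improvement rates vary by component and era.
def capability_after(years, doubling_months=18, start=1.0):
    """Capability multiple after `years`, doubling every `doubling_months`."""
    return start * 2 ** (years * 12 / doubling_months)

print(capability_after(3))   # two doublings in three years -> 4.0
print(capability_after(30))  # twenty doublings -> roughly a million-fold
```

Twenty doublings is already a factor of about a million, which is why multi-decade comparisons against other industries sound absurd.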

We’ve seen this over and over in the technology sector, where the prices of various technologies have plummeted, but their capabilities have grown rapidly.

It feels as though this has become commonplace, but it isn’t.

This is not the case in other sectors of the economy, nor has it been the case in economic history.

It is not normal for better things to be cheaper. 

While many markets exhibit economies of scale, nothing in economic history compares to CPU costs collapsing while performance increased a million-fold or more.

To help you see this more clearly, imagine that if cars had improved at the same rate as computers, a modern car would:

  • Have more than 600 million horsepower
  • Go from 0 to 60 mph in less than one hundredth of a second
  • Get about one million miles per gallon
  • Cost less than $5,000

This is not the case with cars.

Sure, the Tesla Plaid is a fast car, but it falls far short of the above specs — and there never will be one.

This automotive performance inflection point is not our future, but it speaks volumes about how much has changed in (software) technology over the past 40 years. 

However, most people don't even notice this.

Everyone takes it so much for granted that they fail to notice how amazing these changes are.

03. The dynamics of technological collapse

You can see these dynamics in the image below. 

Note the logarithmic scale on the Y-axis; without it, the price/performance curves would look like vertical plunges, because the rate and magnitude of the declines in these factors are simply too great.

This is unprecedented in economic history.

 
Each collapse had wider consequences.
 
The collapse in CPU prices has dragged us directly from mainframes into the personal computer era.
 
The collapse in the price of storage (of all types) inevitably led to more personal computers with significant local storage, which helped give rise to databases and spreadsheets, which then led to web services, and then cloud services.
 
And, more recently, the collapse of network transmission costs (along with the explosive growth of bandwidth) has led directly to the advent of the modern internet, streaming video, and mobile apps.
 
As that old Paul Simon song (“The Boy in the Bubble”) has it, every generation throws a hero up the pop charts; in technology, each generation throws a hero up the price/performance charts.
 
Each crash, along with improved performance, has led to huge winners and massive change, from Intel to Apple to Akamai to Google and Meta to the current AI boom.
 
Each of these beneficiaries required a collapse in price and a surge in performance of one or more core technologies.
 
This, in turn, opens up new opportunities to “waste” them—to use them for things that previously seemed impossible, prohibitively expensive, or both.
 
04. Artificial intelligence is the driving force behind the next technological collapse
 
All of this brings us to today.
 
Suddenly, AI became cheap enough that people could splurge on it by feeding prompts to chatbots to “write articles,” get help with microservices code, and much more.
 
You might think the point here is that the price/performance curve of AI itself will come down, just as it has with previous generations of technology.
 
You could make that argument, but it would be too narrow and too orthodox, or at least incomplete and premature.
 
Let’s leave aside the ethical and alignment issues of artificial general intelligence (AGI).
 
Even though we feel closer to AGI now than we have in decades, that day is still probably a few years away at best.
 
With that in mind, it’s worth reminding ourselves that an AI craze washes over our “beach of consciousness” every decade or two, only to recede again as the hype outruns the reality.
 
We saw this with Minsky’s (failed) work in the 1950s, again with Japan’s (failed) Fifth Generation project in the 1980s, and again with IBM’s (failed) Watson in the 2000s.
 
If you squint really hard, you might see a pattern.
 
Still, the sudden explosion of large language models has some people spending a lot of time thinking about which service-industry jobs might be automated away, what economists call “displacement” automation.
 
But displacement automation would not add much value to society as a whole, and might even reduce value and create instability, much like the outsourcing of American white-collar jobs to China.
 
Perhaps we should think less about opportunities for displacement automation and more about opportunities for augmenting automation, the kind that unleashes creativity and leads to wealth and human flourishing.
 
So where does this come from?
 
We believe this surge in augmented automation will come from the same place as previous surges:
 
The price of something plummets, while the associated productivity and performance soars.
 
That something is software itself.
 
By this, we don’t mean that “software” will fall in price, as if AI will trigger a price war for word processors like Microsoft Word or AWS microservices.
 
That is linear, extrapolative thinking.
 
That being said, we do think that the current frenzy to inject AI into every app or service you can see will lead to more competition, not less.
 
It will do this by increasing the cost of software (every AI API call puts money in someone’s pocket) while providing no real differentiation, since most vendors call the same AI APIs.
 
05. Baumol's cost disease and the software problem
 
To understand what I mean, it helps to briefly review some basic economics.
 
Most of us are familiar with how the prices of technology products have plummeted, while the costs of education and healthcare have skyrocketed.
 
It seems a maddening puzzle, and it has led to calls for new ways to make these industries more like the tech industry, with its tendency toward technology-driven deflation.
 
But this is a misunderstanding.
 
To explain: suppose an economy has only two sectors, one with rapidly rising productivity, specialization, and wealth creation, the other with much lower productivity growth. The less productive sector will face huge pressure to raise wages anyway, to keep its employees from leaving.

Over time, the less productive sector becomes increasingly expensive, even though its productivity does not justify the higher wages, and so it “eats up” an ever larger share of the economy’s resources.
 
Economist William Baumol is generally credited with discovering this phenomenon, which is also known as "Baumol's cost disease."
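The two-sector story can be sketched as a toy simulation (the growth rates are illustrative assumptions): wages track the productive sector, so the unit cost of the stagnant sector rises even though nothing about it changed.

```python
# Toy two-sector Baumol model. Sector A's productivity grows 5%/year,
# sector B's only 0.5%/year, but economy-wide wages track sector A.
def simulate(years=20, growth_a=0.05, growth_b=0.005):
    prod_a = prod_b = wage = 1.0
    for _ in range(years):
        prod_a *= 1 + growth_a
        prod_b *= 1 + growth_b
        wage *= 1 + growth_a          # wages keep pace with sector A
    # Unit labor cost = wage / productivity.
    return wage / prod_a, wage / prod_b

cost_a, cost_b = simulate()
print(cost_a)  # stays at 1.0: wage growth matches productivity growth
print(cost_b)  # roughly 2.4: sector B's output got ~2.4x more expensive
```

After twenty years, sector B’s unit cost has more than doubled without any change in what it produces; that is the cost disease in miniature.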
 
You can see what this cost disease looks like in the following figure:
 
Various American products and services (spoiler: mostly in high-touch, low-productivity industries) have become more expensive, while other products and services (non-spoiler: mostly technology-based) have become cheaper.
 
This should all make sense now, given the explosion of technology compared to everything else.
 
In fact, it's almost mathematical.
 
 
Without major productivity improvements, which can only be achieved by removing the human element from these services, it’s hard to imagine how this situation will change.
 
Assuming we continue to need healthcare and education in the future, the situation is likely to continue to deteriorate, given that most of the value of these services will continue to be provided by humans.
 
But there is one surprising victim of this incarnation of Baumol's cost disease: software itself.
 
This may sound contradictory, and understandably so.
 
After all, how could the most productive, wealth-creating, deflationary sector fall victim to the same problems that befell other sectors?
 
Think back to the two-sector model we discussed earlier, and apply it within technology itself.

One sector is semiconductors and related hardware: CPUs, storage, and backbone networks.
 
These things are collapsing in price, requiring fewer people to make them, and are delivering huge performance gains at lower prices.
 
Meanwhile, software remains the same, producing the same things in ways that are barely different from how developers did decades ago.
 
Yes, there have been advances in the production and deployment of software, but at the end of the day, it still comes down to hands on a keyboard typing code.
 
This should sound familiar: we shouldn’t be surprised that software salaries remain high and continue to rise despite this relative lack of productivity growth.
 
This is Baumol's cost disease operating in a two-sector economy inside technology itself.
 
These high salaries lead directly to high software production costs, and given factor production costs and those nasty supply curves, this leads to limited software output.
 
Startups spend millions of dollars hiring engineers; large companies continue to shell out millions more to keep them.
 
And although there is a market-clearing price, the point where the supply and demand curves intersect, we know that when wages are propped up above those of comparable positions in other sectors, the quantity of goods produced will fall short of what society desires.
 
In this case, the underproduced good is… software.
 
We end up with a social technical debt because the amount produced is far less than society would like — we don’t know how much less, but it’s probably a very large number and explains why software hasn’t eaten much of the world yet.
 
And because this has always been the case, no one noticed.
 
06. Demographics, aging, and the coming workforce disruption from LLMs
 
We think this is all about to change.
 
However unintentional, the current generation of AI models is like a missile aimed directly at software production itself.
 
Sure, chatty AIs are great at writing undergraduate papers, drafting marketing copy, or crafting blog posts (as if that weren’t enough), but these technologies are even better at generating, debugging, and speeding up software production, quickly and at almost no cost, to the point where it almost seems like black magic.
 
Why is that?
 
As shown in the figure below, the impact of large language models (LLMs) on the job market can be thought of as a 2×2 matrix.
 
One axis represents the degree of grammatical formalism of a domain, meaning how rule-oriented the manipulation of symbols is.
 
For example, there are rules for writing essays (ask any angry English teacher), so an LLM-based chat AI could be trained to write surprisingly good essays.
 
Tax preparation, contracts, and many other domains also fall into this category.
 
 
In the next few years, the disruption to the professions in the upper right quadrant will be so severe that it is almost unprecedented.
 
We will see millions of jobs replaced across a range of occupations, and at a faster pace than any previous wave of automation.
 
This would have huge implications for industries, tax revenues, and even social stability in regions or countries that rely heavily on some of the most affected job categories.
 
These widespread and potentially destabilizing impacts should not be underestimated and are significant.
 
Some argue that the demographic structure of aging societies and the inverted population pyramid of developed economies will offset these changes caused by AI.
 
While demographics will soften the blow in the coming decades—aging societies and shrinking workforces in parts of the world will be starving for workers—they may not be enough.
 
07. Software is at the epicenter of its own disruption
 
But let's get back to the software itself.
 
Software is even more regular and grammatical than conversational English or any other conversational language.
 
Programming languages, from Python to C++, can be thought of as formal languages with highly specific rules governing how each language element may or may not be used to produce the desired result.
 
The most annoying thing about programming languages is their syntax, which frustrates many would-be programmers (a missing colon?! That’s the problem?! Heck), but for an LLM like ChatGPT, syntax is the perfect thing to deal with.
 
The second axis of this diagram is equally important. In addition to the underlying syntax, there is also the issue of predictability of the domain.
 
Do the same causes always lead to the same results?
 
Or is the domain idiosyncratic, with causes sometimes preceding effects, but not always, and unpredictably?
 
Likewise, programming is a good example of a field that has predictability, where the program is designed to produce the same output given the same input.
 
If that's not the case, then there's a 99.9999% chance that the problem is with you, not the programming language.
 
Other fields are much less predictable, such as equity investing, psychiatry, or meteorology.
 
This framework, grammar along one axis and predictability along the other, leads us to believe that, for the first time in the history of the software industry, we have tools that will fundamentally change the way we produce software.
 
This is not about making it easier to debug, test, build, or share, although those will change too, but about what it means to manipulate the symbols that make up a programming language.
 
Let's be more specific.
 
For example, instead of having to learn Python to parse some text and remove the emojis, you can just give ChatGPT a prompt like this:
 
Write some Python code that opens a text file and deletes all the emojis except the ones I like, then saves it again.
 
If your thought is, “That can’t possibly work,” you’re wrong.
 
The program works fine, and it took only two seconds. This is a microcosm of a larger fact: programming skills that were once out of reach are now available to everyone.
 
 
Point being, obviously: this example is trivial, unremarkable, and silly, albeit useful in today’s emoji-filled world.
 
This is not complex code.
 
The process is simple, even annoyingly simple, for skilled practitioners, but impossible for most others without a lot of reading on Reddit and Quora.
 
But it’s getting better and deeper.
 
If you’re not sure why this works, or doubt that it works and think the AI might be cheating, you can ask it to explain itself, like this:
 
 
In short, the LLM uses a clever trick. Instead of exhaustively checking the text against every possible emoji character, it uses character encodings to distinguish emoji from non-emoji.
 
This is really clever, and the fact that you can ask an LLM to explain how it does something is another reason why it changes the software game.
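For concreteness, here is a sketch of the kind of script such a prompt might yield (the Unicode ranges and the keep-set are illustrative assumptions on our part, not ChatGPT’s actual output):

```python
# Sketch of an emoji filter of the kind an LLM might generate. It works by
# Unicode code-point ranges rather than listing every emoji; the keep-set
# ("emojis I like") is an assumed example.
def strip_emojis(text, keep=frozenset("❤")):
    """Remove emoji characters from text, except those in `keep`."""
    def is_emoji(ch):
        cp = ord(ch)
        # Two broad emoji ranges: Miscellaneous Symbols / Dingbats, and the
        # Supplementary Multilingual Plane emoji blocks (Emoticons, etc.).
        return 0x2600 <= cp <= 0x27BF or 0x1F300 <= cp <= 0x1FAFF
    return "".join(ch for ch in text if ch in keep or not is_emoji(ch))

print(strip_emojis("ship it 🚀 thanks ❤ 😀"))
```

The point is not this particular filter but that a non-programmer can now obtain, run, and get an explanation of such code in seconds.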
 
This is just the beginning (and it’s only going to get better).
 
It’s now possible to write nearly any type of code using this technology, from microservices that connect various web services together (a task you might have previously paid a developer on Upwork $10,000 to complete) to entire mobile apps (a task that might have cost you $20,000 to $50,000 or more).
 
08. What cheaper, less complex software production looks like
 
One thing needs to be made clear.
 
Can you use these tools today to write a better Microsoft Word?

Or to solve a classic computer-science algorithm in a novel way?

No, you can’t, and this leads many people to dismiss these technologies as toys.
 
They are indeed toys, but in an important sense.
 
They are "toys" because they are able to generate code snippets for real people, especially non-coders, a small group of people will think this is trivial, and another large group will think it is impossible.
 
This perception gap will change everything.
 
How to change?
 
Well, for one thing, the clearing price for software production is going to change.
 
But it’s not just because it’s become cheaper to produce software.
 
At the extreme end, we think this moment is similar to how previous waves of technological change drove the price of foundational technologies (from CPUs to storage and bandwidth) down to near zero, triggering explosions of speciation and innovation.
 
In software evolution terms, we’ve just transitioned from human cycle time to fruit fly cycle time: everything evolves and mutates faster.
 
Let’s do a thought experiment: what if the cost of producing software followed a similar curve, perhaps even a steeper curve, and was falling to almost zero?
 
What if producing software was about to become secondary, as natural and common as explaining yourself with words?
 
Software development would then be a matter of “I need to do X and Y, but not Z, on the iPhone, and if you have any ideas for making it less ugly, I’m all ears.”
 
Now we can revisit that previous cost reduction curve and add software to the mix.
 
Costs have remained high because of the sector’s internal Baumol’s cost disease, for the reasons discussed previously; so what happens when the cost of producing software collapses?
 
Given the speed at which LLMs are developing, this may all happen very quickly, faster than in previous technology generations.
 
 
What does all this mean?
 
We are not against software engineers, and in fact we invest in many outstanding engineers.
 
However, we do believe that software cannot reach its full potential without breaking free from the software industry’s high costs and relatively low productivity.
 
It would be a transformative moment for the software industry when anyone could write software for pennies and it could be done as easily as speaking or writing.
 
After Gutenberg brought movable-type printing to Europe, previous barriers to creation (academic, creative, economic, and so on) fell away.
 
It’s a bit of a stretch to call this a Gutenberg moment, but only a bit: people will be free to do things limited only by their imaginations rather than, as before, by the cost of producing software.
 
Of course, change brings disruption.
 
Looking back at previous waves of change, it seems likely that this will not be a smooth process and may take years or even decades to complete.
 
If we are right, the employment landscape for software developers is about to be radically reshaped, and with it will come a productivity surge, as the falling cost of producing software pays down the technical debt that decades of underproduction have accumulated across society.
 
09. When we pay off this technical debt, what happens next?
 
We’ve mentioned this technical debt several times now, and it can’t be emphasized enough.
 
It is almost certain that the software we produce is still far behind what is needed.
 
We don’t know how large this technical debt is, but it cannot be small, and it continues to grow.
 
This means that as the cost of software drops to near zero, the creation of software can be expected to explode in ways that were almost unimaginable before.
 
At this point, everyone always has this question:
 
“What kind of applications would that make?”
 
While this question is understandable, it is indeed a bit silly and it is definitely too early to ask it now.
 
Would you have thought of Netflix when Internet transmission cost $500,000/Mbps?
 
Could you have imagined the Apple iPhone back when its screen, CPU, storage, and battery would have added up to a device the size of a room?
 
Of course not.
 
The point is, the only thing we know is that apps and services will come.
 
There is no doubt about this.
 
You want to be part of it, to start investing in it as soon as there's a hint of movement.
 
In short, the green field in front of us looks like the next great technology cycle, but too many people simply can’t see it coming (because their focus is still on applying LLMs to the current software environment).
 
Entrepreneur and publisher Tim O'Reilly has a great expression that fits here as well.
 
He believes that investors and entrepreneurs "should create more value than they extract."
 
That was the case with the tech industry in the beginning, but in recent years the industry has become frivolous and often looks for quick wins, often following the financial services industry’s playbook.
 
We believe that this is the first time in decades that the technology industry can return to its roots and truly create more value than it captures by unleashing a wave of software production.

Translator | boxi

Reprinted from | 36Kr.com
