TLDR: AI is already better and faster at writing basic research reports than many humans. Soon, it will be better and faster than… me. So, are knowledge-workers like me toast? I think and hope not. I think and hope that, as long as humans need to make decisions and work together, we can evolve our skills and still add value.
I’m experimenting with AI to learn about the “newsroom of the future.”
I’ve learned a lot. (Specifics here).
I’ve concluded that AI will not rapidly “replace all human journalists” or “bankrupt world-class news organizations.” But it will rapidly change how modern newsrooms work.
AI can already do many newsroom tasks pretty well and with breathtaking speed. It doesn’t do them as well as the best humans. But it does them better than some humans.
Importantly, it also does them 100X-1000X faster than all humans, including me.
In most professional endeavors, including news production, speed matters.
Also — and this is the really important part — AI does many of these tasks well enough to be useful.
So I think AI tools will radically increase the productivity and effectiveness of journalists and news organizations who know how to use them intelligently. And they will make these journalists better and more valuable.
(ChatGPT created the image above, btw. It won’t create photos of actual people, so I asked it to draw the scene with someone who “looks like” me. For some reason, it often makes me look like a friend of mine.)
Sorry, research analysts — we’re not safe, either
In my latest experiment, I discovered that AI is also already pretty good at a specific skill I think I am pretty good at — namely, research and analysis.
This means that, in this arena, too, I need to figure out how to use AI intelligently, or I will become obsolete.
I had been telling myself that, sure, AI might be competent at the “commodity” parts of research — the fast gathering and synthesis of public facts, for example, or the condensing of tomes, or note-taking, or the drafting of internal communications and documents.
But I didn’t realize AI was already as good at researching and writing a final product — a research report — as, say, a competent junior analyst.
And… 1000X faster.
Am I still better at research, analysis, and report-writing than AI, with my three decades of professional experience? I hope so! But only if you give me a human timeframe in which to do the work.
And even on the final product itself (regardless of time/cost), the difference between me and AI is not as big as I had imagined. And AI has only been at this for a few years, whereas I have been at it… forever.
Also, again, speed and cost matter.
And AI absolutely kicks ass at speed and cost.
It’s nice to think that our work just takes as long as it takes and costs what it costs — and that it’s fine to spend weeks and thousands of dollars doing something if the result is better than if we only spent minutes and pennies. But, alas, if we’re trying to make a living in a competitive economy, that’s usually fantasy.
Most human customers care about speed and cost. So if we want people to buy our products and services, we also need to care about speed and cost. Or we’ll lose business (and jobs) to competitors that do.
Remember, it’s not about being “better.” It’s about being “good enough.”
As anyone who has read The Innovator’s Dilemma can tell you, disruptive technologies — like AI — are not better than existing technologies. They’re just cheaper, faster, and more convenient. And, most importantly, for some applications, they’re “good enough.”
For basic background research reports, AI is already much cheaper, faster, and more convenient than human researchers. And, in many cases, the resulting reports are… good enough.
For example, I’ve recently begun to research and analyze three big questions:
What impact will AI have on the job market?
Will AI-powered search engines disrupt Google’s search business?
Will Tesla’s cameras+AI approach to self-driving ever allow the company to achieve full autonomy — or will Tesla be forced to also use LIDAR and mapping technologies like the far-more-successful-at-autonomy Waymo?
The first (old-fashioned, human) step in research projects like this is to find, read, and digest what’s already out there. So I did that. I searched for (Googled) and read relevant news, research, and analysis. And after, say, a day of looking into each topic, I got a feel for the facts and arguments.
Then, drawing on 30+ years of professional analytical and business experience, I developed my own initial conclusions — and began writing and speaking about them.
I described my basic is-Google-getting-disrupted thesis here. I’ll share the Tesla-Waymo and AI-impact-on-jobs theses soon.
Well done, John Henry! Now bring on the steam shovel…
After doing this research the old-fashioned way, I decided to see how AI would handle it. Specifically, I tried the “Deep Research” and “analysis” features of three leading AI services — ChatGPT, Perplexity, and Claude.
One of the most fascinating features of these “report-writing” functions is watching the AIs work. They share what they’re doing and “thinking” as they do it — or at least what they’re purportedly doing and thinking.
And what they’re doing and thinking is similar to what I was doing and thinking as I researched and analyzed these questions the old-fashioned way. They just did it 1000X faster.
I started with ChatGPT. I asked for a 3-5 page report about AI’s impact on the job market, with a focus on the legal, finance, and creative sectors. (The impact on the market for software developers is more widely known). I suggested starting with recent work by Harvard Professor David Deming, who is really smart on this topic.
On the first try, ChatGPT worked for a while and then got hung up. It was trying to make a chart. Apparently that led to a glitch.
So I tried again, this time telling ChatGPT to skip the charts.
Six minutes later, ChatGPT delivered a 23-page report, complete with sources.
Now, of course, I’m human, so it took me ten times longer to read the report than it took ChatGPT to create it. Next time, in consideration of my slow (but energy efficient!) human brain, I’ll ask for a TLDR summary… or plead with ChatGPT to stay within my 3-5 page guideline.
But, more importantly, how good was the report?
Was it a staggering work of analytical genius?
No.
Did it contain errors and fabricated sources?
Probably. But it seemed conceptually correct, and I didn’t see any obvious errors.
Was it a competently researched and written backgrounder?
You bet.
In fact, if I had given a smart, eager research associate this assignment and gotten this report back in a couple of days, I would have thought, “Hey, this is pretty good. Kid’s got talent.”
And, importantly, ChatGPT didn’t get me the report in “a couple of days.”
It got it to me in 6 minutes.
That means, if I amortize the cost of my $20/mo ChatGPT subscription over that amount of time, the report cost me… three-tenths of a penny.
($20 per month ÷ (60 minutes per hour × 24 hours per day × 30 days per month) × 6 minutes ≈ $0.0028.)
That subscription, by the way, equates to an annual “salary” for my ChatGPT research associate of $240. And I even got an annual discount!
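For anyone who wants to check the math, here is the amortization as a quick back-of-the-envelope calculation (a minimal sketch; the $20/month subscription price and 6-minute turnaround are the figures from the text, the rest is arithmetic):

```python
# Back-of-the-envelope cost of one 6-minute AI-generated report,
# amortizing a $20/month subscription over a 30-day month.

MONTHLY_FEE = 20.00                 # dollars per month (figure cited above)
MINUTES_PER_MONTH = 60 * 24 * 30    # 43,200 minutes in a 30-day month
REPORT_MINUTES = 6                  # time ChatGPT took to produce the report

cost_per_minute = MONTHLY_FEE / MINUTES_PER_MONTH
report_cost = cost_per_minute * REPORT_MINUTES   # about three-tenths of a penny
annual_salary = MONTHLY_FEE * 12                 # implied yearly "salary"

print(f"Cost per report: ${report_cost:.4f}")    # → Cost per report: $0.0028
print(f"Annual 'salary': ${annual_salary:.0f}")  # → Annual 'salary': $240
```

The same arithmetic scales the comparison in the next paragraph: a $100,000 salary divided by $240 is a roughly 400-to-1 cost difference.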
I love employing and working with humans. As long as I can make the finances work, I’d much rather employ and work with an inspiring and excellent human team than a laptop. But our economy is competitive. And the difference between, say, a $100,000+benefits research-associate salary and a $240 subscription is… significant. So if a competitor staffs their research boutique with ChatGPT while I staff mine with humans, the competitor will be able to charge a tiny fraction of what I do for research reports — and produce them 100X faster.
Again, I assume ChatGPT’s report contains errors. But a report from a non-subject-expert human researcher would also likely contain some errors. Because, well, we’re all human.
Here’s the report, so you can see for yourself. The “researcher” headshot, by the way, is of Casey Alvarez, the “economist” AI colleague that ChatGPT and I created. You can read about Casey and my other AI colleagues here. (The headshot makes her seem more… well… real… doesn’t it?)
And, in case you don’t want to read 23 pages, I’ve also included a shorter report, written by Sierra Quinn (actually, Perplexity), another of my AI colleagues. Sierra also researched and produced her report in minutes.
The TLDR conclusion, btw, is this — my words/view, based partly on Casey and Sierra’s work and partly on my own research and thinking:
AI will change jobs, eliminate jobs, and create jobs — the same way technology always has. Job-doers who learn how to use AI intelligently will thrive. Job-doers who don’t will fall behind. AI Luddites will become like human phone operators after telcos moved to electronic switching: good at something the world no longer needs.
So, is that it? Am I obsolete?
I hope not.
I hope — or at least want to think — there are still things I can do as a journalist, analyst, communicator, and human that ChatGPT can’t. I hope that these things will enable me to create enough value for my human customers that they keep reading and listening to me.
What are these things?
People say they value my work because I see the big picture, focus on what really matters, apply common sense, and explain concepts in simple and engaging ways. In other words, people say I help them learn and think about things they care about in ways they find helpful and worthwhile.
The product/service I (and other analysts) provide, in other words, is not really “research reports.” Most busy people don’t actually have time to read research reports, no matter how good the reports are — and no matter who (or what) wrote them. And research reports can’t make people’s decisions for them. What helps us make decisions is often hearing a range of views from people whose perspectives and personalities we trust, value, and/or enjoy.
I hope, for now, I can still help people make decisions — or, at least, inform, entertain, or otherwise use their time in ways they find worthwhile.
If so, this advantage doesn’t feel particularly durable.
Now that I know how competent and fast AI is, I will use it to make my own research work better and faster — the same way I’ve used many other key tech innovations to improve and communicate over the past 30 years.
(Typewriters, for example. Then PCs. Then the Internet. Then email. Then texting. Then search engines. Then vast repositories of research, data, history, and expertise. Then the iPhone. Then Slack. Then Zoom. Then remote work. Etc.)
But that advantage — Henry + AI! — doesn’t feel all that durable either.
Rather, it feels like an in-between stage — like when IBM’s Deep Blue could beat the world-champion-human-chess-player Garry Kasparov, but Kasparov believed that, if they gave him a supercomputer, too, he and the supercomputer could beat Deep Blue.
That may have been true for a while.
But not now.
Now, we’re in an era when Google DeepMind’s AlphaGo Zero can learn the game Go by playing itself and, in 3 days, get better than the best human Go player.
So I don’t expect “Henry + AI!” will be better at writing research reports than “just AI” for long.
But, thankfully for me and other humans, most “knowledge-work” jobs involve actual decision-making and working with other humans — not just producing research reports. Any decision or project that involves people working together and communicating with each other will benefit from human input for a while.
Yes, our jobs will change as we learn to use AI. But as long as we are working with other humans, we will want colleagues who treat us like people and inspire us to work together (and with AI!) to create a better future.
Fortunately for me (and you), that’s what most human workers actually do.
Thank you for subscribing to Regenerator! We’re the publication for people who want to build a better future. We believe that the best way to solve our problems is to innovate our way out of them. We analyze the most pressing questions in the innovation economy — tech, business, markets, policy, culture, and ideas.