Melanie Subin is the Managing Director of the Future Today Institute (FTI), a strategy advisory firm that specialises in emerging technologies and advises leaders on how to navigate future risks and opportunities. Kriffy Perez is a Senior Expert Advisor at FTI. We got together at SXSW, after they launched their highly anticipated Emerging Tech Trends Report. The interview has been edited for length and clarity.
You launched the Emerging Tech Trends report [at SXSW]. Can you tell us the highlights?
MS: Sure. It’s our 16th edition. We have 666 trends in it, which is not a miscount. It’s not all doom and gloom. We broke it into 14 different books based on topic, because we know that individuals might be looking for certain content. It’s split about 50-50 between technology-based books looking at robotics and drones, for example, and books on industries — looking at things like financial services. Our hope is that readers will be able to familiarise themselves with some topics that they’ve been hearing about but aren’t exactly sure what the impact is going to be, as well as newer trends that are just beginning to show on the horizon.
Well, I was hoping you would talk about generative AI…
MS: We can give you highlights from different topics. We have so many different books. I think AI is definitely the biggest of all of them. Amy [Webb] writes the AI book herself. She’s been writing the AI book since the inception of the Trend report. And you probably know she wrote a book on AI a few years ago, so that’s her spot. I think generative AI is really interesting. We’ve been watching the technology come along over the past few years, waiting for GPT-3 to come out and watching things like OpenAI develop. So it wasn’t a surprise to us. But I think that making it consumer-friendly was really the inflection point.
Anybody can go access it right now and test it out and try it. I think that has democratised generative AI. That, for me, is the inflection point — that Midjourney, Lensa, ChatGPT and all these different tools are suddenly available for free. Which means that there are applications emerging that even the creators of those tools hadn’t imagined, because they couldn’t put themselves in the seat of a coder, a creative designer, an author or a kindergarten teacher. I think when you democratise technology, people start to use it in really interesting ways.
KP: Yeah, the application of it and where it’s applied accelerates exponentially. Because it’s top of mind for people who would have never otherwise thought of it, let alone used, experienced and touched it. That’s what’s so amazing about ChatGPT and AI right now. We’ve had brand new technologies before, but they aren’t normally as accessible to so many people so quickly. It’ll be fascinating, I would say, in the next two to three years, to just see how people are integrating that into a brand new product that has nothing to do with AI. A financial services product, a transportation product, an infrastructure product… all of these widgets that have components that weren’t considered before.
MS: I think one of the unique things about generative AI, and ChatGPT and Midjourney in particular, is that for the first time with consumer-facing tools like these, it’s really difficult to discern between what’s real or human-created and what’s generated by an AI. Two or three years ago, or even less than that, if you went onto a business website and used their chat function, you could tell it was an AI because it couldn’t really answer your questions and it wasn’t very good.
KP: ‘Let me please forward you to a representative.’
MS: Yeah, exactly. This is the first time in my memory that it’s really difficult to tell. If I wrote a 500-word essay on something and then asked ChatGPT to write a 500-word essay on the same thing, I don’t think anyone would be able to tell who wrote which one. And similarly, with things like Midjourney and photos, in some cases what they create is so hyper-realistic, you wouldn’t know that it didn’t actually happen. I think that can be exciting from a creative or an assistive perspective. What concerns me, though, is that so few people who are accessing these tools understand them.
The most obvious example is when ChatGPT was released to the public and journalists were having dialogues with ChatGPT and the Bing search and stating that they were getting emotional responses back. There have been questions about whether AI is aware. Is it sentient? Is it conscious? It’s trained on data, books and text so that it can mirror emotional reactions. But most people accessing it don’t understand where that content is coming from, which is dangerous because it means they’re even less able to discern what’s real or what’s reliable, true or accurate. That’s a real risk.
Last year it was the metaverse, the year before, crypto… all of these are new, big, revolutionary technological developments that definitely all matter, in varying degrees. But with scale, I think, comes hype and anxiety. How much of generative AI do you think is hype? And how is it different to other hyped technologies?
KP: That’s a good question. I’m a DeFi minimalist, so a crypto minimalist. I think what’s slightly interesting or different about AI is the reach that it has. If you look at the adoption curves — a couple of days to reach a million users, versus everything else? Does that mean it has staying power in its current form? No, but what it does mean is that a lot more people are exposed to it sooner. Everybody knows about this, so they’re trying to think through how to use it. What that looks like is a different question. Personally, I think we don’t exactly know what part of AI is tangible. You’ve got to try it, you’ve got to see it, you’ve got to test it for your specific use case. There are obvious use cases that make a lot of sense. Call centres, routine conversations and interactions that you’re having… we teach a class, and it’s going to be very fascinating to see how students are going to use it. How are they going to use it to make their life easier or work faster? How is that going to impact what they learn or don’t learn? Because it’s the same thing as any new tool.
MS: Yeah, I think the applications can be hype. I think the underlying technology is real. For example, NFT art was a big thing, like a year ago. Bored Ape — I think that was pretty hyped, and there were plenty of people who did make a lot of money initially, and plenty of people who invested in it, but I think the majority of the world kind of sat back and said, ‘Why? I don’t understand why I need to pay millions of dollars for a cartoon ape. That doesn’t make sense.’ Sure, that was hype, but the underlying tech of being able to tokenize previously intangible things — that’s real.
All the things now that could be tokenized and then monetized in a more virtual, digital landscape that previously couldn’t be captured — that’s where the real potential is. Similarly, maybe Midjourney is hyped, maybe Midjourney kind of fizzles out… after a while, people get over the shock and awe of being able to type in a few words and get a brand new photo. But the fact that the model underneath it can create that is where the potential lies. In the long run, we’ll see it more embedded in tools.
What about the anxiety around all of this? Do you think that’s warranted?
MS: I think caution is warranted. I think anxiety is a natural response. It’s something that people have not encountered before. They can’t fully understand it yet. I think that’s a natural response. I don’t think we have anxiety about it. I think we have caution about it as foresight experts, because we can see, just like Amy’s talk said, how it could go very, very well or very poorly. We’re watching for signals of that. I will say, I feel like historically there’s almost always pushback against new technological revolutions. ‘We don’t need that.’ There was a lot of scepticism, fear, anxiety and caution around the internet. And that was similarly revelatory. It really changed the way businesses operated, the way that society communicated. I think there’s always hesitation because it’s a completely new way of doing things.
When we think about all these catastrophic outcomes, potential ones, the role of government becomes immediately apparent. You work with government entities as well. In your experience, are they strategically preparing for any of these catastrophic outcomes?
MS: Which government?
The US. Because US companies control so much of this tech.
MS: The US government is slower to innovate its regulations. The way that America’s political system works, politicians and government officials are incentivized to make decisions that are at least not in the worst interest of corporations, I’ll put it that way. They can’t always make decisions or laws that are in the best interest of corporations, but our political system does mean that the impact regulation will have on markets and companies shapes the way that our regulation is formed. Europe, European countries, the EU are doing a much better job of looking towards creating privacy regulations and AI ethics regulations, but regulation always lags innovation.
So I think that the government in the US and governments in Europe are probably not moving fast enough to create guardrails, which means it’s going to be up to more local municipalities, corporations and companies to create those guardrails until regulations are in place. There are a lot of changes happening, and the individuals making the laws and regulations are struggling to keep up. Crypto staking is a great example of that. It was a big topic a couple of months ago and throughout late last year, as we saw multiple different crypto companies kind of tank, and there has been a lot of debate about whether or not crypto staking falls under securities regulations.
The SEC is definitely cracking down now, but it took them a while to even figure out what was going on, whether that was an activity that should be regulated under those laws, and whether they could or should step in. By the time they did, it was obviously too late to prevent those crypto crashes from happening. They can try to prevent those types of things from happening again, but this is also somewhat due to the pace of innovation. These technologies are developing really fast, and unless regulators stay ahead of them to make sure they’re informed about what these technologies are and how they work, then inevitably we’re going to learn the hard way and then put guardrails into place.
KP: And there’s a lot of specialisation required. You see more and more corporations working with regulators to try and not just teach them, but also arrive at a desired outcome rather than just happen upon something that works. You see a lot of mixed messages and a lot of friction around who’s trying to do what, in which systems and how. Then there’s risk aversion too — things are moving really fast. Do you want to be someone stirring the pot, or do you want to be someone slowing the pot down? Government entities tend to want to slow the pot down to give people stability.
So what responsibility do you think falls on businesses to ensure all these technologies are developed ethically and responsibly?
KP: ‘Do the right thing’ is the simple answer. If you think your shareholders are going to suffer in the long term, it’s not a good thing.
MS: Yeah, and you know what? I think in this evolving ESG environment, doing the right thing or developing technology responsibly is doing the right thing for shareholders. Increasingly, especially, younger generations are voicing that they want to see companies act responsibly and ethically. There will be companies who will innovate in whatever way they can, as fast as they can, whether or not it’s the right thing to do, whether or not it’s responsible. That may work for a short time. But I think ultimately, especially when it comes to new technology, consumers are so sceptical. If the technology is developed by a company in a way that consumers begin to feel like they can’t trust, then ultimately that product or service isn’t going to be durable. So in the long run, it’s not good for shareholders. I think the environment we’re in right now, those two things go hand in hand.
KP: I think that’s huge, especially with the market corrections in the VC landscape, where that is more and more of a focus. You have to have a board that has a strong opinion, a vision for what the company should be doing. It’s been really fascinating, especially in 2023, to see the pendulum swing back to stronger boards, stronger convictions, stronger reasons, which ties into ESG and all of the holistic ‘what’s good for everybody’, not just ‘what’s good for this next minute, or good for one specific personality’. There’s a lot less founder worship.
The tech industry is such a bubble and moves at such speed. But because it’s a bubble, that doesn’t really translate into as much of an impact on wider society. Some developments the industry goes through, the public doesn’t even know about. It makes me think about the role of adoption. What do you think about the relationship between how fast change occurs versus how we adopt it?
MS: I think adoption is a huge lever in why things take longer to scale than the public thinks they will. I’ll use electric vehicles as an example. Electric vehicles are available. They’re on the market today. You can go buy one. But we still see that the vast majority of vehicles on the road are gasoline-based. That’s an example of where the reasons for adoption are so multifaceted. Is it more expensive than the way I’ve been doing it? Is there any reason to change what I’m doing to this new way? People tend not to dramatically change the way they do things until they need to, and I think that’s true of companies as well. So adoption plays a huge role.
For example, ChatGPT: we talk about how quickly it got a million users, but most of those users were just people trying it out. They’re not changing the way they go through their lives. They’re just playing with it. True adoption of generative AI will come when large organisations begin implementing it in their processes, when it becomes embedded in consumer applications, so that people are using it throughout their day in a sort of invisible way. Adoption takes a long time, and it doesn’t necessarily have anything to do with how ready the tech is. It can also be about how expensive it is, how different it is, how unreliable people feel it might be. I think adoption is one of the key levers in how long things take, and why people love to talk about the next 3-5 years: but a lot of the things we talk about and look at are at least 10 years out.
KP: Adoption is a fascinating thing, because I think it’s not a driving factor. It’s a result of a lot of different things that are happening. People struggle to deconstruct what causes, accelerates or slows down adoption. The credit card industry is always a really fun one, because contactless payments and Apple Pay took a very long time to be adopted. People in the industry, especially, were struggling to figure out why adoption was so slow. The user experience didn’t deliver enough additional value to actually drive change. The first attempt was, ‘I’m going to try and create this user experience that’s better, that’s going to drive enough value for everyone in the whole ecosystem to change.’ That was a miscalculation. They then had to go back and restructure what they were trying to do and say: ‘Now instead of the user experience, I’m actually going to drive and push it from the other end, from the system standpoint, and say that it’s much better for the efficiency of the fraud and the risk.’ They had to drive it through there. Most people don’t think about how adoption is going to work at the scale of an industry, and most people don’t iterate on that.
A last question: what piece of tech are you most excited about in the near future?
KP: I relatively recently got autopilot in a non-Tesla vehicle, and I’m most excited to see how that kind of assistive technology permeates beyond just driving or a specific highway use, which is, I guess, sort of adjacent to generative AI. I’m really interested to see how much better we can get at technology that gives me an assist.
MS: I’m really excited for tokenization — not necessarily tokenization of art, but tokenization as something that can make things more seamless for people. So tokenization of property records, work experience, educational credentials… I think there’s a real opportunity to create more connectivity between people’s backgrounds, what they carry with them and who they are, and make it easier for them to interact with all different kinds of systems. I’m excited for that, and for AR, although I will admit that I’m having a hard time these days figuring out, if it’s not glasses, then what it is.
And glasses have been a really big barrier as far as adoption goes. I’m not sure right now whether or not we can overcome that barrier. I think this technology is going to be adopted alongside things like distraction minimization — minimising the notifications that you get — because AI can make a lot of those decisions for you about how to respond, or whether or not to respond. I think it has the potential to create a much more enjoyable being-in-the-moment experience. Especially because, over the past decade, everybody’s face is in their phone all the time. I’m looking forward to a future where that is not the case.
Featured image: This is engineering / Pexels