AI for governments: with Humans.ai’s Sabin Dima

AI-powered chatbots for government and corporate use

AI is the buzzword of the year. OpenAI’s release of ChatGPT set off the current AI craze: investment in AI-related start-ups doubled in the first half of 2023, and global AI investment is projected to reach around $200bn by 2025.

MediaCat Magazine’s Business Editor Selin Ozkan sat down with Sabin Dima, the founder of Humans.ai (a company building AI-powered chatbots for governmental and corporate use), to discuss ethical AI, legislative sentiment towards the industry and the potential of the technology.


Let’s start by talking about what Humans.ai does. How does it merge blockchain with AI? What does it want to accomplish?

There are a lot of stakeholders in AI: data providers, developers, users, and the small developers or start-ups building on top of different AI applications. The stakeholders don’t trust each other because they don’t know each other. How can we make sure that everybody is working towards the same objective? How can we onboard millions of developers working on the same AI without knowing each other? That’s why we created the first blockchain for artificial intelligence, a framework that lets developers, data providers and all the other stakeholders work on the same AI. It creates a trust layer between all the stakeholders, and a trust layer for all the data. You know AI can be biased. We’re using blockchain for data traceability, to create what is called explainable AI.

Blockchain is an uncapped technology. You can run AI on it, and it’s clear whether your AI is owned by you or not. Imagine that you clone your voice, put it on the blockchain and have full rights to your voice. More than that… you create an economy of AI: you can create a set of rules, onboard stakeholders, and create a non-debatable mechanism to reward stakeholders fairly.

Imagine ChatGPT on blockchain: using blockchain you can decentralise and distribute the value, which is now centralised within one company. On blockchain, ChatGPT can be owned by all of us. If I’m a user and give feedback on a question, I should be rewarded for that.

Can you talk more about how building this product on the blockchain relates to developing an ethical version of AI?

Humans.ai has two main components. One is the infrastructure, the blockchain, where we make sure we’re creating an ethical AI. On top of that, we have an AI platform called Humans EVA, an AI that is able to create other AIs, fine-tune different AI models, and make different AIs work together.

Those AIs are encapsulated in our blockchain, making sure that all the data is traceable. Our main product uses EVA. We can create custom AIs, including an AI for each of the five Cs: country, city, company, community, and citizen.

You were just at the UN General Assembly. How was the sentiment there towards AI? What did the conversation focus on? 

We were happy to present in front of Commonwealth and United Nations representatives, and we presented an AI for the people of the Commonwealth. At the beginning of AI development, governments in general were not necessarily afraid, but they were very cautious about this technology. Now they understand that you can create an AI for good. They understand that you can use artificial intelligence to empower people and create this link between citizens and decision-makers. Because if you look at what’s happening in societies and normal democracies… people have a voice only [every] four or five years, when they vote. There’s this gap between people, their opinions and decision-makers.

We presented an AI that is able to understand, that was created to support citizens in order to strengthen the community. How are we doing that? By having a conversation with every citizen of the Commonwealth, understanding their problems and their sense of urgency. They’re willing to adopt these types of technologies in order to uplift the most vulnerable citizens and create a collective voice.

That can have a huge impact on democracy 2.0, and it can help decision-makers make the best decisions for the people.

That reminds me of tools that governments in Estonia and Taiwan created so that people could participate in parliament. The tools didn’t require a lot of technology, but still not many cities adopted them. How is what you’re doing different from just building an app that lets people text parliament? What gives you confidence that more cities or governments will adopt it over easier and less costly methods of local participation?

There are many reasons. We’re overwhelmed by requests from different governments from all around the world. I’ve had multiple discussions with one of the most powerful governments in the world, and they’re willing to adopt this solution in order to understand what people need. On the other side, there is Ion, the first AI adviser to a government. We’re seeing citizens talk with Ion like they’re speaking to a human. They understand that it’s a third-party entity which is unbiased and can help them send a message to decision-makers.

This is only one component. We’re also seeing how honestly people talk with this AI. They’re taking their time in order to have their voices heard.

This governmental AI has two components. One is the psychologist, which asks questions and tries to understand the problem. On the other side, we have another AI that gives the decision-maker the possibility to have a conversation with the data. Our AI is trained on everyone’s opinions, and decision-makers can have a conversation with one AI as if they were talking, for example, to 19 million Romanians. It will probably be a must for every decision-maker in the future. When I say decision-maker, this can be a politician, a member of the government, a CEO or C-level executive in a company, or the head of a community. Maybe it will be a must to spend at least 20 minutes every morning in conversation with your community members through this AI.

How does the AI unify the opinions of millions of people? Because communities are divided within themselves as well. How do those differences come to the fore when government officials converse with AI chatbots?

One of the things it’s doing is trying to find common ground. But it also depends on who’s asking. It has a huge amount of information, and it’s up to you to really try to understand what those people are saying, because you can ask an infinite number of questions in completely different ways.

There are people leading AI companies who believe that, in the next few decades, AI models will pose a danger to the current nation-state model of governance. How do you think AI will impact governments and governance in the future?

AI will help governments really understand everyone in real time. You can create better policies because you know what the people need in that moment. I think this is the most important part: to create this link, to allow citizens to be heard.

Imagine that you can create an ongoing referendum: this is a project we’re doing in the Netherlands. Together with them, we’re creating a version of Ion for the Netherlands to involve people in the development of laws.

Imagine an ongoing referendum that doesn’t just ask people whether an idea works or not, but involves citizens in the creation process... asking more questions and trying to understand why you like (or dislike) certain laws and how they’re going to impact your life, then bringing all that information together to help the lawmaker make the best decision. I think this is the most important part and the first step in every democracy: to re-establish the link between citizens and decision-makers. Based on this foundation, the sky is the limit.

So you don’t think AI poses a threat to democracy?

No.

Do you plan on rolling your product out to the private sector? And if so, what will that look like?

With Humans EVA (the AI that is able to create other AIs), as I said, we’re able to create custom AIs. We are now working with a big pharmaceutical company, creating an AI to help its customers. The beauty of the technology we’ve created is that it’s industry-agnostic. If we have the data and the objective, we can create a narrow AI that is able to respond to a specific need.

We also have an educational AI with some governments, based on the same framework: an AI that is able to help kids in school understand better and to act like a teaching assistant.

That’s great. Education definitely needs reform, almost everywhere. Finally, what excites you most about the future, when you think about how much investment and attention is going into AI from both the public and private sectors?

What I’m really excited about is that AI can really scale human potential. It can fill this gap between idea and creation. I really believe that every human being is very creative, but [there are] these huge learning curves and budget gaps. Using artificial intelligence, every human being can scale their potential. If you have an idea, you just need to set it and it’s done.

Featured image: Sabin Dima / Humans.ai

Nazli Selin Ozkan, Head of Content & Partnerships at MediaCat Magazine

Selin is Business Editor at MediaCat Magazine. After graduating from Duke University with a degree in political science, she started working in the content department at Kapital Media, on events such as Brand Week Istanbul and Digital Age Tech Summit. She then took on the role of Business Development Manager at Kapital Media, working on several of Kapital Media’s products, such as MediaCat Magazine, Polaris Awards, Brand Week Istanbul and Digital Age Tech Summit. She regularly contributes to MediaCat Magazine, covering media and tech.
