Super intelligence: How collective human reasoning can elevate AI to the next level
With more than five decades working in artificial intelligence, Dr Thomas Kehler has seen its evolution from a problem-solving tool to a headline-grabbing titan that increasingly touches all aspects of our lives.
As former CEO of Connect (one of the first ecommerce companies to go public) and early social marketing company Recipio (serving clients such as LEGO, NBC and Procter & Gamble), and later as chief scientist and co-founder of human-interactive AI firm CrowdSmart, Kehler has become one of the most prominent voices espousing the many virtues of artificial intelligence.
With a reputation for establishing relationships between innovative technologies and business needs, he holds four patents in the application of AI technology in amplifying human intelligence and has served on the Information Technology Advisory Board of the National Research Council and various corporate, academic and non-profit boards.
Here, Dr Tom Kehler talks about rising fears around the current generation of AI and his hopes for the future…
AI has been evolving for decades but there has been an explosion in the sector in recent years. How does CrowdSmart fit into the current landscape?
It’s important to understand that much of the current generation of AI is based on statistical learning from data. In other words, it finds patterns in past data and uses them to answer questions or predict the future.
CrowdSmart’s AI extends statistical AI to include integration with human reasoning – requiring the AI to explain its thinking in a form understandable by humans. CrowdSmart looks beyond just examining data and amplifies collective human intelligence. We use the AI to listen to humans collaborating to solve a problem or make a decision together. The result is a kind of super intelligence that amplifies and extends the potential of human creativity. It was the collective intelligence of humans that created AI, so we believe embracing human collective intelligence is the most productive and safest path forward for AI.
Judea Pearl, a professor of AI at the University of California, Los Angeles, once said, “You are smarter than your data”, meaning that the data we leave behind is not the full measure of the human capacity for imagination and intelligence. CrowdSmart is about tapping into that collective intelligence of human beings, aided by artificial intelligence to help them with the process.
Charlie Brooker, the writer of the Netflix series Black Mirror, recently tried to write a new episode using ChatGPT and found that, while the resulting script was serviceable, it was very derivative of his previous work. Is this indicative of this kind of AI’s methodology?
ChatGPT regurgitating what someone had said in the past in a different form is probably its greatest danger – it’s generating what is plausible and not necessarily what is true.
You can do this experiment yourself, although they may have fixed it by now… Ask ChatGPT why Nike is better than Adidas and it’ll give you a beautiful answer. Then ask why Adidas is better than Nike, and it’ll give you the same answer.
It’s a plausible answer but it’s not based on contextualised data and it highlights the importance of data provenance. ChatGPT doesn’t care, and that’s a really dangerous thing.
As someone who has worked in the field of AI for 50 years, is the rapid rise of this new generation of AI and its implementation in many new areas a source of frustration for you?
It’s a deep frustration because, unfortunately, many who are now talking about AI do not have the in-depth knowledge that practitioners have developed. For this reason, we have high potential for error. What happens in this kind of environment is that people who wish to make money pass over the issues or dangers and fan the flames. ChatGPT, for example, is incredibly cool but should be viewed as more of a demo than a working solution. You have to be aware that it is capable of generating fake and potentially dangerous results.
That has been my biggest concern and CrowdSmart’s answer is to let ChatGPT be an intelligent agent along with real humans reasoning over a problem. If ChatGPT comes up with a good idea, then it gets used. But, if it doesn’t, let the humans curate the stuff that isn’t true. It’s important to regulate it and filter it through collective human intelligence… that will get us back on the right track.
From fake news to deepfakes, troubling questions are further stoking concerns about the ethical implementation of AI. What are your thoughts on this?
We’re working with some very large companies, and we’re developing alliances aimed at a transparent, best-use strategy for AI.
Tristan Harris is one of the big naysayers around AI in general, but one thing he did say is that the rise of AI is similar to how we treated the nuclear arms race: there came a point where it went from fear to constructive action. I believe the more awareness there is of the potential for misinformation, the more people will engage with it for the better.
A group called the Boston Global Forum has laid out a framework for the safe use of AI, which I think will start that process of reducing risk. We’re also going to be working with NATO on disinformation campaigns. The more we create that sense of critical thinking and filtering, the better off we’ll be.
British neuroscientist Karl Friston has developed an AI model of the brain based on something called the ‘free energy principle’. It looks at how the human brain constructs its model of the world in order to survive. We formed CrowdSmart around that notion, except we’re building a model of the collective brain of humans – literally a hive mind of what people think about an outcome. We use the underlying AI notion of the Friston model, and we’re now joining with other companies with expertise in agent-based simulation so that we can quickly show the implications of outcomes. We’re creating this common-good AI, and it’s going to be open source and designed so it doesn’t get corrupted, so that it’s supportive of life and humanity, not destructive to us.
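For readers curious about the formal idea behind Friston’s principle, it is usually stated as the minimisation of variational free energy; the sketch below is a standard textbook formulation, offered as background rather than taken from the interview itself:

```latex
% Variational free energy of an internal model q(s) over hidden states s,
% given observations o generated under a world model p(o, s):
F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
  = D_{\mathrm{KL}}\big[\,q(s)\,\|\,p(s \mid o)\,\big] - \ln p(o)
% Because the KL divergence is non-negative, F upper-bounds the "surprise"
% -\ln p(o); minimising F drives q(s) towards the true posterior p(s | o),
% i.e. the agent's internal model comes to match the world it observes.
```

In CrowdSmart’s framing, as described above, the “agent” is the collective rather than an individual brain: the system maintains a model of what a group of people thinks and refines it as their deliberation unfolds.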
The possibilities of AI technology are mind-blowing. What do you think the landscape will look like in ten years?
I’m a genuine believer that if we all work together, AI and the amplification of human intelligence can be extremely powerful, including in the world of scientific research and development.
There’s further potential if we use a neuroscience-based approach to AI, not just a mathematical approach – something that’s based in the laws of human nature. I think we’ll build something that will coexist with life and even play into evolutionary principles that are supportive of the growth of intelligent life.
I think we’re going to see an ever-increasing set of capabilities, including what we call ‘Wet AI’, where bionic integrations will help people to live with a variety of diseases and even extend human capabilities.