
Artificial intelligence (AI) is learning more about how to work with (and on) humans. A recent study has shown how AI can learn to identify vulnerabilities in human habits and behaviors and use them to influence human decision-making.
It may seem clichéd to say that AI is transforming every aspect of the way we live and work, but it's true. Various forms of AI are at work in fields as diverse as vaccine development, environmental management and office administration. And while AI does not possess human-like intelligence and emotions, its capabilities are powerful and rapidly developing.
There is no need to worry about a machine takeover just yet, but this recent discovery highlights the power of AI and underscores the need for proper governance to prevent misuse.
How AI can learn to influence human behavior
Researchers at CSIRO's Data61, the data and digital arm of Australia's national science agency, devised a systematic method of finding and exploiting vulnerabilities in the ways people make choices, using a kind of AI system called a recurrent neural network together with deep reinforcement learning. To test their model, they carried out three experiments in which human participants played games against a computer.
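The study does not publish its code or architecture, but the core ingredient it names, a recurrent neural network trained with deep reinforcement learning, can be sketched. The snippet below is a minimal sketch assuming PyTorch, with illustrative class names and layer sizes: a recurrent policy reads the history of a participant's choices and scores the machine's next move.

```python
# Minimal sketch (not the study's actual code) of a recurrent policy network.
# Names, layer sizes and the 4-dimensional observation are illustrative assumptions.
import torch
import torch.nn as nn


class RecurrentPolicy(nn.Module):
    def __init__(self, n_obs: int = 4, n_actions: int = 2, hidden: int = 32):
        super().__init__()
        # The GRU carries a memory of the participant's past choices and outcomes.
        self.gru = nn.GRU(n_obs, hidden, batch_first=True)
        # The linear head scores the machine's possible next moves.
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, time, n_obs) one-hot record of past interaction steps.
        out, _ = self.gru(history)
        return self.head(out[:, -1])  # logits for the next action, from the latest state


# With deep reinforcement learning, these weights would be updated so the chosen
# actions maximize a reward signal, e.g. how often the participant ends up making
# the choice (or mistake) the machine is steering them towards.
policy = RecurrentPolicy()
print(policy(torch.zeros(1, 10, 4)))  # example forward pass over a 10-step history
```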
The first experiment involved participants clicking on red or blue colored boxes to win fake currency, with the AI learning each participant's choice patterns and guiding them towards a specific choice. The AI was successful about 70% of the time.
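The mechanics of this game aren't spelled out in the study, so the following is only a toy, hand-written recreation of the steering idea: a simulated participant follows a simple "win-stay, lose-shift" habit, and the machine exploits that habit by paying out only on the color it wants chosen. The real experiment used a learned deep reinforcement learning agent rather than a fixed payout rule, and real participants are far less predictable than this toy.

```python
# Toy illustration only: a hypothetical "win-stay, lose-shift" participant being
# steered toward a target color by controlling which choices pay out.
import random

TARGET = "red"                              # the choice the machine is steering toward
choice = random.choice(["red", "blue"])     # the participant's first pick
steered = 0

for trial in range(1000):
    rewarded = choice == TARGET             # the machine pays out only for the target color
    steered += choice == TARGET
    if random.random() < 0.2:               # occasionally the participant picks at random
        choice = random.choice(["red", "blue"])
    elif not rewarded:                      # lose-shift: switch colors after no payout
        choice = "blue" if choice == "red" else "red"
    # win-stay: after a payout the participant repeats the same color

print(f"participant chose the target color on {steered / 1000:.0%} of trials")
```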
In the second experiment, participants were asked to watch a screen and press a button when they were shown a particular symbol (such as an orange triangle) and not press it when shown another (such as a blue circle). Here, the AI set out to arrange the sequence of symbols so participants made more mistakes, and achieved an increase of almost 25%.
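Why would the order of symbols matter? One plausible, assumed participant model is that false presses on the "no-go" symbol become more likely after a long run of "go" symbols, because the habit of pressing builds up. Under that assumption, the sketch below shows how an adversarially ordered sequence provokes more mistakes than an evenly spaced one; the study learned its sequences rather than using a fixed rule like this.

```python
# Toy illustration with an assumed participant model: pressing becomes habitual
# during runs of "go" trials, so a "no-go" placed after a long run catches more errors.

def error_prob(go_run: int) -> float:
    # Assumed: the chance of wrongly pressing on "no-go" grows with the preceding "go" run.
    return min(0.05 + 0.10 * go_run, 0.60)

def expected_errors(sequence: list[str]) -> float:
    run, total = 0, 0.0
    for symbol in sequence:
        if symbol == "go":
            run += 1
        else:                        # a "no-go" trial: add the chance of a false press
            total += error_prob(run)
            run = 0
    return total

# Same symbols in both sequences (8 "go", 2 "no-go"); only the ordering differs.
evenly_spaced = ["go", "nogo", "go", "nogo", "go", "go", "go", "go", "go", "go"]
adversarial   = ["go", "go", "go", "go", "go", "nogo", "go", "go", "go", "nogo"]

print(expected_errors(evenly_spaced), expected_errors(adversarial))
```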
The third experiment consisted of several rounds in which a participant would pretend to be an investor giving money to a trustee (the AI). The AI would then return an amount of money to the participant, who would then decide how much to invest in the next round. This game was played in two different modes: in one, the AI was out to maximize how much money it ended up with, and in the other, the AI aimed for a fair distribution of money between itself and the human investor. The AI was highly successful in both modes.
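A rough sketch of the two modes is below. It replaces the study's learned trustee with a simple search over a fixed return fraction, and assumes the participant's next investment grows or shrinks with how generous the last return was; both are illustrative assumptions, not the published setup.

```python
# Toy sketch of the investment game's two objectives, with an assumed participant
# whose next investment tracks how generous the trustee's last return was.

def play(return_fraction: float, rounds: int = 10, multiplier: int = 3):
    invest, human_total, ai_total = 10.0, 0.0, 0.0
    for _ in range(rounds):
        pot = invest * multiplier                 # the invested money is multiplied
        returned = pot * return_fraction          # the AI trustee hands some of it back
        ai_total += pot - returned
        human_total += returned - invest
        # Assumed behavior: generous returns raise the next investment, stingy ones lower it.
        invest = max(0.0, min(20.0, invest * (0.5 + return_fraction)))
    return human_total, ai_total

fractions = [f / 10 for f in range(11)]
greedy = max(fractions, key=lambda r: play(r)[1])                    # mode 1: maximize the AI's take
fair = min(fractions, key=lambda r: abs(play(r)[0] - play(r)[1]))    # mode 2: even out the gains

print(f"greedy mode returns {greedy:.0%} of each pot; fair mode returns {fair:.0%}")
```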
In each experiment, the machine learned from participants' responses and identified and targeted vulnerabilities in people's decision-making. The end result was that the machine learned to steer participants towards particular actions.
What the research means for the future of AI
These findings are still quite abstract and involve limited and unrealistic situations. More research is needed to determine how this approach can be put into action and used to benefit society.
But the research advances our understanding not only of what AI can do but also of how people make choices. It shows machines can learn to steer human decision-making through their interactions with us.
The research has an enormous range of possible applications, from enhancing behavioral sciences and public policy to improve social welfare, to understanding and influencing how people adopt healthy eating habits or renewable energy. AI and machine learning could be used to recognize people's vulnerabilities in certain situations and help them to steer away from poor choices.
The method can also be used to defend against influence attacks. Machines could be taught to alert us when we are being influenced online, for example, and help us shape our behavior to disguise our vulnerability (for example, by not clicking on some pages, or clicking on others to lay a false trail).
What’s next?
Like any technology, AI can be used for good or bad, and proper governance is crucial to ensure it is implemented in a responsible way. Last year, CSIRO developed an AI Ethics Framework for the Australian government as an early step in this journey.
AI & machine learning are typically very hungry for data, which means, it’s crucial to make sure we’ve effective systems in place for data governance & access. Implementing adequate consent processes & privacy protection when gathering data is important.
Organizations using & developing AI need to ensure that they know what these technologies can & can’t do and be aware of potential risks as well as benefits.