AI algorithms could disrupt our ability to think

Last year, the US National Security Commission on Artificial Intelligence concluded in a report to Congress that AI is “changing the world.” It is also changing the mind, as the AI-powered machine increasingly becomes the mind. This is an emerging reality of the 2020s. As a society, we’re learning to rely on AI for so many things that we may become less curious and more trusting of the insights AI-powered machines give us. In other words, we could already be outsourcing our thinking to machines and, as a result, losing a portion of our agency.

The trend toward greater application of AI shows no signs of slowing down. Private investment in AI hit a record high of $93.5 billion in 2021, double the amount from the previous year, according to the Stanford Institute for Human-Centered Artificial Intelligence. And the number of patent filings related to AI innovation in 2021 was 30 times higher than in 2015. This is evidence that the AI gold rush is in full swing. Fortunately, much of what is achieved with AI will be beneficial, as evidenced by examples of AI helping to solve scientific problems ranging from protein folding to Mars exploration, and even to communicating with animals.

Most AI applications are based on machine learning and deep learning neural networks that require large data sets. For consumer applications, this data is gleaned from personal choices, preferences and selections on everything from clothes and books to ideology. From this data, apps find patterns, leading to informed predictions of what we would likely need or want or find most interesting and engaging. Thus, the machines provide us with many useful tools, such as recommendation engines and 24/7 chatbot support. Many of these apps seem useful — or, at worst, benign.
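As a rough illustration of that pattern-finding, here is a minimal sketch of user-based collaborative filtering, one common approach behind recommendation engines. Everything in it is invented for illustration — the toy ratings matrix, the function names — and real systems rely on deep learning models and far richer behavioral signals.

    import numpy as np

    # Hypothetical toy data: rows are users, columns are items (say, books),
    # values are past ratings. All numbers are invented for illustration.
    ratings = np.array([
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [0, 1, 5, 4],
        [1, 0, 4, 5],
    ], dtype=float)

    def cosine_sim(a, b):
        # Cosine similarity between two users' rating vectors.
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return (a @ b) / denom if denom else 0.0

    def recommend(user_idx, k=1):
        # Score a user's unrated items using the ratings of similar users.
        sims = np.array([cosine_sim(ratings[user_idx], r) for r in ratings])
        sims[user_idx] = 0.0                     # don't compare a user to themself
        scores = sims @ ratings                  # similarity-weighted ratings
        scores[ratings[user_idx] > 0] = -np.inf  # hide items already rated
        return np.argsort(scores)[::-1][:k]

    print(recommend(0))  # -> [2]: the item user 0 hasn't tried, scored via similar users

The design choice that matters for this article’s argument sits in the scoring step: the engine can only project what similar pasts suggest, so the more we click, the more it steers us back toward more of the same.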

One example that many of us can relate to is the AI-powered apps that provide driving directions. These are undoubtedly useful, preventing people from getting lost. I’ve always been very good with directions and at reading physical maps; after driving to a location once, I have no problem getting there again without assistance. Yet now I turn on the app for almost every trip, even to destinations I’ve driven to multiple times. Maybe I’m not as confident in my sense of direction as I thought; maybe I simply want the company of the soothing voice that tells me where to turn; or maybe I’ve become dependent on the app to get me there. I now worry that if I didn’t have the app, I wouldn’t be able to find my way.

Perhaps we should pay more attention to this not-so-subtle shift in our reliance on AI-powered apps. We already know that they diminish our privacy. If they also diminish our human agency, that could have serious consequences. If we trust an app to find the fastest route between two places, we are likely to trust other apps as well, moving through life ever more on autopilot, much as our cars will in the not-too-distant future. And if we also subconsciously absorb whatever is presented to us in news feeds, social media, search results and recommendations, perhaps without questioning it, will we lose the ability to form our own opinions and interests?

The Dangers of Digital Groupthink

How else could one fully explain the unsubstantiated QAnon theory that elite Satan-worshipping pedophiles in the US government, corporations and media seek to harvest the blood of children? The conspiracy theory began with a series of posts on the 4chan message board, which then spread quickly across other social platforms via recommendation engines. We now know, ironically with the help of machine learning, that the initial posts were probably created by a South African software developer with little knowledge of the United States. Nevertheless, the number of people who believe in this theory keeps growing, and it now rivals some mainstream religions in popularity.

According to a story published in the Wall Street Journal, the intellect weakens as the brain grows dependent on phone technology. The same is likely true of any information technology where content comes to us without our having to work to learn or discover it for ourselves. If so, then AI, which increasingly presents content tailored to our specific interests and reflective of our biases, could create a self-reinforcing loop that simplifies our choices, satisfies immediate needs, weakens our intellect and locks us into an existing mindset.

NBC News correspondent Jacob Ward argues in his new book, The Loop, that thanks to AI applications we have entered a new paradigm, one with the same repeated choreography: “The data is sampled, the results are analyzed, a reduced list of choices is offered, and we choose again, continuing the cycle.” He adds that by “using AI to make choices for us, we will end up reprogramming our brains and our society… we are ready to accept what AI tells us.”
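To make that cycle concrete, here is a minimal, hypothetical simulation of the choreography Ward describes. Every topic name and parameter below is invented for illustration: a toy recommender samples a suggestion in proportion to a user’s history, the user usually accepts it, and the choice feeds back into the next round’s data.

    import random
    from collections import Counter

    TOPICS = ["politics", "sports", "science", "music", "travel"]

    def simulate(rounds=1000, accept_rate=0.9, seed=42):
        # A toy version of the loop: sample, offer, choose, repeat.
        rng = random.Random(seed)
        weights = {t: 1.0 for t in TOPICS}  # the user starts with uniform interests
        history = []
        for _ in range(rounds):
            # "The data is sampled ... a reduced list of choices is offered":
            # the suggestion is drawn in proportion to past engagement.
            topics, w = zip(*weights.items())
            suggestion = rng.choices(topics, weights=w)[0]
            # "... and we choose again": most of the time, we accept it.
            choice = suggestion if rng.random() < accept_rate else rng.choice(TOPICS)
            weights[choice] += 1.0  # the choice becomes the next round's data
            history.append(choice)
        return Counter(history)

    print(simulate())

Run it and one or two topics typically absorb most of the clicks, even though the simulated user began with no preference at all. The narrowing comes from the loop itself, not from the user’s initial tastes.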

The Cybernetics of Conformity

A key part of Ward’s argument is that our choices narrow because AI presents us with options similar to those we preferred in the past, or that it predicts we will prefer based on that past. Our future thus becomes more narrowly defined. Essentially, we could be frozen in time, a form of mental homeostasis, by the very apps theoretically designed to help us make better decisions. This reinforced worldview is reminiscent of Don Juan explaining to Carlos Castaneda in A Separate Reality that “the world is such-and-such, or such-and-such, only because we tell ourselves that it is so”.

Ward echoes this when he says, “The human brain is built to accept what is said to it, especially if what is said to it is in line with our expectations and frees us from tedious mental work.” The positive feedback loop created by AI algorithms regurgitating our desires and preferences feeds the information bubbles we already know well: it reinforces our existing viewpoints, adds to polarization by making us less open to different points of view and less able to change, and turns us into people we never consciously intended to be. It is, at bottom, the cybernetics of conformity: the machine becoming the mind while following its own internal algorithmic programming. This, in turn, will make us, as individuals and as a society, both more predictable and more vulnerable to digital manipulation.

Of course, it isn’t really the AI doing any of this. The technology is simply a tool that can be used to achieve a desired end, whether that is selling more shoes, promoting a political ideology, controlling the temperature in our homes or talking with whales. There is implied intent in its application. To maintain our agency, we must insist on an AI bill of rights, as proposed by the US Office of Science and Technology Policy. More than that, we will soon need a regulatory framework that protects our personal data and our ability to think for ourselves. The EU and China have already taken steps in this direction, and moves by the current administration point to similar momentum in the United States. Clearly, the time has come for the US to get serious about this issue, before we become unthinking automatons.

Gary Grossman is Senior Vice President of Technology Practice at Edelman and Global Head of the Edelman AI Center of Excellence.
