Any new technology brings with it a lot of skepticism and inertia in adoption. I thought of AI as just another innovation for a 90s kid who has watched technology change more rapidly than at any point in history. From analog telephones to smartphones, from wire-based communication to the internet, we have seen it all.
However, I was a bit surprised when I sat on the selection board recruiting cohort members for a highly competitive leadership program for first- and second-year university students in Bangladesh. We received about 1,300 applications from all across the country, and more than 90% of the essays, which were expected to be personal stories of struggle and triumph, were written using AI.
It severely affected our selection process. We were not looking for people who could best use AI models, but for people who could influence and lead because of their ability to overcome difficult lived experiences. But how did AI learn to produce such finely tuned essays about personal experience? Where did it find so much personal data without users' consent? Is this breach of privacy harmful? More profoundly, are we losing the charm and emotional connection of human communication?
For most of the past two decades, companies breached data privacy innocuously, mostly through major online marketplaces.[1] Collecting user data on that limited scale raised few concerns, even though it was still a breach of privacy. Now, however, companies are collecting massive amounts of user data from across the internet with the help of AI, sensitive data that users would not consent to sharing under normal circumstances.
Companies do it anyway, because they know users would not share their data if given the choice. AI is rapidly learning and deciding for itself what data is required, according to the tech giants' needs and business prerogatives. A study of Apple's App Tracking feature found that most users chose not to allow tracking, showing that they do not want to be under surveillance by private entities.
So companies came up with tactics to make users share their data unknowingly. They draft long 'terms of agreement' and 'privacy policies' and bury within them a brief clause stating that user data will be collected. Users are unlikely to read far enough to find that clause, so most agree to the terms and conditions without reading them.
Even those who do read it find language deliberately complicated enough to confuse them and prevent them from working out what they are agreeing to. But is this breach of privacy and extraction of knowledge really for the greater good?
'Metropolis', a German science fiction silent film released in 1927 in post-war Germany, tells the story of a city whose workers are trapped in a robotic, repetitive, imprisoned life in dark underground tunnels, under the full control of the factory owners and managers. The protagonist, Freder, hallucinates a robot-like monster that rapidly builds towering skyscrapers, a hyper-materialist dystopian metropolis for the owners of the factories.
The monstrous robot devours the workers' jobs and exposes them to work that is outrageously dangerous and unsatisfying. This powerful figment of imagination is already playing out as we transition towards the Fourth Industrial Revolution.
The problem is not that labour-saving technologies exist; the problem is who controls the technologies and the employment they reshape. In the UK, it has been estimated that the first wave of AI will put 11% of jobs at risk of automation. Cognitive jobs such as scheduling, stocktaking, customer service and database management are among those likely to become redundant.
The second wave of AI, however, could put 59% of jobs at risk, including copywriting, graphic design and other high-earning work.[1] Applications such as Midjourney and other generative AI software are already producing complex graphics that make graphic design jobs redundant. It may only be a matter of time before AI takes over most human jobs, leaving fired employees to perform the menial labour of the owners of capital. Companies and employers will become like Freder's hallucinated monster, eating up the jobs and freedom of the ordinary working class. But will automation by AI lead to a more significant redistribution of income and free up time for more leisure?
Daron Acemoglu, a prominent economist at MIT, argued in a recent research paper that increased automation should not be expected to raise workers' wages, because the financial gains from increased worker productivity are usually captured by the company rather than distributed down to the workers.
Automation in countries like Germany, the Netherlands, France and Spain has produced the same result: the financial gains went to shareholders rather than to the wages of warehouse workers.[5] The owners of capital and people in rich countries will certainly enjoy greater leisure time. But most of the working class will have to learn new skills while labouring in low-skilled jobs to survive.
In low-skilled jobs where wages are set at hourly or shift-based rates, workers could end up with even less pay, because automation reduces the hours for which they are needed. Yet even if a benevolent company owner wanted to redistribute the time and the wealth, two barriers stand in the way.
First, the imperatives of the market and competition for AI-generated industrial goods will push wages down to the bottom for menial and dangerous work. The second barrier is the implicit cost of these technologies, given climate and ecological limits. AI does not operate in a vacuum: the massive computing power behind the latest AI requires semiconductor chips produced from elements such as silicon, germanium, phosphorus and boron. The cost of the massive mining operations across Africa, Latin America and Asia needed to meet the enormous demand for semiconductor chips is the devastation of indigenous peoples' habitats, vast deforestation and pollution.
The data centers necessary to run AI operate 24/7. They currently account for 3.7% of global greenhouse gas (GHG) emissions, more than the worldwide contribution of the aviation industry. And this is only the tip of the iceberg: ever more sophisticated AI will demand greater computing power, emit more carbon dioxide and drive further ecological breakdown.
Yet the burden of all these impacts will fall on poor countries, as more and more people are displaced and dispossessed from their habitats and forced to search for low-skilled jobs in cities. Will democratic countries allow such injustice? And what is AI doing to democracy?
AI is collecting so much user data and using it to train itself, but what is the outcome of this training? The answer is that AI is learning to infiltrate people's subconscious minds and influence their decision-making. Scientists have found that only 5% of human brain activity is conscious, while the remaining 95% occurs subconsciously, meaning we are not aware of our own subconscious brain activity.
But AI can influence the subconscious parts of the brain by producing stimuli that cause it to act in certain ways, stimuli that range from nudging a person to buy certain goods to pushing them to vote for a particular candidate. In 2016, the US election was influenced by a research and consulting firm called Cambridge Analytica, which used data from platforms like Facebook to target Democratic voters and influence them to vote Republican.
So the human subconscious can be influenced with the right data, and what better tool is there to analyse large quantities of data than AI? AI can shape voting patterns in ways that could elect fascists all over the globe while keeping us locked in this trance of AI-augmented reality.
Syed Muntasir Ridwan, CEO, Catalyzing Sustainable Transformation (CaST)