Artificial Intelligence Expert Exposes the Secrets of Big Tech Manipulation

Justin Lane is an Oxford-educated artificial intelligence (AI) specialist and entrepreneur who has little time for fluffy theories. His research interests span both human and artificial cognition, as well as faith and conflict.
This led to some interesting fieldwork in Northern Ireland, where he observed extremists from the Irish Republican Army and the Ulster Defence Association up close. His studies in the humanities ultimately fed into AI programming and agent-based computer simulations.
He somehow managed to enter undergrad in Baltimore, Maryland, as a Green Party Democrat and to emerge from England’s ivory towers as a Second Amendment supporter. He now finds himself a political moderate with a “libertarian tinge.”
Lane was working at the Center for Mind and Culture in Boston when I first met him. The promising academic was thoroughly corrupted by capitalism and went on to co-found a multinational data analytics firm that works with high-profile corporate and academic clients. He’s one of the pallid suits in the Matrix who pushes buttons, so he’s able to show us human prisoners around.
This interview has been edited for clarity and content.
ALLEN, JOE: What do the unwashed masses look like from a God’s eye view, from your viewpoint as a network analyst? Are they out to get us, or am I being paranoid?
LANE, JUSTIN: Companies like Google, Facebook, and Twitter — as well as analytics firms like mine — collect large quantities of data on users. This is what most people nowadays refer to as “Big Data.”
Most businesses analyze vast volumes of data to identify trends. The business, the data scientist, and their algorithms are typically uninterested in the underlying content of what is posted. They are simply building systems that can recognize certain patterns, then use those patterns to monitor and increase individual user engagement.
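[To illustrate the kind of content-agnostic, engagement-driven system Lane describes, here is a minimal Python sketch. The field names and scoring weights are invented for illustration and do not represent any real platform’s algorithm.]

```python
# Hypothetical sketch of an engagement-driven ranking system.
# Field names and weights are invented; no real platform's code is shown.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    clicks: int
    shares: int
    seconds_viewed: float

def engagement_score(post: Post) -> float:
    """Combine interaction signals into one score. The system never
    looks at what the post says, only at how users react to it."""
    return 1.0 * post.clicks + 3.0 * post.shares + 0.1 * post.seconds_viewed

def rank_feed(posts: list[Post]) -> list[Post]:
    # Surface the most engaging content first to keep users on the site.
    return sorted(posts, key=engagement_score, reverse=True)
```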
However, the data exists at individual granularity in a database somewhere. Perhaps it’s on Twitter, or in a corporate database, or in an intelligence database. That personal data can be used for malicious purposes. The ethical stakes are clear.
Consider how this kind of intelligence is used in China, where saying something derogatory about the regime lowers your social credit score. They don’t come after everyone all at once; instead it’s, “No, you said something critical of the government on this day, and now you can’t buy a train ticket.”
I believe that what China is doing now is the closest thing to the dystopian hellscape we all fear, while most American corporations are more concerned with getting paid per click. That, I believe, is a significant distinction.
JA: Couldn’t this huge map of public opinion be manipulated in ways that go beyond the data-scraping of Barack Obama’s campaign or Donald Trump’s Cambridge Analytica “affair”?
JL: The potential for swaying public opinion is enormous. The typical user is exposed to both active and passive manipulation.
They’re being actively manipulated in the same way that anyone who has ever turned on a television or radio has been. Marketing exists to manipulate us by ensuring that when we want something, we want a particular brand. That is the genius of advertising.
With the data that social media has and the number of people engaged on a single network, it’s simply scalable to a degree we’ve never imagined.
At the same time, there’s passive manipulation, which has to do with what a corporation allows on its platform, and how it algorithmically generates and modifies the content we see.
From a psychological standpoint, we know that the more you rehearse information, the more likely you are to remember it. This is referred to as “learning.” When a social media platform chooses to filter out a certain type of content, it has chosen to make it more difficult for its users to learn about it.
In some situations, I believe that is ethically imperative, such as when children are harmed or vulnerable people are targeted. Editing users’ opinions, on the other hand, is a grey area. Whom are you protecting by censoring such political speech? Whom are you protecting by shadowbanning?
We are clearly being manipulated to that degree. Whether that’s a positive or a negative thing depends a lot on where you sit on the political spectrum, but it appears to happen more to conservative voices than to liberal ones. The distinction between “What is hate speech?” and “What is speech that you hate?” is a clear example of this.
JA: Are these platforms’ AI cops policing our speech?
JL: Without a doubt. According to Facebook, over 90% of initial flags for offensive content are generated algorithmically, by artificial intelligence systems. Human reviewers then look at what gets flagged. The companies also retrain the AI on the content that has been flagged.
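[A minimal sketch of the loop Lane describes: a model produces the initial flag, a human reviews it, and the reviewed example is folded back into the training data. The model choice, names, and threshold are assumptions made for illustration, not Facebook’s actual system.]

```python
# Hypothetical AI-first moderation pipeline: the model flags content,
# a human confirms or rejects, and the verdict becomes new training data.
# All names, data, and the 0.5 threshold are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["example offensive post", "ordinary post about cooking"]  # seed data
labels = [1, 0]  # 1 = offensive, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def moderate(post: str, human_review) -> bool:
    """AI makes the first call; a human reviewer has the final say."""
    flagged = model.predict_proba([post])[0][1] > 0.5  # algorithmic first flag
    if flagged:
        verdict = human_review(post)        # human confirms or rejects
        texts.append(post)
        labels.append(1 if verdict else 0)  # verdict enters the training set
        model.fit(texts, labels)            # self-training on reviewed flags
        return verdict
    return False
```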
If conservatives are being silenced more than liberals, it may be because conservative political predispositions are more likely to offend those who file reports. As a result, the current censorship bias in social media may not be entirely due to the political prejudices of tech executives, content moderators, or data scientists.
Because Facebook’s algorithms are trained on tens of millions of user reports flagging content that offended someone, the system’s political bias likely reflects the loudest voices in the room. If conservatives don’t actively flag content, their data points never enter the database, and their views won’t be represented by these algorithms.
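[To make that feedback loop concrete, here is a toy sketch in which one side files nearly all the reports, so the trained classifier’s notion of “offensive” reflects only that side’s sensibilities. All posts and labels are fabricated for illustration.]

```python
# Toy illustration of report-driven label skew: the model's idea of
# "offensive" comes entirely from who bothers to report. All example
# posts and labels are fabricated for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Right-leaning posts get reported (label 1); comparable left-leaning
# posts are never reported, so they enter the data as implicitly fine (0).
reports = [("right slogan one", 1), ("right slogan two", 1),
           ("left slogan one", 0), ("left slogan two", 0)]

texts, labels = zip(*reports)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(list(texts), list(labels))

# The classifier now flags one side's rhetoric and waves the other's
# through, not from executive bias but from who filed the reports.
print(model.predict(["right slogan three", "left slogan three"]))  # likely [1 0]
```

If the reporting behavior flipped, as Lane suggests below, the same pipeline would learn the opposite bias.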
[A sly grin spreads across Lane’s face.] One thing conservatives might do, and it would be fascinating to see how well this works on Facebook, is simply mark something as offensive whenever they see content they know would be flagged if the situation were reversed.
It’s likely that social networking platforms’ algorithms will figure out that leftist speech is offensive and change how things are flagged. It would be a fascinating social experiment to conduct.
JA: As someone who grew up in a conservative family, how has your college experience been?
JL: After finishing my undergraduate studies, I moved to Europe because it was the only place where I could pursue my interest in computer modeling of human behavior. I started my journey in Belfast, where human existence and cultural values have a long and turbulent history of conflict.
This is in the United Kingdom, where firearms are banned. Despite this, thousands of civilians were killed by guns and bombs during the Troubles. That reaffirmed my newfound belief that human nature matters far more than any man-made rule.
Then I arrived at Oxford, where I found myself straddling two worlds. On the one hand, I was a member of the Hayek Society, a libertarian economics society that hosted joint events with a wide range of people. Guest lecturers explored economics, freedom, and morality from a critical perspective.
On the other hand, my impression of the college culture was that you could take part in a debate only if you were already on the inside of it.
This is a major problem at Ivy League schools. A Brown University study of campus speakers found that over 90% of all speakers on American college campuses are liberal or left-leaning. The same echo chamber exists at Oxford, though Oxford is far better than most universities in this respect. The University of Chicago, meanwhile, remains a stronghold of free speech.
After finishing my thesis at Oxford, I did postdoctoral work at Boston University, and it was a shock and an awakening to see how badly the American university system has decayed as an educational institution. It’s not that they don’t support critical thinking; it’s just that they do so in a very different way.