Are you loyal?

In theory, it might be possible to examine your overt deeds and ascertain whether loyalty is exhibited by your actions.

Suppose though that instead there was an attempt made to read your mind and meanwhile scan your face to determine your loyalty quotient. That's eerily intrusive, you might readily decry. It sounds like one of those crazy sci-fi movies that envision a future dystopian society.

Thankfully, you furtively whisper to yourself, we don’t have anything like that today.

Whoa, hold your horses.

News headlines have recently been blaring that a research paper posted online in China on July 1, 2022, depicted a study that supposedly involved assessing people's brainwaves and facial expressions for purposes of computing whether they were loyal to the Chinese Communist Party (CCP). So, there you go, the future is getting closer and closer, at least in terms of attaining the dystopian society that we have dreaded might someday arise.

The research paper quickly disappeared from its online posted link.

Presumably, the scorn that quickly swept across the Internet was enough to get the paper taken down. Or, maybe the researchers just want to make a few wording changes and other innocuous corrections, aiming to repost once they've had a more thorough chance to ensure that the i's are dotted and the t's are crossed. We'll have to keep our eyes open to see if the paper gets a second life.

I’m going to go ahead and do a deep dive into what we know about the research study and try to connect the dots about how this kind of AI-related work has significance for all of us, going far beyond the scope of seeing this as confined to one particular country. My coverage will be a bit more extensive than the other recent reporting on this newsworthy item, so please bear with me.

My added emphasis too will be that there is a slew of vital AI Ethics lessons that we can glean from the purported paper. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.

Here’s what has been so far mentioned about the research study.

Apparently, some "volunteers" were recruited to participate in an experiment regarding perceptions of the CCP. Whether they were willing volunteers or more akin to goaded or perhaps guided volunteers is unknown. We will assume for the sake of discussion that they were agreeable to being subjects in the study.

I do not bring this up simply to be smarmy. Whenever an experiment involves human subjects, there is a plethora of generally accepted practices regarding the recruitment and immersion of such subjects into a research effort. This partially traces back to prior studies that often tricked or forced people into an experiment, at times leading to adverse psychological repercussions or even physical harm for those participants. The scientific community has tried mightily to curtail those types of insidious studies and requires that all sorts of disclosures and warnings be provided to those sought for inclusion in human-related studies.

To clarify, not everyone abides by such prudent and conscientious guidelines.

Moving on, there were reportedly 43 subjects, and they were said to be members of China’s Communist Party. Keep in mind that the selection of subjects for an experiment is quite crucial to the experiment and also must be taken into account concerning any conclusions that you might try to reach afterward about the results of the experiment.

Suppose I want to do an experiment about how people perceive the acclaimed Star Wars series. If I pre-select subjects that all resoundingly hate Star Wars (how can such people exist?), and I show them Star Wars video clips, the odds are that they are probably going to say that they still dislike Star Wars. Based on this pseudo-scientific experiment, perhaps I sneakily claim that people — in general — really hate Star Wars, which was “proven” (wink-wink) in my “carefully” prepared research setting.

You might not know that I had rigged the roulette wheel, as it were, by preselecting the subjects that I anticipated would produce my surreptitiously desired results. Of course, had I intentionally instead recruited people that loved Star Wars and were zealous avid fans, the chances are they would report as being ecstatic about watching those Star Wars clips. Again, any conclusions reached about how people, in general, react to Star Wars would be tempered by the preselected set of subjects chosen for the effort.

The CCP-focused study seemingly had the subjects sit in front of a kiosk-like video display and read various articles about the policies and accomplishments of the CCP. This is presumably the "experimental treatment" to which the subjects were being exposed. When planning out an experiment, you usually come up with an experimental factor or aspect whose impact on the participants you want to examine.

The research question apparently being explored was whether the act of reviewing these materials would have any impact on the subjects in terms of increasing, decreasing, or being neutral as to their subsequent impressions of the CCP.

In a classic null hypothesis, you might arrange such a study to state that the materials consumed have no impact on the subsequent impressions expressed by the subjects. Once you've done a before-and-after comparison of their viewpoints about the CCP, you would test whether there was a statistically significant change in their impressions.

It could be that the before and after are not statistically different, in which case you might reasonably and tentatively conclude that for this particular study the displayed materials (the experimental treatment) did not seem to make a difference in their impressions. On the other hand, if there was a statistically valid difference, you would look to see whether the after was greater than the before, allowing you to gingerly suggest that the materials boosted their impressions (and, on the other side of the coin, if the after was less than the before, this might imply that the materials lessened or lowered their impressions).
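To make that before-and-after comparison concrete, here is a minimal Python sketch of a paired t statistic computed over hypothetical impression scores. All the numbers are invented purely for illustration; a real study would use a proper statistics package and report a p-value against the t distribution.

```python
# Hypothetical before/after comparison for a paired experimental design.
# Scores are fabricated impression ratings on a 1-to-10 scale.
import math
from statistics import mean, stdev

def paired_t_statistic(before, after):
    """Paired t statistic for the null hypothesis of no change."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

before = [5, 6, 4, 5, 7, 5, 6, 4]  # hypothetical pre-treatment scores
after  = [6, 7, 5, 6, 7, 6, 7, 5]  # hypothetical post-treatment scores

t = paired_t_statistic(before, after)
# A large positive t suggests the "after" impressions rose; comparing it
# against a t distribution with n-1 degrees of freedom yields the p-value
# that tells us whether the rise is statistically significant.
```

A large t here would let the researchers reject the null hypothesis of no change; a t near zero would not.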

There are a lot of nagging loose ends that would need to be dealt with in such a study.

For example, we usually want to have a so-called control group that we can compare to those that receive the experimental treatment. Here’s why. Suppose that the act of sitting in front of a kiosk to read materials was the true basis for why impressions were changed. It could be that the nature of the materials consumed is roughly immaterial to the impression impact. Merely sitting and reading anything, such as the latest stories about cats that do funny things, alone might do the trick. We might therefore arrange to have some subjects be in our control group that is exposed to some other material to read, other than the CCP policies and attainments materials.
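The value of a control group can be sketched as a difference-in-differences comparison: subtract the control group's before-to-after shift from the treatment group's shift, leaving only the effect plausibly attributable to the materials themselves. The scores below are entirely hypothetical.

```python
# Difference-in-differences sketch for the control-group idea above.
# All scores are fabricated impression ratings, purely for illustration.
from statistics import mean

treatment_before = [5, 6, 4, 5]   # read the CCP materials
treatment_after  = [6, 7, 5, 6]
control_before   = [5, 5, 6, 4]   # read unrelated material (e.g., cat stories)
control_after    = [5, 5, 6, 5]

treatment_change = mean(treatment_after) - mean(treatment_before)
control_change   = mean(control_after) - mean(control_before)

# Netting out the control group's shift isolates the effect of the
# materials from the mere act of sitting and reading at a kiosk.
did = treatment_change - control_change
```

If the control group had shifted just as much as the treatment group, `did` would be near zero, suggesting the kiosk-sitting itself, not the CCP materials, drove the change.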

We don’t know if that was done in this instance (no one seems to have mentioned this facet as yet).

I realize you are by now getting antsy about the part of the study that sparked the media explosion. We shall move expeditiously into that portion.

How might we detect whether the subjects in this experiment responded to or altered their impressions as a result of reading the displayed materials?

A customary means would be to ask them.

Beforehand, you would have administered a questionnaire asking their impressions of the CCP. Then, following the exposure to the experimental treatment, i.e., the reading of the displayed materials, we could administer another questionnaire. The answers given by the subjects on a before-and-after basis might then be compared. If we were also using a control group, we would assume that the control group's answers would not substantively change from the before to the after (under the belief that looking at stories about frolicking cats ought not to have impacted their CCP impressions).

This act of asking the subjects for their impressions is not necessarily as straightforward as it might seem.

Suppose that the subjects in the experiment get a sense or overall drift that you want them to respond to the experimental treatment in a particular manner. In that case, they might purposely overstate their reactions in the after part of the experiment. You've certainly seen this happen. If I am doing a taste test for a new soda coming onto the market, I might act like I wildly relished the soda, doing so in hopes of maybe getting featured in an advertisement by the soda maker and getting my richly deserved fifteen minutes of fame.

The gist is that merely asking people their opinions is not a surefire means of gauging changes. It is one approach. Other approaches can be, and often are, undertaken.

How did this particular study opt to gauge the reactions of the subjects?

Apparently, at least two methods were used. One method consisted of doing a facial scan and using AI-based facial recognition software to assess the reactions of the subjects. The other method was reportedly some form of brainwave scanning. It hasn't yet been reported what type of brainwave scanning devices were used, nor what kind of AI-based brainwave analysis software was employed.

Various reporting has indicated that the study stated this about the nature of the experiment: “On one hand, it can judge how party members have accepted thought and political education.” And the study supposedly also mentioned this too: “On the other hand, it will provide real data for thought and political education so it can be improved and enriched.” The research study was attributed to being performed under the auspices of China’s Hefei Comprehensive National Science Centre.

Media reports suggest that the study alluded to a claim that the facial recognition scans and the brainwave scans were able to aid in detecting that the after impressions were boosted about the CCP.

I’d like to note for you that without being able to directly review the systems utilized and examine closely the research paper, we do not know the particulars of how those AI-based systems were exactly used.

It could be that the subjects were reacting to the experimental setting rather than to the experimental treatment. Anybody that participates in a study might be anxious to begin with. This might confound any efforts to do brainwave scans or facial pattern analysis. There is also the chance that they felt motivated to please the researchers, opting to concoct positive thoughts after seeing the materials, which could in theory be reflected in the brainwave scans and the facial scans (maybe, though please know that there is a great deal of impassioned controversy over the validity of such contentions, as I will elucidate momentarily). In short, they might have been hoping to skew the results and show that they were positively impacted.

The Twitter reaction substantially decried that the very notion of using AI-empowered brainwave scans and facial recognition is by itself an appalling and outrageous act. Only human monsters would use those kinds of devices, we are told by some of those tweets.

I must ask you to sit down and prepare yourself for something that might be a rude and shocking surprise.

There are many researchers worldwide that use those same kinds of technologies in their research studies. This is certainly not the very first time that a brainwave scan capability was used on human subjects in a research effort. This was also certainly not the first time that facial recognition was used on human subjects for experimental purposes. Even a cursory online search will show you lots and lots of experimental studies across all manner of countries and labs that have used those types of devices.

Now, that being said, using them to gauge loyalty to the CCP is not something you would find much precedent for. When such AI is used for governmental control, a red line has been crossed, as they say.

That’s the obviously chilling part of the whole kit and caboodle.

The expressed concern by many is that if governments opt to use brainwave scanning technology and facial recognition to ascertain loyalty to the regimes at hand, we are going to find ourselves in a dystopian world of hurt. When you are walking down a public street, it could be that a device mounted on a lamppost is going to clandestinely determine your loyalty quotient.

One supposes that if your face doesn’t seem to suggest you are sufficiently loyal, or if the brainwave scan suggests the same, governmental thugs might suddenly rush up and grab ahold of you. Unnerving. Abysmal. Must not be allowed.

That is the crux of why a headline-grabbing outcry and outrage has arisen over this news item.

Imagine this. We are possibly going to craft and then put into use computer-based systems that use the latest in AI to decide whether we are loyal or not. If you tried to hire people to sit around and do the same, you would need a lot of people and you’d have a logistics issue of trying to position them to eye everyone. In the case of an AI-based system, all you need to do is set up the electronic devices on lampposts, sides of buildings, and so on. The scanning for loyalty can happen 24×7, at all times, in all places so equipped. This can then be fed into a massive database.

We become humans that are mere cogs in a colossal, all-seeing, all-watching, societally oppressive ecosystem. The seeing eye is not just watching what we do. It is also interpreting what our faces purportedly say about our loyalty to the government. Our minds likewise are to be examined for a similarly dreadful cause.

Yikes!

There is a secondary concern that comes from this too, though perhaps not quite as thorny in comparison to the Big Brother implications as already sketched.

Ponder these two pressing questions:

  • Can we reliably assert that a brainwave scan can attest to your loyalty?
  • Can we reliably assert that a facial recognition scan can attest to your loyalty?

Hold on there, you might be yelling at the top of your lungs.

I realize and acknowledge that you might not care quite as much about the reliability aspects per se. Whether this can be done reliably is less crucial than the fact that it is being done at all. No one should be under such scrutiny. Forget about whether the technology works suitably for this task. We ought to not be undertaking the task at the get-go.

In any case, the answer right now is a resounding no, namely that existing AI systems that do any semblance of “brainwave scans” and facial recognition are not sufficiently capable to make that leap.

You’ve maybe seen lately that some of the facial recognition makers have done some backtracking in terms of how their facial recognition systems are being put into use. In an upcoming column posting, I will be discussing the recent efforts for example by Microsoft to try and stem the tide of those using the Microsoft-provided facial recognition tools for purposes far beyond what the technology is able to do or ought to be used for. You might find of interest my earlier look at the AI Ethics qualms that have already been well-publicized about facial recognition, see the link here. I’ve also discussed the arena of brainwave scans, see my discussion at the link here.

In brief, there are as yet no reliable or sensible means to suggest that a brainwave scan or a facial recognition scan can purport to depict someone's loyalty. Even presumably basic facets, such as whether you can reliably correlate those scans to whether someone is happy versus sad, are still being hotly debated. Trying to up the ante to something as amorphous and variable as loyalty is a bridge too far.

I might add that some believe ardently that we will eventually get there. That's why I have carefully tried to note that we aren't there yet, rather than stating that we will never get there. Never is a big word. You have to be absolutely certain if you are going to toss around the claim that this will never be feasible (keeping in mind that "never" encompasses decades from now, centuries from now, and thousands or millions of years from now).

Some have reacted to the news story about this Chinese lab research study as an indicator of how the world is veering dangerously toward inappropriate uses of AI. I will share with you momentarily a glimpse of what AI Ethics is all about. This will help you more vividly see why this particular study does seem to violate many if not nearly all of the generally accepted precepts of Ethical AI.

Believe it or not, some have suggested that maybe we are making a mountain out of a molehill regarding this particular study.

Are we?

The counterargument is that a molehill can soon enough become a mountain. In the proverbial notion of a snowball that gets bigger and bigger as it rolls down a snowy hill, we need to stop the snowball from getting underway. If we tolerate these types of studies, we are letting that snowball begin its journey. By speaking out and calling out such studies, perhaps we can forestall the snowball.

One thing is for sure, we are on the brink of opening Pandora’s box when it comes to AI aspects, and the question remains as to whether we can prevent the opening of the box or at least find some means to deal prudently with whatever comes out once the box has unleashed its devilish contents.

If nothing else, hopefully these kinds of media storms will prompt widespread discussion about how we are going to prevent AI-related evildoing and avert numerous AI-sparked existential risks. We need to kick our societal awareness of AI Ethics and Ethical AI considerations up a notch.

Before getting into some more meat and potatoes about the wild and woolly considerations underlying these kinds of AI systems, let’s establish some additional fundamentals on profoundly essential topics. We need to briefly take a breezy dive into AI Ethics and especially the advent of Machine Learning (ML) and Deep Learning (DL).

You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we will explore what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously herald and promote the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).

In a moment, I'll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn't as yet a singular list of universal appeal and concurrence. That's the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence we are finding our way toward a general commonality of what AI Ethics consists of.

First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their six primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedures and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is a tough nut to crack. It is easy overall to do some handwaving about what AI Ethics precepts are and how they should be generally observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that "only coders" or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

Let’s also make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today's AI is not able to "think" in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn't any AI today that has a semblance of common sense or any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
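As a toy illustration of that fit-then-apply loop, here is a one-nearest-neighbor sketch in Python. The records and feature values are fabricated, and real ML/DL uses far richer models, but the shape of the process is the same: find the pattern in historical decisions, then apply it to new data.

```python
# Minimal "pattern matching" loop: fit on historical decisions,
# then apply the learned pattern to a new data point (1-nearest-neighbor).
# The records and features below are entirely fabricated for illustration.

def nearest_neighbor_predict(history, new_point):
    """Return the decision attached to the closest historical example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, decision = min(history, key=lambda rec: dist(rec[0], new_point))
    return decision

# Historical (features, decision) pairs -- say, past loan approvals.
history = [
    ((0.9, 0.8), "approve"),
    ((0.2, 0.3), "deny"),
    ((0.7, 0.9), "approve"),
]

# A new applicant is decided by whichever old record they most resemble.
prediction = nearest_neighbor_predict(history, (0.85, 0.8))  # -> "approve"
```

The point of the sketch is that the model never "understands" anything; it simply echoes whatever regularities sit in the historical data.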

I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern-matching models of the ML/DL.

You could somewhat apply the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.
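A tiny fabricated example makes the biases-in, biases-out point tangible: if the historical decisions were skewed against one group, a model that faithfully mirrors the data reproduces the skew. Every record below is invented.

```python
# Toy illustration of "biases in, biases out": a model that mirrors
# skewed historical decisions reproduces the skew. Data is fabricated.
from collections import Counter

# (group, qualified?, historical decision) -- group B was systematically denied.
history = [
    ("A", True, "approve"), ("A", True, "approve"), ("A", False, "deny"),
    ("B", True, "deny"),    ("B", True, "deny"),    ("B", False, "deny"),
]

def majority_rule(records, group):
    """'Learn' the most common historical decision for a group."""
    decisions = Counter(d for g, _, d in records if g == group)
    return decisions.most_common(1)[0][0]

# Equally qualified applicants now get different outcomes purely because
# the learned pattern encodes the historical inequity.
decision_a = majority_rule(history, "A")  # -> "approve"
decision_b = majority_rule(history, "B")  # -> "deny"
```

Nothing in the code "intends" discrimination; the inequity arrives entirely through the data, which is exactly why buried biases are so hard to spot after the fact.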

Not good.

Let’s return to our focus on AI systems that are used for inappropriate or possibly outright wrongdoing purposes and how that relates to the recently posted study on CCP loyalty.

Two primary considerations come to mind:

1) This AI instance is part of a larger ongoing pattern of disconcerting AI use and therefore ominous and eye-opening as to what is taking place

2) The cat could be let out of the bag such that if AI like this is adopted in one country it can be readily spread to other countries too

Start with the first point about this instance of AI being part of an ongoing pattern.

One notably important basis for being especially perturbed by this one particular study is that it is part of a larger pattern of how AI is aiming to be used by some. If this was the only such study ever conducted, we might be mildly stirred by it. Nonetheless, it would probably not resonate with such fervor as we are now seeing.

This is perhaps the drip-drip of an inching along toward something that is going to get out of hand.

As reported in the news, China is well-known for having insisted on effusive loyalty to the CCP. Furthermore, various means have been established or are being established to ensure that people are indoctrinated into the governmental doctrine. There have been cited instances of prior studies seeking to devise AI algorithms that could gauge the thought conditions of party members (see the China-backed Study Times in 2019 that mentioned these efforts).

You might recall that in 2018, Vice President Mike Pence gave a speech at the Hudson Institute and emphasized that “China’s rulers aim to implement an Orwellian system premised on controlling virtually every facet of human life” (this was a reference to the CCP implementation of a social credit scoring system, a topic of prominent controversy). You could readily assert that this recent CCP study is another step in that direction.

We don’t know when or if the last straw will break the camel’s back, such that these one-off studies are turned into widespread AI-based monitoring systems.

The second point that is worthy of attention is that we cannot assume that this type of AI will only be confined to China. Though the prospect of this type of AI use going widespread within China is by itself disturbing, other countries might do likewise.

Once the AI for this is said to be ready for prime time, it won't likely take much for other countries to decide they want to implement it too. The cat will be out of the bag. Some countries will presumably use this AI in outright oppressive ways and not try to proffer any pretenses about doing so. Other countries might seemingly seek to use this type of AI for what appear to be beneficial purposes, out of which there is ultimately a downside that will be nearly unavoidable.

Actually, suggesting that this type of AI will perhaps only be adopted once it is seen as ready for prime time is a bit of a misnomer. It might not make a whit of difference whether the AI can work assuredly in this manner. The AI can be used as a cover story, see my explanation at the link here. Regardless of what the AI is actually able to accomplish, the notion is that the AI can be a handy pretense to bring forth populace monitoring and schemes of gauging and ensuring absolute loyalty to authorities.

At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase this topic. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about AI misuses, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
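
As a rough illustration of the taxonomy above, here is a minimal sketch in Python. The level cutoffs follow the SAE-style scheme just described; the dictionary descriptions are abridged and the helper name is my own invention, for illustration only:

```python
# SAE-style driving automation levels, as summarized above (descriptions abridged).
SAE_LEVELS = {
    2: "Semi-autonomous: human driver co-shares the task (ADAS add-ons)",
    3: "Semi-autonomous: human driver must remain ready to take over",
    4: "True self-driving within a limited operational domain",
    5: "True self-driving anywhere a human could drive",
}

def is_true_self_driving(level: int) -> bool:
    """Levels 4 and 5 are 'true' self-driving; below that, a human co-shares."""
    return level >= 4

print(is_true_self_driving(3))  # False
print(is_true_self_driving(4))  # True
```

The only point the sketch encodes is the dividing line the article draws: at Level 4 and above there is no human driver in the loop.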

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that's been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And AI Misuse

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn't do some particular thing, this can, later on, be overtaken by developers who in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

Let’s sketch out a self-driving car scenario that might leverage AI in sketchy or outright wrongful ways.

I am going to share with you some AI-based self-driving car ramifications that might cause you to shudder and be disturbed. These are facets that almost no one is currently discussing. I’ve repeatedly brought up these matters, though I openly acknowledge that until we have prevalent adoption of self-driving cars, we aren’t going to get much traction in getting society worried or upset about what today seem like merely abstract notions.

Are you ready?

We shall start with some foundation laying.

AI-based self-driving cars will be equipped with video cameras, sensibly so. This allows the self-driving car to receive video imagery of the driving scene. In turn, the AI driving system that is running on board the computers in the self-driving car is intended to computationally examine the collected video and figure out where the road is, where nearby cars are, where pedestrians are, and so on. I realize that I am reciting self-driving cars 101 fundamentals.

There are video cameras mounted on the exterior of the autonomous vehicle and they are pointing outward. In addition, by and large, you can expect that there will be video cameras on or inside the vehicle aiming inward into the interior of the self-driving car. Why so? Easy-peasy, because there will be lots of important uses for video capturing the goings-on inside the autonomous vehicle.

When you go for a ride in a self-driving car, you no longer need to be the driver. What will you do then while inside a self-driving car?

One thing you could do would be to interact with others at home or in the office. There you are, on your way to work, which will, say, take an hour of driving time in the self-driving car, and you can already begin your day of work by doing a Zoom-like online real-time interactive session. They can see you, due to the cameras pointing inward into the self-driving car. You can see them on perhaps an LED screen inside the self-driving car. At the end of your day, while heading back home, you might do a similar interactive video discussion with your kids as they are starting to get their homework done for the evening.

Another use would be for taking classes. Now that you don’t need to waste your time driving, you can turn that deadhead time inside a self-driving car into enhancing your skills or getting a certification or degree. Via the cameras pointing inward, your instructor can see you and discuss how your training is coming along.

Yet another use would be to try and ensure that riders in self-driving cars do not run amok. In a human-driven car, the driver is an adult presence that usually keeps riders from doing zany things such as marking the interior with graffiti. What will happen with AI-based self-driving cars? Some worry that riders will opt to tear up the interior of the vehicles. To try to prevent this, the ridesharing firm that deploys the self-driving cars will likely use inward-facing video cameras to monitor what people are doing while inside the autonomous vehicle.

I assume that you are convinced that we are going to have video cameras that point into the interior of self-driving cars, in addition to the cameras that point outward to discern the driving scene.

You are now ready for what I have referred to as the roving eye, see my analysis at the link here.

First, consider the video cameras that are pointing outward.

Wherever the self-driving car goes, it will potentially be able to video record whatever the cameras see. A self-driving car that gives a lift to someone from their home and takes them to the grocery store will traverse a neighborhood and the video will record not just the roadway but also everything else happening within eyesight. A father and son playing in their front yard. A family sitting on their front porch. On and on it goes.

Right now, we have so few self-driving cars on the public roadways that this ability to capture video of daily activities is relatively scarce and immaterial.

Envision that we ultimately achieve safe and widespread self-driving cars. Thousands of them. Maybe millions. We have about 250 million human-driven cars in the US today. Eventually, those will be pretty much replaced by self-driving cars or simply no longer used, and we will predominantly have self-driving cars on our roadways. Those self-driving cars are being driven by AI and as such can be roaming essentially 24×7. No rest breaks, no bathroom breaks.

The video data could be uploaded from these self-driving cars via OTA (Over-The-Air) electronic networking connections. Self-driving cars will be using OTA to get the latest AI software updates downloaded into the vehicle. In addition, the OTA can be used to upload data from the self-driving car into a cloud-based database.

All told, it would be feasible to stitch together this uploaded data. With the stitching, you could potentially piece together the daily comings and goings of anyone that stepped outside on any day in any locale that was extensively using self-driving cars.
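
To make the stitching idea concrete, here is a minimal, hypothetical sketch in Python. It assumes each car uploads simple sighting records (a matched person identifier, a timestamp, a location); the record format, field names, and sample data are invented purely for illustration:

```python
from collections import defaultdict

def stitch_trajectories(sightings):
    """Group video-derived sightings by person and order them in time,
    reconstructing each person's daily comings and goings."""
    tracks = defaultdict(list)
    for s in sightings:
        tracks[s["person_id"]].append(s)
    for person_id in tracks:
        tracks[person_id].sort(key=lambda s: s["timestamp"])
    return dict(tracks)

# Hypothetical sightings uploaded from several different self-driving cars.
sightings = [
    {"person_id": "p1", "timestamp": "14:05", "location": "Main St & 3rd"},
    {"person_id": "p1", "timestamp": "14:02", "location": "Elm Ave"},
    {"person_id": "p2", "timestamp": "09:30", "location": "Oak Blvd"},
]
tracks = stitch_trajectories(sightings)
print([s["location"] for s in tracks["p1"]])  # ['Elm Ave', 'Main St & 3rd']
```

The unnerving part is how little machinery this takes: once the per-car uploads share identifiers, a trivial group-and-sort reconstructs a person's path through the day.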

That is the roving eye that I have been warning about.

We can now dovetail the CCP study into this kind of capability. Suppose a government has access to all of this collected video data. They could then use an AI-based facial recognition algorithm to ascertain where you went, at what time of day, throughout your daily travels of life. In addition, they could presumably use their “loyalty” AI-based analyzer to see if you seemed to have a loyal look on your face or not.
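
Here is a tiny hypothetical sketch of that "loyalty analyzer" logic, in Python. Nothing here reflects the actual study; the per-frame scores and the threshold are invented, and the point is merely that a single momentary frame can be enough to trip the flag:

```python
def flagged_as_disloyal(frame_scores, threshold=0.9):
    """Hypothetical: per-frame 'disloyalty' scores from some facial-expression
    model; flag the person if ANY single instant crosses the threshold."""
    return any(score >= threshold for score in frame_scores)

# A walk to the diner: mostly neutral frames, one momentary twitch of disgust.
walk_to_diner = [0.10, 0.12, 0.95, 0.08]
print(flagged_as_disloyal(walk_to_diner))  # True
```

Whether any real system scores faces this way is beside the point; the design choice to flag on a single instant is exactly what makes the scenario below so chilling.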

Just imagine that on a Tuesday afternoon you were walking to get a sandwich at a local diner. Self-driving cars were passing by on the roadway. All of the many videos captured you as you were doing your five-minute walk to get a bite to eat. The data was uploaded to a centralized database. The government ran its AI facial recognition program on the data.

Turns out the AI “determined” that you had a disloyal look on your face.

Maybe this disloyal look happened only for an instant. You were waiting at a street corner for the light to change so that you could cross the street to the diner. At that moment, you had a slight twitch of disgust that you had to wait overly long for the Walk symbol. Was this an indication perhaps of your disloyalty to the government?

Yes, the AI computationally calculated, you were abundantly disloyal at that instant in time. When you get home that night, the government has arranged for your arrest.

But wait, there’s more.

Remember that video cameras are pointing inward too.

On that same Tuesday, as you were riding to work in a self-driving car, the video cameras were capturing your every moment. This was uploaded to a centralized database. The AI software that analyzes face patterns for disloyalty did a computational pattern examination of your facial expressions during the journey to the office.

At one point, you were casually looking outside the autonomous vehicle and noticed a construction worker who was partially blocking the roadway, causing the AI driving system to slow down the self-driving car. For a split second, your face registered a look of derision for this construction worker slowing down traffic.

The AI facial pattern analysis interpreted this as a sign of disloyalty to the government.

Two strikes against you in one day.

You are living on thin ice.

Of course, whether the AI is “right” or “wrong” about being able to determine your loyalty is almost inconsequential in this context. The gist is that AI has been put into place for this purpose. The humans that are deploying the AI might or might not care whether the AI is of any suitable use for this type of task. The AI allows for governmental control, regardless of the technological validity itself.

That covers the face scanning.

If we eventually have any kind of portable, cost-effective devices for doing (alleged) brainwave scanning, those could certainly be included in self-driving cars too. Video cameras are a sure thing now. Brainwave scanning devices of this caliber aren’t in the cards right now, but they are clearly something being envisioned for the future.

For my exploration of how a government might try to take over a populace by grabbing control of self-driving cars, see the link here. A similar possibility exists that a malicious actor might try to do the same, see the link here. Those aren’t meant as scare tactics; rather, they are a heads-up on the importance of cybersecurity and other precautions that we as a society must seek to take regarding the onset of ubiquitous self-driving cars and other autonomous vehicles.

Conclusion

I’d like to quickly cover one additional aspect about the AI being used to ascertain loyalty that I think is a somewhat separate topic, but one that some tweets and social media have been belaboring.

I earlier mentioned that we do not have sentient AI and we do not know if or when we will. Let’s entertain the idea that we will have sentient AI. In that case, consider the following scenario.

Via non-sentient AI, we opt to put in place widespread use of AI that ascertains computationally whether people are being loyal to their government, using facial scans, brainwave scans, and so on. This is entirely run by and used by humans in authority. That is the disturbing scenario that I have just moments earlier been describing.

Time to up the ante.

AI becomes sentient. We have now potentially handed to this sentient AI a widespread capability of presumably identifying loyalty and disloyalty in humans. An evildoing AI that is considering wiping out humans might use this capability to decide that indeed humans are going to be disloyal and should be destroyed entirely. Or maybe just those humans that showcase an indication of disloyalty via their face or their thoughts are to be singled out and scrapped.

Another angle is that the AI wishes to enslave humans, see my discussion at the link here.

We would seem to have given the AI a perfect gift for carrying out that quest. The existing infrastructure that we put in place allows the AI to keep a careful watch on us humans. Those that appear to have expressed a disloyal facial indication or thought about the AI overlord(s) are going to feel the wrath of the AI.

I realize that I said that this is an upping of the ante. I am not sure that is the case. Seems to me that whether we have AI overlords directly deciding our fate versus human overlords that perchance use an AI system to determine loyalty, well, neither proposition seems especially desirable.

A final comment for now.

The Roman statesman and philosopher Marcus Tullius Cicero stated that there is nothing more noble, nothing more venerable, than loyalty. We might be letting AI get ahead of us and become a tool to enlist and ensure “loyalty” by dreadful means.

A mindful and worthwhile reason to put AI Ethics at the top of our To-Do list.
