During the pandemic, the internet was used to spread health and election misinformation. It was also used to enable our kids to stay connected to school and maintain vital social bonds. From AI to machine learning to social networks, this is how technology evolves: there are good parts (which we tend to build for) and bad parts (the potential for which we tend to miss or ignore). This truism holds all the way down to the level of the chips that power our computing.

As Reuters reported, facial recognition is helping Putin curb dissent with the aid of U.S. tech: “Western technology has aided the crackdown. The facial recognition system in Moscow is powered by algorithms produced by one Belarusian company and three Russian firms. At least three of the companies have used chips from U.S. firms Nvidia Corp or Intel Corp in conjunction with their algorithms, Reuters found. There is no suggestion that Nvidia or Intel have breached sanctions.”

This is why it’s worth considering all the ways an exciting and promising technology like ChatGPT could be put to insidious purposes. Here are a few examples.

Propaganda: ChatGPT could be trained to generate propaganda messages that support the dictator’s ideology and suppress dissent. The dictator could use these messages to spread misinformation and manipulate public opinion to maintain their grip on power.

Surveillance: ChatGPT could be used to monitor citizens’ online activities and analyze their language patterns to identify dissenters or potential troublemakers. That information could then be used to suppress dissent, or even to arrest individuals perceived as a threat to the regime.

Social engineering: ChatGPT could be used to manipulate individuals’ emotions and behavior by generating personalized messages designed to influence their decisions. This could be used to coerce individuals into supporting the regime or to sow discord and division among opposition groups.

Cyberattacks: ChatGPT could be used to develop sophisticated phishing scams or other forms of social engineering attacks aimed at stealing sensitive information or gaining access to critical infrastructure systems.

Censorship: ChatGPT could be used to automatically detect and remove content that is critical of the regime or that promotes alternative viewpoints. This could be used to control the flow of information and limit the ability of citizens to express themselves freely.

At least when it comes to ChatGPT, we can’t pretend we didn’t know the potential risks at the outset. To generate the list above, all I had to do was ask ChatGPT: “How could a dictator use ChatGPT for nefarious purposes?”