Textual Healing

We’ve learned a lot about the promise of AI programs like ChatGPT. And we’ve predicted many of the perils associated with that promise. What we’ve spent less time discussing is how people actually use the software. It turns out that what people want from ChatGPT is what people have always wanted from the Internet: shortcuts to getting their work done and good old-fashioned titillation. John Herrman in NY Mag: ChatGPT Users Want Help With Homework. They’re Also Very Horny. “AI companies are training on a ton of news and encyclopedia content, in large part because that’s what’s available to scrapers in great quantities … Meanwhile, actual ChatGPT users are barely engaging with news at all. In reality, they’re asking ChatGPT to write stories, often of a sexual nature. They’re asking it for ideas, for assistance with research and code, and for help with homework. But, again, they’re very horny … ChatGPT users are asking a newsbot to write erotic fiction. Not ideal!” (Since I’m essentially the human version of the newsbots these tools intend to replace, I’d better spice things up a bit.)

+ “The science of detecting manipulated content is in its early stages. An April study by the Reuters Institute for the Study of Journalism found that many deepfake detector tools can be easily duped with simple software tricks or editing techniques. Meanwhile, deepfakes and manipulated video are proliferating.” WaPo: See why AI detection tools can fail to catch election deepfakes.
