Casey Newton over at the excellent Platformer has written an interesting piece on dealing with AI generated content.

The article discusses the increasing use of AI-generated text in mainstream publications and websites, such as CNET and the Associated Press.
Newton notes that while current uses of AI in this way are relatively benign — answering reader questions and summarizing information — the same technology could be put to more nefarious purposes, such as spreading propaganda.
The piece references a new paper from Georgetown University's Center for Security and Emerging Technology and the Stanford Internet Observatory that details the dangers of AI-generated text and possible ways to mitigate them.
The paper argues that AI could make influence operations of this kind far more effective, in part by making them harder to detect.
And yes, a lot of this post was initially drafted with AI assistance.