Generative AI comes with serious privacy concerns, particularly for those working in therapy, health care, and the legal and financial industries, according to a recent report by the Congressional Research Service. We’ve been writing about how AI could help or hinder marketers (here and here), and now we’re shifting our focus to privacy.
Part I: Protecting Ideas
Sarah Silverman’s recent lawsuit is just the latest in a string of legal actions accusing OpenAI of training ChatGPT on copyrighted materials. A group of artists has also filed a class-action lawsuit making similar claims against Stability AI, Midjourney and DeviantArt, all image generators.
These accusations alone should give marketers pause, but a 2023 investigation by the Washington Post and the Allen Institute for AI found that a dataset used by Google, Facebook and OpenAI did in fact include websites with copyrighted materials “as well as potentially sensitive information, such as state voter registration records.”
Part II: Keeping Data Private
The report also highlighted that the terms of service for many generative AI platforms include language allowing them to “reuse user data to improve services.” Translation: these technologies may store everything you enter and incorporate it into future answers for anyone and everyone.
And if that’s not enough, data privacy laws in the U.S. vary wildly from state to state, and there is still no comprehensive federal data privacy law.
So, as marketers, we must be hyper-vigilant when using AI tools, from content calendars to campaign concepts, to avoid appropriating someone else’s work.
Tips to protect your data — and protect yourself from plagiarism
1. Carefully read the terms and conditions. T&Cs vary from service to service, so know what you’re dealing with.
2. Brainstorm, don’t braindump. Avoid sharing identifying information — and this extends beyond specific names. For example, including “food distributor,” “California,” “CEO” and “in business for more than 60 years” could be enough for the LLM to deduce the company or person you’re referring to.
3. Avoid using AI to refine thought leadership. When novelty is core to a concept or campaign, it’s especially important to not include any proprietary details in your prompts.
4. Don’t use the output wholesale. Always edit the generated content and give it your voice/style/brand treatment. Start with the assumption that the AI’s output has incorporated copyrighted material in some way, and rework it enough that no one could accuse you of plagiarizing or copying.
5. Use transcription/summarization/translation features judiciously. Automating these tasks offers the most obvious and immediate cost-benefit payoff. But keep in mind that whatever you’re recording or uploading is going to a cloud server somewhere, so make sure you’re comfortable with the vendor’s or platform’s privacy policies.
Moving forward with caution
With all this in mind, these tools can still help improve workflows and processes. Drafting copy, summarizing large amounts of text, SEO research, high-level concepting, calendarization, word association, and competitive analyses are a few examples of relatively safe ways to incorporate AI.
Imprint is prioritizing security as we strategically implement these technologies. If you have any additional questions or would like to discuss how AI can safely enhance your content and processes, contact us below or send us an email at imprint@imprintcontent.com.