AI for Response Management Series: Managing the Downsides of Generative AI

Written by Jennifer Tomlinson / Nov 30, 2023

As Executive VP of Marketing, I work to identify business needs and help QorusDocs’ clients generate revenue more effectively and efficiently. I spearhead efforts to increase brand awareness through digital marketing and client engagement.

In our AI for Response Management blog series, we've been taking a look at how generative AI and ChatGPT tools are impacting content creation, proposal management, and the sales response process across multiple industries and professional disciplines, from accounting and legal to manufacturing and recruitment.

This fourth and final installment in the series focuses on the risks and challenges of using generative AI to automate content creation. While tools like ChatGPT can save a great deal of time when writing emails, creating RFP responses, or compiling monthly reports, they aren’t a panacea for content creators. Let’s talk about why: 

[Check out Part 1, Part 2, and Part 3 of the blog series to learn more about generative AI and how your organization can put this burgeoning technology to work for you.] 

Generative AI has the potential to deliver productivity and efficiency benefits across the organization. However, understanding the risks of generative AI—data quality; data security and privacy concerns; ethical considerations of transparency, accountability, and compliance—is integral to creating high-quality content and protecting the reputation of your organization. 

Anyone who's dabbled in writing documents with generative AI or asked ChatGPT to answer random questions will have quickly realized that generative AI is not a perfect technology. The content AI-powered tools generate may be out of date, biased, plagiarized, or outright wrong—often with serious consequences.

Bad data in, bad data out 

While generative AI delivers a significant productivity gain in getting to a first draft, the best way to use AI-powered tools like ChatGPT is to view them as a junior assistant, ready to whip up the first draft of your document for you. Generative AI can take the hard grind out of content creation, but you still need to review that initial draft.

Let’s be clear: if you’re using generative AI to generate content, the review step is critical. In the tech landscape, we’re all familiar with the adage “bad data in, bad data out.” This warning holds true with generative AI because the content produced by tools like ChatGPT, Meta AI, or Google Bard is only as accurate as the data set it was trained on. If you’re relying on data from the public domain instead of verified closed datasets or up-to-date content libraries, your content is highly susceptible to inaccuracies, mistakes, and outdated information. 

Unfortunately, you can’t just ask ChatGPT to produce a pitch, a proposal, an answer to a question, or even an email, and expect a finished product that’s ready to go. Thoroughly reviewing, editing, and supplementing any results or draft language is a crucial step in the content creation process. 

Depending on the content you're creating—whether you're writing an internal report or drafting a sales proposal for a prospective client—you may also need to do a further round of editing and finessing to meet the client's needs. Consider the voice, style, and length of the response, and tweak the first draft accordingly to optimize the engagement and impact of your document.

Privacy concerns 

Did you know that Microsoft, Google, and Meta train their AI models on users’ conversations, documents, and photos? X (formerly Twitter) also uses public data (e.g., users’ biometrics, job and education history) for AI training and machine learning. Data privacy is quickly becoming a hot topic in the world of generative AI—and you should be paying attention if your organization is dipping its toe in these waters. 

Understanding the difference between public-facing and private generative AI tools is imperative for ensuring ring-fenced enterprise data does not leak into the public sphere. Unfortunately, if an organization leverages generative AI apps for content creation, the risk of data breaches—accidentally sharing intellectual property or confidential and/or sensitive information—is high.  

In plain language, when you pass private information to a public-facing AI tool like ChatGPT, you've effectively released that data into the public domain. And in certain cases, your data will be stored for a period of time so the generative AI model can learn and grow from it.

The bottom line is that it's important to know which AI agent you're using and what information you're passing to it. Don't be like Samsung, which learned this lesson the hard way when an employee inadvertently leaked confidential source code to ChatGPT.

Being aware of the risks and limitations of generative AI and implementing best practices to thoroughly vet and edit any content generated by AI tools greatly mitigates the risk of errors and data security issues. With the right considerations, generative AI can transform the practice of content creation within your organization, accelerating and simplifying the process to boost productivity, optimize the response process, and drive business growth.  

For the full picture of the benefits of generative AI for content creators, and to discover how tools like ChatGPT can help proposal teams accelerate and optimize the response process, check out Part 1, Part 2, and Part 3 of the "AI for Response Management" blog series.
