Auto-generated content is not all rainbows and unicorns

About 15 years ago – pre-history, in social media terms – there was a phase during which various brands experimented with auto-generated content on their social media channels and invited the public to participate.

The idea was that people would submit text which would be automatically converted into a social media post published on the brand page. The brands no doubt expected charming, complimentary, and interesting content, but failed to put in place enough controls or checks.

Inevitably, this degenerated into deeply inappropriate and offensive language. Badly burnt by this, brands became much more careful about user-generated content.

Then, in 2016, there was another experiment: Tay, an AI chatbot from Microsoft on Twitter. Remember her? She was supposed to learn from interacting with people on Twitter.

She was only online for about 16 hours before other Twitter users had trained her to use – again – inappropriate, offensive, and racist language, and she had to be taken down. Once again, there appeared to be few safeguards in place.

What has this to do with your business?

Enter the new generative AI tools: ChatGPT is the best known, but there are many others.

While these are amazing tools, they depend on high-quality input to generate high-quality output. And to gather enough input, the tools collate vast amounts of information, irrespective of whether it is sensitive, private, or under copyright. ChatGPT, for example, now has access to the internet, so it can provide up-to-date information on almost anything.

There are two points here:

  1. Just as in the social media experiments of the past, if the input is poor quality or malicious, the output cannot be good quality, and it can damage your brand.
  2. The AI tools may be collecting any information that you provide directly to the tool, or that you have made available via the internet.

Unless and until you can validate or control the information going into your AI tool, we suggest you should be wary about the output. Use the tools (carefully) if you choose to, but be cautious about what information you provide and always check the output.

In short:

  • Understand where you use (or could use) generative AI in your business
  • Assess the risks to your business of using generative AI
  • Check what data you have made available to search engines (and therefore to AI)
  • Develop policies and procedures for generative AI
  • Determine whether your controls are adequate and if not, then fill any gaps
  • Train your staff in your new policies, procedures, and other controls.
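
For the step on checking what data you have made available to search engines (and therefore to AI), one practical starting point is your website's robots.txt file, which tells crawlers which parts of your site they may access. Several AI companies publish the names of their crawlers – for example, OpenAI's GPTBot and Common Crawl's CCBot – and you can disallow them explicitly. This is a minimal sketch, not a complete policy: robots.txt relies on crawlers choosing to honour it, and it does nothing about data already collected.

```
# robots.txt – example directives blocking known AI crawlers
# (crawler names are published by their operators; compliance is voluntary)

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Still allow conventional search engine crawlers elsewhere
User-agent: *
Disallow: /private/
```

Note that robots.txt is only one control among several: sensitive material should not be publicly reachable in the first place, regardless of crawler behaviour.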

If you’d like some help with developing policies and procedures to fit your business needs, or of course help on any other cyber security issues, call us on 0113 733 6230 or contact us via our form.