Ethics in the Age of AI: Crafting a Personal Code


Date: January 31, 2024

filed in: AI, Marketing

I bought an audiobook about AI marketing this week. There’s no doubt in my mind that it was written by an AI.

As I began to listen, the signs became clear. The narrative had the distinct ability to say a lot without really saying much at all. It repeatedly used telltale words like “transformative” and started sentences with common AI phrases such as “It’s all about…”. Frankly, everything the author had to say was a word-salad elaboration on just a few really simple ideas.

This wasn’t a full-blown, Sports Illustrated-level breach of trust, but it was not entirely dissimilar: I am not sure the author is a real person. AI could (and should, when used in the right way) aid the creative writing process by complementing human intelligence. As we are still very much waking up to this technology, I can forgive an eager author for relying a bit too much (and too clumsily) on AI to help them produce content. But AI should not replace human intelligence, and this looks suspiciously like a book written by an uncredited machine.

The facts support my suspicion. The author has written nearly two dozen books. All (with the exception of one odd title) revolve around technology, including a book on the healing power of psilocybin mushrooms (and if you’re paying attention to what Silicon Valley has become, this totally tracks and lands that title firmly in the Technology column). All appear on Amazon’s and Apple’s audiobook platforms. All follow a short, quick-read format, with an average audio length just a skosh over 3 hours. A dip into the Internet’s collective wisdom shows that one hour of audiobook time equals 30 to 40 written pages, putting the author’s average book at an efficient 114 pages.
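The back-of-envelope math here is easy to check. Using the figures cited above (an assumed 3-hour average runtime and the common 30–40 pages-per-hour rule of thumb, not exact publishing data), the estimated page count brackets the 114-page figure:

```python
# Rough estimate of the author's average book length, using the
# assumed figures from the post (not verified publishing data).
hours_per_book = 3.0           # average audio length: "a skosh over 3 hours"
pages_per_hour = (30, 40)      # rule of thumb: 30-40 written pages per audio hour

low = hours_per_book * pages_per_hour[0]    # lower bound in pages
high = hours_per_book * pages_per_hour[1]   # upper bound in pages
print(f"Estimated length: {low:.0f}-{high:.0f} pages")  # prints "Estimated length: 90-120 pages"
```

The 114-page average lands inside that 90–120 page window, consistent with a runtime just over 3 hours.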

But if those clues weren’t enough, Dear Reader, consider that this author does not appear to have a profile on Amazon’s Author Central. And that I cannot find a LinkedIn profile for them. And that I cannot find them in a Google search, beyond their audiobooks. And, perhaps most damning, all of the books were written in the last six months (the first title appeared in July 2023). Three or four books a month is a blistering pace for a human, but pretty manageable for an AI.

I am not naive enough to think that such books do not exist. But they are dishonest. As I discussed in my previous post on how Large Language Models (LLMs) work, tools like ChatGPT have the power to generate mountains of text instantly, and the lure of money is great. Amazon, Apple, and the person or persons behind this deceptive use of AI that resulted in the audiobook I purchased are all earning a bit of the money I paid.

This experience makes it painfully clear that we all must adopt and commit to a Code Of Ethics for our personal AI use. As the amount of harm an entity can do with AI is directly proportional to its stock of resources, it is even more important that organizations commit to transparent guidelines for how they use AI and manage the ethical considerations that surround its application. But that is a post for another day. Every global movement begins with the power of one, so let’s start closer to home with ourselves.

In my Automation and AI for Marketing class discussions this week at the University of Notre Dame’s Mendoza College of Business, we had a wide-ranging and deep conversation about the responsibility we all have to use AI in the right way. The “5 Rules” we discussed are not particularly profound (many, frankly, seem obvious in retrospect), but they do encourage responsible, critical, and reflective use of AI tools. For our specific purposes, the guidelines ensure that students leverage a tool like ChatGPT effectively while maintaining academic integrity and depth in their work. When applied more broadly, they ensure that we don’t become bad actors in the AI landscape. Here they are:

  • Understand the Tool’s Capabilities and Limitations: You have a responsibility to understand how the tool you’re using does what it does. Not at an “I could have coded this tool” level, but at a level that connotes respect for the tool’s power and pitfalls. For example, ChatGPT can be used as a supplementary tool for idea generation, but its limitations mean that users must critically evaluate its outputs for accuracy. Ignorance of the tool’s shortcomings is not a defense.

  • Maintain Ethical Integrity and Cite Appropriately: AI tools must be used in ways that respect copyright and privacy laws. One important way this is done is by appropriately crediting any insights or content directly derived from the tool. For example, it should be clearly noted when ideas are lifted from ChatGPT through simple mentions such as “According to ChatGPT…” or “When I entered this prompt into ChatGPT…”. It’s really not that difficult to do.

  • Engage in Critical Thinking: Treat AI tools as a starting point, not an authoritative source or (in the case of more artful applications like image or sound generation) a crafter of finished products. As the first rule’s “Limitations” suggest, you must always independently verify the facts and data the tools provide, and integrate your own research and analysis for depth. Hallucinations happen, and you’re on the hook for ensuring they don’t infiltrate your ideas. More importantly, don’t expect the tool to bring originality to your work. It won’t.

  • Prioritize Data Privacy and Security: Be cautious not to input or share sensitive personal or proprietary information in your interactions with AI tools. When working with ChatGPT, it is critical that you maintain a high standard of data privacy and security. While it’s against ChatGPT policy to share data with other users, the data *does* go somewhere and *is* collected and managed in *some* way by OpenAI. Keep your personal data (and your clients’ data) to yourself.

  • Reflect on the Role of AI in Your Creative Process: Regularly step back and give a think to how AI influences your creative and analytical practice. Seek to discern its impact on your decision-making and root out any bias the tool (or you) bring to your process. When you sense bias creeping in as a result of AI interaction, actively counteract it.

These simple rules help us approach AI with a healthy blend of enthusiasm and caution by ensuring that our experience is ethically grounded.

AI like ChatGPT can, without doubt, enhance our marketing strategies and everyday lives. But critical engagement and ethical practice are non-negotiable. As we venture further into the AI Era, we must commit to being conscientious users. This means not just adhering to the ‘5 Rules’ I’ve outlined in this post, but also fostering a culture of transparency, accountability, and continuous learning. By doing so, we ensure that AI serves as a complement to human intelligence, rather than a replacement.

If you’re looking for guidance on how to ethically use AI or produce a point of view on expectations for your employees or other stakeholders, Nore Analytics is here to help. Our AI expertise can help ensure that you’re on the right side of AI. Reach out to us at kevin@noreanalytics.com to discover what Nore can do for your business.

