Resources that solve a problem, offer new insights, or tickle your gray matter.
"The way ChatGPT writes in the first person about itself? The way it feeds you its output one word at a time instead of all at once as produced under the hood? All on purpose. Clever product design, marketing, and reporting are being invested in the illusion that LLMs can think. They cannot. Propagating this illusion, at bottom, is a power grab by people in the technology industry."
~ Raman Shah, Data Craftsman
The legal implications of using generative AI are considerable. It directly impacts your ability to protect your intellectual property and it may well result in your being in breach of your contract. Using generative AI can also impact your reputation. And it deprives you of the joy of thinking through an issue, wrestling with your ideas, and deepening your understanding.
Generative AI produces media (images, videos, and text) from prompts supplied by the user. Applications like ChatGPT rely on Large Language Models (LLMs), algorithms that assign probabilities to sequences of words based on large datasets consisting of trillions of words scraped from the internet.
Some people claim that generative AI can create art or poetry. But creating requires thinking. And AI can't think. All it can do is use an algorithm to identify words that usually go together, spit out those pairings as sentences, and string those sentences into paragraphs.
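To see what "words that usually go together" means in miniature, here is a toy sketch of the idea: count which word tends to follow which, then sample the next word in proportion to those counts. This is a deliberately simplified bigram model, not how any real LLM is built (the corpus, variable names, and sentence length are all invented for illustration), but it captures the statistical-pairing mechanism described above.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for the trillions of scraped words an LLM trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Pick a next word in proportion to how often it followed `word`."""
    counts = follows[word]
    return random.choices(list(counts), weights=counts.values())[0]

# "Generate" text by repeatedly sampling likely continuations.
out = ["the"]
for _ in range(5):
    out.append(next_word(out[-1]))
print(" ".join(out))
```

The output reads vaguely like English precisely because it recombines word pairs seen in the corpus; no understanding is involved, only counting and sampling.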
My biggest concern with generative AI is what it takes from us.
When we use generative AI, we outsource creativity to an application — a thing, not a person. And that thing generates media that deprives us of thinking deeply about an issue and creating something in response.
It deprives us of our humanity.
(Of course, humans are the only animals smart enough to create such a tool and stupid enough to use it.)
My laments for humanity are often dismissed by those who believe I am too idealistic. Others can't hear my cries over the din of society's demands to produce more and more, faster and faster. It's true that I am an idealist. But I'm a pragmatic idealist (and a recovering attorney), so let's turn to the business case against generative AI.
The legal implications of using generative AI.
Erin Austin is an IP attorney who helps founders of expertise-based firms build and protect saleable assets. She recently discussed the legal implications of generative AI on her Hourly to Exit podcast with her guest, attorney Girija Patel. There's a lot of depth and nuance to their conversation, but here are three key takeaways:
The legal implications of using generative AI are a complex and evolving area of law. But the legal implications of using these tools shouldn't be your only considerations.
Generative AI may negatively impact your reputation.
Your reputation is your single most important asset. When considering whether to use generative AI, and if so, how to use it, you must evaluate it against the potential harm to your reputation. Keep these three points in mind:
Generative AI can be a helpful tool, but it can harm your business and your reputation if you don't use it wisely. Some will take the shortcut offered by AI writers, and many will get away with it even if their contract forbids it.
(They won't get away with it for long, however. Just as AI-based tools like Grammarly can detect plagiarism, AI-based tools like Originality.ai can tell whether your content is original or AI-generated.)
The internet is a noisy place. If you put out as much stuff as possible as often as possible, you'll do nothing more than add to the noise. Instead, focus on sharing your experience-based expertise and the insights your clients value most. Let others go after the immediate dopamine hit and burn themselves out on all the socials while you play the long game and build a sustainable consulting business.