This is not an attempt to replace myself. But I had the notion to see how good ChatGPT was at imitating me.
So the prompt I used was “Write a tech column in the style of Tom Merritt about why there are dangers to LLMs but people may yet be overreacting.”
This is the free version of ChatGPT, so it's working from data that's two years old and older. But I doubt I've changed my style all that much in two years.
I found the writing to be a little less personable than mine would be. And I expect I would make a few more innovative points. At least I flatter myself I would. Here are the first couple of paragraphs to give you a sample. Let me know what you think. And for paying subscribers I'll throw in the whole thing in case you're interested.
From here on out, remember, this is all an algorithm’s output.
Title: The Dangers of LLMs: Separating Fact from Fiction
By Tom Merritt | Tech Columnist
In recent years, Language Models like ChatGPT have gained immense popularity and brought significant advancements in various fields. However, as with any powerful technology, concerns have emerged regarding the potential dangers of Large Language Models (LLMs). While it is crucial to address these concerns, it's equally important to avoid unnecessary panic and consider the potential for overreaction. Let's explore both sides of the argument.
One of the primary concerns surrounding LLMs is the potential for misinformation and disinformation. Critics argue that these models can be exploited to spread falsehoods and propaganda, leading to harmful consequences for individuals and society. There is no denying the risk of misuse, especially considering the ability of LLMs to generate seemingly credible content. However, it is essential to note that the responsibility for fact-checking and critical thinking lies not only with the technology but also with the users and consumers of the information.