Generative Artificial Intelligence

Post banner (generated by Copilot): A fantasy world with a fountain pen, a tomato, a small train, a Nokia phone, whiskey in a crystal bottle, and lots of clouds in an intricate, hyper-realistic style.

Ateş düştüğü yeri yakar / An ember burns where it falls. Don't forget about the ongoing, inhumane massacre in Palestine.


I was already skeptical about decentralized, scam-infused financial networks before they were collectively branded as web3. Setting aside the obvious rug pulls, pump-and-dumps, and hit-and-runs, I think there is great potential for certain decentralized, quasi-ponzi-like use cases in finance. For example, what if you trained a team and got paid in proportion to the actual value your training brought to that team? What if you also shared in the losses? Such a technology would get rid of so many agile coaches, mentors, and instructors on LinkedIn. They would rename themselves AI experts overnight, but that's for another post.

Overall, I think decentralized finance does more damage than good in its current form. I feel the same for the current AI zeitgeist. My concerns fall into four major themes:

  1. We mistake good presentation for quality: All popular AI models present themselves quite well, and we mistake that presentation layer for quality. ChatGPT, Gemini, and Copilot are excellent at emulating different tones of voice when you chat with them. They make no grammatical mistakes, and they utter carefully crafted phrases that make us believe they understand us. But they may be making things up entirely, and their users may not realize when that is the case.
  2. Using AI bots for tasks we can't accomplish ourselves makes it harder to improve at those tasks: I know many, many struggling non-native English speakers who use ChatGPT to generate grammatically correct English texts. I didn't say proofread, I said generate. These people struggle with written English, and using ChatGPT to avoid exercising those writing muscles makes it harder for them to ever learn the skill.
  3. Mediocre output crowds out progress: The current incarnation of generative AI tools is great at producing mediocre content: mediocre text, mediocre images, mediocre code, mediocre music... Mediocre is good enough in many contexts (see the post banner above). However, when mediocre becomes so easy to make, it drowns out the effort of the humans who are pushing for progress.
  4. We are standing on the shoulders of giants and exploited workers without compensating them: The AI magic happens only because of the tremendous training effort put into those models. That training data doesn't come out of the ether; it is taken without compensating the copyright holders. Thousands of low-paid human workers labor to fix problematic responses to our prompts, so that we feel good about our chat experience and the companies don't get sued when their models produce questionable, offensive content. All of this consumes valuable natural resources so that we can generate tomato-train-Nokia-phone dreams... 😑

I am sharing some reads that touch upon these topics. I hope they spark valid concerns and offer a critical perspective on the current uses of generative AI tools.


Reads

AI isn't useless. But is it worth it? Molly White, who runs the Web3 page I mentioned above, has written a sound piece on AI.

"Though AI companies promise that their tools will shortly be able to replace your content writing team or generate feature-length films or develop a video game from scratch, the reality is far more mundane: these tools are handy in the same way that it might occasionally be useful to delegate some tasks to an inexperienced and sometimes sloppy intern."
