3 Comments
Brennan Kenneth Brown

Hi there! I hope it's alright I comment on this as the author of two of the articles you've referenced.

My first article was about the Internet as a whole, but I do think I can concede that using the word "terrify" in my title was clickbait-y. I want clicks because I want people to be aware of how social media has astroturf'd the Internet into an incredibly sterilized and addictive oligopoly. I think sites like Substack and Medium are some of the last bastions of the values and ideals of the Internet that need to be preserved and cultivated. I am terrified of the state of the Internet because I am terrified of the state of the world.

Now, with AI specifically, I am not really afraid, though https://ai-2027.com/ makes a great argument as to why you should be. And I actually wrote a follow-up article doing a deep dive into the deaths specifically linked to generative AI (LLM) chatbots (https://en.wikipedia.org/wiki/Deaths_linked_to_chatbots).

I think real harm is being done because of AI, and I think there is a massive AI illiteracy problem contributing to people starting relationships with AI or having it enable their psychosis, but I am far more enraged by this than I am afraid.

Thank you for the mention and cheers! I really enjoyed finding and reading your work.

Len Romanick/Infonomena LLC

Greetings Brennan! So glad you dropped in. Of course it's alright for you to comment; I hoped you would once you found out you were cited, and I appreciate that you did.

Speaking only for myself, there are aspects of my interactions with AI and its consequences that I surmise originally stemmed from an improper introduction of these tools as toys and the resulting "user error" (hoping you familiarized yourself with the genesis of my other project). What has happened since has only made that original sin of omission much worse. Technical events have escalated beyond the ability of discourse to foster proper understanding and use. In this new age of attention spans apparently no longer than the time it takes to swipe to the next window, the general populace is not going to spend hours learning what we need to know about this tech. This is just fine with the AI producers, whose rebuttal will be that all the papers are out there for anyone to read. Right. Meanwhile, knowing that won't happen, they are in an arms race of sorts to promote their commoditized, not-ready-for-prime-time, public-consumption products (who knows what is going on on the government/military side).

My observation was that the use of "terror" by some is clickbait-ish. Others were genuine, well-elucidated deep fears, and many see the state of all things AI as I do. My intention was not to be definitive about how I see it, because at this point in the development of all "this," nothing is definitive. Readers were provided links to read for themselves (hoping they had full access), free to form their own level of discomfort.

This past weekend, for the first time, I spent several hours in conversation with two different bots. Some chats involved research assistance on the things I am working on, which ended up confirming why I don't use AI for research or writing. As I guided the AIs to see their errors (not hallucinations), I asked specific questions about their programmed methods and limitations. For exploration, I used an example that I talked about here on TOG in a recent post. The explanations were benignly interesting at first, then became disquieting, and finally dangerous. Perhaps I will take some time to post the chats in the near future as commentary on the new AI search paradigms and the information regulation being put in place.

Terrifying.

Rainbow Roxy

It's interesting how you've articulated this pattern of "terrify" being used as clickbait, almost as if these authors have decided that a vague threat of AI doom is more engaging than actual thoughtful discourse on its development.
