Continuing from May
I found a few misfiled items that had some good stuff.
Item 1: A post titled “How We’re Building Tomorrow’s Vulnerabilities at Machine Speed” included:
• “The future is already here — it’s just not evenly distributed.”
• “The best way to get the right answer on the Internet is not to ask a question, but to post the wrong answer.”
This is spot-on. As a global village, we should ponder — and then get angry about — the reasons why that statement is all too true.
• We stand at a fascinating crossroads in the history of computing. On one path lies an endless escalation — increasingly sophisticated offensive AI systems battling increasingly complex defensive measures, while the underlying systems they’re fighting over become too complex for human comprehension. It’s mutually assured destruction for the digital age, with the added excitement that nobody’s entirely sure what constitutes “destruction” in this context.
And the mocking final sentence: “Sleep well.”
Item 2: Of course the title grabbed me! “The Danger Isn’t Artificial General Intelligence — It’s Us.” I eagerly dove in, hoping it would be the Moses-on-the-Mount virtual sermon for our times that I have been preaching in the wilderness for over two years.
In May 2023, hundreds of technology notables endorsed the following short statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Among the signatories were some genuine artificial intelligence luminaries and business leaders.
• The post also noted that this group statement came not long after the famous, and equally useless, open letter calling on all AI labs to “Pause Giant AI Experiments.”
• What followed were eight paragraphs, which I will distill thusly:
…there has been no letup in AI research or shrill warnings of catastrophe. … At present AGI is entirely hypothetical, but as large language models (LLMs) continue making giant strides in capability… Having exhausted nearly the entire corpus of recorded human expression on their initial training, these LLMs can now bootstrap new skills on training data they themselves create. Why couldn’t they evolve into intelligences that pursue goals including their own… No computer program, however complex, is going to evolve self-motivation because it has no awareness of threats or mortality, no complex interplay of sensory information and physical vulnerability. … So relax and go back to your Reddit feeds, all you paranoiacs and preppers — computers can’t turn themselves into Frankenstein monsters.
Unless, that is, we tell them to.
• I feel the need to point out that
1) “AIs” are machines, but I see a lot of anthropomorphizing going on. That’s an easy, almost natural thing to do, and it need not be a deep concern, provided we remind ourselves now and again that they are non-living, deliberately creature-like creations made by humans. But “everybody knows” and “gets” this, right?
2) We should recognize the underlying reality that all the problems of AI — all of them — have a human root cause.
• Is humankind doomed? We may well be, ultimately, if we underestimate the rapidly emerging capabilities of generative AI or think we can tame them with regulations or open letters. The genie isn’t going back in the bottle and no legal or regulatory framework could cork it anyway: the socially beneficial uses of generative AI — much less AGI — are too compelling,…
Nothing said in that quote is new, but it is good to see more and more of these sentiments appearing in print.
• Sleep well.