So everybody talks about AI, well, most talk about ChatGPT, some about Large Language Models, and even fewer about foundational models. Most in the more commercial space are excitedly looking to capitalize on the extra efficiency and productivity expected from this new gadget. Everyone who wants to be cool and hip is talking about AI.
I, perhaps arrogantly, think I am fairly smart. I think I have a pretty good track record of seeing what is going to happen with emerging technologies, and I have also historically been fairly optimistic about them.
I have to admit, right now, AI scares the bejesus out of me.
One thing is that the evolution of AI is so fast that the natural learning curve of usage is not going to keep up. Somebody is going to make a lot of mistakes. Samsung was just the first to do it royally, and that one was still a minor blunder. Someone is going to tie AI into Jarvis, into Huggingface, into whatever, and stuff is going to go wrong! Seriously wrong!
Jan Leike, the Alignment Lead at OpenAI, tweeted on the 17th of March:
“Before we scramble to deeply integrate LLMs everywhere in the economy, can we pause and think whether it is wise to do so?
This is quite immature technology and we don’t understand how it works.
If we’re not careful we’re setting ourselves up for a lot of correlated failures.”
Secondly, I am very concerned about the alignment problem. Eliezer Yudkowsky says:
“We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what … is going on inside those systems. If we actually do this, we are all going to die.”
and adds:
“AI does not care about human beings one way or the other, and we have no idea how to make it care”
I am no expert on AI, and I am not even an expert in morality. Philosophy at university taught me, more than anything, that morality and ethics are quite difficult topics to grasp even for humans. So how on earth are we going to train a superintelligence to align with what humanity wants? And is anyone even trying to do so? I mean, democratizing public discourse on social media has not been a flawless win.
Jan Leike is, however, more optimistic: https://lnkd.in/eZU6AQFZ. But me, I still think it warrants caution more than anything else. A slowdown will not happen, though; this is going to be a race until something blows up, I am sure of it. No entity is capable of stopping it now, the potential benefits are too great. The Llama and Alpaca models are already so portable that it is not containable anymore. Anyone who stays behind is done.
LLMs and AI in general are fascinating and exciting stuff.
But I worry, a lot!