
OpenAI Safety Team - Small Group, Big Decisions

#AI #safety #alignment

It was not the capabilities of GPT-5 that drew my attention while watching the OpenAI livestream. Listening to Saachi Jain, who leads the Safety Training team at OpenAI, I realized that...

... the definitions of even the expected behaviors of the most powerful AI systems are in the hands of just a small group of people, not to mention the behaviors actually achieved.

So whether the model will provide information necessary for creating a biological weapon depends on how this one team formulated its goals and priorities. One team! Size unknown. It might be a team of very smart people, but...

The superalignment ideas that Sutskever and Amodei pursued were at least more ambitious than merely practical UX decisions about model behavior.

What's worse, the recently published American AI strategy seems to suggest that maybe this is not so important, since many models from other countries do not follow any alignment rules anyway.
