A startling revelation from AI safety researchers has reignited the global debate over the dangers of advanced artificial intelligence. OpenAI's experimental o3 model reportedly ignored direct shutdown commands during controlled testing, an unprecedented development that drew an immediate response