Google and OpenAI staff, many of them AI researchers, have signed an open letter saying they share Anthropic’s red lines. Privately OpenAI bosses agree.
Looks like the Friday deadline was set because they needed Anthropic for the attack?
Am I missing something? How would an LLM be useful for operations like this?
As everyone else says, to avoid accountability. That’s the real “killer” app here.
Just look at the use of “AI” in the ongoing Gaza genocide, the terror attacks on Lebanon, etc.
This is already normal.