It's all happening so fast, isn't it? It wasn't that long ago that this technology was just a plaything; now everything we do carries such existential weight. I think I knew it going in, but it didn't really hit me until now.
I've been writing about this on my personal blog. IDK if it's worth reading, but at least it's not too long?
https://open.substack.com/pub/ctsmyth/p/on-the-character-of-...
https://open.substack.com/pub/ctsmyth/p/the-weight-of-what-w...
It's written from the perspective of an outsider. Ethics is not the issue here...
Hmm, isn’t it though? I mean, obviously there is a corporate policy issue here, but there is no way that bending models to suit military purposes doesn’t end up in the general training pool, especially since we use models to train models.
We have even demonstrated that weird, "virus-like" exploits specifically -not- explicit in the training data can be transmitted to a new model when one model trains another, even though the "magic" character sequences are never passed between the models. So implied information is definitely transmitted with a very high degree of fidelity, even if the subject at issue is never explicitly trained on.
So I kinda think this is all about the character of the models we decide to share the planet with, in the long haul.
Whether or not it becomes relevant before "Skynet" goes live and wipes out most of the planet, well, yeah, we should probably be keeping an eye on that too.
[dupe] https://news.ycombinator.com/item?id=47189650
The deal came hours after President Trump had ordered federal agencies to stop using artificial intelligence technology made by Anthropic, an OpenAI rival.
Just in time to profit from the war with Iran.
Previously: Disrupting a covert Iranian influence operation

We banned accounts linked to an Iranian influence operation using ChatGPT to generate content focused on multiple topics, including the U.S. presidential campaign. We have seen no indication that this content reached a meaningful audience.
https://openai.com/index/disrupting-a-covert-iranian-influen...
Various videos on Twitter show Iranian high schoolers celebrating the attack.
Faezeh Alavi @SFaeze_Alavi
"You have no idea how happy the people of Iran are right now."
https://x.com/SFaeze_Alavi/status/2027634840137613400?s=20