Am I Missing Something? Why do you need to download an 'ablitirated' model when regular ones work just fine?
I'm genuinely curious if I'm missing something, but it seems like there's an assumption that 'uncensoring' these AI models is some kind of complex achievement. In reality, it's surprisingly easy to get any information you want from them with just the right prompt.
I've tested llama3.1, gemma2, qwen2, and phi3, and all of them will happily provide any information you ask for. It doesn't seem to matter how sensitive the topic is - as long as the prompt is worded correctly, they'll give you an answer.
It doesn't appear to improve their abilities in any meaningful way. Unless they've been specifically trained on the data in question, it's just a matter of using the right words to get the desired response. So, what am I missing? Is there some kind of magic prompt that shows you if they are truly uncensored that I don't know about?
Edit: I got it now. I wasn't thinking about the 'other uses' for abliteration. With right system prompt they will answer factual information without refusal. It will not do 'other' things.
It will do this -> https://imgur.com/a/JxE3SsI It won't do this ->https://imgur.com/a/waYnNNU