Automatic adversarial prompt generation has achieved remarkable success in jailbreaking safety-aligned Large Language Models (LLMs). Existing gradient-based attacks, while demonstrating outstanding performance in jailbreaking white-box LLMs, often generate garbled adversarial prompts with