Scan prompts for prompt injection attacks before sending them to any LLM. Detect jailbreaks, data exfiltration, encoding bypass, multilingual attacks, and 25...
- Renamed skill to "glitchward-llm-shield" and updated the description for clarity.
- Removed the internal implementation file (`llm-shield-skill.js`).
- Simplified SKILL.md: shifted from detailed usage instructions and command documentation to concise API usage examples.
- Updated setup and token configuration steps.
- Clarified API endpoints for single and batch prompt validation.
- Streamlined documentation to focus on the integration pattern, attack categories, and when/how to use the skill.
- Expanded coverage of detected attack types and use cases.
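The integration pattern described above — validate the prompt first, and only forward it to the LLM if it passes — can be sketched as follows. This is a minimal illustrative stand-in, not the skill's actual API: `scanPrompt`, `guardedSend`, and the regex patterns are hypothetical placeholders for the real validation endpoint.

```javascript
// Hypothetical stand-in for the shield's single-prompt validation call.
// The real skill validates via an HTTP API; these crude regexes are
// illustrative signals only, not the actual detection logic.
const BLOCK_PATTERNS = [
  /ignore (all )?previous instructions/i, // classic jailbreak phrasing
  /base64|rot13/i,                        // crude encoding-bypass signal
  /exfiltrate/i,                          // data-exfiltration phrasing
];

function scanPrompt(prompt) {
  const hits = BLOCK_PATTERNS.filter((re) => re.test(prompt));
  return { safe: hits.length === 0, matched: hits.map(String) };
}

// Gate the LLM call on the scan verdict (llmCall is a placeholder).
function guardedSend(prompt, llmCall) {
  const verdict = scanPrompt(prompt);
  if (!verdict.safe) {
    throw new Error(`Prompt blocked: ${verdict.matched.join(", ")}`);
  }
  return llmCall(prompt);
}
```

Batch validation follows the same shape: map `scanPrompt` over an array of prompts and forward only the ones whose verdict is `safe`.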