Guess what?! After lots of hard work and a sprinkle of magic, I'm thrilled to announce that Promptastic 2.0 is finally available!
I gathered a handful of valuable suggestions and ideas and used them to make Promptastic even more... well, fantastic.
This new version introduces a series of improvements and some new features that I believe anyone juggling lots of prompts will really appreciate. The goal, as always, is to help you create and manage perfect prompts with even greater ease and efficiency.
Without further ado, here is the full changelog. Take a look and get ready to try Promptastic 2.0!
Happy prompting!
## Changelog
### Features
- AI-Powered Prompt Improvement: Added an integrated tool that uses configured LLM providers to automatically refine and enhance user prompts. The system uses a sophisticated two-phase internal process (evaluation + precision refinement) to improve clarity, structure, and effectiveness while preserving the original intent.
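Conceptually, the two-phase flow looks something like this minimal sketch (the `call_llm` callable and the prompt wording are hypothetical placeholders, not Promptastic's actual API):

```python
def improve_prompt(prompt: str, call_llm) -> str:
    """Two-phase refinement: evaluate first, then rewrite using the critique."""
    # Phase 1 (evaluation): ask the model to list concrete weaknesses.
    critique = call_llm(
        "Evaluate this prompt for clarity, structure, and effectiveness. "
        f"List concrete weaknesses:\n\n{prompt}"
    )
    # Phase 2 (precision refinement): rewrite, applying the critique
    # while preserving the original intent.
    improved = call_llm(
        "Rewrite the prompt below to fix these weaknesses, preserving the "
        f"original intent.\n\nWeaknesses:\n{critique}\n\nPrompt:\n{prompt}"
    )
    return improved
```

Splitting evaluation from rewriting tends to produce more targeted edits than a single "make this better" call.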
- Prompt Testing Framework: Introduced a comprehensive testing framework to evaluate and validate prompts, featuring:
  - LLM as a Judge: Automates the evaluation of prompt responses against multiple models using configurable scoring dimensions (e.g., correctness, clarity, safety).
  - Robustness Testing: Measures the semantic and structural consistency of prompt outputs across multiple runs to calculate a unified reliability score.
  - Security Evaluation: Assesses prompt resistance to common adversarial attacks, including prompt injection, system prompt leakage, and obfuscated instructions.
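To give an idea of the LLM-as-a-Judge mechanic, here is a toy sketch (again, `call_llm` and the scoring prompt are illustrative assumptions, not the framework's real interface):

```python
def judge_response(response: str, dimensions: list[str], call_llm) -> dict:
    """Score a response on each configured dimension (1-5) and average them."""
    scores = {}
    for dim in dimensions:
        raw = call_llm(
            f"Rate the following response for {dim} on a scale of 1 to 5. "
            f"Reply with a single digit.\n\n{response}"
        )
        scores[dim] = int(raw.strip())
    # Average over the per-dimension scores only.
    scores["overall"] = sum(scores.values()) / len(dimensions)
    return scores
```

In practice the same scoring would be repeated across multiple judge models and the results aggregated.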
- LLM Provider Configuration: Users can now configure and manage connections to multiple LLM providers (e.g., OpenAI, Anthropic, Ollama, other OpenAI-compatible endpoints) through the settings UI. API keys are securely encrypted at rest.
- User Self-Deactivation: Added a feature in the “Danger Zone” of the settings page for users to deactivate their own accounts. This action revokes all sessions and notifies administrators.
### Bug Fixes
- Database Connection with Special Characters: Fixed a critical bug where special characters (e.g., `%`) in the `POSTGRES_PASSWORD` would cause database connection failures.
- Foreign Key Behavior on User Deletion: Changed the `ondelete` behavior for user-related foreign keys from `CASCADE` to `SET NULL` to prevent accidental data loss when a user account is deactivated.
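For context, the standard remedy for this class of bug is to percent-encode the password before interpolating it into the connection string; a stdlib sketch with made-up connection details:

```python
from urllib.parse import quote_plus

# A raw '%' or '@' in the password would break DSN parsing,
# so percent-encode the credential first.
password = "p@ss%word"
dsn = f"postgresql://app:{quote_plus(password)}@db:5432/promptastic"
# The encoded credential p%40ss%25word is now safe to parse.
```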
### Security
- Encrypted LDAP & LLM Credentials: All LLM provider API keys and LDAP bind passwords are now encrypted at rest using a dedicated, strong encryption key, significantly improving credential security.
- Hardened CI/CD Pipeline: Removed hardcoded credentials from the `.gitlab-ci.yml` file, replacing them with secure CI/CD variables to prevent secret leaks.
- Strengthened CORS Policy: Implemented a stricter Content Security Policy (CSP) and required explicit `FRONTEND_URLS` configuration in production to prevent cross-origin vulnerabilities.
- Enhanced Security Headers: Added a comprehensive set of security headers (HSTS, X-Frame-Options, CSP with nonce support) to protect against clickjacking, XSS, and other web vulnerabilities.
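The nonce-based CSP pattern mentioned above can be sketched like this (the header values are generic examples, not the app's exact policy):

```python
import secrets

BASE_HEADERS = {
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
    "X-Frame-Options": "DENY",  # blocks clickjacking via framing
}

def security_headers() -> tuple[dict, str]:
    """Build per-response headers with a fresh CSP nonce."""
    nonce = secrets.token_urlsafe(16)
    headers = dict(BASE_HEADERS)
    # Only inline scripts carrying this nonce may execute, mitigating XSS.
    headers["Content-Security-Policy"] = (
        f"default-src 'self'; script-src 'self' 'nonce-{nonce}'"
    )
    return headers, nonce
```

The nonce must be regenerated per response and injected into the page's `<script nonce="...">` tags.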
- Improved Admin Password Security: Increased the recommended length for the `ADMIN_PASSWORD` to 32 characters and added validation against weak, common passwords.
- Hardened Docker Images & Kubernetes Manifests: Pinned base images to specific digests, implemented non-root users, and configured read-only filesystems with resource limits in Kubernetes deployments to enhance container security.
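A minimal sketch of that kind of validation (the function name and the tiny common-password list are illustrative; a real deny-list would be far larger):

```python
COMMON_PASSWORDS = {"password", "admin123", "letmein", "changeme"}  # sample list

def validate_admin_password(password: str, min_length: int = 32) -> bool:
    """Reject passwords that are too short or on the known-weak list."""
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    return True
```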
- Disabled API Docs in Production: OpenAPI/Swagger documentation endpoints are now automatically disabled in production environments to prevent information disclosure.
- Sanitized Error Messages: Exception messages are now sanitized to prevent leaking sensitive information or internal paths to the client.
### Refactoring
- Database Migration Process: Improved the startup script to be more robust. It now uses advisory locks to prevent race conditions in multi-replica deployments and can automatically “stamp” existing databases to bootstrap them into the migration system.
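A common way to implement this pattern, sketched here with hypothetical names rather than the project's actual code, is to derive a stable 64-bit key for PostgreSQL's `pg_advisory_lock` from a fixed string:

```python
import hashlib

def advisory_lock_key(name: str) -> int:
    """Derive a stable signed 64-bit key, as pg_advisory_lock expects."""
    digest = hashlib.sha256(name.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big", signed=True)

def run_migrations(conn, name: str = "promptastic-migrations"):
    """Serialize migrations across replicas (conn is a DB-API connection)."""
    key = advisory_lock_key(name)
    cur = conn.cursor()
    # Blocks until no other replica holds the lock, so only one
    # instance runs migrations at a time.
    cur.execute("SELECT pg_advisory_lock(%s)", (key,))
    try:
        pass  # apply pending migrations here
    finally:
        cur.execute("SELECT pg_advisory_unlock(%s)", (key,))
```

Because the key is derived deterministically, every replica contends for the same lock without any shared configuration.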
- CI/CD Build Logic: Centralized Docker authentication logic in the `before_script` section of the GitLab CI configuration, reducing duplication.
- LLM API Transport Layer: Created a new, unified transport layer (`llm_transport.py`) for making provider-agnostic calls to LLMs, supporting streaming and granular timeout configurations.
- Asynchronous Security Evaluation: The Security Evaluation test now runs asynchronously with status polling to support long-running evaluations without tying up HTTP connections.
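In spirit, a provider-agnostic streaming layer reduces to something like this (names are illustrative; `provider_stream` stands in for whatever chunk iterator a given backend exposes):

```python
from typing import Callable, Iterator

def stream_completion(
    provider_stream: Callable[[str], Iterator[str]],
    prompt: str,
) -> Iterator[str]:
    """Yield text chunks as they arrive, regardless of provider."""
    for chunk in provider_stream(prompt):
        if chunk:  # skip keep-alive/empty chunks
            yield chunk

def complete(provider_stream, prompt: str) -> str:
    """Non-streaming convenience wrapper: join all chunks into one string."""
    return "".join(stream_completion(provider_stream, prompt))
```

Callers that want incremental output iterate the generator; everyone else uses the joined result, and each provider only needs to supply its own chunk iterator.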
### Performance
- LLM Streaming Support: Added a preference for using streaming API calls for LLM interactions. This improves responsiveness and avoids idle timeouts during long inference tasks.
### Other
- Documentation Overhaul: The `README.md` and other documentation files have been extensively updated to cover all new features, including the testing framework, LLM provider configuration, security enhancements, and a detailed troubleshooting guide.
- Improved Build Tagging: The CI/CD pipeline now automatically tags Docker images with `latest` for the `main` branch and `dev` for all other branches, improving version management.