This post discusses a pattern to prevent a class of prompt injection attacks in LLM-based solutions. It emphasizes the importance of building strong foundational patterns to mitigate risks and avoid potential pitfalls. By implementing this pattern, teams can enhance the security of their tool-based solutions.
원문출처 : https://devblogs.microsoft.com/ise/llm-prompt-injection-considerations-for-tool-use
원문출처 : https://devblogs.microsoft.com/ise/llm-prompt-injection-considerations-for-tool-use