Hacker News

Scotrix
Ask HN: LLM Prompt Engineering

I’m working on a project where I need to extract user intents and map them to deterministic tool/function/API executions, then refine/transform the results with another set of tools. Since identifying the right intent and parameters is quite challenging (there are a lot of subtle differences between potential prompts), I’m using a long, consecutively executed list of prompts, fine-tuned to gather exactly the pieces of information needed for somewhat reliable tool executions. I’ve tried this with a bunch of agent frameworks (including langchain/langgraph), but it gets very messy very quickly, and that messiness easily creates a lot of side effects.

So I wonder: is there a tool, an approach, anything that gives better control over chains of LLM executions without ending up in a messy configuration and/or code implementation? Maybe even something more visual? Or am I the only one struggling with this?
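One way to keep a chain like this under control, without a framework, is to represent each step as plain data (name, prompt template, output parser) and run them with a tiny loop. This is a minimal sketch, not any particular library's API; the `Step` structure, `run_chain`, and the fake LLM are all made up for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    template: str                 # prompt with {placeholders} filled from state
    parse: Callable[[str], dict]  # turns raw LLM text into structured fields

def run_chain(llm: Callable[[str], str], steps: list[Step], state: dict) -> dict:
    """Run each step in order, merging its parsed output into shared state."""
    for step in steps:
        prompt = step.template.format(**state)
        state.update(step.parse(llm(prompt)))
    return state

# Stand-in for a real completion call, so the sketch runs offline.
def fake_llm(prompt: str) -> str:
    if "date range" in prompt:
        return "range=2024-01..2024-02"
    return "intent=fetch_data"

steps = [
    Step("intent", "What is the intent of: {query}",
         lambda out: {"intent": out.split("=")[1]}),
    Step("dates", "Extract the date range for intent {intent}: {query}",
         lambda out: {"date_range": out.split("=")[1]}),
]

result = run_chain(fake_llm, steps, {"query": "show me last month's sales"})
```

Because the chain is just a list, you can reorder, log, or visualize the steps without touching the execution logic.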


thekuanysh 5 hours ago

What kind of IO do you have? JSON or plain language?

Scotrix (op) 3 hours ago

I input text and preferably output JSON, but the format doesn’t matter much as long as it’s somewhat structured.

Ultimately I’d like to extract information like date ranges and specific indications of tool usage (e.g. I have a bunch of data APIs, each with its own data and semantic meaning, which need to be picked, plus a combination of tools to transform the data).
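For this kind of extraction, one option is to validate the model's JSON against a minimal schema before dispatching to any tool, so a malformed extraction fails loudly instead of triggering side effects. A hedged sketch, where the field names (`tool`, `start_date`, `end_date`) and the `sales_api` tool are invented for illustration:

```python
import json

# Minimal required schema for an extracted intent: field name -> expected type.
REQUIRED = {"tool": str, "start_date": str, "end_date": str}

# Registry of deterministic tools, keyed by the name the model must emit.
TOOLS = {
    "sales_api": lambda p: f"sales from {p['start_date']} to {p['end_date']}",
}

def dispatch(raw: str) -> str:
    """Parse the model's JSON, validate required fields, then run the tool."""
    payload = json.loads(raw)  # raises ValueError on invalid JSON
    for field, typ in REQUIRED.items():
        if not isinstance(payload.get(field), typ):
            raise ValueError(f"missing or invalid field: {field}")
    tool = TOOLS.get(payload["tool"])
    if tool is None:
        raise ValueError(f"unknown tool: {payload['tool']}")
    return tool(payload)

llm_output = '{"tool": "sales_api", "start_date": "2024-01-01", "end_date": "2024-01-31"}'
result = dispatch(llm_output)
```

The validation layer is the boundary between the fuzzy LLM side and the deterministic tool side: everything past `dispatch` can assume well-formed input.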

zerodayai 2 hours ago

I am building something along these lines: https://github.com/zero-day-ai. It's meant for security testing, but it probably has most of the functionality you need (and you can write plugins fairly easily if not). You can create a prompt repository, defined by a schema, that is organized by domains (again, security-testing domains, but they can be expanded). If you have any features you'd like to see, or an ideal workflow, feel free to ping me: [email protected]

hn-front (c) 2024 voximity