Using LLMs for market feedback
Attending the brxnd.ai conference yesterday made it clear that one of the “hidden gem” use cases for AI that is ready for prime time TODAY is using LLMs to provide quantitative feedback on marketing content. This use case is both a good fit for LLM capabilities and an area of lower risk. We’ll highlight why, and share a live example where you can see this in action.
A quick example:
Prompts can set the context for the rating framework and the personas/groups being represented, for example:
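A hypothetical sketch of such a prompt (the wording below is illustrative, not the demo’s actual prompt; the persona, metrics, and scale are all stated directly in the text):

# Hypothetical prompt sketch: the persona and the rating framework are both
# set directly in the prompt text.
prompt = (
    "You are answering as a specific persona: a 35-44 year old suburban parent "
    "who shops mostly online. Rate the product concept below on a 1-10 scale for "
    "interest, willingness to pay, and likelihood to recommend, then add one "
    "sentence of qualitative feedback. Respond with JSON only.\n\n"
    "Product concept: <concept text>"
)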
Example Responses:
Responses can give both quantitative and qualitative feedback; with prompt guidance and error handling, the data can be consumed programmatically
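A minimal parsing sketch, assuming the model was asked for JSON as in the prompt above (the function and field names are illustrative, not the demo’s code):

import json

def parse_rating(raw: str):
    """Parse a JSON rating reply; return None if it can't be used programmatically."""
    try:
        data = json.loads(raw.strip())
    except json.JSONDecodeError:
        return None  # malformed reply: re-prompt or drop this sample
    if not isinstance(data, dict):
        return None
    score = data.get("interest")
    if not isinstance(score, (int, float)) or not 1 <= score <= 10:
        return None  # enforce the 1-10 scale the prompt asked for
    return {"interest": float(score), "comment": str(data.get("comment", ""))}

The same guard-and-discard pattern extends to the other metrics; anything that fails validation can simply be re-sampled.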
Value/Use Cases:
Idea testing - our example demo, the Gen AI Product Evaluator, shows how you can expand ideas, gather qualitative and quantitative feedback, and drive charts and reports
Consumer Polling - these techniques can replace expensive polling on product ideas, ads, and marketing communications (a sketch follows this list)
Target Refinement - breaking down interest, willingness to pay, and other metrics can help refine target markets early in the ad or product lifecycle at almost no cost.
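A sketch of the polling idea, reusing the parse_rating helper above; ask_llm stands in for whatever LLM client you use, and poll_concept and its structure are illustrative, not the demo’s code:

from statistics import mean

def poll_concept(concept: str, personas: list[str], ask_llm, n_samples: int = 5) -> dict:
    """Synthetic 'poll': average the interest rating per persona across several samples."""
    # ask_llm(persona, concept) is any caller-supplied function that prompts an LLM
    # with that persona and concept and returns the raw reply.
    results = {}
    for persona in personas:
        scores = []
        for _ in range(n_samples):
            parsed = parse_rating(ask_llm(persona, concept))
            if parsed is not None:
                scores.append(parsed["interest"])
        results[persona] = mean(scores) if scores else float("nan")  # NaN if no usable replies
    return results

The per-persona averages can then feed the charts, reports, and target-market comparisons described above.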
Enabling Capabilities of LLMs:
LLMs can represent approximations of the many different groups and personas captured during model training
Context for quantitative evaluation can be set dynamically with a plain-text description, meaning you can ask for ratings from any persona or demographic group you can describe
Rating scales and metrics (1-10, Low/Med/High) can be defined dynamically (see the sketch below)
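A sketch of that flexibility in code; build_prompt and its parameters are illustrative assumptions, not the demo’s implementation:

def build_prompt(persona: str, concept: str,
                 metrics=("interest", "willingness to pay"),
                 scale: str = "1-10") -> str:
    """Build an evaluation prompt for any describable persona, metric set, and rating scale."""
    return (
        f"You are answering as this persona: {persona}.\n"
        f"Rate the product concept below on a {scale} scale for {', '.join(metrics)}, "
        "and add one sentence of qualitative feedback. Respond with JSON only.\n\n"
        f"Product concept: {concept}"
    )

# Same function, different framing:
# build_prompt("a retired hobbyist in the rural US", "a solar-powered bike light",
#              metrics=("interest",), scale="Low/Med/High")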
Avoidance of Risk:
Numerical statistics don’t carry the same risks of embarrassing or inappropriate content
In most cases, small errors are tolerable (the difference between a rating of 6 and a 7 is not meaningful)
Try it out! Example from: genaiproductevaluator.com
While LLMs are best known for generating text, they can also be prompted to generate representative statistics.