How to work with OpenAI's maximum context length of 2049 tokens?
I'd like to send the text from various PDFs to OpenAI's API, specifically the "Summarize for a 2nd grader" or "TL;DR summarization" APIs.
I can extract the text from the PDFs using PyMuPDF and prepare the OpenAI prompt.
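For context, this is roughly what I have so far. It's a minimal sketch: the file name, engine name, and prompt wording are placeholders, and it assumes my API key is available in the environment.

```python
import os

import fitz  # PyMuPDF
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

# Extract plain text from every page of the PDF
doc = fitz.open("report.pdf")  # placeholder file name
text = "\n".join(page.get_text() for page in doc)

# Build the "Summarize for a 2nd grader"-style prompt
prompt = f"Summarize this for a second-grade student:\n\n{text}\n\ntl;dr:"

# This only works while the whole prompt stays under the 2049-token limit
response = openai.Completion.create(
    engine="davinci",  # placeholder engine name
    prompt=prompt,
    max_tokens=100,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())
```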
Question: How best to prepare the prompt when the token count exceeds the allowed 2049?
- Do I just truncate the text and then send multiple requests (roughly the chunking approach sketched after this list)?
- Or is there a way to sample the text to "compress" it without losing key points?
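To make the first option concrete, this is roughly what I have in mind: split the extracted text into chunks that stay safely under the limit, summarize each chunk separately, and join the partial summaries. It's only a sketch; the chunk size is a guess based on the rough rule of thumb of about 4 characters per token rather than an exact token count, and the engine name is again a placeholder.

```python
import openai


def chunk_text(text, max_chars=6000):
    """Split text into pieces of at most ~max_chars characters, breaking on
    whitespace so words stay intact. 6000 characters is a conservative guess
    for staying under 2049 tokens (roughly 4 characters per token) while
    leaving room for the instruction and the completion."""
    chunks, current, length = [], [], 0
    for word in text.split():
        if current and length + len(word) + 1 > max_chars:
            chunks.append(" ".join(current))
            current, length = [], 0
        current.append(word)
        length += len(word) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks


def summarize(text):
    """Send one completion request per chunk and join the partial summaries."""
    summaries = []
    for chunk in chunk_text(text):
        response = openai.Completion.create(
            engine="davinci",  # placeholder engine name
            prompt=f"Summarize this for a second-grade student:\n\n{chunk}\n\ntl;dr:",
            max_tokens=100,
        )
        summaries.append(response["choices"][0]["text"].strip())
    return "\n".join(summaries)
```

My concern with this approach is that each chunk is summarized in isolation, so the combined output can repeat itself or miss connections across chunks, which is what prompts the second option.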
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow